Sample records for lookup table based

  1. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operation plays a very important role during the decoding processing of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses and hence high table power consumption. Aiming to reduce the memory accesses, and thus the high power consumption, of current methods, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce the memory accesses of table look-up, and thereby the table power consumption. Specifically, in our scheme, index search reduces memory accesses by cutting the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed index-search-based table look-up algorithm lowers memory access consumption by about 60% compared with table look-up by sequential search, saving considerable power for CAVLD in H.264/AVC.
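
    A minimal sketch of the index-search idea in Python (the tiny table below is hypothetical, not the actual H.264 coeff_token table): the leading-zero count of the code prefix selects a sub-table in one step, and the fixed-width code suffix is read as a direct index into it, so no per-codeword searching and matching remains.

        # Hypothetical sub-tables keyed by the prefix zero-run length n.
        # Each entry: (suffix width in bits, symbols indexed by suffix value).
        TABLE = {
            0: (0, ["A"]),                   # codeword '1'      -> A
            1: (1, ["B", "C"]),              # codewords '01x'   -> B, C
            2: (2, ["D", "E", "F", "G"]),    # codewords '001xx' -> D..G
        }

        def decode(bits):
            """bits: iterator over '0'/'1' characters of the bitstream."""
            n = 0
            while next(bits) == "0":         # count zeros up to the leading '1'
                n += 1
            width, symbols = TABLE[n]        # index search: one lookup, no scan
            suffix = 0
            for _ in range(width):           # read the fixed-width suffix
                suffix = (suffix << 1) | int(next(bits) == "1")
            return symbols[suffix]

        print(decode(iter("00110")))         # -> 'F'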

  2. Non-table look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs) and significant memory accesses are consumed. Heavy memory accesses will cause high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from VLCTs can be obtained without any table look-up and memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows a better performance compared with conventional CAVLC decoding, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
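
    As a flavor of how a variable-length code can be decoded purely arithmetically, the sketch below decodes order-0 Exp-Golomb codes (used elsewhere in H.264) with no table at all; it only illustrates the table-free principle, not the authors' actual CAVLC decoding program.

        def read_ue(bits):
            """Decode one order-0 Exp-Golomb codeword ('0'*n + '1' + n info
            bits) arithmetically, no lookup table: value = 2^n - 1 + info."""
            n = 0
            while next(bits) == "0":         # count the leading zeros
                n += 1
            info = 0
            for _ in range(n):               # read n info bits
                info = (info << 1) | int(next(bits) == "1")
            return (1 << n) - 1 + info

        stream = iter("00111" "1" "010")     # three codewords back to back
        print([read_ue(stream) for _ in range(3)])   # [6, 0, 1]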

  3. Lookup Tables Versus Stacked Rasch Analysis in Comparing Pre- and Postintervention Adult Strabismus-20 Data.

    PubMed

    Leske, David A; Hatt, Sarah R; Liebermann, Laura; Holmes, Jonathan M

    2016-02-01

    We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as "success," "partial success," or "failure" based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis (P < 0.0001 for all comparisons). The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software.

  4. Lookup Tables Versus Stacked Rasch Analysis in Comparing Pre- and Postintervention Adult Strabismus-20 Data

    PubMed Central

    Leske, David A.; Hatt, Sarah R.; Liebermann, Laura; Holmes, Jonathan M.

    2016-01-01

    Purpose We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). Methods One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as “success,” “partial success,” or “failure” based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Results Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis (P < 0.0001 for all comparisons). Conclusions The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. Translational Relevance We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software. PMID:26933524
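
    A minimal sketch of what scoring from a Rasch lookup table looks like in practice; the measure values below are invented placeholders, not the published AS-20 calibration values.

        # Raw sum score on a domain -> interval-level Rasch measure, by table
        # lookup. No Rasch software is needed at scoring time.
        RASCH_LOOKUP = {     # raw score -> measure (0-100 scale), placeholders
            5: 12.3, 6: 18.7, 7: 24.1, 8: 29.0, 9: 33.6, 10: 38.1,
        }

        def rasch_score(item_responses):
            raw = sum(item_responses)        # e.g., five items scored 1..4
            return RASCH_LOOKUP[raw]

        pre = rasch_score([1, 2, 1, 1, 2])   # raw 7  -> 24.1
        post = rasch_score([2, 2, 2, 2, 2])  # raw 10 -> 38.1
        print(round(post - pre, 1))          # change score: 14.0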

  5. Instantaneous and controllable integer ambiguity resolution: review and an alternative approach

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong

    2015-11-01

    In the high-precision application of the Global Navigation Satellite System (GNSS), integer ambiguity resolution is the key step to realize precise positioning and attitude determination. As the necessary part of quality control, integer aperture (IA) ambiguity resolution provides the theoretical and practical foundation for ambiguity validation. It is mainly realized by acceptance testing. Due to the constraint of correlation between ambiguities, it is impossible to control the failure rate according to an analytical formula. Hence, the fixed failure rate approach is implemented by Monte Carlo sampling. However, due to the characteristics of Monte Carlo sampling and of the look-up table, a large amount of time is consumed if sufficient GNSS scenarios are included in the creation of the look-up table. This restricts the fixed failure rate approach to post-processing if a look-up table is not available. Furthermore, if not enough GNSS scenarios are considered, the table may only be valid for a specific scenario or application. Besides this, the method of creating the look-up table or look-up function still needs to be designed for each specific acceptance test. To overcome these problems in the determination of critical values, this contribution proposes, for the first time, an instantaneous and CONtrollable (iCON) IA ambiguity resolution approach. The iCON approach has the following advantages: (a) the critical value of the acceptance test is determined independently, based on the required failure rate and the GNSS model, without resorting to external information such as a look-up table; (b) it can be realized instantaneously for most IA estimators that have analytical probability formulas, and the stronger the GNSS model, the less the time consumption; (c) it provides a new viewpoint for improving research on IA estimation. To verify these conclusions, multi-frequency and multi-GNSS simulation experiments are implemented. The results show that IA estimators based on the iCON approach can realize controllable ambiguity resolution. Moreover, compared with ratio test IA based on a look-up table, difference test IA and IA least squares based on the iCON approach have, most of the time, higher success rates and better control of failure rates.

  6. Performance of a lookup table-based approach for measuring tissue optical properties with diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Nichols, Brandon S.; Rajaram, Narasimhan; Tunnell, James W.

    2012-05-01

    Diffuse optical spectroscopy (DOS) provides a powerful tool for fast and noninvasive disease diagnosis. The ability to leverage DOS to accurately quantify tissue optical parameters hinges on the model used to estimate light-tissue interaction. We describe the accuracy of a lookup table (LUT)-based inverse model for measuring optical properties under different conditions relevant to biological tissue. The LUT is a matrix of reflectance values acquired experimentally from calibration standards of varying scattering and absorption properties. Because it is based on experimental values, the LUT inherently accounts for system response and probe geometry. We tested our approach in tissue phantoms containing multiple absorbers, different sizes of scatterers, and varying oxygen saturation of hemoglobin. The LUT-based model was able to extract scattering and absorption properties under most conditions with errors of less than 5 percent. We demonstrate the validity of the lookup table over a range of source-detector separations from 0.25 to 1.48 mm. Finally, we describe the rapid fabrication of a lookup table using only six calibration standards. This optimized LUT was able to extract scattering and absorption properties with average RMS errors of 2.5 and 4 percent, respectively.
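
    A minimal sketch of the LUT inversion step, with a synthetic stand-in for the measured reflectance table (a real LUT holds reflectance measured on calibration phantoms, and a real inversion fits reflectance at many wavelengths simultaneously rather than a single value):

        import numpy as np

        # Grid of optical properties; LUT holds one reflectance per grid point.
        mus = np.linspace(5.0, 40.0, 36)     # reduced scattering, cm^-1
        mua = np.linspace(0.1, 10.0, 50)     # absorption, cm^-1
        MUS, MUA = np.meshgrid(mus, mua, indexing="ij")
        LUT = MUS / (MUS + 8.0 * MUA)        # made-up smooth stand-in values

        def invert(r_measured):
            """Nearest grid point whose tabulated reflectance matches best."""
            i, j = np.unravel_index(np.argmin((LUT - r_measured) ** 2), LUT.shape)
            return mus[i], mua[j]

        print(invert(0.55))                  # (mu_s', mu_a) estimate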

  7. Table look-up estimation of signal and noise parameters from quantized observables

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1986-01-01

    A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.

  8. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yao; Wan, Liang; Chen, Kai

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.

  9. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    DOE PAGES

    Li, Yao; Wan, Liang; Chen, Kai

    2015-04-25

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.
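
    A rough sketch of the look-up step, reduced to a single tabulated entry (the FCC sigma-3 twin, 60 degrees about <111>); the full method also applies the crystal's point-group symmetry operators to the misorientation before comparing, which is omitted here.

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        # Look-up table of (name, angle in degrees, unit axis) twin entries.
        TWIN_TABLE = [("FCC sigma-3", 60.0,
                       np.array([1.0, 1.0, 1.0]) / np.sqrt(3))]

        def axis_angle(g_parent, g_twin):
            """Misorientation angle (deg) and axis between orientation matrices."""
            m = g_twin @ g_parent.T
            ang = np.degrees(np.arccos(np.clip((np.trace(m) - 1) / 2, -1, 1)))
            axis = np.array([m[2, 1] - m[1, 2], m[0, 2] - m[2, 0], m[1, 0] - m[0, 1]])
            return ang, axis / np.linalg.norm(axis)

        def classify(g_parent, g_twin, tol_deg=1.0):
            ang, ax = axis_angle(g_parent, g_twin)
            for name, t_ang, t_ax in TWIN_TABLE:
                if abs(ang - t_ang) < tol_deg and abs(abs(ax @ t_ax) - 1) < 0.01:
                    return name
            return "not a tabulated twin"

        g1 = np.eye(3)
        g2 = R.from_rotvec(np.radians(60.0) * np.array([1, 1, 1]) / np.sqrt(3)).as_matrix()
        print(classify(g1, g2))              # FCC sigma-3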

  10. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    NASA Astrophysics Data System (ADS)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reduced code size, improved implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication necessary to implement AES correctly [1]. This method can be applied on processors with a word length of 32 bits or more, on FPGAs, and elsewhere, and correspondingly it can be implemented in VHDL, Verilog, VB, and other languages.
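
    A sketch of generating such tables from the S-box; this builds the standard "T-table" form used by table-based AES implementations (four round tables plus the plain S-box for the last round makes five), though the paper's exact five-table layout may differ.

        # GF(2^8) helpers: multiply by 2, and log/antilog tables (generator 3).
        def xtime(a):
            a <<= 1
            return (a ^ 0x11B) & 0xFF if a & 0x100 else a

        log, alog = [0] * 256, [0] * 256
        x = 1
        for i in range(255):
            alog[i], log[x] = x, i
            x ^= xtime(x)                    # x *= 3

        def rotl8(b, n):
            return ((b << n) | (b >> (8 - n))) & 0xFF

        # S-box = GF(2^8) inverse followed by the affine transformation.
        SBOX = [0x63] * 256
        for v in range(1, 256):
            inv = alog[(255 - log[v]) % 255]
            SBOX[v] = (inv ^ rotl8(inv, 1) ^ rotl8(inv, 2) ^
                       rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63)
        assert SBOX[0x53] == 0xED            # spot-check against FIPS-197

        # T0 packs SubBytes + MixColumns: [2s, s, s, 3s]; T1..T3 are rotations.
        T0 = [(xtime(s) << 24) | (s << 16) | (s << 8) | (xtime(s) ^ s) for s in SBOX]
        T1 = [((t >> 8) | (t << 24)) & 0xFFFFFFFF for t in T0]
        T2 = [((t >> 16) | (t << 16)) & 0xFFFFFFFF for t in T0]
        T3 = [((t >> 24) | (t << 8)) & 0xFFFFFFFF for t in T0]
        # One round column: T0[b0] ^ T1[b1] ^ T2[b2] ^ T3[b3] ^ round_key_word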

  11. Intrinsic fluorescence of protein in turbid media using empirical relation based on Monte Carlo lookup table

    NASA Astrophysics Data System (ADS)

    Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu

    2017-03-01

    Fluorescence of protein has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to error in estimating the exact protein content of tissue. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods; among them, the Monte Carlo-based method yields the highest accuracy. In this work, we have attempted to generate a lookup table for Monte Carlo simulation of fluorescence emission by protein. Furthermore, we fitted the generated lookup table using an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein in real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.

  12. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
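
    A minimal sketch of the claimed control flow, with illustrative map names and numbers (none of which come from the patent): two maps are consulted, and only maps whose sensitivity exceeds the threshold are corrected.

        import numpy as np

        speeds = np.array([1000.0, 2000.0, 3000.0, 4000.0])   # rpm
        loads = np.array([0.2, 0.4, 0.6, 0.8])                # normalized load
        spark_map = np.full((4, 4), 20.0)    # engine map 1: spark advance, deg
        egr_map = np.full((4, 4), 10.0)      # engine map 2: EGR rate, percent

        def cell(speed, load):
            """Nearest map cell for the detected operating conditions."""
            return np.abs(speeds - speed).argmin(), np.abs(loads - load).argmin()

        def control_step(speed, load, perf_error, sens_spark, sens_egr, thresh=0.5):
            """perf_error: target minus measured performance variable.
            sens_*: estimated d(performance)/d(parameter) from perturbations."""
            i, j = cell(speed, load)
            if abs(sens_spark) > thresh:     # adjust only the sensitive map(s)
                spark_map[i, j] += 0.1 * perf_error * np.sign(sens_spark)
            if abs(sens_egr) > thresh:
                egr_map[i, j] += 0.1 * perf_error * np.sign(sens_egr)
            return spark_map[i, j], egr_map[i, j]   # values sent to the engine

        print(control_step(2600.0, 0.5, perf_error=1.0, sens_spark=0.8, sens_egr=0.2))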

  13. A microprocessor-based table lookup approach for magnetic bearing linearization

    NASA Technical Reports Server (NTRS)

    Groom, N. J.; Miller, J. B.

    1981-01-01

    An approach for producing a linear transfer characteristic between force command and force output of a magnetic bearing actuator without flux biasing is presented. The approach is microprocessor based and uses a table lookup to generate drive signals for the magnetic bearing power driver. An experimental test setup used to demonstrate the feasibility of the approach is described, and test results are presented. The test setup contains bearing elements similar to those used in a laboratory model annular momentum control device.
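
    A minimal sketch of the linearization idea, assuming the usual single-sided magnet model F = k(i/g)^2 and illustrative constants: inverting the square law into a table makes the command-to-force path linear without flux biasing.

        import numpy as np

        K_MAG = 2.5e-6    # N*m^2/A^2, magnet constant (illustrative)
        GAP = 0.5e-3      # m, nominal air gap
        F_MAX = 100.0     # N, full-scale force command
        N = 1024

        force_cmds = np.linspace(0.0, F_MAX, N)
        current_table = GAP * np.sqrt(force_cmds / K_MAG)   # i = g*sqrt(F/k)

        def drive_signal(force_cmd):
            """Table lookup for the power driver; the sign picks which of the
            two opposing magnets is energized."""
            idx = min(int(abs(force_cmd) / F_MAX * (N - 1)), N - 1)
            return np.sign(force_cmd), current_table[idx]

        print(drive_signal(-25.0))    # (direction, coil current in amperes)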

  14. SU-F-T-406: Verification of Total Body Irradiation Commissioned MU Lookup Table Accuracy Using Treatment Planning System for Wide Range of Patient Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, D; Chi, P; Tailor, R

    Purpose: To verify the accuracy of total body irradiation (TBI) measurement commissioning data using the treatment planning system (TPS) for a wide range of patient separations. Methods: Our institution conducts TBI treatments with an 18 MV photon beam at 380 cm extended SSD using an AP/PA technique. Currently, the monitor units (MU) per field for patient treatments are determined using a lookup table generated from TMR measurements in a water phantom (75 × 41 × 30.5 cm³). The dose prescribed to an umbilicus midline point at spine level is determined based on patient separation, dose/field, and dose rate/MU. One-dimensional heterogeneous dose calculations from the Pinnacle TPS were validated with thermoluminescent dosimeters (TLD) placed in an average adult anthropomorphic phantom and also in-vivo on four patients with large separations. Subsequently, twelve patients with various separations (17–47 cm) were retrospectively analyzed. Computed tomography (CT) scans were acquired in the left and right decubitus positions from vertex to knee. A treatment plan for each patient was generated. The ratio of the lookup table MU to the heterogeneous TPS MU was compared. Results: TLD measurements in the anthropomorphic phantom and large TBI patients agreed with the Pinnacle calculated dose within 2.8% and 2%, respectively. The heterogeneous calculation compared to the lookup table agreed within 8.1% (ratio range: 1.014–1.081). A trend of reduced accuracy was observed as patient separation increases. Conclusion: The TPS dose calculation accuracy was confirmed by TLD measurements, showing that Pinnacle can model the extended SSD dose without commissioning a special beam model for the extended SSD geometry. The difference between the lookup table and the TPS calculation potentially comes from lack of scatter during commissioning compared to extreme patient sizes. The observed trend suggests the need for development of a correction factor between the lookup table and TPS dose calculations.

  15. Table-driven software architecture for a stitching system

    NASA Technical Reports Server (NTRS)

    Thrash, Patrick J. (Inventor); Miller, Jeffrey L. (Inventor); Pallas, Ken (Inventor); Trank, Robert C. (Inventor); Fox, Rhoda (Inventor); Korte, Mike (Inventor); Codos, Richard (Inventor); Korolev, Alexandre (Inventor); Collan, William (Inventor)

    2001-01-01

    Native code for a CNC stitching machine is generated by generating a geometry model of a preform; generating tool paths from the geometry model, the tool paths including stitching instructions for making stitches; and generating additional instructions indicating thickness values. The thickness values are obtained from a lookup table. When the stitching machine runs the native code, it accesses a lookup table to determine a thread tension value corresponding to the thickness value. The stitching machine accesses another lookup table to determine a thread path geometry value corresponding to the thickness value.

  16. Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool

    NASA Astrophysics Data System (ADS)

    Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin

    2016-02-01

    The majority of existing scheduling techniques are based on static demand and deterministic processing time, while most job shop scheduling problems are concerned with dynamic demand and stochastic processing time. As a consequence, the solutions obtained from traditional scheduling techniques become ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST) based on promising artificial intelligence that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development, and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as part of the development of the DST. A discrete event simulation model was used to compare the performance among the SPT, EDD, FCFS, S/OPN, and Slack rules; the best performance measures (mean flow time, mean tardiness, and mean lateness) and the job order requirements (inter-arrival time, due date tightness, and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.
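
    A toy version of the look-up table generation, shrunk to a single machine and one performance measure (the paper uses a full job shop and several measures); all numbers are illustrative: dispatching rules are evaluated by simulation per scenario, and the best rule is recorded in the table.

        import random

        RULES = {
            "SPT":  lambda job: job["pt"],        # shortest processing time
            "FCFS": lambda job: job["arrival"],   # first come, first served
            "EDD":  lambda job: job["due"],       # earliest due date
        }

        def simulate(rule, jobs):
            """Single-machine dispatching simulation; returns mean flow time."""
            now, queue, flow, done = 0.0, [], 0.0, 0
            pending = sorted(jobs, key=lambda j: j["arrival"])
            while pending or queue:
                queue += [j for j in pending if j["arrival"] <= now]
                pending = [j for j in pending if j["arrival"] > now]
                if not queue:
                    now = pending[0]["arrival"]
                    continue
                job = min(queue, key=RULES[rule])
                queue.remove(job)
                now += job["pt"]
                flow += now - job["arrival"]
                done += 1
            return flow / done

        lookup = {}
        for inter_arrival in (1.0, 2.0):          # scenario parameter sweep
            random.seed(0)
            t, jobs = 0.0, []
            for _ in range(200):
                t += random.expovariate(1.0 / inter_arrival)
                pt = random.uniform(0.5, 2.5)
                jobs.append({"arrival": t, "pt": pt, "due": t + 3 * pt})
            lookup[inter_arrival] = min(RULES, key=lambda r: simulate(r, jobs))
        print(lookup)    # best rule per scenario, e.g. {1.0: 'SPT', 2.0: 'SPT'}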

  17. Overview of fast algorithm in 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) approaches based on the point-based method, and the full analytical and one-step approaches based on the polygon-based method. In this presentation, we review various fast algorithms based on the point-based and polygon-based methods, and focus on the fast algorithm with low memory usage, the C-LUT, and on the one-step polygon-based method using 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.

  18. Spectral Retrieval of Latent Heating Profiles from TRMM PR Data: Comparison of Look-Up Tables

    NASA Technical Reports Server (NTRS)

    Shige, Shoichi; Takayabu, Yukari N.; Tao, Wei-Kuo; Johnson, Daniel E.; Shie, Chung-Lin

    2003-01-01

    The primary goal of the Tropical Rainfall Measuring Mission (TRMM) is to use the information about distributions of precipitation to determine the four-dimensional (i.e., temporal and spatial) patterns of latent heating over the whole tropical region. The Spectral Latent Heating (SLH) algorithm has been developed to estimate latent heating profiles for the TRMM Precipitation Radar (PR) with a cloud-resolving model (CRM). The method uses CRM-generated heating profile look-up tables for three rain types: convective, shallow stratiform, and anvil rain (deep stratiform with a melting level). For the convective and shallow stratiform regions, the look-up table refers to the precipitation top height (PTH). For the anvil region, on the other hand, the look-up table refers to the precipitation rate at the melting level instead of PTH. For global applications, it is necessary to examine the universality of the look-up table. In this paper, we compare the look-up tables produced from numerical simulations of cloud ensembles forced with the Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE) data and the GARP Atlantic Tropical Experiment (GATE) data. There are some notable differences between the TOGA COARE table and the GATE table, especially for the convective heating. First, there is a larger number of the deepest convective profiles in the TOGA COARE table than in the GATE table, mainly due to differences in SST. Second, shallow convective heating is stronger in the TOGA COARE table than in the GATE table. This might be attributable to the difference in the strength of the low-level inversions. Third, the altitudes of the convective heating maxima are higher in the TOGA COARE table than in the GATE table. The levels of the convective heating maxima are located just below the melting level, because warm-rain processes are prevalent in tropical oceanic convective systems. Differences in the levels of the convective heating maxima probably reflect differences in melting layer heights. We are now extending our study to simulations of other field experiments (e.g., SCSMEX and ARM) in order to examine the universality of the look-up table. The impact of the look-up tables on the retrieved latent heating profiles will also be assessed.

  19. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default Values...

  20. A VLSI architecture for performing finite field arithmetic with reduced table look-up

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Reed, I. S.

    1986-01-01

    A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element are found by the use of several smaller tables. The method is based on the Chinese Remainder Theorem. The technique often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic including multiplication, division, and the finding of an inverse element in the finite field.
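
    A sketch of the table-reduction idea, not of the VLSI architecture itself: since 255 = 3 * 5 * 17, the Chinese Remainder Theorem lets the log exponent in GF(2^8) be carried as three small residues, so multiplication needs three small modular additions, and the antilog table can be addressed directly by the residue triple.

        def xtime(a):
            a <<= 1
            return (a ^ 0x11B) & 0xFF if a & 0x100 else a

        log, alog = {}, {}
        x = 1
        for i in range(255):                 # generator 3 of GF(2^8)*
            alog[i], log[x] = x, i
            x ^= xtime(x)

        MODULI = (3, 5, 17)
        log_res = {m: {v: log[v] % m for v in log} for m in MODULI}   # small tables
        alog_res = {(e % 3, e % 5, e % 17): alog[e] for e in range(255)}

        def gf_mul(a, b):
            if a == 0 or b == 0:
                return 0
            key = tuple((log_res[m][a] + log_res[m][b]) % m for m in MODULI)
            return alog_res[key]             # antilog addressed by residues

        assert gf_mul(0x57, 0x83) == 0xC1    # worked example from FIPS-197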

  21. On the look-up tables for the critical heat flux in tubes (history and problems)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirillov, P.L.; Smogalev, I.P.

    1995-09-01

    The complexity of the critical heat flux (CHF) problem for boiling in channels is caused by the large number of variable factors and the variety of two-phase flows. The existence of several hundred correlations for the prediction of CHF demonstrates the unsatisfactory state of this problem. The phenomenological CHF models can provide only qualitative predictions of CHF, primarily in annular-dispersed flow. The CHF look-up tables, which cover the results of numerous experiments, have received more recognition in the last 15 years. These tables are based on the statistical averaging of CHF values for each range of pressure, mass flux, and quality. The CHF values for regions where no experimental data are available are obtained by extrapolation. The correction of these tables to account for the diameter effect is a complicated problem, and there are ranges of conditions where simple correlations cannot produce reliable results. Therefore, the diameter effect on CHF needs additional study. The modification of look-up table data for CHF in tubes to predict CHF in rod bundles must include a method to take into account the nonuniformity of quality in a rod bundle cross section.
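
    A minimal sketch of how such a table is consumed: trilinear interpolation in (pressure, mass flux, quality), followed by a tube-diameter correction. The tiny grid and CHF values are placeholders, and the (0.008/D)^0.5 scaling is just one commonly used form of diameter correction, shown purely for illustration.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        pressure = np.array([1.0, 5.0, 10.0, 15.0])      # MPa
        mass_flux = np.array([500.0, 1500.0, 3000.0])    # kg/(m^2*s)
        quality = np.array([0.0, 0.2, 0.4, 0.6])         # vapor quality
        chf_8mm = np.random.default_rng(1).uniform(      # MW/m^2, fake values
            1.0, 6.0, (4, 3, 4))

        table = RegularGridInterpolator((pressure, mass_flux, quality), chf_8mm)

        def chf(p, g, x, diameter=0.008):
            base = table([p, g, x])[0]               # 8 mm reference tube value
            return base * (0.008 / diameter) ** 0.5  # diameter correction

        print(chf(7.0, 2000.0, 0.3, diameter=0.011))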

  22. Device and circuit analysis of a sub 20 nm double gate MOSFET with gate stack using a look-up-table-based approach

    NASA Astrophysics Data System (ADS)

    Chakraborty, S.; Dasgupta, A.; Das, R.; Kar, M.; Kundu, A.; Sarkar, C. K.

    2017-12-01

    In this paper, we explore the possibility of mapping devices designed in a TCAD environment to modeled versions developed in the Cadence Virtuoso environment using a look-up table (LUT) approach. Circuit simulation of newly designed devices in a TCAD environment is a very slow and tedious process involving complex scripting; hence, the LUT-based modeling approach is proposed as a faster and easier alternative in the Cadence environment. The LUTs are prepared by extracting data from the device characteristics obtained from device simulation in TCAD. A comparative study between the TCAD simulation and the LUT-based alternative showcases the accuracy of the modeled devices. Finally, the look-up table approach is used to evaluate the performance of circuits implemented using the 14 nm nMOSFET.
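
    A minimal sketch of the LUT modeling flow, with a toy square-law expression standing in for exported TCAD characteristics:

        import numpy as np

        # Tabulate Id(Vgs, Vds) on the sweep grid (toy values; a real LUT is
        # filled from TCAD device simulation output).
        vgs_ax = np.linspace(0.0, 0.8, 17)
        vds_ax = np.linspace(0.0, 0.8, 17)
        VGS, VDS = np.meshgrid(vgs_ax, vds_ax, indexing="ij")
        VOV = np.maximum(VGS - 0.3, 0.0)                  # toy Vth = 0.3 V
        ID_LUT = 1e-3 * np.where(VDS < VOV, (2 * VOV - VDS) * VDS, VOV ** 2)

        def id_lookup(vgs, vds):
            """Bilinear interpolation, the evaluation a circuit simulator
            performs at every iteration instead of rerunning TCAD."""
            i = np.clip(np.searchsorted(vgs_ax, vgs) - 1, 0, len(vgs_ax) - 2)
            j = np.clip(np.searchsorted(vds_ax, vds) - 1, 0, len(vds_ax) - 2)
            tg = (vgs - vgs_ax[i]) / (vgs_ax[i + 1] - vgs_ax[i])
            td = (vds - vds_ax[j]) / (vds_ax[j + 1] - vds_ax[j])
            return ((1 - tg) * (1 - td) * ID_LUT[i, j]
                    + tg * (1 - td) * ID_LUT[i + 1, j]
                    + (1 - tg) * td * ID_LUT[i, j + 1]
                    + tg * td * ID_LUT[i + 1, j + 1])

        print(id_lookup(0.6, 0.4))    # drain current in amperes (~9e-5 here)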

  23. GPU color space conversion

    NASA Astrophysics Data System (ADS)

    Chase, Patrick; Vondran, Gary

    2011-01-01

    Tetrahedral interpolation is commonly used to implement continuous color space conversions from sparse 3D and 4D lookup tables. We investigate the implementation and optimization of tetrahedral interpolation algorithms for GPUs, and compare them to the best known CPU implementations as well as to a well-known GPU-based trilinear implementation. We show that a $500 NVIDIA GTX-580 GPU is 3x faster than a $1000 Intel Core i7 980X CPU for 3D interpolation, and 9x faster for 4D interpolation. Performance-relevant GPU attributes are explored, including thread scheduling, local memory characteristics, global memory hierarchy, and cache behaviors. We consider existing tetrahedral interpolation algorithms and tune them based on the structure and branching capabilities of current GPUs. Global memory performance is improved by reordering and expanding the lookup table to ensure optimal access behaviors. Per-multiprocessor local memory is exploited to implement optimally coalesced global memory accesses, and local memory addressing is optimized to minimize bank conflicts. We explore the impact of lookup table density upon computation and memory access costs. Also presented are CPU-based 3D and 4D interpolators using SSE vector operations that are faster than any previously published solution.
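
    For reference, the arithmetic at the core of the paper in plain CPU form: tetrahedral interpolation picks the enclosing tetrahedron of the cube by sorting the fractional coordinates, and the sorted differences are the barycentric weights. The GPU work in the paper reorders the table and memory accesses around this same arithmetic.

        import numpy as np

        def tetra_interp(lut, rgb):
            """lut: (N, N, N, 3) table; rgb: input in [0, 1]^3."""
            n = lut.shape[0] - 1
            p = np.clip(np.asarray(rgb, float) * n, 0, n - 1e-9)
            base = p.astype(int)             # lower corner of enclosing cube
            f = p - base                     # fractional position in the cube
            order = np.argsort(-f)           # axes by descending fraction
            fs = f[order]
            verts = [base.copy()]
            for ax in order:                 # walk one axis step at a time
                v = verts[-1].copy()
                v[ax] += 1
                verts.append(v)
            w = np.array([1 - fs[0], fs[0] - fs[1], fs[1] - fs[2], fs[2]])
            return sum(wi * lut[tuple(v)] for wi, v in zip(w, verts))

        # Identity LUT: output equals input, so the result should echo rgb.
        g = np.linspace(0, 1, 17)
        lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
        print(tetra_interp(lut, (0.33, 0.72, 0.10)))   # ~[0.33, 0.72, 0.10]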

  24. Optoelectronic switch matrix as a look-up table for residue arithmetic.

    PubMed

    Macdonald, R I

    1987-10-01

    The use of optoelectronic matrix switches to perform look-up table functions in residue arithmetic processors is proposed. In this application, switchable detector arrays give the advantage of a greatly reduced requirement for optical sources by comparison with previous optoelectronic residue processors.

  25. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  26. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawai, Soshi, E-mail: kawai@cfd.mech.tohoku.ac.jp; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  27. Random Fill Cache Architecture (Preprint)

    DTIC Science & Technology

    2014-10-01

    As a concrete example, we show how the cache collision attack works to extract the AES encryption keys (e.g., in the OpenSSL implementation of AES). ... each round are implemented as table lookups for performance reasons. OpenSSL uses ten 1-KB lookup tables, five for encryption and five for decryption.

  28. A color management system for multi-colored LED lighting

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen; Corell, Dennis D.; Dam-Hansen, Carsten

    2015-09-01

    A new color control system is described and implemented for a five-color LED light engine, covering a wide white gamut. The system combines a new way of using pre-calibrated lookup tables and a rule-based optimization of chromaticity distance from the Planckian locus with a calibrated color sensor. The color sensor monitors the chromaticity of the mixed light providing the correction factor for the current driver by using the generated lookup table. The long term stability and accuracy of the system will be experimentally investigated with target tolerance within a circle radius of 0.0013 in the uniform chromaticity diagram (CIE1976).

  29. A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static Random Access Memories

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro

    2006-04-01

    A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in the 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64 K-bit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multi-phase pseudo-asynchronous operation with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. This chip operates at 33 MHz in 8-LUT cascades at 122 mW. Benchmark results show that it achieves performance comparable to field-programmable gate arrays (FPGAs).
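
    A minimal sketch of cascade evaluation in software: an 8-input parity function run through four small LUT stages, each taking the previous stage's "rail" bit plus two fresh input bits. In the chip, each stage would be one SRAM addressed by the rail and input bits.

        def make_stage():
            """One stage's memory contents: (rail, b0, b1) -> new rail bit."""
            return {(rail, b0, b1): rail ^ b0 ^ b1
                    for rail in (0, 1) for b0 in (0, 1) for b1 in (0, 1)}

        STAGES = [make_stage() for _ in range(4)]    # the cascade memories

        def cascade_eval(bits):
            """bits: 8 input bits, consumed two per stage."""
            rail = 0
            for stage, (b0, b1) in zip(STAGES, zip(bits[0::2], bits[1::2])):
                rail = stage[(rail, b0, b1)]         # one memory access per stage
            return rail

        print(cascade_eval([1, 0, 1, 1, 0, 0, 1, 0]))   # parity of 8 bits -> 0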

  30. Content dependent selection of image enhancement parameters for mobile displays

    NASA Astrophysics Data System (ADS)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) contents have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments are performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are determined based on the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.

  31. A shower look-up table to trace the dynamics of meteoroid streams and their sources

    NASA Astrophysics Data System (ADS)

    Jenniskens, Petrus

    2018-04-01

    Meteor showers are caused by meteoroid streams from comets (and some primitive asteroids). They trace the comet population and its dynamical evolution, warn of dangerous long-period comets that can pass close to Earth's orbit, outline volumes of space with a higher satellite impact probability, and define how meteoroids evolve in the interplanetary medium. Ongoing meteoroid orbit surveys have mapped these showers in recent years, but the surveys are now running up against a more and more complicated scene. The IAU Working List of Meteor Showers has reached 956 entries to be investigated (as of March 1, 2018). The picture is even more complicated with the discovery that radar-detected streams are often different, or differently distributed, than video-detected streams. Complicating matters even more, some meteor showers are active over many months, during which their radiant position gradually changes, which makes the use of mean orbits as a proxy for a meteoroid stream's identity meaningless. The dispersion of the stream in space and time is important to that identity and contains much information about its origin and dynamical evolution. To make sense of the meteor shower zoo, a Shower Look-Up Table was created that captures this dispersion. The Shower Look-Up Table has enabled the automated identification of showers in the ongoing CAMS video-based meteoroid orbit survey, results of which are now presented online in near-real time at http://cams.seti.org/FDL/. Visualization tools have been built that depict the streams in a planetarium setting. Examples will be presented that sample the range of meteoroid streams that this look-up table describes. Possibilities for further dynamical studies will be discussed.

  32. Extension of Generalized Fluid System Simulation Program's Fluid Property Database

    NASA Technical Reports Server (NTRS)

    Patel, Kishan

    2011-01-01

    This internship focused on the development of additional capabilities for the General Fluid Systems Simulation Program (GFSSP). GFSSP is a thermo-fluid code used to evaluate system performance by a finite volume-based network analysis method. The program was developed primarily to analyze the complex internal flow of propulsion systems and is capable of solving many problems related to thermodynamics and fluid mechanics. GFSSP is integrated with thermodynamic programs that provide fluid properties for sub-cooled, superheated, and saturation states. For fluids that are not included in the thermodynamic property program, look-up property tables can be provided. The look-up property tables of the current release version can only handle sub-cooled and superheated states. The primary purpose of the internship was to extend the look-up tables to handle saturated states. This involves a) generation of a property table using REFPROP, a thermodynamic property program that is widely used, and b) modifications of the Fortran source code to read in an additional property table containing saturation data for both saturated liquid and saturated vapor states. Also, a method was implemented to calculate the thermodynamic properties of user-fluids within the saturation region, given values of pressure and enthalpy. These additions required new code to be written, and older code had to be adjusted to accommodate the new capabilities. Ultimately, the changes will lead to the incorporation of this new capability in future versions of GFSSP. This paper describes the development and validation of the new capability.
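
    A minimal sketch of the added saturation-region calculation: given pressure and enthalpy, interpolate the saturated liquid/vapor enthalpies from the property table, form the quality, and mix the phase properties. The few tabulated numbers are rough steam-table values for water, for illustration only.

        import numpy as np

        P_TAB = np.array([1.0, 5.0, 10.0])           # bar
        HF_TAB = np.array([417.0, 640.0, 763.0])     # kJ/kg, sat. liquid enthalpy
        HG_TAB = np.array([2675.0, 2748.0, 2778.0])  # kJ/kg, sat. vapor enthalpy
        VF_TAB = np.array([0.00104, 0.00109, 0.00113])   # m^3/kg
        VG_TAB = np.array([1.694, 0.3748, 0.1944])       # m^3/kg

        def saturation_state(p, h):
            hf = np.interp(p, P_TAB, HF_TAB)
            hg = np.interp(p, P_TAB, HG_TAB)
            x = (h - hf) / (hg - hf)                 # thermodynamic quality
            if not 0.0 <= x <= 1.0:
                raise ValueError("state is not in the two-phase region")
            vf = np.interp(p, P_TAB, VF_TAB)
            vg = np.interp(p, P_TAB, VG_TAB)
            return x, vf + x * (vg - vf)             # quality, specific volume

        print(saturation_state(p=5.0, h=1800.0))     # x ~ 0.55, v ~ 0.21 m^3/kg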

  33. Fast Pixel Buffer For Processing With Lookup Tables

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1992-01-01

    Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data are read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table is used to address those pixels in main image memory required for processing.

  34. An Aerosol Extinction-to-Backscatter Ratio Database Derived from the NASA Micro-Pulse Lidar Network: Applications for Space-based Lidar Observations

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Spinhime, James D.; Berkoff, Timothy A.; Holben, Brent; Tsay, Si-Chee; Bucholtz, Anthony

    2004-01-01

    Backscatter lidar signals are a function of both backscatter and extinction; hence, these lidar observations alone cannot separate the two quantities. The aerosol extinction-to-backscatter ratio, S, is the key parameter required to accurately retrieve extinction and optical depth from backscatter lidar observations of aerosol layers. S is commonly defined as 4*pi divided by the product of the single-scatter albedo and the phase function at the 180-degree scattering angle. Values of S for different aerosol types are not well known, and are even more difficult to determine when aerosols become mixed. Here we present a new lidar-sunphotometer S database derived from observations of the NASA Micro-Pulse Lidar Network (MPLNET). MPLNET is a growing worldwide network of eye-safe backscatter lidars co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Values of S for different aerosol species and geographic regions will be presented. A framework for constructing an S look-up table will be shown. Look-up tables of S are needed to calculate aerosol extinction and optical depth from space-based lidar observations in the absence of co-located AOD data. Applications for using the new S look-up table to reprocess aerosol products from NASA's Geoscience Laser Altimeter System (GLAS) will be discussed.
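
    Written out as a formula (LaTeX notation), where the first equality is just the definition of the extinction-to-backscatter ratio and the second restates the expression quoted above, with omega_0 the single-scatter albedo and P(180 deg) the phase function at the 180-degree scattering angle:

        S \;=\; \frac{\sigma_{\mathrm{ext}}}{\beta_{180}}
          \;=\; \frac{4\pi}{\omega_0 \, P(180^{\circ})}

    Multiplying a retrieved backscatter profile by an assumed S thus yields extinction, which integrates along the profile to optical depth.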

  35. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they generally do not run in real time and need at least one frame memory if implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, the method does not require frame memory to buffer image data, and therefore greatly reduces the cost of the CIS. The method supports not only single image capture but also bracketing, capturing three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
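
    A minimal sketch of building such a table by histogram matching of the two previews (gray-scale only; the paper's ILUT construction may differ in detail): the table is built before capture, so the captured frame can be corrected pixel by pixel with no frame buffering.

        import numpy as np

        def build_ilut(short_prev, long_prev):
            """For each short-exposure level, find the long-exposure level of
            equal cumulative histogram (CDF) and store it in a 256-entry LUT."""
            cdf_s = np.cumsum(np.bincount(short_prev.ravel(), minlength=256))
            cdf_l = np.cumsum(np.bincount(long_prev.ravel(), minlength=256))
            cdf_s = cdf_s / cdf_s[-1]
            cdf_l = cdf_l / cdf_l[-1]
            return np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)

        rng = np.random.default_rng(0)
        long_prev = rng.integers(0, 256, (120, 160), dtype=np.uint8)
        short_prev = (long_prev * 0.35).astype(np.uint8)   # simulated under-exposure
        ilut = build_ilut(short_prev, long_prev)
        corrected = ilut[short_prev]                       # per-pixel table lookup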

  36. Generating functional analysis of minority games with inner product strategy definitions

    NASA Astrophysics Data System (ADS)

    Coolen, A. C. C.; Shayeghi, N.

    2008-08-01

    We use generating functional methods to solve the so-called inner product versions of the minority game (MG), with fake and/or real market histories, by generalizing the theory developed recently for look-up table MGs with real histories. The phase diagrams of the look-up table and inner product MG versions are generally found to be identical, with the exception of inner product MGs where histories are sampled linearly, which are found to be structurally critical. However, we encounter interesting differences both in the theory (where the role of the history frequency distribution in look-up table MGs is taken over by the eigenvalue spectrum of a history covariance matrix in inner product MGs) and in the static and dynamic phenomenology of the models. Our theoretical predictions are supported by numerical simulations.

  37. An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router

    NASA Astrophysics Data System (ADS)

    Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua

    2016-10-01

    Virtual routers enable the coexistence of different networks on the same physical facility and have lately attracted a great deal of attention from researchers. As the number of IPv6 addresses is rapidly increasing in virtual routers, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called the weight-balanced tree (WBT). WBT merges the Forwarding Information Bases (FIBs) of the virtual routers into one spanning tree and compresses the space cost. WBT's average and worst-case time complexities for the lookup and update processes are both O(log N), and its space complexity is O(cN), where N is the size of the routing table and c is a constant. Experiments show that WBT reduces Static Random Access Memory (SRAM) cost by more than 80% in comparison with separation schemes. WBT also achieves the least average search depth compared with other homogeneous algorithms.
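
    For contrast with the WBT, a minimal baseline: the plain binary-trie longest-prefix match that per-virtual-router FIBs would otherwise each require. This sketch shows only the lookup problem being solved, not the WBT algorithm itself.

        class TrieNode:
            __slots__ = ("children", "next_hop")
            def __init__(self):
                self.children = [None, None]
                self.next_hop = None

        root = TrieNode()

        def insert(prefix_bits, next_hop):
            node = root
            for b in prefix_bits:
                if node.children[b] is None:
                    node.children[b] = TrieNode()
                node = node.children[b]
            node.next_hop = next_hop

        def lookup(addr_bits):
            node, best = root, None
            for b in addr_bits:
                if node.next_hop is not None:
                    best = node.next_hop        # longest match seen so far
                node = node.children[b]
                if node is None:
                    return best
            return node.next_hop or best

        insert([0, 0, 1], "eth0")               # a /3 prefix
        insert([0, 0, 1, 1], "eth1")            # a longer, more specific prefix
        print(lookup([0, 0, 1, 1, 0, 1]))       # -> 'eth1'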

  38. 76 FR 8990 - Hours of Service of Drivers; Availability of Supplemental Documents and Corrections to Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-16

    .... FMCSA used a look-up table for scaling the individual hours of work in the previous week. Look-up tables...). Column C (cells C93-137) presents the values scaled to our average work (52 hours per week of work) and... Evaluation to link the hours worked in the previous week to fatigue the following week. On January 28, 2011...

  39. Implementation of a fast digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.; Collins, Stuart A., Jr.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.

  40. Implementation of a fast digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic.

    PubMed

    Habiby, S F; Collins, S A

    1987-11-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.

  41. Aerosol polarization effects on atmospheric correction and aerosol retrievals in ocean color remote sensing.

    PubMed

    Wang, Menghua

    2006-12-10

    The current ocean color data processing system for the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) uses Rayleigh lookup tables that were generated using the vector radiative transfer theory with inclusion of the polarization effects. The polarization effects, however, are not accounted for in the aerosol lookup tables for ocean color data processing. I describe a study of the aerosol polarization effects on the atmospheric correction and aerosol retrieval algorithms in ocean color remote sensing. Using an efficient method for multiple vector radiative transfer computations, aerosol lookup tables that include polarization effects are generated. Simulations have been carried out to evaluate the aerosol polarization effects on the derived ocean color and aerosol products for all possible solar-sensor geometries and various aerosol optical properties. Furthermore, the new aerosol lookup tables have been implemented in the SeaWiFS data processing system and extensively tested and evaluated with SeaWiFS regional and global measurements. Results show that in open oceans (maritime environment), the aerosol polarization effects on the ocean color and aerosol products are usually negligible, while there are some noticeable effects on the derived products in coastal regions with nonmaritime aerosols.

  42. Delivery of Nano-Tethered Therapies to Brain Metastases of Primary Breast Cancer Using a Cellular Trojan Horse

    DTIC Science & Technology

    2015-12-01

    Hounsfield units (HU) of the brain were translated into corresponding optical properties (absorption coefficient, scattering coefficient, and anisotropy factor) using lookup tables (Fig 2). The lookup tables were prepared from earlier studies which derived the Hounsfield units and optical properties of ... (Hounsfield units, HU) are segmented and translated into optical properties of the brain tissue (white/gray matter, CSF, skull bone, etc.). Monte ...

  43. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising but challenging for three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.

  44. Identification of sea ice types in spaceborne synthetic aperture radar data

    NASA Technical Reports Server (NTRS)

    Kwok, Ronald; Rignot, Eric; Holt, Benjamin; Onstott, R.

    1992-01-01

    This study presents an approach for identification of sea ice types in spaceborne SAR image data. The unsupervised classification approach involves cluster analysis for segmentation of the image data followed by cluster labeling based on previously defined look-up tables containing the expected backscatter signatures of different ice types measured by a land-based scatterometer. Extensive scatterometer observations and experience accumulated in field campaigns during the last 10 yr were used to construct these look-up tables. The classification approach, its expected performance, the dependence of this performance on radar system performance, and expected ice scattering characteristics are discussed. Results using both aircraft and simulated ERS-1 SAR data are presented and compared to limited field ice property measurements and coincident passive microwave imagery. The importance of an integrated postlaunch program for the validation and improvement of this approach is discussed.

  45. Fuzzy Rule Suram for Wood Drying

    NASA Astrophysics Data System (ADS)

    Situmorang, Zakarias

    2017-12-01

    Implementation of a fuzzy rule requires a look-up table for the defuzzification analysis; the look-up table is how the actuator plant realizes the fuzzified values. The suram rule, based on fuzzy logic with the weather variables ambient temperature and ambient humidity, is implemented for the wood drying process. The membership functions of the state variables, represented by the error value and the change of error, use typical triangular and trapezoidal maps. The analysis results in 4 fuzzy rules over 81 conditions to control the output system, which can be constructed for a range of weather and air conditions, and is used to minimize the electric energy consumed by the heater. One cycle of the drying schedule is a series of chamber conditions appropriate to the wood species being processed.

  46. The FORWARD Project: Incorporating Long-Term Hydrologic Datasets Into Detailed Forest Management Plans for the Canadian Boreal Forest

    NASA Astrophysics Data System (ADS)

    Dinsmore, P.; Prepas, E.; Putz, G.; Smith, D.

    2008-12-01

    The Forest Watershed and Riparian Disturbance (FORWARD) Project has collected data on weather, soils, vegetation, streamflow and stream water quality under relatively undisturbed conditions, as well as after experimental forest harvest, in partnership with industrial forest operations within the Boreal Plain and Boreal Shield ecozones of Canada. Research-based contributions from FORWARD were integrated into our Boreal Plain industry partner's 2007-2016 Detailed Forest Management Plan. These contributions consisted of three components: 1) A GIS watershed and stream layer that included a hydrological network, a Digital Elevation Model, and Strahler classified streams and watersheds for 1st- and 3rd-order watersheds; 2) a combined soil and wetland GIS layer that included maps and associated datasets for relatively coarse mineral soils (which drain quickly) and wetlands (which retain water), which were the key features that needed to be identified for the FORWARD modelling effort; and 3) a lookup table was developed that permits planners to determine runoff coefficients (the variable selected for hydrological modelling) for 1st-order watersheds, based upon slope, vegetation and soil attributes in forest polygons. The lookup table was populated with output from the deterministic Soil and Water Assessment Tool (SWAT), adapted for boreal forest vegetation with a version of the plant growth model, ALMANAC. The runoff coefficient lookup table facilitated integration of predictions of hydrologic impacts of forest harvest into planning. This pilot-scale effort will ultimately be extended to the Boreal Shield study area.

  7. The Development of a Motor-Free Short-Form of the Wechsler Intelligence Scale for Children-Fifth Edition.

    PubMed

    Piovesana, Adina M; Harrison, Jessica L; Ducat, Jacob J

    2017-12-01

    This study aimed to develop a motor-free short-form of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) that allows clinicians to estimate the Full Scale Intelligence Quotients of youths with motor impairments. Using the reliabilities and intercorrelations of six WISC-V motor-free subtests, psychometric methodologies were applied to develop look-up tables for four Motor-free Short-form indices: Verbal Comprehension Short-form, Perceptual Reasoning Short-form, Working Memory Short-form, and a Motor-free Intelligence Quotient. Index-level discrepancy tables were developed using the same methods to allow clinicians to statistically compare visual, verbal, and working memory abilities. The short-form indices had excellent reliabilities (r = .92-.97), comparable to the original WISC-V. This motor-free short-form of the WISC-V is a reliable alternative for the assessment of intellectual functioning in youths with motor impairments. Clinicians are provided with user-friendly look-up tables, index-level discrepancy tables, and base rates, displayed similarly to those in the WISC-V manuals, to enable interpretation of assessment results.

  8. Reference manual for data base on Nevada well logs

    USGS Publications Warehouse

    Bauer, E.M.; Cartier, K.D.

    1995-01-01

    The U.S. Geological Survey and Nevada Division of Water Resources are cooperatively using a data base for managing well-log information for the State of Nevada. The Well-Log Data Base is part of an integrated system of computer data bases using the Ingres Relational Data-Base Management System, which allows efficient storage and access to water information from the State Engineer's office. The data base contains a main table, two ancillary tables, and nine lookup tables, as well as a menu-driven system for entering, updating, and reporting on the data. This reference guide outlines the general functions of the system and provides a brief description of data tables and data-entry screens.
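
    The main-table-plus-lookup-tables layout is the standard relational pattern of constraining codes in the main table against small reference tables. A minimal sketch using Python's sqlite3 in place of the Ingres system actually used; all table and column names are hypothetical:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    # A lookup table constrains the codes allowed in the main table.
    cur.executescript("""
    CREATE TABLE aquifer_lu (code TEXT PRIMARY KEY, description TEXT);
    CREATE TABLE well_log (
        well_id      INTEGER PRIMARY KEY,
        depth_ft     REAL,
        aquifer_code TEXT REFERENCES aquifer_lu(code)
    );
    INSERT INTO aquifer_lu VALUES ('AL', 'Alluvial'), ('CB', 'Carbonate rock');
    INSERT INTO well_log VALUES (1, 250.0, 'AL'), (2, 810.5, 'CB');
    """)
    # Reports join the main table to the lookup table to expand the codes.
    for row in cur.execute("""SELECT w.well_id, w.depth_ft, a.description
                              FROM well_log w JOIN aquifer_lu a
                              ON w.aquifer_code = a.code"""):
        print(row)
    ```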

  9. Application Program Interface for the Orion Aerodynamics Database

    NASA Technical Reports Server (NTRS)

    Robinson, Philip E.; Thompson, James

    2013-01-01

    The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide the developers of software an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have only included data tables and a document of the algorithm and equations to combine them for the total aerodynamic forces and moments. This process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet the tool's input file structure requirements. Finally, the capabilities for built-in table lookup routines vary for each simulation tool. Implementation of a new database may require an update to and verification of the table lookup routines. This may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that could be integrated into other simulation and analysis tools. The highly complex Orion aerodynamics model can then be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems. The input data files are in standard formatted ASCII, also for improved portability. The API contains its own implementation of multidimensional table reading and lookup routines. The same aerodynamics input file can be used without modification on all implementations. The turnaround time from aerodynamics model release to a working implementation is significantly reduced.
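
    The core service such an API provides, multidimensional table lookup with interpolation, can be illustrated with a small sketch. The 2-D table and its values below are invented, and a hand-rolled bilinear interpolation stands in for the API's own ANSI C routines:

    ```python
    import numpy as np

    # Hypothetical 2-D aero table: CL as a function of Mach and angle of attack.
    mach  = np.array([0.3, 0.6, 0.9])
    alpha = np.array([0.0, 5.0, 10.0])          # degrees
    cl    = np.array([[0.10, 0.45, 0.80],       # rows: Mach, cols: alpha
                      [0.12, 0.50, 0.88],
                      [0.15, 0.55, 0.95]])

    def bilinear(table, xs, ys, x, y):
        """Bilinear interpolation on a regular 2-D grid (clamped to the edges)."""
        i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
        j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
        tx = (x - xs[i]) / (xs[i + 1] - xs[i])
        ty = (y - ys[j]) / (ys[j + 1] - ys[j])
        return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j]
                + (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])

    print(bilinear(cl, mach, alpha, 0.45, 7.5))  # interpolated CL
    ```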

  10. Reference manual for data base on Nevada water-rights permits

    USGS Publications Warehouse

    Cartier, K.D.; Bauer, E.M.; Farnham, J.L.

    1995-01-01

    The U.S. Geological Survey and Nevada Division of Water Resources have cooperatively developed and implemented a data-base system for managing water-rights permit information for the State of Nevada. The Water-Rights Permit data base is part of an integrated system of computer data bases using the Ingres Relational Data-Base Management System, which allows efficient storage and access to water information from the State Engineer's office. The data base contains a main table, three ancillary tables, and five lookup tables, as well as a menu-driven system for entering, updating, and reporting on the data. This reference guide outlines the general functions of the system and provides a brief description of data tables and data-entry screens.

  11. Athermal laser design.

    PubMed

    Bovington, Jock; Srinivasan, Sudharsanan; Bowers, John E

    2014-08-11

    This paper discusses circuit based and waveguide based athermalization schemes and provides some design examples of athermalized lasers utilizing fully integrated athermal components as an alternative to power hungry thermo-electric controllers (TECs), off-chip wavelength lockers or monitors with lookup tables for tunable lasers. This class of solutions is important for uncooled transmitters on silicon.

  12. Design of a magnetic-tunnel-junction-oriented nonvolatile lookup table circuit with write-operation-minimized data shifting

    NASA Astrophysics Data System (ADS)

    Suzuki, Daisuke; Hanyu, Takahiro

    2018-04-01

    A magnetic-tunnel-junction (MTJ)-oriented nonvolatile lookup table (LUT) circuit, in which a low-power data-shift function is performed by minimizing the number of write operations in MTJ devices, is proposed. The permutation of the configuration memory cell for read/write access is performed as opposed to conventional direct data shifting to minimize the number of write operations, which results in significant write energy savings in the data-shift function. Moreover, the hardware cost of the proposed LUT circuit is small since the selector is shared between read access and write access. In fact, the power consumption in the data-shift function and the transistor count are reduced by 82% and 52%, respectively, compared with those in a conventional static random-access memory-based implementation using a 90 nm CMOS technology.
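
    The central trick, permuting read/write addresses instead of physically rewriting each cell, is the same one a circular buffer uses. A behavioral sketch (not a circuit model) of a shift that costs zero cell writes:

    ```python
    class ShiftLUT:
        """Behavioral sketch of a data-shift LUT that moves a pointer, not data.

        A naive shift rewrites every cell (N write operations); permuting the
        address instead costs zero cell writes per shift.
        """
        def __init__(self, contents):
            self.cells = list(contents)  # stands in for MTJ configuration cells
            self.head = 0                # permutation state of the selector
            self.cell_writes = 0

        def shift(self):
            self.head = (self.head + 1) % len(self.cells)  # no cell writes

        def read(self, addr):
            return self.cells[(self.head + addr) % len(self.cells)]

        def write(self, addr, value):
            self.cells[(self.head + addr) % len(self.cells)] = value
            self.cell_writes += 1

    lut = ShiftLUT([0, 1, 1, 0])
    lut.shift()                      # logical shift with zero write operations
    print([lut.read(a) for a in range(4)], lut.cell_writes)  # [1, 1, 0, 0] 0
    ```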

  13. Efficient generation of 3D hologram for American Sign Language using look-up table

    NASA Astrophysics Data System (ADS)

    Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo

    2010-02-01

    American Sign Language (ASL) is one of the languages that gives the greatest help for communication of hearing-impaired people. Current 2-D broadcasting and 2-D movies use ASL to provide information, help viewers understand the situation of a scene, and translate foreign languages. Because of this usefulness, ASL will not disappear from future three-dimensional (3-D) broadcasting or 3-D movies. Several approaches for the generation of CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method; however, these methods either require much computation time or a huge memory size for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of the CGH patterns of 3-D objects with a dramatically reduced LUT size and without loss of computational speed. We therefore propose a method to efficiently generate holographic ASL in holographic 3DTV or 3-D movies using the look-up table method. The proposed method consists largely of five steps: construction of the LUT for the ASL images, extraction of characters from scripts or the scene, retrieval of the fringe patterns for those characters from the ASL LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the ASL, and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.

  14. Ways to estimate speeds for the purposes of air quality conformity analyses.

    DOT National Transportation Integrated Search

    2002-01-01

    A speed post-processor refers to equations or lookup tables that can determine vehicle speeds on a particular roadway link using only the limited information available in a long-range planning model. An estimated link speed is usually based on volume...
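
    A common instance of such a speed post-processor is the Bureau of Public Roads (BPR) volume-delay function, which needs only a link's free-flow speed and volume-to-capacity ratio. The coefficients 0.15 and 4 are the classic BPR defaults, shown as an illustration rather than as this report's specific method:

    ```python
    def bpr_link_speed(free_flow_speed, volume, capacity, a=0.15, b=4.0):
        """Congested link speed from a planning model's volume and capacity."""
        travel_time_factor = 1.0 + a * (volume / capacity) ** b
        return free_flow_speed / travel_time_factor

    # A 60 mph free-flow link loaded to v/c = 0.9 drops to about 54.6 mph.
    print(bpr_link_speed(60.0, volume=1800, capacity=2000))
    ```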

  15. Efficient Lookup Table-Based Adaptive Baseband Predistortion Architecture for Memoryless Nonlinearity

    NASA Astrophysics Data System (ADS)

    Ba, Seydou N.; Waheed, Khurram; Zhou, G. Tong

    2010-12-01

    Digital predistortion is an effective means to compensate for the nonlinear effects of a memoryless system. In the case of a cellular transmitter, a digital baseband predistorter can mitigate the undesirable nonlinear effects along the signal chain, particularly the nonlinear impairments in the radiofrequency (RF) amplifiers. To be practically feasible, the implementation complexity of the predistorter must be minimized so that it becomes a cost-effective solution for the resource-limited wireless handset. This paper proposes optimizations that facilitate the design of a low-cost high-performance adaptive digital baseband predistorter for memoryless systems. A comparative performance analysis of the amplitude and power lookup table (LUT) indexing schemes is presented. An optimized low-complexity amplitude approximation and its hardware synthesis results are also studied. An efficient LUT predistorter training algorithm that combines the fast convergence speed of the normalized least mean squares (NLMS) algorithm with a small hardware footprint is proposed. Results of fixed-point simulations based on the measured nonlinear characteristics of an RF amplifier are presented.
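
    The paper's combination of amplitude-indexed LUT entries and NLMS training can be sketched behaviorally. The cubic amplifier model, bin count, and step size below are illustrative assumptions, not the paper's fixed-point design:

    ```python
    import numpy as np

    N_ENTRIES = 64
    lut = np.ones(N_ENTRIES, dtype=complex)  # one complex gain per amplitude bin
    mu = 0.5                                 # NLMS step size (assumed)

    def pa(x):
        """Memoryless PA model with mild cubic compression (illustrative)."""
        return x - 0.15 * x * np.abs(x) ** 2

    def bin_index(x):
        """Amplitude indexing of the LUT; |x| assumed in [0, 1)."""
        return min(int(abs(x) * N_ENTRIES), N_ENTRIES - 1)

    rng = np.random.default_rng(0)
    for _ in range(20000):
        x = complex(rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5))
        k = bin_index(x)
        z = lut[k] * x                       # predistorted sample
        e = x - pa(z)                        # error vs. desired linear response
        lut[k] += mu * e * np.conj(x) / (abs(x) ** 2 + 1e-9)  # NLMS update

    x = 0.6 + 0.0j
    print(abs(pa(lut[bin_index(x)] * x) - x))  # small residual after training
    ```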

  16. SU-E-T-275: Dose Verification in a Small Animal Image-Guided Radiation Therapy X-Ray Machine: A Dose Comparison between TG-61 Based Look-Up Table and MOSFET Method for Various Collimator Sizes.

    PubMed

    Rodrigues, A; Nguyen, G; Li, Y; Roy Choudhury, K; Kirsch, D; Das, S; Yoshizumi, T

    2012-06-01

    To verify the accuracy of TG-61 based dosimetry with MOSFET technology using a tissue-equivalent mouse phantom. The accuracy of mouse dose from a TG-61 based look-up table was verified with MOSFET measurements. The look-up table followed a TG-61 based commissioning and used a solid water block and radiochromic film. A tissue-equivalent mouse phantom (2 cm diameter, 8 cm length) was used for the MOSFET method. Detectors were placed in the phantom at the head and center of the body. MOSFETs were calibrated in air with an ion chamber, and an f-factor was applied to derive the dose to tissue. In CBCT mode, the phantom was positioned such that the system isocenter coincided with the center of the MOSFET, with the active volume perpendicular to the beam. The absorbed dose was measured three times for each of seven collimators. The exposure parameters were 225 kVp, 13 mA, and an exposure time of 20 s. For the 10 mm, 15 mm, and 20 mm circular collimators, the dose measured in the phantom was 4.3%, 2.7%, and 6% lower than the TG-61 based measurements, respectively. For the 10 × 10 mm, 20 × 20 mm, and 40 × 40 mm collimators, the dose differences were 4.7%, 7.7%, and 2.9%, respectively. The MOSFET data were systematically lower than the commissioning data. The dose difference is due to the increased scatter radiation in the solid water block relative to the dimensions of the mouse phantom, leading to an overestimation of the actual dose by the solid water block. The MOSFET method with a tissue-equivalent mouse phantom provides less labor-intensive, geometry-specific dosimetry, with dose agreement to within ±2.7% in the best case. © 2012 American Association of Physicists in Medicine.

  17. Reduced-Order Model for Leakage Through an Open Wellbore from the Reservoir due to Carbon Dioxide Injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Lehua; Oldenburg, Curtis M.

    Potential CO2 leakage through existing open wellbores is one of the most significant hazards that need to be addressed in geologic carbon sequestration (GCS) projects. In the framework of the National Risk Assessment Partnership (NRAP) which requires fast computations for uncertainty analysis, rigorous simulation of the coupled wellbore-reservoir system is not practical. We have developed a 7,200-point look-up table reduced-order model (ROM) for estimating the potential leakage rate up open wellbores in response to CO2 injection nearby. The ROM is based on coupled simulations using T2Well/ECO2H which was run repeatedly for representative conditions relevant to NRAP to create a look-up table response-surface ROM. The ROM applies to a wellbore that fully penetrates a 20-m thick reservoir that is used for CO2 storage. The radially symmetric reservoir is assumed to have initially uniform pressure, temperature, gas saturation, and brine salinity, and it is assumed these conditions are held constant at the far-field boundary (100 m away from the wellbore). In such a system, the leakage can quickly reach quasi-steady state. The ROM table can be used to estimate both the free-phase CO2 and brine leakage rates through an open well as a function of wellbore and reservoir conditions. Results show that injection-induced pressure and reservoir gas saturation play important roles in controlling leakage. Caution must be used in the application of this ROM because well leakage is formally transient and the ROM lookup table was populated using quasi-steady simulation output after 1000 time steps which may correspond to different physical times for the various parameter combinations of the coupled wellbore-reservoir system.

  18. County-based estimates of nitrogen and phosphorus content of animal manure in the United States for 1982, 1987, and 1992

    USGS Publications Warehouse

    Puckett, Larry; Hitt, Kerie; Alexander, Richard

    1998-01-01

    names that correspond to the FIPS codes. 2. Tabular component - Nine tab-delimited ASCII lookup tables of animal counts and nutrient estimates organized by 5-digit state/county FIPS (Federal Information Processing Standards) code. Another table lists the county names that correspond to the FIPS codes. The use of trade names is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey.

  19. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. The registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image whose color is transferred to the fusion result. A color lookup table based on statistical properties of the images is proposed to solve the computational complexity problem in color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color perception to human eyes and can highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
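
    Statistics-based color transfer of this kind typically matches the mean and standard deviation of each channel to the reference (Reinhard-style transfer). Because the mapping depends only on a pixel's value, it collapses into a per-scene 256-entry lookup table, which is the complexity reduction the paper exploits. A single-channel sketch with synthetic images:

    ```python
    import numpy as np

    def build_lut(src, ref):
        """256-entry LUT matching src's mean/std to the reference channel."""
        levels = np.arange(256, dtype=float)
        lut = (levels - src.mean()) * (ref.std() / (src.std() + 1e-9)) + ref.mean()
        return np.clip(lut, 0, 255).astype(np.uint8)

    rng = np.random.default_rng(1)
    fused = rng.integers(0, 256, (64, 64)).astype(np.uint8)   # stand-in fused band
    daytime = rng.normal(120, 30, (64, 64)).clip(0, 255)      # stand-in reference

    lut = build_lut(fused.astype(float), daytime)
    colorized = lut[fused]                                    # O(1) per pixel
    print(colorized.mean(), colorized.std())  # approx. the reference statistics
    ```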

  20. Investigating a method for estimating direct nitrous oxide emissions from grazed pasture soils in New Zealand using NZ-DNDC.

    PubMed

    Giltrap, Donna L; Ausseil, Anne-Gaelle E; Thakur, Kailash P; Sutherland, M Anne

    2013-11-01

    In this study, we developed emission factor (EF) look-up tables for calculating the direct nitrous oxide (N2O) emissions from grazed pasture soils in New Zealand. Look-up tables of long-term average direct emission factors (and their associated uncertainties) were generated using multiple simulations of the NZ-DNDC model over a representative range of major soil, climate and management conditions occurring in New Zealand using 20 years of climate data. These EFs were then combined with national activity data maps to estimate direct N2O emissions from grazed pasture in New Zealand using 2010 activity data. The total direct N2O emissions using look-up tables were 12.7±12.1 Gg N2O-N (equivalent to using a national average EF of 0.70±0.67%). This agreed with the amount calculated using the New Zealand specific EFs (95% confidence interval 7.7-23.1 Gg N2O-N), although the relative uncertainty increased. The high uncertainties in the look-up table EFs were primarily due to the high uncertainty of the soil parameters within the selected soil categories. Uncertainty analyses revealed that the uncertainty in soil parameters contributed much more to the uncertainty in N2O emissions than the inter-annual weather variability. The effect of changes to fertiliser applications was also examined, and it was found that, for fertiliser application rates of 0-50 kg N/ha for sheep and beef and 60-240 kg N/ha for dairy, the modelled EF was within ±10% of the value simulated using annual fertiliser application rates of 15 kg N/ha and 140 kg N/ha, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    NASA Astrophysics Data System (ADS)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and the table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) bitstreams are processed in real time using the developed BsPU at a core clock speed under 250 MHz.
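
    H.264/AVC inserts an emulation prevention byte (0x03) after each 0x00 0x00 pair in the raw byte stream, and removing it is a linear scan that the BsPU folds into its bitstream-access instruction. A plain software version, i.e., the kind of baseline the paper accelerates:

    ```python
    def remove_epb(nal: bytes) -> bytes:
        """Strip H.264 emulation prevention bytes (0x00 0x00 0x03 -> 0x00 0x00)."""
        out = bytearray()
        zeros = 0
        for b in nal:
            if zeros >= 2 and b == 0x03:
                zeros = 0            # drop the emulation prevention byte
                continue
            out.append(b)
            zeros = zeros + 1 if b == 0x00 else 0
        return bytes(out)

    print(remove_epb(bytes([0x00, 0x00, 0x03, 0x01])).hex())  # '000001'
    ```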

  2. An On-Chip Learning Neuromorphic Autoencoder With Current-Mode Transposable Memory Read and Virtual Lookup Table.

    PubMed

    Cho, Hwasuk; Son, Hyunwoo; Seong, Kihwan; Kim, Byungsub; Park, Hong-June; Sim, Jae-Yoon

    2018-02-01

    This paper presents an IC implementation of an on-chip learning neuromorphic autoencoder unit in the form of a rate-based spiking neural network. With a current-mode signaling scheme embedded in a 500 × 500 6b SRAM-based memory, the proposed architecture achieves simultaneous processing of multiplications and accumulations. In addition, a transposable memory read for both forward and backward propagations and a virtual lookup table are also proposed to perform unsupervised learning of a restricted Boltzmann machine. The IC is fabricated using a 28-nm CMOS process and is verified in a three-layer encoder-decoder network for training and recovery of two-dimensional pixel images. With a dataset of 50 digits, the IC shows a normalized root mean square error of 0.078. Measured energy efficiencies are 4.46 pJ per synaptic operation for inference and 19.26 pJ per synaptic weight update for learning. The learning performance is also estimated by simulation for the case where the proposed hardware architecture is extended to batch training on the 60 000-image MNIST dataset.

  3. X-window-based 2K display workstation

    NASA Astrophysics Data System (ADS)

    Weinberg, Wolfram S.; Hayrapetian, Alek S.; Cho, Paul S.; Valentino, Daniel J.; Taira, Ricky K.; Huang, H. K.

    1991-07-01

    A high-definition, high-performance display station for reading and review of digital radiological images is introduced. The station is based on a Sun SPARC Station 4 and employs X window system for display and manipulation of images. A mouse-operated graphic user interface is implemented utilizing Motif-style tools. The system supports up to four MegaScan gray-scale 2560 X 2048 monitors. A special configuration of frame and video buffer yields a data transfer of 50 M pixels/s. A magnetic disk array supplies a storage capacity of 2 GB with a data transfer rate of 4-6 MB/s. The system has access to the central archive through an ultrahigh-speed fiber-optic network and patient studies are automatically transferred to the local disk. The available image processing functions include change of lookup table, zoom and pan, and cine. Future enhancements will provide for manual contour tracing, length, area, and density measurements, text and graphic overlay, as well as composition of selected images. Additional preprocessing procedures under development will optimize the initial lookup table and adjust the images to a standard orientation.

  4. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    PubMed

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.

  5. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Treatment 8 Hours or Less

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  6. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Treatment Longer than 8 Hours

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  7. ICL: The Image Composition Language

    NASA Technical Reports Server (NTRS)

    Foley, James D.; Kim, Won Chul

    1986-01-01

    The Image Composition Language (ICL) provides a convenient way for programmers of interactive graphics application programs to define how the video look-up table of a raster display system is to be loaded. The ICL allows one or several images stored in the frame buffer to be combined in a variety of ways. The ICL treats these images as variables, and provides arithmetic, relational, and conditional operators to combine the images, scalar variables, and constants in image composition expressions. The objective of ICL is to provide programmers with a simple way to compose images, to relieve the tedium usually associated with loading the video look-up table to obtain desired results.

  8. A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen

    In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alhroob, M.; Boyd, G.; Hasib, A.

    Precision ultrasonic measurements in binary gas systems provide continuous real-time monitoring of mixture composition and flow. Using custom micro-controller-based electronics, we have developed an ultrasonic instrument, with numerous potential applications, capable of making continuous high-precision sound velocity measurements. The instrument measures sound transit times along two opposite directions aligned parallel to - or obliquely crossing - the gas flow. The difference between the two measured times yields the gas flow rate while their average gives the sound velocity, which can be compared with a sound velocity vs. molar composition look-up table for the binary mixture at a given temperature and pressure. The look-up table may be generated from prior measurements in known mixtures of the two components, from theoretical calculations, or from a combination of the two. We describe the instrument and its performance within numerous applications in the ATLAS experiment at the CERN Large Hadron Collider (LHC). The instrument can be of interest in other areas where continuous in-situ binary gas analysis and flowmetry are required. (authors)

  10. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Horizontal Stacks, 8 Hours or Less

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  11. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, No Stack, More than 8 Hours

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  12. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, No Stack, 8 Hours or Less

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  13. Ice Cloud Properties in Ice-Over-Water Cloud Systems Using TRMM VIRS and TMI Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Huang, Jianping; Lin, Bing; Yi, Yuhong; Arduini, Robert F.; Fan, Tai-Fang; Ayers, J. Kirk; Mace, Gerald G.

    2007-01-01

    A multi-layered cloud retrieval system (MCRS) is updated and used to estimate ice water path in maritime ice-over-water clouds using Visible and Infrared Scanner (VIRS) and TRMM Microwave Imager (TMI) measurements from the Tropical Rainfall Measuring Mission spacecraft between January and August 1998. Lookup tables of top-of-atmosphere 0.65-μm reflectance are developed for ice-over-water cloud systems using radiative transfer calculations with various combinations of ice-over-water cloud layers. The liquid and ice water paths, LWP and IWP, respectively, are determined with the MCRS using these lookup tables with a combination of microwave (MW), visible (VIS), and infrared (IR) data. LWP, determined directly from the TMI MW data, is used to define the lower-level cloud properties to select the proper lookup table. The properties of the upper-level ice clouds, such as optical depth and effective size, are then derived using the Visible Infrared Solar-infrared Split-window Technique (VISST), which matches the VIRS IR, 3.9-μm, and VIS data to the multilayer-cloud lookup table reflectances and a set of emittance parameterizations. Initial comparisons with surface-based radar retrievals suggest that this enhanced MCRS can significantly improve the accuracy and decrease the IWP in overlapped clouds by 42% and 13% compared to using the single-layer VISST and an earlier simplified MW-VIS-IR (MVI) differencing method, respectively, for ice-over-water cloud systems. The tropical distribution of ice-over-water clouds is the same as derived earlier from combined TMI and VIRS data, but the new values of IWP and optical depth are slightly larger than the older MVI values, and exceed those of single-layered clouds by 7% and 11%, respectively. The mean IWP from the MCRS is 8-14% greater than that retrieved from radar retrievals of overlapped clouds over two surface sites and the standard deviations of the differences are similar to those for single-layered clouds. Examples of a method for applying the MCRS over land without microwave data yield similar differences with the surface retrievals. By combining the MCRS with other techniques that focus primarily on optically thin cirrus over low water clouds, it will be possible to more fully assess the IWP in all conditions over ocean except for precipitating systems.

  14. A Flight Control Approach for Small Reentry Vehicles

    NASA Technical Reports Server (NTRS)

    Bevacqoa, Tim; Adams, Tony; Zhu. J. Jim; Rao, P. Prabhakara

    2004-01-01

    Flight control of small crew return vehicles during atmospheric reentry will be an important technology in any human space flight mission undertaken in the future. The control system presented in this paper is applicable to small crew return vehicles in which reaction control system (RCS) thrusters are the only actuators available for attitude control. The control system consists of two modules: (i) the attitude controller using the trajectory linearization control (TLC) technique, and (ii) the reaction control system (RCS) control allocation module using a dynamic table-lookup technique. This paper describes the design and implementation of the TLC attitude control and the dynamic table-lookup RCS control allocation for nominal flight along with design verification test results.

  15. Load balancing strategy and its lookup-table enhancement in deterministic space delay/disruption tolerant networks

    NASA Astrophysics Data System (ADS)

    Huang, Jinhui; Liu, Wenxiang; Su, Yingxue; Wang, Feixue

    2018-02-01

    Space networks, in which connectivity is deterministic and intermittent, can be modeled by delay/disruption tolerant networks. In space delay/disruption tolerant networks, a packet is usually transmitted from the source node to the destination node indirectly via a series of relay nodes. If any one of the nodes in the path becomes congested, the packet will be dropped due to buffer overflow. One of the main reasons behind congestion is an unbalanced distribution of network traffic. We propose a load balancing strategy which takes the congestion status of both the local node and relay nodes into account. The congestion status, together with the end-to-end delay, is used in route selection. A lookup-table enhancement is also proposed. The off-line computation and the on-line adjustment are combined to make a more precise estimate of the end-to-end delay while at the same time reducing the onboard computation. Simulation results show that the proposed strategy helps to distribute network traffic more evenly and therefore reduces the packet drop ratio. In addition, the average delay is also decreased in most cases. The lookup-table enhancement provides a compromise between the need for better communication performance and the desire for less onboard computation.

  16. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andrew; Lawrence, Earl

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.

  18. All-optical 10Gb/s ternary-CAM cell for routing look-up table applications.

    PubMed

    Mourgias-Alexandris, George; Vagionas, Christos; Tsakyridis, Apostolos; Maniotis, Pavlos; Pleros, Nikos

    2018-03-19

    We experimentally demonstrate the first all-optical Ternary-Content Addressable Memory (T-CAM) cell that operates at 10Gb/s and comprises two monolithically integrated InP Flip-Flops (FF) and a SOA-MZI optical XOR gate. The two FFs are responsible for storing the data bit and the ternary state 'X', respectively, with the XOR gate used for comparing the stored FF-data and the search bit. The experimental results reveal error-free operation at 10Gb/s for both Write and Ternary Content Addressing of the T-CAM cell, indicating that the proposed optical T-CAM cell could in principle lead to all-optical T-CAM-based Address Look-up memory architectures for high-end routing applications.

  19. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Attached Vertical Stacks, More than 8 hours, 10 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  20. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Attached Vertical Stacks , 8 Hours or Less, 10 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  1. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Attached Vertical Stacks, 8 Hours or Less, 50 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  2. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Attached Vertical Stacks , 8 Hours or Less, 25 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  3. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Attached Vertical Stacks, More than 8 hours, 25 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  4. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Attached Vertical Stacks, More than 8 hours, 50 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  5. Simulation model of a twin-tail, high performance airplane

    NASA Technical Reports Server (NTRS)

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner loop augmentation in the up and away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.

  6. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Open Area Vertical Stacks, More than 8 Hours, 25 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  7. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Open Area Vertical Stacks , 8 Hours or Less, 5 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  8. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Open Area Vertical Stacks, More than 8 Hours, 5 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  9. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Open Area Vertical Stacks, 8 Hours or Less, 10 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  10. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Open Area Vertical Stacks, 8 Hours or Less, 50 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  11. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Open Area Vertical Stacks, 8 Hours or Less, 25 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  12. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Open Area Vertical Stacks, More than 8 Hours, 50 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  13. Methyl Bromide Buffer Zone Distances for Commodity and Structural Fumigation: Active Aeration, Open Area Vertical Stacks, More than 8 Hours, 10 Foot Stack Height

    EPA Pesticide Factsheets

    This document contains buffer zone tables required by certain methyl bromide commodity fumigant product labels that refer to Buffer Zone Lookup Tables located at epa.gov/pesticide-registration/mbcommoditybuffer on the label.

  14. Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs

    PubMed Central

    Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel

    2012-01-01

    Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023

  15. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods.

    PubMed

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-02-20

    In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and the novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into the N-point redundancy map according to the number of the adjacent object points having the same 3D value. Based on this redundancy map, N-point principal fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, object points to be involved in calculation of the CGH pattern can be dramatically reduced and, as a result, an increase of computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.

  16. Portable real-time color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2008-03-01

    We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual band realtime night vision systems. One system provides co-aligned visual and near-infrared bands of two image intensifiers, the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a realtime lookup table transform. The resulting colorised video streams can be displayed in realtime on head mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.

  17. New realisation of Preisach model using adaptive polynomial approximation

    NASA Astrophysics Data System (ADS)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasing accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model of hysteresis and can be represented by an infinite but countable set of first-order reversal curves (FORCs). The usage of look-up tables is one way to realise the CPM in practice; the data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by least-squares approximation or an adaptive identification algorithm, opening the possibility of accurately tracking hysteresis model parameters.
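
    The substitution the article proposes, a fitted polynomial in place of a sampled table, can be seen in a few lines. Here an arbitrary saturation-like curve stands in for one FORC; the polynomial degree and sample count are illustrative choices:

    ```python
    import numpy as np

    h = np.linspace(-1.0, 1.0, 201)    # field input sampled for the table
    m = np.tanh(3.0 * h)               # stand-in for one stored reversal curve

    coeffs = np.polyfit(h, m, deg=9)   # least-squares polynomial fit
    m_poly = np.polyval(coeffs, h)

    print("max fit error:", np.max(np.abs(m_poly - m)))
    print(f"storage: {len(m)} table samples vs {len(coeffs)} coefficients")
    ```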

  18. Quantifying anti-gravity torques in the design of a powered exoskeleton.

    PubMed

    Ragonesi, Daniel; Agrawal, Sunil; Sample, Whitney; Rahman, Tariq

    2011-01-01

    Designing an upper extremity exoskeleton for people with arm weakness requires knowledge of the passive and active residual force capabilities of users. This paper experimentally measures the passive gravitational torques of 3 groups of subjects: able-bodied adults, able bodied children, and children with neurological disabilities. The experiment involves moving the arm to various positions in the sagittal plane and measuring the gravitational force at the wrist. This force is then converted to static gravitational torques at the elbow and shoulder. Data are compared between look-up table data based on anthropometry and empirical data. Results show that the look-up torques deviate from experimentally measured torques as the arm reaches up and down. This experiment informs designers of Upper Limb orthoses on the contribution of passive human joint torques.

  19. Estimating effective dose to pediatric patients undergoing interventional radiology procedures using anthropomorphic phantoms and MOSFET dosimeters.

    PubMed

    Miksys, Nelson; Gordon, Christopher L; Thomas, Karen; Connolly, Bairbre L

    2010-05-01

    The purpose of this study was to estimate the effective doses received by pediatric patients during interventional radiology procedures and to present those doses in "look-up tables" standardized according to minute of fluoroscopy and frame of digital subtraction angiography (DSA). Organ doses were measured with metal oxide semiconductor field effect transistor (MOSFET) dosimeters inserted within three anthropomorphic phantoms, representing children at ages 1, 5, and 10 years, at locations corresponding to radiosensitive organs. The phantoms were exposed to mock interventional radiology procedures of the head, chest, and abdomen using posteroanterior and lateral geometries, varying magnification, and fluoroscopy or DSA exposures. Effective doses were calculated from organ doses recorded by the MOSFET dosimeters and are presented in look-up tables according to the different age groups. The largest effective dose burden for fluoroscopy was recorded for posteroanterior and lateral abdominal procedures (0.2-1.1 mSv/min of fluoroscopy), whereas procedures of the head resulted in the lowest effective doses (0.02-0.08 mSv/min of fluoroscopy). DSA exposures of the abdomen imparted higher doses (0.02-0.07 mSv/DSA frame) than did those involving the head and chest. Patient doses during interventional procedures vary significantly depending on the type of procedure. User-friendly look-up tables may provide a helpful tool for health care providers in estimating effective doses for an individual procedure.

  20. Integrated data lookup and replication scheme in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Nahrstedt, Klara

    2001-11-01

    Accessing remote data is a challenging task in mobile ad hoc networks. Two problems have to be solved: (1) how to learn about available data in the network; and (2) how to access desired data even when the original copy of the data is unreachable. In this paper, we develop an integrated data lookup and replication scheme to solve these problems. In our scheme, a group of mobile nodes collectively host a set of data to improve data accessibility for all members of the group. They exchange data availability information by broadcasting advertising (ad) messages to the group using an adaptive sending rate policy. The ad messages are used by other nodes to derive a local data lookup table, and to reduce data redundancy within a connected group. Our data replication scheme predicts group partitioning based on each node's current location and movement patterns, and replicates data to other partitions before partitioning occurs. Our simulations show that data availability information can quickly propagate throughout the network, and that the successful data access ratio of each node is significantly improved.

  1. Soft-output decoding algorithms in iterative decoding of turbo codes

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.

    1996-01-01

    In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performances of the two algorithms are compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables are proposed, together with two further approximations (linear and threshold) that eliminate the need for lookup tables at the cost of a very small penalty.
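
    The lookup table in such decoders typically stores the correction term of the Jacobian logarithm, max*(a, b) = max(a, b) + ln(1 + e^{-|a-b|}), and the linear approximation replaces that table with a clipped ramp. A sketch of the three variants, with illustrative quantization and ramp constants rather than the article's circuit values:

    ```python
    import math

    def max_star_exact(a, b):
        """Jacobian logarithm: ln(e^a + e^b)."""
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    # Small correction-term look-up table over |a-b|, quantized in steps of 0.5.
    STEP = 0.5
    CORR = [math.log1p(math.exp(-i * STEP)) for i in range(8)]

    def max_star_lut(a, b):
        """Table-based variant: round |a-b| to the nearest stored point."""
        idx = min(int(abs(a - b) / STEP + 0.5), len(CORR) - 1)
        return max(a, b) + CORR[idx]

    def max_star_linear(a, b, c0=0.7, slope=0.35):
        """Clipped-ramp ('linear') approximation of the correction term."""
        return max(a, b) + max(c0 - slope * abs(a - b), 0.0)

    a, b = 1.2, 0.4
    print(max_star_exact(a, b), max_star_lut(a, b), max_star_linear(a, b))
    ```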

  2. HIA: a genome mapper using hybrid index-based sequence alignment.

    PubMed

    Choi, Jongpill; Park, Kiejung; Cho, Seong Beom; Chung, Myungguen

    2015-01-01

    A number of alignment tools have been developed to align sequencing reads to the human reference genome. However, the scale of information from next-generation sequencing (NGS) experiments is increasing rapidly: recent studies based on NGS technology have routinely produced exome or whole-genome sequences from several hundreds or thousands of samples. To accommodate the increasing need for analyzing very large NGS data sets, it is necessary to develop faster, more sensitive, and more accurate mapping tools. HIA uses two indices, a hash table index and a suffix array index. The hash table performs direct lookup of a q-gram, and the suffix array performs very fast lookup of variable-length strings by exploiting binary search. We observed that combining a hash table and a suffix array (hybrid index) is much faster than the suffix array method for finding a substring in the reference sequence. Here, we define the matching region (MR) as the longest common substring between a reference and a read, and the candidate alignment regions (CARs) as a list of MRs that are close to each other. The hybrid index is used to find CARs between a reference and a read. We found that aligning only the unmatched regions in a CAR is much faster than aligning the whole CAR. In benchmark analysis, HIA outperformed the other aligners in mapping speed without significant loss of mapping accuracy. Our experiments show that the hybrid of hash table and suffix array is useful in terms of speed for mapping NGS reads to the human reference genome sequence. In conclusion, our tool is appropriate for aligning the massive data sets generated by NGS sequencing.
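
    A toy Python sketch of the hybrid-index idea: a hash table gives O(1) direct lookup of fixed-length q-grams, while a suffix array supports binary search for variable-length strings. This is a generic illustration, not HIA's implementation; the naive suffix-array construction is for clarity only.

      from collections import defaultdict

      def build_indices(ref, q=4):
          # Hash table: q-gram -> list of start positions (direct lookup).
          qgram_index = defaultdict(list)
          for i in range(len(ref) - q + 1):
              qgram_index[ref[i:i + q]].append(i)
          # Suffix array: suffix start positions in lexicographic order.
          # (Naive O(n^2 log n) construction -- illustration only.)
          sa = sorted(range(len(ref)), key=lambda i: ref[i:])
          return qgram_index, sa

      def sa_search(ref, sa, pattern):
          # Binary search for a suffix that starts with `pattern`.
          lo, hi = 0, len(sa)
          while lo < hi:
              mid = (lo + hi) // 2
              if ref[sa[mid]:sa[mid] + len(pattern)] < pattern:
                  lo = mid + 1
              else:
                  hi = mid
          if lo < len(sa) and ref[sa[lo]:].startswith(pattern):
              return sa[lo]
          return -1

      ref = "ACGTACGGACGT"
      qgrams, sa = build_indices(ref)
      print(qgrams["ACGT"], sa_search(ref, sa, "GGACG"))  # [0, 8] 6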

  3. InfoTrac TFD: a microcomputer implementation of the Transcription Factor Database TFD with a graphical user interface.

    PubMed

    Hoeck, W G

    1994-06-01

    InfoTrac TFD provides a graphical user interface (GUI) for viewing and manipulating datasets in the Transcription Factor Database, TFD. The interface was developed in Filemaker Pro 2.0 by Claris Corporation, which provides cross-platform compatibility between Apple Macintosh computers running System 7.0 and higher and IBM-compatibles running Microsoft Windows 3.0 and higher. TFD ASCII tables were formatted to fit data into several custom data tables using Add/Strip, a shareware utility, and Filemaker Pro's lookup feature. The lookup feature was also used to link the TFD data tables within a flat-file database management system. The 'Navigator', consisting of several pop-up menus listing transcription factor abbreviations, facilitates the search for transcription factor entries. Data are presented onscreen in several layouts that can be further customized by the user. InfoTrac TFD makes the transcription factor database accessible to a much wider community of scientists by making it available on two popular microcomputer platforms.

  4. Oven controlled N++ [1 0 0] length-extensional mode silicon resonator with frequency stability of 1 ppm over industrial temperature range

    NASA Astrophysics Data System (ADS)

    You, Weilong; Pei, Binbin; Sun, Ke; Zhang, Lei; Yang, Heng; Li, Xinxin

    2017-10-01

    This paper presents an oven controlled N++ [1 0 0] length-extensional mode silicon resonator, with a lookup-table based control algorithm. The temperature coefficient of resonant frequency (TCF) of the N++ doped resonator is nonlinear, and there is a turnover temperature point at which the TCF is equal to zero. The resonator is maintained at the turnover point by Joule heating; this temperature is a little higher than the upper limit of the industrial temperature range. It is demonstrated that the control algorithm based on the thermoresistor on the substrate and the lookup table for heating voltage versus chip temperature is sufficiently accurate to achieve a frequency stability of  ±0.5 ppm over the industrial temperature range. Because only two leads are required for electrical heating and piezoresistive sensing, the power required for heating of this resonator can be potentially lower than that of the oscillators with closed-loop oven control algorithm. It is also shown that the phase noise can be suppressed at the turnover temperature because of the very low value of the TCF, which justifies the usage of the heating voltage as the excitation voltage of the Wheatstone half-bridge.
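
    A minimal sketch of the lookup-table control loop described above: the substrate thermoresistor's temperature reading is mapped to a heating voltage by interpolating between stored calibration points. All table values here are hypothetical:

      # Calibration table: chip temperature (°C) -> heating voltage (V), hypothetical.
      TEMP_C = [-40.0, -20.0, 0.0, 20.0, 40.0, 60.0, 85.0]
      VOLT_V = [ 3.20,  3.00, 2.75, 2.45, 2.05, 1.55, 0.70]

      def heating_voltage(temp_c):
          # Clamp outside the table, linearly interpolate inside it.
          if temp_c <= TEMP_C[0]:
              return VOLT_V[0]
          if temp_c >= TEMP_C[-1]:
              return VOLT_V[-1]
          for i in range(1, len(TEMP_C)):
              if temp_c <= TEMP_C[i]:
                  f = (temp_c - TEMP_C[i - 1]) / (TEMP_C[i] - TEMP_C[i - 1])
                  return VOLT_V[i - 1] + f * (VOLT_V[i] - VOLT_V[i - 1])

      print(heating_voltage(25.0))  # 2.35 V, a quarter of the way from 20 to 40 °C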

  5. LMDS Lightweight Modular Display System.

    DTIC Science & Technology

    1982-02-16

    Excerpted fragments: the display design is based on standard functions, so that a particular display function can be produced in the most economical fashion; ETHERNET is anticipated at a low level of system interface, internal to the system, without eliminating the NTDS interface; and the architecture of the unit's input circuitry is based on a video table look-up ROM.

  6. Regional mapping of aerosol population and surface albedo of Titan by the massive inversion of the Cassini/VIMS dataset

    NASA Astrophysics Data System (ADS)

    Rodriguez, S.; Cornet, T.; Maltagliati, L.; Appéré, T.; Le Mouelic, S.; Sotin, C.; Barnes, J. W.; Brown, R. H.

    2017-12-01

    Mapping Titan's surface albedo is a necessary step to give reliable constraints on its composition. However, even after the end of the Cassini mission, surface albedo maps of Titan, especially over large regions, are still very rare, the surface windows being strongly affected by atmospheric contributions (absorption, scattering). A full radiative transfer model is an essential tool to remove these effects, but too time-consuming to apply systematically to the 50,000 hyperspectral images VIMS has acquired since the beginning of the mission. We developed a massive inversion of VIMS data based on lookup tables computed from a state-of-the-art radiative transfer model in pseudo-spherical geometry, updated with new aerosol properties coming from our analysis of observations recently acquired by VIMS (solar occultations and emission phase curves). Once the physical properties of gases, aerosols and surface are fixed, the lookup tables are built for the remaining free parameters: the incidence, emergence and azimuth angles, given by navigation; and two products (the aerosol opacity and the surface albedo at all wavelengths). The lookup table grid was carefully selected after thorough testing. The data inversion on these pre-computed spectra (suitably interpolated) is more than 1000 times faster than calling the full radiative transfer model at each minimization step. We present here the results from selected flybys. We invert mosaics composed of pairs of flybys observing the same area at two different times. The composite albedo maps do not show significant discontinuities in any of the surface windows, suggesting a robust correction of the effects of the geometry (and thus the aerosols) on the observations. Maps of aerosol and albedo uncertainties are also provided, along with absolute errors. We are thus able to provide reliable surface albedo maps at pixel scale for entire regions of Titan and for the whole VIMS spectral range.

  7. Retrieval of aerosol optical depth from surface solar radiation measurements using machine learning algorithms, non-linear regression and a radiative transfer-based look-up table

    NASA Astrophysics Data System (ADS)

    Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti

    2016-07-01

    In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge of past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure of aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations, with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, the neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in reproducing both ends of the AOD range seem to be caused by differences in aerosol composition: high AODs were in most cases those with high water vapour content, which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that the machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. This also means that machine learning methods could have potential in reproducing AOD from SSR even if the SSA changed during the observation period.

  8. Modelling alkali metal emissions in large-eddy simulation of a preheated pulverised-coal turbulent jet flame using tabulated chemistry

    NASA Astrophysics Data System (ADS)

    Wan, Kaidi; Xia, Jun; Vervisch, Luc; Liu, Yingzu; Wang, Zhihua; Cen, Kefa

    2018-03-01

    The numerical modelling of alkali metal reacting dynamics in turbulent pulverised-coal combustion is discussed using tabulated sodium chemistry in large eddy simulation (LES). A lookup table is constructed from a detailed sodium chemistry mechanism including five sodium species, i.e. Na, NaO, NaO2, NaOH and Na2O2H2, and 24 elementary reactions. This sodium chemistry table has four coordinates: the equivalence ratio, the mass fraction of the sodium element, the gas-phase temperature, and a progress variable. The table is first validated against the detailed sodium chemistry mechanism by zero-dimensional simulations. Then, LES of a turbulent pulverised-coal jet flame is performed and major coal-flame parameters are compared against experiments. The chemical percolation devolatilisation (CPD) model and the partially stirred reactor (PaSR) model are employed to predict coal pyrolysis and gas-phase combustion, respectively. The response of the five sodium species in the pulverised-coal jet flame is subsequently examined. Finally, a systematic global sensitivity analysis of the sodium lookup table is performed and the accuracy of the proposed tabulated sodium chemistry approach is assessed.

  9. A memory efficient implementation scheme of Gauss error function in a Laguerre-Volterra network for neuroprosthetic devices

    NASA Astrophysics Data System (ADS)

    Li, Will X. Y.; Cui, Ke; Zhang, Wei

    2017-04-01

    A cognitive neural prosthesis is a man-made device which can be used to restore or compensate for lost human cognitive modalities. The generalized Laguerre-Volterra (GLV) network serves as a robust mathematical underpinning for the development of such a prosthetic instrument. In this paper, a hardware implementation scheme of the Gauss error function for the GLV network targeting reconfigurable platforms is reported. Numerical approximations are formulated which transform the computation of this nonelementary function into combinational operations of elementary functions, so that memory-intensive look-up table (LUT) based approaches can be circumvented. The computational precision is made adjustable through an error compensation scheme, which is proposed based on experimental observation of the mathematical characteristics of the error trajectory. The precision can be further customized by exploiting the run-time characteristics of the reconfigurable system. Compared to the polynomial expansion based implementation scheme, the utilization of slice LUTs, occupied slices, and DSP48E1s on a Xilinx XC6VLX240T field-programmable gate array decreased by 94.2%, 94.1%, and 90.0%, respectively. Compared to the look-up table based scheme, 1.0 × 10^17 bits of storage can be spared under the maximum allowable error of 1.0 × 10^-3. The proposed implementation scheme can be employed in the study of large-scale neural ensemble activity and in the design and development of neural prosthetic devices.
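
    As a generic illustration of trading a LUT for combinational operations of elementary functions (not necessarily the authors' formulation), the classical Abramowitz and Stegun rational approximation evaluates erf with an absolute error of about 1.5 × 10^-7, comfortably within the 1.0 × 10^-3 budget cited above:

      import math

      # Abramowitz & Stegun 7.1.26: erf(x) ~ 1 - poly(t) * exp(-x^2), t = 1 / (1 + p*x).
      P = 0.3275911
      A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

      def erf_approx(x):
          sign = math.copysign(1.0, x)
          x = abs(x)
          t = 1.0 / (1.0 + P * x)
          poly = t * (A[0] + t * (A[1] + t * (A[2] + t * (A[3] + t * A[4]))))
          return sign * (1.0 - poly * math.exp(-x * x))

      worst = max(abs(erf_approx(i / 100) - math.erf(i / 100)) for i in range(-400, 401))
      print(worst)  # ~1.5e-7, well under the 1e-3 allowable error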

  10. Miniaturized Retrodirective Arrays for a Nanosatellite Platform

    DTIC Science & Technology

    2012-01-01

    Excerpted fragments: Table I (an abbreviated control module lookup table) lists the bit values (B4-B1) in columns PS2, PS3, and PS4 that control the phase shifters; an experimental-results section demonstrates full-duplex operation.

  11. First Retrieval of Surface Lambert Albedos From Mars Reconnaissance Orbiter CRISM Data

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Arvidson, R. E.; Murchie, S. L.; Wolff, M. J.; Smith, M. D.; Martin, T. Z.; Milliken, R. E.; Mustard, J. F.; Pelkey, S. M.; Lichtenberg, K. A.; Cavender, P. J.; Humm, D. C.; Titus, T. N.; Malaret, E. R.

    2006-12-01

    We have developed a pipeline-processing software system to convert radiance-on-sensor for each of 72 out of 544 CRISM spectral bands used in global mapping to the corresponding surface Lambert albedo, accounting for atmospheric, thermal, and photoclinometric effects. We will present and interpret first results from this software system for the retrieval of Lambert albedos from CRISM data. For the multispectral mapping modes, these pipeline-processed 72 spectral bands constitute all of the available bands, for wavelengths from 0.362 to 3.920 μm, at 100-200 m/pixel spatial resolution and ~0.006 μm spectral resolution. For the hyperspectral targeted modes, these 72 bands are only a selection of all 544 spectral bands, but at a resolution of 15-38 m/pixel. The pipeline processing for both observing modes (multispectral and hyperspectral) will use climatology, based on data from MGS/TES, to estimate ice- and dust-aerosol optical depths prior to the atmospheric correction with lookup tables based upon radiative-transfer calculations via DISORT. There is one DISORT atmospheric-correction lookup table for converting radiance-on-sensor to Lambert albedo for each of the 72 spectral bands. The measurements of the Emission Phase Function (EPF) during targeting will not be employed in this pipeline-processing system. We are developing a separate system for extracting more accurate aerosol optical depths and surface scattering properties. This separate system will use direct calls (instead of lookup tables) to the DISORT code for all 544 bands, and it will use the EPF data directly, bootstrapping from the climatology data for the aerosol optical depths. The pipeline processing will thermally correct the albedos for the spectral bands above ~2.6 μm using one of four techniques for determining surface temperature: 1) climatology, 2) empirical estimation of the albedo at 3.9 μm from the measured albedo at 2.5 μm, 3) a physical thermal model (PTM) based upon maps of thermal inertia from TES and coarse-resolution surface slopes (SS) from MOLA, and 4) a photoclinometric extension to the PTM that uses CRISM albedos at 0.41 μm to compute the SS at CRISM spatial resolution. For the thermal correction, we expect that each of these four techniques will be valuable for some fraction of the observations.

  12. Efficient hash tables for network applications.

    PubMed

    Zink, Thomas; Waldvogel, Marcel

    2015-01-01

    Hashing has yet to be widely accepted as a component of hard real-time systems and hardware implementations, due to still existing prejudices concerning the unpredictability of space and time requirements resulting from collisions. While in theory perfect hashing can provide optimal mapping, in practice, finding a perfect hash function is too expensive, especially in the context of high-speed applications. The introduction of hashing with multiple choices, d-left hashing and probabilistic table summaries, has caused a shift towards deterministic DRAM access. However, high amounts of rare and expensive high-speed SRAM need to be traded off for predictability, which is infeasible for many applications. In this paper we show that previous suggestions suffer from the false precondition of full generality. Our approach exploits four individual degrees of freedom available in many practical applications, especially hardware and high-speed lookups. This reduces the requirement of on-chip memory up to an order of magnitude and guarantees constant lookup and update time at the cost of only minute amounts of additional hardware. Our design makes efficient hash table implementations cheaper, more predictable, and more practical.

  13. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.

  14. Methyl Bromide Commodity Fumigation Buffer Zone Lookup Tables

    EPA Pesticide Factsheets

    Product labels for methyl bromide used in commodity and structural fumigation include requirements for buffer zones around treated areas. The information on this page will allow you to find the appropriate buffer zone for your planned application.

  15. Use of Fluka to Create Dose Calculations

    NASA Technical Reports Server (NTRS)

    Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John

    2012-01-01

    Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged ion radiation, including ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon, was simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.

  16. The OpenCalphad thermodynamic software interface.

    PubMed

    Sundman, Bo; Kattner, Ursula R; Sigli, Christophe; Stratmann, Matthias; Le Tellier, Romain; Palumbo, Mauro; Fries, Suzana G

    2016-12-01

    Thermodynamic data are needed for all kinds of simulations of materials processes. Thermodynamics determines the set of stable phases and also provides chemical potentials, compositions and driving forces for nucleation of new phases and phase transformations. Software to simulate materials properties needs accurate and consistent thermodynamic data to predict metastable states that occur during phase transformations. Due to long calculation times, thermodynamic data are frequently pre-calculated into "lookup tables" to speed up calculations. This creates additional uncertainties, as data must be interpolated or extrapolated and conditions may differ from those assumed when creating the lookup table. Speed and accuracy require that thermodynamic software be fully parallelized, and the OpenCalphad (OC) software is the first thermodynamic software supporting this feature. This paper gives a brief introduction to computational thermodynamics, introduces the basic features of the OC software and presents four different application examples to demonstrate its versatility.

  17. Accelerated Gaussian mixture model and its application on image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi

    2013-03-01

    Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, adopting the following approaches: a lookup table is established for the Gaussian probability matrix to avoid repeating the probability calculation for every pixel; a blocking detection method is employed on each block of pixels to further decrease the complexity; and the structure of the lookup table is changed from 3D to 1D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the OTSU method, which decides the threshold value automatically. Our algorithm has been tested by segmenting flames and faces from a set of real pictures, and the experimental results prove its efficiency in segmentation precision and computational cost.
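
    A sketch of the lookup-table acceleration for 8-bit images: since a pixel takes only 256 values, each component's weighted Gaussian probability can be precomputed once into a flat 1-D array and then read by index for every pixel. Component parameters below are placeholders:

      import math

      LEVELS = 256  # 8-bit grey levels

      def build_gmm_lut(means, variances, weights):
          # Flat 1-D table: lut[k * LEVELS + v] = weight_k * N(v; mean_k, var_k).
          lut = []
          for m, var, w in zip(means, variances, weights):
              norm = w / math.sqrt(2.0 * math.pi * var)
              lut.extend(norm * math.exp(-(v - m) ** 2 / (2.0 * var))
                         for v in range(LEVELS))
          return lut

      # Hypothetical 2-component model; per-pixel evaluation is one indexed read.
      lut = build_gmm_lut(means=[60.0, 190.0], variances=[400.0, 900.0],
                          weights=[0.4, 0.6])
      pixel = 75
      print([lut[k * LEVELS + pixel] for k in range(2)])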

  18. Recognizing human actions by learning and matching shape-motion prototype trees.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2012-03-01

    A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.

  19. Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Daigle, Matthew J.

    2014-01-01

    Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) Transducer. I/P Transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady state output, and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.

  20. Electronics for a prototype variable field of view PET camera using the PMT-quadrant-sharing detector array

    NASA Astrophysics Data System (ADS)

    Li, H.; Wong, Wai-Hoi; Zhang, N.; Wang, J.; Uribe, J.; Baghaei, H.; Yokoyama, S.

    1999-06-01

    Electronics for a prototype high-resolution PET camera with eight position-sensitive detector modules has been developed. Each module has 16 BGO (Bi4Ge3O12) blocks (each block is composed of 49 crystals). The design goals are component and space reduction. The electronics is composed of five parts: front-end analog processing, digital position decoding, fast timing, coincidence processing and master data acquisition. The front-end analog circuit is a zone-based structure (each zone has 3×3 PMTs). Nine ADCs digitize integration signals of an active zone identified by eight trigger clusters; each cluster is composed of six photomultiplier tubes (PMTs). A trigger corresponding to a gamma ray is sent to a fast timing board to obtain a time-mark, and the nine digitized signals are passed to the position decoding board, where a real block (four PMTs) can be picked out from the zone for position decoding. Lookup tables are used for energy discrimination and to identify the gamma-hit crystal location. The coincidence board opens a 70-ns initial timing window, followed by two 20-ns true/accidental time-mark lookup table windows. The data output from the coincidence board can be acquired either in sinogram mode or in list mode with a Motorola/IRONICS VME-based system.

  1. Optimizing TLB entries for mixed page size storage in contiguous memory

    DOEpatents

    Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Kriegel, Jon K.; Ohmacht, Martin; Steinmacher-Burow, Burkhard

    2013-04-30

    A system and method for accessing memory are provided. The system comprises a lookup buffer for storing one or more page table entries, wherein each of the one or more page table entries comprises at least a virtual page number and a physical page number; a logic circuit for receiving a virtual address from said processor, said logic circuit for matching the virtual address to the virtual page number in one of the page table entries to select the physical page number in the same page table entry, said page table entry having one or more bits set to exclude a memory range from a page.
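
    A schematic software analogue of the claimed lookup buffer, with per-entry page sizes to support mixed page sizes in contiguous memory; the patented memory-range exclusion bits are omitted here for brevity:

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class PageTableEntry:
          vpn: int         # virtual page number
          ppn: int         # physical page number
          page_shift: int  # log2(page size); each entry may use a different size

      def translate(lookup_buffer, vaddr) -> Optional[int]:
          # Match the virtual page number at each entry's own page size and
          # splice the in-page offset onto the selected physical page number.
          for e in lookup_buffer:
              if vaddr >> e.page_shift == e.vpn:
                  offset = vaddr & ((1 << e.page_shift) - 1)
                  return (e.ppn << e.page_shift) | offset
          return None  # miss -> fall back to a full page table walk

      tlb = [PageTableEntry(vpn=0x3, ppn=0x1A, page_shift=16),    # one 64 KiB page
             PageTableEntry(vpn=0x400, ppn=0x9C, page_shift=12)]  # one 4 KiB page
      print(hex(translate(tlb, 0x3ABCD)))  # 0x3ABCD >> 16 == 0x3 -> 0x1aabcd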

  2. Application of machine learning techniques to lepton energy reconstruction in water Cherenkov detectors

    NASA Astrophysics Data System (ADS)

    Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.

    2018-04-01

    The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.

  3. Trip Energy Estimation Methodology and Model Based on Real-World Driving Data for Green Routing Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holden, Jacob; Van Til, Harrison J; Wood, Eric W

    A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
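
    A condensed sketch of the categorize-then-look-up step: bin each sub-trip by average speed and road gradient, read the average energy rate for that bin, and sum over the proposed route. Bin edges and rates are illustrative placeholders, not the model's fitted values:

      SPEED_EDGES = [0, 30, 60, 90, 130]     # km/h bin edges (illustrative)
      GRADE_EDGES = [-6, -2, 2, 6]           # percent grade bin edges (illustrative)
      RATES = [[0.12, 0.15, 0.22],           # kWh/km for (speed bin, grade bin)
               [0.10, 0.13, 0.20],
               [0.11, 0.14, 0.24],
               [0.14, 0.18, 0.30]]

      def bin_index(edges, x):
          # Index of the bin containing x; out-of-range values clamp to the end bins.
          for i in range(len(edges) - 1):
              if edges[i] <= x < edges[i + 1]:
                  return i
          return 0 if x < edges[0] else len(edges) - 2

      def trip_energy_kwh(segments):
          # segments: (distance_km, avg_speed_kmh, grade_percent) per sub-trip.
          return sum(d * RATES[bin_index(SPEED_EDGES, v)][bin_index(GRADE_EDGES, g)]
                     for d, v, g in segments)

      print(trip_energy_kwh([(5.0, 25, 0.5), (12.0, 70, 3.0), (3.0, 45, -4.0)]))  # 3.93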

  4. Strategies to overcome photobleaching in algorithm-based adaptive optics for nonlinear in-vivo imaging.

    PubMed

    Caroline Müllenbroich, M; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith

    2014-01-01

    We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing on either two photon fluorescence or second harmonic signals as merit factors. We present here several strategies to overcome photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures and the intrinsic second harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure to the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon starved experiments, for example fluorescent life time imaging, or when photobleaching is a concern.

  5. Sizing of single evaporating droplet with Near-Forward Elastic Scattering Spectroscopy

    NASA Astrophysics Data System (ADS)

    Woźniak, M.; Jakubczyk, D.; Derkachov, G.; Archer, J.

    2017-11-01

    We have developed an optical setup and related numerical models to study the evolution of single evaporating micro-droplets by analysis of their spectral properties. Our approach combines the advantages of electrodynamic trapping with broadband spectral analysis under supercontinuum laser illumination. The elastically scattered light within the spectral range of 500-900 nm is observed by a spectrometer placed at near-forward scattering angles between 4.3° and 16.2° and compared with a numerically generated lookup table of broadband Mie scattering. Our solution has been successfully applied to infer the size evolution of evaporating droplets of pure liquids (diethylene and ethylene glycol) and suspensions of nanoparticles (silica and gold nanoparticles in diethylene glycol), with a maximal accuracy of ±25 nm. The obtained results have been compared with previously developed sizing techniques: (i) the Mie Scattering Lookup Table Method, based on the analysis of Mie scattering images, and (ii) droplet weighting. Our approach makes it possible to handle levitating objects over a much larger size range (radius from 0.5 μm to 30 μm) than optical tweezers (typically radius below 8 μm) and to analyse them over a much wider spectral range than with commonly used LED sources.

  6. Application of Polarization to the MODIS Aerosol Retrieval Over Land

    NASA Technical Reports Server (NTRS)

    Levy, Robert C.; Remer, Lorraine R.; Kaufman, Yoram J.

    2004-01-01

    Reflectance measurements in the visible and infrared wavelengths, from the Moderate Resolution Imaging Spectroradiometer (MODIS), are used to derive aerosol optical thicknesses (AOT) and aerosol properties over land surfaces. The measured spectral reflectance is compared with lookup tables, containing theoretical reflectance calculated by radiative transfer (RT) code. Specifically, this RT code calculates top of the atmosphere (TOA) intensities based on a scalar treatment of radiation, neglecting the effects of polarization. In the red and near infrared (NIR) wavelengths the use of the scalar RT code is of sufficient accuracy to model TOA reflectance. However, in the blue, molecular and aerosol scattering dominate the TOA signal. Here, polarization effects can be large, and should be included in the lookup table derivation. Using a RT code that allows for both vector and scalar calculations, we examine the reflectance differences at the TOA, with and without polarization. We find that the differences in blue channel TOA reflectance (vector - scalar) may reach values of 0.01 or greater, depending on the sun/surface/sensor scattering geometry. Reflectance errors of this magnitude translate to AOT differences of 0.1, which is a very large error, especially when the actual AOT is low. As a result of this study, the next version of aerosol retrieval from MODIS over land will include polarization.

  7. Spatial Coherence Between Remotely Sensed Ocean Color Data and Vertical Distribution of Lidar Backscattering in Coastal Stratified Waters

    DTIC Science & Technology

    2010-01-01

    Excerpted fragments: ... (depth of peak) could be retrieved based solely on Rrs(λ, 0+) measurements. The use of Look-Up Tables (LUTs) of regionally and seasonally averaged IOPs ...

  8. Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian

    2009-07-15

    Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing by the probability density functions used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referenced ADF-PCMχ. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCMχ model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCMχ model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations. Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCMχ reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape.

  9. The use of three-parameter rating table lookup programs, RDRAT and PARM3, in hydraulic flow models

    USGS Publications Warehouse

    Sanders, C.L.

    1995-01-01

    Subroutines RDRAT and PARM3 enable computer programs such as the BRANCH open-channel unsteady-flow model to route flows through or over combinations of critical-flow sections, culverts, bridges, road-overflow sections, fixed spillways, and(or) dams. The subroutines also obstruct upstream flow to simulate the operation of flapper-type tide gates. A multiplier can be applied by date and time to simulate varying numbers of tide gates being open or alternative construction scenarios for multiple culverts. The subroutines use three-parameter (headwater, tailwater, and discharge) rating table lookup methods. These tables may be prepared manually using other programs that do step-backwater computations or compute flow through bridges and culverts or over dams. The subroutines therefore preclude the necessity of incorporating considerable hydraulic computational code into the client program, and provide complete flexibility for users of the model in routing flow through almost any fixed structure or combination of structures. The subroutines are written in Fortran 77, and have minimal exchange of information with the BRANCH model or other possible client programs. The report documents the interpolation methodology, data input requirements, and software.
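
    The core lookup can be sketched as bilinear interpolation of discharge over a (headwater, tailwater) grid; the actual subroutines handle far more bookkeeping (tide-gate logic, multipliers, Fortran I/O), so this Python fragment conveys only the interpolation step, on an invented grid:

      HW = [1.0, 2.0, 3.0]          # headwater stage (m), hypothetical grid
      TW = [0.5, 1.5, 2.5]          # tailwater stage (m), hypothetical grid
      Q  = [[ 5.0,  3.0,  0.0],     # discharge (m^3/s) at (HW[i], TW[j])
            [12.0,  9.0,  4.0],
            [22.0, 18.0, 11.0]]

      def _bracket(grid, x):
          # Lower grid node index and fractional position within that cell.
          i = 0
          while i < len(grid) - 2 and x > grid[i + 1]:
              i += 1
          f = (x - grid[i]) / (grid[i + 1] - grid[i])
          return i, min(max(f, 0.0), 1.0)

      def discharge(hw, tw):
          i, fi = _bracket(HW, hw)
          j, fj = _bracket(TW, tw)
          q_lo = Q[i][j]     * (1 - fj) + Q[i][j + 1]     * fj
          q_hi = Q[i + 1][j] * (1 - fj) + Q[i + 1][j + 1] * fj
          return q_lo * (1 - fi) + q_hi * fi

      print(discharge(hw=2.5, tw=1.0))  # 15.25, between the tabulated nodes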

  10. Impact of hydrogen SAE J2601 fueling methods on fueling time of light-duty fuel cell electric vehicles

    DOE PAGES

    Reddi, Krishna; Elgowainy, Amgad; Rustagi, Neha; ...

    2017-05-16

    Hydrogen fuel cell electric vehicles (HFCEVs) are zero-emission vehicles (ZEVs) that can provide drivers a similar experience to conventional internal combustion engine vehicles (ICEVs), in terms of fueling time and performance (i.e. power and driving range). The Society of Automotive Engineers (SAE) developed fueling protocol J2601 for light-duty HFCEVs to ensure safe vehicle fills while maximizing fueling performance. This study employs a physical model that simulates and compares the fueling performance of two fueling methods, known as the “lookup table” method and the “MC formula” method, within the SAE J2601 protocol. Both fueling methods provide fast fueling of HFCEVs within minutes, but the MC formula method takes advantage of active measurement of precooling temperature to dynamically control the fueling process, and thereby provides faster vehicle fills. The MC formula method greatly reduces fueling time compared to the lookup table method at higher ambient temperatures, as well as when the precooling temperature falls on the colder side of the expected temperature window, for all station types. Although the SAE J2601 lookup table method is the currently implemented standard for refueling hydrogen fuel cell vehicles, the MC formula method provides significant fueling time advantages in certain conditions; these warrant its implementation in future hydrogen refueling stations for better customer satisfaction with the fueling experience of HFCEVs.

  12. Numerical Model Sensitivity to Heterogeneous Satellite Derived Vegetation Roughness

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael; Eastman, Joseph; Borak, Jordan

    2011-01-01

    The sensitivity of a mesoscale weather prediction model to a 1 km satellite-based vegetation roughness initialization is investigated for a domain within the south-central United States. Three different roughness databases are employed: i) a control or standard lookup-table roughness that is a function only of land cover type, ii) a spatially heterogeneous roughness database, specific to the domain, that was previously derived using a physically based procedure and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery, and iii) a MODIS climatologic roughness database that, like (i), is a function only of land cover type but possesses domain-specific mean values from (ii). The model used is the Weather Research and Forecast Model (WRF) coupled to the Community Land Model within the Land Information System (LIS). For each simulation, a statistical comparison is made between modeled results and ground observations within a domain including Oklahoma, eastern Arkansas, and northwest Louisiana during a 4-day period within IHOP 2002. The sensitivity analysis compares the impact of the three roughness initializations on time-series temperature, precipitation probability of detection (POD), average wind speed, boundary layer height, and turbulent kinetic energy (TKE). Overall, the results indicate that, for the current investigation, replacement of the standard look-up table values with the satellite-derived values statistically improves model performance for most observed variables. Such natural roughness heterogeneity enhances the surface wind speed, PBL height and TKE production by up to 10 percent, with a lesser effect over grassland and a greater effect over mixed land cover domains.

  14. Self-Contained Avionics Sensing and Flight Control System for Small Unmanned Aerial Vehicle

    NASA Technical Reports Server (NTRS)

    Ingham, John C. (Inventor); Shams, Qamar A. (Inventor); Logan, Michael J. (Inventor); Fox, Robert L. (Inventor); Fox, legal representative, Melanie L. (Inventor); Kuhn, III, Theodore R. (Inventor); Babel, III, Walter C. (Inventor); Fox, legal representative, Christopher L. (Inventor); Adams, James K. (Inventor); Laughter, Sean A. (Inventor)

    2011-01-01

    A self-contained avionics sensing and flight control system is provided for an unmanned aerial vehicle (UAV). The system includes sensors for sensing flight control parameters and surveillance parameters, and a Global Positioning System (GPS) receiver. Flight control parameters and location signals are processed to generate flight control signals. A Field Programmable Gate Array (FPGA) is configured to provide a look-up table storing sets of values with each set being associated with a servo mechanism mounted on the UAV and with each value in each set indicating a unique duty cycle for the servo mechanism associated therewith. Each value in each set is further indexed to a bit position indicative of a unique percentage of a maximum duty cycle for the servo mechanism associated therewith. The FPGA is further configured to provide a plurality of pulse width modulation (PWM) generators coupled to the look-up table. Each PWM generator is associated with and adapted to be coupled to one of the servo mechanisms.
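
    In software form, the table-plus-PWM arrangement reduces to indexing a per-servo set of duty-cycle values and converting the selected value into the high/low times of one PWM frame. The timings below are hypothetical hobby-servo values, not the patent's stored constants:

      PWM_PERIOD_US = 20000   # one 50 Hz servo frame (hypothetical timing)

      # Per-servo duty-cycle sets indexed by bit position; each value corresponds
      # to a fixed percentage of the maximum duty cycle (hypothetical values).
      DUTY_TABLE_US = {
          "elevator": [1000, 1250, 1500, 1750, 2000],
          "rudder":   [1000, 1300, 1500, 1700, 2000],
      }

      def pwm_frame(servo, index):
          # Return (high_us, low_us) that a PWM generator would emit each frame.
          high = DUTY_TABLE_US[servo][index]
          return high, PWM_PERIOD_US - high

      print(pwm_frame("elevator", 2))  # (1500, 18500): centered servo position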

  15. Distributed database kriging for adaptive sampling (D²KAS)

    DOE PAGES

    Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...

    2015-03-18

    We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
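
    The prediction step can be caricatured as a distance-weighted average over neighboring database points. Real kriging fits a variogram and solves for optimal weights; the inverse-distance sketch below conveys only the "weighted average of neighbors plus an uncertainty estimate" idea:

      def predict(query, neighbors, eps=1e-12):
          # neighbors: list of (point, value); inverse-square-distance weights.
          ws, vs = [], []
          for p, v in neighbors:
              d2 = sum((a - b) ** 2 for a, b in zip(query, p))
              ws.append(1.0 / (d2 + eps))
              vs.append(v)
          total = sum(ws)
          est = sum(w * v for w, v in zip(ws, vs)) / total
          var = sum(w * (v - est) ** 2 for w, v in zip(ws, vs)) / total
          return est, var ** 0.5  # estimate and a crude uncertainty

      db = [((0.0, 0.0), 1.0), ((1.0, 0.0), 2.0), ((0.0, 1.0), 1.5)]
      print(predict((0.2, 0.1), db))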

  16. Single-Chip Microcomputer Control Of The PWM Inverter

    NASA Astrophysics Data System (ADS)

    Morimoto, Masayuki; Sato, Shinji; Sumito, Kiyotaka; Oshitani, Katsumi

    1987-10-01

    A single-chip microcomputer-based controller for a pulsewidth-modulated 1.7 kVA inverter for an air conditioner is presented. The PWM pattern generation and the system control of the air conditioner are implemented in software on the 8-bit single-chip microcomputer. The single-chip microcomputer has the disadvantages of low processing speed and small memory capacity, which are overcome by the magnetic flux control method. The PWM pattern is generated every 90 μs, and the memory capacity of the PWM look-up table is less than 2 kbytes. Simple and reliable control is realized by the software-based implementation.

  17. Choice: 36 band feature selection software with applications to multispectral pattern recognition

    NASA Technical Reports Server (NTRS)

    Jones, W. C.

    1973-01-01

    Feature selection software was developed at the Earth Resources Laboratory that is capable of inputting up to 36 channels and selecting channel subsets according to several criteria based on divergence. One of the criteria used is compatible with the table look-up classifier requirements. The software indicates which channel subset best separates (based on average divergence) each class from all other classes. The software employs an exhaustive search technique, and computer time is not prohibitive. A typical task to select the best 4 of 22 channels for 12 classes takes 9 minutes on a Univac 1108 computer.

  18. Development and validation of light-duty vehicle modal emissions and fuel consumption values for traffic models.

    DOT National Transportation Integrated Search

    1999-03-01

    A methodology for developing modal vehicle emissions and fuel consumption models has been developed by Oak Ridge National Laboratory (ORNL), sponsored by the Federal Highway Administration. These models, in the form of look-up tables for fuel consump...

  19. Rapid computation of single PET scan rest-stress myocardial blood flow parametric images by table look up.

    PubMed

    Guehl, Nicolas J; Normandin, Marc D; Wooten, Dustin W; Rozen, Guy; Ruskin, Jeremy N; Shoup, Timothy M; Woo, Jonghye; Ptaszek, Leon M; Fakhri, Georges El; Alpert, Nathaniel M

    2017-09-01

    We have recently reported a method for measuring rest-stress myocardial blood flow (MBF) using a single, relatively short, PET scan session. The method requires two IV tracer injections, one to initiate rest imaging and one at peak stress. We previously validated absolute flow quantitation in mL/min/cc for standard bull's eye, segmental analysis. In this work, we extend the method for fast computation of rest-stress MBF parametric images. We provide an analytic solution to the single-scan rest-stress flow model, which is then solved using a two-dimensional table lookup method (LM). Simulations were performed to compare the accuracy and precision of the lookup method with the original nonlinear method (NLM). The method was then applied to 16 single-scan rest/stress measurements made in 12 pigs: seven studied after infarction of the left anterior descending artery (LAD) territory, and nine imaged in the native state. Parametric maps of rest and stress MBF as well as maps of left (fLV) and right (fRV) ventricular spill-over fractions were generated. Regions of interest (ROIs) for 17 myocardial segments were defined in bull's eye fashion on the parametric maps. The mean of each ROI was then compared to the rest (K1r) and stress (K1s) MBF estimates obtained by fitting the 17 regional TACs with the NLM. In simulation, the LM performed as well as the NLM in terms of precision and accuracy, and did not show any bias introduced by the use of a predefined two-dimensional lookup table. In experimental data, the parametric maps demonstrated good statistical quality, and the LM was computationally much more efficient than the original NLM. Very good agreement was obtained between the mean MBF calculated on the parametric maps for each of the 17 ROIs and the regional MBF values estimated by the NLM (K1,map(LM) = 1.019 × K1,ROI(NLM) + 0.019, R^2 = 0.986; mean difference = 0.034 ± 0.036 mL/min/cc). We developed a table lookup method for fast computation of parametric images of rest and stress MBF. Our results show the feasibility of obtaining good-quality MBF maps using modest computational resources, demonstrating that the method can be applied in a clinical environment to obtain fully quantitative MBF information. © 2017 American Association of Physicists in Medicine.
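
    Conceptually, the lookup method trades a per-voxel nonlinear fit for a one-time tabulation of the forward model over a (rest, stress) parameter grid followed by nearest-entry inversion. A schematic Python sketch with a stand-in forward model (not the authors' kinetic model):

      def build_table(forward, rest_grid, stress_grid):
          # One-time tabulation: parameter pair -> model output vector.
          return [((kr, ks), forward(kr, ks))
                  for kr in rest_grid for ks in stress_grid]

      def invert(table, observed):
          # Nearest tabulated output wins (a fit-free lookup inversion).
          def sq_dist(pred):
              return sum((p - o) ** 2 for p, o in zip(pred, observed))
          return min(table, key=lambda item: sq_dist(item[1]))[0]

      # Stand-in forward model: two "features" of a time-activity curve.
      forward = lambda kr, ks: (kr + 0.3 * ks, ks - 0.1 * kr)
      grid = [i / 10 for i in range(1, 21)]          # 0.1 ... 2.0 mL/min/cc
      table = build_table(forward, grid, grid)
      print(invert(table, observed=(1.0, 0.9)))      # recovers ~(0.7, 1.0)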

  20. A Dual-Wavelength Radar Technique to Detect Hydrometeor Phases

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert

    2016-01-01

    This study is aimed at investigating the feasibility of a Ku- and Ka-band space/air-borne dual-wavelength radar algorithm to discriminate various phase states of precipitating hydrometeors. A phase-state classification algorithm has been developed from radar measurements of snow, mixed-phase and rain obtained from stratiform storms. The algorithm, presented in the form of a look-up table that links the Ku-band radar reflectivities and the dual-frequency ratio (DFR) to the phase states of hydrometeors, is checked by applying it to measurements from the Jet Propulsion Laboratory, California Institute of Technology, Airborne Precipitation Radar Second Generation (APR-2). In creating the statistically based phase look-up table, the attenuation-corrected (or true) radar reflectivity factors are employed, leading to better accuracy in determining the hydrometeor phase. In practice, however, the true radar reflectivities are not always available before the phase states of the hydrometeors are determined. Therefore, it is desirable to make use of the measured radar reflectivities in classifying the phase states. To do this, a phase-identification procedure is proposed that uses only measured radar reflectivities. The procedure is then tested using APR-2 airborne radar data. Analysis of the classification results in stratiform rain indicates that the regions of snow, mixed-phase and rain derived from the phase-identification algorithm coincide reasonably well with those determined from the measured radar reflectivities and linear depolarization ratio (LDR).

  1. Effect of Thin Cirrus Clouds on Dust Optical Depth Retrievals From MODIS Observations

    NASA Technical Reports Server (NTRS)

    Feng, Qian; Hsu, N. Christina; Yang, Ping; Tsay, Si-Chee

    2011-01-01

    The effect of thin cirrus clouds in retrieving the dust optical depth from MODIS observations is investigated by using a simplified aerosol retrieval algorithm based on the principles of the Deep Blue aerosol property retrieval method. Specifically, the errors of the retrieved dust optical depth due to thin cirrus contamination are quantified through the comparison of two retrievals by assuming dust-only atmospheres and the counterparts with overlapping mineral dust and thin cirrus clouds. To account for the effect of the polarization state of the radiation field on radiance simulation, a vector radiative transfer model is used to generate the lookup tables. In the forward radiative transfer simulations involved in generating the lookup tables, the Rayleigh scattering by atmospheric gaseous molecules and the reflection of the surface, assumed to be Lambertian, are fully taken into account. Additionally, the spheroid model is utilized to account for the nonsphericity of dust particles in computing their optical properties. For simplicity, the single-scattering albedo, scattering phase matrix, and optical depth are specified a priori for thin cirrus clouds assumed to consist of droxtal ice crystals. The present results indicate that the errors in the retrieved dust optical depths due to the contamination of thin cirrus clouds depend on the scattering angle, underlying surface reflectance, and dust optical depth. Under heavy dust conditions, the absolute errors are comparable to the prescribed optical depths of thin cirrus clouds.

  2. Improving color characteristics of LCD

    NASA Astrophysics Data System (ADS)

    Feng, Xiao-fan; Daly, Scott J.

    2005-01-01

    The drive toward larger size, higher spatial resolution, and wider aperture in LCDs has been shown to increase the electrical crosstalk between electrodes in the driver circuit. This crosstalk leads to additivity errors in color LCDs. In this paper, the crosstalk effect was analyzed with micrographs captured from an imaging colorimeter. The experimental results reveal the subpixel nature of color crosstalk. A spatially based subpixel crosstalk correction algorithm was developed to improve the color performance of LCDs. Compared to a 3D lookup table approach, the new algorithm is easier to implement and more accurate in performance.

  3. General Mission Analysis Tool (GMAT) User's Guide (Draft)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2007-01-01

    The General Mission Analysis Tool (GMAT) is a space trajectory optimization and mission analysis system. This document is a draft of the user's guide for the tool. Included in the guide is information about Configuring Objects/Resources, Object Fields: Quick Look-up Tables, and Commands and Events.

  4. Determining Normal-Distribution Tolerance Bounds Graphically

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    Graphical method reduces calculations and table lookup. Distribution established from only three points: upper and lower confidence bounds of the mean and lower confidence bound of the standard deviation. Method requires only a few calculations with simple equations. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.

  5. Defense Travel Management Office (DTMO)

    Science.gov Websites

    Site sections include Allowance Tables, Allowance Calculators, Restricted Fares, and Training Resources (Training Resource Lookup, Listing of Available Training Resources, New and Updated Training Resources, and Instructions for Accessing Training in Travel Explorer).

  6. A short note on calculating the adjusted SAR index

    USDA-ARS's Scientific Manuscript database

    A simple algebraic technique is presented for computing the adjusted SAR Index proposed by Suarez (1981). The statistical formula presented in this note facilitates the computation of the adjusted SAR without the use of either a look-up table, custom computer software or the need to compute exact a...

  7. A trainable decisions-in decision-out (DEI-DEO) fusion system

    NASA Astrophysics Data System (ADS)

    Dasarathy, Belur V.

    1998-03-01

    Most of the decision fusion systems proposed hitherto in the literature for multiple-data-source (sensor) environments operate on the basis of pre-defined fusion logic, whether crisp (deterministic), probabilistic, or fuzzy in nature, with no specific learning phase. The fusion systems that are trainable, i.e., ones that have a learning phase, mostly operate in the features-in-decision-out mode, which essentially reduces the fusion process to a pattern classification task in the joint feature space. In this study, a trainable decisions-in-decision-out fusion system is described that estimates a fuzzy membership distribution spread across the different decision choices based on the performance of the different decision processors (sensors) for each training sample (object) associated with a specific ground truth (true decision). Based on a multi-decision-space histogram analysis of the performance of the different processors over the entire training data set, a look-up table associating each cell of the histogram with a specific true decision is generated, which forms the basis for the operational phase. In the operational phase, for each set of decision inputs, a pointer into the previously learnt look-up table is generated, from which a fused decision is derived. This methodology, although primarily designed for fusing crisp decisions from multiple decision sources, can be adapted for fusion of fuzzy decisions as well if such are the inputs from these sources. Examples illustrating the benefits and limitations of the crisp and fuzzy versions of the trainable fusion systems are also included.
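
    A toy sketch of the two phases described above, using a dictionary keyed by the tuple of processor decisions as the multi-decision-space histogram (all names and data are invented):

        from collections import Counter, defaultdict

        def train(decision_tuples, truths):
            # Each histogram cell is the tuple of processor decisions; count
            # the ground-truth labels that fall into each cell.
            hist = defaultdict(Counter)
            for cell, truth in zip(decision_tuples, truths):
                hist[cell][truth] += 1
            # Associate each cell with its most frequent true decision.
            return {cell: c.most_common(1)[0][0] for cell, c in hist.items()}

        def fuse(table, cell, fallback=None):
            # Operational phase: the decision inputs index into the learnt table.
            return table.get(cell, fallback)

        # Three processors, two classes ("A"/"B"), five training samples.
        cells = [("A","A","B"), ("A","A","B"), ("A","B","B"), ("B","B","B"), ("A","A","A")]
        truth = ["A", "A", "B", "B", "A"]
        table = train(cells, truth)
        print(fuse(table, ("A","A","B")))   # -> "A"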

  8. Using false colors to protect visual privacy of sensitive content

    NASA Astrophysics Data System (ADS)

    Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj

    2015-03-01

    Many tools have been proposed for protecting visual privacy, but the tools available today lack either all or some of the important properties expected from them. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior detection of faces or other sensitive regions and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online-based study) assessments using faces from the FERET public dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while the compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
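
    A minimal sketch of the false-color mapping, assuming an optional compressive point transform followed by a per-pixel palette lookup; the palette below is an arbitrary stand-in, not the DEF or RBS color scales evaluated in the paper:

        import numpy as np

        def false_color(gray_u8, palette, compress=True):
            # gray_u8: (H, W) uint8 image; palette: (256, 3) uint8 lookup table.
            x = gray_u8.astype(np.float32) / 255.0
            if compress:
                x = np.sqrt(x)                      # compressive point transform
            idx = (x * 255.0).astype(np.uint8)
            return palette[idx]                     # per-pixel table lookup

        # Invented palette: simple ramps; a permutation palette would make the
        # mapping exactly invertible (and encryptable), as the paper notes.
        r = np.arange(256)
        palette = np.stack([r, r[::-1], (r * 7) % 256], axis=1).astype(np.uint8)

        img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
        print(false_color(img, palette).shape)      # (4, 4, 3)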

  9. On-sky Closed-loop Correction of Atmospheric Dispersion for High-contrast Coronagraphy and Astrometry

    NASA Astrophysics Data System (ADS)

    Pathak, P.; Guyon, O.; Jovanovic, N.; Lozi, J.; Martinache, F.; Minowa, Y.; Kudo, T.; Kotani, T.; Takami, H.

    2018-02-01

    Adaptive optics (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of telescope elevation angle. This look-up-table-based correction of atmospheric dispersion results in imperfect compensation, leaving residual dispersion in the point spread function (PSF), and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast of high-performance coronagraphs and can compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion made directly from science path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) that has a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. We have demonstrated with the Subaru Coronagraphic Extreme AO (SCExAO) system on-sky closed-loop correction of residual dispersion to <1 mas across the H band. This work will aid in the direct detection of habitable exoplanets with upcoming extremely large telescopes (ELTs) and also provides a diagnostic tool to test the performance of instruments that require sub-milliarcsecond correction.

  10. Implementation of a digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.

  11. Fixed-Base Comb with Window-Non-Adjacent Form (NAF) Method for Scalar Multiplication

    PubMed Central

    Seo, Hwajeong; Kim, Hyunjin; Park, Taehwan; Lee, Yeoncheol; Liu, Zhe; Kim, Howon

    2013-01-01

    Elliptic curve cryptography (ECC) is one of the most promising public-key techniques in terms of short key size and various crypto protocols. For this reason, many studies on the implementation of ECC on resource-constrained devices within a practical execution time have been conducted. To this end, we must focus on scalar multiplication, which is the most expensive operation in ECC. A number of studies have proposed pre-computation and advanced scalar multiplication using a non-adjacent form (NAF) representation, and more sophisticated approaches have employed a width-w NAF representation and a modified pre-computation table. In this paper, we propose a new pre-computation method in which zero occurrences are much more frequent than in previous methods. This method can be applied to ordinary group scalar multiplication, but it requires a large pre-computation table, so for practical purposes we combined the previous method with ours. This novel structure makes it possible to finely adjust speed performance against table size, so the pre-computation table can be customized for a given purpose. Finally, we can establish a customized look-up table for embedded microprocessors. PMID:23881143

  12. Improved cache performance in Monte Carlo transport calculations using energy banding

    NASA Astrophysics Data System (ADS)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.

  13. Automated Testcase Generation for Numerical Support Functions in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Schnieder, Stefan-Alexander

    2014-01-01

    We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE to produce a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open source UAS autopilot.

  14. Smart Information Management in Health Big Data.

    PubMed

    Muteba A, Eustache

    2017-01-01

    The smart information management system (SIMS) is concerned with the organization of anonymous patient records in a big data store and their extraction in order to provide needed real-time intelligence. The purpose of the present study is to highlight the design and the implementation of the smart information management system. We emphasize, on the one hand, the organization of big data in flat files to simulate a NoSQL database and, on the other hand, the extraction of information based on a lookup table and a cache mechanism. In the health big data context, the SIMS aims at the identification of new therapies and approaches to delivering care.

  15. Metaphorical motion in mathematical reasoning: further evidence for pre-motor implementation of structure mapping in abstract domains.

    PubMed

    Fields, Chris

    2013-08-01

    The theory of computation and category theory both employ arrow-based notations that suggest that the basic metaphor "state changes are like motions" plays a fundamental role in all mathematical reasoning involving formal manipulations. If this is correct, structure-mapping inferences implemented by the pre-motor action planning system can be expected to be involved in solving any mathematics problems not solvable by table lookups and number line manipulations alone. Available functional imaging studies of multi-digit arithmetic, algebra, geometry and calculus problem solving are consistent with this expectation.

  16. Mixed Linear/Square-Root Encoded Single Slope Ramp Provides a Fast, Low Noise Analog to Digital Converter with Very High Linearity for Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Wrigley, Christopher James (Inventor); Hancock, Bruce R. (Inventor); Cunningham, Thomas J. (Inventor); Newton, Kenneth W. (Inventor)

    2014-01-01

    An analog-to-digital converter (ADC) converts pixel voltages from a CMOS image sensor into a digital output. A voltage ramp generator generates a voltage ramp that has a linear first portion and a non-linear second portion. A digital output generator generates a digital output based on the voltage ramp, the pixel voltages, and comparator output from an array of comparators that compare the voltage ramp to the pixel voltages. A return lookup table linearizes the digital output values.

  17. Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment

    PubMed Central

    Mitchel, J.A.; Martin, I.S.

    2013-01-01

    A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629

  18. Digital slip frequency generator and method for determining the desired slip frequency

    DOEpatents

    Klein, Frederick F.

    1989-01-01

    The output frequency of an electric power generator is kept constant with variable rotor speed by automatic adjustment of the excitation slip frequency. The invention features a digital slip frequency generator which provides sine and cosine waveforms from a look-up table, which are combined with real and reactive power output of the power generator.
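
    Generating sine and cosine from one look-up table is the classic direct digital synthesis pattern; a small sketch with invented table size and frequencies:

        import math

        N = 1024                                   # table length (one full cycle)
        SINE = [math.sin(2.0 * math.pi * k / N) for k in range(N)]

        def slip_waves(slip_hz, sample_hz, n_samples, phase=0.0):
            # Advance a phase accumulator and read sine/cosine from one table;
            # cosine is the same table offset by a quarter cycle.
            step = slip_hz * N / sample_hz
            out = []
            for _ in range(n_samples):
                i = int(phase) % N
                out.append((SINE[i], SINE[(i + N // 4) % N]))
                phase += step
            return out

        pairs = slip_waves(slip_hz=2.0, sample_hz=1000.0, n_samples=5)
        print(pairs[0])                            # (0.0, 1.0) at zero phase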

  19. Design and optimization of color lookup tables on a simplex topology.

    PubMed

    Monga, Vishal; Bala, Raja; Mo, Xuan

    2012-04-01

    An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is an elegant iterative algorithm that jointly optimizes the placement of the nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against state-of-the-art lattice-based (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives, improving color transform accuracy even with a much smaller number of nodes.
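
    The simplex-interpolation step (though not the paper's joint node/output optimization) can be sketched with SciPy's Delaunay-based interpolator, which performs barycentric interpolation within each enclosing simplex; the node set and output values below are invented:

        import numpy as np
        from scipy.interpolate import LinearNDInterpolator

        # Invented sparse LUT: node locations in RGB and placeholder output
        # values (a stand-in for the L* channel of CIELAB).
        rng = np.random.default_rng(0)
        corners = np.array([[i, j, k] for i in (0.0, 1.0)
                            for j in (0.0, 1.0) for k in (0.0, 1.0)])
        nodes = np.vstack([corners, rng.uniform(0.0, 1.0, (42, 3))])
        lstar = 100.0 * nodes.mean(axis=1)

        # SciPy triangulates the nodes into simplexes (Delaunay) and then
        # interpolates barycentrically inside the simplex containing the query.
        lut = LinearNDInterpolator(nodes, lstar)
        print(lut([[0.4, 0.5, 0.6]]))   # -> ~50 for this toy output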

  20. Quantitative Analysis of First-Pass Contrast-Enhanced Myocardial Perfusion Multidetector CT Using a Patlak Plot Method and Extraction Fraction Correction During Adenosine Stress

    NASA Astrophysics Data System (ADS)

    Ichihara, Takashi; George, Richard T.; Silva, Caterina; Lima, Joao A. C.; Lardo, Albert C.

    2011-02-01

    The purpose of this study was to develop a quantitative method for myocardial blood flow (MBF) measurement that can be used to derive accurate myocardial perfusion measurements from dynamic multidetector computed tomography (MDCT) images by using a compartment model for calculating the first-order transfer constant (K1) with correction for the capillary transit extraction fraction (E). Six canine models of left anterior descending (LAD) artery stenosis were prepared and underwent first-pass contrast-enhanced MDCT perfusion imaging during adenosine infusion (0.14-0.21 mg/kg/min). K1, the first-order transfer constant from left ventricular (LV) blood to myocardium, was measured using the Patlak plot method applied to time-attenuation curve data of the LV blood pool and myocardium. The results were compared against microsphere MBF measurements, and the extraction fraction of contrast agent was calculated. K1 is related to the regional MBF as K1 = E·F, with E = 1 − exp(−PS/F), where PS is the permeability-surface area product and F is myocardial flow. Based on this relationship, a look-up table from K1 to MBF can be generated, and Patlak plot-derived K1 values can be converted to calculated MBF. The calculated MBF and microsphere MBF showed a strong linear association. The extraction fraction in dogs as a function of flow (F) was E = 1 − exp(−(0.2532·F + 0.7871)/F). Regional MBF can be measured accurately using the Patlak plot method based on a compartment model and a look-up table with extraction fraction correction from K1 to MBF.
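
    A compact sketch of the two steps: a Patlak straight-line fit for K1, followed by a K1-to-MBF lookup built from the extraction-fraction relation quoted above (the time-activity data are invented; only the curve coefficients come from the abstract):

        import numpy as np

        def patlak_k1(t, c_blood, c_tissue):
            # Patlak plot: C_t/C_b = K1 * (cumulative integral of C_b)/C_b + V0;
            # the slope of the straight-line fit is K1.
            dt = t[1] - t[0]
            x = np.cumsum(c_blood) * dt / c_blood
            y = c_tissue / c_blood
            slope, _ = np.polyfit(x, y, 1)
            return slope

        # K1-to-MBF lookup from K1 = E*F with E = 1 - exp(-(0.2532*F + 0.7871)/F).
        F = np.linspace(0.05, 6.0, 2000)
        K1 = F * (1.0 - np.exp(-(0.2532 * F + 0.7871) / F))
        def k1_to_mbf(k1):
            return np.interp(k1, K1, F)            # valid because K1 grows with F

        # Invented data: tissue built with K1 = 0.9 and a small blood volume term.
        t = np.linspace(0.1, 2.0, 40)
        c_b = np.exp(-t) + 0.1
        c_t = 0.9 * np.cumsum(c_b) * (t[1] - t[0]) + 0.05 * c_b
        print(k1_to_mbf(patlak_k1(t, c_b, c_t)))   # flow consistent with K1 ~ 0.9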

  1. PROXIMAL: a method for Prediction of Xenobiotic Metabolism.

    PubMed

    Yousofshahi, Mona; Manteiga, Sara; Wu, Charmian; Lee, Kyongbum; Hassoun, Soha

    2015-12-22

    Contamination of the environment with bioactive chemicals has emerged as a potential public health risk. These substances that may cause distress or disease in humans can be found in air, water and food supplies. An open question is whether these chemicals transform into potentially more active or toxic derivatives via xenobiotic metabolizing enzymes expressed in the body. We present a new prediction tool, which we call PROXIMAL (Prediction of Xenobiotic Metabolism) for identifying possible transformation products of xenobiotic chemicals in the liver. Using reaction data from DrugBank and KEGG, PROXIMAL builds look-up tables that catalog the sites and types of structural modifications performed by Phase I and Phase II enzymes. Given a compound of interest, PROXIMAL searches for substructures that match the sites cataloged in the look-up tables, applies the corresponding modifications to generate a panel of possible transformation products, and ranks the products based on the activity and abundance of the enzymes involved. PROXIMAL generates transformations that are specific for the chemical of interest by analyzing the chemical's substructures. We evaluate the accuracy of PROXIMAL's predictions through case studies on two environmental chemicals with suspected endocrine disrupting activity, bisphenol A (BPA) and 4-chlorobiphenyl (PCB3). Comparisons with published reports confirm 5 out of 7 and 17 out of 26 of the predicted derivatives for BPA and PCB3, respectively. We also compare biotransformation predictions generated by PROXIMAL with those generated by METEOR and Metaprint2D-react, two other prediction tools. PROXIMAL can predict transformations of chemicals that contain substructures recognizable by human liver enzymes. It also has the ability to rank the predicted metabolites based on the activity and abundance of enzymes involved in xenobiotic transformation.

  2. An ELISA method to compute endpoint titers to Epstein-Barr virus and cytomegalovirus: application to population-based studies.

    PubMed

    Stowe, Raymond P; Ruiz, R Jeanne; Fagundes, Christopher P; Stowe, Robin H; Chen, Min; Glaser, Ronald

    2014-06-01

    Indirect fluorescence analysis (IFA), the gold standard for determining herpesvirus antibody titers, is labor-intensive and poorly suited for large population-based studies. The enzyme-linked immunosorbent assay (ELISA) is used widely for measuring antiviral antibodies but also suffers drawbacks such as reduced specificity and the qualitative nature of the results due to limited interpretation of the optical density (OD) units. This paper describes a method to titer herpesvirus antibodies using microplates coated with virally-infected cells in which a standard curve, derived from IFA-scored samples, allowed OD units to be converted into titers. A LOOKUP function was created in order to report the data as traditional IFA-based (i.e., 2-fold) titers. The modified ELISA correlated significantly with IFA and was subsequently used to compute endpoint antibody titers to Epstein-Barr virus (EBV)-virus capsid antigen (VCA) and cytomegalovirus (CMV) in blood samples taken from 398 pregnant Hispanic women. Four women were EBV negative (1%), while 58 women were CMV negative (14.6%). EBV VCA antibody titers were significantly higher than CMV antibody titers (p<0.001). This method allows titering of herpesvirus antibodies by ELISA suitable for large population-based studies. In addition, the LOOKUP table enables conversion from OD-derived titers into 2-fold titers for comparison of results with other studies. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Temporal downscaling of crop coefficients for winter wheat in the North China Plain: A case study at the Gucheng ecological-meteorological experimental station

    USDA-ARS's Scientific Manuscript database

    The crop coefficient (Kc) method is widely used for operational estimation of actual evapotranspiration (ETa) and crop water requirements. The standard method for obtaining Kc is via a lookup table from FAO-56 (Food and Agriculture Organization of the United Nations Irrigation and Drainage Paper No....

  4. Non-equilibrium condensation of supercritical carbon dioxide in a converging-diverging nozzle

    NASA Astrophysics Data System (ADS)

    Ameli, Alireza; Afzalifar, Ali; Turunen-Saaresti, Teemu

    2017-03-01

    Carbon dioxide (CO2) is a promising alternative as a working fluid for future energy conversion and refrigeration cycles. CO2 has low global warming potential compared to refrigerants, and the supercritical CO2 Brayton cycle ought to have better efficiency than today's counterparts. However, there are several issues concerning the behaviour of supercritical CO2 in the aforementioned applications. One of these issues arises due to non-equilibrium condensation of CO2 for some operating conditions in supercritical compressors. This paper investigates the non-equilibrium condensation of carbon dioxide in the course of an expansion from supercritical stagnation conditions in a converging-diverging nozzle. An external look-up table was implemented, using an in-house FORTRAN code, to calculate the fluid properties in the supercritical, metastable and saturated regions. This look-up table is coupled with the flow solver, and the non-equilibrium condensation model is introduced to the solver using user-defined expressions. Numerical results are compared with the experimental measurements. In agreement with the experiment, the distribution of Mach number in the nozzle shows that the flow becomes supersonic in the upstream region near the throat, where the speed of sound is at a minimum; re-establishment of equilibrium occurs at the outlet boundary.

  5. Characterization of Viscoelastic Materials Using Group Shear Wave Speeds.

    PubMed

    Rouze, Ned C; Deng, Yufeng; Trutna, Courtney A; Palmeri, Mark L; Nightingale, Kathryn R

    2018-05-01

    Recent investigations of the viscoelastic properties of materials have been performed by observing shear wave propagation following localized, impulsive excitations and Fourier decomposing the shear wave signal to parameterize the frequency-dependent phase velocity using a material model. This paper describes a new method to characterize viscoelastic materials using group shear wave speeds determined from the shear wave displacement, velocity, and acceleration signals. Materials are modeled using a two-parameter linear attenuation model with phase velocity and dispersion slope at a reference frequency of 200 Hz. Analytically calculated lookup tables are used to determine the two material parameters from pairs of measured group shear wave speeds. Green's function calculations are used to validate the analytic model. Results are reported for measurements in viscoelastic and approximately elastic phantoms and demonstrate good agreement with phase velocities measured using Fourier analysis of the measured shear wave signals. The calculated lookup tables are relatively insensitive to the excitation configuration. While many commercial shear wave elasticity imaging systems report group shear wave speeds as the measures of material stiffness, this paper demonstrates that differences among these group speeds are first-order measures of the viscous properties of materials.

  6. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion

    PubMed Central

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for automatic approaches to the recognition and normalization of disease mentions in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and the dictionary lookup method are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for the disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009

  7. Method and apparatus for in-situ detection and isolation of aircraft engine faults

    NASA Technical Reports Server (NTRS)

    Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)

    2007-01-01

    A method for performing a fault estimation based on residuals of detected signals includes determining an operating regime based on a plurality of parameters, extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals, calculating a magnitude of a measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value, extracting an average, or mean direction and a fault level mapping for each of a plurality of fault types, based on the operating regime, calculating a projection of the measurement vector onto the average direction of each of the plurality of fault types, determining a fault type based on which projection is maximum, and mapping the projection to a continuous-valued fault level using a lookup table.
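
    Stripped to its essentials, the claimed sequence of steps reads as below; the fault directions, noise levels, threshold, and level mappings are all invented placeholders:

        import numpy as np

        def isolate_fault(residuals, noise_std, fault_dirs, level_maps, threshold):
            # Scale residuals by the regime's noise standard deviations.
            r = residuals / noise_std
            # Detect: compare the measurement-vector magnitude to the threshold.
            if np.linalg.norm(r) <= threshold:
                return None, 0.0
            # Isolate: project onto each fault type's mean direction; take the max.
            projections = np.array([r @ d for d in fault_dirs])
            k = int(np.argmax(projections))
            # Estimate: map the winning projection to a fault level via lookup.
            xs, ys = level_maps[k]
            return k, float(np.interp(projections[k], xs, ys))

        noise_std = np.array([0.5, 0.8, 0.3])
        fault_dirs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.7, 0.7])]
        level_maps = [(np.array([0.0, 10.0]), np.array([0.0, 1.0]))] * 2
        print(isolate_fault(np.array([2.0, 0.1, 0.0]),
                            noise_std, fault_dirs, level_maps, 3.0))  # (0, 0.4)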

  8. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.

  9. Zone plate method for electronic holographic display using resolution redistribution technique.

    PubMed

    Takaki, Yasuhiro; Nakamura, Junya

    2011-07-18

    The resolution redistribution (RR) technique can increase the horizontal viewing-zone angle and screen size of electronic holographic display. The present study developed a zone plate method that would reduce hologram calculation time for the RR technique. This method enables calculation of an image displayed on a spatial light modulator by performing additions of the zone plates, while the previous calculation method required performing the Fourier transform twice. The derivation and modeling of the zone plate are shown. In addition, the look-up table approach was introduced for further reduction in computation time. Experimental verification using a holographic display module based on the RR technique is presented.

  10. Flexible Method for Inter-object Communication in C++

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Gould, Jack J.

    1994-01-01

    A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.

  11. Efficient use of bit planes in the generation of motion stimuli

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1988-01-01

    The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.

  12. Hardware Mapping of a Video-Based Method for Real-Time Evaluation of Angular Histograms onto a Modular Coprocessor Architecture

    NASA Astrophysics Data System (ADS)

    Flatt, H.; Tarnowsky, A.; Blume, H.; Pirsch, P.

    2010-10-01

    This paper presents the mapping of a video-based method for the real-time evaluation of angular histograms onto a modular coprocessor architecture. The architecture comprises several dedicated processing elements for parallel processing of computation-intensive image processing tasks and is coupled with a RISC processor. A configurable architecture extension, a processing element for evaluating angular histograms of objects, provides real-time classification in conjunction with the RISC processor. Depending on the configuration, the architecture extension requires between 3,300 and 12,000 look-up tables on a Xilinx Virtex-5 FPGA. Running at a clock frequency of 100 MHz and independently of the image resolution, up to 100 objects of size 256×256 pixels per frame can be analyzed in a 25 Hz video stream.

  13. Validating the accuracy of SO2 gas retrievals in the thermal infrared (8-14 μm)

    NASA Astrophysics Data System (ADS)

    Gabrieli, Andrea; Porter, John N.; Wright, Robert; Lucey, Paul G.

    2017-11-01

    Quantifying sulfur dioxide (SO2) in volcanic plumes is important for eruption predictions and public health. Ground-based remote sensing of spectral radiance of plumes contains information on the path-concentration of SO2. However, reliable inversion algorithms are needed to convert plume spectral radiance measurements into SO2 path-concentrations. Various techniques have been used for this purpose. Recent approaches have employed thermal infrared (TIR) imaging between 8 μm and 14 μm to provide two-dimensional mapping of plume SO2 path-concentration, using what might be described as "dual-view" techniques. In this case, the radiance (or its surrogate brightness temperature) is computed for portions of the image that correspond to the plume and compared with spectral radiance obtained for adjacent regions of the image that do not (i.e., "clear sky"). In this way, the contribution that the plume makes to the measured radiance can be isolated from the background atmospheric contribution, this residual signal being converted to an estimate of gas path-concentration via radiative transfer modeling. These dual-view approaches suffer from several issues, mainly the assumption of clear sky background conditions. At this time, the various inversion algorithms remain poorly validated. This paper makes two contributions. Firstly, it validates the aforementioned dual-view approaches, using hyperspectral TIR imaging data. Secondly, it introduces a new method to derive SO2 path-concentrations, which allows for single point SO2 path-concentration retrievals, suitable for hyperspectral imaging with clear or cloudy background conditions. The SO2 amenable lookup table algorithm (SO2-ALTA) uses the MODTRAN5 radiative transfer model to compute radiance for a variety (millions) of plume and atmospheric conditions. Rather than searching this lookup table to find the best fit for each measured spectrum, the lookup table was used to train a partial least square regression (PLSR) model. The coefficients of this model are used to invert measured radiance spectra to path-concentration on a pixel-by-pixel basis. In order to validate the algorithms, TIR hyperspectral measurements were carried out by measuring sky radiance when looking through gas cells filled with known amounts of SO2. SO2-ALTA was also tested on retrieving SO2 path-concentrations from the Kīlauea volcano, Hawai'i. For cloud-free conditions, all three techniques worked well. In cases where background clouds were present, then only SO2-ALTA was found to provide good results, but only under low atmospheric water vapor column amounts.
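
    The train-a-regression-on-the-lookup-table idea can be sketched with scikit-learn; the simulated spectra below are a toy linear stand-in for the MODTRAN5 runs:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n_train, n_bands = 5000, 60

        # Toy stand-in for the lookup table: each row is a simulated radiance
        # spectrum; the target is the SO2 path-concentration used to create it.
        path_conc = rng.uniform(0.0, 2000.0, n_train)      # e.g., ppm·m
        absorption = np.exp(-((np.arange(n_bands) - 30) / 6.0) ** 2)
        spectra = (np.outer(path_conc, -1e-4 * absorption)
                   + rng.normal(0.0, 0.01, (n_train, n_bands)))

        pls = PLSRegression(n_components=5).fit(spectra, path_conc)

        # Inversion is then essentially a dot product with the regression
        # coefficients, cheap enough to run pixel-by-pixel on imagery.
        test = np.outer([500.0], -1e-4 * absorption)
        print(pls.predict(test))                           # ~500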

  14. Emulation of recharge and evapotranspiration processes in shallow groundwater systems

    NASA Astrophysics Data System (ADS)

    Doble, Rebecca C.; Pickett, Trevor; Crosbie, Russell S.; Morgan, Leanne K.; Turnadge, Chris; Davies, Phil J.

    2017-12-01

    In shallow groundwater systems, recharge and evapotranspiration are highly sensitive to changes in the depth to water table. To effectively model these fluxes, complex functions that include soil and vegetation properties are often required. Model emulation (surrogate modelling or meta-modelling) can provide a means of incorporating detailed conceptualisation of recharge and evapotranspiration processes, while maintaining the numerical tractability and computational performance required for regional scale groundwater models and uncertainty analysis. A method for emulating recharge and evapotranspiration processes in groundwater flow models was developed, and applied to the South East region of South Australia and western Victoria, which is characterised by shallow groundwater, wetlands and coastal lakes. The soil-vegetation-atmosphere transfer (SVAT) model WAVES was used to generate relationships between net recharge (diffuse recharge minus evapotranspiration from groundwater) and depth to water table for different combinations of climate, soil and land cover types. These relationships, which mimicked previously described soil, vegetation and groundwater behaviour, were combined into a net recharge lookup table. The segmented evapotranspiration package in MODFLOW was adapted to select values of net recharge from the lookup table depending on groundwater depth, and the climate, soil and land use characteristics of each cell. The model was found to be numerically robust in steady state testing, had no major increase in run time, and would be more efficient than tightly-coupled modelling approaches. It made reasonable predictions of net recharge and groundwater head compared with remotely sensed estimates of net recharge and a standard MODFLOW comparison model. In particular, the method was better able to predict net recharge and groundwater head in areas with steep hydraulic gradients.
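
    A minimal sketch of the emulation step, assuming one net-recharge-versus-depth curve per (climate, soil, land-cover) class; the class names, depths, and fluxes are invented, not WAVES output:

        import numpy as np

        # One curve per class: water-table depth (m) versus net recharge
        # (mm/yr); negative values mean net evapotranspiration loss.
        depths = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
        net_recharge_table = {
            "pasture_sand":  np.array([-600.0, -250.0, -50.0, 20.0, 45.0, 50.0]),
            "eucalypt_clay": np.array([-900.0, -500.0, -200.0, -40.0, 10.0, 25.0]),
        }

        def net_recharge(cell_class, water_table_depth_m):
            # Emulates the MODFLOW-side lookup: interpolate the class's curve
            # at the cell's current depth to water table.
            return np.interp(water_table_depth_m, depths,
                             net_recharge_table[cell_class])

        print(net_recharge("pasture_sand", 1.4))   # -22 mm/yr for this toy curve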

  15. Age Differences in Memory Retrieval Shift: Governed by Feeling-of-Knowing?

    PubMed Central

    Hertzog, Christopher; Touron, Dayna R.

    2010-01-01

    The noun-pair lookup (NP) task was used to evaluate strategic shift from visual scanning to retrieval. We investigated whether age differences in feeling-of-knowing (FOK) account for older adults' delayed retrieval shift. Participants were randomly assigned to one of three conditions: (1) standard NP learning, (2) fast binary FOK judgments, or (3) Choice, where participants had to choose in advance whether to see the look-up table or respond from memory. We found small age differences in FOK magnitudes, but major age differences in memory retrieval choices that mirrored retrieval use in the standard NP task. Older adults showed lower resolution in their confidence judgments (CJs) for recognition memory tests on the NP items, and this difference appeared to influence rates of retrieval shift, given that retrieval use was correlated with CJ magnitudes in both age groups. Older adults had particular difficulty with accuracy and confidence for rearranged pairs, relative to intact pairs. Older adults' slowed retrieval shift appears to be due to (a) impaired associative learning early in practice, not just a lower FOK; but also (b) retrieval reluctance later in practice after the degree of associative learning would afford memory-based responding. PMID:21401263

  16. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion.

    PubMed

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for automatic approaches to the recognition and normalization of disease mentions in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and the dictionary lookup method are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for the disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. © The Author(s) 2016. Published by Oxford University Press.

  17. Mixed Linear/Square-Root Encoded Single-Slope Ramp Provides Low-Noise ADC with High Linearity for Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Wrigley, Chris J.; Hancock, Bruce R.; Newton, Kenneth W.; Cunningham, Thomas J.

    2013-01-01

    Single-slope analog-to-digital converters (ADCs) are particularly useful for on-chip digitization in focal plane arrays (FPAs) because of their inherent monotonicity, relative simplicity, and efficiency for column-parallel applications, but they are comparatively slow. Square-root encoding can allow the number of code values to be reduced without loss of signal-to-noise ratio (SNR) by keeping the quantization noise just below the signal shot noise. This encoding can be implemented directly by using a quadratic ramp. The reduction in the number of code values can substantially increase the quantization speed. However, in an FPA, the fixed pattern noise (FPN) limits the use of small quantization steps at low signal levels. If the zero-point is adjusted so that the lowest column is on-scale, the other columns, including those at the center of the distribution, will be pushed up the ramp where the quantization noise is higher. Additionally, the finite frequency response of the ramp buffer amplifier and the comparator distort the shape of the ramp, so that the effective ramp value at the time the comparator trips differs from the intended value, resulting in errors. Allowing increased settling time decreases the quantization speed, while increasing the bandwidth increases the noise. The FPN problem is solved by breaking the ramp into two portions, with some fraction of the available code values allocated to a linear ramp and the remainder to a quadratic ramp. To avoid large transients, both the value and the slope of the linear and quadratic portions should be equal where they join. The span of the linear portion must cover the minimum offset, but not necessarily the maximum, since the fraction of the pixels above the upper limit will still be correctly quantized, albeit with increased quantization noise. The required linear span, maximum signal, and ratio of quantization noise to shot noise at high signal, along with the continuity requirement, determine the number of code values that must be allocated to each portion. The distortion problem is solved by using a lookup table to convert captured code values back to signal levels. The values in this table will be similar to the intended ramp value, but with a correction for the finite bandwidth effects. Continuous-time comparators are used, and their bandwidth is set below the step rate, which smooths the ramp and reduces the noise. No settling time is needed, as would be the case for clocked comparators, but the low bandwidth enhances the distortion of the non-linear portion. This is corrected by use of a return lookup table, which differs from the one used to generate the ramp. The return lookup table is obtained by calibrating against a stepped precision DC reference. This results in a residual non-linearity well below the quantization noise. This method can also compensate for differential non-linearity (DNL) in the DAC used to generate the ramp. The use of a ramp with a combination of linear and quadratic portions for a single-slope ADC is novel. The number of steps is minimized by keeping the step size just below the photon shot noise. This in turn maximizes the speed of the conversion. High resolution is maintained by keeping small quantization steps at low signals, and noise is minimized by allowing the lowest analog bandwidth, all without increasing the quantization noise. A calibrated return lookup table allows the system to maintain excellent linearity.
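
    The value-and-slope continuity condition has a simple closed form: if the linear portion uses step s for codes k ≤ K, the quadratic portion v(k) = s·(k + K)²/(4K) matches both the value s·K and the slope s at the join. A sketch of the ramp and the code-to-signal return lookup (all numbers invented):

        import numpy as np

        def ramp_value(k, s, K):
            # Linear for k <= K, quadratic beyond; value and slope match at k = K.
            k = np.asarray(k, dtype=float)
            return np.where(k <= K, s * k, s * (k + K) ** 2 / (4.0 * K))

        s, K, n_codes = 1.0, 256, 1024
        codes = np.arange(n_codes)
        ramp = ramp_value(codes, s, K)

        # Past the join the step grows linearly, tracking sqrt(signal) shot
        # noise: delta_v ~ s*(k + K)/(2K) while sigma_shot ~ sqrt(v).
        assert np.isclose(ramp[K], s * K)
        assert np.isclose(ramp[K + 1] - ramp[K], s, atol=s / K)

        # Return lookup table: code -> signal. In hardware this would be the
        # calibrated version correcting finite-bandwidth ramp distortion.
        return_lut = ramp.copy()
        print(return_lut[[100, 256, 1023]])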

  18. Optimized Routing of Intelligent, Mobile Sensors for Dynamic, Data-Driven Sampling

    DTIC Science & Technology

    2016-09-27

    nonstationary random process that requires nonuniform sampling. The approach incorporates complementary representations of an unknown process: the first...lookup table as follows. A uniform grid is created in the r-domain and mapped to the R-domain, which produces a nonuniform grid of locations in the R...vehicle coverage algorithm that invokes the coordinate transformation from the previous section to generate nonuniform sampling trajectories [54]. We

  19. Understanding Consistency Maintenance in Service Discovery Architectures in Response to Message Loss

    DTIC Science & Technology

    2002-07-01

    manager (SM), and (3) service cache manager (SCM). The SCM is an optional element not supported by all discovery protocols. These components participate...the SCM operates as an intermediary, matching advertised SDs of SMs to requirements provided by SUs. Table 1 shows how these general concepts map to protocol-specific terms such as Service Description (SD), Service Item, Directory Service Agent (optional), Lookup Service, and Service Cache Manager (SCM)

  20. Modeling radiative transfer with the doubling and adding approach in a climate GCM setting

    NASA Astrophysics Data System (ADS)

    Lacis, A. A.

    2017-12-01

    The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost. The accurate methods of calculating multiple scattering that are available are far too computationally expensive for climate GCM applications. Two-stream-type radiative transfer approximations may be fast enough, but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method used in the GISS climate GCM, an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.

  1. Tool Support for Software Lookup Table Optimization

    DOE PAGES

    Wilcox, Chris; Strout, Michelle Mills; Bieman, James M.

    2011-01-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.
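
    The performance/accuracy tradeoff that Mesa automates can be seen in a few lines: tabulate the costly function once, then answer calls by interpolation, with the table size controlling the error bound (a hand-rolled sketch, not Mesa's generated code):

        import numpy as np

        def make_lut(fn, lo, hi, n):
            xs = np.linspace(lo, hi, n)
            return xs, fn(xs)

        def lut_eval(x, xs, ys):
            # Fuzzy reuse: approximate fn(x) by linear interpolation in the table.
            return np.interp(x, xs, ys)

        xs, ys = make_lut(np.exp, 0.0, 4.0, 257)       # 257-entry table
        sample = np.random.default_rng(2).uniform(0.0, 4.0, 100000)
        err = np.max(np.abs(lut_eval(sample, xs, ys) - np.exp(sample)))
        print(f"max abs error with 257 entries: {err:.2e}")
        # The error is quadratic in the spacing, so it shrinks about 4x
        # each time the table size is doubled.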

  2. Path integration of head direction: updating a packet of neural activity at the correct speed using axonal conduction delays.

    PubMed

    Walters, Daniel; Stringer, Simon; Rolls, Edmund

    2013-01-01

    The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a "look-up" table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity.

  3. The OpenCalphad thermodynamic software interface

    PubMed Central

    Sundman, Bo; Kattner, Ursula R; Sigli, Christophe; Stratmann, Matthias; Le Tellier, Romain; Palumbo, Mauro; Fries, Suzana G

    2017-01-01

    Thermodynamic data are needed for all kinds of simulations of materials processes. Thermodynamics determines the set of stable phases and also provides chemical potentials, compositions and driving forces for nucleation of new phases and phase transformations. Software to simulate materials properties needs accurate and consistent thermodynamic data to predict metastable states that occur during phase transformations. Due to long calculation times, thermodynamic data are frequently pre-calculated into “lookup tables” to speed up calculations. This creates additional uncertainties, as data must be interpolated or extrapolated and conditions may differ from those assumed when creating the lookup table. Speed and accuracy require that thermodynamic software be fully parallelized, and the OpenCalphad (OC) software is the first thermodynamic software supporting this feature. This paper gives a brief introduction to computational thermodynamics, introduces the basic features of the OC software and presents four different application examples to demonstrate its versatility. PMID:28260838

  4. Use of NOAA-N satellites for land/water discrimination and flood monitoring

    NASA Technical Reports Server (NTRS)

    Tappan, G.; Horvath, N. C.; Doraiswamy, P. C.; Engman, T.; Goss, D. W. (Principal Investigator)

    1983-01-01

    A tool for monitoring the extent of major floods was developed using data collected by the NOAA-6 advanced very high resolution radiometer (AVHRR). A basic understanding of the spectral returns in AVHRR channels 1 and 2 for water, soil, and vegetation was reached using a large number of NOAA-6 scenes from different seasons and geographic locations. A look-up table classifier was developed based on analysis of the reflective channel relationships for each surface feature. The classifier automatically separated land from water and produced classification maps which were registered for a number of acquisitions, including coverage of a major flood on the Parana River of Argentina.
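
    The abstract does not give the classifier's actual decision boundaries; the sketch below assumes a hypothetical rule (water is dark in both reflective channels and darker in channel 2 than in channel 1) purely to illustrate how a two-channel look-up table classifier is built once and then applied per pixel.

```python
import numpy as np

BINS = 64  # quantization levels per AVHRR channel (illustrative)

# Build the 2-D look-up table once from a per-cell decision rule.
# The rule here is hypothetical: water is dark in both reflective
# channels and darker in channel 2 (NIR) than in channel 1 (visible).
c1 = np.arange(BINS).reshape(-1, 1) / (BINS - 1)  # channel-1 reflectance
c2 = np.arange(BINS).reshape(1, -1) / (BINS - 1)  # channel-2 reflectance
lut = ((c2 < 0.1) & (c2 < c1)).astype(np.uint8)   # 1 = water, 0 = land

def classify(ch1, ch2):
    """Map two quantized channels through the LUT in one vectorized step."""
    i = np.clip((ch1 * (BINS - 1)).astype(int), 0, BINS - 1)
    j = np.clip((ch2 * (BINS - 1)).astype(int), 0, BINS - 1)
    return lut[i, j]

scene = classify(np.random.rand(512, 512), np.random.rand(512, 512) * 0.3)
print("water fraction:", scene.mean())
```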

  5. X-Windows Widget for Image Display

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.

    2011-01-01

    XvicImage is a high-performance X Windows (Motif-compliant) user interface widget for displaying images. It handles all aspects of low-level image display. The fully Motif-compliant image display widget handles the following tasks: (1) Image display, including dithering as needed (2) Zoom (3) Pan (4) Stretch (contrast enhancement, via lookup table) (5) Display of single-band or color data (6) Display of non-byte data (ints, floats) (7) Pseudocolor display (8) Full overlay support (drawing graphics on image) (9) Mouse-based panning (10) Cursor handling, shaping, and planting (disconnecting cursor from mouse) (11) Support for all user interaction events (passed to application) (12) Background loading and display of images (doesn't freeze the GUI) (13) Tiling of images.
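
    As an illustration of task (4), a display stretch is conventionally realized as a 256-entry byte-to-byte table applied to every pixel; the stretch limits below are arbitrary example values, not XvicImage defaults.

```python
import numpy as np

def stretch_lut(low, high):
    """256-entry LUT mapping raw byte values to display values:
    values <= low go to 0, values >= high go to 255, linear in between."""
    x = np.arange(256, dtype=np.float64)
    y = (x - low) / (high - low)
    return (np.clip(y, 0.0, 1.0) * 255).astype(np.uint8)

lut = stretch_lut(low=30, high=200)
image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
displayed = lut[image]   # one table lookup per pixel, no arithmetic
```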

  6. A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares

    PubMed Central

    Yang, Chuanlei; Wang, Yinyan; Wang, Hechun

    2018-01-01

    To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), predicting the performance of the VGC is more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build the polynomial relationship between the coefficients and the relative speed and the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients and achieve acceptable fit accuracy simultaneously. The prediction model was validated with sample data, and to demonstrate its superiority in compressor performance prediction, its results were compared with those of the look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable, and that this model is much more suitable than the look-up table and BPNN methods under the same conditions for VGC performance prediction. Moreover, the newly developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of the thermodynamic model for turbocharged diesel engines in the future. PMID:29410849

  7. A source-controlled data center network model.

    PubMed

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks by applying SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices restrict scalability and incur high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction on the processing ability of a single controller and reduces the computational complexity. 2) The vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We designed the VS on the NetFPGA platform; statistical results show that the hardware resource consumption of a VS is about 27% of that of an OFS.
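
    The paper's actual vector-address encoding is not reproduced here; the sketch below treats a VA simply as an ordered list of output ports, which is enough to show why a vector switch needs no TCAM flow table: each hop forwards on the head of the vector and passes the tail along.

```python
from collections import deque

def vector_switch(switch_id, packet):
    """Forward purely from the source-routing label: pop the next
    output port from the vector address; no table lookup is needed."""
    port = packet["va"].popleft()
    print(f"switch {switch_id}: out port {port}")
    return port

# The controller computes the full path once and stamps it on the packet.
packet = {"va": deque([3, 1, 4]), "payload": b"..."}
for hop in ["edge-A", "core-7", "edge-B"]:
    vector_switch(hop, packet)
```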

  8. A source-controlled data center network model

    PubMed Central

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks by applying SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices restrict scalability and incur high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction on the processing ability of a single controller and reduces the computational complexity. 2) The vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches while effectively solving the scalability problem. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We designed the VS on the NetFPGA platform; statistical results show that the hardware resource consumption of a VS is about 27% of that of an OFS. PMID:28328925

  9. A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares.

    PubMed

    Li, Xu; Yang, Chuanlei; Wang, Yinyan; Wang, Hechun

    2018-01-01

    To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), predicting the performance of the VGC is more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build the polynomial relationship between the coefficients and the relative speed and the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients and achieve acceptable fit accuracy simultaneously. The prediction model was validated with sample data, and to demonstrate its superiority in compressor performance prediction, its results were compared with those of the look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable, and that this model is much more suitable than the look-up table and BPNN methods under the same conditions for VGC performance prediction. Moreover, the newly developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of the thermodynamic model for turbocharged diesel engines in the future.

  10. A complexity-scalable software-based MPEG-2 video encoder.

    PubMed

    Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin

    2004-05-01

    With the development of general-purpose processors (GPPs) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, whose low cost and easy upgradability attract developers' interest in transferring video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods such as lookup tables are adopted to reduce the computational complexity. Simulation results showed that these ideas not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.

  11. A low complexity, low spur digital IF conversion circuit for high-fidelity GNSS signal playback

    NASA Astrophysics Data System (ADS)

    Su, Fei; Ying, Rendong

    2016-01-01

    A low-complexity, high-efficiency and low-spur digital intermediate frequency (IF) conversion circuit is discussed in this paper. This circuit is a key element in a high-fidelity GNSS signal playback instrument. We analyze the spur performance of a finite state machine (FSM) based numerically controlled oscillator (NCO); by optimizing the control algorithm, an FSM-based NCO with 3 quantization stages achieves 65 dB SFDR in the range of the seventh harmonic. Compared with a traditional lookup-table-based NCO design with the same spurious-free dynamic range (SFDR) performance, the logic resources required to implement the NCO are reduced to 1/3. The proposed design method can be extended to IF conversion systems requiring good SFDR in the range of higher harmonic components by increasing the number of quantization stages.
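
    For contrast with the paper's FSM approach, the following is a minimal conventional lookup-table NCO: a phase accumulator whose top bits index a precomputed sine table. The accumulator width and table length are illustrative, not the paper's parameters.

```python
import math

PHASE_BITS = 24   # phase accumulator width (illustrative)
TABLE_BITS = 10   # 1024-entry sine table: the resource FSM designs shrink
SINE = [int(2047 * math.sin(2 * math.pi * i / (1 << TABLE_BITS)))
        for i in range(1 << TABLE_BITS)]

def nco(freq_word, n):
    """Yield n samples of a sine whose frequency is
    freq_word / 2**PHASE_BITS cycles per sample."""
    phase = 0
    for _ in range(n):
        yield SINE[phase >> (PHASE_BITS - TABLE_BITS)]
        phase = (phase + freq_word) & ((1 << PHASE_BITS) - 1)

# Example: an IF of 0.05 times the sample rate.
samples = list(nco(freq_word=int(0.05 * (1 << PHASE_BITS)), n=8))
print(samples)
```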

  12. The VLSI design of a Reed-Solomon encoder using Berlekamp's bit-serial multiplier algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.

    1982-01-01

    Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A dual-basis (255, 223) code over a Galois field is used. The conventional RS encoder for long codes often requires look-up tables to perform the multiplication of two field elements. Berlekamp's algorithm requires only shifting and exclusive-OR operations.
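
    The table-based multiplication that Berlekamp's algorithm avoids typically looks like the following sketch: GF(2^8) multiplication via log and antilog tables. The primitive polynomial 0x11d is an illustrative choice, not necessarily the dual-basis representation used in the paper.

```python
# Conventional GF(2^8) multiply via log/antilog look-up tables.
PRIM = 0x11d                       # illustrative primitive polynomial
EXP = [0] * 512                    # antilog table, doubled to skip a mod 255
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiply two field elements with two table reads and an add."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

assert gf_mul(2, 4) == 8           # x * x^2 = x^3, no reduction needed
```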

  13. Corps of Engineers Operations and Maintenance Budget Decision Support System (COMB-DSS): System Concept, Design and Prototype Evaluation. Volume 2. Appendixes B Through G.

    DTIC Science & Technology

    1994-04-01

    Rather, it should provide, whenever possible, information on the location and/or quantity of work. Examples of good descriptions are as follows: Major... [Garbled table fragment; the recoverable content describes a reason-code lookup table giving the text for each integer reascode.]

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves the image uniformity and reduces the noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.

  15. A frequency-duty cycle equation for the ACGIH hand activity level.

    PubMed

    Radwin, Robert G; Azari, David P; Lindstrom, Mary J; Ulin, Sheryl S; Armstrong, Thomas J; Rempel, David

    2015-01-01

    A new equation for predicting the hand activity level (HAL) used in the American Conference of Governmental Industrial Hygienists threshold limit value® (TLV®) was based on exertion frequency (F) and percentage duty cycle (D). The TLV® includes a table for estimating HAL from F and D originating from data in Latko et al. (Latko WA, Armstrong TJ, Foulke JA, Herrin GD, Rabourn RA, Ulin SS, Development and evaluation of an observational method for assessing repetition in hand tasks. American Industrial Hygiene Association Journal, 58(4):278-285, 1997) and post hoc adjustments that include extrapolations outside of the data range. Multimedia video task analysis determined D for two additional jobs from Latko's study not in the original data set, and a new nonlinear regression equation was developed to better fit the data and create a more accurate table. The equation, HAL = 6.56 ln D [F^1.31 / (1 + 3.18 F^1.31)], generally matches the TLV® HAL lookup table and is a substantial improvement over the linear model, particularly for jobs with F > 1.25 Hz and D > 60%. The equation more closely fits the data and applies the TLV® using a continuous function.
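
    With the decimal points restored (the extraction replaced them with colons), the equation can be evaluated directly; the sketch below assumes, per the abstract, that F is exertions per second and D is the percentage duty cycle.

```python
import math

def hal(f_hz, duty_pct):
    """HAL = 6.56 ln D [F^1.31 / (1 + 3.18 F^1.31)],
    with F in exertions/s and D in percent (assumed reading)."""
    f131 = f_hz ** 1.31
    return 6.56 * math.log(duty_pct) * f131 / (1 + 3.18 * f131)

# A table-style cell: around F = 1 Hz, D = 50% the equation gives ~6.
print(round(hal(1.0, 50.0), 1))
```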

  16. Neighborhood comparison operator

    NASA Technical Reports Server (NTRS)

    Gennery, Donald B. (Inventor)

    1987-01-01

    Digital values in a moving window are compared by an operator having nine comparators (18) connected to line buffers (16) for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output, plus 2 bits indicating odd-even pixel/line information about the central pixel, addresses a lookup table (20) to provide 14 bits of information, including 2 bits which control a selector (22) to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logic OR of all neighboring pixels.
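
    The same idea carries over directly to software: concatenating the comparator bits into a LUT address gives a one-lookup-per-pixel implementation of any 3 x 3 binary rule. The erosion rule below is merely one example of how to fill the table, not the patent's table contents.

```python
import numpy as np

# 512-entry table indexed by the 9 comparator bits of a 3x3 window
# (bit i = 1 when neighbor i exceeds the threshold).  This fill rule
# implements binary erosion: output 1 only if all 9 bits are set.
LUT = np.array([1 if code == 0x1FF else 0 for code in range(512)], np.uint8)

def apply_3x3_lut(img, thresh, lut):
    """Pack the 9 threshold bits of every window into a LUT address."""
    b = (img > thresh).astype(np.uint32)
    h, w = img.shape
    code = np.zeros((h - 2, w - 2), np.uint32)
    bit = 0
    for dy in range(3):
        for dx in range(3):
            code |= b[dy:dy + h - 2, dx:dx + w - 2] << bit
            bit += 1
    return lut[code]

out = apply_3x3_lut(np.random.randint(0, 256, (64, 64)), 128, LUT)
```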

  17. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
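
    The paper's two LUT-construction methodologies (linear color models and fruit histograms) are not reproduced here; the sketch below shows only the runtime side on which both rely: quantize RGB, index a precomputed 3-D table, and obtain a binary fruit/background answer per pixel. The threshold rule used to fill the table is a stand-in.

```python
import numpy as np

Q = 32  # quantization levels per RGB axis -> 32^3 = 32768-entry LUT

# Fill the 3-D LUT offline.  Stand-in rule: "red dominates" marks fruit;
# the paper instead derives this from color models or fruit histograms.
r, g, b = np.meshgrid(*([np.arange(Q)] * 3), indexing="ij")
LUT3D = ((r > 1.3 * g) & (r > 1.3 * b) & (r > Q // 3)).astype(np.uint8)

def detect(rgb_image):
    """Per-pixel classification with a single 3-D table lookup."""
    q = rgb_image.astype(np.uint16) * Q // 256
    return LUT3D[q[..., 0], q[..., 1], q[..., 2]]

frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
mask = detect(frame)   # 1 where a red-peach-like pixel was found
```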

  18. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040

  19. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA

    PubMed Central

    Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme. PMID:27695114

  20. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA.

    PubMed

    Kim, HyunJin; Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme.

  1. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on-line. The optimal avoidance trajectory is implemented as a receding-horizon model predictive control law. Therefore, at each time step, the optimal avoidance trajectory is found and the first time step of its acceleration is applied. At the next time step of the control computer, the problem is re-solved and the new first time step is again applied. This continual updating allows the RCA algorithm to adapt to a colliding spacecraft that is making erratic course changes.

  2. PAM-4 delivery based on pre-distortion and CMMA equalization in a ROF system at 40 GHz

    NASA Astrophysics Data System (ADS)

    Zhou, Wen; Zhang, Jiao; Han, Xifeng; Kong, Miao; Gou, Pengqi

    2018-06-01

    In this paper, we propose PAM-4 delivery in a radio-over-fiber (ROF) system at 40 GHz. The PAM-4 transmission data are generated via look-up table (LUT) pre-distortion, then delivered over 25 km of single-mode fiber and a 0.5 m wireless link. At the receiver side, the received signal is processed with cascaded multi-modulus algorithm (CMMA) equalization to improve the decision precision. Our measured results show that 10 Gbaud PAM-4 transmission in an ROF system at 40 GHz can be achieved with a BER of 1.6 × 10^-3. To our knowledge, this is the first time LUT pre-distortion and CMMA equalization have been introduced in an ROF system to improve signal performance.
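
    LUT pre-distortion of this kind is typically pattern-based: during a training pass the average error between received and ideal symbols is recorded for each short transmitted symbol pattern, and the transmitter then pre-subtracts that error. The sketch below is a sketch only; the memory length, the toy channel, and all parameter values are assumptions, not the paper's setup.

```python
import numpy as np

LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])   # PAM-4 amplitudes
MEM = 3                                      # pattern length (assumed)

def patterns(sym_idx):
    """Sliding windows of MEM symbol indices, encoded as single keys."""
    keys = np.zeros(len(sym_idx) - MEM + 1, dtype=int)
    for k in range(MEM):
        keys = keys * 4 + sym_idx[k:k + len(keys)]
    return keys

def build_lut(tx_idx, rx):
    """Average error of the window's center symbol for each pattern."""
    keys = patterns(tx_idx)
    center = LEVELS[tx_idx[MEM // 2: MEM // 2 + len(keys)]]
    err = rx[MEM // 2: MEM // 2 + len(keys)] - center
    lut = np.zeros(4 ** MEM)
    for k in range(4 ** MEM):
        sel = keys == k
        if sel.any():
            lut[k] = err[sel].mean()
    return lut

def predistort(tx_idx, lut):
    """Pre-subtract the learned pattern error before transmission."""
    out = LEVELS[tx_idx].astype(float)
    keys = patterns(tx_idx)
    out[MEM // 2: MEM // 2 + len(keys)] -= lut[keys]
    return out

# Train over a toy nonlinear channel, then pre-distort a transmission.
rng = np.random.default_rng(0)
idx = rng.integers(0, 4, 20000)
rx = LEVELS[idx] - 0.05 * LEVELS[idx] ** 2 + rng.normal(0, 0.05, idx.size)
tx = predistort(idx, build_lut(idx, rx))
```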

  3. FIR Filter of DS-CDMA UWB Modem Transmitter

    NASA Astrophysics Data System (ADS)

    Kang, Kyu-Min; Cho, Sang-In; Won, Hui-Chul; Choi, Sang-Sung

    This letter presents low-complexity digital pulse-shaping filter structures for a direct-sequence code division multiple access (DS-CDMA) ultra-wideband (UWB) modem transmitter with a ternary spreading code. The proposed finite impulse response (FIR) filter structures using a look-up table (LUT) reduce the required memory by about 50% to 80% in comparison to conventional FIR filter structures, and consequently are suitable for high-speed parallel data processing.

  4. Horn’s Curve Estimation Through Multi-Dimensional Interpolation

    DTIC Science & Technology

    2013-03-01

    complex nature of human behavior has not yet been broached. This is not to say analysts play favorites in reaching conclusions, only that varied...Chapter III, Section 3.7. For now, it is sufficient to say underdetermined data presents technical challenges and all such datasets will be excluded from...database lookup table and then use the method of linear interpolation to instantaneously estimate the unknown points on an as-needed basis ( say from a user

  5. Analysis of spacecraft data

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A software program for the production and analysis of data from the Dynamics Explorer-A (DE-A) satellite was maintained and modified and new software initiated. A capability was developed to process DE-A plasma-wave instrument mission analysis files on the Tektronix 4027 color CRT, for which two programs were written. The algorithm for the calibration lookup table for the plasma-wave instrument data was modified and verified, and a production program to generate color FR-80 spectrograms was written.

  6. Simulation and mitigation of higher-order ionospheric errors in PPP

    NASA Astrophysics Data System (ADS)

    Zus, Florian; Deng, Zhiguo; Wickert, Jens

    2017-04-01

    We developed a rapid and precise algorithm to compute ionospheric phase advances in a realistic electron density field. The electron density field is derived from a plasmaspheric extension of the International Reference Ionosphere (Gulyaeva and Bilitza, 2012) and the magnetic field stems from the International Geomagnetic Reference Field. For specific station locations, elevation and azimuth angles, the ionospheric phase advances are stored in a look-up table. The higher-order ionospheric residuals are computed by forming the standard linear combination of the ionospheric phase advances. In a simulation study we examine how the higher-order ionospheric residuals leak into estimated station coordinates, clocks, zenith delays and tropospheric gradients in precise point positioning. The simulation study includes a few hundred globally distributed stations and covers the time period 1990-2015. We take a close look at the estimated zenith delays and tropospheric gradients, as they are considered a data source for meteorological and climate-related research. We also show how the by-product of this simulation study, the look-up tables, can be used to mitigate higher-order ionospheric errors in practice. Gulyaeva, T.L., and Bilitza, D. Towards ISO Standard Earth Ionosphere and Plasmasphere Model. In: New Developments in the Standard Model, edited by R.J. Larsen, pp. 1-39, NOVA, Hauppauge, New York, 2012, available at https://www.novapublishers.com/catalog/product_info.php?products_id=35812
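
    The stored tables are indexed per station by elevation and azimuth; a minimal sketch of how such a table would be consumed is bilinear interpolation, shown below with a placeholder grid and placeholder values.

```python
import numpy as np

# Hypothetical per-station grid: phase advance vs. elevation/azimuth.
elev = np.linspace(5, 90, 18)                  # degrees
azim = np.linspace(0, 360, 73)                 # degrees
TABLE = np.random.rand(elev.size, azim.size)   # placeholder values

def lookup(e, a):
    """Bilinear interpolation in the (elevation, azimuth) look-up table."""
    i = np.clip(np.searchsorted(elev, e) - 1, 0, elev.size - 2)
    j = np.clip(np.searchsorted(azim, a) - 1, 0, azim.size - 2)
    u = (e - elev[i]) / (elev[i + 1] - elev[i])
    v = (a - azim[j]) / (azim[j + 1] - azim[j])
    return ((1 - u) * (1 - v) * TABLE[i, j] + u * (1 - v) * TABLE[i + 1, j]
            + (1 - u) * v * TABLE[i, j + 1] + u * v * TABLE[i + 1, j + 1])

print(lookup(34.2, 127.8))
```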

  7. Numerical investigation of a helicopter combustion chamber using LES and tabulated chemistry

    NASA Astrophysics Data System (ADS)

    Auzillon, Pierre; Riber, Eléonore; Gicquel, Laurent Y. M.; Gicquel, Olivier; Darabiha, Nasser; Veynante, Denis; Fiorina, Benoît

    2013-01-01

    This article presents Large Eddy Simulations (LES) of a realistic aeronautical combustor device: the chamber CTA1 designed by TURBOMECA. Under nominal operating conditions, experiments show hot spots observed on the combustor walls, in the vicinity of the injectors. These high temperature regions disappear when modifying the fuel stream equivalence ratio. In order to account for detailed chemistry effects within LES, the numerical simulation uses the recently developed turbulent combustion model F-TACLES (Filtered TAbulated Chemistry for LES). The principle of this model is first to generate a lookup table where thermochemical variables are computed from a set of filtered laminar unstrained premixed flamelets. To model the interactions between the flame and the turbulence at the subgrid scale, a flame wrinkling analytical model is introduced and the Filtered Density Function (FDF) of the mixture fraction is modeled by a β function. Filtered thermochemical quantities are stored as a function of three coordinates: the filtered progress variable, the filtered mixture fraction and the mixture fraction subgrid-scale variance. The chemical lookup table is then coupled with the LES using a mathematical formalism that ensures an accurate prediction of the flame dynamics. The numerical simulation of the CTA1 chamber with the F-TACLES turbulent combustion model reproduces the temperature fields observed in experiments fairly well. In particular, the influence of the fuel stream equivalence ratio on the flame position is well captured.

  8. The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models

    NASA Technical Reports Server (NTRS)

    Hill, Melissa A.; Jackson, E. Bruce

    2007-01-01

    It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.

  9. Measurement of the spatially distributed temperature and soot loadings in a laminar diffusion flame using a Cone-Beam Tomography technique

    NASA Astrophysics Data System (ADS)

    Zhao, Huayong; Williams, Ben; Stone, Richard

    2014-01-01

    A new low-cost optical diagnostic technique, called Cone Beam Tomographic Three Colour Spectrometry (CBT-TCS), has been developed to measure the planar distributions of temperature, soot particle size, and soot volume fraction in a co-flow axisymmetric laminar diffusion flame. The image of a flame is recorded by a colour camera, and then, by using colour interpolation and applying a cone beam tomography algorithm, a colour map can be reconstructed that corresponds to a diametral plane. Look-up tables calculated using Planck's law and different scattering models are then employed to deduce the temperature, approximate average soot particle size and soot volume fraction in each voxel (volumetric pixel). A sensitivity analysis of the look-up tables shows that the results have a high temperature resolution but a relatively low soot particle size resolution. The assumptions underlying the technique are discussed in detail. Sample data from an ethylene laminar diffusion flame are compared with data in the literature for similar flames. The comparison shows very consistent temperature and soot volume fraction profiles. Further analysis indicates that the differences seen in comparison with published results are within the measurement uncertainties. This methodology is ready to be applied to measure 3D data by capturing multiple flame images from different angles for non-axisymmetric flames.

  10. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has shown as much as 0.6 dB decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computational complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
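
    A sketch of the Pilot-Guided estimator as described, assuming BPSK-like ASM symbols of ±1 and forming the ratio as amplitude over variance (the exact scaling convention, e.g. an extra factor of 2 in the LLR, is not stated in the abstract):

```python
import numpy as np

def pilot_guided_ratio(received, asm):
    """Estimate amplitude and noise variance from a known marker:
    A_hat   = mean inner product of received samples with the ASM (+/-1),
    var_hat = mean of squared samples minus A_hat^2,
    then form the combining ratio A/var (scaling conventions vary)."""
    a_hat = np.mean(received * asm)
    var_hat = np.mean(received ** 2) - a_hat ** 2
    return a_hat / var_hat

rng = np.random.default_rng(1)
asm = rng.choice([-1.0, 1.0], 1024)           # known attached sync marker
rx = 0.8 * asm + rng.normal(0, 0.5, 1024)     # A = 0.8, sigma^2 = 0.25
print(pilot_guided_ratio(rx, asm))            # ~ 0.8 / 0.25 = 3.2
```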

  11. Java-based remote viewing and processing of nuclear medicine images: toward "the imaging department without walls".

    PubMed

    Slomka, P J; Elliott, E; Driedger, A A

    2000-01-01

    In nuclear medicine practice, images often need to be reviewed and reports prepared from locations outside the department, usually in the form of hard copy. Although hard-copy images are simple and portable, they do not offer electronic data search and image manipulation capabilities. On the other hand, picture archiving and communication systems or dedicated workstations cannot be easily deployed at numerous locations. To solve this problem, we propose a Java-based remote viewing station (JaRViS) for the reading and reporting of nuclear medicine images using Internet browser technology. JaRViS interfaces to the clinical patient database of a nuclear medicine workstation. All JaRViS software resides on a nuclear medicine department server. The contents of the clinical database can be searched by a browser interface after providing a password. Compressed images with the Java applet and color lookup tables are downloaded on the client side. This paradigm does not require nuclear medicine software to reside on remote computers, which simplifies support and deployment of such a system. To enable versatile reporting of the images, color tables and thresholds can be interactively manipulated and images can be displayed in a variety of layouts. Image filtering, frame grouping (adding frames), and movie display are available. Tomographic mode displays are supported, including gated SPECT. The time to display 14 lung perfusion images in 128 x 128 matrix together with the Java applet and color lookup tables over a V.90 modem is <1 min. SPECT and PET slice reorientation is interactive (<1 s). JaRViS could run on a Windows 95/98/NT or a Macintosh platform with Netscape Communicator or Microsoft Internet Explorer. The performance of Java code for bilinear interpolation, cine display, and filtering approaches that of a standard imaging workstation. It is feasible to set up a remote nuclear medicine viewing station using Java and an Internet or intranet browser. Images can be made easily and cost-effectively available to referring physicians and ambulatory clinics within and outside of the hospital, providing a convenient alternative to film media. We also find this system useful in home reporting of emergency procedures such as lung ventilation-perfusion scans or dynamic studies.

  12. Noise generator for tinnitus treatment based on look-up tables

    NASA Astrophysics Data System (ADS)

    Uriz, Alejandro J.; Agüero, Pablo; Tulli, Juan C.; Castiñeira Moreira, Jorge; González, Esteban; Hidalgo, Roberto; Casadei, Manuel

    2016-04-01

    Treatment of tinnitus by means of masking sounds allows a significant improvement in the quality of life of individuals who suffer from that condition. In view of that, it is possible to develop noise synthesizers based on random number generators in the digital signal processors (DSPs) used in almost any digital hearing aid device. DSP architectures have limitations in implementing a pseudorandom number generator, so the noise statistics may not be as good as expected. In this paper, a technique is proposed to generate additive white Gaussian noise (AWGN), or other types of filtered noise, using coefficients stored in the program memory of the DSP. An implementation of the technique is carried out on a dsPIC from Microchip®. Objective experiments and experimental measurements are performed to analyze the proposed technique.
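
    A minimal sketch of the idea, with the table length and index-update rule chosen arbitrarily: the Gaussian samples are computed offline and stored as constants (standing in for the dsPIC's program memory), so the run-time generator needs only a table read and an index update per sample.

```python
import random

# Offline: fill the table with Gaussian samples (on the dsPIC these
# coefficients would live in program memory, not be computed on-chip).
TABLE = [random.gauss(0.0, 1.0) for _ in range(4096)]

def noise_stream(n, seed=12345):
    """Cheap run-time generator: pseudo-random hops through the table."""
    idx, out = seed % len(TABLE), []
    for _ in range(n):
        out.append(TABLE[idx])
        idx = (idx * 5 + 1) % len(TABLE)   # full-period LCG index update
    return out

samples = noise_stream(1000)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(f"mean={mean:.3f} var={var:.3f}")
```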

  13. The effect of structural design parameters on FPGA-based feed-forward space-time trellis coding-orthogonal frequency division multiplexing channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-08-01

    Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.

  14. A survey of southern hemisphere meteor showers

    NASA Astrophysics Data System (ADS)

    Jenniskens, Peter; Baggaley, Jack; Crumpton, Ian; Aldous, Peter; Pokorny, Petr; Janches, Diego; Gural, Peter S.; Samuels, Dave; Albers, Jim; Howell, Andreas; Johannink, Carl; Breukers, Martin; Odeh, Mohammad; Moskovitz, Nicholas; Collison, Jack; Ganju, Siddha

    2018-05-01

    Results are presented from a video-based meteoroid orbit survey conducted in New Zealand between Sept. 2014 and Dec. 2016, which netted 24,906 orbits from +5 to -5 magnitude meteors. 44 new southern hemisphere meteor showers are identified after combining this data with that of other video-based networks. Results are compared to showers reported from recent radar-based surveys. We find that video cameras and radar often see different showers and sometimes measure different semi-major axis distributions for the same meteoroid stream. For identifying showers in sparse daily orbit data, a shower look-up table of radiant position and speed as a function of time was created. This can replace the commonly used method of identifying showers from a set of mean orbital elements by using a discriminant criterion, which does not fully describe the distribution of meteor shower radiants over time.

  15. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method are reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, of those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  16. Neighborhood comparison operator

    NASA Technical Reports Server (NTRS)

    Gennery, D. B. (Inventor)

    1985-01-01

    Digital values in a moving window are compared by an operator having nine comparators connected to line buffers for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output, plus 2 bits indicating odd-even pixel/line information about the central pixel, addresses a lookup table to provide 14 bits of information, including 2 bits which control a selector to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logical OR of all nine pixels through a circuit that implements a very wide OR gate.

  17. Generating Lookup Tables from the AE9/AP9 Models

    DTIC Science & Technology

    2015-06-16

    [Report documentation page fragment; recoverable details: author Joshua P. Davis, The Aerospace Corporation, El Segundo, CA; report no. TOR-2015-00893.]

  18. Multispectral histogram normalization contrast enhancement

    NASA Technical Reports Server (NTRS)

    Soha, J. M.; Schwartz, A. A.

    1979-01-01

    A multispectral histogram normalization or decorrelation enhancement which achieves effective color composites by removing interband correlation is described. The enhancement procedure employs either linear or nonlinear transformations to equalize principal component variances. An additional rotation to any set of orthogonal coordinates is thus possible, while full histogram utilization is maintained by avoiding the reintroduction of correlation. For the three-dimensional case, the enhancement procedure may be implemented with a lookup table. An application of the enhancement to Landsat multispectral scanning imagery is presented.

  19. NRL Hyperspectral Imagery Trafficability Tool (HITT): Software andSpectral-Geotechnical Look-up Tables for Estimation and Mapping of Soil Bearing Strength from Hyperspectral Imagery

    DTIC Science & Technology

    2012-09-28

    spectral-geotechnical libraries and models developed during remote sensing and calibration/validation campaigns conducted by NRL and collaborating institutions in four...2010; Bachmann, Fry, et al, 2012a). The NRL HITT tool is a model for how we develop and validate software, and the future development of tools by

  20. Remote sensing of atmospheric aerosols with the SPEX spectropolarimeter

    NASA Astrophysics Data System (ADS)

    van Harten, G.; Rietjens, J.; Smit, M.; Snik, F.; Keller, C. U.; di Noia, A.; Hasekamp, O.; Vonk, J.; Volten, H.

    2013-12-01

    Characterizing atmospheric aerosols is key to understanding their influence on climate through their direct and indirect radiative forcing. This requires long-term global coverage, at high spatial (~km) and temporal (~days) resolution, which can only be provided by satellite remote sensing. Aerosol load and properties such as particle size, shape and chemical composition can be derived from multi-wavelength radiance and polarization measurements of sunlight that is scattered by the Earth's atmosphere at different angles. The required polarimetric accuracy of ~10^(-3) is very challenging, particularly since the instrument is located on a rapidly moving platform. Our Spectropolarimeter for Planetary EXploration (SPEX) is based on a novel, snapshot spectral modulator, with the intrinsic ability to measure polarization at high accuracy. It exhibits minimal instrumental polarization and is completely solid-state and passive. An athermal set of birefringent crystals in front of an analyzer encodes the incoming linear polarization into a sinusoidal modulation in the intensity spectrum. Moreover, a dual beam implementation yields redundancy that allows for a mutual correction in both the spectrally and spatially modulated data to increase the measurement accuracy. A partially polarized calibration stimulus has been developed, consisting of a carefully depolarized source followed by tilted glass plates to induce polarization in a controlled way. Preliminary calibration measurements show an accuracy of SPEX of well below 10^(-3), with a sensitivity limit of 2*10^(-4). We demonstrate the potential of the SPEX concept by presenting retrievals of aerosol properties based on clear sky measurements using a prototype satellite instrument and a dedicated ground-based SPEX. The retrieval algorithm, originally designed for POLDER data, performs iterative fitting of aerosol properties and surface albedo, where the initial guess is provided by a look-up table. The retrieved aerosol properties, including aerosol optical thickness, single scattering albedo, size distribution and complex refractive index, will be compared with the on-site AERONET sun-photometer, lidar, particle counter and sizer, and PM10 and PM2.5 monitoring instruments. Retrievals of the aerosol layer height based on polarization measurements in the O2A absorption band will be compared with lidar profiles. Furthermore, the possibility of enhancing the retrieval accuracy by replacing the look-up table with a neural network based initial guess will be discussed, using retrievals from simulated ground-based data.

  1. Evaluation of CFD to Determine Two-Dimensional Airfoil Characteristics for Rotorcraft Applications

    NASA Technical Reports Server (NTRS)

    Smith, Marilyn J.; Wong, Tin-Chee; Potsdam, Mark; Baeder, James; Phanse, Sujeet

    2004-01-01

    The efficient prediction of helicopter rotor performance, vibratory loads, and aeroelastic properties still relies heavily on the use of comprehensive analysis codes by the rotorcraft industry. These comprehensive codes utilize look-up tables to provide two-dimensional aerodynamic characteristics. Typically these tables are comprised of a combination of wind tunnel data, empirical data and numerical analyses. The potential to rely more heavily on numerical computations based on Computational Fluid Dynamics (CFD) simulations has become more of a reality with the advent of faster computers and more sophisticated physical models. The ability of five different CFD codes applied independently to predict the lift, drag and pitching moments of rotor airfoils is examined for the SC1095 airfoil, which is utilized in the UH-60A main rotor. Extensive comparisons with the results of ten wind tunnel tests are performed. These CFD computations are found to be as good as experimental data in predicting many of the aerodynamic performance characteristics. Four turbulence models were examined (Baldwin-Lomax, Spalart-Allmaras, Menter SST, and k-omega).

  2. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, P.; Beaudet, P.

    1980-01-01

    The classification of large dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of .76, compared to the theoretically optimum .79 probability of correct classification associated with a full-dimensional Bayes classifier. Recommendations for future research are included.

  3. A prototype to automate the video subsystem routing for the video distribution subsystem of Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Betz, Jessie M. Bethly

    1993-12-01

    The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching; internal video switching; and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology to automate the Video Subsystem Routing and developing and testing a prototype as 'proof of concept' for designers.

  4. Low power adder based auditory filter architecture.

    PubMed

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear devices are battery powered and should possess a long working life to avoid replacing the device at regular intervals of years. Hence, devices with low power consumption are required. In cochlear devices there are numerous filters, each responsible for a band of the frequency-varying signal, which helps in identifying speech signals of different audible ranges. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technology node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture reduces leakage power by 15% and increases performance by 2.76%.
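
    The classic way to make a LUT-based FIR filter multiplierless is distributed arithmetic: pre-store every possible sum of coefficients selected by one bit-plane of the input samples, then accumulate shifted table reads. The small sketch below (4 taps, 8-bit unsigned samples, arbitrary coefficients) illustrates the general technique, not the paper's specific architecture.

```python
# Distributed-arithmetic FIR: replaces N multiplies per output with
# B table lookups and adds (B = input bit width).
COEFF = [0.25, 0.5, 0.5, 0.25]            # arbitrary 4-tap example
N, B = len(COEFF), 8                       # taps, unsigned input bits

# One LUT entry per subset of taps: the sum of the selected coefficients.
LUT = [sum(c for c, bit in zip(COEFF, range(N)) if pattern >> bit & 1)
       for pattern in range(1 << N)]

def da_fir(window):
    """Compute sum(COEFF[i] * window[i]) without input multiplications."""
    acc = 0.0
    for b in range(B):                     # one bit plane at a time
        pattern = 0
        for i in range(N):
            pattern |= ((window[i] >> b) & 1) << i
        acc += LUT[pattern] * (1 << b)     # in hardware: a shift, not a multiply
    return acc

x = [10, 200, 31, 7]
assert abs(da_fir(x) - sum(c * v for c, v in zip(COEFF, x))) < 1e-9
```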

  5. Model-Based Wavefront Control for CCAT

    NASA Technical Reports Server (NTRS)

    Redding, David; Lou, John Z.; Kissil, Andy; Bradford, Matt; Padin, Steve; Woody, David

    2011-01-01

    The 25-m aperture CCAT submillimeter-wave telescope will have a primary mirror that is divided into 162 individual segments, each of which is provided with 3 positioning actuators. CCAT will be equipped with innovative Imaging Displacement Sensors (IDS), inexpensive optical edge sensors capable of accurately measuring all segment relative motions. These measurements are used in a Kalman-filter-based Optical State Estimator to estimate wavefront errors, permitting use of a minimum-wavefront controller without direct wavefront measurement. This controller corrects the optical impact of errors in 6 degrees of freedom per segment, including lateral translations of the segments, using only the 3 actuated degrees of freedom per segment. The global motions of the Primary and Secondary Mirrors are not measured by the edge sensors; these are controlled using a gravity-sag look-up table. Predicted performance is illustrated by simulated response to errors such as gravity sag.

  6. Optimization and performance evaluation of a conical mirror based fluorescence molecular tomography imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Zhang, Wei; Zhu, Dianwen; Li, Changqing

    2016-03-01

    We performed numerical simulations and phantom experiments with a conical mirror based fluorescence molecular tomography (FMT) imaging system to optimize its performance. With phantom experiments, we compared three measurement modes in FMT: the whole surface measurement mode, the transmission mode, and the reflection mode. Our results indicated that the whole surface measurement mode performed the best. Then, we applied two different neutral density (ND) filters to improve the measurement's dynamic range; the benefits from the ND filters were not as large as predicted. Finally, with numerical simulations, we compared two laser excitation patterns: line and point. With the same number of excitation positions, we found that the line laser excitation gave slightly better FMT reconstruction results than the point laser excitation. In the future, we will implement Monte Carlo ray tracing simulations to calculate multiply reflected photons, and create a look-up table accordingly for calibration.

  7. [Research on the High Efficiency Data Communication Repeater Based on STM32F103].

    PubMed

    Zhang, Yahui; Li, Zheng; Chen, Guangfei

    2015-11-01

    To improve the radio frequency (RF) transmission distance of the wireless terminals of the medical internet of things (IoT) and to realize real-time, efficient data communication, an intelligent relay system based on the STM32F103 single chip microcomputer (SCM) is proposed. The system used an nRF905 chip to collect the medical and health information of patients in the 433 MHz band, used the SCM to control a serial-port-to-Wi-Fi module that relays information from the 433 MHz band to the 2.4 GHz Wi-Fi band, and used a ready-list table look-up algorithm to improve the efficiency of data communication. The design realizes real-time and efficient data communication. The relay, which is easy to use and of high practical value, can extend the distance and modes of data transmission and achieve real-time transmission of data.

  8. Accelerated computer generated holography using sparse bases in the STFT domain.

    PubMed

    Blinder, David; Schelkens, Peter

    2018-01-22

    Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU, and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.

  9. Estimation of winter wheat canopy nitrogen density at different growth stages based on Multi-LUT approach

    NASA Astrophysics Data System (ADS)

    Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang

    2017-10-01

    Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach based on N-PROSAIL model parameter settings at different growth stages was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with measured CND, with a determination coefficient (R2) of 0.80 and a corresponding root mean square error (RMSE) of 1.16 g m-2. Estimation of one sample took only 6 ms on a test machine with an Intel(R) Core(TM) i5-2430 quad-core CPU at 2.40 GHz. These results confirm the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
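
    A minimal sketch of the Multi-LUT retrieval idea (with a fabricated stand-in for the N-PROSAIL forward model and invented table sizes, since the paper's parameter settings are not reproduced here): one table of simulated canopy spectra is precomputed per growth stage, and retrieval reduces to a least-squares search over the table for that stage.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(cnd, stage):             # placeholder for an N-PROSAIL run
            wl = np.linspace(400, 2500, 50)
            return 0.3 + 0.1 * np.sin(wl / 300.0 + stage) * np.exp(-0.05 * cnd)

        # One LUT per growth stage, built once offline.
        luts = {s: [(cnd, simulate(cnd, s))
                    for cnd in np.linspace(0.5, 8.0, 200)] for s in range(4)}

        def retrieve_cnd(measured, stage):
            entries = luts[stage]
            costs = [np.sum((spec - measured) ** 2) for _, spec in entries]
            return entries[int(np.argmin(costs))][0]

        obs = simulate(3.2, 2) + rng.normal(0, 0.002, 50)
        print(retrieve_cnd(obs, 2))           # recovers ~3.2 g m-2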

  10. Assessment and validation of the community radiative transfer model for ice cloud conditions

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Weng, Fuzhong; Liu, Quanhua

    2014-11-01

    The performance of the Community Radiative Transfer Model (CRTM) under ice cloud conditions is evaluated and improved with the implementation of the MODIS Collection 6 ice cloud optical property model, which is based on severely roughened solid column aggregates and a modified Gamma particle size distribution. New ice cloud bulk scattering properties (namely, the extinction efficiency, single-scattering albedo, asymmetry factor, and scattering phase function) suitable for application to the CRTM are calculated by using the most up-to-date ice particle optical property library. CRTM-based simulations illustrate reasonable accuracy in comparison with counterparts derived from a combination of the Discrete Ordinate Radiative Transfer (DISORT) model and the Line-by-Line Radiative Transfer Model (LBLRTM). Furthermore, simulations of top-of-the-atmosphere brightness temperature with the CRTM for the Cross-track Infrared Sounder (CrIS) are carried out to further evaluate the updated CRTM ice cloud optical property look-up table.

  11. Accelerating molecular Monte Carlo simulations using distance and orientation dependent energy tables: tuning from atomistic accuracy to smoothed “coarse-grained” models

    PubMed Central

    Lettieri, S.; Zuckerman, D.M.

    2011-01-01

    Typically, the most time consuming part of any atomistic molecular simulation is due to the repeated calculation of distances, energies and forces between pairs of atoms. However, many molecules contain nearly rigid multi-atom groups such as rings and other conjugated moieties, whose rigidity can be exploited to significantly speed up computations. The availability of GB-scale random-access memory (RAM) offers the possibility of tabulation (pre-calculation) of distance and orientation-dependent interactions among such rigid molecular bodies. Here, we perform an investigation of this energy tabulation approach for a fluid of atomistic – but rigid – benzene molecules at standard temperature and density. In particular, using O(1) GB of RAM, we construct an energy look-up table which encompasses the full range of allowed relative positions and orientations between a pair of whole molecules. We obtain a hardware-dependent speed-up of a factor of 24-50 as compared to an ordinary (“exact”) Monte Carlo simulation and find excellent agreement between energetic and structural properties. Second, we examine the somewhat reduced fidelity of results obtained using energy tables based on much less memory use. Third, the energy table serves as a convenient platform to explore potential energy smoothing techniques, akin to coarse-graining. Simulations with smoothed tables exhibit near atomistic accuracy while increasing diffusivity. The combined speed-up in sampling from tabulation and smoothing exceeds a factor of 100. For future applications greater speed-ups can be expected for larger rigid groups, such as those found in biomolecules. PMID:22120971
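
    Reduced to two relative coordinates for brevity (the paper's table spans the full six-dimensional relative pose of a molecule pair), a sketch of the tabulation idea looks like the following, with an invented pair potential standing in for the atomistic benzene-benzene sum:

        import numpy as np

        def pair_energy(r, theta):            # placeholder atomistic pair sum
            lj = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
            return lj * (1.0 + 0.1 * np.cos(6.0 * theta))

        # Precompute once: energy on a (separation, orientation) grid.
        r_grid = np.linspace(0.9, 3.0, 512)
        t_grid = np.linspace(0.0, 2.0 * np.pi, 256)
        table = pair_energy(r_grid[:, None], t_grid[None, :])

        def energy_lookup(r, theta):
            i = np.clip(np.searchsorted(r_grid, r), 1, len(r_grid) - 1)
            j = np.clip(np.searchsorted(t_grid, theta % (2.0 * np.pi)),
                        1, len(t_grid) - 1)
            return table[i, j]                # O(1) grid-node look-up

        # Inside a Metropolis loop, the look-up replaces the repeated pair sum.
        print(energy_lookup(1.12, 0.3), pair_energy(1.12, 0.3))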

  12. Correlation and prediction of dynamic human isolated joint strength from lean body mass

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Hasson, Scott M.; Aldridge, Ann M.; Maida, James C.; Woolford, Barbara J.

    1992-01-01

    A relationship between a person's lean body mass and the amount of maximum torque that can be produced with each isolated joint of the upper extremity was investigated. The maximum dynamic isolated joint torque (upper extremity) on 14 subjects was collected using a dynamometer multi-joint testing unit. These data were reduced to a table of coefficients of second degree polynomials, computed using a least squares regression method. All the coefficients were then organized into look-up tables, a compact and convenient storage/retrieval mechanism for the data set. Data from each joint, direction and velocity, were normalized with respect to that joint's average and merged into files (one for each curve for a particular joint). Regression was performed on each one of these files to derive a table of normalized population curve coefficients for each joint axis, direction, and velocity. In addition, a regression table which included all upper extremity joints was built which related average torque to lean body mass for an individual. These two tables are the basis of the regression model which allows the prediction of dynamic isolated joint torques from an individual's lean body mass.
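
    A compact sketch of how such coefficient tables can be used at prediction time (the coefficients, the lean-body-mass regression, and the assumption that the polynomial is in joint angle are all invented placeholders, not the study's fitted values): the normalized population curve for a joint/direction/velocity combination is evaluated, then rescaled by the average torque predicted from lean body mass.

        # Normalized second-degree curve per (joint, direction, velocity) key.
        norm_curves = {("elbow", "flexion", 60): (-0.002, 0.09, 0.65)}  # a, b, c
        LBM_SLOPE, LBM_ICEPT = 1.9, -12.0     # average torque vs. lean body mass

        def predict_torque(joint, direction, velocity, angle_deg, lbm_kg):
            a, b, c = norm_curves[(joint, direction, velocity)]
            normalized = a * angle_deg ** 2 + b * angle_deg + c
            average = LBM_SLOPE * lbm_kg + LBM_ICEPT
            return normalized * average       # de-normalize by predicted average

        print(predict_torque("elbow", "flexion", 60, 45.0, 55.0))  # illustrative N m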

  13. Determination of circumsolar radiation from Meteosat Second Generation

    NASA Astrophysics Data System (ADS)

    Reinhardt, B.; Buras, R.; Bugliaro, L.; Wilbert, S.; Mayer, B.

    2013-06-01

    Reliable data on circumsolar radiation, which is caused by scattering of sunlight by cloud or aerosol particles, are becoming more and more important for the resource assessment and design of concentrating solar technologies (CSTs). However, measuring circumsolar radiation is demanding and only very limited data sets are available. As a step to bridge this gap, we have developed a method to determine circumsolar radiation from cirrus cloud properties retrieved by the geostationary satellites of the Meteosat Second Generation (MSG) family. The method takes output from the COCS algorithm to generate a cirrus mask from MSG data, then uses the retrieval algorithm APICS to obtain the optical thickness and the effective radius of the detected cirrus, which in turn are used to determine the circumsolar radiation from a pre-calculated lookup table. The lookup table was generated from extensive calculations using a specifically adjusted version of the Monte Carlo radiative transfer model MYSTIC and by developing a fast yet precise parameterization. APICS was also improved such that it determines the surface albedo, which is needed for the cloud property retrieval, in a self-consistent way instead of using external data. Furthermore, it was extended to consider new ice particle shapes to allow for an uncertainty analysis concerning this parameter. We found that not knowing the ice particle shape leads to an uncertainty of up to 50%. A validation with ground-based measurements of circumsolar radiation shows good agreement with the new "Baum v3.5" ice particle shape parameterization. For the circumsolar ratio (CSR), the validation yields a mean absolute deviation (MAD) of 0.10, a bias of 11%, and a Spearman rank correlation r_rank,CSR of 0.54. If measurements with sub-scale cumulus clouds within the relevant satellite pixels are manually excluded, the results improve to MAD = 0.07, bias = -3%, and r_rank,CSR = 0.71.

  14. Astrophysical fluid simulations of thermally ideal gases with non-constant adiabatic index: numerical implementation

    NASA Astrophysics Data System (ADS)

    Vaidya, B.; Mignone, A.; Bodo, G.; Massaglia, S.

    2015-08-01

    Context. An equation of state (EoS) is a relation between thermodynamic state variables and is essential for closing the set of equations describing a fluid system. Although an ideal EoS with a constant adiabatic index Γ is the preferred choice owing to its simple implementation, many astrophysical fluid simulations may benefit from a more sophisticated treatment that can account for diverse chemical processes. Aims: In the present work we first review the basic thermodynamic principles of a gas mixture in terms of its thermal and caloric EoS, including effects such as ionization, dissociation, and temperature-dependent degrees of freedom such as molecular vibrations and rotations. The formulation is revisited in the context of plasmas that are either in equilibrium conditions (local thermodynamic or collisional excitation equilibria) or described by non-equilibrium chemistry coupled to optically thin radiative cooling. We then present a numerical implementation of thermally ideal gases obeying a more general caloric EoS with non-constant adiabatic index in Godunov-type numerical schemes. Methods: We discuss the necessary modifications to the Riemann solver and to the conversion between total energy and pressure (or vice versa) routinely invoked in Godunov-type schemes. We then present two different approaches for computing the EoS. The first employs root-finder methods and is best suited for an EoS in analytical form. The second is based on lookup tables and interpolation and results in a more computationally efficient approach, although care must be taken to ensure thermodynamic consistency. Results: A number of selected benchmarks demonstrate that the employment of a non-ideal EoS can lead to important differences in the solution when the temperature range is 500-10^4 K, where dissociation and ionization occur. The implementation of the selected EoS introduces additional computational costs, although the employment of lookup table methods (when possible) can significantly reduce the overhead by a factor of ~3-4.
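
    As a sketch of the table-based branch (with an invented caloric EoS standing in for a real gas mixture, and illustrative grids), pressure can be tabulated once on a (density, specific internal energy) grid so that bilinear interpolation replaces per-cell root-finding in the energy-pressure conversions of a Godunov scheme:

        import numpy as np

        def gamma_of_e(e):                    # invented non-constant index
            return 5.0 / 3.0 - 0.2 / (1.0 + np.exp(-(np.log10(e) - 12.0)))

        rho_grid = np.logspace(-12, -8, 64)   # illustrative ranges (cgs)
        e_grid = np.logspace(10, 13, 64)
        p_table = rho_grid[:, None] * e_grid[None, :] * (gamma_of_e(e_grid) - 1.0)

        def pressure(rho, e):
            i = np.clip(np.searchsorted(rho_grid, rho) - 1, 0, len(rho_grid) - 2)
            j = np.clip(np.searchsorted(e_grid, e) - 1, 0, len(e_grid) - 2)
            fr = (rho - rho_grid[i]) / (rho_grid[i + 1] - rho_grid[i])
            fe = (e - e_grid[j]) / (e_grid[j + 1] - e_grid[j])
            return ((1 - fr) * (1 - fe) * p_table[i, j]
                    + fr * (1 - fe) * p_table[i + 1, j]
                    + (1 - fr) * fe * p_table[i, j + 1]
                    + fr * fe * p_table[i + 1, j + 1])

        print(pressure(1e-10, 1e12))          # interpolated, no root-finding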

  15. Predicting Risk of Suicide Attempt Using History of Physical Illnesses From Electronic Medical Records

    PubMed Central

    Luo, Wei; Tran, Truyen; Berk, Michael; Venkatesh, Svetha

    2016-01-01

    Background: Although physical illnesses, routinely documented in electronic medical records (EMR), have been found to be a contributing factor to suicides, no automated systems use this information to predict suicide risk. Objective: The aim of this study is to quantify the impact of physical illnesses on suicide risk, and to develop a predictive model that captures this relationship using EMR data. Methods: We used history of physical illnesses (except chapter V: mental and behavioral disorders) from EMR data over different time periods to build a lookup table that contains the probability of suicide risk for each chapter of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) codes. The lookup table was then used to predict the probability of suicide risk for any new assessment. Based on the different lengths of history of physical illnesses, we developed six different models to predict suicide risk. We tested the performance of the developed models in predicting 90-day risk using historical data over differing time periods ranging from 3 to 48 months. A total of 16,858 assessments from 7399 mental health patients with at least one risk assessment were used for the validation of the developed model. The performance was measured using the area under the receiver operating characteristic curve (AUC). Results: The best predictive results were derived (AUC=0.71) using combined data across all time periods, which significantly outperformed the clinical baseline derived from routine risk assessment (AUC=0.56). The proposed approach thus shows potential to be incorporated in the broader risk assessment processes used by clinicians. Conclusions: This study provides a novel approach that exploits the history of physical illnesses extracted from EMR (ICD-10 codes without chapter V, mental and behavioral disorders) to predict suicide risk, and this model outperforms existing clinical assessments of suicide risk. PMID:27400764
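
    The core table construction can be sketched in a few lines (with fabricated chapters and outcomes, and a simple max-probability scoring rule that is an assumption, not the paper's exact aggregation): each ICD-10 chapter observed in a history window is mapped to the empirical probability that it preceded a high-risk outcome.

        from collections import defaultdict

        # (ICD-10 chapter, outcome) pairs from a training window; fabricated.
        history = [("II", 1), ("II", 0), ("IX", 1), ("IX", 1), ("IX", 0), ("XI", 0)]

        counts = defaultdict(lambda: [0, 0])  # chapter -> [events, total]
        for chapter, outcome in history:
            counts[chapter][0] += outcome
            counts[chapter][1] += 1

        lookup = {ch: ev / tot for ch, (ev, tot) in counts.items()}

        def risk_score(patient_chapters):
            """Score a new assessment, here by the largest chapter probability."""
            probs = [lookup.get(ch, 0.0) for ch in patient_chapters]
            return max(probs, default=0.0)

        print(lookup)                         # II -> 0.5, IX -> 2/3, XI -> 0.0
        print(risk_score(["IX", "XI"]))       # 0.666...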

  16. Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS

    NASA Astrophysics Data System (ADS)

    Willison, A.; Bedard, D.

    This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogeneous surface materials. It represents the overall optical reflectance of objects as a sBRDF, a spectrometric quantity, obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters, and integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured of the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. It is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate simulated quantities. The sBRDF was chosen to represent material reflectance as it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all possible illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. These look-up tables are referenced when calculating the overall sBRDF of objects, where the contribution of each facet is proportionally integrated.

  17. Poster 13: Large-scale simultaneous mapping of Titan's aerosol opacity and surface albedo by a new massive inversion method of Cassini/VIMS data

    NASA Astrophysics Data System (ADS)

    Maltagliati, Luca; Rodriguez, Sebastien; Sotin, Christophe; Rannou, Pascal; Bezard, Bruno; Solomonidou, Anezina; Coustenis, Athena; Appere, Thomas; Cornet, Thomas; Le Mouelic, Stephane

    2016-06-01

    We still have limited information on Titan's surface albedo in the near-infrared. Only a few spectral windows exist in between the intense methane bands, and even those windows are strongly affected by atmospheric contributions (absorption, scattering). Yet, this part of the spectrum is important for determining the surface composition thanks to the wealth of absorption bands of minerals and ices present there. A radiative transfer model is an effective tool to take the atmospheric effects into consideration in the analysis (e.g. Rannou et al. 2010, Griffith et al. 2012, Solomonidou et al. 2016), but it is too time-consuming to process the whole VIMS hyperspectral dataset (millions of spectra) and create large-scale maps of the surface albedo. To overcome this problem, we developed an inversion method for VIMS data that employs lookup tables of synthetic spectra produced by a state-of-the-art radiative transfer model (described in its original form in Hirtzig et al. 2013). The heavy computational part (calling the radiative transfer model) is thus done only once, during the creation of the modeled spectra. We updated the model with new methane spectroscopy and the new aerosol parameters we found in our analysis of the VIMS Emission Phase Function (see the other Maltagliati et al. abstract in this workshop). We analyzed in detail the behavior of the spectra as a function of the free parameters of the model (three inputs: the incidence, emergence, and azimuth angles; and two products: the aerosol opacity and the surface albedo) in order to create an optimized grid for the lookup table. The lookup tables were then grafted onto an ad hoc inversion model. Our method can process a whole 64x64 VIMS datacube in a few minutes, with a gain in computational time of a factor of more than one thousand with respect to the standard method. This will allow, for the first time, a truly massive inversion of VIMS data and large-scale acquisition of Titan's surface albedo, paving the way for global maps of mineralogical composition (and related temporal variations). Results of simultaneous maps of aerosol opacity and surface albedo for the various surface windows are shown for selected flybys observing the same area with different geometries, highlighting the robustness of the method in seamlessly correcting the atmospheric effects.

  18. A LAI inversion algorithm based on the unified model of canopy bidirectional reflectance distribution function for the Heihe River Basin

    NASA Astrophysics Data System (ADS)

    Ma, B.; Li, J.; Fan, W.; Ren, H.; Xu, X.

    2017-12-01

    Leaf area index (LAI) is one of the important parameters of vegetation canopy structure and can effectively represent the growth condition of vegetation. Obtaining LAI by remote sensing greatly improves the accuracy, availability, and timeliness of LAI data, which is of great importance to vegetation-related research such as studies of atmospheric, land surface, and hydrological processes. The Heihe River Basin is an inland river basin in northwest China. The basin contains various types of vegetation and all kinds of terrain conditions, so studying LAI in this area is helpful for testing the accuracy of the model under a complex surface and evaluating the model's correctness. On the other hand, located in the western arid area of China, the ecological environment of the Heihe Basin is fragile; LAI is an important parameter representing vegetation growth conditions and can help us understand the status of vegetation in the Heihe River Basin. Different from previous LAI inversion models, the BRDF (bidirectional reflectance distribution function) unified model can be applied to both continuous and discrete vegetation, making it appropriate for complex vegetation distributions. LAI is the key input parameter of the model. We establish an inversion algorithm that retrieves LAI from remote sensing images based on the unified model. First, we determine the vegetation type from the vegetation classification map to obtain the corresponding G function and the leaf and surface reflectivities. Then, we determine the ranges and step sizes of the leaf area index (LAI), the aggregation index (ζ), and the sky scattered light ratio (β), entering all the parameters into the model to calculate the corresponding reflectivity ρ and establish the lookup table for the different vegetation types. Finally, we invert LAI on the basis of the established lookup table, using the least squares method. We have produced 1 km LAI products from 2000 to 2014, once every 8 days. The results show that the algorithm is stable and can effectively invert LAI in areas with very complex vegetation and terrain conditions.
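
    A stripped-down sketch of this lookup-table inversion (with a fabricated single-band forward model in place of the unified BRDF model, whose G function and reflectivities are not reproduced here): reflectance is precomputed over the (LAI, ζ, β) grid, and the retrieved LAI is the table entry minimizing the squared residual against the observation.

        import numpy as np

        def brdf_model(lai, zeta, beta):      # placeholder for the unified model
            return (0.05 + 0.4 * (1.0 - np.exp(-0.5 * lai))
                    * (1.0 - 0.1 * zeta) * (1.0 - 0.2 * beta))

        # Build the lookup table over the parameter grid (done once per class).
        entries = [(l, brdf_model(l, z, b))
                   for l in np.linspace(0.1, 8.0, 80)
                   for z in np.linspace(0.5, 1.0, 6)
                   for b in np.linspace(0.1, 0.5, 5)]

        def invert_lai(rho_obs):              # least-squares table search
            costs = [(rho - rho_obs) ** 2 for _, rho in entries]
            return entries[int(np.argmin(costs))][0]

        print(invert_lai(brdf_model(3.0, 0.8, 0.3)))   # recovers ~3.0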

  19. Fuel Cell Stack Testing and Durability in Support of Ion Tiger UAV

    DTIC Science & Technology

    2010-06-02

    N00173-08-2-C008 specified. In June 2008, the first M250 stack data were incorporated into the PEMFC system model as a look-up data table... control and operational model which implements the operational strategy by controlling the power from the PEMFC systems and battery pack for a total... [remainder of record is figure residue: PEMFC system outputs and FCS power demand (W) axis labels]

  20. Urban Traffic Signal Control for Fuel Economy. Part 2. Extension to Small Cars (Economie D’Essence Grace a la Commande des Feux de Circulation en Zone Urbaine. Partie 2. Application aux Vehicules de Petite Cylindree)

    DTIC Science & Technology

    1981-11-01

    ...of fuel compared with the plan in force. The report also mentions that the fuel consumption of a large vehicle was calculated with the aid of a module... most of the submitted data was not readily enterable into the Vehicle Simulation program, because of the design of the table look-ups in the program

  1. Definition of a Robust Supervisory Control Scheme for Sodium-Cooled Fast Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponciroli, R.; Passerini, S.; Vilim, R. B.

    In this work, an innovative control approach for metal-fueled Sodium-cooled Fast Reactors is proposed. With respect to the classical approach adopted for base-load Nuclear Power Plants, an alternative control strategy for operating the reactor at different power levels while respecting the system's physical constraints is presented. In order to achieve higher operational flexibility while ensuring that the implemented control loops do not influence the system's inherent passive safety features, a dedicated supervisory control scheme for the dynamic definition of the corresponding set-points to be supplied to the PID controllers is designed. In particular, the traditional approach based on the adoption of tabulated lookup tables for the set-point definition is found not to be robust enough when failures of the implemented SISO (Single Input Single Output) actuators occur. Therefore, a feedback algorithm based on the Reference Governor approach, which allows for the optimization of reference signals according to the system operating conditions, is proposed.

  2. Mercury⊕: An evidential reasoning image classifier

    NASA Astrophysics Data System (ADS)

    Peddle, Derek R.

    1995-12-01

    MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked-list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.

  3. Level 1 Processing of MODIS Direct Broadcast Data at the GSFC DAAC

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Kempler, Steven J. (Technical Monitor)

    2001-01-01

    The GSFC DAAC is working to test and package the MODIS Level 1 processing software for Aqua Direct Broadcast data. This entails the same code base, but different lookup tables for Aqua and Terra. However, the most significant change is the use of ancillary attitude and ephemeris files instead of orbit/attitude information within the science data stream (as with Terra). In addition, we are working on Linux ports of the algorithms, which could eventually enable processing on PC clusters. Finally, the GSFC DAAC is also working with the GSFC Direct Readout laboratory to ingest Level 0 data from the GSFC DB antenna into the main DAAC, enabling Level 1 production in near real time in support of applications users, such as the Synergy project. The mechanism developed for this could conceivably be extended to other participating stations.

  4. An 18-ps TDC using timing adjustment and bin realignment methods in a Cyclone-IV FPGA

    NASA Astrophysics Data System (ADS)

    Cao, Guiping; Xia, Haojie; Dong, Ning

    2018-05-01

    The method commonly used to produce a field-programmable gate array (FPGA)-based time-to-digital converter (TDC) creates a tapped delay line (TDL) for time interpolation to yield high time precision. We conduct timing adjustment and bin realignment to implement a TDC in the Altera Cyclone-IV FPGA. The former tunes the carry look-up table (LUT) cell delay by changing the LUT's function through low-level primitives according to timing analysis results, while the latter realigns bins according to the timing result obtained by timing adjustment so as to create a uniform TDL with bins of equivalent width. The differential nonlinearity and time resolution can be improved by realigning the bins. After calibration, the TDC has an 18 ps root-mean-square timing resolution and a 45 ps least-significant-bit resolution.

  5. Global Aerosol Optical Models and Lookup Tables for the New MODIS Aerosol Retrieval over Land

    NASA Technical Reports Server (NTRS)

    Levy, Robert C.; Remer, Loraine A.; Dubovik, Oleg

    2007-01-01

    Since 2000, MODIS has been deriving aerosol properties over land from MODIS-observed spectral reflectance, by matching the observed reflectance with that simulated for selected aerosol optical models, aerosol loadings, wavelengths, and geometrical conditions (contained in a lookup table or 'LUT'). Validation exercises have shown that MODIS tends to under-predict aerosol optical depth (τ) in cases of large τ (τ > 1.0), signaling errors in the assumed aerosol optical properties. Using the climatology of almucantar retrievals from the hundreds of global AERONET sunphotometer sites, we found that three spherical-derived models (describing fine-sized dominated aerosol) and one spheroid-derived model (describing coarse-sized dominated aerosol, presumably dust) generally described the range of observed global aerosol properties. The fine-dominated models were separated mainly by their single scattering albedo (ω0), ranging from non-absorbing aerosol (ω0 ≈ 0.95) in developed urban/industrial regions, to neutrally absorbing aerosol (ω0 ≈ 0.90) in forest fire burning and developing industrial regions, to absorbing aerosol (ω0 ≈ 0.85) in regions of savanna/grassland burning. We determined the dominant model type in each region and season to create a 1 deg. x 1 deg. grid of assumed aerosol type. We used a vector radiative transfer code to create a new LUT, simulating the four aerosol models in four MODIS channels. Independent AERONET observations of spectral τ agree with the new models, indicating that the new models are suitable for use by the MODIS aerosol retrieval.

  6. Global Tropospheric Noise Maps for InSAR Observations

    NASA Astrophysics Data System (ADS)

    Yun, S. H.; Hensley, S.; Agram, P. S.; Chaubell, M.; Fielding, E. J.; Pan, L.

    2014-12-01

    Differential phase delay variation of radio waves propagating through the troposphere is the largest error source in Interferometric Synthetic Aperture Radar (InSAR) measurements, and water vapor variability in the troposphere is known to be the dominant factor. We use the precipitable water vapor (PWV) products from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors mounted on the Terra and Aqua satellites to produce tropospheric noise maps for InSAR. We estimate the slope and y-intercept of the power spectral density curve of MODIS PWV and calculate the structure function to estimate the expected tropospheric noise level as a function of distance. The results serve two purposes: 1) to provide guidance on the expected covariance matrix for geophysical modeling, and 2) to provide a quantitative basis for the science Level-1 requirements of the planned NASA-ISRO L-band SAR (NISAR) mission. We populate lookup tables of such power spectrum parameters derived from each 1-by-1 degree tile of global coverage. The MODIS data were retrieved from the OSCAR (Online Services for Correcting Atmosphere in Radar) server. Users will be able to use the lookup tables to calculate the expected tropospheric noise level of any date of MODIS data at any distance scale. Such calculations can be used for constructing covariance matrices for geophysical modeling, or for building statistics to support InSAR missions' requirements. For example, about 74% of the world had an InSAR tropospheric noise level (along a radar line of sight at an incidence angle of 40 degrees) of 2 cm or less at the 50 km distance scale during the period 2010/01/01 - 2010/01/09.

  7. Methods for Estimating Withdrawal and Return Flow by Census Block for 2005 and 2020 for New Hampshire

    USGS Publications Warehouse

    Hayes, Laura; Horn, Marilee A.

    2009-01-01

    The U.S. Geological Survey, in cooperation with the New Hampshire Department of Environmental Services, estimated the amount of water demand, consumptive use, withdrawal, and return flow for each U.S. Census block in New Hampshire for the years 2005 (current) and 2020. Estimates of domestic, commercial, industrial, irrigation, and other nondomestic water use were derived through the use and innovative integration of several State and Federal databases, and by use of previously developed techniques. The New Hampshire Water Demand database was created as part of this study to store and integrate State of New Hampshire data central to the project. Within the New Hampshire Water Demand database, a lookup table was created to link the State databases and identify water users common to more than one database. The lookup table also allowed identification of withdrawal and return-flow locations of registered and unregistered commercial, industrial, agricultural, and other nondomestic users. Geographic information system data from the State were used in combination with U.S. Census Bureau spatial data to locate and quantify withdrawals and return flow for domestic users in each census block. Analyzing and processing the most recently available data resulted in census-block estimations of 2005 water use. Applying population projections developed by the State to the data sets enabled projection of water use for the year 2020. The results for each census block are stored in the New Hampshire Water Demand database and may be aggregated to larger political areas or watersheds to assess relative hydrologic stress on the basis of current and potential water availability.

  8. Estimating skin blood saturation by selecting a subset of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Ewerlöf, Maria; Salerud, E. Göran; Strömberg, Tomas; Larsson, Marcus

    2015-03-01

    Skin blood haemoglobin saturation (S_b) can be estimated with hyperspectral imaging using the wavelength (λ) range of 450-700 nm, where haemoglobin absorption displays distinct spectral characteristics. Depending on the image size and photon transport algorithm, computations may be demanding. Therefore, this work aims to evaluate subsets with a reduced number of wavelengths for S_b estimation. White Monte Carlo simulations are performed using a two-layered tissue model with discrete values for epidermal thickness (t_epi) and the reduced scattering coefficient (μ's), mimicking an imaging setup. A detected-intensity look-up table is calculated for a range of model parameter values relevant to human skin, adding absorption effects in the post-processing. Skin model parameters, including absorbers, are: μ's(λ), t_epi, haemoglobin saturation (S_b), tissue fraction blood (f_b), and tissue fraction melanin (f_mel). The skin model paired with the look-up table allows spectra to be calculated swiftly. Three inverse models with varying numbers of free parameters are evaluated: A(S_b, f_b), B(S_b, f_b, f_mel), and C (all parameters free). Fourteen wavelength candidates are selected by analysing the maximal spectral sensitivity to S_b and minimizing the sensitivity to f_b. All possible combinations of these candidates with three, four, and 14 wavelengths, as well as the full spectral range, are evaluated for estimating S_b for 1000 randomly generated evaluation spectra. The results show that the simplified models A and B estimated S_b accurately using four wavelengths (mean error 2.2% for model B). If the number of wavelengths increased, the model complexity needed to be increased to avoid poor estimations.

  9. Efficient generation of holographic news ticker in holographic 3DTV

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-08-01

    A news ticker is used to show breaking news or news headlines in conventional 2-D broadcasting systems. For breaking news, fast creation is needed, because the information must be sent quickly. Moreover, if holographic 3-D broadcasting systems are introduced in the future, news tickers will remain. On the other hand, several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method; however, these methods either need much computation time or require a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT size and without loss of computational speed. We therefore propose a method to efficiently generate a holographic news ticker in holographic 3DTV or 3-D movies using the N-LUT method. The proposed method consists of five steps: construction of the LUT for each character; extraction of the characters in the news ticker; generation and shifting of the CGH pattern for the news ticker using the LUT for each character; composition of the hologram pattern for the 3-D video and the hologram pattern for the news ticker; and reconstruction of the holographic 3-D video with the news ticker. To validate the proposed method, a moving car in front of a castle is used as the 3-D video and the words 'HOLOGRAM CAPTION GENERATOR' are used as the news ticker. The simulation results confirmed the feasibility of the proposed method for fast generation of CGH patterns for holographic captions.

  10. Advanced Machine Learning Emulators of Radiative Transfer Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

    Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and the inversion itself. Mimicking complex codes with statistical nonlinear machine learning algorithms has very recently become a natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, while providing an estimation of uncertainty and estimations of the gradient or finite integral forms. We review the field and recent advances in emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators on toy examples, on the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and for the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
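
    A toy emulator in this spirit (a sketch only: the one-dimensional expensive_rtm stand-in, the design size, and the use of scikit-learn are all assumptions, not the AGAPE implementation) fits a Gaussian process to a small set of costly RTM runs and then answers queries from the GP, whose predictive standard deviation is exactly the quantity an acquisition function can use to decide where to sample next:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expensive_rtm(x):                 # fabricated stand-in for an RTM
            return np.sin(3.0 * x) * np.exp(-0.3 * x)

        X_design = np.linspace(0.0, 5.0, 12)[:, None]   # only 12 "costly" runs
        y_design = expensive_rtm(X_design[:, 0])

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      normalize_y=True)
        gp.fit(X_design, y_design)

        X_query = np.linspace(0.0, 5.0, 200)[:, None]
        mean, std = gp.predict(X_query, return_std=True)   # value + uncertainty
        print(float(X_query[np.argmax(std), 0]))  # where to sample next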

  11. Synchronizing Photography For High-Speed-Engine Research

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1989-01-01

    Light flashes when the shaft reaches a predetermined angle. The synchronization system facilitates visualization of flow in high-speed internal-combustion engines. Designed for cinematography and holographic interferometry, the system synchronizes a camera and light source with a predetermined rotational angle of the engine shaft. The 10-bit resolution of an absolute optical shaft encoder is adopted, and all 2^10 combinations of the 10-bit binary data are computed to their corresponding angle values. Precomputed angle values are programmed into EPROMs (erasable programmable read-only memories) for use as an angle lookup table. The system resolves shaft angle to within 0.35 degree at rotational speeds up to 73,240 revolutions per minute.
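
    Generating such an EPROM image is a one-time computation; a minimal sketch (the 360-degree scaling of the encoder codes is an assumption) is:

        RESOLUTION_BITS = 10
        COUNTS = 1 << RESOLUTION_BITS         # 1024 encoder codes

        # One table entry per 10-bit code; burned once into read-only memory.
        angle_table = [code * 360.0 / COUNTS for code in range(COUNTS)]

        def angle_from_encoder(code):
            return angle_table[code & (COUNTS - 1)]   # one read, no arithmetic

        print(angle_from_encoder(512))        # 180.0 degrees
        print(360.0 / COUNTS)                 # ~0.35 degree resolution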

  12. Import Manipulate Plot RELAP5/MOD3 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, K. R.

    1999-10-05

    XMGR5 was derived from an XY plotting tool called ACE/gr, which is copyrighted by Paul J. Turner and in the public domain. The interactive version of ACE/gr is xmgr, which includes a graphical interface to the X Window System. Enhancements to xmgr have been developed which import, manipulate, and plot data from the RELAP5/MOD3, MELCOR, FRAPCON, and SINDA codes, and from NRC databank files. Capabilities include two-phase property table lookup functions, an equation interpreter, arithmetic library functions, and units conversion. Plot titles, labels, legends, and narrative can be displayed using Latin or Cyrillic alphabets.

  13. An efficient energy response model for liquid scintillator detectors

    NASA Astrophysics Data System (ADS)

    Lebanowski, Logan; Wan, Linyan; Ji, Xiangpan; Wang, Zhe; Chen, Shaomin

    2018-05-01

    Liquid scintillator detectors are playing an increasingly important role in low-energy neutrino experiments. In this article, we describe a generic energy response model of liquid scintillator detectors that provides energy estimations of sub-percent accuracy. This model fits a minimal set of physically-motivated parameters that capture the essential characteristics of scintillator response and that can naturally account for changes in scintillator over time, helping to avoid associated biases or systematic uncertainties. The model employs a one-step calculation and look-up tables, yielding an immediate estimation of energy and an efficient framework for quantifying systematic uncertainties and correlations.

  14. Effect of black carbon on dust property retrievals from satellite observations

    NASA Astrophysics Data System (ADS)

    Lin, Tang-Huang; Yang, Ping; Yi, Bingqi

    2013-01-01

    The effect of black carbon on the optical properties of polluted mineral dust is studied from a satellite remote-sensing perspective. By including the auxiliary data of surface reflectivity and aerosol mixing weight, the optical properties of mineral dust, or more specifically, the aerosol optical depth (AOD) and single-scattering albedo (SSA), can be retrieved with improved accuracy. Precomputed look-up tables based on the principle of the Deep Blue algorithm are utilized in the retrieval. The mean differences between the retrieved results and the corresponding ground-based measurements are smaller than 1% for both AOD and SSA in the case of pure dust. However, the retrievals can be underestimated by as much as 11.9% for AOD and overestimated by up to 4.1% for SSA in the case of polluted dust with an estimated 10% (in terms of the number-density mixing ratio) of soot aggregates if the black carbon effect on dust aerosols is neglected.

  15. Development of advanced structural analysis methodologies for predicting widespread fatigue damage in aircraft structures

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.

    1995-01-01

    NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.

  16. A method for the fast estimation of a battery entropy-variation high-resolution curve - Application on a commercial LiFePO4/graphite cell

    NASA Astrophysics Data System (ADS)

    Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy

    2016-11-01

    The entropy-variation of a battery is responsible for heat generation or consumption during operation and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method which is considered as a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.

  17. Landsat-5 Thematic Mapper outgassing effects

    USGS Publications Warehouse

    Helder, D.L.; Micijevic, E.

    2004-01-01

    A periodic 3% to 5% variation in detector response affecting both image and internal calibrator (IC) data has been observed in bands 5 and 7 of the Landsat-5 Thematic Mapper. The source for this variation is thought to be an interference effect due to buildup of an ice-like contaminant film on a ZnSe window, covered with an antireflective coating (ARC), of the cooled dewar containing these detectors. Periodic warming of the dewar is required in order to remove the contaminant and restore detector response to an uncontaminated level. These effects in the IC data have been characterized over four individual outgassing cycles using thin-film models to estimate transmittance of the window/ARC and ARC/contaminant film stack throughout the instrument lifetime. Based on the results obtained from this modeling, a lookup table procedure has been implemented that provides correction factors to improve the calibration accuracy of bands 5 and 7 by approximately 5%.

  18. A Model-Based Approach for the Measurement of Eye Movements Using Image Processing

    NASA Technical Reports Server (NTRS)

    Sung, Kwangjae; Reschke, Millard F.

    1997-01-01

    This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as the droopy eyelids and light reflections while maintaining the measurement resolution available by the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search method of pupil candidates using pixel coordinate reference lookup tables optimizes the processing requirements for a least square fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.

  19. Designing Image Operators for MRI-PET Image Fusion of the Brain

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.

    2006-09-01

    Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities, combining anatomy (Magnetic Resonance Imaging, or MRI) and functional information (Positron Emission Tomography, or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach to image fusion taking advantage mainly of the HSL (Hue, Saturation and Luminosity) color space, in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
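
    One natural reading of an HSL-style fusion (a sketch with invented thresholds and a hue ramp of our choosing, not the paper's operators) lets the MRI intensity drive luminosity so anatomy stays crisp, while the PET value drives hue and saturation so function shows as color:

        import colorsys
        import numpy as np

        rng = np.random.default_rng(0)
        mri = rng.random((4, 4))              # normalized MRI slice (anatomy)
        pet = rng.random((4, 4))              # normalized PET slice (function)

        def fuse(mri_px, pet_px):
            hue = 0.66 * (1.0 - pet_px)       # blue (cold) to red (hot)
            sat = float(pet_px > 0.2)         # desaturate weak PET signal
            lum = mri_px                      # anatomy sets brightness
            return colorsys.hls_to_rgb(hue, lum, sat)

        rgb = np.array([[fuse(m, p) for m, p in zip(mr, pr)]
                        for mr, pr in zip(mri, pet)])
        print(rgb.shape)                      # (4, 4, 3) fused RGB image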

  20. Tropospheric Ozone from the TOMS TDOT (TOMS-Direct-Ozone-in-Troposphere) Technique During SAFARI-2000

    NASA Technical Reports Server (NTRS)

    Stone, J. B.; Thompson, A. M.; Frolov, A. D.; Hudson, R. D.; Bhartia, P. K. (Technical Monitor)

    2002-01-01

    There are a number of published residual-type methods for deriving tropospheric ozone from TOMS (Total Ozone Mapping Spectrometer) data. The basic concept of these methods is that, within a zone of constant stratospheric ozone, the tropospheric ozone column can be computed by subtracting stratospheric ozone from the TOMS Level 2 total ozone column. We used the modified-residual method for retrieving tropospheric ozone during SAFARI-2000 and found disagreements with in-situ ozone data over Africa in September 2000. Using the newly developed TDOT (TOMS-Direct-Ozone-in-Troposphere) method, which uses TOMS radiances and a modified lookup table based on actual profiles during high-ozone-pollution periods, new maps were prepared and found to compare better with soundings over Lusaka, Zambia (15.5 S, 28 E), Nairobi, and several other African cities where MOZAIC aircraft operated in September 2000. The TDOT technique and the comparisons are described in detail.

  1. Calibration Test Set for a Phase-Comparison Digital Tracker

    NASA Technical Reports Server (NTRS)

    Boas, Amy; Li, Samuel; McMaster, Robert

    2007-01-01

    An apparatus that generates four signals at a frequency of 7.1 GHz having precisely controlled relative phases and equal amplitudes has been designed and built. This apparatus is intended mainly for use in computer-controlled automated calibration and testing of a phase-comparison digital tracker (PCDT) that measures the relative phases of replicas of the same X-band signal received by four antenna elements in an array. (The relative direction of incidence of the signal on the array is then computed from the relative phases.) The present apparatus can also be used to generate precisely phased signals for steering a beam transmitted from a phased antenna array. The apparatus (see figure) includes a 7.1-GHz signal generator, the output of which is fed to a four-way splitter. Each of the four splitter outputs is attenuated by 10 dB and fed as input to a vector modulator, wherein DC bias voltages are used to control the in-phase (I) and quadrature (Q) signal components. The bias voltages are generated by digital-to-analog-converter circuits on a control board that receives its digital control input from a computer running a LabVIEW program. The outputs of the vector modulators are further attenuated by 10 dB, then presented at high-grade radio-frequency connectors. The attenuation reduces the effects of changing mismatch and reflections. The apparatus was calibrated in a process in which the bias voltages were first stepped through all possible IQ settings. Then, in a reverse interpolation performed by use of MATLAB software, a lookup table containing 3,600 IQ settings, representing equal-amplitude phase increments of 0.1 degree, was created for each vector modulator. During operation of the apparatus, these lookup tables are used in calibrating the PCDT.
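
    The reverse-interpolation step can be sketched as follows (in Python rather than MATLAB, with an idealized vector-modulator response standing in for the measured IQ sweep): from a map of bias settings to complex output, build the inverse table of 3,600 constant-amplitude entries at 0.1-degree phase steps by nearest-neighbor search.

        import numpy as np

        levels = np.linspace(-1.0, 1.0, 64)   # candidate I and Q bias settings
        I, Q = np.meshgrid(levels, levels, indexing="ij")
        measured = I + 1j * Q                 # idealized modulator response

        target_amp = 0.7
        phases = np.deg2rad(np.arange(0.0, 360.0, 0.1))   # 3,600 entries
        targets = target_amp * np.exp(1j * phases)

        flat = measured.ravel()
        idx = np.array([np.argmin(np.abs(flat - t)) for t in targets])
        iq_table = np.stack([I.ravel()[idx], Q.ravel()[idx]], axis=1)
        print(iq_table.shape)                 # (3600, 2): one IQ pair per 0.1 deg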

  2. L5 TM radiometric recalibration procedure using the internal calibration trends from the NLAPS trending database

    USGS Publications Warehouse

    Chander, G.; Haque, Md. O.; Micijevic, E.; Barsi, J.A.

    2008-01-01

    From the Landsat program's inception in 1972 to the present, the earth science user community has benefited from a historical record of remotely sensed data. The multispectral data from the Landsat 5 (L5) Thematic Mapper (TM) sensor provide the backbone for this extensive archive. Historically, the radiometric calibration procedure for this imagery used the instrument's response to the Internal Calibrator (IC) on a scene-by-scene basis to determine the gain and offset for each detector. The IC system degraded with time causing radiometric calibration errors up to 20 percent. In May 2003 the National Landsat Archive Production System (NLAPS) was updated to use a gain model rather than the scene acquisition specific IC gains to calibrate TM data processed in the United States. Further modification of the gain model was performed in 2007. L5 TM data that were processed using IC prior to the calibration update do not benefit from the recent calibration revisions. A procedure has been developed to give users the ability to recalibrate their existing Level-1 products. The best recalibration results are obtained if the work order report that was originally included in the standard data product delivery is available. However, many users may not have the original work order report. In such cases, the IC gain look-up table that was generated using the radiometric gain trends recorded in the NLAPS database can be used for recalibration. This paper discusses the procedure to recalibrate L5 TM data when the work order report originally used in processing is not available. A companion paper discusses the generation of the NLAPS IC gain and bias look-up tables required to perform the recalibration.

  3. Screening procedure for airborne pollutants emitted from a high-tech industrial complex in Taiwan.

    PubMed

    Wang, John H C; Tsai, Ching-Tsan; Chiang, Chow-Feng

    2015-11-01

    Despite the modernization of computational techniques, atmospheric dispersion modeling remains a complicated task, as it involves large amounts of interrelated data with wide variability. The continuously growing list of regulated air pollutants also increases the difficulty of this task. To address these challenges, this study aimed to develop a screening procedure for a long-term exposure scenario by generating a site-specific lookup table of hourly averaged dispersion factors (χ/Q), which could be evaluated by downwind distance, direction, and effective plume height only. To allow for such simplification, the average plume rise was weighted with the frequency distribution of meteorological data so that the prediction of χ/Q could be decoupled from the meteorological data. To illustrate this procedure, 20 receptors around a high-tech complex in Taiwan were selected. Five consecutive years of hourly meteorological data were acquired to generate a lookup table of χ/Q, as well as two regression formulas for plume rise as functions of downwind distance, buoyancy flux, and stack height. To calculate the concentrations at the selected receptors, a six-step Excel algorithm was programmed with four years of emission records, and the 10 most critical toxics were screened out. A validation check using the Industrial Source Complex (ISC3) model with the same meteorological and emission data showed an acceptable overestimate of 6.7% in the average concentration of 10 nearby receptors. The procedure proposed in this study allows practical and focused emission management for a large industrial complex and can therefore be integrated into an air quality decision-making system.
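
    In outline, the screening step is a three-key table read (the sector count, grid values, and χ/Q magnitudes below are fabricated placeholders, not the study's site-specific table): the concentration at a receptor is the emission rate times the tabulated dispersion factor nearest to its direction, distance, and effective plume height.

        import numpy as np

        SECTORS = 16                          # wind-direction sectors (assumed)
        dists = np.array([100.0, 300.0, 1000.0, 3000.0])    # m downwind
        heights = np.array([10.0, 30.0, 60.0])              # m effective height
        rng = np.random.default_rng(4)
        chi_over_q = rng.uniform(1e-7, 1e-5, (SECTORS, len(dists), len(heights)))

        def concentration(q_gram_per_s, sector, dist_m, h_eff_m):
            i = int(np.argmin(np.abs(dists - dist_m)))
            j = int(np.argmin(np.abs(heights - h_eff_m)))
            return q_gram_per_s * chi_over_q[sector % SECTORS, i, j]  # g/m^3

        print(concentration(2.5, 3, 900.0, 25.0))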

  4. Calibration of amino acid racemization (AAR) kinetics in United States mid-Atlantic Coastal Plain Quaternary mollusks using 87Sr/86Sr analyses: Evaluation of kinetic models and estimation of regional Late Pleistocene temperature history

    USGS Publications Warehouse

    Wehmiller, J.F.; Harris, W.B.; Boutin, B.S.; Farrell, K.M.

    2012-01-01

    The use of amino acid racemization (AAR) for estimating ages of Quaternary fossils usually requires a combination of kinetic and effective temperature modeling or independent age calibration of analyzed samples. Because of the limited availability of calibration samples, age estimates are often based on model extrapolations from single calibration points over wide ranges of D/L values. Here we present paired AAR and 87Sr/86Sr results for Pleistocene mollusks from the North Carolina Coastal Plain, USA. 87Sr/86Sr age estimates, derived from the lookup table of McArthur et al. [McArthur, J.M., Howarth, R.J., Bailey, T.R., 2001. Strontium isotopic stratigraphy: LOWESS version 3: best fit to the marine Sr-isotopic curve for 0-509 Ma and accompanying look-up table for deriving numerical age. Journal of Geology 109, 155-169], provide independent age calibration over the full range of amino acid D/L values, thereby allowing comparisons of alternative kinetic models for seven amino acids. The often-used parabolic kinetic model is found to be insufficient to explain the pattern of racemization, although the kinetic pathways for valine racemization and isoleucine epimerization can be closely approximated with this function. Logarithmic and power-law regressions more accurately represent the racemization pathways for all amino acids. The reliability of a non-linear model for leucine racemization, developed and refined over the past 20 years, is confirmed by the 87Sr/86Sr age results. This age model indicates that the subsurface record (up to 80 m thick) of the North Carolina Coastal Plain spans the entire Quaternary, back to ~2.5 Ma. The calibrated kinetics derived from this age model yield an estimate of the effective temperature for the study region of 11 ± 2 °C, from which we estimate full glacial (Last Glacial Maximum - LGM) temperatures for the region on the order of 7-10 °C cooler than present. These temperatures compare favorably with independent paleoclimate information for the region. © 2011 Elsevier B.V.

  5. Monthly analysis of PM ratio characteristics and its relation to AOD.

    PubMed

    Sorek-Hamer, Meytar; Broday, David M; Chatfield, Robert; Esswein, Robert; Stafoggia, Massimo; Lepeule, Johanna; Lyapustin, Alexei; Kloog, Itai

    2017-01-01

    Airborne particulate matter (PM) is derived from diverse sources, natural and anthropogenic. Climate change processes and remote sensing measurements are affected by the PM properties, which are often lumped into homogeneous size fractions that show spatiotemporal variation. Since different sources are attributed to different geographic locations and show specific spatial and temporal PM patterns, we explored the spatiotemporal characteristics of the PM2.5/PM10 ratio in different areas. Furthermore, we examined the statistical relationships between AERONET aerosol optical depth (AOD) products, satellite-based AOD, and the PM ratio, as well as the specific PM size fractions. PM data from the northeastern United States, from San Joaquin Valley, CA, and from Italy, Israel, and France were analyzed, together with spatially and temporally co-measured AOD products obtained from the MultiAngle Implementation of Atmospheric Correction (MAIAC) algorithm. Our results suggest that when both the AERONET AOD and the AERONET fine-mode AOD are available, the AERONET AOD ratio can be a fair proxy for the ground PM ratio; we therefore recommend incorporating the fine-mode AERONET AOD in the calibration of MAIAC. Along with the relatively large variation in the observed PM ratio (especially in the northeastern United States), this shows the need to revisit the MAIAC regional assumptions on aerosol microphysical properties, which are used to generate the look-up tables (LUTs) and conduct aerosol retrievals and are currently static; adding seasonality to the aerosol microphysics used in the LUTs could help improve the accuracy of MAIAC retrievals. Our results call for further scrutiny of satellite-borne AOD, in particular its errors, limitations, and relation to the vertical aerosol profile and the particle size, shape, and composition distribution. This work is one step of the analyses required to gain a better understanding of what the satellite-based AOD represents.

  6. Depolarization Lidar Determination Of Cloud-Base Microphysical Properties

    NASA Astrophysics Data System (ADS)

    Donovan, D. P.; Klein Baltink, H.; Henzing, J. S.; de Roode, S.; Siebesma, A. P.

    2016-06-01

    The links between multiple-scattering-induced depolarization and cloud microphysical properties (e.g. cloud particle number density, effective radius, water content) have long been recognised. Previous efforts to use depolarization information in a quantitative manner to retrieve cloud microphysical properties have also been undertaken, but with limited scope and, arguably, success. In this work we present a retrieval procedure applicable to liquid stratus clouds with (quasi-)linear LWC profiles and (quasi-)constant number density profiles in the cloud-base region. This set of assumptions allows us to employ a fast and robust inversion procedure based on a lookup-table approach applied to extensive lidar Monte-Carlo multiple-scattering calculations. An example validation case is presented where the results of the inversion procedure are compared with simultaneous cloud radar observations. In non-drizzling conditions it was found, in general, that the lidar-only inversion results can be used to predict the radar reflectivity within the radar calibration uncertainty (2-3 dBZ). Results of a comparison between ground-based aerosol number concentrations and lidar-derived cloud-base number concentrations are also presented. The observed relationship between the two quantities is seen to be consistent with the results of previous studies based on aircraft-based in situ measurements.

  7. Radiation-hardened MRAM-based LUT for non-volatile FPGA soft error mitigation with multi-node upset tolerance

    NASA Astrophysics Data System (ADS)

    Zand, Ramtin; DeMara, Ronald F.

    2017-12-01

    In this paper, we have developed a radiation-hardened non-volatile lookup table (LUT) circuit utilizing spin Hall effect (SHE)-magnetic random access memory (MRAM) devices. The design is motivated by modeling the effect of radiation particles striking hybrid complementary metal oxide semiconductor/spin based circuits, and the resistive behavior of SHE-MRAM devices via established and precise physics equations. The models developed are leveraged in the SPICE circuit simulator to verify the functionality of the proposed design. The proposed hardening technique is based on using feedback transistors, as well as increasing the radiation capacity of the sensitive nodes. Simulation results show that our proposed LUT circuit can achieve multiple node upset (MNU) tolerance with more than 38% and 60% power-delay product improvement as well as 26% and 50% reduction in device count compared to the previous energy-efficient radiation-hardened LUT designs. Finally, we have performed a process variation analysis showing that the MNU immunity of our proposed circuit is realized at the cost of increased susceptibility to transistor and MRAM variations compared to an unprotected LUT design.

  8. Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.

    PubMed

    Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong

    2016-02-01

    Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built to cover more of the desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables using arbitrary hashing algorithms. Multiple-table search also lacks a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve these problems, in this paper we first regard table construction as a selection problem over a set of candidate hash functions. With a graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme that enables fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate this scheme into multiple-table search using a fast, yet reciprocal, table lookup algorithm within the adaptive weighted Hamming radius. Both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both naive construction methods and state-of-the-art hashing algorithms.
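
    As a toy illustration of multi-table hashing with Hamming-radius bucket probing (a generic scheme, not the paper's query-adaptive reciprocal method), the sketch below builds several random-hyperplane hash tables and unions the buckets probed within radius 1; all sizes and data are invented.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 32))           # toy database of 32-d vectors

L, B = 4, 12                                 # 4 tables, 12-bit hash codes
planes = [rng.normal(size=(B, 32)) for _ in range(L)]

def code(x, P):
    """Sign of each hyperplane projection, packed into an integer code."""
    return int("".join("1" if v > 0 else "0" for v in P @ x), 2)

tables = [defaultdict(list) for _ in range(L)]
for idx, x in enumerate(data):
    for t, P in enumerate(planes):
        tables[t][code(x, P)].append(idx)

def query(q, radius=1):
    """Union of buckets within the given Hamming radius, over all tables."""
    hits = set()
    for t, P in enumerate(planes):
        c = code(q, P)
        probes = [c] + [c ^ (1 << b) for b in range(B)] if radius >= 1 else [c]
        for p in probes:
            hits.update(tables[t].get(p, []))
    return hits

print(len(query(data[0])))                   # candidate set for reranking
```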

  9. Soil, water, and vegetation conditions in south Texas

    NASA Technical Reports Server (NTRS)

    Wiegand, C. L.; Gausman, H. W.; Leamer, R. W.; Richardson, A. J.; Everitt, J. H.; Gerbermann, A. H. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. The best wavelengths in the 0.4 to 2.5 micron interval were determined for detecting lead toxicity and ozone damage, distinguishing succulent from woody species, and detecting silverleaf sunflower. A perpendicular vegetation index, measuring the distance of pixels containing vegetation from the soil background line in MSS 5 and MSS 7 data space, was developed and tested as an indicator of vegetation development and crop vigor. A table lookup procedure was devised that permits rapid identification of soil background and green biomass or phenological development in LANDSAT scenes without the need for training data.

  10. Real-time vibration measurement by a spatial phase-shifting technique with a tilted holographic interferogram.

    PubMed

    Nakadate, S; Isshiki, M

    1997-01-01

    Real-time vibration measurement with a tilted holographic interferogram is presented that utilizes a real-time digital fringe processor operating on a video signal. Three intensity values, sampled at every one-third of the fringe spacing of the tilted fringes, are used to calculate the modulation term of the fringe, which is a function of the vibration amplitude. A three-dimensional lookup table performs the calculation at the TV repetition rate to give a new fringe profile that contours the vibration amplitude. Vibration modes at the resonant frequencies of a flat speaker were displayed on a monitor as the exciting frequency of the vibration was changed.

  11. Implementation of high-resolution time-to-digital converter in 8-bit microcontrollers.

    PubMed

    Bengtsson, Lars E

    2012-04-01

    This paper demonstrates how a time-to-digital converter (TDC) with sub-nanosecond resolution can be implemented in an 8-bit microcontroller using so-called "direct" methods. This means that a TDC is created using only five bidirectional digital input-output pins of a microcontroller and a few passive components (two resistors, a capacitor, and a diode). We demonstrate how a TDC for the range 1-10 μs is implemented with 0.17 ns resolution. This work also shows how to linearize the output by combining look-up tables and interpolation. © 2012 American Institute of Physics.
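
    The linearization step the abstract mentions, combining look-up tables with interpolation, can be illustrated with a minimal sketch; the calibration points below are invented, not the paper's measurements.

```python
# Minimal sketch of linearizing a converter's raw output by combining a
# coarse calibration look-up table with linear interpolation between its
# entries. The table values here are illustrative, not the paper's data.
CAL_RAW  = [0, 50, 110, 180, 260, 350]      # raw counts at known inputs
CAL_TIME = [1.0, 2.8, 4.6, 6.4, 8.2, 10.0]  # corresponding times (us)

def linearize(raw):
    """Map a raw count to time: LUT segment search, then interpolation."""
    if raw <= CAL_RAW[0]:
        return CAL_TIME[0]
    if raw >= CAL_RAW[-1]:
        return CAL_TIME[-1]
    for i in range(1, len(CAL_RAW)):
        if raw <= CAL_RAW[i]:
            # interpolate linearly inside the enclosing segment
            f = (raw - CAL_RAW[i - 1]) / (CAL_RAW[i] - CAL_RAW[i - 1])
            return CAL_TIME[i - 1] + f * (CAL_TIME[i] - CAL_TIME[i - 1])

print(linearize(95))   # ~4.15 us
```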

  12. Smart internet search engine through 6W

    NASA Astrophysics Data System (ADS)

    Goehler, Stephen; Cader, Masud; Szu, Harold

    2006-04-01

    Current Internet search engine technology is limited in its ability to display the relevant information users need. Yahoo, Google and Microsoft use lookup tables or indexes, which limits users' ability to find their desired information. While these companies have improved their results over the years by enhancing their existing technology and algorithms with specialized heuristics such as PageRank, there is a need for a next-generation smart search engine that can effectively interpret the relevance of user searches and provide the actual information requested. This paper explores whether a smarter Internet search engine can effectively fulfill a user's needs through the use of 6W representations.

  13. CELES: CUDA-accelerated simulation of electromagnetic scattering by large ensembles of spheres

    NASA Astrophysics Data System (ADS)

    Egel, Amos; Pattelli, Lorenzo; Mazzamuto, Giacomo; Wiersma, Diederik S.; Lemmer, Uli

    2017-09-01

    CELES is a freely available MATLAB toolbox to simulate light scattering by many spherical particles. Aiming at high computational performance, CELES leverages block-diagonal preconditioning, a lookup-table approach to evaluate costly functions, and massively parallel execution on NVIDIA graphics processing units using the CUDA computing platform. The combination of these techniques makes it possible to address large electrodynamic problems (>10^4 scatterers) efficiently on inexpensive consumer hardware. In this paper, we validate near- and far-field distributions against the well-established multi-sphere T-matrix (MSTM) code and discuss the convergence behavior for ensembles of different sizes, including an exemplary system comprising 10^5 particles.

  14. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ("column groups"), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 gigarows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbits/sec (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.

  15. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) faces a great challenge in real-time holographic video display systems because of the high spatial-bandwidth product (SBP) that must be produced. This paper is based on the point-cloud method and takes advantage of two properties: the propagation reversibility of Fresnel diffraction along the propagation direction, and the spatial symmetry of the fringe pattern of a point source, known as the Gabor zone plate, which can be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed. First, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored. Second, the Fresnel diffraction fringe pattern at the dummy plane is obtained. Finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) are set up to demonstrate the validity of the proposed method: under the premise of ensuring the quality of the 3D reconstruction, the proposed method shortens the computational time and improves computational efficiency.

  16. Two-dimensional interpreter for field-reversed configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinhauer, Loren, E-mail: lstein@uw.edu

    2014-08-15

    An interpretive method is developed for extracting details of the fully two-dimensional (2D) “internal” structure of field-reversed configurations (FRC) from common diagnostics. The challenge is that only external and “gross” diagnostics are routinely available in FRC experiments. Inferring such critical quantities as the poloidal flux and the particle inventory has commonly relied on a theoretical construct based on a quasi-one-dimensional approximation. Such inferences sometimes differ markedly from the more accurate, fully 2D reconstructions of equilibria. An interpreter based on a fully 2D reconstruction is needed to enable realistic within-the-shot tracking of evolving equilibrium properties. Presented here is a flexible equilibrium reconstruction with which an extensive database of equilibria was constructed. An automated interpreter then uses this database as a look-up table to extract evolving properties. This tool is applied to data from the FRC facility at Tri Alpha Energy. It yields surprising results at several points, such as the inferences that the local β (plasma pressure/external magnetic pressure) of the plasma climbs well above unity and the poloidal flux loss time is somewhat longer than previously thought, both of which arise from the full two-dimensionality of FRCs.
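
    A minimal sketch of the interpreter's core idea, under the assumption that the lookup amounts to a nearest-neighbor match of measured external signatures against a precomputed equilibrium database; the signatures and internal quantities here are random stand-ins for the real reconstruction outputs.

```python
import numpy as np

# Hypothetical database of precomputed 2D equilibria: each row pairs the
# externally measurable signatures (stand-ins for, e.g., excluded-flux
# radius and external field) with the internal quantities to be inferred
# (stand-ins for poloidal flux and particle inventory).
signatures = np.random.default_rng(2).uniform(0, 1, size=(5000, 2))
internals = np.random.default_rng(3).uniform(0, 1, size=(5000, 2))

def interpret(measured):
    """Return internal quantities of the stored equilibrium whose external
    signature best matches the measurement (nearest-neighbor lookup)."""
    d2 = np.sum((signatures - np.asarray(measured)) ** 2, axis=1)
    return internals[np.argmin(d2)]

# track evolving properties within a shot by repeating the lookup per frame
for frame in [(0.30, 0.70), (0.32, 0.68), (0.35, 0.66)]:
    print(interpret(frame))
```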

  17. Computer-implemented land use classification with pattern recognition software and ERTS digital data. [Mississippi coastal plains

    NASA Technical Reports Server (NTRS)

    Joyce, A. T.

    1974-01-01

    Significant progress has been made in the classification of surface conditions (land uses) with computer-implemented techniques based on the use of ERTS digital data and pattern recognition software. The supervised technique presently used at the NASA Earth Resources Laboratory is based on maximum likelihood ratioing with a digital table look-up approach to classification. After classification, colors are assigned to the various surface conditions (land uses) classified, and the color-coded classification is film recorded on either positive or negative 9 1/2 in. film at the scale desired. Prints of the film strips are then mosaicked and photographed to produce a land use map in the format desired. Computer extraction of statistical information is performed to show the extent of each surface condition (land use) within any given land unit that can be identified in the image. Evaluations of the product indicate that classification accuracy is well within the limits for use by land resource managers and administrators. Classifications performed with digital data acquired during different seasons indicate that the combination of two or more classifications offer even better accuracy.
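
    A minimal sketch of the digital table look-up approach to classification: quantize each band, precompute a class label for every quantized cell once, and classify each pixel with a single lookup. A minimum-distance rule stands in here for the maximum likelihood ratioing described above, and the two-band class means are invented.

```python
import numpy as np

# Invented two-band class means for three surface conditions (land uses).
means = np.array([[40, 120], [90, 60], [150, 160]], dtype=float)

Q = 8  # quantize each 8-bit band to 8 levels -> a 64-entry lookup table

def quantize(px):
    return tuple(np.clip((np.asarray(px) * Q) // 256, 0, Q - 1).astype(int))

# Precompute the class for every quantized cell once; classifying a pixel
# afterwards costs one table lookup instead of a per-pixel likelihood test.
lut = np.empty((Q, Q), dtype=int)
for i in range(Q):
    for j in range(Q):
        center = np.array([(i + 0.5) * 256 / Q, (j + 0.5) * 256 / Q])
        lut[i, j] = np.argmin(np.sum((means - center) ** 2, axis=1))

pixel = (85, 70)
print("class:", lut[quantize(pixel)])
```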

  18. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model-based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiment (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are involved in the form of their density functions. To reduce the computational effort, we use response surfaces instead of the combined system model in all stochastic analysis steps, so that Monte-Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.

  19. FE calculations on a three stage metal forming process of Sandvik Nanoflex™

    NASA Astrophysics Data System (ADS)

    Voncken, R. M. J.; van der Sluis, O.; Post, J.; Huétink, J.

    2004-06-01

    Sandvik Nanoflex™ combines good corrosion resistance with high strength. This steel has good deformability in austenitic conditions. It belongs to the group of metastable austenites, which means that during deformation a strain-induced transformation into martensite takes place. After deformation, transformation continues as a result of internal stresses. Both transformations are stress-state and temperature dependent. A constitutive model for this steel has been formulated, based on the macroscopic material behaviour measured by inductive measurements. Both the stress-assisted and the strain-induced transformation into martensite have been incorporated in this model. Path-dependent work hardening has also been taken into account. This article describes how the model is implemented in an internal Philips FE code called Crystal, which is a dedicated robust and accurate finite element solver. The implementation is based on lookup tables in combination with feed-forward neural networks. The radial return method is used to determine the material state during and after plastic flow, however, it has been extended to cope with the stiff character of the partial differential equation that describes the transformation behaviour.

  20. RELAP-7 Progress Report. FY-2015 Optimization Activities Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Ray Alden; Zou, Ling; Andrs, David

    2015-09-01

    This report summarily documents the optimization activities on RELAP-7 for FY-2015. It includes the migration from the analytical stiffened gas equation of state for both the vapor and liquid phases to accurate and efficient property evaluations for both equilibrium and metastable (nonequilibrium) states using the Spline-Based Table Look-up (SBTL) method with the IAPWS-95 properties for steam and water. It also includes the initiation of realistic closure models based, where appropriate, on the U.S. Nuclear Regulatory Commission's TRACE code. It also describes an improved entropy viscosity numerical stabilization method for the nonequilibrium two-phase flow model of RELAP-7. For ease of presentation to the reader, the nonequilibrium two-phase flow model used in RELAP-7 is briefly presented, though for detailed explanation the reader is referred to the RELAP-7 Theory Manual [R.A. Berry, J.W. Peterson, H. Zhang, R.C. Martineau, H. Zhao, L. Zou, D. Andrs, "RELAP-7 Theory Manual," Idaho National Laboratory INL/EXT-14-31366 (rev. 1), February 2014].

  1. Adaptive local linear regression with application to printer color management.

    PubMed

    Gupta, Maya R; Garcia, Eric K; Chin, Erika

    2008-06-01

    Local learning methods, such as local linear regression and nearest neighbor classifiers, base estimates on nearby training samples, or neighbors. Usually, the number of neighbors used in estimation is fixed to a global "optimal" value chosen by cross-validation. This paper proposes adapting the number of neighbors used for estimation to the local geometry of the data, without the need for cross-validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested for estimating lookup tables for color management. Significant improvements in error metrics are shown, indicating that enclosing neighborhoods may be a promising adaptive neighborhood definition for other local learning tasks as well, depending on the density of training samples.
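
    A minimal sketch of using local linear regression to fill one cell of a color-management lookup table; a fixed k-nearest neighborhood stands in for the paper's adaptive enclosing neighborhoods, and the toy printer response is invented.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(200, 3))             # device RGB training samples
Y = X @ np.array([[0.9], [0.4], [0.2]]) + 0.05   # invented printer response

def local_linear(x, k=12):
    """Estimate the response at x from its k nearest training samples by
    fitting a local affine model (fixed k, unlike the paper's adaptive
    enclosing neighborhoods)."""
    nn = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    A = np.hstack([X[nn], np.ones((k, 1))])      # affine design matrix
    coef, *_ = np.linalg.lstsq(A, Y[nn], rcond=None)
    return np.append(x, 1.0) @ coef

# fill one cell of the color-management lookup table
print(local_linear(np.array([0.5, 0.5, 0.5])))   # expect ~0.8
```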

  2. Hardware Implementation of 32-Bit High-Speed Direct Digital Frequency Synthesizer

    PubMed Central

    Ibrahim, Salah Hasan; Ali, Sawal Hamid Md.; Islam, Md. Shabiul

    2014-01-01

    The design and implementation of a high-speed direct digital frequency synthesizer are presented. A modified Brent-Kung parallel adder is combined with a pipelining technique to improve the speed of the system. A gated clock technique is proposed to reduce the number of registers in the phase accumulator design. The quarter-wave symmetry technique is used to store only one quarter of the sine wave. The ROM lookup table (LUT) is partitioned into three 4-bit sub-ROMs based on an angular decomposition technique and a trigonometric identity. Exploiting the advantages of sine-cosine symmetry together with XOR logic gates, one sub-ROM block can be removed from the design. These techniques compress the ROM to 368 bits, a compression ratio of 534.2:1, using only two adders, two multipliers, and XOR gates, with a high frequency resolution of 0.029 Hz. These techniques make the direct digital frequency synthesizer an attractive candidate for wireless communication applications. PMID:24991635
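
    The quarter-wave symmetry trick can be sketched in a few lines: store only the first quadrant of the sine and reconstruct the other three by reflection and negation, with a phase accumulator sweeping the table. The table size and tuning word below are illustrative, not the paper's design.

```python
import math

# Store only the first quadrant of a sine period; reconstruct the rest by
# reflection and negation (the quarter-wave symmetry technique).
N = 64                                            # entries for one quadrant
QUARTER = [math.sin(math.pi / 2 * i / N) for i in range(N + 1)]

def sine_from_quarter(phase_idx, period=4 * N):
    """Full-period sine from the quarter table; phase_idx in [0, period)."""
    quad, off = divmod(phase_idx % period, N)
    if quad == 0:
        return QUARTER[off]
    if quad == 1:
        return QUARTER[N - off]
    if quad == 2:
        return -QUARTER[off]
    return -QUARTER[N - off]

# phase accumulator: the tuning word sets the output frequency
acc, word, samples = 0, 7, []
for _ in range(10):
    samples.append(sine_from_quarter(acc))
    acc += word
print(samples[:3])
```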

  3. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1979-01-01

    The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.

  4. Characterisation of the n-colour printing process using the spot colour overprint model.

    PubMed

    Deshpande, Kiran; Green, Phil; Pointer, Michael R

    2014-12-29

    This paper is aimed at reproducing solid spot colours using n-colour separation. A simplified numerical method, called the spot colour overprint (SCOP) model, was used for characterising the n-colour printing process. This model was originally developed for estimating spot colour overprints. It was extended to be used as a generic forward characterisation model for the n-colour printing process. The inverse printer model, based on a look-up table, was implemented to obtain the colour separation for the n-colour printing process. Finally, real-world spot colours were reproduced using 7-colour separation on a lithographic offset printing process. The colours printed with 7 inks were compared against the original spot colours to evaluate the accuracy. The results show good accuracy, with a mean CIEDE2000 value between the target colours and the printed colours of 2.06. The proposed method can be used successfully to reproduce spot colours, which can potentially save significant time and cost in the printing and packaging industry.

  5. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844

  6. The Advanced Gamma-ray Imaging System (AGIS): Real Time Stereoscopic Array Trigger

    NASA Astrophysics Data System (ADS)

    Byrum, K.; Anderson, J.; Buckley, J.; Cundiff, T.; Dawson, J.; Drake, G.; Duke, C.; Haberichter, B.; Krawzcynski, H.; Krennrich, F.; Madhavan, A.; Schroedter, M.; Smith, A.

    2009-05-01

    Future large arrays of Imaging Atmospheric Cherenkov Telescopes (IACTs) such as AGIS and CTA are conceived to comprise 50-100 individual telescopes, each having a camera with 10^3 to 10^4 pixels. To maximize the capabilities of such IACT arrays with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We describe the design of a stereoscopic array trigger that calculates image parameters and then correlates them across a subset of telescopes. Fast Field Programmable Gate Array technology allows lookup tables to be used at the array trigger level to form a real-time pattern recognition trigger that capitalizes on the multiple viewpoints of the shower at different shower core distances. A proof-of-principle system is currently under construction. It is based on 400 MHz FPGAs, and the goal is camera trigger rates of up to 10 MHz and tunable cosmic-ray background suppression at the array level.

  7. Aerodynamic Characteristics of SC1095 and SC1094 R8 Airfoils

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2003-01-01

    Two airfoils are used on the main rotor blade of the UH-60A helicopter, the SC1095 and the SC1094 R8. Measurements of the section lift, drag, and pitching moment have been obtained in ten wind tunnel tests for the SC1095 airfoil, and in five of these tests, measurements have also been obtained for the SC1094 R8. The ten wind tunnel tests are characterized and described in the present study. A number of fundamental parameters measured in these tests are compared and an assessment is made of the adequacy of the test data for use in look-up tables required by lifting-line calculation methods.

  8. An Improved Method for Real-Time 3D Construction of DTM

    NASA Astrophysics Data System (ADS)

    Wei, Yi

    This paper discusses the real-time optimized construction of DTM via two measures. One is to improve the coordinate transformation of discrete points acquired from lidar: after processing 10,000 data points, formula-based transformation took 0.810 s while table look-up transformation took 0.188 s, indicating that the latter is superior. The other is to adjust the density of the point cloud acquired from lidar: an appropriate proportion of the data points is used for 3D construction to meet different needs for 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.
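
    A minimal sketch of the table look-up transformation idea: precompute sine/cosine tables once at a fixed angular resolution (0.1 degree is assumed here) and rotate each lidar point by table lookup instead of per-point trigonometric calls.

```python
import math

# Precompute trig tables once; per-point transformation then avoids the
# repeated sin/cos evaluation that dominates formula-based conversion.
STEP = 0.1                                   # assumed angular resolution (deg)
SIN = [math.sin(math.radians(a * STEP)) for a in range(3600)]
COS = [math.cos(math.radians(a * STEP)) for a in range(3600)]

def rotate_lut(x, y, angle_deg):
    """Rotate (x, y) using the precomputed sine/cosine tables."""
    i = int(round(angle_deg / STEP)) % 3600
    return x * COS[i] - y * SIN[i], x * SIN[i] + y * COS[i]

print(rotate_lut(1.0, 0.0, 30.0))            # ~ (0.866, 0.5)
```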

  9. Digital image film generation: from the photoscientist's perspective

    USGS Publications Warehouse

    Boyd, John E.

    1982-01-01

    The technical sophistication of photoelectronic transducers, integrated circuits, and laser-beam film recorders has made digital imagery an alternative to traditional analog imagery for remote sensing. Because a digital image is stored in discrete digital values, image enhancement is possible before the data are converted to a photographic image. To create a special film-reproduction curve - which can simulate any desired gamma, relative film speed, and toe/shoulder response - the digital-to-analog transfer function of the film recorder is uniquely defined and implemented by a lookup table in the film recorder. Because the image data are acquired in spectral bands, false-color composites also can be given special characteristics by selecting a reproduction curve tailored for each band.
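
    A minimal sketch of defining such a digital-to-analog transfer function as a lookup table; the gamma value and the smoothstep used to mimic toe/shoulder roll-off are illustrative choices, not the recorder's actual curve.

```python
# Build a special film-reproduction curve as a 256-entry lookup table:
# a basic gamma combined with a smoothstep that adds gentle toe and
# shoulder roll-off (curve shape is illustrative only).
GAMMA = 2.2
LUT = []
for code in range(256):
    x = code / 255.0
    y = x ** (1.0 / GAMMA)        # desired gamma
    y = y * y * (3 - 2 * y)       # smoothstep: toe/shoulder response
    LUT.append(round(255 * y))

def record(pixel_value):
    """Map an 8-bit image value through the reproduction curve."""
    return LUT[pixel_value & 0xFF]

print(record(64), record(128), record(192))
```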

  10. Repetition code of 15 qubits

    NASA Astrophysics Data System (ADS)

    Wootton, James R.; Loss, Daniel

    2018-05-01

    The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16 qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
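
    Lookup table decoding of a repetition code can be sketched as follows: enumerate all error patterns once, keep the lowest-weight pattern for each syndrome, and decode by applying the stored correction. The table here is built from ideal minimum-weight statistics, whereas the experiment built its table from measured data.

```python
from itertools import product

d = 5  # code distance: 5 data qubits, 4 pairwise syndrome bits

def syndrome(err):
    """Parity of neighboring pairs, as one round of ancilla measurements."""
    return tuple(err[i] ^ err[i + 1] for i in range(d - 1))

# Build the lookup table: lowest-weight error pattern per syndrome.
table = {}
for err in product([0, 1], repeat=d):
    s, w = syndrome(err), sum(err)
    if s not in table or w < sum(table[s]):
        table[s] = err

def decode(received):
    """Apply the stored correction; all bits then agree on the logical value
    (assuming the true error had weight at most (d - 1) // 2)."""
    corr = table[syndrome(received)]
    return received[0] ^ corr[0]

print(decode((0, 1, 0, 0, 0)))   # single flip on qubit 1 -> logical 0
```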

  11. Performance optimization of internet firewalls

    NASA Astrophysics Data System (ADS)

    Chiueh, Tzi-cker; Ballman, Allen

    1997-01-01

    Internet firewalls control the data traffic in and out of an enterprise network by checking network packets against a set of rules that embodies an organization's security policy. Because rule checking is computationally more expensive than routing-table look-up, it could become a potential bottleneck for scaling up the performance of IP routers, which typically implement firewall functions in software. In this paper, we analyze the performance problems associated with firewalls, particularly packet filters, propose a connection cache to amortize the costly security check over the packets in a connection, and report preliminary performance results of a trace-driven simulation showing that the average packet check time can be reduced by a factor of at least 2.5.
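
    A minimal sketch of the connection-cache idea: the first packet of a flow pays the full rule scan, and subsequent packets of the same connection hit a dictionary keyed by the 5-tuple. The rules and addresses are invented.

```python
import ipaddress

# Invented rule set: (source network, destination port) -> verdict.
RULES = [
    ("10.0.0.0/8", 80, "allow"),
    ("0.0.0.0/0", 23, "deny"),
]

def check_rules(src, dport):
    """Slow path: linear scan of the rule list."""
    for net, port, verdict in RULES:
        if dport == port and ipaddress.ip_address(src) in ipaddress.ip_network(net):
            return verdict
    return "deny"                                  # default-deny policy

cache = {}

def filter_packet(src, dst, sport, dport, proto="tcp"):
    """Fast path: later packets of a connection hit the 5-tuple cache."""
    key = (src, dst, sport, dport, proto)
    if key not in cache:
        cache[key] = check_rules(src, dport)       # one rule scan per flow
    return cache[key]

print(filter_packet("10.1.2.3", "192.0.2.1", 51000, 80))   # rule scan
print(filter_packet("10.1.2.3", "192.0.2.1", 51000, 80))   # cache hit
```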

  12. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform.

    PubMed

    Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa

    2012-11-01

    Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications, but their quality and computation speed are not optimal for real-time comparison with radiography acquired with x-ray sources of different energies. In this paper, the authors performed polyenergetic forward projections using the open computing language (OpenCL) in a parallel computing ecosystem consisting of a CPU and a general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of the sites of interest are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n), and the x-ray fluence is the weighted sum of the exponential of the line integral over all energy bins, with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three regions (air, gray/white matter, and bone) for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of a CPU and a GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored a task-overlapping strategy and a sequential method for generating the first and subsequent digitally reconstructed radiographs (DRRs). A dispatcher was designed to drive the high-degree parallelism of the task-overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL/GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on the GPGPU for one energy bin, in comparison to 8.83 s on the CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s on the CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL/GPGPU-based implementations was on the order of 10^-6 and was virtually indistinguishable. The task-overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and subsequent DRRs, respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ/ρ tables for simulating realistic polyenergetic projection images and optimized the processing speed with parallel computing using a GPGPU/OpenCL-based implementation. The computation time was fast enough (0.3 s per projection image) for real-time IGRT (image-guided radiotherapy) applications.
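
    A minimal sketch of the polyenergetic forward projection itself (without the OpenCL parallelization): look up a per-tissue, per-energy attenuation coefficient for each voxel on a ray, form the line integral per energy bin, and take the spectrum-weighted sum of exponentials. The attenuation values, spectrum weights, and phantom are invented stand-ins for the NIST tables and segmented CT described above.

```python
import numpy as np

rng = np.random.default_rng(8)
# Invented attenuation lookup: rows = tissue type, cols = energy bin (1/mm).
# A real implementation would tabulate NIST mu/rho values per tissue/energy.
MU = np.array([[0.0002, 0.0001],   # air
               [0.02,   0.015],    # soft tissue
               [0.06,   0.04]])    # bone
W = np.array([0.7, 0.3])           # invented spectrum weights per energy bin

labels = rng.integers(0, 3, size=(64, 64, 64))   # segmented toy phantom

def polyenergetic_ray(voxels_on_ray, seg_lengths_mm):
    """Fluence at one detector pixel: weighted sum over energy bins of the
    exponential of the attenuation line integral along the ray."""
    mus = MU[labels[tuple(np.array(voxels_on_ray).T)]]       # (n_vox, n_bins)
    line_integral = (mus * seg_lengths_mm[:, None]).sum(axis=0)
    return float((W * np.exp(-line_integral)).sum())

# a straight ray through the volume with unit segment lengths
ray = [(i, 32, 32) for i in range(64)]
print(polyenergetic_ray(ray, np.ones(len(ray))))
```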

  13. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected for the mismatch and process variation introduced during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
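
    The approximately-linear FPN calibration can be sketched on a toy logarithmic-response array: two known stimuli suffice to fit a per-pixel degree-1 polynomial that removes invented offset and gain mismatch, even though each pixel's response to the stimulus itself is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(6)
H, W = 4, 4
# Invented per-pixel offset/gain mismatch for a logarithmic-response array.
offset = rng.normal(0.0, 2.0, size=(H, W))
gain = rng.normal(1.0, 0.05, size=(H, W))

def pixel_response(log_stimulus):
    """Monotonic response with fixed pattern noise; the argument plays the
    role of the (already logarithmic) function of the light stimulus."""
    return offset + gain * (20.0 * log_stimulus)

# Calibrate a degree-1 polynomial per pixel from two known stimuli; the
# monotonic response makes this approximately-linear FPN calibration work.
s1, s2 = 1.0, 3.0
y1, y2 = pixel_response(s1), pixel_response(s2)
a = (s2 - s1) / (y2 - y1)              # per-pixel linear coefficients
b = s1 - a * y1

corrected = a * pixel_response(2.0) + b
print(np.allclose(corrected, 2.0))     # FPN removed: all pixels agree
```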

  14. Atmospheric radiation modeling of galactic cosmic rays using LRO/CRaTER and the EMMREM model with comparisons to balloon and airline based measurements

    NASA Astrophysics Data System (ADS)

    Joyce, C. J.; Schwadron, N. A.; Townsend, L. W.; deWet, W. C.; Wilson, J. K.; Spence, H. E.; Tobiska, W. K.; Shelton-Mur, K.; Yarborough, A.; Harvey, J.; Herbst, A.; Koske-Phillips, A.; Molina, F.; Omondi, S.; Reid, C.; Reid, D.; Shultz, J.; Stephenson, B.; McDevitt, M.; Phillips, T.

    2016-09-01

    We provide an analysis of the galactic cosmic ray radiation environment of Earth's atmosphere using measurements from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) aboard the Lunar Reconnaissance Orbiter (LRO) together with the Badhwar-O'Neil model and dose lookup tables generated by the Earth-Moon-Mars Radiation Environment Module (EMMREM). This study demonstrates an updated atmospheric radiation model that uses new dose tables to improve the accuracy of the modeled dose rates. Additionally, a method for computing geomagnetic cutoffs is incorporated into the model in order to account for location-dependent effects of the magnetosphere. Newly available measurements of atmospheric dose rates from instruments aboard commercial aircraft and high-altitude balloons enable us to evaluate the accuracy of the model in computing atmospheric dose rates. When compared to the available observations, the model seems to be reasonably accurate in modeling atmospheric radiation levels, overestimating airline dose rates by an average of 20%, which falls within the uncertainty limit recommended by the International Commission on Radiation Units and Measurements (ICRU). Additionally, measurements made aboard high-altitude balloons during simultaneous launches from New Hampshire and California provide an additional comparison to the model. We also find that the newly incorporated geomagnetic cutoff method enables the model to represent radiation variability as a function of location with sufficient accuracy.

  15. A Projection Quality-Driven Tube Current Modulation Method in Cone-Beam CT for IGRT: Proof of Concept.

    PubMed

    Men, Kuo; Dai, Jianrong

    2017-12-01

    To develop a projection quality-driven tube current modulation method in cone-beam computed tomography for image-guided radiotherapy based on the prior attenuation information obtained by the planning computed tomography, and then evaluate its effect on a reduction in the imaging dose. The QCKV-1 phantom with different thicknesses (0-400 mm) of solid water upon it was used to simulate different attenuation (μ). Projections were acquired with a series of tube current-exposure time product (mAs) settings, and a 2-dimensional contrast to noise ratio was analyzed for each projection to create a lookup table of mAs versus 2-dimensional contrast to noise ratio, μ. Before a patient underwent computed tomography, the maximum attenuation [Formula: see text] within the 95% range of each projection angle (θ) was estimated according to the planning computed tomography images. Then, a desired 2-dimensional contrast to noise ratio value was selected, and the mAs setting at θ was calculated with the lookup table of mAs versus 2-dimensional contrast to noise ratio, [Formula: see text]. Three-dimensional cone-beam computed tomography images were reconstructed using the projections acquired with the selected mAs. The imaging dose was evaluated with a polymethyl methacrylate dosimetry phantom in terms of volume computed tomography dose index. Image quality was analyzed using a Catphan 503 phantom with an oval body annulus and a pelvis phantom. For the Catphan 503 phantom, the cone-beam computed tomography image obtained by the projection quality-driven tube current modulation method had a similar quality to that of conventional cone-beam computed tomography. However, the proposed method could reduce the imaging dose by 16% to 33% to achieve an equivalent contrast to noise ratio value. For the pelvis phantom, the structural similarity index was 0.992 with a dose reduction of 39.7% for the projection quality-driven tube current modulation method. The proposed method could reduce the additional dose to the patient while not degrading the image quality for cone-beam computed tomography. The projection quality-driven tube current modulation method could be especially beneficial to patients who undergo cone-beam computed tomography frequently during a treatment course.

  16. Determination of circumsolar radiation from Meteosat Second Generation

    NASA Astrophysics Data System (ADS)

    Reinhardt, B.; Buras, R.; Bugliaro, L.; Wilbert, S.; Mayer, B.

    2014-03-01

    Reliable data on circumsolar radiation, which is caused by the scattering of sunlight by cloud or aerosol particles, are becoming more and more important for the resource assessment and design of concentrating solar technologies (CSTs). However, measuring circumsolar radiation is demanding and only very limited data sets are available. As a step towards bridging this gap, a method was developed that allows circumsolar radiation to be determined from cirrus cloud properties retrieved by the geostationary satellites of the Meteosat Second Generation (MSG) family. The method takes output from the COCS algorithm to generate a cirrus mask from MSG data and then uses the retrieval algorithm APICS to obtain the optical thickness and the effective radius of the detected cirrus, which in turn are used to determine the circumsolar radiation from a pre-calculated look-up table. The look-up table was generated from extensive calculations using a specifically adjusted version of the Monte Carlo radiative transfer model MYSTIC and by developing a fast yet precise parameterization. APICS was also improved so that it determines the surface albedo, which is needed for the cloud property retrieval, in a self-consistent way instead of using external data. Furthermore, it was extended to consider new ice particle shapes to allow for an uncertainty analysis concerning this parameter. We found that lack of knowledge of the ice particle shape leads to an uncertainty of up to 50%. A validation with one year of ground-based measurements shows, however, that the frequency distribution of the circumsolar radiation can be characterized well with typical ice particle shape mixtures, which feature either smooth or severely roughened particle surfaces. However, when comparing instantaneous values, timing and amplitude errors become evident. For the circumsolar ratio (CSR) this is reflected in a mean absolute deviation (MAD) of 0.11 for both employed particle shape mixtures, and a bias of 4% and 11% for the mixtures with smooth and roughened particles, respectively. If measurements with sub-scale cumulus clouds within the relevant satellite pixels are manually excluded, the instantaneous agreement between satellite and ground measurements improves. For a two-month time series, for which a manual screening of all-sky images was performed, MAD values of 0.08 and 0.07 were obtained for the two employed ice particle mixtures, respectively.

  17. Mapping high-resolution incident photosynthetically active radiation over land surfaces from MODIS and GOES satellite data

    NASA Astrophysics Data System (ADS)

    Liang, S.; Wang, K.; Wang, D.; Townshend, J.; Running, S.; Tsay, S.

    2008-05-01

    Incident photosynthetically active radiation (PAR) is a key variable required by almost all terrestrial ecosystem models. Many radiation efficiency models linearly relate canopy productivity to the absorbed PAR. Unfortunately, the spatial and temporal resolutions of the current incident PAR products, estimated from remotely sensed data or calculated by radiation models, are not sufficient for carbon cycle modeling and various applications. In this study, we aim to develop incident PAR products at the one-kilometer scale from multiple satellite sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Geostationary Operational Environmental Satellite (GOES) sensor. We first developed a look-up table approach to estimate an instantaneous incident PAR product from MODIS (Liang et al., 2006). The temporal observations of each pixel are used to estimate land surface reflectance, and look-up tables of both aerosol and cloud are searched, based on the top-of-atmosphere reflectance and surface reflectance, to determine incident PAR. The incident PAR product includes both the direct and diffuse components. The calculation of daily integrated PAR using two different methods has also been developed (Wang et al., 2008a). A similar algorithm has been further extended to GOES data (Wang et al., 2008b; Zheng et al., 2008). Extensive validation activities were conducted to evaluate the algorithms and products using ground measurements from FLUXNET and other networks. They are also compared with other satellite products. The results indicate that our approaches can produce a reasonable PAR product at 1 km resolution. We have generated 1 km incident PAR products over North America for several years, which are freely available to the science community. Liang, S., T. Zheng, R. Liu, H. Fang, S. C. Tsay, S. Running, (2006), Estimation of incident photosynthetically active radiation from MODIS data, Journal of Geophysical Research - Atmospheres, 111, D15208, doi:10.1029/2005JD006730. Wang, D., S. Liang, and T. Zheng, (2008a), Integrated daily PAR from MODIS, International Journal of Remote Sensing, revised. Wang, K., S. Liang, T. Zheng and D. Wang, (2008b), Simultaneous estimation of surface photosynthetically active radiation and albedo from GOES, Remote Sensing of Environment, revised. Zheng, T., S. Liang, K. Wang, (2008), Estimation of incident PAR from GOES imagery, Journal of Applied Meteorology and Climatology, in press.

  18. Rigor in electronic health record knowledge representation: Lessons learned from a SNOMED CT clinical content encoding exercise.

    PubMed

    Monsen, Karen A; Finn, Robert S; Fleming, Thea E; Garner, Erin J; LaValla, Amy J; Riemer, Judith G

    2016-01-01

    Rigor in clinical knowledge representation is a necessary foundation for meaningful interoperability, exchange and reuse of electronic health record (EHR) data. It is critical for clinicians to understand the principles and implications of using clinical standards for knowledge representation within EHRs. To educate clinicians and students about knowledge representation, and to evaluate their success in applying the manual lookup method for assigning Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) concept identifiers using formally mapped concepts from the Omaha System interface terminology. Clinicians who were students in a doctoral nursing program conducted 21 lookups for Omaha System terms in publicly available SNOMED CT browsers. Lookups were deemed successful if results matched exactly with the corresponding code from the January 2013 SNOMED CT-Omaha System terminology cross-map. Of the 21 manual lookups attempted, 12 (57.1%) were successful. Errors were due to semantic gaps, differences in granularity and synonymy, or partial term matching. Achieving rigor in clinical knowledge representation across settings, vendors and health systems is a globally recognized challenge. Cross-maps have the potential to improve rigor in SNOMED CT encoding of clinical data. Further research is needed to evaluate the outcomes of using terminology cross-maps to encode clinical terms with SNOMED CT concept identifiers based on interface terminologies.

  19. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on the assessment of the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.

  20. Quantitative analysis of the z-spectrum using a numerically simulated look-up table: Application to the healthy human brain at 7T.

    PubMed

    Geades, Nicolas; Hunt, Benjamin A E; Shah, Simon M; Peters, Andrew; Mougin, Olivier E; Gowland, Penny A

    2017-08-01

    To develop a method that fits a multipool model to z-spectra acquired from non-steady state sequences, taking into account the effects of variations in T1 or B1 amplitude, and to report results estimating the parameters of a four-pool model describing the z-spectrum of the healthy brain. We compared measured spectra with a look-up table (LUT) of possible spectra and investigated the potential advantages of simultaneously considering spectra acquired at different saturation powers (coupled spectra) to provide sensitivity to a range of different physicochemical phenomena. The LUT method provided reproducible results in healthy controls. The average values of the macromolecular pool sizes measured in white matter (WM) and gray matter (GM) of 10 healthy volunteers were 8.9% ± 0.3% (intersubject standard deviation) and 4.4% ± 0.4%, respectively, whereas the average nuclear Overhauser effect pool sizes in WM and GM were 5% ± 0.1% and 3% ± 0.1%, respectively, and average amide proton transfer pool sizes in WM and GM were 0.21% ± 0.03% and 0.20% ± 0.02%, respectively. The proposed method demonstrated increased robustness when compared with existing methods (such as Lorentzian fitting and asymmetry analysis) while yielding fully quantitative results. The method can be adjusted to measure other parameters relevant to the z-spectrum. Magn Reson Med 78:645-655, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
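
    A minimal sketch of the look-up table fitting idea: precompute candidate z-spectra on a parameter grid with a forward model, then pick the grid entry minimizing the least-squares distance to the measured spectrum. A sum-of-Lorentzians toy model stands in for the full multipool simulation, and all pool parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
offsets = np.linspace(-5, 5, 61)                 # saturation offsets (ppm)

def simulate_spectrum(params):
    """Toy forward model: a Lorentzian dip per pool (amp, position, width)."""
    z = np.ones_like(offsets)
    for amp, pos, width in params:
        z -= amp / (1 + ((offsets - pos) / width) ** 2)
    return z

# Precompute a look-up table of candidate spectra over a parameter grid
grid = [[(a, -3.5, 1.0), (m, 0.0, 2.5)]          # NOE-like and MT-like pools
        for a in np.linspace(0.01, 0.1, 10)
        for m in np.linspace(0.05, 0.3, 10)]
lut = np.array([simulate_spectrum(p) for p in grid])

measured = simulate_spectrum([(0.05, -3.5, 1.0), (0.2, 0.0, 2.5)])
measured += rng.normal(0, 0.002, size=measured.shape)

best = np.argmin(np.sum((lut - measured) ** 2, axis=1))  # least-squares match
print(grid[best])
```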

  1. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    PubMed Central

    Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.

    2017-01-01

    Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930
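
    The contrast between the two families can be made concrete with a toy leaky integrate-and-fire membrane, sketched below under invented parameters: the event-driven update is a single read from a precompiled decay table per incoming event, while the time-driven update iterates at every simulation step.

```python
import numpy as np

tau_m, dt, horizon = 20e-3, 1e-4, 0.1          # illustrative membrane constants
decay_lut = np.exp(-np.arange(0, horizon, dt) / tau_m)   # precompiled dynamics

def v_event_driven(v0, elapsed):
    # One table read per event, however long the silent interval was.
    return v0 * decay_lut[min(round(elapsed / dt), decay_lut.size - 1)]

def v_time_driven(v0, elapsed):
    # Iterative forward-Euler update at every step of the interval.
    v = v0
    for _ in range(round(elapsed / dt)):
        v += dt * (-v / tau_m)
    return v

print(v_event_driven(1.0, 0.05), v_time_driven(1.0, 0.05))  # ~0.082 both
```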

  2. MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system

    PubMed Central

    Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron

    2011-01-01

    We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent atomically detailed models. Energy conservation is especially critical for long simulations due to the phenomenon known as “energy drift,” in which energy errors accumulate linearly as a function of simulation time. To achieve long-time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double-precision summation of real-space non-bonded interactions further improves energy conservation. In our best option, the energy drift, using a 1 fs time step while constraining the distances of all bonds, is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear interpolation in lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
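
    The interpolation trade-off can be sketched as follows for a tabulated erfc(r), a kernel typical of Ewald real-space sums; the grids, sizes, and test points are illustrative and not MOIL's. With these sizes, a 192-point table read quadratically reaches roughly the accuracy of a 1024-point table read linearly.

```python
import numpy as np
from scipy.special import erfc

def lut_linear(x_tab, y_tab, x):
    """Linear interpolation in a uniformly spaced lookup table."""
    h = x_tab[1] - x_tab[0]
    i = np.clip(((x - x_tab[0]) // h).astype(int), 0, len(x_tab) - 2)
    t = (x - x_tab[i]) / h
    return y_tab[i] + t * (y_tab[i + 1] - y_tab[i])

def lut_quadratic(x_tab, y_tab, x):
    """Three-point Lagrange (quadratic) interpolation in a uniform table."""
    h = x_tab[1] - x_tab[0]
    i = np.clip(np.rint((x - x_tab[0]) / h).astype(int), 1, len(x_tab) - 2)
    t = (x - x_tab[i]) / h                      # t lies in [-0.5, 0.5]
    return (0.5 * t * (t - 1) * y_tab[i - 1]
            + (1 - t * t) * y_tab[i]
            + 0.5 * t * (t + 1) * y_tab[i + 1])

x_fine, x_coarse = np.linspace(0.1, 5, 1024), np.linspace(0.1, 5, 192)
xq = np.linspace(0.2, 4.9, 1000)
print(abs(lut_linear(x_fine, erfc(x_fine), xq) - erfc(xq)).max())
print(abs(lut_quadratic(x_coarse, erfc(x_coarse), xq) - erfc(xq)).max())
```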

  3. Using neural networks and Dyna algorithm for integrated planning, reacting and learning in systems

    NASA Technical Reports Server (NTRS)

    Lima, Pedro; Beard, Randal

    1992-01-01

    The traditional AI answer to the decision-making problem for a robot is planning. However, planning is usually CPU-time consuming, depending on the availability and accuracy of a world model. The Dyna system, described in earlier work, uses trial and error to learn a world model which is simultaneously used to plan reactions resulting in optimal action sequences. It is an attempt to integrate planning, reactive, and learning systems. The architecture of Dyna is presented and its blocks are described. There are three main components of the system. The first is the world model used by the robot for internal world representation. The input of the world model is the current state and the action taken in the current state. The output is the corresponding reward and resulting state. The second module in the system is the policy. The policy observes the current state and outputs the action to be executed by the robot. At the beginning of program execution, the policy is stochastic and through learning progressively becomes deterministic. The policy decides upon an action according to the output of an evaluation function, which is the third module of the system. The evaluation function takes the following as input: the current state of the system, the action taken in that state, the resulting state, and a reward generated by the world which is proportional to the current distance from the goal state. Originally, the work proposed was as follows: (1) to implement a simple 2-D world where a 'robot' navigates around obstacles and learns the path to a goal by using lookup tables; (2) to replace the world model and the Q evaluation function with neural networks; and (3) to apply the algorithm to a more complex world where the use of a neural network would be fully justified. In this paper, the system design and achieved results are described. First we implement the world model with a neural network and leave Q implemented as a lookup table. Next, we use a lookup table for the world model and implement the Q function with a neural net. Time limitations prevented the combination of these two approaches. The final section discusses the results and gives clues for future work.
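
    A minimal tabular Dyna-Q sketch of the loop described above, on an invented chain world: the look-up-table Q function is updated both from real transitions and from transitions replayed out of the learned world model.

```python
import random

n_states, n_actions, alpha, gamma = 10, 2, 0.1, 0.95
Q = [[0.0] * n_actions for _ in range(n_states)]    # lookup-table Q function
model = {}                                          # learned world model: (s, a) -> (r, s')

def step(s, a):
    """Toy world: move left/right on a chain; reward 1 at the right end."""
    s2 = max(0, min(n_states - 1, s + (1 if a else -1)))
    return (1.0 if s2 == n_states - 1 else 0.0), s2

s = 0
for _ in range(2000):
    # Epsilon-greedy action selection from the Q table.
    a = random.randrange(n_actions) if random.random() < 0.1 else max(
        range(n_actions), key=lambda x: Q[s][x])
    r, s2 = step(s, a)
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])   # direct RL update
    model[(s, a)] = (r, s2)                                  # world-model learning
    for _ in range(10):                                      # planning backups
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        Q[ps][pa] += alpha * (pr + gamma * max(Q[ps2]) - Q[ps][pa])
    s = 0 if s2 == n_states - 1 else s2
```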

  4. Recognition and Quantification of Area Damaged by Oligonychus Perseae in Avocado Leaves

    NASA Astrophysics Data System (ADS)

    Díaz, Gloria; Romero, Eduardo; Boyero, Juan R.; Malpica, Norberto

    The measurement of leaf damage is a basic tool in plant epidemiology research. Measuring the area of a great number of leaves is subjective and time-consuming. We investigate the use of machine learning approaches for the objective segmentation and quantification of leaf area damaged by mites in avocado leaves. After extraction of the leaf veins, pixels are labeled with a look-up table generated using a Support Vector Machine with a polynomial kernel of degree 3, operating on the chrominance components of the YCrCb color space. Spatial information is included in the segmentation process by rating the degree of membership to a certain class and the homogeneity of the classified region. Results are presented on real images with different degrees of damage.
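
    The LUT-generation step can be sketched as follows: after training a degree-3 polynomial SVM on (Cr, Cb) pairs, its prediction is precomputed for all 256 × 256 chrominance values so that labelling an image pixel reduces to one table read. The training data here are synthetic placeholders, not the avocado-leaf samples.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic (Cr, Cb) training pairs for two toy classes (healthy vs. damaged).
X = np.vstack([rng.normal(100, 8, (200, 2)), rng.normal(150, 8, (200, 2))])
y = np.repeat([0, 1], 200)
clf = SVC(kernel="poly", degree=3).fit(X, y)

# Precompute the classifier's label for every possible chrominance value.
cr, cb = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
lut = clf.predict(np.column_stack([cr.ravel(), cb.ravel()])).reshape(256, 256)

# Per-pixel labelling of an image then reduces to lut[Cr_channel, Cb_channel].
```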

  5. Collective network routing

    DOEpatents

    Hoenicke, Dirk

    2014-12-02

    Disclosed are a unified method and apparatus to classify, route, and process data packets injected into a network so as to belong to a plurality of logical networks, each implementing a specific flow of data on top of a common physical network. The method allows collectives of packets to be identified locally for local processing, such as the computation of the sum, difference, maximum, minimum, or other logical operations among the identified packet collective. Packets are injected together with a class attribute and an opcode attribute. Network routers employing the described method use the packet attributes to look up the class-specific route information from a local route table, which contains the local incoming and outgoing directions as part of the specifically implemented global data flow of the particular virtual network.

  6. Nonlinear, nonbinary cyclic group codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    New cyclic group codes of length 2(exp m) - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2 exp m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.

  7. Wind shear modeling for aircraft hazard definition

    NASA Technical Reports Server (NTRS)

    Frost, W.; Camp, D. W.; Wang, S. T.

    1978-01-01

    Mathematical models of wind profiles were developed for use in fast-time and manned flight simulation studies aimed at defining and eliminating wind shear hazards. A set of wind profiles and associated wind shear characteristics for stable and neutral boundary layers, thunderstorms, and frontal winds potentially encounterable by aircraft in the terminal area are given. Engineering models of wind shear for direct hazard analysis are presented as mathematical formulae, graphs, tables, and computer lookup routines. The wind profile data utilized to establish the models are described in terms of location, acquisition method, time of observation, and number of data points up to 500 m. Recommendations, engineering interpretations, and guidelines for use of the data are given, and the range of applicability of the wind shear models is described.

  8. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations as look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for people with certain visual defects. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  9. The magnifying glass - A feature space local expansion for visual analysis. [and image enhancement

    NASA Technical Reports Server (NTRS)

    Juday, R. D.

    1981-01-01

    The Magnifying Glass Transformation (MGT) technique is proposed as a multichannel spectral operation yielding visual imagery which is enhanced in a specified spectral vicinity, guided by the statistics of training samples. An example application is one in which the discrimination among spectral neighbors within an interactive display may be increased without altering distant object appearances or overall interpretation. A direct histogram specification technique is applied to the channels within the multispectral image so that a subset of the spectral domain occupies an increased fraction of the domain. The transformation is carried out by obtaining the training information, establishing the condition of the covariance matrix, determining the influenced solid, and initializing the lookup table. Finally, the image is transformed.

  10. Digital color representation

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1992-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes which represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT), so that an 8-bit data signal can drive a display of 24-bit color values. The LUT is formed in a sampling and averaging process from the image color values, with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value, whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
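
    A rough sketch of the sampling-and-averaging idea under invented data: sampled colors are ordered, split into 256 groups whose means become the LUT, and each pixel is stored as the 8-bit index of its nearest entry. The grouping heuristic here is a stand-in for illustration, not the patented procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.integers(0, 256, (64, 64, 3))            # 24-bit RGB image (toy data)

# Sample colors, order them crudely, and average 256 groups into LUT entries.
samples = image.reshape(-1, 3)[rng.choice(64 * 64, 4096, replace=False)]
ordered = samples[np.lexsort(samples.T)]
lut = np.stack([grp.mean(axis=0)
                for grp in np.array_split(ordered, 256)]).astype(np.uint8)

# 8-bit pointers: index of the closest LUT color for every pixel.
d = ((image.reshape(-1, 1, 3).astype(int) - lut.astype(int)) ** 2).sum(axis=2)
pointers = d.argmin(axis=1).astype(np.uint8)
reconstructed = lut[pointers].reshape(image.shape)   # 24-bit display values
```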

  11. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.

  12. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity in the calculation of the fields. Also, a novel technique for occlusion culling with little additional computation cost is introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.

  13. Generation and transmission of DPSK signals using a directly modulated passive feedback laser.

    PubMed

    Karar, Abdullah S; Gao, Ying; Zhong, Kang Ping; Ke, Jian Hong; Cartledge, John C

    2012-12-10

    The generation of differential-phase-shift keying (DPSK) signals is demonstrated using a directly modulated passive feedback laser at 10.709 Gb/s, 14 Gb/s and 16 Gb/s. The quality of the DPSK signals is assessed using both noncoherent detection for a bit rate of 10.709 Gb/s and coherent detection with digital signal processing involving a look-up table pattern-dependent distortion compensator. Transmission over a passive link consisting of 100 km of single mode fiber at a bit rate of 10.709 Gb/s is achieved with a received optical power of -45 dBm at a bit-error ratio of 3.8 × 10⁻³ and a 49 dB loss margin.

  14. Numerical solution of Space Shuttle Orbiter flow field including real gas effects

    NASA Technical Reports Server (NTRS)

    Prabhu, D. K.; Tannehill, J. C.

    1984-01-01

    The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.

  15. Minimalist design of a robust real-time quantum random number generator

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.

    2015-08-01

    We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high-speed on-the-fly processing without the need for extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in a 1.2 Mbit/s generation rate in our implementation.
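
    The abstract does not specify the extractor, so as a stand-in the sketch below shows the table idea with a simple von Neumann extractor: the unbiased output bits for every possible raw byte are precomputed once, and run-time extraction is a single table read per byte.

```python
import random

def von_neumann_bits(byte):
    """Von Neumann extraction over the four disjoint bit pairs of a byte."""
    out = []
    for i in (6, 4, 2, 0):
        a, b = (byte >> (i + 1)) & 1, (byte >> i) & 1
        if a != b:                     # 01 -> 0, 10 -> 1; 00 and 11 are dropped
            out.append(a)
    return out

LUT = [von_neumann_bits(b) for b in range(256)]   # built once, reused forever

raw = bytes(random.getrandbits(8) for _ in range(1000))  # stand-in for detector data
unbiased = [bit for byte in raw for bit in LUT[byte]]     # one table read per byte
```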

  16. Cache directory look-up re-use as conflict check mechanism for speculative memory requests

    DOEpatents

    Ohmacht, Martin

    2013-09-10

    In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to a same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.

  17. Selection Algorithm for the CALIPSO Lidar Aerosol Extinction-to-Backscatter Ratio

    NASA Technical Reports Server (NTRS)

    Omar, Ali H.; Winker, David M.; Vaughan, Mark A.

    2006-01-01

    The extinction-to-backscatter ratio (S(sub a)) is an important parameter used in the determination of the aerosol extinction and subsequently the optical depth from lidar backscatter measurements. We outline the algorithm used to determine S(sub a) for the Cloud and Aerosol Lidar and Infrared Pathfinder Spaceborne Observations (CALIPSO) lidar. S(sub a) for the CALIPSO lidar will either be selected from a look-up table or calculated using the lidar measurements, depending on the characteristics of the aerosol layer. Whenever suitable lofted layers are encountered, S(sub a) is computed directly from the integrated backscatter and transmittance. In all other cases, the CALIPSO observables: the depolarization ratio, delta, the layer-integrated attenuated backscatter, beta, and the mean layer total attenuated color ratio, gamma, together with the surface type, are used to aid in aerosol typing. Once the type is identified, a look-up table, developed primarily from worldwide observations, is used to determine the S(sub a) value. The CALIPSO aerosol models include desert dust, biomass burning, background, polluted continental, polluted dust, and marine aerosols.
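
    The selection logic can be sketched as a typing step followed by a table read. The thresholds and S(sub a) values below are invented placeholders, not the CALIPSO numbers.

```python
# Placeholder Sa values per aerosol type (sr); not the CALIPSO table.
SA_LUT = {"dust": 40.0, "biomass_burning": 70.0, "marine": 20.0,
          "polluted_continental": 70.0, "polluted_dust": 55.0, "background": 35.0}

def select_sa(depol, color_ratio, over_ocean):
    """Classify the layer from its observables, then look up Sa."""
    if depol > 0.20:                       # strong depolarization: dust-like
        atype = "dust"
    elif depol > 0.075:
        atype = "polluted_dust"
    elif over_ocean and color_ratio > 0.5:
        atype = "marine"
    else:
        atype = "polluted_continental"
    return atype, SA_LUT[atype]

print(select_sa(depol=0.25, color_ratio=0.3, over_ocean=False))
```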

  18. An interface for simulating radiative transfer in and around volcanic plumes with the Monte Carlo radiative transfer model McArtim

    USGS Publications Warehouse

    Kern, Christoph

    2016-03-23

    This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric-radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions), with regard to a change in SO2 column density or aerosol optical thickness, can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.

  19. The compartment bag test (CBT) for enumerating fecal indicator bacteria: Basis for design and interpretation of results.

    PubMed

    Gronewold, Andrew D; Sobsey, Mark D; McMahan, Lanakila

    2017-06-01

    For the past several years, the compartment bag test (CBT) has been employed in water quality monitoring and public health protection around the world. To date, however, the statistical basis for the design and recommended procedures for enumerating fecal indicator bacteria (FIB) concentrations from CBT results have not been formally documented. Here, we provide that documentation following protocols for communicating the evolution of similar water quality testing procedures. We begin with an overview of the statistical theory behind the CBT, followed by a description of how that theory was applied to determine an optimal CBT design. We then provide recommendations for interpreting CBT results, including procedures for estimating quantiles of the FIB concentration probability distribution, and the confidence of compliance with recognized water quality guidelines. We synthesize these values in custom user-oriented 'look-up' tables similar to those developed for other FIB water quality testing methods. Modified versions of our tables are currently distributed commercially as part of the CBT testing kit. Published by Elsevier B.V.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Wei-Kuo; Takayabu, Yukari N.; Lang, Steve

    Yanai et al. (1973) utilized the meteorological data collected from a sounding network to present a pioneering work on thermodynamic budgets, which are referred to as the apparent heat source (Q1) and apparent moisture sink (Q2). Latent heating (LH) is one of the most dominant terms in Q1. Yanai’s paper motivated the development of satellite-based LH algorithms and provided a theoretical background for imposing large-scale advective forcing into cloud-resolving models (CRMs). These CRM-simulated LH and Q1 data have been used to generate the look-up tables in Tropical Rainfall Measuring Mission (TRMM) LH algorithms. A set of algorithms developed for retrieving LH profiles from TRMM-based rainfall profiles is described and evaluated, including details concerning their intrinsic space-time resolutions. Included in the paper are results from a variety of validation analyses that define the uncertainty of the LH profile estimates. Also, examples of how TRMM-retrieved LH profiles have been used to understand the lifecycle of the MJO and improve the predictions of global weather and climate models, as well as comparisons with large-scale analyses, are provided. Areas for further improvement of the TRMM products are discussed.

  1. On the Green's function of the partially diffusion-controlled reversible ABCD reaction for radiation chemistry codes

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2015-09-01

    Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort in developing models to understand the role of chemical reactions in the radiation effects on cells and tissues and may eventually be included in event-based models of space radiation risks. Moreover, as many reactions of this type occur in biological systems, this algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.

  2. A Synthesis of VIIRS Solar and Lunar Calibrations

    NASA Technical Reports Server (NTRS)

    Eplee, Robert E.; Turpie, Kevin R.; Meister, Gerhard; Patt, Frederick S.; Fireman, Gwyn F.; Franz, Bryan A.; McClain, Charles R.

    2013-01-01

    The NASA VIIRS Ocean Science Team (VOST) has developed two independent calibrations of the SNPP VIIRS moderate resolution reflective solar bands using solar diffuser and lunar observations through June 2013. Fits to the solar calibration time series show mean residuals per band of 0.078-0.10%. There are apparent residual lunar libration correlations in the lunar calibration time series that are not accounted for by the ROLO photometric model of the Moon. Fits to the lunar time series that account for residual librations show mean residuals per band of 0.071-0.17%. Comparison of the solar and lunar time series shows that the relative differences in the two calibrations are 0.12-0.31%. Relative uncertainties in the VIIRS solar and lunar calibration time series are comparable to those achieved for SeaWiFS, Aqua MODIS, and Terra MODIS. Intercomparison of the VIIRS lunar time series with those from SeaWiFS, Aqua MODIS, and Terra MODIS shows that the scatter in the VIIRS lunar observations is consistent with that observed for the heritage instruments. Based on these analyses, the VOST has derived a calibration lookup table for VIIRS ocean color data based on fits to the solar calibration time series.

  3. GUI Type Fault Diagnostic Program for a Turboshaft Engine Using Fuzzy and Neural Networks

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Koo, Youngju

    2011-04-01

    A helicopter operated in severe flight environmental conditions must have a very reliable propulsion system. On-line condition monitoring and fault detection of the engine can promote the reliability and availability of the helicopter propulsion system. A hybrid health monitoring program using Fuzzy Logic and Neural Network algorithms is proposed. In this hybrid method, the Fuzzy Logic easily identifies the faulted components from engine measuring parameter changes, and the Neural Networks accurately quantify the identified faults. In order to use the fault diagnostic system effectively, a GUI (Graphical User Interface) type program is newly proposed. This program is composed of the real-time monitoring part, the engine condition monitoring part, and the fault diagnostic part. The real-time monitoring part can display measured parameters of the study turboshaft engine such as power turbine inlet temperature, exhaust gas temperature, fuel flow, torque, and gas generator speed. The engine condition monitoring part can evaluate the engine condition through comparison between the monitored performance parameters and the base performance parameters analyzed by the base performance analysis program using look-up tables. The fault diagnostic part can identify and quantify the single or multiple faults from the monitoring parameters using the hybrid method.

  4. Fast simulation tool for ultraviolet radiation at the earth's surface

    NASA Astrophysics Data System (ADS)

    Engelsen, Ola; Kylling, Arve

    2005-04-01

    FastRT is a fast, yet accurate, UV simulation tool that computes downward surface UV doses, UV indices, and irradiances in the spectral range 290 to 400 nm with a resolution as small as 0.05 nm. It computes a full UV spectrum within a few milliseconds on a standard PC, and enables the user to convolve the spectrum with user-defined and built-in spectral response functions including the International Commission on Illumination (CIE) erythemal response function used for UV index calculations. The program accounts for the main radiative input parameters, i.e., instrumental characteristics, solar zenith angle, ozone column, aerosol loading, clouds, surface albedo, and surface altitude. FastRT is based on look-up tables of carefully selected entries of atmospheric transmittances and spherical albedos, and exploits the smoothness of these quantities with respect to atmospheric, surface, geometrical, and spectral parameters. An interactive site, http://nadir.nilu.no/~olaeng/fastrt/fastrt.html, enables the public to run the FastRT program with most input options. This page also contains updated information about FastRT and links to freely downloadable source codes and binaries.
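
    A bilinear-interpolation sketch of the table strategy, under an invented two-parameter grid (solar zenith angle and ozone column) and a toy transmittance model; FastRT's actual tables cover more input dimensions than this.

```python
import numpy as np

sza = np.linspace(0, 80, 9)          # solar zenith angle grid (degrees)
ozone = np.linspace(200, 500, 7)     # ozone column grid (Dobson units)
# Toy transmittance table; a real table would come from radiative transfer runs.
table = np.exp(-ozone[None, :] / 300.0) * np.cos(np.radians(sza))[:, None]

def transmittance(s, o):
    """Bilinear interpolation in the (sza, ozone) lookup table."""
    i = np.clip(np.searchsorted(sza, s) - 1, 0, len(sza) - 2)
    j = np.clip(np.searchsorted(ozone, o) - 1, 0, len(ozone) - 2)
    ts = (s - sza[i]) / (sza[i + 1] - sza[i])
    to = (o - ozone[j]) / (ozone[j + 1] - ozone[j])
    return ((1 - ts) * (1 - to) * table[i, j] + ts * (1 - to) * table[i + 1, j]
            + (1 - ts) * to * table[i, j + 1] + ts * to * table[i + 1, j + 1])

print(transmittance(32.0, 315.0))
```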

  5. Monitoring Error Rates In Illumina Sequencing.

    PubMed

    Manley, Leigh J; Ma, Duanduan; Levine, Stuart S

    2016-12-01

    Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted.

  6. How to use a phase-only spatial light modulator as a color display.

    PubMed

    Harm, Walter; Jesacher, Alexander; Thalhammer, Gregor; Bernet, Stefan; Ritsch-Marte, Monika

    2015-02-15

    We demonstrate that a parallel aligned liquid crystal on silicon (PA-LCOS) spatial light modulator (SLM) without any attached color mask can be used as a full color display with white light illumination. The method is based on the wavelength dependence of the (voltage controlled) birefringence of the liquid crystal pixels. Modern SLMs offer a wide range over which the birefringence can be modulated, leading (in combination with a linear polarizer) to several intensity modulation periods of a reflected light wave as a function of the applied voltage. Because of dispersion, the oscillation period strongly depends on the wavelength. Thus each voltage applied to an SLM pixel corresponds to another reflected color spectrum. For SLMs with a sufficiently broad tuning range, one obtains a color palette (i.e., a "color lookup-table"), which allows one to display color images. An advantage over standard liquid crystal displays (LCDs), which use color masks in front of the individual pixels, is that the light efficiency and the display resolution are increased by a factor of three.

  7. Ensemble LUT classification for degraded document enhancement

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Frieder, Ophir

    2008-01-01

    The fast evolution of scanning and computing technologies has led to the creation of large collections of scanned paper documents. Examples of such collections include historical collections, legal depositories, medical archives, and business archives. Moreover, in many situations such as legal litigation and security investigations, scanned collections are being used to facilitate systematic exploration of the data. It is almost always the case that scanned documents suffer from some form of degradation. Large degradations make documents hard to read and substantially deteriorate the performance of automated document processing systems. Enhancement of degraded document images is normally performed assuming global degradation models. When the degradation is large, global degradation models do not perform well. In contrast, we propose to estimate local degradation models and use them in enhancing degraded document images. Using a semi-automated enhancement system we have labeled a subset of the Frieder diaries collection. This labeled subset was then used to train an ensemble classifier. The component classifiers are based on lookup tables (LUT) in conjunction with the approximated nearest neighbor algorithm. The resulting algorithm is highly efficient. Experimental evaluation results are provided using the Frieder diaries collection.

  8. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996-version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to major gaseous absorption (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches of computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either using the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.

  9. A high-resolution programmable Vernier delay generator based on carry chains in FPGA

    NASA Astrophysics Data System (ADS)

    Cui, Ke; Li, Xiangyu; Zhu, Rihong

    2017-06-01

    This paper presents an architecture of a high-resolution delay generator implemented in a single field programmable gate array chip by exploiting the method of utilizing dedicated carry chains. It serves as the core component in various physical instruments. The proposed delay generator contains the coarse delay step and the fine delay step to guarantee both large dynamic range and high resolution. The carry chains are organized in the Vernier delay loop style to fulfill the fine delay step with high precision and high linearity. The delay generator was implemented in the EP3SE110F1152I3 Stratix III device from Altera on a self-designed test board. Test results show that the obtained resolution is 38.6 ps, and the differential nonlinearity/integral nonlinearity is in the range of [-0.18 least significant bit (LSB), 0.24 LSB]/(-0.02 LSB, 0.01 LSB) under the nominal supply voltage of 1100 mV and an environmental temperature of 20 °C. The delay generator is rather efficient concerning resource cost, which uses only 668 look-up tables and 146 registers in total.

  10. Analytical modeling of the temporal evolution of hot spot temperatures in silicon solar cells

    NASA Astrophysics Data System (ADS)

    Wasmer, Sven; Rajsrima, Narong; Geisemeyer, Ino; Fertig, Fabian; Greulich, Johannes Michael; Rein, Stefan

    2018-03-01

    We present an approach to predict the equilibrium temperature of hot spots in crystalline silicon solar cells based on the analysis of their temporal evolution right after turning on a reverse bias. To this end, we derive an analytical expression for the time-dependent heat diffusion of a breakdown channel that is assumed to be cylindrical. We validate this by means of thermography imaging of hot spots right after turning on a reverse bias. The expression can be used to extract hot spot powers and radii from short-term measurements, targeting application in inline solar cell characterization. The extracted hot spot powers are validated against long-term dark lock-in thermography imaging. Using a look-up table of expected equilibrium temperatures determined by numerical and analytical simulations, we utilize the determined hot spot properties to predict the equilibrium temperatures of about 100 industrial aluminum back-surface field solar cells, achieving a high correlation coefficient of 0.86 and a mean absolute error of only 3.3 K.

  11. Whole-body to tissue concentration ratios for use in biota dose assessments for animals.

    PubMed

    Yankovich, Tamara L; Beresford, Nicholas A; Wood, Michael D; Aono, Tasuo; Andersson, Pål; Barnett, Catherine L; Bennett, Pamela; Brown, Justin E; Fesenko, Sergey; Fesenko, J; Hosseini, Ali; Howard, Brenda J; Johansen, Mathew P; Phaneuf, Marcel M; Tagami, Keiko; Takata, Hyoe; Twining, John R; Uchida, Shigeo

    2010-11-01

    Environmental monitoring programs often measure contaminant concentrations in animal tissues consumed by humans (e.g., muscle). By comparison, demonstration of the protection of biota from the potential effects of radionuclides involves a comparison of whole-body doses to radiological dose benchmarks. Consequently, methods for deriving whole-body concentration ratios based on tissue-specific data are required to make best use of the available information. This paper provides a series of look-up tables with whole-body:tissue-specific concentration ratios for non-human biota. Focus was placed on relatively broad animal categories (including molluscs, crustaceans, freshwater fishes, marine fishes, amphibians, reptiles, birds and mammals) and commonly measured tissues (specifically, bone, muscle, liver and kidney). Depending upon organism, whole-body to tissue concentration ratios were derived for between 12 and 47 elements. The whole-body to tissue concentration ratios can be used to estimate whole-body concentrations from tissue-specific measurements. However, we recommend that any given whole-body to tissue concentration ratio should not be used if the value falls between 0.75 and 1.5. Instead, a value of one should be assumed.

  12. A 3D simulation look-up library for real-time airborne gamma-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Kulisek, Jonathan A.; Wittman, Richard S.; Miller, Erin A.; Kernan, Warnick J.; McCall, Jonathon D.; McConn, Ron J.; Schweppe, John E.; Seifert, Carolyn E.; Stave, Sean C.; Stewart, Trevor N.

    2018-01-01

    A three-dimensional look-up library consisting of simulated gamma-ray spectra was developed to leverage, in real-time, the abundance of data provided by a helicopter-mounted gamma-ray detection system consisting of 92 CsI-based radiation sensors and exhibiting a highly angular-dependent response. We have demonstrated how this library can be used to help effectively estimate the terrestrial gamma-ray background, develop simulated flight scenarios, and to localize radiological sources. Source localization accuracy was significantly improved, particularly for weak sources, by estimating the entire gamma-ray spectra while accounting for scattering in the air, and especially off the ground.

  13. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.

    PubMed

    Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for a large amount of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is adopted as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps (B-spline interpolation, LSD computation, and the analytic gradient computation of LSD) is efficiently reduced, since the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large amount of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
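
    One common form of this LUT optimization, sketched below as a plausible reading of the approach rather than a claim about the paper's exact tables: the four cubic B-spline basis weights depend only on the fractional position within a grid cell, so they can be precomputed at a fixed sub-grid resolution and reused for every voxel.

```python
import numpy as np

def cubic_bspline_weights(t):
    """The four uniform cubic B-spline basis weights at fractional position t."""
    return np.array([(1 - t) ** 3,
                     3 * t ** 3 - 6 * t ** 2 + 4,
                     -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                     t ** 3]) / 6.0

n_sub = 512                                      # sub-voxel resolution of the LUT
BSPLINE_LUT = np.stack([cubic_bspline_weights(t)
                        for t in np.arange(n_sub) / n_sub])   # shape (512, 4)

def weights(x):
    """Look up the four basis weights for a position x in grid units."""
    return BSPLINE_LUT[int((x - np.floor(x)) * n_sub)]

print(weights(10.37), weights(10.37).sum())      # the four weights sum to 1
```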

  14. Localization of small arms fire using acoustic measurements of muzzle blast and/or ballistic shock wave arrivals.

    PubMed

    Lo, Kam W; Ferguson, Brian G

    2012-11-01

    The accurate localization of small arms fire using fixed acoustic sensors is considered. First, the conventional wavefront-curvature passive ranging method, which requires only differential time-of-arrival (DTOA) measurements of the muzzle blast wave to estimate the source position, is modified to account for sensor positions that are not strictly collinear (bowed array). Second, an existing single-sensor-node ballistic model-based localization method, which requires both DTOA and differential angle-of-arrival (DAOA) measurements of the muzzle blast wave and ballistic shock wave, is improved by replacing the basic external ballistics model (which describes the bullet's deceleration along its trajectory) with a more rigorous model and replacing the look-up table ranging procedure with a nonlinear (or polynomial) equation-based ranging procedure. Third, a new multiple-sensor-node ballistic model-based localization method, which requires only DTOA measurements of the ballistic shock wave to localize the point of fire, is formulated. The first method is applicable to situations when only the muzzle blast wave is received, whereas the third method applies when only the ballistic shock wave is received. The effectiveness of each of these methods is verified using an extensive set of real data recorded during a 7 day field experiment.

  15. Real-time distortion correction for visual inspection systems based on FPGA

    NASA Astrophysics Data System (ADS)

    Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2008-03-01

    Visual inspection is a new technology based on research in computer vision, which focuses on the measurement of an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection, and distortion correction, can be completed in a programmable device (FPGA). As a wide-field-angle lens is adopted in the system, the output images have serious distortion. Limited by the computing speed of the computer, software can only correct the distortion of static images, not of dynamic images. To meet the real-time requirement, we design a distortion correction system based on FPGA. In the hardware distortion correction method, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which data are read out to correct the gray levels. The major benefit of using FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without modification.
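
    The software stage can be sketched as follows, with an invented frame size and a single-coefficient radial model standing in for the actual lens calibration; the resulting coordinate table is what would be loaded into the FPGA look-up memory.

```python
import numpy as np

H, W, k1 = 480, 640, -2e-7        # illustrative frame size and radial coefficient
cy, cx = H / 2, W / 2
y, x = np.mgrid[0:H, 0:W]
r2 = (x - cx) ** 2 + (y - cy) ** 2
# Precompute, for every output pixel, the source coordinate under the model.
src_x = np.clip((cx + (x - cx) * (1 + k1 * r2)).round().astype(int), 0, W - 1)
src_y = np.clip((cy + (y - cy) * (1 + k1 * r2)).round().astype(int), 0, H - 1)

def correct(frame):
    """Undistort a frame with one table read per output pixel."""
    return frame[src_y, src_x]
```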

  16. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution. However, a Web browser alone cannot display medical images with certain image processing applied, such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination provides the look and feel of an imaging workstation, not only in functionality but also in speed. Real-time update of images tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients in a fast network, and a large number of clients in a normal-speed network. The results show that the communication overhead is very slight and the system scales well with the number of clients.
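
    As a sketch of the server-side LUT step, the code below applies a DICOM-style window/level mapping to 16-bit pixel data through a precomputed 8-bit table before the image would be encoded for the browser; the window values and pixel data are illustrative.

```python
import numpy as np

def window_lut(center, width, bits_stored=16):
    """Precompute an 8-bit output value for every possible stored pixel value."""
    values = np.arange(2 ** bits_stored, dtype=np.float64)
    scaled = (values - (center - 0.5)) / (width - 1) + 0.5   # linear window map
    return (np.clip(scaled, 0, 1) * 255).astype(np.uint8)

lut = window_lut(center=1064, width=400)             # e.g. a soft-tissue window
pixels = np.random.default_rng(3).integers(0, 4096, (512, 512)).astype(np.uint16)
display = lut[pixels]                                # one table read per pixel
```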

  17. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287

  18. An Efficient Index Dissemination in Unstructured Peer-to-Peer Networks

    NASA Astrophysics Data System (ADS)

    Takahashi, Yusuke; Izumi, Taisuke; Kakugawa, Hirotsugu; Masuzawa, Toshimitsu

    Using Bloom filters is one of the most popular and efficient lookup methods in P2P networks. A Bloom filter is a representation of data item indices which achieves a small memory requirement by allowing one-sided errors (false positives). In the lookup scheme based on the Bloom filter, each peer disseminates, in advance, a Bloom filter representing the indices of the data items it owns. Using the disseminated Bloom filters as clues, each query can find a short path to its destination. In this paper, we propose an efficient extension of the Bloom filter, called the Deterministic Decay Bloom Filter (DDBF), and an index dissemination method based on it. While index dissemination based on a standard Bloom filter suffers performance degradation from containing information about too many data items when its dissemination radius is large, the DDBF can circumvent such degradation by limiting information according to the distance between the filter holder and the item holders, i.e., a DDBF contains less information for faraway items and more information for nearby items. Interestingly, the construction of DDBFs requires no extra cost above that of standard filters. We also show by simulation that our method can achieve better lookup performance than existing ones.
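
    For readers unfamiliar with the base structure, here is a minimal Bloom filter sketch (the standard technique, not the DDBF itself): k hashed bit positions per item, no false negatives, and a false-positive rate tunable through the table size and k.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k, self.bits = m_bits, k, bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k independent bit positions by salting a single hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("movie.mkv")
print("movie.mkv" in bf, "other.iso" in bf)   # True, almost surely False
```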

  19. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to the fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with the message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  20. New real-time algorithms for arbitrary, high precision function generation with applications to acoustic transducer excitation

    NASA Astrophysics Data System (ADS)

    Gaydecki, P.

    2009-07-01

    A system is described for the design, downloading, and execution of arbitrary functions, intended for use with acoustic and low-frequency ultrasonic transducers in condition monitoring and materials testing applications. The instrumentation comprises a software design tool and a powerful real-time digital signal processor unit operating at 580 million multiplication-accumulations per second (MMACs). The embedded firmware employs both an established look-up table approach and a new function interpolation technique to generate the real-time signals with very high precision and flexibility. Using total harmonic distortion (THD) analysis, the purity of the waveforms has been compared with that of waveforms generated using traditional analogue function generators; this analysis has confirmed that the new instrument has a consistently superior signal-to-noise ratio.
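
    A sketch of the two generation schemes named above, with invented table length and frequencies: a phase-accumulator oscillator that either reads the nearest table sample or linearly interpolates between adjacent samples for higher spectral purity.

```python
import numpy as np

N = 1 << 12                                        # lookup-table length
SINE_LUT = np.sin(2 * np.pi * np.arange(N) / N)

def synth(freq, fs, n, interpolate=True):
    """Generate n samples of a sine at freq (Hz) from the table."""
    phase = (freq / fs) * N * np.arange(n)         # fractional table index
    i = np.floor(phase).astype(int) % N
    if not interpolate:
        return SINE_LUT[i]                         # plain nearest-sample read
    frac = phase - np.floor(phase)                 # linear interpolation step
    return SINE_LUT[i] + frac * (SINE_LUT[(i + 1) % N] - SINE_LUT[i])

tone = synth(freq=1000.0, fs=48000.0, n=480)       # 1 kHz test tone
```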

  1. Flight through thunderstorm outflows. [aircraft landing

    NASA Technical Reports Server (NTRS)

    Frost, W.; Crosby, B.; Camp, D. W.

    1978-01-01

    Computer simulation of aircraft landing through thunderstorm gust fronts is carried out. The two-dimensional, nonlinear equations of aircraft motion containing all wind shear terms are solved numerically. The gust front spatial wind field inputs are provided in the form of tabulated experimental data, which are coupled with a computer table lookup routine to provide the required wind components and shear at any given position within an approximately 500 m by 1 km vertical plane. The aircraft is considered to enter the wind field at a specified position under trimmed conditions. Both fixed-control and automatic-control landings are simulated. Flight paths, as well as control inputs necessary to maintain specified trajectories, are presented and discussed for aircraft having the characteristics of a DC-8, B-747, augmentor-wing STOL, and a DHC-6.

  2. Flight through thunderstorm outflows

    NASA Technical Reports Server (NTRS)

    Frost, W.; Crosby, B.; Camp, D. W.

    1979-01-01

    Computer simulation of aircraft landing through thunderstorm gust fronts is carried out. The 3 degree-of-freedom, nonlinear equations of aircraft motion for the longitudinal variables containing all two-dimensional wind shear terms are solved numerically. The gust front spatial wind field inputs are provided in the form of tabulated experimental data which are coupled with a computer table lookup routine to provide the required wind components and shear at any given position within an approximate 500 m x 1 km vertical plane. The aircraft is considered to enter the wind field at a specified position under trimmed conditions. Both fixed control and automatic control landings are simulated. Flight paths, as well as control inputs necessary to maintain specified trajectories, are presented and discussed for aircraft having characteristics of a DC-8, B-747, and a DHC-6.

  3. Representations of time coordinates in FITS. Time and relative dimension in space

    NASA Astrophysics Data System (ADS)

    Rots, Arnold H.; Bunclark, Peter S.; Calabretta, Mark R.; Allen, Steven L.; Manchester, Richard N.; Thompson, William T.

    2015-02-01

    Context. In a series of three previous papers, formulation and specifics of the representation of world coordinate transformations in FITS data have been presented. This fourth paper deals with encoding time. Aims: Time on all scales and precisions known in astronomical datasets is to be described in an unambiguous, complete, and self-consistent manner. Methods: Employing the well-established World Coordinate System (WCS) framework, and maintaining compatibility with the FITS conventions that are currently in use to specify time, the standard is extended to describe rigorously the time coordinate. Results: World coordinate functions are defined for temporal axes sampled linearly and as specified by a lookup table. The resulting standard is consistent with the existing FITS WCS standards and specifies a metadata set that achieves the aims enunciated above.

  4. Adults with Autism Tend to Undermine the Hidden Environmental Structure: Evidence from a Visual Associative Learning Task.

    PubMed

    Sapey-Triomphe, Laurie-Anne; Sonié, Sandrine; Hénaff, Marie-Anne; Mattout, Jérémie; Schmitz, Christina

    2018-04-13

    The learning-style theory of Autism Spectrum Disorders (ASD) (Qian, Lipkin, Frontiers in Human Neuroscience 5:77, 2011) states that ASD individuals differ from neurotypics in the way they learn and store information about the environment and its structure. ASD would rather adopt a lookup-table strategy (LUT: memorizing each experience), while neurotypics would favor an interpolation style (INT: extracting regularities to generalize). In a series of visual behavioral tasks, we tested this hypothesis in 20 neurotypical and 20 ASD adults. ASD participants had difficulties using the INT style when instructions were hidden but not when instructions were revealed. Rather than an inability to use rules, ASD would be characterized by a disinclination to generalize and infer such rules.

  5. Control law system for X-Wing aircraft

    NASA Technical Reports Server (NTRS)

    Lawrence, Thomas H. (Inventor); Gold, Phillip J. (Inventor)

    1990-01-01

    Control law system for the collective axis, as well as pitch and roll axes, of an X-Wing aircraft and for the pneumatic valving controlling circulation control blowing for the rotor. As to the collective axis, the system gives the pilot single-lever direct lift control and ensures that maximum cyclic blowing control power is available in transition. Angle-of-attack decoupling is provided in rotary wing flight, and mechanical collective is used to augment pneumatic roll control when appropriate. Automatic gain variations with airspeed and rotor speed are provided, so a unitary set of control laws works in all three X-Wing flight modes. As to pitch and roll axes, the system produces essentially the same aircraft response regardless of flight mode or condition. Undesirable cross-couplings are compensated for in a manner unnoticeable to the pilot without requiring pilot action, as flight mode or condition is changed. A hub moment feedback scheme is implemented, utilizing a P+I controller, significantly improving bandwidth. Limits protect aircraft structure from inadvertent damage. As to pneumatic valving, the system automatically provides the pressure required at each valve azimuth location, as dictated by collective, cyclic and higher harmonic blowing commands. Variations in the required control phase angle are automatically introduced, and variations in plenum pressure are compensated for. The required switching for leading, trailing and dual edge blowing is automated, using a simple table look-up procedure. Non-linearities due to valve characteristics of circulation control lift are linearized by map look-ups.

  6. MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1994-01-01

    MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416 page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code with a total size of 418Kb. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.

  7. Preliminary Work for Examining the Scalability of Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Clouse, Jeff

    1998-01-01

    Researchers began studying automated agents that learn to perform multiple-step tasks early in the history of artificial intelligence (Samuel, 1963; Samuel, 1967; Waterman, 1970; Fikes, Hart & Nilsson, 1972). Multiple-step tasks are tasks that can only be solved via a sequence of decisions, such as control problems, robotics problems, classic problem-solving, and game-playing. The objective of agents attempting to learn such tasks is to use the resources they have available in order to become more proficient at the tasks. In particular, each agent attempts to develop a good policy, a mapping from states to actions, that allows it to select actions that optimize a measure of its performance on the task; for example, reducing the number of steps necessary to complete the task successfully. Our study focuses on reinforcement learning, a set of learning techniques where the learner performs trial-and-error experiments in the task and adapts its policy based on the outcome of those experiments. Much of the work in reinforcement learning has focused on a particular, simple representation, where every problem state is represented explicitly in a table, and associated with each state are the actions that can be chosen in that state. A major advantage of this table lookup representation is that one can prove that certain reinforcement learning techniques will develop an optimal policy for the current task. The drawback is that the representation limits the application of reinforcement learning to multiple-step tasks with relatively small state-spaces. There has been a little theoretical work that proves that convergence to optimal solutions can be obtained when using generalization structures, but the structures are quite simple. The theory says little about complex structures, such as multi-layer, feedforward artificial neural networks (Rumelhart & McClelland, 1986), but empirical results indicate that the use of reinforcement learning with such structures is promising. These empirical results make no theoretical claims, nor compare the policies produced to optimal policies. A goal of our work is to be able to make the comparison between an optimal policy and one stored in an artificial neural network. A difficulty of performing such a study is finding a multiple-step task that is small enough that one can find an optimal policy using table lookup, yet large enough that, for practical purposes, an artificial neural network is really required. We have identified a limited form of the game OTHELLO as satisfying these requirements. The work we report here is in the very preliminary stages of research, but this paper provides background for the problem being studied and a description of our initial approach to examining the problem. In the remainder of this paper, we first describe reinforcement learning in more detail. Next, we present the game OTHELLO. Finally, we argue that a restricted form of the game meets the requirements of our study, and describe our preliminary approach to finding an optimal solution to the problem.
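
    The explicit-table representation described here is the classic tabular setting of reinforcement learning. As a minimal sketch (the task, constants, and environment hook are placeholders, not the paper's OTHELLO setup), a Q-table update looks like this:

```python
import random
from collections import defaultdict

# Hypothetical small task: integer states and actions; none of these constants
# come from the paper.
N_ACTIONS = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# The table-lookup representation: one stored value per (state, action) pair.
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy selection over the table's entries for this state."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One trial-and-error update: move Q(s,a) toward the bootstrapped target."""
    best_next = max(Q[(next_state, a)] for a in range(N_ACTIONS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

    The convergence guarantees mentioned in the abstract apply to exactly this explicit-table form; replacing the table with a neural network trades those guarantees for scalability.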

  8. Consistency of color representation in smart phones.

    PubMed

    Dain, Stephen J; Kwan, Benjamin; Wong, Leslie

    2016-03-01

    One of the barriers to the construction of consistent computer-based color vision tests has been the variety of monitors and computers. Consistency of color on a variety of screens has necessitated calibration of each setup individually. Color vision examination with a carefully controlled display has, as a consequence, been a laboratory rather than a clinical activity. Inevitably, smart phones have become a vehicle for color vision tests. They have the advantage that the processor and screen are associated and there are fewer models of smart phones than permutations of computers and monitors. Colorimetric consistency of display within a model may be a given. It may extend across models from the same manufacturer but is unlikely to extend between manufacturers, especially where technologies vary. In this study, we measured the same set of colors in a JPEG file displayed on 11 samples of each of four models of smart phone (iPhone 4s, iPhone 5, Samsung Galaxy S3, and Samsung Galaxy S4) using a Photo Research PR-730. The iPhones are white-LED-backlit LCDs and the Samsungs are OLEDs. The color gamut varies between models and comparison with sRGB space shows 61%, 85%, 117%, and 110%, respectively. The iPhones differ markedly from the Samsungs and from one another. This indicates that model-specific color lookup tables will be needed. Within each model, the primaries were quite consistent (despite the age of phone varying within each sample). The worst case in each model was the blue primary; the 95th percentile limits in the v' coordinate were ±0.008 for the iPhone 4s and ±0.004 for the other three models. The u'v' variation in white points was ±0.004 for the iPhone 4s and ±0.002 for the others, although the spread of white points between models was u'v'±0.007. The differences are essentially the same for primaries at low luminance. The variation of colors intermediate between the primaries (e.g., red-purple, orange) mirrors the variation in the primaries. The variation in luminance (maximum brightness) was ±7%, 15%, 7%, and 15%, respectively. The iPhones have almost 2× the luminance. To accommodate differences between makes and models, dedicated color lookup tables will be necessary, but the variations within a model appear to be small enough that consistent color vision tests can be designed successfully.

  9. Full equations utilities (FEQUTL) model for the approximation of hydraulic characteristics of open channels and control structures during unsteady flow

    USGS Publications Warehouse

    Franz, Delbert D.; Melching, Charles S.

    1997-01-01

    The Full EQuations UTiLities (FEQUTL) model is a computer program for computation of tables that list the hydraulic characteristics of open channels and control structures as a function of upstream and downstream depths; these tables facilitate the simulation of unsteady flow in a stream system with the Full Equations (FEQ) model. Simulation of unsteady flow requires many iterations for each time period computed. Thus, computation of hydraulic characteristics during the simulations is impractical, and preparation of function tables and application of table look-up procedures facilitates simulation of unsteady flow. Three general types of function tables are computed: one-dimensional tables that relate hydraulic characteristics to upstream flow depth, two-dimensional tables that relate flow through control structures to upstream and downstream flow depth, and three-dimensional tables that relate flow through gated structures to upstream and downstream flow depth and gate setting. For open-channel reaches, six types of one-dimensional function tables contain different combinations of the top width of flow, area, first moment of area with respect to the water surface, conveyance, flux coefficients, and correction coefficients for channel curvilinearity. For hydraulic control structures, one type of one-dimensional function table contains relations between flow and upstream depth, and two types of two-dimensional function tables contain relations among flow and upstream and downstream flow depths. For hydraulic control structures with gates, a three-dimensional function table lists the system of two-dimensional tables that contain the relations among flow and upstream and downstream flow depths that correspond to different gate openings. Hydraulic control structures for which function tables containing flow relations are prepared in FEQUTL include expansions, contractions, bridges, culverts, embankments, weirs, closed conduits (circular, rectangular, and pipe-arch shapes), dam failures, floodways, and underflow gates (sluice and tainter gates). The theory for computation of the hydraulic characteristics is presented for open channels and for each hydraulic control structure. For the hydraulic control structures, the theory is developed from the results of experimental tests of flow through the structure for different upstream and downstream flow depths. These tests were done to describe flow hydraulics for a single, steady-flow design condition and, thus, do not provide complete information on flow transitions (for example, between free- and submerged-weir flow) that may result in simulation of unsteady flow. Therefore, new procedures are developed to approximate the hydraulics of flow transitions for culverts, embankments, weirs, and underflow gates.

  10. Digital Bedrock Compilation: A Geodatabase Covering Forest Service Lands in California

    NASA Astrophysics Data System (ADS)

    Elder, D.; de La Fuente, J. A.; Reichert, M.

    2010-12-01

    This digital database contains bedrock geologic mapping for Forest Service lands within California. This compilation began in 2004 and the first version was completed in 2005. The second publication of this geodatabase was completed in 2010 and filled major gaps in the southern Sierra Nevada and Modoc/Medicine Lake/Warner Mountains areas. This digital map database was compiled from previously published and unpublished geologic mapping, with source mapping and review from the California Geological Survey, the U.S. Geological Survey and others. Much of the source data was itself compilation mapping. This geodatabase is large, containing ~107,000 polygons and ~280,000 arcs. Mapping was compiled from more than one thousand individual sources and covers over 41,000,000 acres (~166,000 km2). It was compiled from source maps at various scales - from ~1:4,000 to 1:250,000 - and represents the best available geologic mapping at the largest scale possible. An estimated 70-80% of the source information was digitized from geologic mapping at 1:62,500 scale or better. The Forest Service ACT2 Enterprise Team compiled the bedrock mapping and developed a geodatabase to store this information. This geodatabase supports feature classes for polygons (e.g., map units), lines (e.g., contacts, boundaries, faults and structural lines) and points (e.g., orientation data, structural symbology). Lookup tables provide detailed information for feature class items. Lookup/type tables contain legal values and hierarchical groupings for geologic ages and lithologies. Type tables link coded values with descriptions for line and point attributes, such as line type, line location and point type. This digital mapping is at the core of many quantitative analyses and derivative map products. Queries of the database are used to produce maps and to quantify rock types of interest. These include the following: (1) ultramafic rocks - where hazards from naturally occurring asbestos are high, (2) granitic rocks - increased erosion hazards, (3) limestone, chert, sedimentary rocks - paleontological resources (Potential Fossil Yield Classification maps), (4) calcareous rocks (cave resources, water chemistry), and (5) lava flows - lava tubes (more caves). Map unit groupings (e.g., belts, terranes, tectonic & geomorphic provinces) can also be derived from the geodatabase. Digital geologic mapping was used in ground water modeling to predict effects of tunneling through the San Bernardino Mountains. Bedrock mapping is used in models that characterize watershed sediment regimes and quantify anthropogenic influences. When combined with digital geomorphology mapping, this geodatabase helps to assess landslide hazards.
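
    As an illustration of how the type tables resolve coded attribute values, here is a toy join; the codes and descriptions are invented, since the actual schema is not given in this record:

```python
# Hypothetical type table linking coded line values to descriptions; the real
# geodatabase schema and code lists are not published in this abstract.
LINE_TYPE = {
    1: "contact, certain",
    2: "contact, approximate",
    10: "fault, certain",
    11: "fault, concealed",
}

arcs = [
    {"arc_id": 501, "line_type": 10},
    {"arc_id": 502, "line_type": 2},
]

# Resolve codes to human-readable descriptions, as a GIS client does on render.
for arc in arcs:
    arc["line_type_desc"] = LINE_TYPE.get(arc["line_type"], "unknown code")
    print(arc)
```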

  11. Spatial Data Mining for Estimating Cover Management Factor of Universal Soil Loss Equation

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Lin, T. C.; Chiang, S. H.; Chen, W. W.

    2016-12-01

    Universal Soil Loss Equation (USLE) is a widely used mathematical model that describes long-term soil erosion processes. Among the six different soil erosion risk factors of USLE, the cover-management factor (C-factor) is related to land-cover/land-use. The value of the C-factor ranges from 0.001 to 1, so it alone might cause a thousandfold difference in a soil erosion analysis using USLE. The traditional methods for the estimation of the USLE C-factor include in situ experiments, soil physical parameter models, USLE look-up tables with land use maps, and regression models between vegetation indices and C-factors. However, these methods are either difficult or too expensive to implement in large areas. In addition, the values of the C-factor obtained using these methods cannot be updated frequently, either. To address this issue, this research developed a spatial data mining approach to estimate the values of the C-factor with assorted spatial datasets for a multi-temporal (2004 to 2008) annual soil loss analysis of a reservoir watershed in northern Taiwan. The idea is to establish the relationship between the USLE C-factor and spatial data consisting of vegetation indices and texture features extracted from satellite images, soil and geology attributes, digital elevation model, road and river distribution, etc. A decision tree classifier was used to rank influential conditional attributes in the preliminary data mining. Then, factor simplification and separation were considered to optimize the model, and the random forest classifier was used to analyze 9 simplified factor groups. Experimental results indicate that the overall accuracy of the data mining model is about 79% with a kappa value of 0.76. The estimated soil erosion amounts in 2004-2008 according to the data mining results are about 50.39 - 74.57 ton/ha-year after applying the sediment delivery ratio and correction coefficient. Compared with estimates calculated with C-factors from look-up tables, the soil erosion values estimated with C-factors generated from spatial data mining results are more in agreement with the values published by the watershed administration authority.
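
    A minimal sketch of the mining step, assuming scikit-learn and synthetic stand-ins for the nine simplified factor groups and discretized C-factor classes (the real features and class bins are those of the study and are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 9))          # 9 simplified factor groups (placeholder values)
y = rng.integers(0, 5, size=1000)  # discretized C-factor classes (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))   # cf. ~79% in the study
print("kappa:", cohen_kappa_score(y_te, pred))           # cf. 0.76 in the study
```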

  12. Utilization of ontology look-up services in information retrieval for biomedical literature.

    PubMed

    Vishnyakova, Dina; Pasche, Emilie; Lovis, Christian; Ruch, Patrick

    2013-01-01

    With the vast amount of biomedical data, we face the necessity to improve information retrieval processes in the biomedical domain. The use of biomedical ontologies has facilitated the combination of various data sources (e.g. scientific literature, clinical data repositories) by increasing the quality of information retrieval and reducing the maintenance effort. In this context, we developed Ontology Look-up services (OLS), based on the NEWT and MeSH vocabularies. Our services were involved in information retrieval tasks such as gene/disease normalization. The implementation of the OLS services significantly accelerated the extraction of particular biomedical facts by structuring and enriching the data context. Precision in normalization tasks was boosted by about 20%.

  13. Applications of the BIOPHYS Algorithm for Physically-Based Retrieval of Biophysical, Structural and Forest Disturbance Information

    NASA Technical Reports Server (NTRS)

    Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.

    2011-01-01

    Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Application outputs included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondences with validation field data were obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provide essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.

  14. A new Downscaling Approach for SMAP, SMOS and ASCAT by predicting sub-grid Soil Moisture Variability based on Soil Texture

    NASA Astrophysics Data System (ADS)

    Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.

    2017-12-01

    Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP and ASCAT is a topic of active research. Soil texture heterogeneity is known to be one of the main sources of soil moisture spatial variability. A method has been developed that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression using stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high resolution maps of basic soil properties such as soil texture and bulk density, relevant information to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict for each SMOS, SMAP and ASCAT grid cell the sub-grid soil moisture variability based on the SoilGrids1km data set. We provide a look-up table that indicates the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling coarse soil moisture observations of the SMOS, SMAP and ASCAT missions. Downscaling SMAP data by a field capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.
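
    Using such a look-up table reduces to a one-dimensional interpolation per grid cell; a sketch with placeholder values, not data from the published table:

```python
import numpy as np

# One grid cell's table: mean soil moisture (m3/m3) -> sub-grid std. dev.
# These numbers are invented for illustration only.
sm_mean_grid = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40])
sm_std_grid  = np.array([0.010, 0.022, 0.031, 0.036, 0.038, 0.035, 0.028, 0.018])

def subgrid_std(sm_mean):
    """Linearly interpolate the tabulated standard deviation at a given mean."""
    return np.interp(sm_mean, sm_mean_grid, sm_std_grid)

print(subgrid_std(0.23))   # e.g. sub-grid std for a cell mean of 0.23 m3/m3
```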

  15. When Does Model-Based Control Pay Off?

    PubMed

    Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J

    2016-08-01

    Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.

  16. When Does Model-Based Control Pay Off?

    PubMed Central

    2016-01-01

    Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to “model-free” and “model-based” strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand. PMID:27564094

  17. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element on the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315

  18. Silicon oxynitride-on-glass waveguide array refractometer with wide sensing range and integrated read-out (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Viegas, Jaime; Mayeh, Mona; Srinivasan, Pradeep; Johnson, Eric G.; Marques, Paulo V. S.; Farahi, Faramarz

    2017-02-01

    In this work, a silicon oxynitride-on-silica refractometer is presented, based on sub-wavelength coupled arrayed waveguide interference, and capable of low-cost, high-resolution, large-scale deployment. The sensor has an experimental spectral sensitivity as high as 3200 nm/RIU, covering refractive indices ranging from 1 (air) up to 1.43 (oils). The sensor readout can be performed by standard spectrometer techniques or by pattern projection onto a camera, followed by optical pattern recognition. Positive identification of the refractive index of an unknown species is obtained by pattern cross-correlation with an algorithm based on a look-up calibration table. Given the lower contrast between core and cladding in such devices, higher mode overlap with single-mode fiber is achieved, leading to a larger coupling efficiency and more relaxed alignment requirements as compared to the silicon photonics platform. Also, the optical transparency of the sensor in the visible range allows operation with visible-range light sources and camera detectors, at much lower capital cost for a complete sensor system. Furthermore, the choice of refractive indices of core and cladding in the sensor head with integrated readout allows the fabrication of the same device in polymers, for mass-production replication of disposable sensors.

  19. Image mosaic and topographic map of the moon

    USGS Publications Warehouse

    Hare, Trent M.; Hayward, Rosalyn K.; Blue, Jennifer S.; Archinal, Brent A.

    2015-01-01

    Sheet 2: This map is based on data from the Lunar Orbiter Laser Altimeter (LOLA; Smith and others, 2010), an instrument on the National Aeronautics and Space Administration (NASA) Lunar Reconnaissance Orbiter (LRO) spacecraft (Tooley and others, 2010). The image used for the base of this map represents more than 6.5 billion measurements gathered between July 2009 and July 2013, adjusted for consistency in the coordinate system described below, and then converted to lunar radii (Mazarico and others, 2012). For the Mercator portion, these measurements were converted into a digital elevation model (DEM) with a resolution of 0.015625 degrees per pixel, or 64 pixels per degree. In projection, the pixels are 473.8 m in size at the equator. For the polar portion, the LOLA elevation points were used to create a DEM at 240 meters per pixel. A shaded relief map was generated from each DEM with a sun angle of 45° from horizontal, and a sun azimuth of 270°, as measured clockwise from north with no vertical exaggeration. The DEM values were then mapped to a global color look-up table, with each color representing a range of 1 km of elevation. For this map sheet, only larger feature names are shown. For references listed above, please open the full PDF.
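
    The elevation-to-color step is a direct table lookup: each 1 km band indexes one entry of the global color table. A sketch with an invented palette (the map's actual colors and elevation range are not reproduced here):

```python
import numpy as np

PALETTE = np.array([            # one RGB triple per 1 km band (invented colors)
    [48, 62, 150], [60, 120, 180], [100, 170, 140],
    [170, 190, 110], [210, 160, 90], [230, 230, 230],
], dtype=np.uint8)

def colorize(dem_km, min_km=-6.0):
    """Map elevations in km to RGB by integer 1 km binning into the LUT."""
    idx = np.clip(np.floor(dem_km - min_km).astype(int), 0, len(PALETTE) - 1)
    return PALETTE[idx]

dem = np.array([[-5.2, -0.3], [1.8, 4.9]])   # toy DEM tile, km
print(colorize(dem))                          # shape (2, 2, 3) RGB image
```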

  20. Java-based PACS and reporting system for nuclear medicine

    NASA Astrophysics Data System (ADS)

    Slomka, Piotr J.; Elliott, Edward; Driedger, Albert A.

    2000-05-01

    In medical imaging practice, images and reports often need to be reviewed and edited from many locations. We have designed and implemented a Java-based Remote Viewing and Reporting System (JaRRViS) for a nuclear medicine department, which is deployed as a web service at a fraction of the cost of dedicated PACS systems. The system can be extended to other imaging modalities. JaRRViS interfaces to the clinical patient databases of imaging workstations. Specialized nuclear medicine applets support interactive displays of data such as 3-D gated SPECT with all the necessary options such as cine, filtering, dynamic lookup tables, and reorientation. The reporting module is implemented as a separate applet using the Java Foundation Classes (JFC) Swing Editor Kit and allows composition of multimedia reports after selection and annotation of appropriate images. The reports are stored on the server in HTML format. JaRRViS uses Java Servlets for the preparation and storage of final reports. The http links to the reports or to the patient's raw images with applets can be obtained from JaRRViS by any Hospital Information System (HIS) via standard queries. Such links can be sent via e-mail or included as text fields in any HIS database, providing direct access to the patient reports and images via standard web browsers.

  1. Image enhancement software for underwater recovery operations: User's manual

    NASA Astrophysics Data System (ADS)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides the capability to perform contrast enhancement and other similar functions in real time through hardware lookup tables, to automatically perform histogram equalization, and to capture one or more frames and average them or apply one of several different processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
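
    The automatic histogram equalization mentioned here reduces to building a 256-entry table from a frame's cumulative histogram and remapping every pixel through it; a sketch assuming 8-bit frames held in NumPy (the original system loaded such tables into hardware LUTs for real-time video):

```python
import numpy as np

def equalization_lut(frame):
    """Build a 256-entry lookup table from the frame's histogram CDF."""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    scale = 255.0 / max(frame.size - cdf_min, 1)
    return np.clip((cdf - cdf_min) * scale, 0, 255).astype(np.uint8)

def equalize(frame):
    return equalization_lut(frame)[frame]   # apply: one table lookup per pixel

frame = np.random.default_rng(1).integers(40, 90, (480, 640), dtype=np.uint8)
out = equalize(frame)
print(frame.min(), frame.max(), "->", out.min(), out.max())  # contrast stretched
```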

  2. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). The color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only relatively few LUT values, from which a nearest neighbor is selected. Image color values are assigned 8-bit pointers to their closest LUT value, whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
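
    The subdivision idea can be sketched as a k-d-style recursive split of the LUT colors. This is an illustration of the scheme, not the patented algorithm, and the simple descent shown is approximate because entries just across a cut plane are not examined:

```python
import numpy as np

rng = np.random.default_rng(0)
palette = rng.integers(0, 256, (256, 3)).astype(float)   # the LUT colors
MAX_PER_BOX = 8

def build(indices, depth=0):
    """Median-split the color space until each box holds <= MAX_PER_BOX entries."""
    if len(indices) <= MAX_PER_BOX:
        return ("leaf", indices)
    axis = depth % 3
    vals = palette[indices, axis]
    cut = np.median(vals)
    left, right = indices[vals <= cut], indices[vals > cut]
    if len(left) == 0 or len(right) == 0:    # degenerate split: stop here
        return ("leaf", indices)
    return ("node", axis, cut, build(left, depth + 1), build(right, depth + 1))

tree = build(np.arange(len(palette)))

def nearest(pixel):
    """Descend to the pixel's box, then brute-force only its few LUT entries."""
    node = tree
    while node[0] == "node":
        _, axis, cut, lo, hi = node
        node = lo if pixel[axis] <= cut else hi
    cands = node[1]
    d = ((palette[cands] - pixel) ** 2).sum(axis=1)
    return int(cands[np.argmin(d)])          # the 8-bit pointer into the LUT

print(nearest(np.array([200.0, 30.0, 120.0])))
```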

  3. Analysis Of AVIRIS Data From LEO-15 Using Tafkaa Atmospheric Correction

    NASA Technical Reports Server (NTRS)

    Montes, Marcos J.; Gao, Bo-Cai; Davis, Curtiss O.; Moline, Mark

    2004-01-01

    We previously developed an algorithm named Tafkaa for atmospheric correction of remote sensing ocean color data from aircraft and satellite platforms. The algorithm allows quick atmospheric correction of hyperspectral data using lookup tables generated with a modified version of Ahmad & Fraser's vector radiative transfer code. During the past few years we have extended the capabilities of the code. Current modifications include the ability to account for within-scene variation in solar geometry (important for very long scenes) and view geometries (important for wide fields of view). Additionally, versions of Tafkaa have been made for a variety of multi-spectral sensors, including SeaWiFS and MODIS. In this proceeding we present some initial results of atmospheric correction of AVIRIS data from the 2001 July Hyperspectral Coastal Ocean Dynamics Experiment (HyCODE) at LEO-15.

  4. Digital to analog conversion and visual evaluation of Thematic Mapper data

    USGS Publications Warehouse

    McCord, James R.; Binnie, Douglas R.; Seevers, Paul M.

    1985-01-01

    As a part of the National Aeronautics and Space Administration Landsat D Image Data Quality Analysis Program, the Earth Resources Observation Systems Data Center (EDC) developed procedures to optimize the visual information content of Thematic Mapper data and evaluate the resulting photographic products by visual interpretation. A digital-to-analog transfer function was developed which would properly place the digital values on the most useable portion of a film response curve. Individual black-and-white transparencies generated using the resulting look-up tables were utilized in the production of color-composite images with varying band combinations. Four experienced photointerpreters ranked 2-cm-diameter (0.75 inch) chips of selected image features of each band combination for ease of interpretability. A nonparametric rank-order test determined the significance of interpreter preference for the band combinations.

  5. Digital to Analog Conversion and Visual Evaluation of Thematic Mapper Data

    USGS Publications Warehouse

    McCord, James R.; Binnie, Douglas R.; Seevers, Paul M.

    1985-01-01

    As a part of the National Aeronautics and Space Administration Landsat D Image Data Quality Analysis Program, the Earth Resources Observation Systems Data Center (EDC) developed procedures to optimize the visual information content of Thematic Mapper data and evaluate the resulting photographic products by visual interpretation. A digital-to-analog transfer function was developed which would properly place the digital values on the most useable portion of a film response curve. Individual black-and-white transparencies generated using the resulting look-up tables were utilized in the production of color-composite images with varying band combinations. Four experienced photointerpreters ranked 2-cm-diameter (0.75 inch) chips of selected image features of each band combination for ease of interpretability. A nonparametric rank-order test determined the significance of interpreter preference for the band combinations.

  6. Building a symbolic computer algebra toolbox to compute 2D Fourier transforms in polar coordinates.

    PubMed

    Dovlo, Edem; Baddour, Natalie

    2015-01-01

    The development of a symbolic computer algebra toolbox for the computation of two-dimensional (2D) Fourier transforms in polar coordinates is presented. Multidimensional Fourier transforms are widely used in image processing, tomographic reconstructions and in fact any application that requires a multidimensional convolution. By examining a function in the frequency domain, additional information and insights may be obtained. The advantages of our method include:
    • The implementation of the 2D Fourier transform in polar coordinates within the toolbox via the combination of two significantly simpler transforms.
    • The modular approach, along with the lookup tables implemented, helps avoid the issue of indeterminate results which may occur when attempting to directly evaluate the transform.
    • The concept also helps prevent unnecessary computation of already-known transforms, thereby saving memory and processing time.

  7. Experiments with Cross-Language Information Retrieval on a Health Portal for Psychology and Psychotherapy.

    PubMed

    Andrenucci, Andrea

    2016-01-01

    Few studies have been performed within cross-language information retrieval (CLIR) in the field of psychology and psychotherapy. The aim of this paper is to analyze and assess the quality of available query translation methods for CLIR on a health portal for psychology. A test base of 100 user queries, 50 Multi Word Units (WUs) and 50 Single WUs, was used. Swedish was the source language and English the target language. Query translation methods based on machine translation (MT) and dictionary look-up were utilized in order to submit query translations to two search engines: Google Site Search and Quick Ask. Standard IR evaluation measures and a qualitative analysis were utilized to assess the results. The lexicon extracted with word alignment of the portal's parallel corpus provided better statistical results among the dictionary look-ups. Google Translate provided more linguistically correct translations overall and also delivered better retrieval results in MT.

  8. Informative Bayesian Type A uncertainty evaluation, especially applicable to a small number of observations

    NASA Astrophysics Data System (ADS)

    Cox, M.; Shirono, K.

    2017-10-01

    A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM's Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.
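
    In use, the result is a multiplier applied to the usual standard deviation of the mean. The factors below are invented placeholders that only illustrate the look-up-table usage; the paper's closed-form factor depends on the prior bounds for the standard deviation and is not reproduced here:

```python
import math

# Hypothetical table: number of observations n -> multiplicative factor.
# These values are placeholders, NOT the factors derived in the paper.
FACTOR = {2: 1.8, 3: 1.3, 4: 1.15, 5: 1.10, 10: 1.03}

def standard_uncertainty(observations):
    n = len(observations)
    mean = sum(observations) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in observations) / (n - 1))
    u_mean = s / math.sqrt(n)            # standard deviation of the mean
    return FACTOR.get(n, 1.0) * u_mean   # scale by the tabulated factor

print(standard_uncertainty([10.02, 10.05, 9.98]))
```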

  9. Easy Volcanic Aerosol (EVA v1.0): an idealized forcing generator for climate simulations

    NASA Astrophysics Data System (ADS)

    Toohey, Matthew; Stevens, Bjorn; Schmidt, Hauke; Timmreck, Claudia

    2016-11-01

    Stratospheric sulfate aerosols from volcanic eruptions have a significant impact on the Earth's climate. To include the effects of volcanic eruptions in climate model simulations, the Easy Volcanic Aerosol (EVA) forcing generator provides stratospheric aerosol optical properties as a function of time, latitude, height, and wavelength for a given input list of volcanic eruption attributes. EVA is based on a parameterized three-box model of stratospheric transport and simple scaling relationships used to derive mid-visible (550 nm) aerosol optical depth and aerosol effective radius from stratospheric sulfate mass. Precalculated look-up tables computed from Mie theory are used to produce wavelength-dependent aerosol extinction, single scattering albedo, and scattering asymmetry factor values. The structural form of EVA and the tuning of its parameters are chosen to produce best agreement with the satellite-based reconstruction of stratospheric aerosol properties following the 1991 Pinatubo eruption, and with prior millennial-timescale forcing reconstructions, including the 1815 eruption of Tambora. EVA can be used to produce volcanic forcing for climate models which is based on recent observations and physical understanding but internally self-consistent over any timescale of choice. In addition, EVA is constructed so as to allow for easy modification of different aspects of aerosol properties, in order to be used in model experiments to help advance understanding of what aspects of the volcanic aerosol are important for the climate system.

  10. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images

    PubMed Central

    Wang, Yangping; Wang, Song

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for a large amount of medical image data. To address the issue, a parallel nonrigid registration algorithm based on B-spline is proposed in this paper. First, the Logarithm Squared Difference (LSD) is considered as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, including B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, since the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results of registration quality and execution efficiency on a large amount of medical images show that our algorithm achieves better registration accuracy in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU). PMID:28053653

  11. Global root zone storage capacity from satellite-based evaporation data

    NASA Astrophysics Data System (ADS)

    Wang-Erlandsson, Lan; Bastiaanssen, Wim; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel; van Dijk, Albert; Guerschman, Juan; Keys, Patrick; Gordon, Line; Savenije, Hubert

    2016-04-01

    We present an "earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale-independent. In contrast to traditional look-up table approaches, our method captures the variability in root zone storage capacity within land cover type, including in rainforests where direct measurements of root depth otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. We find that evergreen forests are able to create a large storage to buffer for extreme droughts (with a return period of up to 60 years), in contrast to short vegetation and crops (which seem to adapt to a drought return period of about 2 years). The presented method to estimate root zone storage capacity eliminates the need for soils and rooting depth information, which could be a game-changer in global land surface modelling.

  12. Fast associative memory + slow neural circuitry = the computational model of the brain.

    NASA Astrophysics Data System (ADS)

    Berkovich, Simon; Berkovich, Efraim; Lapir, Gennady

    1997-08-01

    We propose a computational model of the brain based on a fast associative memory and relatively slow neural processors. In this model, processing time is expensive but memory access is not, and therefore most algorithmic tasks would be accomplished by using large look-up tables as opposed to calculating. The essential feature of an associative memory in this context (characteristic of a holographic-type memory) is that it works without an explicit mechanism for resolution of multiple responses. As a result, the slow neuronal processing elements, overwhelmed by the flow of information, operate as a set of templates for ranking of the retrieved information. This structure addresses the primary controversy in the brain architecture: distributed organization of memory vs. localization of processing centers. This computational model offers an intriguing explanation of many of the paradoxical features of the brain architecture, such as integration of sensors (through a DMA mechanism), subliminal perception, universality of software, interrupts, fault-tolerance, certain bizarre possibilities for rapid arithmetic, etc. In conventional computer science, this type of computational model has not attracted attention, as it goes against the technological grain by using a working memory faster than the processing elements.

  13. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
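
    The halftoning side pairs the fixed palette with standard error diffusion. A Floyd-Steinberg sketch follows; the SSQ palette design and the opponent-color error weighting are not reproduced here, and palette is any fixed K x 3 array of display colors:

```python
import numpy as np

def error_diffuse(img, palette):
    """img: HxWx3 floats in [0, 255]; palette: Kx3. Returns palette indices."""
    img = img.astype(float).copy()
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            k = int(np.argmin(((palette - old) ** 2).sum(axis=1)))
            out[y, x] = k
            err = old - palette[k]              # diffuse the quantization error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

rng = np.random.default_rng(0)
palette = rng.integers(0, 256, (256, 3)).astype(float)   # stand-in fixed palette
print(error_diffuse(rng.random((32, 32, 3)) * 255, palette).shape)
```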

  14. Quad-rotor flight path energy optimization

    NASA Astrophysics Data System (ADS)

    Kemper, Edward

    Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP 430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, this idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model that is derived for the problem at hand, resulting in a set of partial differential equations and boundary value conditions to solve these equations. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional integral derivative (PID) controller tuning technique. Finally, a brute-force, look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
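
    A look-up-table-scheduled PID loop in the spirit of the brute-force approach can be sketched as follows; the operating-point grid and gain values are invented for illustration:

```python
import numpy as np

# Operating points (e.g. commanded climb rate, m/s) -> tuned (Kp, Ki, Kd).
# Both the grid and the gains are placeholders, not values from the thesis.
OP_GRID = np.array([0.0, 1.0, 2.0, 4.0])
GAINS = np.array([[2.0, 0.5, 0.8],
                  [2.4, 0.6, 0.9],
                  [3.0, 0.8, 1.1],
                  [3.8, 1.0, 1.4]])

def gains_for(op):
    """Interpolate each gain from the brute-force table at the operating point."""
    return [np.interp(op, OP_GRID, GAINS[:, i]) for i in range(3)]

def pid_step(err, state, op, dt=0.01):
    kp, ki, kd = gains_for(op)
    state["i"] += err * dt                 # integral term
    d = (err - state["e"]) / dt            # derivative term
    state["e"] = err
    return kp * err + ki * state["i"] + kd * d

state = {"i": 0.0, "e": 0.0}
print(pid_step(0.5, state, op=1.5))        # control output for a 0.5 error
```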

  15. Compact Hip-Force Sensor for a Gait-Assistance Exoskeleton System.

    PubMed

    Choi, Hyundo; Seo, Keehong; Hyung, Seungyong; Shim, Youngbo; Lim, Soo-Chul

    2018-02-13

    In this paper, we propose a compact force sensor system for a hip-mounted exoskeleton for seniors with difficulties in walking due to muscle weakness. It senses and monitors the delivered force and power of the exoskeleton for motion control and taking urgent safety action. Two FSR (force-sensitive resistors) sensors are used to measure the assistance force when the user is walking. The sensor system directly measures the interaction force between the exoskeleton and the lower limb of the user instead of a previously reported force-sensing method, which estimated the hip assistance force from the current of the motor and lookup tables. Furthermore, the sensor system has the advantage of generating torque in the walking-assistant actuator based on directly measuring the hip-assistance force. Thus, the gait-assistance exoskeleton system can control the delivered power and torque to the user. The force sensing structure is designed to decouple the force caused by hip motion from other directional forces to the sensor so as to only measure that force. We confirmed that the hip-assistance force could be measured with the proposed prototype compact force sensor attached to a thigh frame through an experiment with a real system.

  16. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of the data holdings of NASA are in the form of images, which will be accessed by users across the computer networks. Accessing the image data in its full resolution creates data traffic problems. Image browsing using a lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, vector quantization (VQ) is the most appropriate for this application, since the decompression of VQ-compressed images is a table lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data needs expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ. It involves the selection of appropriate codebooks for a given data set and vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
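
    The claim that VQ decompression is a pure table-lookup process can be made concrete; the codebook size and block shape below are illustrative and not tied to the described system:

```python
import numpy as np

K, DIM = 256, 16                 # 256 codewords, each a flattened 4x4 pixel block
codebook = np.random.default_rng(0).random((K, DIM))   # stand-in trained codebook

def decode(indices, blocks_h, blocks_w):
    """Rebuild the image from 8-bit indices: one lookup per block, no math."""
    blocks = codebook[indices]                        # (n_blocks, 16)
    blocks = blocks.reshape(blocks_h, blocks_w, 4, 4)
    return blocks.transpose(0, 2, 1, 3).reshape(blocks_h * 4, blocks_w * 4)

idx = np.random.default_rng(1).integers(0, K, 64 * 64, dtype=np.uint8)
img = decode(idx, 64, 64)
print(img.shape)   # (256, 256) image reconstructed by lookup alone
```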

  17. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-cloud Aerosols over Ocean Using CALIOP and MODIS Data

    NASA Technical Reports Server (NTRS)

    Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.

    2013-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.
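
    The pre-computed look-up table amounts to tabulating the radiative-transfer result once over the controlling parameters and interpolating at run time; a sketch with invented axes and values, assuming SciPy's RegularGridInterpolator:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Invented LUT axes: aerosol optical depth, cloud optical depth, solar zenith.
aod = np.linspace(0.0, 2.0, 21)
cod = np.linspace(0.0, 50.0, 26)
sza = np.linspace(0.0, 80.0, 9)
A, C, S = np.meshgrid(aod, cod, sza, indexing="ij")
dre_table = 30.0 * A * np.exp(-0.02 * C) * np.cos(np.radians(S))  # fake values

# One interpolation per grid cell replaces a full radiative transfer call.
lut = RegularGridInterpolator((aod, cod, sza), dre_table)
print(lut([[0.7, 12.0, 35.0]]))   # toy instantaneous DRE estimate, W/m2
```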

  18. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-Cloud Aerosols Using CALIOP and MODIS Data

    NASA Technical Reports Server (NTRS)

    Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.

    2014-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.

  19. Compact Hip-Force Sensor for a Gait-Assistance Exoskeleton System

    PubMed Central

    Choi, Hyundo; Seo, Keehong; Hyung, Seungyong; Shim, Youngbo; Lim, Soo-Chul

    2018-01-01

    In this paper, we propose a compact force sensor system for a hip-mounted exoskeleton for seniors with walking difficulties due to muscle weakness. It senses and monitors the force and power delivered by the exoskeleton, for motion control and for taking urgent safety action. Two FSR (force-sensitive resistor) sensors are used to measure the assistance force while the user is walking. The sensor system directly measures the interaction force between the exoskeleton and the lower limb of the user, instead of estimating the hip assistance force from the motor current and lookup tables, as in a previously reported force-sensing method. Furthermore, the sensor system allows the walking-assist actuator to generate torque based on the directly measured hip-assistance force, so the gait-assistance exoskeleton system can control the power and torque delivered to the user. The force-sensing structure is designed to decouple the force caused by hip motion from forces in other directions, so that only that force is measured. We confirmed through an experiment with a real system that the hip-assistance force could be measured with the proposed prototype compact force sensor attached to a thigh frame. PMID:29438300

  20. Detection of canine skin and subcutaneous tumors by visible and near-infrared diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Cugmas, Blaž; Plavec, Tanja; Bregar, Maksimilijan; Naglič, Peter; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran

    2015-03-01

    Cancer is the main cause of canine morbidity and mortality. The existing evaluation of tumors requires an experienced veterinarian and usually includes invasive procedures (e.g., fine-needle aspiration) that can be unpleasant for the dog and the owner. We investigate visible and near-infrared diffuse reflectance spectroscopy (DRS) as a noninvasive optical technique for evaluation and detection of canine skin and subcutaneous tumors ex vivo and in vivo. The optical properties of tumors and skin were calculated in a spectrally constrained manner, using a lookup table-based inverse model. The obtained optical properties were analyzed and compared among different tumor groups. The calculated parameters of the absorption and reduced scattering coefficients were subsequently used for detection of malignant skin and subcutaneous tumors. The detection sensitivity and specificity for malignant tumors ex vivo were 90.0% and 73.5%, respectively, while the corresponding detection sensitivity and specificity in vivo were 88.4% and 54.6%. The obtained results show that DRS is a promising noninvasive optical technique for detection and classification of malignant and benign canine skin and subcutaneous tumors. The method should be further investigated on tumors with a common origin.

  1. The Scaled SLW model of gas radiation in non-uniform media based on Planck-weighted moments of gas absorption cross-section

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.

    2018-02-01

    The Scaled SLW model for prediction of radiation transfer in non-uniform gaseous media is presented. The paper considers a new approach to the construction of a Scaled SLW model. In order to keep the SLW method a simple and computationally efficient engineering method, special attention is paid to explicit, non-iterative methods for calculating the scaling coefficient. The moments of the gas absorption cross-section weighted by the Planck blackbody emissive power (in particular, the first moment, the Planck mean, and the first inverse moment, the Rosseland mean) are used as the total characteristics of the absorption spectrum to be preserved by scaling. Generalized SLW modelling using these moments, in both the discrete gray gases and the continuous formulation, is presented. Application of a line-by-line look-up table for the corresponding ALBDF and inverse ALBDF distribution functions (so that no implicit equations need to be solved) keeps the method flexible and efficient. Predictions of radiative transfer using the Scaled SLW model are compared to line-by-line benchmark solutions, as well as to predictions of the Rank Correlated SLW model and the SLW Reference Approach. Conclusions and recommendations regarding application of the Scaled SLW model are made.

  2. Retrieval of aerosol optical properties using MERIS observations: Algorithm and some first results.

    PubMed

    Mei, Linlu; Rozanov, Vladimir; Vountas, Marco; Burrows, John P; Levy, Robert C; Lotz, Wolfhardt

    2017-08-01

    The MEdium Resolution Imaging Spectrometer (MERIS) instrument on board ESA Envisat made measurements from 2002 to 2012. Although MERIS was limited in spectral coverage, accurate Aerosol Optical Thickness (AOT) can be retrieved from MERIS data by using appropriate additional information. We introduce a new AOT retrieval algorithm for MERIS over land surfaces, referred to as the eXtensible Bremen AErosol Retrieval (XBAER). XBAER is similar to the "dark-target" (DT) retrieval algorithm used for the Moderate-resolution Imaging Spectroradiometer (MODIS), in that it uses a lookup table (LUT) to match the satellite-observed reflectance and derive the AOT. Instead of a global parameterization of surface spectral reflectance, XBAER uses a set of spectral coefficients to prescribe surface properties. In this manner, XBAER is not limited to dark surfaces (vegetation) and retrieves AOT over bright surfaces (desert, semiarid, and urban areas). Preliminary validation of the MERIS-derived AOT against ground-based Aerosol Robotic Network (AERONET) measurements yields good agreement: the resulting regression equation is y = (0.92 ± 0.07)x + (0.05 ± 0.01), with a Pearson correlation coefficient of R = 0.78. Global monthly means of AOT have been compared among XBAER, MODIS, and other satellite-derived datasets.

  3. Ultra-Low Power Dynamic Knob in Adaptive Compressed Sensing Towards Biosignal Dynamics.

    PubMed

    Wang, Aosen; Lin, Feng; Jin, Zhanpeng; Xu, Wenyao

    2016-06-01

    Compressed sensing (CS) is an emerging sampling paradigm in data acquisition. Its integrated analog-to-information structure can perform simultaneous data sensing and compression with low-complexity hardware. To date, most existing CS implementations have a fixed architectural setup, which lacks the flexibility and adaptivity needed for efficient dynamic data sensing. In this paper, we propose a dynamic knob (DK) design that effectively reconfigures the CS architecture by recognizing the biosignal. Specifically, the dynamic knob is a template-based structure comprising a supervised learning module and a look-up table module. We model the DK performance in closed analytic form and optimize the design via a dynamic programming formulation. We present the design on a 130 nm process, with a 0.058 mm² footprint and an energy consumption of 187.88 nJ/event. Furthermore, we benchmark the design performance using a publicly available dataset. Given the energy constraint in wireless sensing, the adaptive CS architecture can consistently improve the signal reconstruction quality by more than 70% compared with traditional CS. The experimental results indicate that the ultra-low-power dynamic knob provides effective adaptivity and improves the signal quality in compressed sensing towards biosignal dynamics.
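
    The template-based structure described here pairs a supervised learning module with a look-up table that maps the recognized biosignal class to a CS configuration. A minimal sketch of that control path follows; the classes, template matching, and configuration fields are illustrative assumptions, not the authors' design.

        import numpy as np

        # Hypothetical LUT: recognized signal class -> CS sampling configuration.
        CS_CONFIG_LUT = {
            "rest":     {"compression_ratio": 10, "basis": "dct"},
            "exercise": {"compression_ratio": 4,  "basis": "dwt"},
            "artifact": {"compression_ratio": 2,  "basis": "identity"},
        }

        def classify(window, templates):
            """Supervised template matching: nearest template by Euclidean distance."""
            dists = {label: np.linalg.norm(window - t) for label, t in templates.items()}
            return min(dists, key=dists.get)

        def dynamic_knob(window, templates):
            """Recognize the biosignal, then reconfigure CS via a table lookup."""
            return CS_CONFIG_LUT[classify(window, templates)]

        # Toy usage with constant templates per class.
        templates = {k: np.full(64, i) for i, k in enumerate(CS_CONFIG_LUT)}
        print(dynamic_knob(np.ones(64), templates))   # selects the "exercise" config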

  4. Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron

    2008-01-01

    In this paper, we present enhancements to the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum-fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite-dimensional convex optimization problem, specifically as a finite-dimensional second-order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. In particular, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; (vi) initial results on developing an onboard table-lookup method to obtain nearly fuel-optimal solutions in real time.

  5. A flamelet model for supersonic non-premixed combustion with pressure variation

    NASA Astrophysics Data System (ADS)

    Zhao, Guo-Yan; Sun, Ming-Bo; Wu, Jin-Shui; Wang, Hong-Bo

    2015-08-01

    A modified flamelet model is proposed for studying supersonic combustion with pressure variation, given that pressure is far from homogeneous in a supersonic combustor. In this model, the flamelet database is tabulated at a reference pressure, while quantities at other pressures are obtained from a sixth-order polynomial expansion in pressure. Because the modified model stores only the coefficients of this expansion, it requires less memory and less table-lookup time, avoiding the expense of tabulating the database at many pressure values; its performance is accordingly much better than that of a flamelet model-based method with tabulation at different pressure values. Two types of hydrogen-fueled scramjet combustors were used to validate the modified flamelet model. It was observed that the temperature in the combustion region is sensitive to the choice of model, which in turn significantly affects the pressure. The results of the modified model were in better agreement with the experimental data than those of the isobaric flamelet model, especially for temperature, which is predicted more accurately. It is concluded that the modified flamelet model is more effective for cases with a wide range of pressure variation.
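
    Storing only the polynomial coefficients turns each off-reference-pressure query into a table lookup followed by a cheap polynomial evaluation. The sketch below illustrates that idea; the coefficient layout and the normalization of the pressure deviation are assumptions for illustration, not the authors' implementation.

        import numpy as np

        def flamelet_query(coeff_table, entry_index, pressure, p_ref):
            """Evaluate a tabulated flamelet quantity at an arbitrary pressure.

            coeff_table: (n_entries, 7) sixth-order polynomial coefficients per
                         table entry, highest order first, fitted about p_ref
            entry_index: index resolved from, e.g., mixture fraction and
                         scalar dissipation rate
            """
            coeffs = coeff_table[entry_index]      # one table lookup per query
            dp = pressure / p_ref - 1.0            # normalized pressure deviation
            value = 0.0
            for c in coeffs:                       # Horner evaluation
                value = value * dp + c
            return value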

  6. Contribution of cosmic ray particles to radiation environment at high mountain altitude: Comparison of Monte Carlo simulations with experimental data.

    PubMed

    Mishev, A L

    2016-03-01

    A numerical model for assessment of the effective dose due to secondary cosmic ray particles of galactic origin at high mountain altitude, about 3000 m above sea level, is presented. The model is based on a newly computed effective dose yield function that considers realistic propagation of cosmic rays in the Earth's magnetosphere and atmosphere. The yield function is computed using a full Monte Carlo simulation of the atmospheric cascade induced by primary protons and α-particles and subsequent conversion of the secondary particle fluence (neutrons, protons, gammas, electrons, positrons, muons, and charged pions) to effective dose. A lookup table of the newly computed effective dose yield function is provided. The model is compared with several measurements. The comparison of model simulations with measured spectral energy distributions of secondary cosmic ray neutrons at high mountain altitude shows good consistency. Results from measurements of the radiation environment at the high mountain station Basic Environmental Observatory Moussala (42.11 N, 23.35 E, 2925 m a.s.l.) are also shown, specifically the contribution of secondary cosmic ray neutrons, and a good agreement with the model is demonstrated.

  7. Extraction and Separation Modeling of Orion Test Vehicles with ADAMS Simulation

    NASA Technical Reports Server (NTRS)

    Fraire, Usbaldo, Jr.; Anderson, Keith; Cuthbert, Peter A.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) project has increased efforts to demonstrate the performance of fully integrated parachute systems at both higher dynamic pressures and in the presence of wake fields using a Parachute Compartment Drop Test Vehicle (PCDTV) and a Parachute Test Vehicle (PTV), respectively. Modeling the extraction and separation events has proven challenging, and an understanding of the physics is required to reduce the risk of separation malfunctions. The need for extraction and separation modeling is critical to a successful CPAS test campaign. Current PTV-alone simulations, such as the Decelerator System Simulation (DSS), require accurate initial conditions (ICs) drawn from a separation model. Automatic Dynamic Analysis of Mechanical Systems (ADAMS), a Commercial off the Shelf (COTS) tool, was employed to provide insight into the multi-body six degree-of-freedom (DOF) interaction between parachute test hardware and external and internal forces. Components of the model include a composite extraction parachute, primary vehicle (PTV or PCDTV), platform cradle, a release mechanism, aircraft ramp, and a programmer parachute with attach points. Independent aerodynamic forces were applied to the mated test vehicle/platform cradle and to the separated test vehicle and platform cradle. The aero coefficients were determined from real-time lookup tables which were functions of both angle of attack (α) and sideslip (β). The atmospheric properties were also determined from a real-time lookup table characteristic of the Yuma Proving Ground (YPG) atmosphere relative to the planned test month. Representative geometries were constructed in ADAMS with measured mass properties generated for each independent vehicle. Derived smart separation parameters were included in ADAMS as sensors, with defined pitch and pitch-rate criteria used to refine inputs to analogous avionics systems for optimal separation conditions. Key design variables were dispersed in a Monte Carlo analysis to provide the maximum expected range of the state variables at programmer deployment, to be used as ICs in DSS. Extensive comparisons were made with the Decelerator System Simulation Application (DSSA) to validate the mated portion of the ADAMS extraction trajectory. Results of the comparisons improved the fidelity of ADAMS with a ramp pitch profile update from DSSA. Post-test reconstructions resulted in improvements to extraction parachute drag area knock-down factors, extraction line modeling, and the inclusion of ball-to-socket attachments used as a release mechanism on the PTV. Modeling of two extraction parachutes was based on United States Air Force (USAF) tow test data and integrated into ADAMS for nominal and Monte Carlo trajectory assessments. Video overlay of ADAMS animations and actual C-12 chase plane test videos supported analysis and observation of extraction and separation events. The COTS ADAMS simulation has been integrated with NASA-based simulations to provide complete end-to-end trajectories with a focus on the extraction, separation, and programmer deployment sequence. The flexibility of modifying ADAMS inputs has proven useful for sensitivity studies and extraction/separation modeling efforts.

  8. Self-addressed diffractive lens schemes for the characterization of LCoS displays

    NASA Astrophysics Data System (ADS)

    Zhang, Haolin; Lizana, Angel; Iemmi, Claudio; Monroy-Ramírez, Freddy A.; Márquez, Andrés; Moreno, Ignacio; Campos, Juan

    2018-02-01

    We propose a self-calibration method to determine both the phase-voltage look-up table and the screen phase distribution of Liquid Crystal on Silicon (LCoS) displays by implementing different lens configurations on the studied device within the same optical scheme. On the one hand, the phase-voltage relation is determined from interferometric measurements obtained by addressing split-lens phase distributions to the LCoS display. On the other hand, the surface profile is retrieved by self-addressing a diffractive micro-lens array to the LCoS display, thereby configuring a Shack-Hartmann wavefront sensor that self-determines the screen's spatial variations. Both the phase-voltage response and the surface phase inhomogeneity of the LCoS are thus measured within the same experimental set-up, without the need for further adjustments. Experimental results prove the usefulness of this technique for LCoS display characterization.

  9. Physiological basis for noninvasive skin cancer diagnosis using diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Yao; Markey, Mia K.; Tunnell, James W.

    2017-02-01

    Diffuse reflectance spectroscopy offers a noninvasive, fast, and low-cost alternative to visual screening and biopsy for skin cancer diagnosis. We have previously acquired reflectance spectra from 137 lesions in 76 patients and determined the capability of spectral diagnosis using principal component analysis (PCA). However, why spectral analysis enables tissue classification has not been well elucidated. To provide the physiological basis, we used the Monte Carlo look-up table (MCLUT) model to extract physiological parameters from those clinical data. The MCLUT model yields the following physiological parameters: oxygen saturation, hemoglobin concentration, melanin concentration, vessel radius, and scattering parameters. These parameters show that cancerous skin tissue has lower scattering and larger vessel radii than normal tissue. The results demonstrate the potential of diffuse reflectance spectroscopy for detecting early precancerous changes in tissue. In the future, a diagnostic algorithm that combines these physiological parameters could enable noninvasive diagnosis of skin cancer.

  10. Mars Radiation Surface Model

    NASA Astrophysics Data System (ADS)

    Alzate, N.; Grande, M.; Matthiae, D.

    2017-09-01

    Planetary Space Weather Services (PSWS) within the Europlanet H2020 Research Infrastructure have been developed following protocols and standards available in astrophysical, solar physics, and planetary science Virtual Observatories. Several VO-compliant functionalities have been implemented in various tools. The PSWS extends the concepts of space weather and space situational awareness to other planets in our Solar System, and in particular to spacecraft that voyage through it. One of the five toolkits developed as part of these services is a model dedicated to the Mars environment. This model has been developed at Aberystwyth University and the Institut für Luft- und Raumfahrtmedizin (DLR Cologne) using modeled average conditions available from Planetocosmics. It is available for tracing the propagation of solar events through the Solar System and modeling the response of the Mars environment. The results have been synthesized into look-up tables parameterized by variable solar wind conditions at Mars.

  11. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.

  12. Reflectance model for quantifying chlorophyll a in the presence of productivity degradation products

    NASA Technical Reports Server (NTRS)

    Carder, K. L.; Hawes, S. K.; Steward, R. G.; Baker, K. A.; Smith, R. C.; Mitchell, B. G.

    1991-01-01

    A reflectance model developed to estimate chlorophyll a concentrations in the presence of marine colored dissolved organic matter, pheopigments, detritus, and bacteria is presented. Nomograms and lookup tables are generated to describe the effects of different mixtures of chlorophyll a and these degradation products on the R(412):R(443) and R(443):R(565) remote-sensing reflectance or irradiance reflectance ratios. These are used to simulate the accuracy of potential ocean color satellite algorithms, assuming that atmospheric effects have been removed. For the California Current upwelling and offshore regions, with chlorophyll a not greater than 1.3 mg/m³, the average error for chlorophyll a retrievals derived from irradiance reflectance data in degradation product-rich areas was reduced from ±61% to ±23% by application of an algorithm using two reflectance ratios rather than the commonly used algorithm applying a single reflectance ratio.

  13. Synchronization trigger control system for flow visualization

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1987-01-01

    The use of cinematography or holographic interferometry for dynamic flow visualization in an internal combustion engine requires a control device that globally synchronizes camera and light source timing at a predefined shaft encoder angle. The device is capable of 0.35 deg resolution for rotational speeds of up to 73,240 rpm. This was achieved by implementing a shaft-encoder-signal-addressed look-up table (LUT) and appropriate latches. The digital signal processing technique developed achieves high-speed triggering-angle detection within 25 nsec by using direct parallel bit comparison of the shaft encoder digital code with a simulated angle reference code, instead of angle value comparison, which involves more complicated computation steps. In order to establish synchronization to an AC reference signal whose magnitude varies with the rotating speed, a dynamic peak follow-up synchronization technique was devised. This method scrutinizes the reference signal and provides the correct timing within 40 nsec. Two application examples are described.

  14. Fiber-optic extrinsic Fabry-Perot interferometer strain sensor with <50 pm displacement resolution using three-wavelength digital phase demodulation.

    PubMed

    Schmidt, M; Werther, B; Fuerstenau, N; Matthias, M; Melz, T

    2001-04-09

    A fiber-optic extrinsic Fabry-Perot interferometer strain sensor (EFPI-S) of ls = 2.5 cm sensor length using three-wavelength digital phase demodulation is demonstrated to exhibit <50 pm displacement resolution (<2 nm/m strain resolution) when measuring the cross expansion of a PZT-ceramic plate. The sensing (single-mode downlead) and reflecting fibers are fused into a 150/360 μm capillary fiber, where the fusion points define the sensor length. Readout is performed using an improved version of the previously described three-wavelength digital phase demodulation method employing an arctan phase-stepping algorithm. In the recent experiments the strain sensitivity was varied via the mapping of the arctan lookup table to the 16-bit DA-converter range, from 188.25 krad/V (6 V range: 1130 krad) to 11.7 krad/V (range: 70 krad).

  15. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    Efficient algorithms for blind image deconvolution and their high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, covering both the algorithm structure and the numerical calculation methods. The main optimizations are: modularization of the structure for implementation feasibility, reduction of the data computation and dependency of the 2D-FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and that the data throughput of the image restoration system exceeds 7.8 Msps. The optimization is shown to be efficient and feasible, and Fast SeDDaRA is able to support real-time applications.
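
    Accelerating a power operation with a segmented look-up table means precomputing x^p on a coarse grid of segments and interpolating inside each segment. A hedged sketch of that idea follows; the segment count and linear interpolation scheme are illustrative assumptions, not the paper's exact design.

        import numpy as np

        class SegmentedPowLUT:
            """Approximate x**p on [lo, hi] with a segmented look-up table."""

            def __init__(self, p, lo, hi, n_segments=256):
                self.edges = np.linspace(lo, hi, n_segments + 1)
                self.values = self.edges ** p     # precomputed once

            def __call__(self, x):
                # Locate the segment, then interpolate linearly inside it.
                i = np.clip(np.searchsorted(self.edges, x) - 1,
                            0, len(self.edges) - 2)
                t = (x - self.edges[i]) / (self.edges[i + 1] - self.edges[i])
                return (1 - t) * self.values[i] + t * self.values[i + 1]

        # Example: replace x**0.5 in an inner loop with a table lookup.
        sqrt_lut = SegmentedPowLUT(0.5, 0.0, 1.0)
        print(sqrt_lut(0.49))   # approximately 0.7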

  16. aMCfast: automation of fast NLO computations for PDF fits

    NASA Astrophysics Data System (ADS)

    Bertone, Valerio; Frederix, Rikkert; Frixione, Stefano; Rojo, Juan; Sutton, Mark

    2014-08-01

    We present the interface between MadGraph5_aMC@NLO, a self-contained program that calculates cross sections up to next-to-leading order accuracy in an automated manner, and APPLgrid, a code that parametrises such cross sections in the form of look-up tables which can be used for the fast computations needed in the context of PDF fits. The main characteristic of this interface, which we dub aMCfast, is that it is fully automated as well, which removes the need to extract manually the process-specific information for additional physics processes, as is the case with other matrix-element calculators, and renders it straightforward to include any new process in the PDF fits. We demonstrate this by studying several cases which are easily measured at the LHC, have a good constraining power on PDFs, and some of which were previously unavailable in the form of a fast interface.

  17. A Short Research Note on Calculating Exact Distribution Functions and Random Sampling for the 3D NFW Profile

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Howlett, Cullan

    2018-06-01

    In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject sampling or numeric interpolation (sometimes via a lookup table) to project random uniform samples through the quantile function and produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions, given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
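
    The dimensionless enclosed mass of an NFW halo is μ(x) = ln(1+x) - x/(1+x) with x = r/r_s, so the quantile function follows from solving μ(x) = p μ(c) for x, which reduces to a Lambert W evaluation. The sketch below rederives that inversion rather than copying the note's published expression, so treat it as an illustration; scipy's lambertw is used on the principal branch.

        import numpy as np
        from scipy.special import lambertw

        def mu(x):
            """Dimensionless NFW enclosed mass: ln(1+x) - x/(1+x)."""
            return np.log1p(x) - x / (1.0 + x)

        def nfw_quantile(p, c):
            """Radius x = r/r_s enclosing a fraction p of the mass within x = c.

            Solving mu(x) = p * mu(c) gives 1/(1+x) = -W0(-exp(-1 - p*mu(c))),
            where W0 is the principal Lambert W branch (1/(1+x) lies in (0, 1]).
            """
            y = p * mu(c)
            w = lambertw(-np.exp(-1.0 - y), k=0).real
            return -1.0 - 1.0 / w

        # Draw 10^5 radii for a c = 10 halo by pushing uniform deviates
        # through the quantile function (inverse transform sampling).
        rng = np.random.default_rng(42)
        x = nfw_quantile(rng.uniform(size=100000), c=10.0)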

  18. Development of esMOCA RULA, Motion Capture Instrumentation for RULA Assessment

    NASA Astrophysics Data System (ADS)

    Akhmad, S.; Arendra, A.

    2018-01-01

    The purpose of this research is to build a motion-capture instrument, using fused accelerometer and gyroscope sensors, to assist in RULA assessment. Sensor orientation is computed at every sensor node by a digital motion processor. Nine sensors are placed on the upper limb of the subject operator. A kinematics model was developed with SimMechanics in Simulink; it receives streaming data from the sensors via a wireless sensor network. The output of the kinematics model is the relative angle between upper-limb members, visualized on the monitor. This angular information is compared against the look-up table of the RULA worksheet to give the RULA score. The instrument's assessment was compared with assessments by RULA assessors; in summary, there is no significant difference between assessment by the instrument and assessment by an assessor.
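
    Scoring against the RULA worksheet is itself a table lookup: each joint angle streamed from the kinematics model falls into a band with a predefined score. A minimal sketch for a single joint follows; the bands mirror the published RULA upper-arm scoring, but treat them as illustrative and unverified against the authors' table.

        def upper_arm_score(flexion_deg):
            """RULA upper-arm base score from the flexion/extension angle.

            Positive angles are flexion, negative are extension; worksheet
            adjustments (shoulder raised, abduction, arm supported) are omitted.
            """
            if -20.0 <= flexion_deg <= 20.0:
                return 1
            if flexion_deg < -20.0 or flexion_deg <= 45.0:
                return 2
            if flexion_deg <= 90.0:
                return 3
            return 4

        # A streamed relative angle maps straight to a score.
        print(upper_arm_score(30.0))   # 2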

  19. Disposable soft 3 axis force sensor for biomedical applications.

    PubMed

    Chathuranga, Damith Suresh; Zhongkui Wang; Yohan Noh; Nanayakkara, Thrishantha; Hirai, Shinichi

    2015-08-01

    This paper proposes a new disposable soft 3D force sensor that can be used to calculate force or displacement, as well as vibrations. It uses three Hall effect sensors orthogonally placed around a cylindrical beam made of silicone rubber. A niobium permanent magnet is embedded inside the silicone. When a force is applied to the end of the cylinder, it is compressed and bent to the side opposite the force, displacing the magnet. This displacement changes the magnetic flux around the ratiometric linear sensors (Hall effect sensors). By analysing these changes, we calculate the force or displacement in three directions using a lookup table. This sensor can be used in minimally invasive surgery and haptic feedback applications. Cheap construction, biocompatibility, and ease of miniaturization are a few of the advantages of this sensor. The sensor design and its characterization are presented in this work.

  20. The effects of training on errors of perceived direction in perspective displays

    NASA Technical Reports Server (NTRS)

    Tharp, Gregory K.; Ellis, Stephen R.

    1990-01-01

    An experiment was conducted to determine the effects of training on the characteristic direction errors that are observed when subjects estimate exocentric directions on perspective displays. Changes in five subjects' perceptual errors were measured during a training procedure designed to eliminate the error. The training was provided by displaying to each subject both the sign and the direction of his judgment error. The feedback provided by the error display was found to decrease, but not eliminate, the error. A lookup table model of the source of the error was developed, in which the judgment errors were attributed to overestimates of both the pitch and the yaw of the viewing direction used to produce the perspective projection. The model predicts the quantitative characteristics of the data somewhat better than previous models did. A mechanism is proposed for the observed learning, and further tests of the model are suggested.

  1. A Framework for Optimal Control Allocation with Structural Load Constraints

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc

    2010-01-01

    Conventional aircraft generally employ mixing algorithms or lookup tables to determine the control surface deflections needed to achieve the moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next-generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem; these issues can be addressed with optimal control allocation. Most optimal control allocation algorithms enforce control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed that enables a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof-of-concept demonstration of the framework in a simulation of a generic transport aircraft is presented.

  2. MIDAS - ESO's new image processing system

    NASA Astrophysics Data System (ADS)

    Banse, K.; Crane, P.; Grosbol, P.; Middleburg, F.; Ounnas, C.; Ponz, D.; Waldthausen, H.

    1983-03-01

    The Munich Image Data Analysis System (MIDAS) is an image processing system whose heart is a pair of VAX 11/780 computers linked together via DECnet. One of these computers, VAX-A, is equipped with 3.5 Mbytes of memory, 1.2 Gbytes of disk storage, and two tape drives with 800/1600 bpi density. The other computer, VAX-B, has 4.0 Mbytes of memory, 688 Mbytes of disk storage, and one tape drive with 1600/6250 bpi density. MIDAS is a command-driven system geared toward the interactive user; the type and number of parameters in a command depend on the particular command invoked. MIDAS is a highly modular system that provides building blocks for undertaking more sophisticated applications. Presently, 175 commands are available. These include interactive modification of the color lookup table to enhance various image features, and interactive extraction of subimages.

  3. Application of active magnetic bearings in flexible rotordynamic systems - A state-of-the-art review

    NASA Astrophysics Data System (ADS)

    Siva Srinivas, R.; Tiwari, R.; Kannababu, Ch.

    2018-06-01

    In this paper a critical review of the literature on applications of Active Magnetic Bearing (AMB) systems in flexible rotordynamic systems is presented. AMBs find various applications in rotating machinery; however, this paper focuses mainly on work in vibration suppression and the associated condition monitoring using AMBs. It briefly introduces the reader to the AMB working principle, provides details of the various hardware components of a typical rotor-AMB test rig, and presents a background on traditional methods of vibration suppression and condition monitoring in flexible rotors. It then summarizes the basic features of the AMB-integrated flexible rotor test rigs available in the literature, with the necessary instrumentation and their main objectives. A couple of lookup tables summarize important information about the test rigs in the papers within the scope of this article. Finally, future directions for AMB research within the paper's scope are suggested.

  4. Toward Improved Modeling of Spectral Solar Irradiance for Solar Energy Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Yu; Sengupta, Manajit

    This study introduces the National Renewable Energy Laboratory's (NREL's) recent efforts to extend the capability of the Fast All-sky Radiation Model for Solar applications (FARMS) by computing spectral solar irradiances over both horizontal and inclined surfaces. A new model is developed by computing the optical thickness of the atmosphere using a spectral irradiance model for clear-sky conditions, SMARTS2. A comprehensive lookup table (LUT) of cloud bidirectional transmittance distribution functions (BTDFs) is precomputed for 2002 wavelength bands using an atmospheric radiative transfer model, libRadtran. The solar radiation transmitted through the atmosphere is obtained by considering all possible paths of photon transmission and the relevant scattering and absorption attenuation. Our results indicate that this new model has an accuracy similar to that of state-of-the-art radiative transfer models, but is significantly more efficient.

  5. Building a symbolic computer algebra toolbox to compute 2D Fourier transforms in polar coordinates

    PubMed Central

    Dovlo, Edem; Baddour, Natalie

    2015-01-01

    The development of a symbolic computer algebra toolbox for the computation of two-dimensional (2D) Fourier transforms in polar coordinates is presented. Multidimensional Fourier transforms are widely used in image processing, tomographic reconstructions, and in fact any application that requires a multidimensional convolution. By examining a function in the frequency domain, additional information and insights may be obtained. The advantages of our method include:
    • The implementation of the 2D Fourier transform in polar coordinates within the toolbox via the combination of two significantly simpler transforms.
    • The modular approach, along with the lookup tables implemented, helps avoid the indeterminate results that may occur when attempting to directly evaluate the transform.
    • The concept also helps prevent unnecessary computation of already-known transforms, thereby saving memory and processing time. PMID:26150988

  6. Numerical techniques for high-throughput reflectance interference biosensing

    NASA Astrophysics Data System (ADS)

    Sevenler, Derin; Ünlü, M. Selim

    2016-06-01

    We have developed a robust and rapid computational method for processing the raw spectral data collected from thin-film optical interference biosensors. We have applied this method to Interference Reflectance Imaging Sensor (IRIS) measurements and observed a 10,000-fold improvement in processing time, unlocking a variety of clinical and scientific applications. Interference biosensors have advantages over similar technologies in certain applications, for example highly multiplexed measurements of molecular kinetics. However, processing raw IRIS data into useful measurements has been prohibitively time-consuming for high-throughput studies. Here we describe the implementation of a lookup table (LUT) technique that provides accurate results in far less time than naive methods. We also discuss an additional benefit: the LUT method can be used with a wider range of interference-layer thicknesses and with experimental configurations that are incompatible with methods requiring a fit of the spectral response.
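
    The LUT technique here replaces per-pixel spectral fitting with a nearest-match search against precomputed spectra. The sketch below shows that substitution; the forward model, thickness grid, and wavelengths are toy placeholders, not the IRIS optical model.

        import numpy as np

        def build_lut(thicknesses, wavelengths, forward_model):
            """Precompute reflectance spectra for a grid of film thicknesses."""
            return np.array([forward_model(t, wavelengths) for t in thicknesses])

        def fit_thickness(measured, lut, thicknesses):
            """Pick the thickness whose precomputed spectrum best matches the data."""
            errors = np.sum((lut - measured) ** 2, axis=1)   # SSE per LUT entry
            return thicknesses[np.argmin(errors)]

        # Toy two-beam interference model with arbitrary constants.
        def toy_model(t, wl, n_film=1.46):
            return 0.5 + 0.4 * np.cos(4 * np.pi * n_film * t / wl)

        wl = np.linspace(500e-9, 700e-9, 128)
        grid = np.linspace(0.0, 200e-9, 2001)
        lut = build_lut(grid, wl, toy_model)
        print(fit_thickness(toy_model(83e-9, wl), lut, grid))   # ~8.3e-08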

  7. Plasma-Jet Magneto-Inertial Fusion Burn Calculations

    NASA Astrophysics Data System (ADS)

    Santarius, John

    2010-11-01

    Several issues exist related to using plasma jets to implode a Magneto-Inertial Fusion (MIF) liner onto a magnetized plasmoid and compress it to fusion-relevant temperatures [1]. The poster will explore how well the liner's inertia provides transient plasma confinement and affects the burn dynamics. The investigation uses the University of Wisconsin's 1-D Lagrangian radiation-hydrodynamics code, BUCKY, which solves single-fluid equations of motion with ion-electron interactions, PdV work, table-lookup equations of state, fast-ion energy deposition, pressure contributions from all species, and one or two temperatures. Extensions to the code include magnetic field evolution as the plasmoid compresses, plus dependence of the thermal conductivity on the magnetic field. [1] Y. C. F. Thio, et al., "Magnetized Target Fusion in a Spheroidal Geometry with Standoff Drivers," in Current Trends in International Fusion Research, E. Panarella, ed. (National Research Council of Canada, Ottawa, Canada, 1999), p. 113.

  8. Positional accuracy and geographic bias of four methods of geocoding in epidemiologic research.

    PubMed

    Schootman, Mario; Sterling, David A; Struthers, James; Yan, Yan; Laboube, Ted; Emo, Brett; Higgs, Gary

    2007-06-01

    We examined the geographic bias of four methods of geocoding addresses: ArcGIS, a commercial firm, SAS/GIS, and aerial photography. We compared the "point-in-polygon" methods (ArcGIS, commercial firm, and aerial photography) and the "look-up table" method (SAS/GIS) for allocating addresses to census geography, particularly as it relates to census-based poverty rates. We randomly selected 299 addresses of children treated for asthma at an urban emergency department (1999-2001). The coordinates of the building's address-side door were obtained by constant offset for ArcGIS and the commercial firm, and by true ground location for aerial photography. Coordinates were available for 261 addresses across all methods. For 24% to 30% of geocoded road/door coordinates, the positional error was 51 meters or greater, which was similar across geocoding methods. The mean bearing was -26.8 degrees for the vector of coordinates based on aerial photography and ArcGIS, and 8.5 degrees for the vector based on aerial photography and the commercial firm (p < 0.0001). ArcGIS and the commercial firm performed very well relative to SAS/GIS in terms of allocation to census geography. For 20% of addresses, the door location based on aerial photography was assigned to a different block group than by SAS/GIS. The block group poverty rate varied by at least two standard deviations for 6% to 7% of addresses. We found important differences in distance and bearing between geocoding methods relative to aerial photography. Allocation of locations based on aerial photography to census-based geographic areas could lead to substantial errors.

  9. Determine and Implement Updates to Be Made to MODEAR (Mission Operations Data Enterprise Architecture Repository)

    NASA Technical Reports Server (NTRS)

    Fanourakis, Sofia

    2015-01-01

    My main project was to determine and implement updates to be made to MODEAR (Mission Operations Data Enterprise Architecture Repository) process definitions to be used for CST-100 (Crew Space Transportation-100) related missions. Emphasis was placed on the scheduling aspect of the processes. In addition, I was to complete other tasks as given. Some of the additional tasks were: to create pass-through command look-up tables for the flight controllers, finish one of the MDT (Mission Operations Directorate Display Tool) displays, gather data on what is included in the CST-100 public data, develop a VBA (Visual Basic for Applications) script to create a csv (Comma-Separated Values) file with specific information from spreadsheets containing command data, create a command script for the November MCC-ASIL (Mission Control Center-Avionics System Integration Laboratory) testing, and take notes for one of the TCVB (Terminal Configured Vehicle B-737) meetings. In order to make progress in my main project I scheduled meetings with the appropriate subject matter experts, prepared material for the meetings, and assisted in the discussions in order to understand the process or processes at hand. After such discussions I made updates to various MODEAR processes and process graphics. These meetings have resulted in significant updates to the processes that were discussed. In addition, the discussions have helped the departments responsible for these processes better understand the work ahead and provided material to help document how their products are created. I completed my other tasks utilizing resources available to me and, when necessary, consulting with the subject matter experts. Outputs resulting from my other tasks were: two completed and one partially completed pass-through command look-up tables for the flight controllers, significant updates to one of the MDT displays, a spreadsheet containing data on what is included in the CST-100 public data, a tool to create a csv file with specific information from spreadsheets containing command data, a command script for the November MCC-ASIL testing which resulted in a successful test day identifying several potential issues, and notes from one of the TCVB meetings that were used to keep the teams up to date on what was discussed and decided. I have learned a great deal working at NASA these last four months. I was able to meet and work with amazing individuals, further develop my technical knowledge, expand my knowledge base regarding human spaceflight, and contribute to the CST-100 missions. My work at NASA has strengthened my desire to continue my education in order to make further contributions to the field, and has given me the opportunity to see the advantages of a career at NASA.

  10. Heuristic Modeling for TRMM Lifetime Predictions

    NASA Technical Reports Server (NTRS)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single one-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measuring Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
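
    The heuristic reduces lifetime prediction to a two-dimensional table lookup plus a simple fuel model. Below is a sketch of that spreadsheet logic; the table values, axes, and engine constants are invented placeholders, not the mission's numbers.

        import numpy as np

        # Hypothetical look-up table: maneuvers per month as a function of
        # ballistic coefficient (rows) and solar flux index F10.7 (columns).
        BC_AXIS   = np.array([50.0, 100.0, 150.0])           # kg/m^2
        FLUX_AXIS = np.array([70.0, 150.0, 230.0])           # F10.7
        MANEUVERS = np.array([[4.0, 8.0, 14.0],
                              [2.0, 4.0,  7.0],
                              [1.5, 3.0,  5.0]])

        def maneuvers_per_month(bc, flux):
            """Bilinear interpolation into the maneuver-frequency table."""
            i = np.clip(np.searchsorted(BC_AXIS, bc) - 1, 0, len(BC_AXIS) - 2)
            j = np.clip(np.searchsorted(FLUX_AXIS, flux) - 1, 0, len(FLUX_AXIS) - 2)
            u = (bc - BC_AXIS[i]) / (BC_AXIS[i + 1] - BC_AXIS[i])
            v = (flux - FLUX_AXIS[j]) / (FLUX_AXIS[j + 1] - FLUX_AXIS[j])
            m = MANEUVERS
            return ((1 - u) * (1 - v) * m[i, j] + u * (1 - v) * m[i + 1, j]
                    + (1 - u) * v * m[i, j + 1] + u * v * m[i + 1, j + 1])

        def fuel_per_month(bc, flux, dv=0.5, mass=3500.0, isp=220.0):
            """Simple engine model: rocket-equation propellant per month (kg)."""
            ve = isp * 9.81                       # exhaust velocity, m/s
            per_burn = mass * (1.0 - np.exp(-dv / ve))
            return maneuvers_per_month(bc, flux) * per_burn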

  11. Path Integration of Head Direction: Updating a Packet of Neural Activity at the Correct Speed Using Axonal Conduction Delays

    PubMed Central

    Walters, Daniel; Stringer, Simon; Rolls, Edmund

    2013-01-01

    The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and a competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a "look-up" table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity. PMID:23526976

  12. Decision-Tree Formulation With Order-1 Lateral Execution

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A compact symbolic formulation enables mapping of an arbitrarily complex decision tree of a certain type into a highly computationally efficient multidimensional software object. The type of decision tree to which this formulation applies is known in the art as the Boolean class of balanced decision trees. Parallel lateral slices of an object created by means of this formulation can be executed in constant time, considerably less time than would otherwise be required. Decision trees of various forms are incorporated into almost all large software systems. A decision tree is a way of hierarchically solving a problem, proceeding through a set of true/false responses to a conclusion. By definition, a decision tree has a tree-like structure, wherein each internal node denotes a test on an attribute, each branch from an internal node represents an outcome of the test, and leaf nodes represent classes or class distributions that, in turn, represent possible conclusions. The drawback of decision trees is that executing them can be computationally expensive (and, hence, time-consuming), because each non-leaf node must be examined to determine whether to progress deeper into the tree structure or to examine an alternative. The present formulation was conceived as an efficient means of representing a decision tree and executing it in as little time as possible. The formulation involves the use of a set of symbolic algorithms to transform a decision tree into a multidimensional object, the rank of which equals the number of lateral non-leaf nodes. The tree can then be executed in constant time by means of an order-one table lookup. The sequence of operations performed by the algorithms is summarized as follows:
    1. Determination of whether the tree under consideration can be encoded by means of this formulation.
    2. Extraction of decision variables.
    3. Symbolic optimization of the decision tree to minimize its form.
    4. Expansion and transformation of all nested conjunctive-disjunctive paths to a flattened conjunctive form composed only of equality checks, when possible.
    If each reduced conjunctive form contains only equality checks and all of these forms use the same variables, then the decision tree can be reduced to an order-one operation through a table lookup. The speedup to order one is accomplished by distributing each decision variable over a surface of a multidimensional object, mapping each equality constant to an index. A worked example of this compilation appears below.
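
    When every flattened conjunctive path is an equality check over the same variables, the whole tree collapses into an N-dimensional array indexed by those variables. The sketch below illustrates the transformation on an invented two-variable tree; the domains and outcomes are placeholders, not the formulation's symbolic algorithms.

        import numpy as np
        from itertools import product

        MODES  = ["idle", "active"]         # domain of decision variable 1
        LEVELS = ["low", "high"]            # domain of decision variable 2

        def decide(mode, level):
            """Reference decision tree: nested equality checks."""
            if mode == "idle":
                return "sleep" if level == "low" else "monitor"
            return "throttle" if level == "low" else "run"

        # Compile the tree once into a rank-2 object (rank = lateral non-leaf nodes).
        TABLE = np.empty((len(MODES), len(LEVELS)), dtype=object)
        for (i, m), (j, l) in product(enumerate(MODES), enumerate(LEVELS)):
            TABLE[i, j] = decide(m, l)

        MODE_IDX  = {m: i for i, m in enumerate(MODES)}
        LEVEL_IDX = {l: j for j, l in enumerate(LEVELS)}

        def decide_fast(mode, level):
            """Order-1 execution: one table lookup replaces the tree walk."""
            return TABLE[MODE_IDX[mode], LEVEL_IDX[level]]

        assert decide_fast("idle", "high") == decide("idle", "high")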

  13. Estimating effective particle size of tropical deep convective clouds with a look-up table method using satellite measurements of brightness temperature differences

    NASA Astrophysics Data System (ADS)

    Hong, Gang; Minnis, Patrick; Doelling, David; Ayers, J. Kirk; Sun-Mack, Szedung

    2012-03-01

    A method for estimating the effective ice particle radius Re at the tops of tropical deep convective clouds (DCC) is developed on the basis of precomputed look-up tables (LUTs) of brightness temperature differences (BTDs) between the 3.7 and 11.0 μm bands. A combination of discrete ordinates radiative transfer and correlated k-distribution programs, which account for multiple scattering and monochromatic molecular absorption in the atmosphere, is utilized to compute the LUTs as functions of solar zenith angle, satellite zenith angle, relative azimuth angle, Re, cloud top temperature (CTT), and cloud visible optical thickness τ. The LUT-estimated DCC Re agrees well with the cloud retrievals of the Moderate Resolution Imaging Spectroradiometer (MODIS) for the NASA Clouds and the Earth's Radiant Energy System, with a correlation coefficient of 0.988 and differences of less than 10%. The LUTs are applied to 1 year of measurements taken from MODIS aboard Aqua in 2007 to estimate DCC Re, which is compared to a similar quantity from CloudSat over the region bounded by 140°E, 180°E, 0°N, and 20°N in the Western Pacific Warm Pool. The estimated DCC Re values are mainly concentrated in the range of 25-45 μm and decrease with CTT. Matching the LUT-estimated Re with the ice cloud Re retrieved by CloudSat, it is found that the ice cloud τ values from the DCC top down to the vertical location where the LUT-estimated Re matches the CloudSat-retrieved Re profile are mostly less than 2.5, with a mean value of about 1.3. Changes in the DCC τ can result in differences of less than 10% in Re estimated from the LUTs. LUTs of the 0.65 μm bidirectional reflectance distribution function (BRDF) are built as functions of viewing geometry and the column amount of ozone above the upper troposphere. The 0.65 μm BRDF can eliminate some non-core portions of the DCCs detected using only 11 μm brightness temperature thresholds, which results in a mean difference of only 0.6 μm for DCC Re estimated from the BTD LUTs.

  14. Evaluation of the MODIS Aerosol Retrievals over Ocean and Land during CLAMS.

    NASA Astrophysics Data System (ADS)

    Levy, R. C.; Remer, L. A.; Martins, J. V.; Kaufman, Y. J.; Plana-Fattori, A.; Redemann, J.; Wenny, B.

    2005-04-01

    The Chesapeake Lighthouse Aircraft Measurements for Satellites (CLAMS) experiment took place from 10 July to 2 August 2001 in a combined ocean-land region that included the Chesapeake Lighthouse [Clouds and the Earth's Radiant Energy System (CERES) Ocean Validation Experiment (COVE)] and the Wallops Flight Facility (WFF), both along coastal Virginia. This experiment was designed mainly for validating instruments and algorithms aboard the Terra satellite platform, including the Moderate Resolution Imaging Spectroradiometer (MODIS). Over the ocean, MODIS retrieved aerosol optical depths (AODs) at seven wavelengths and an estimate of the aerosol size distribution. Over the land, MODIS retrieved AOD at three wavelengths plus qualitative estimates of the aerosol size. Temporally coincident measurements of aerosol properties were made with a variety of sun photometers from ground sites and airborne sites just above the surface. The set of sun photometers provided unprecedented spectral coverage from visible (VIS) to the solar near-infrared (NIR) and infrared (IR) wavelengths. In this study, AOD and aerosol size retrieved from MODIS is compared with similar measurements from the sun photometers. Over the nearby ocean, the MODIS AOD in the VIS and NIR correlated well with sun-photometer measurements, nearly fitting a one-to-one line on a scatterplot. As one moves from ocean to land, there is a pronounced discontinuity of the MODIS AOD, where MODIS compares poorly to the sun-photometer measurements. Especially in the blue wavelength, MODIS AOD is too high in clean aerosol conditions and too low under larger aerosol loadings. Using the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative code to perform atmospheric correction, the authors find inconsistency in the surface albedo assumptions used by the MODIS lookup tables. It is demonstrated how the high bias at low aerosol loadings can be corrected. By using updated urban/industrial aerosol climatology for the MODIS lookup table over land, it is shown that the low bias for larger aerosol loadings can also be corrected. Understanding and improving MODIS retrievals over the East Coast may point to strategies for correction in other locations, thus improving the global quality of MODIS. Improvements in regional aerosol detection could also lead to the use of MODIS for monitoring air pollution.

  15. Building an integrated neurodegenerative disease database at an academic health center.

    PubMed

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

    It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets matching the diverse and complementary criteria set by the investigators. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators covers Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, we were able to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as a platform, with built-in "backwards" functionality to provide Access as a front-end client to interface with the database. We used PHP Hypertext Preprocessor to create the "front-end" web interface and then used a master lookup table to integrate the individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those obtained using an alternative approach of querying the individual databases separately, and demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases.

  16. Reducing uncertainty for estimating forest carbon stocks and dynamics using integrated remote sensing, forest inventory and process-based modeling

    NASA Astrophysics Data System (ADS)

    Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.

    2015-12-01

    Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, are used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.

  17. Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark

    2016-01-01

    This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing in August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
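
    The table-lookup atmosphere model can be pictured as interpolation on a stored altitude grid. The sketch below uses invented, Mars-like placeholder values and log-linear interpolation, which suits near-exponential profiles.

        import numpy as np

        alt_km = np.array([0.0, 10.0, 20.0, 40.0, 60.0])               # grid
        density = np.array([1.2e-2, 2.0e-3, 4.0e-4, 4.0e-6, 3.0e-7])   # kg/m^3
        pressure = np.array([700.0, 120.0, 25.0, 0.3, 0.02])           # Pa

        def atmosphere_at(h_km):
            # Interpolate the logs so the profile stays positive and smooth.
            rho = np.exp(np.interp(h_km, alt_km, np.log(density)))
            p = np.exp(np.interp(h_km, alt_km, np.log(pressure)))
            return rho, p

        print(atmosphere_at(15.0))  # density and pressure between grid points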

  18. Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose

    2018-06-01

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
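
    The look-up table inversion in category (3) reduces, in its simplest form, to a nearest-entry search over RTM simulations. A hedged sketch, with random arrays standing in for the simulated spectra:

        import numpy as np

        rng = np.random.default_rng(0)
        lut_lai = np.linspace(0.5, 6.0, 200)      # candidate LAI values
        lut_spectra = rng.random((200, 50))       # stand-in for RTM-simulated spectra

        def invert(observed_spectrum):
            # Least-squares distance to every LUT entry; return the best LAI.
            cost = np.sum((lut_spectra - observed_spectrum) ** 2, axis=1)
            return lut_lai[np.argmin(cost)]

        obs = lut_spectra[42] + 0.01 * rng.standard_normal(50)  # noisy "observation"
        print(invert(obs))  # recovers the LAI stored at entry 42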

  19. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems where the processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm cannot be performed directly within the real-time process. To cope with this issue, here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
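
    A minimal sketch of the pre-computed pixel-mapping idea, assuming a one-coefficient radial distortion model (the coefficient is invented): once the map is built, per-frame correction is a single array gather.

        import numpy as np

        H, W, k1 = 480, 640, -1.5e-7          # image size; radial coefficient
        v, u = np.mgrid[0:H, 0:W].astype(float)
        cx, cy = W / 2.0, H / 2.0
        r2 = (u - cx) ** 2 + (v - cy) ** 2
        # Source coordinates in the distorted image for each corrected pixel:
        map_u = np.clip(np.round(cx + (u - cx) * (1 + k1 * r2)), 0, W - 1).astype(int)
        map_v = np.clip(np.round(cy + (v - cy) * (1 + k1 * r2)), 0, H - 1).astype(int)

        def correct(frame):
            return frame[map_v, map_u]        # O(1) per pixel at run time

        corrected = correct(np.random.rand(H, W))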

  20. Quantification of effective plant rooting depth: advancing global hydrological modelling

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Donohue, R. J.; McVicar, T.

    2017-12-01

    Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Moreover, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modelling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales and provides improved model outputs when compared to BCP model results from two already existing global Zr datasets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.

  1. Evaluation of a Novel Conjunctive Exploratory Navigation Interface for Consumer Health Information: A Crowdsourced Comparative Study

    PubMed Central

    Cui, Licong; Carter, Rebecca

    2014-01-01

    Background: Numerous consumer health information websites have been developed to provide consumers access to health information. However, lookup search is insufficient for consumers to take full advantage of these rich public information resources. Exploratory search is considered a promising complementary mechanism, but its efficacy has never before been rigorously evaluated for consumer health information retrieval interfaces. Objective: This study aims to (1) introduce a novel Conjunctive Exploratory Navigation Interface (CENI) for supporting effective consumer health information retrieval and navigation, and (2) evaluate the effectiveness of CENI through a search-interface comparative evaluation using crowdsourcing with Amazon Mechanical Turk (AMT). Methods: We collected over 60,000 consumer health questions from NetWellness, one of the first consumer health websites to provide high-quality health information. We designed and developed a novel conjunctive exploratory navigation interface to explore NetWellness health questions with health topics as dynamic and searchable menus. To investigate the effectiveness of CENI, we developed a second interface with keyword-based search only. A crowdsourcing comparative study was carefully designed to compare three search modes of interest: (A) the topic-navigation-based CENI, (B) the keyword-based lookup interface, and (C) either the most commonly available lookup search interface with Google, or the resident advanced search offered by NetWellness. To compare the effectiveness of the three search modes, 9 search tasks were designed with relevant health questions from NetWellness. Each task included a rating of difficulty level and questions for validating the quality of answers. Ninety anonymous and unique AMT workers were recruited as participants. Results: Repeated-measures ANOVA analysis of the data showed the search modes A, B, and C had statistically significant differences among their levels of difficulty (P<.001). Wilcoxon signed-rank test (one-tailed) between A and B showed that A was significantly easier than B (P<.001). Paired t tests (one-tailed) between A and C showed A was significantly easier than C (P<.001). Participant responses on the preferred search modes showed that 47.8% (43/90) participants preferred A, 25.6% (23/90) preferred B, 24.4% (22/90) preferred C. Participant comments on the preferred search modes indicated that CENI was easy to use, provided better organization of health questions by topics, allowed users to narrow down to the most relevant contents quickly, and supported the exploratory navigation by non-experts or those unsure how to initiate their search. Conclusions: We presented a novel conjunctive exploratory navigation interface for consumer health information retrieval and navigation. Crowdsourcing permitted a carefully designed comparative search-interface evaluation to be completed in a timely and cost-effective manner with a relatively large number of participants recruited anonymously. Accounting for possible biases, our study has shown for the first time with crowdsourcing that the combination of exploratory navigation and lookup search is more effective than lookup search alone. PMID:24513593

  2. Evaluation of a novel Conjunctive Exploratory Navigation Interface for consumer health information: a crowdsourced comparative study.

    PubMed

    Cui, Licong; Carter, Rebecca; Zhang, Guo-Qiang

    2014-02-10

    Numerous consumer health information websites have been developed to provide consumers access to health information. However, lookup search is insufficient for consumers to take full advantage of these rich public information resources. Exploratory search is considered a promising complementary mechanism, but its efficacy has never before been rigorously evaluated for consumer health information retrieval interfaces. This study aims to (1) introduce a novel Conjunctive Exploratory Navigation Interface (CENI) for supporting effective consumer health information retrieval and navigation, and (2) evaluate the effectiveness of CENI through a search-interface comparative evaluation using crowdsourcing with Amazon Mechanical Turk (AMT). We collected over 60,000 consumer health questions from NetWellness, one of the first consumer health websites to provide high-quality health information. We designed and developed a novel conjunctive exploratory navigation interface to explore NetWellness health questions with health topics as dynamic and searchable menus. To investigate the effectiveness of CENI, we developed a second interface with keyword-based search only. A crowdsourcing comparative study was carefully designed to compare three search modes of interest: (A) the topic-navigation-based CENI, (B) the keyword-based lookup interface, and (C) either the most commonly available lookup search interface with Google, or the resident advanced search offered by NetWellness. To compare the effectiveness of the three search modes, 9 search tasks were designed with relevant health questions from NetWellness. Each task included a rating of difficulty level and questions for validating the quality of answers. Ninety anonymous and unique AMT workers were recruited as participants. Repeated-measures ANOVA analysis of the data showed the search modes A, B, and C had statistically significant differences among their levels of difficulty (P<.001). Wilcoxon signed-rank test (one-tailed) between A and B showed that A was significantly easier than B (P<.001). Paired t tests (one-tailed) between A and C showed A was significantly easier than C (P<.001). Participant responses on the preferred search modes showed that 47.8% (43/90) participants preferred A, 25.6% (23/90) preferred B, 24.4% (22/90) preferred C. Participant comments on the preferred search modes indicated that CENI was easy to use, provided better organization of health questions by topics, allowed users to narrow down to the most relevant contents quickly, and supported the exploratory navigation by non-experts or those unsure how to initiate their search. We presented a novel conjunctive exploratory navigation interface for consumer health information retrieval and navigation. Crowdsourcing permitted a carefully designed comparative search-interface evaluation to be completed in a timely and cost-effective manner with a relatively large number of participants recruited anonymously. Accounting for possible biases, our study has shown for the first time with crowdsourcing that the combination of exploratory navigation and lookup search is more effective than lookup search alone.

  3. An improved Four-Russians method and sparsified Four-Russians algorithm for RNA folding.

    PubMed

    Frid, Yelena; Gusfield, Dan

    2016-01-01

    The basic RNA secondary structure prediction problem or single sequence folding problem (SSF) was solved 35 years ago by a now well-known [Formula: see text]-time dynamic programming method. Recently three methodologies-Valiant, Four-Russians, and Sparsification-have been applied to speed up RNA secondary structure prediction. The Sparsification method exploits two properties of the input: the number Z of subsequences with endpoints belonging to the optimal folding set, and the maximum number L of base pairs. These sparsity properties satisfy [Formula: see text] and [Formula: see text], and the method reduces the algorithmic running time to O(LZ). The Four-Russians method, in contrast, utilizes tabling of partial results. In this paper, we explore three different algorithmic speedups. We first expand and reformulate the single sequence folding Four-Russians [Formula: see text]-time algorithm to utilize an on-demand lookup table. Second, we create a framework that combines the fastest Sparsification and the new fastest on-demand Four-Russians methods. This combined method has a worst-case running time of [Formula: see text], where [Formula: see text] and [Formula: see text]. Third, we update the Four-Russians formulation to achieve an on-demand [Formula: see text]-time parallel algorithm. This then leads to an asymptotic speedup of [Formula: see text], where [Formula: see text] and [Formula: see text] is the number of subsequences with endpoint j belonging to the optimal folding set. The on-demand formulation not only removes all extraneous computation and allows us to incorporate more realistic scoring schemes, but also lets us take advantage of the sparsity properties. Through asymptotic analysis and empirical testing on the base-pair maximization variant and a more biologically informative scoring scheme, we show that this Sparse Four-Russians framework achieves a speedup on every problem instance that is asymptotically never worse than, and empirically better than, the minimum of the two methods alone.
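
    For illustration only, the sketch below shows the base-pair maximization DP with an on-demand (memoized) table, in the spirit of computing entries only when they are requested; it is not the Four-Russians or Sparsification speedup itself.

        from functools import lru_cache

        PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

        def max_pairs(seq):
            @lru_cache(maxsize=None)          # the on-demand lookup table
            def opt(i, j):
                if j - i < 1:                 # empty or single base
                    return 0
                best = opt(i, j - 1)          # case: j unpaired
                for k in range(i, j):         # case: j pairs with k
                    if (seq[k], seq[j]) in PAIRS:
                        best = max(best, opt(i, k - 1) + 1 + opt(k + 1, j - 1))
                return best
            return opt(0, len(seq) - 1)

        print(max_pairs("GGGAAAUCC"))  # 3 base pairs for this toy sequence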

  4. Integrating patient reported outcome measures and computerized adaptive test estimates on the same common metrics: an example from the assessment of activities in rheumatoid arthritis.

    PubMed

    Doğanay Erdoğan, Beyza; Elhan, Atilla Halil; Kaskatı, Osman Tolga; Öztuna, Derya; Küçükdeveci, Ayşe Adile; Kutlay, Şehim; Tennant, Alan

    2017-10-01

    This study aimed to explore the potential of an inclusive and fully integrated measurement system for the Activities component of the International Classification of Functioning, Disability and Health (ICF), incorporating four classical scales, including the Health Assessment Questionnaire (HAQ), and a Computerized Adaptive Testing (CAT). Three hundred patients with rheumatoid arthritis (RA) answered relevant questions from four questionnaires. Rasch analysis was performed to create an item bank using this item pool. A further 100 RA patients were recruited for a CAT application. Both real and simulated CATs were applied and the agreement between these CAT-based scores and 'paper-pencil' scores was evaluated with the intraclass correlation coefficient (ICC). Anchoring strategies were used to obtain a direct translation from the item bank common metric to the HAQ score. The mean age of the 300 patients was 52.3 ± 11.7 years; disease duration was 11.3 ± 8.0 years; 74.7% were women. After testing for the assumptions of Rasch analysis, a 28-item Activities item bank was created. The agreement between CAT-based scores and paper-pencil scores was high (ICC = 0.993). Using those HAQ items in the item bank as anchoring items, another Rasch analysis was performed with HAQ-8 scores as separate items together with the anchoring items. Finally, a conversion table of the item bank common metric to the HAQ scores was created. A fully integrated and inclusive health assessment system, illustrating the Activities component of the ICF, was built to assess RA patients. Raw score to metric conversions and vice versa were available, giving access to the metric by a simple look-up table. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.

  5. [Tasseled cap triangle (TCT)-leaf area index (LAI) model of rice fields based on PROSAIL model and its application].

    PubMed

    Li, Ya Ni; Lu, Lei; Liu, Yong

    2017-12-01

    The tasseled cap triangle (TCT)-leaf area index (LAI) isoline is a model that reflects the distribution of LAI isolines in the spectral space constituted by the reflectance of the red and near-infrared (NIR) bands, and the LAI retrieval model developed on this basis is more accurate than the commonly used statistical relationship models. This study used ground-based measurements of a rice field, validated the applicability of the PROSAIL model in simulating the canopy reflectance of the rice field, and calibrated the input parameters of the model. The ranges of the PROSAIL input parameters for simulating rice canopy reflectance were determined. On this basis, the TCT-LAI isoline model of the rice field was established, and a look-up table (LUT) required for remote sensing retrieval of LAI was developed. The LUT was then applied to Landsat 8 and WorldView 3 data to retrieve the LAI of the rice field. The results showed that the LAI retrieved using the LUT developed from the TCT-LAI isoline model had a good linear relationship with the measured LAI (R²=0.76, RMSE=0.47). Compared with the LAI retrieved from Landsat 8, the LAI values retrieved from WorldView 3 varied over a wider range, and the data distribution was more scattered. When the Landsat 8 and WorldView 3 reflectance data were resampled to 1 km to retrieve LAI, the MODIS LAI product was significantly underestimated compared with the retrieved LAI.

  6. Global root zone storage capacity from satellite-based evaporation

    NASA Astrophysics Data System (ADS)

    Wang-Erlandsson, Lan; Bastiaanssen, Wim G. M.; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel B.; van Dijk, Albert I. J. M.; Guerschman, Juan P.; Keys, Patrick W.; Gordon, Line J.; Savenije, Hubert H. G.

    2016-04-01

    This study presents an "Earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale independent. In contrast to a traditional look-up table approach, our method captures the variability in root zone storage capacity within land cover types, including in rainforests where direct measurements of root depths otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM (Simple Terrestrial Evaporation to Atmosphere Model) improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. Our results suggest that several forest types are able to create a large storage to buffer for severe droughts (with a very long return period), in contrast to, for example, savannahs and woody savannahs (medium length return period), as well as grasslands, shrublands, and croplands (very short return period). The presented method to estimate root zone storage capacity eliminates the need for poor resolution soil and rooting depth data that form a limitation for achieving progress in the global land surface modelling community.

  7. Electrostatic polymer-based microdeformable mirror for adaptive optics

    NASA Astrophysics Data System (ADS)

    Zamkotsian, Frederic; Conedera, Veronique; Granier, Hugues; Liotard, Arnaud; Lanzoni, Patrick; Salvagnac, Ludovic; Fabre, Norbert; Camon, Henri

    2007-02-01

    Future adaptive optics (AO) systems require deformable mirrors with very challenging parameters, up to 250 000 actuators and inter-actuator spacing around 500 μm. MOEMS-based devices are promising for the development of a complete generation of new deformable mirrors. Our micro-deformable mirror (MDM) is based on an array of electrostatic actuators with attachments to a continuous mirror on top. The originality of our approach lies in the elaboration of layers made of polymer materials. Mirror layers and active actuators have been demonstrated. Based on the design of this actuator and our polymer process, a complete polymer-MDM has been realized using two process flows: the first involves exclusively polymer materials, while the second uses SU8 polymer for structural layers and SiO2 and sol-gel for sacrificial layers. The latter shows a better capability to produce completely released structures. The electrostatic force provides a non-linear actuation, while AO systems are based on linear matrix operations. We have therefore developed dedicated 14-bit electronics in order to "linearize" the actuation, using a calibration and a sixth-order polynomial fitting strategy. The response is nearly perfect over our 3×3 MDM prototype, with a standard deviation of 3.5 nm; the influence function of the central actuator has been measured. A first evaluation of the cross non-linearities has also been performed on the OKO mirror, and a simple look-up table is sufficient for determining the location of each actuator whatever the locations of the neighboring actuators. Electrostatic MDMs are particularly well suited for open-loop AO applications.
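
    The calibration-plus-polynomial linearization can be sketched as follows, with a quadratic response standing in for the real electrostatic actuator and all numbers invented.

        import numpy as np

        codes = np.arange(0, 2 ** 14, 16)        # 14-bit drive codes (subsampled)
        xn = codes / codes.max()                 # normalized for a stable fit
        deflection = 500.0 * xn ** 2             # nm; stand-in for the V^2 response

        coeffs = np.polyfit(xn, deflection, 6)   # sixth-order calibration fit

        # Inverse lookup: for each target deflection on a fine grid, store
        # the drive code whose fitted deflection is closest.
        targets = np.linspace(0.0, 500.0, 1024)
        fitted = np.polyval(coeffs, xn)
        inverse_lut = codes[np.abs(fitted[None, :] - targets[:, None]).argmin(axis=1)]

        def code_for(nm):
            return int(inverse_lut[np.argmin(np.abs(targets - nm))])

        print(code_for(125.0))  # about half of full scale for this stand-in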

  8. Cache directory lookup reader set encoding for partial cache line speculation support

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2014-10-21

    In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution.

  9. Examining the Conditions of Using an On-Line Dictionary to Learn Words and Comprehend Texts

    ERIC Educational Resources Information Center

    Dilenschneider, Robert Francis

    2018-01-01

    This study investigated three look-up conditions for language learners to learn unknown target words and comprehend a reading passage when their attention is transferred away to an on-line dictionary. The research questions focused on how each look-up condition impacted the recall and recognition of word forms, word meanings, and passage…

  10. Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.

    PubMed

    Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2016-03-01

    Blind quality assessment of 3D images encounters more new challenges than its 2D counterparts. In this paper, we propose a blind quality assessment method for stereoscopic images that learns the characteristics of receptive fields (RFs) from the perspective of dictionary learning and constructs quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching optimal GRF and LRF indexes from the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.

  11. Blogging for the Distance Librarian

    ERIC Educational Resources Information Center

    Pival, Paul R.

    2005-01-01

    Based on user lookups, "Merriam-Webster Online "proclaimed "Blog" the word of the year for 2004. Distance librarianship, until mid-way through 2004, was a subject that was underrepresented in the blogosphere. The inception of a blog called "The Distant Librarian: Comments on the World of Distance Librarianship" is chronicled in this article, along…

  12. Polarized light imaging specifies the anisotropy of light scattering in the superficial layer of a tissue

    PubMed Central

    Jacques, Steven L.; Roussel, Stéphane; Samatham, Ravikant

    2016-01-01

    This report describes how optical images acquired using linearly polarized light can specify the anisotropy of scattering (g) and the ratio of reduced scattering [μs′=μs(1−g)] to absorption (μa), i.e., N′=μs′/μa. A camera acquired copolarized (HH) and crosspolarized (HV) reflectance images of a tissue (skin), which yielded images based on the intensity (I=HH+HV) and difference (Q=HH−HV) of reflectance images. Monte Carlo simulations generated an analysis grid (or lookup table), which mapped Q and I into a grid of g versus N′, i.e., g(Q,I) and N′(Q,I). The anisotropy g is interesting because it is sensitive to the submicrometer structure of biological tissues. Hence, polarized light imaging can monitor shifts in the submicrometer (50 to 1000 nm) structure of tissues. The Q values for forearm skin on two subjects (one Caucasian, one pigmented) were in the range of 0.046±0.007 (24), which is the mean±SD for 24 measurements on 8 skin sites×3 visible wavelengths, 470, 524, and 625 nm, which indicated g values of 0.67±0.07 (24). PMID:27165546
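
    A minimal sketch of the analysis-grid lookup, with a synthetic placeholder for the Monte Carlo-derived g(Q, I) surface:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        q_axis = np.linspace(0.0, 0.1, 11)
        i_axis = np.linspace(0.1, 0.6, 11)
        QQ, II = np.meshgrid(q_axis, i_axis, indexing="ij")
        g_grid = 0.95 - 3.0 * QQ + 0.1 * II   # placeholder for the simulated g(Q, I)

        g_of = RegularGridInterpolator((q_axis, i_axis), g_grid)
        print(g_of([[0.046, 0.35]])[0])       # g for a measured (Q, I) pair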

  13. Direct Volume Rendering with Shading via Three-Dimensional Textures

    NASA Technical Reports Server (NTRS)

    VanGelder, Allen; Kim, Kwansik

    1996-01-01

    A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.

  14. A Temperature Sensor using a Silicon-on-Insulator (SOI) Timer for Very Wide Temperature Measurement

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L.; Hammoud, Ahmad; Elbuluk, Malik; Culley, Dennis E.

    2008-01-01

    A temperature sensor based on a commercial-off-the-shelf (COTS) Silicon-on-Insulator (SOI) Timer was designed for extreme temperature applications. The sensor can operate under a wide temperature range, from hot jet engine compartments to cryogenic space exploration missions. For example, in a Jet Engine Distributed Control Architecture, the sensor must be able to operate at temperatures exceeding 150 C. For space missions, extremely low cryogenic temperatures need to be measured. The output of the sensor, which consists of a stream of digitized pulses whose period is proportional to the sensed temperature, can be interfaced with a controller or a computer. The data acquisition system then gives a direct readout of the temperature through the use of a look-up table, a built-in algorithm, or a mathematical model. Because of the wide range of temperature measurement and because the sensor is made of carefully selected COTS parts, this work is directly applicable to the NASA Fundamental Aeronautics/Subsonic Fixed Wing Program--Jet Engine Distributed Engine Control Task and to the NASA Electronic Parts and Packaging (NEPP) Program. In the past, a temperature sensor was designed and built using an SOI operational amplifier, and a report was issued. This work uses an SOI 555 timer as its core and is completely new.
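
    The look-up-table readout amounts to interpolating a stored period-versus-temperature calibration; the sketch below uses invented calibration numbers.

        import numpy as np

        period_us = np.array([80.0, 100.0, 125.0, 160.0, 210.0])   # measured periods
        temp_c = np.array([-190.0, -100.0, 0.0, 100.0, 200.0])     # reference temps

        def temperature_from_period(p_us):
            # Linear interpolation between calibration points.
            return float(np.interp(p_us, period_us, temp_c))

        print(temperature_from_period(112.0))  # -52 C for these made-up numbers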

  15. Surface emissivity and temperature retrieval for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1998-12-01

    With the growing use of hyper-spectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. The author believes that this will make it possible to move beyond the present temperature-emissivity separation algorithms by using methods which take advantage of the many channels available in hyper-spectral imagers. A simple fact exploited in the novel algorithm is that typical surface emissivity spectra are rather smooth compared with the spectral features introduced by the atmosphere. Thus, an iterative solution technique can be devised which retrieves emissivity spectra based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. One such iterative algorithm solves the radiative transfer equation for the radiance at the sensor for the unknown emissivity and uses the blackbody temperature computed in an atmospheric window as a guess for the unknown surface temperature. By varying the surface temperature over a small range, a series of emissivity spectra are calculated, and the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
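
    The smoothness criterion can be demonstrated in a few lines: sweep candidate surface temperatures, solve for emissivity at each, and keep the candidate whose emissivity spectrum is smoothest. The sketch below is a toy version in which a synthetic spiky downwelling term stands in for atmospheric features; it is not the author's full algorithm.

        import numpy as np

        H, C, K = 6.626e-34, 2.998e8, 1.381e-23

        def planck(wl_m, T):
            return (2 * H * C ** 2 / wl_m ** 5) / np.expm1(H * C / (wl_m * K * T))

        wl = np.linspace(8e-6, 12e-6, 120)                   # TIR band, meters
        rng = np.random.default_rng(1)
        L_down = 0.3 * planck(wl, 260.0) * (1 + 0.2 * rng.standard_normal(wl.size))
        true_T = 300.0
        true_eps = 0.96 + 0.02 * np.sin(np.linspace(0.0, 3.0, wl.size))  # smooth
        L_surf = true_eps * planck(wl, true_T) + (1 - true_eps) * L_down

        best = None
        for T in np.arange(295.0, 305.0, 0.25):              # candidate temperatures
            eps = (L_surf - L_down) / (planck(wl, T) - L_down)
            roughness = np.sum(np.diff(eps, 2) ** 2)         # second-difference metric
            if best is None or roughness < best[0]:
                best = (roughness, T)
        print(best[1])  # 300.0: the smoothest emissivity picks the true temperature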

  16. Assessment of Biases in MODIS Surface Reflectance Due to Lambertian Approximation

    NASA Technical Reports Server (NTRS)

    Wang, Yujie; Lyapustin, Alexei I.; Privette, Jeffrey L.; Cook, Robert B.; SanthanaVannan, Suresh K.; Vermote, Eric F.; Schaaf, Crystal

    2010-01-01

    Using MODIS data and the AERONET-based Surface Reflectance Validation Network (ASRVN), this work studies errors of MODIS atmospheric correction caused by the Lambertian approximation. On one hand, this approximation greatly simplifies the radiative transfer model, reduces the size of the look-up tables, and makes the operational algorithm faster. On the other hand, uncompensated atmospheric scattering caused by the Lambertian model systematically biases the results. For example, for a typical bowl-shaped bidirectional reflectance distribution function (BRDF), the derived reflectance is underestimated at high solar or view zenith angles, where BRDF is high, and is overestimated at low zenith angles where BRDF is low. The magnitude of biases grows with the amount of scattering in the atmosphere, i.e., at shorter wavelengths and at higher aerosol concentration. The slope of regression of Lambertian surface reflectance vs. ASRVN bidirectional reflectance factor (BRF) is about 0.85 in the red and 0.6 in the green bands. This error propagates into the MODIS BRDF/albedo algorithm, slightly reducing the magnitude of overall reflectance and anisotropy of BRDF. This results in a small negative bias of spectral surface albedo. An assessment for the GSFC (Greenbelt, USA) validation site shows an albedo reduction by 0.004 in the near infrared, 0.005 in the red, and 0.008 in the green MODIS bands.

  17. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere.

    PubMed

    Vidovic, Luka; Majaron, Boris

    2014-02-01

    Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on advance characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.

  18. Color management with a hammer: the B-spline fitter

    NASA Astrophysics Data System (ADS)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
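
    A one-dimensional analogue of the fit-then-evaluate workflow (the paper's fitter is three-dimensional, so this is only a sketch of the principle) using SciPy's smoothing B-spline:

        import numpy as np
        from scipy.interpolate import splev, splrep

        x = np.linspace(0.0, 1.0, 40)    # device input levels
        rng = np.random.default_rng(3)
        y = x ** 2.2 + 0.01 * rng.standard_normal(40)   # noisy measured response

        # A smoothing factor s of roughly m * sigma^2 filters measurement noise.
        tck = splrep(x, y, s=40 * 0.01 ** 2)
        print(float(splev(0.5, tck)))    # smooth model value near 0.5 ** 2.2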

  19. Anterior chamber blood cell differentiation using spectroscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Qian, Ruobing; McNabb, Ryan P.; Kuo, Anthony N.; Izatt, Joseph A.

    2018-02-01

    There is great clinical importance in identifying cellular responses in the anterior chamber (AC), which can indicate signs of hyphema (an accumulation of red blood cells (RBCs)) or aberrant intraocular inflammation (an accumulation of white blood cells (WBCs)). These responses are difficult to diagnose and require specialized equipment such as ophthalmic microscopes and specialists trained in examining the eye. In this work, we applied spectroscopic OCT to differentiate between RBCs and subtypes of WBCs, including neutrophils, lymphocytes and monocytes, both in vitro and in the ACs of porcine eyes. We located and tracked single cells in OCT volumetric images, and extracted the spectroscopic data of each cell from the detected interferograms using the short-time Fourier transform (STFT). A look-up table of Mie spectra was generated and used to correlate the spectroscopic data of single cells to their characteristic sizes. The accuracy of the method was first validated on 10-μm polystyrene microspheres. For RBCs and subtypes of WBCs, the extracted size distributions based on the best Mie spectra fit were significantly different between each cell type using the Wilcoxon rank-sum test. A similar size distribution of neutrophils was also acquired in the measurements of cells introduced into the ACs of porcine eyes, further supporting spectroscopic OCT for potentially differentiating and quantifying blood cell types in the AC in vivo.

  20. RELAP-7 Progress Report: A Mathematical Model for 1-D Compressible, Single-Phase Flow Through a Branching Junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, R. A.

    In the literature, the abundance of pipe network junction models, as well as the inclusion of dissipative losses between connected pipes via loss coefficients, has been treated using the incompressible flow assumption of constant density. This approach is fundamentally, physically wrong for compressible flow with density change. This report introduces a mathematical modeling approach for general junctions in piping network systems for which the transient flows are compressible and single-phase. The junction could be as simple as a 1-pipe input and 1-pipe output with differing pipe cross-sectional areas, for which a dissipative loss is necessary, or it could include an active component, between an inlet pipe and an outlet pipe, such as a pump or turbine. In this report, discussion will be limited to the former. A more general branching junction connecting an arbitrary number of pipes with transient, 1-D compressible single-phase flows is also presented. These models are developed in a manner consistent with the use of a general equation of state such as, for example, the recent Spline-Based Table Look-up method [1] for incorporating the IAPWS-95 formulation [2], to give accurate and efficient calculations of properties for water and steam with RELAP-7 [3].

  1. Two-sample discrimination of Poisson means

    NASA Technical Reports Server (NTRS)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
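
    A minimal sketch of the test, using SciPy's binomial distribution directly rather than the paper's lookup tables: condition on N = A + B and ask how extreme A is under Binomial(N, f).

        from scipy.stats import binom

        def two_sample_poisson_pvalue(a, b, exposure_a, exposure_b):
            n = a + b
            f = exposure_a / (exposure_a + exposure_b)   # null partition fraction
            # Two-sided p-value: double the smaller tail, capped at 1.
            tail = min(binom.cdf(a, n, f), binom.sf(a - 1, n, f))
            return min(1.0, 2.0 * tail)

        print(two_sample_poisson_pvalue(18, 6, 1.0, 1.0))  # ~0.023: means differ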

  2. A Think-Aloud Protocols Investigation of Dictionary Processing Strategies among Saudi EFL Students

    ERIC Educational Resources Information Center

    Alhaysony, Maha

    2012-01-01

    This paper aims to examine qualitatively how Saudi EFL female students look-up word meanings in their dictionaries while reading. We aimed to identify and describe the look-up strategies used by these students. The subjects of the study were ten third-year English major students. A think-aloud protocol was used in order to gain insights into the…

  3. Satellite estimation of surface spectral ultraviolet irradiance using OMI data in East Asia

    NASA Astrophysics Data System (ADS)

    Lee, H.; Kim, J.; Jeong, U.

    2017-12-01

    Due to its strong influence on human health and the ecosystem, continuous monitoring of surface ultraviolet (UV) irradiance is important. The amount of UVA (320-400 nm) and UVB (290-320 nm) radiation at the Earth's surface depends on the extent of Rayleigh scattering by atmospheric gas molecules, radiative absorption by ozone, radiative scattering by clouds, and both absorption and scattering by airborne aerosols. Advanced consideration of these factors is thus essential in establishing the process of UV irradiance estimation. The UV index (UVI) is a simple parameter describing the strength of surface UV irradiance, and it has been widely utilized for UV monitoring. In this study, we estimate surface UV irradiance over East Asia using realistic input based on OMI total ozone and reflectivity, and then validate these estimates against UV irradiance from World Ozone and Ultraviolet Radiation Data Centre (WOUDC) data. We also aim to develop our own retrieval algorithm for better estimation of surface irradiance. We use the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model version 2.6 for the UV irradiance calculation. The inputs to the VLIDORT radiative transfer calculations are the total ozone column (TOMS V7 climatology), the surface albedo (Herman and Celarier, 1997), and the cloud optical depth. Based on these, the UV irradiance is calculated using a look-up table (LUT) approach. To correct for absorbing aerosols, the UV irradiance algorithm adds climatological aerosol information (Arola et al., 2009). In a further study, we will analyze the comprehensive uncertainty based on the LUT and all input parameters.

  4. Applicability of aquifer impact models to support decisions at CO 2 sequestration sites

    DOE PAGES

    Keating, Elizabeth; Bacon, Diana; Carroll, Susan; ...

    2016-07-25

    The National Risk Assessment Partnership has developed a suite of tools to assess and manage risk at CO2 sequestration sites. This capability includes polynomial- or look-up-table-based reduced-order models (ROMs) that predict the impact of CO2 and brine leaks on overlying aquifers. The development of these computationally efficient models and the underlying reactive transport simulations they emulate has been documented elsewhere (Carroll et al., 2014a; Carroll et al., 2014b; Dai et al., 2014; Keating et al., 2016). In this paper, we seek to demonstrate the applicability of ROM-based analysis by considering what types of decisions and aquifer types would benefit from the ROM analysis. We present four hypothetical examples where applying ROMs, in ensemble mode, could support decisions during a geologic CO2 sequestration project. These decisions pertain to site selection, site characterization, monitoring network evaluation, and health impacts. In all cases, we consider potential brine/CO2 leak rates at the base of the aquifer to be uncertain. We show that derived probabilities provide information relevant to the decision at hand. Although the ROMs were developed using site-specific data from two aquifers (High Plains and Edwards), the models accept aquifer characteristics as variable inputs and so may have broader applicability. We conclude that pH and TDS predictions are the most transferable to other aquifers, based on analysis of the nine water quality metrics (pH, TDS, 4 trace metals, 3 organic compounds). Guidelines are presented for determining the aquifer types for which the ROMs should be applicable.

  5. Hydraulic Modeling and Evolutionary Optimization for Enhanced Real-Time Decision Support of Combined Sewer Overflows

    NASA Astrophysics Data System (ADS)

    Zimmer, A. L.; Minsker, B. S.; Schmidt, A. R.; Ostfeld, A.

    2011-12-01

    Real-time mitigation of combined sewer overflows (CSOs) requires evaluation of multiple operational strategies during rapidly changing rainfall events. Simulation models for hydraulically complex systems can effectively provide decision support for short time intervals when coupled with efficient optimization. This work seeks to reduce CSOs for a test case roughly based on the North Branch of the Chicago Tunnel and Reservoir Plan (TARP), which is operated by the Metropolitan Water Reclamation District of Greater Chicago (MWRDGC). The North Branch tunnel flows to a junction with the main TARP system. The Chicago combined sewer system alleviates potential CSOs by directing high interceptor flows through sluice gates and dropshafts to a deep tunnel. Decision variables to control CSOs consist of sluice gate positions that control water flow to the tunnel as well as a treatment plant pumping rate that lowers interceptor water levels. A physics-based numerical model is used to simulate the hydraulic effects of changes in the decision variables. The numerical model is step-wise steady and conserves water mass and momentum at each time step by iterating through a series of look-up tables. The look-up tables are constructed offline to avoid extensive real-time calculations, and describe conduit storage and water elevations as a function of flow. A genetic algorithm (GA) is used to minimize CSOs at each time interval within a moving horizon framework. Decision variables are coded at 15-minute increments and GA solutions are two hours in duration. At each 15-minute interval, the algorithm identifies a good solution for a two-hour rainfall forecast. Three GA modifications help reduce optimization time. The first adjustment reduces the search alphabet by eliminating sluice gate positions that do not influence overflow volume. The second GA retains knowledge of the best decision at the previous interval by shifting the genes in the best previous sequence to initialize search at the new interval. The third approach is a micro-GA with a small population size and high diversity. Current tunnel operations attempt to avoid dropshaft geysers by simultaneously closing all sluice gates when the downstream end of the deep tunnel pressurizes. In an effort to further reduce CSOs, this research introduces a constraint that specifies a maximum allowable tunnel flow to prevent pressurization. The downstream junction depth is bounded by two flow conditions: a low tunnel water level represents inflow from the main system only, while a higher level includes main system flow as well as all possible North Branch inflow. If the lower of the two tunnel levels is pressurized, no North Branch flow is allowed to enter the junction. If only the higher level pressurizes, a linear rating is used to restrict the total North Branch flow below the volume that pressurizes the boundary. The numerical model is successfully calibrated to EPA SWMM and efficiently portrays system hydraulics in real-time. Results on the three GA approaches as well as impacts of various policies for the downstream constraint will be presented at the conference.

  6. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In backward compatible HDR image/video compression, it is a general approach to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order or 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix then reduces to looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot-point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared with the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
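
    The pre-calculated-sums idea can be sketched as follows: cumulative tables of x^k (k = 0..4) and x^k*y (k = 0..2) make the normal equations of a 2nd-order fit over any sub-range, hence every candidate pivot, available in O(1). The data and kink location are invented.

        import numpy as np

        x = np.linspace(0.0, 1.0, 256)                   # LDR code values
        y = np.where(x < 0.6, 2.0 * x ** 2, 1.2 * x)     # toy LDR-to-HDR target

        S = [np.concatenate(([0.0], np.cumsum(x ** k))) for k in range(5)]
        Sy = [np.concatenate(([0.0], np.cumsum(x ** k * y))) for k in range(3)]

        def seg_sse(lo, hi):
            # Normal equations for x[lo:hi] assembled from table differences.
            m = [[S[i + j][hi] - S[i + j][lo] for j in range(3)] for i in range(3)]
            v = [Sy[i][hi] - Sy[i][lo] for i in range(3)]
            c = np.linalg.solve(np.array(m), np.array(v))
            r = y[lo:hi] - np.polyval(c[::-1], x[lo:hi])  # residual, for checking only
            return float(r @ r)

        best = min(range(8, 249), key=lambda p: seg_sse(0, p) + seg_sse(p, 256))
        print(best, x[best])  # pivot lands at the kink near x = 0.6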

  7. A look-up-table digital predistortion technique for high-voltage power amplifiers in ultrasonic applications.

    PubMed

    Gao, Zheng; Gui, Ping

    2012-07-01

    In this paper, we present a digital predistortion technique to improve the linearity and power efficiency of a high-voltage class-AB power amplifier (PA) for ultrasound transmitters. The system is composed of a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and a field-programmable gate array (FPGA) in which the digital predistortion (DPD) algorithm is implemented. The DPD algorithm updates the error, which is the difference between the ideal signal and the attenuated distorted output signal, in the look-up table (LUT) memory during each cycle of a sinusoidal signal using the least-mean-square (LMS) algorithm. On the next signal cycle, the error data are used to equalize the signal with negative harmonic components to cancel the amplifier's nonlinear response. The algorithm also includes a linear interpolation method applied to the windowed sinusoidal signals for the B-mode and Doppler modes. The measurement test bench uses an arbitrary function generator as the DAC to generate the input signal, an oscilloscope as the ADC to capture the output waveform, and software to implement the DPD algorithm. The measurement results show that the proposed system is able to reduce the second-order harmonic distortion (HD2) by 20 dB and the third-order harmonic distortion (HD3) by 14.5 dB, while at the same time improving the power efficiency by 18%.
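
    A simplified sketch of the LUT-plus-LMS loop, with a mild cubic nonlinearity standing in for the high-voltage PA and an invented step size; the real system updates from attenuated ADC captures rather than a model.

        import numpy as np

        def pa(v):                     # stand-in PA: mild cubic compression
            return v - 0.2 * v ** 3

        lut = np.zeros(64)             # one correction entry per input level
        mu = 0.5                       # LMS step size
        x = 0.8 * np.sin(2 * np.pi * np.arange(2048) / 256)   # drive signal

        for _ in range(30):            # repeated signal cycles update the LUT
            for n in range(x.size):
                i = min(int((x[n] + 1.0) / 2.0 * 64), 63)
                err = x[n] - pa(x[n] + lut[i])   # ideal minus measured output
                lut[i] += mu * err               # LMS correction update

        idx = np.minimum(((x + 1.0) / 2.0 * 64).astype(int), 63)
        print(np.max(np.abs(x - pa(x + lut[idx]))))  # residual shrinks after training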

  8. Interface Supports Lightweight Subsystem Routing for Flight Applications

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Block, Gary L.; Ahmad, Mohammad; Whitaker, William D.; Dillon, James W.

    2010-01-01

    A wireless avionics interface exploits the constrained nature of data networks in flight systems to use a lightweight routing method. This simplified routing means that a processor is not required, and the logic can be implemented as an intellectual property (IP) core in a field-programmable gate array (FPGA). The FPGA can be shared with the flight subsystem application. In addition, the router is aware of redundant subsystems, and can be configured to provide hot-standby support as part of the interface. This simplifies implementation of flight applications requiring hot-standby support. When a valid inbound packet is received from the network, the destination node address is inspected to determine whether the packet is to be processed by this node. Each node has routing tables identifying the next neighbor node, which guide the packet to its destination node. If the packet is to be processed, the final packet destination is inspected to determine whether the packet is to be forwarded to another node or routed locally. If the packet is local, it is sent to an Applications Data Interface (ADI), which is attached to a local flight application. Under this scheme, an interface can support many applications in a subsystem, enabling a high level of subsystem integration. If the packet is to be forwarded to another node, it is sent to the outbound packet router. The outbound packet router receives packets from an ADI or packets to be forwarded. It then uses a lookup table to determine the next destination for the packet. Upon detecting a remote subsystem failure, the routing table can be updated to autonomously bypass the failed subsystem.
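
    A toy model of the table-driven forwarding decision described above (node names and topology invented):

        next_hop = {"B": "B", "C": "B", "D": "B"}   # routing table held by node "A"
        LOCAL_NODE = "A"

        def route(packet):
            dest = packet["dest"]
            if dest == LOCAL_NODE:                   # local: hand to an ADI
                return ("deliver_to_ADI", packet)
            return ("forward", next_hop[dest], packet)  # else: table lookup

        print(route({"dest": "C", "payload": b"telemetry"}))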

  9. Terrestrial effects of high energy cosmic rays

    NASA Astrophysics Data System (ADS)

    Atri, Dimitra

    On geological timescales, the Earth is likely to be exposed to a higher-than-usual flux of high energy cosmic rays (HECRs) from astrophysical sources such as nearby supernovae, gamma-ray bursts, or galactic shocks. These high-energy particles strike the Earth's atmosphere, initiating an extensive air shower. As the air shower propagates deeper, it ionizes the atmosphere by producing charged secondary particles and photons. Increased ionization leads to changes in atmospheric chemistry, resulting in ozone depletion. This increases the flux of solar UVB radiation at the surface, which is potentially harmful to living organisms. Increased ionization affects the global electrical circuit, which could enhance the low-altitude cloud formation rate. Secondary particles such as muons and thermal neutrons, produced by hadronic interactions of the primary cosmic rays with the atmosphere, are able to reach the ground, enhancing the biological radiation dose. The muon flux dominates the radiation dose from cosmic rays, causing damage to DNA and an increase in mutation rates and cancer, which can have serious biological implications for surface and sub-surface life. Using CORSIKA, we perform massive computer simulations and construct lookup tables for 10 GeV - 1 PeV primaries, which can be used to quantify these effects for enhanced cosmic ray exposure from any astrophysical source. These tables are freely available to the community and can be used for other studies. We use these tables to study the terrestrial implications of the galactic shock generated by the infall of our galaxy toward the Virgo cluster. The increased radiation dose from muons could be a possible mechanism explaining the observed periodicity in biodiversity in paleobiology databases.

  10. Terrestrial Effects of High Energy Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Atri, Dimitra

    2011-01-01

    On geological timescales, the Earth is likely to be exposed to an increased flux of high energy cosmic rays (HECRs) from astrophysical sources such as nearby supernovae, gamma-ray bursts, or galactic shocks. These high-energy particles strike the Earth's atmosphere, initiating an extensive air shower. As the air shower propagates deeper, it ionizes the atmosphere by producing charged secondary particles. Increased ionization could lead to changes in atmospheric chemistry, resulting in ozone depletion. This could increase the flux of solar UVB radiation at the surface, which is potentially harmful to living organisms. Increased ionization affects the global electrical circuit and could possibly enhance the low-altitude cloud formation rate. Secondary particles such as muons and thermal neutrons, produced as a result of nuclear interactions, are able to reach the ground, enhancing the biological radiation dose. The muon flux dominates the radiation dose from cosmic rays, causing DNA damage and an increase in mutation rates, which can have serious biological implications for terrestrial and sub-terrestrial life. This radiation dose is an important constraint on the habitability of a planet. Using CORSIKA, we perform massive computer simulations and construct lookup tables for 10 GeV - 1 PeV primaries (1 PeV - 0.1 ZeV in progress), which can be used to quantify these effects. These tables are freely available to the community and can be used for other studies, not necessarily relevant to astrobiology. We use these tables to study the terrestrial implications of the galactic shock generated by the infall of our galaxy toward the Virgo cluster. This could be a possible mechanism explaining the observed periodicity in biodiversity in paleobiology databases.

  11. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
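
    The stepwise expansion can be read as a greedy search over grid cells; a sketch (Python, with an illustrative data layout and sample threshold) follows:

    ```python
    import numpy as np

    def expand_region(cells, start, n_min):
        """Greedy variable averaging: grow the region around `start`, at each step
        adding the adjacent cell that keeps the pooled variance of sigma0 smallest,
        until at least n_min samples are collected. `cells` maps (i, j) grid
        indices to 1-D arrays of rain-free sigma0 samples."""
        region, data = {start}, list(cells[start])
        while len(data) < n_min:
            candidates = {nb for (i, j) in region
                          for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
                          if nb in cells and nb not in region}
            if not candidates:
                break                                   # no neighbours left to add
            nb = min(candidates, key=lambda c: np.var(data + list(cells[c])))
            region.add(nb)
            data += list(cells[nb])
        return region, float(np.mean(data)), float(np.std(data))
    ```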

  12. Performance of Point and Range Queries for In-memory Databases using Radix Trees on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Maksudul; Yoginath, Srikanth B; Perumalla, Kalyan S

    In in-memory database systems augmented by hardware accelerators, accelerating the index searching operations can greatly increase the runtime performance of database queries. Recently, adaptive radix trees (ART) have been shown to provide very fast index search implementation on the CPU. Here, we focus on an accelerator-based implementation of ART. We present a detailed performance study of our GPU-based adaptive radix tree (GRT) implementation over a variety of key distributions, synthetic benchmarks, and actual keys from music and book data sets. The performance is also compared with other index-searching schemes on the GPU. GRT on modern GPUs achieves some of the highest rates of index searches reported in the literature. For point queries, a throughput of up to 106 million and 130 million lookups per second is achieved for sparse and dense keys, respectively. For range queries, GRT yields 600 million and 1000 million lookups per second for sparse and dense keys, respectively, on a large dataset of 64 million 32-bit keys.
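
    For orientation, a fixed-stride radix tree over 32-bit keys fits in a few lines (Python sketch; real ART/GRT nodes adapt their fan-out and memory layout to the key distribution, which this toy version does not):

    ```python
    class RadixNode:
        __slots__ = ("children", "value")
        def __init__(self):
            self.children = {}
            self.value = None

    def insert(root, key, value, bits=32, stride=4):
        """Insert a 32-bit key by consuming `stride` bits per tree level."""
        node = root
        for shift in range(bits - stride, -1, -stride):
            nibble = (key >> shift) & ((1 << stride) - 1)
            node = node.children.setdefault(nibble, RadixNode())
        node.value = value

    def lookup(root, key, bits=32, stride=4):
        """Point query: follow one child pointer per level, or fail early."""
        node = root
        for shift in range(bits - stride, -1, -stride):
            node = node.children.get((key >> shift) & ((1 << stride) - 1))
            if node is None:
                return None
        return node.value
    ```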

  13. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    PubMed

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. In order to verify the suggested platform, scalability with increasing numbers of concurrent lookups was evaluated in a real test bed. The tracking latency and traffic load ratio of the proposed tracking architecture were also evaluated.

  14. Software Engineering Institute: Year in Review 2008

    DTIC Science & Technology

    2008-01-01

    security information they need. Now, new podcasts are uploaded every two weeks to the CERT website and iTunes. The series has become increasingly ... reused throughout an organization: customer lookup, account lookup, and credit card validation are some examples. ... were charged in August 2008 with the theft of more than 40 million credit and debit card numbers from T.J. Maxx, Marshall's, Barnes & Noble

  15. The conformal characters

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Troost, Jan

    2018-04-01

    We revisit the study of the multiplets of the conformal algebra in any dimension. The theory of highest weight representations is reviewed in the context of the Bernstein-Gelfand-Gelfand category of modules. The Kazhdan-Lusztig polynomials code the relation between the Verma modules and the irreducible modules in the category and are the key to the characters of the conformal multiplets (whether finite dimensional, infinite dimensional, unitary or non-unitary). We discuss the representation theory and review in full generality which representations are unitarizable. The mathematical theory that allows for both the general treatment of characters and the full analysis of unitarity is made accessible. A good understanding of the mathematics of conformal multiplets renders the treatment of all highest weight representations in any dimension uniform, and provides an overarching comprehension of case-by-case results. Unitary highest weight representations and their characters are classified and computed in terms of data associated to cosets of the Weyl group of the conformal algebra. An executive summary is provided, as well as look-up tables up to and including rank four.

  16. Experimental method of in-vivo dosimetry without build-up device on the skin for external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Jeon, Hosang; Nam, Jiho; Lee, Jayoung; Park, Dahl; Baek, Cheol-Ha; Kim, Wontaek; Ki, Yongkan; Kim, Dongwon

    2015-06-01

    Accurate dose delivery is crucial to the success of modern radiotherapy. To evaluate the dose actually delivered to patients, in-vivo dosimetry (IVD) is generally performed during radiotherapy to measure the entrance doses. In IVD, a build-up device should be placed on top of an in-vivo dosimeter to satisfy the electron equilibrium condition. However, a build-up device made of tissue-equivalent material or metal may perturb dose delivery to a patient, and it requires an additional laborious and time-consuming process. We developed a novel IVD method that uses a look-up table of conversion ratios instead of a build-up device. We validated this method through a Monte Carlo simulation and 31 clinical trials. The mean error of clinical IVD is 3.17% (standard deviation: 2.58%), which is comparable to that of conventional IVD methods. Moreover, the required time was greatly reduced, so the efficiency of IVD could be improved for both patients and therapists.

  17. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

    This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE{copyright}, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  18. RANS Simulation (Virtual Blade Model [VBM]) of Single Full Scale DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Aliseda, Alberto

    2013-04-10

    Attached are the .cas and .dat files, along with the required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients, for the Reynolds-Averaged Navier-Stokes (RANS) simulation of a single full-scale DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. In this case study, the flow field around and in the wake of the full-scale DOE RM1 turbine is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that this simulation does not model the actual geometry of the rotor blade; the effects of the rotating turbine blades are modeled using Blade Element Theory. This simulation provides an accurate estimate of the device's performance and of the structure of its turbulent far wake. Due to the simplifications implemented for modeling the rotating blades, VBM is limited in capturing the details of the flow field in the near-wake region of the device.
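
    Conceptually, the VBM consumes the look-up table by interpolating sectional lift and drag coefficients at each blade element's local angle of attack; a toy version (Python, with made-up numbers rather than the DOE RM1 table) looks like:

    ```python
    import numpy as np

    # Illustrative rows of (angle of attack [deg], Cl, Cd); not the RM1 data.
    table = np.array([
        [-4.0, -0.2, 0.010],
        [ 0.0,  0.4, 0.008],
        [ 4.0,  0.9, 0.012],
        [ 8.0,  1.2, 0.020],
    ])

    def coefficients(alpha_deg):
        """Linearly interpolate Cl and Cd at the local angle of attack."""
        cl = np.interp(alpha_deg, table[:, 0], table[:, 1])
        cd = np.interp(alpha_deg, table[:, 0], table[:, 2])
        return cl, cd
    ```

    The interpolated coefficients are then converted into momentum sources distributed over the rotor disk, which is why the blade geometry itself never needs to be meshed.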

  19. Evaluation of a uranium zirconium hydride fuel rod option for conversion of the MIT research reactor (MITR) from highly-enriched uranium to low-enriched uranium

    DOE PAGES

    Dunn, F. E.; Wilson, E. H.; Feldman, E. E.; ...

    2017-03-23

    The conversion of the Massachusetts Institute of Technology Reactor (MITR) from the use of highly-enriched uranium (HEU) fuel-plate assemblies to low-enriched uranium (LEU) by replacing the HEU fuel plates with specially designed General Atomics (GA) uranium zirconium hydride (UZrH) LEU fuel rods is evaluated in this paper. The margin to critical heat flux (CHF) in the core, which is cooled by light water at low pressure, is evaluated analytically for steady-state operation. A form of the Groeneveld CHF lookup table method is used and described in detail. A CHF ratio of 1.41 was found in the present analysis at 10 MW with engineering hot channel factors included. Therefore, the nominal reactor core power, and neutron flux performance, would need to be reduced by at least 25% in order to meet the regulatory requirement of a minimum CHF ratio of 2.0.

  20. Evaluation of a uranium zirconium hydride fuel rod option for conversion of the MIT research reactor (MITR) from highly-enriched uranium to low-enriched uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, F. E.; Wilson, E. H.; Feldman, E. E.

    The conversion of the Massachusetts Institute of Technology Reactor (MITR) from the use of highly-enriched uranium (HEU) fuel-plate assemblies to low-enriched uranium (LEU) by replacing the HEU fuel plates with specially designed General Atomics (GA) uranium zirconium hydride (UZrH) LEU fuel rods is evaluated in this paper. The margin to critical heat flux (CHF) in the core, which is cooled by light water at low pressure, is evaluated analytically for steady-state operation. A form of the Groeneveld CHF lookup table method is used and described in detail. A CHF ratio of 1.41 was found in the present analysis at 10 MW with engineering hot channel factors included. Therefore, the nominal reactor core power, and neutron flux performance, would need to be reduced by at least 25% in order to meet the regulatory requirement of a minimum CHF ratio of 2.0.

  1. Image Format Conversion to DICOM and Lookup Table Conversion to Presentation Value of the Japanese Society of Radiological Technology (JSRT) Standard Digital Image Database.

    PubMed

    Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki

    2016-01-01

    The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in much state-of-the-art research. However, the pixel values of all the images are simply digitized as relative density values by a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display system input value of digital imaging and communications in medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images of the JSRT standard digital image database to DICOM format, followed by conversion of the pixel values to P-values using an original program developed by ourselves. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of images is maintained among displays of different luminance.

  2. VizieR Online Data Catalog: Wisconsin soft X-ray diffuse background all-sky Survey (McCammon+ 1983)

    NASA Astrophysics Data System (ADS)

    McCammon, D.; Burrows, D. N.; Sanders, W. T.; Kraushaar, W. L.

    1997-10-01

    The catalog contains the all-sky survey of the soft X-ray diffuse background and the count-rate data from which the maps were made for the ten flights included in the survey. It contains 40 files in the machine-readable version and includes documentation and utility subroutines. The data files contain the different band maps (B, C, M, M1, M2, I, J, 2-6 keV) in a 0 degree-centered Aitoff projection, a 180 degree-centered Aitoff projection, a north polar projection, and a south polar projection. Lookup tables in the form of FITS images are provided for conversion between pixel coordinates and Galactic coordinates for the various projections. The bands are: B = 130-188 eV, C = 160-284 eV, M1 = 440-930 eV, M2 = 600-1100 eV, I = 770-1500 eV, J = 1100-2200 eV, and 2-6 keV = 1800-6300 eV (51 data files).

  3. Clutch pressure estimation for a power-split hybrid transmission using nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Zhou, Bin; Zhang, Jianwu; Gao, Ji; Yu, Haisheng; Liu, Dong

    2018-06-01

    For a power-split hybrid transmission, using the brake clutch to realize the transition from electric drive mode to hybrid drive mode is an available strategy. Since the pressure information of the brake clutch is essential for mode transition control, this research designs a nonlinear robust reduced-order observer to estimate the brake clutch pressure. Model uncertainties and disturbances are considered as additional inputs, and the observer is designed so that the error dynamics are input-to-state stable. The nonlinear characteristics of the system are expressed as lookup tables in the observer. Moreover, the gain matrix of the observer is solved by two optimization procedures under linear matrix inequality constraints. The proposed observer is validated by offline simulation and online testing; the results show that the observer performs well during the mode transition: the estimation error remains within a reasonable range and, more importantly, the error dynamics are asymptotically stable.

  4. One-Dimensional Burn Dynamics of Plasma-Jet Magneto-Inertial Fusion

    NASA Astrophysics Data System (ADS)

    Santarius, John

    2009-11-01

    This poster will discuss several issues related to using plasma jets to implode a Magneto-Inertial Fusion (MIF) liner onto a magnetized plasmoid and compress it to fusion-relevant temperatures [1]. The problem of pure plasma jet convergence and compression without a target present will be investigated. Cases with a target present will explore how well the liner's inertia provides transient plasma stability and confinement. The investigation uses UW's 1-D Lagrangian radiation-hydrodynamics code, BUCKY, which solves single-fluid equations of motion with ion-electron interactions, PdV work, table-lookup equations of state, fast-ion energy deposition, and pressure contributions from all species. Extensions to the code include magnetic field evolution as the plasmoid compresses, plus the dependence of the thermal conductivity and fusion-product energy deposition on the magnetic field. [1] Y.C.F. Thio, et al., "Magnetized Target Fusion in a Spheroidal Geometry with Standoff Drivers," in Current Trends in International Fusion Research, E. Panarella, ed. (National Research Council of Canada, Ottawa, Canada, 1999), p. 113.

  5. PCI: A PATRAN-NASTRAN model translator

    NASA Technical Reports Server (NTRS)

    Sheerer, T. J.

    1990-01-01

    The amount of programming required to develop a PATRAN-NASTRAN translator was surprisingly small. The approach taken produced a highly flexible translator comparable with the PATNAS translator and superior to the PATCOS translator. The coding required varied from around ten lines for a shell element to around thirty for a bar element, and the time required to add a feature to the program is typically less than an hour. The use of a lookup table for element names makes the translator also applicable to other versions of NASTRAN. The saving in time as a result of using PDA's Gateway utilities was considerable. During the writing of the program it became apparent that, with a somewhat more complex structure, it would be possible to extend the element data file to contain all data required to define the translation from PATRAN to NASTRAN by mapping of data between formats. Similar data files on property, material and grid formats would produce a completely universal translator from PATRAN to any FEA program, or indeed any CAE system.

  6. Determining Greenland Ice Sheet Accumulation Rates from Radar Remote Sensing

    NASA Technical Reports Server (NTRS)

    Jezek, Kenneth C.

    2002-01-01

    An important component of NASA's Program for Arctic Regional Climate Assessment (PARCA) is a mass balance investigation of the Greenland Ice Sheet. The mass balance is calculated by taking the difference between the areally integrated snow accumulation and the net ice discharge of the ice sheet. Uncertainties in this calculation include the snow accumulation rate, which has traditionally been determined by interpolating data from ice core samples taken at isolated spots across the ice sheet. The sparse data associated with ice cores, juxtaposed against the high spatial and temporal resolution provided by remote sensing, have motivated scientists to investigate relationships between accumulation rate and microwave observations as an option for obtaining spatially contiguous estimates. The objective of this PARCA continuation proposal was to complete an estimate of the surface accumulation rate on the Greenland Ice Sheet derived from C-band radar backscatter data compiled in the ERS-1 SAR mosaic of data acquired during September-November 1992. An empirical equation, based on elevation and latitude, is used to determine the mean annual temperature. We examine the influence of accumulation rate and mean annual temperature on C-band radar backscatter using a forward model, which incorporates snow metamorphosis and radar backscatter components. Our model is run over a range of accumulation and temperature conditions. Based on the model results, we generate a look-up table, which uniquely maps the measured radar backscatter and mean annual temperature to accumulation rate. Our results compare favorably with in situ accumulation rate measurements falling within our study area.
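
    The final inversion step can be pictured as a nearest-match search of the forward-model output (Python; the table layout and the temperature-bin tolerance are our assumptions for illustration):

    ```python
    def invert_accumulation(sigma0_db, t_mean_c, lut, t_tol=0.5):
        """`lut` rows are (accumulation_rate, mean_annual_temp_C, model_sigma0_dB)
        triples from the forward snow-metamorphosis/backscatter model; return the
        accumulation whose modeled backscatter, at the matching temperature,
        is closest to the measured value."""
        rows = [r for r in lut if abs(r[1] - t_mean_c) <= t_tol] or lut
        return min(rows, key=lambda r: abs(r[2] - sigma0_db))[0]
    ```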

  7. Systematic design methodology for robust genetic transistors based on I/O specifications via promoter-RBS libraries.

    PubMed

    Lee, Yi-Ying; Hsu, Chih-Yuan; Lin, Ling-Jiun; Chang, Chih-Chun; Cheng, Hsiao-Chun; Yeh, Tsung-Hsien; Hu, Rei-Hsing; Lin, Che; Xie, Zhen; Chen, Bor-Sen

    2013-10-27

    Synthetic genetic transistors are vital for signal amplification and switching in genetic circuits. However, it is still problematic to efficiently select the adequate promoters, Ribosome Binding Sites (RBSs) and inducer concentrations to construct a genetic transistor with the desired linear amplification or switching in the Input/Output (I/O) characteristics for practical applications. Three kinds of promoter-RBS libraries, i.e., a constitutive promoter-RBS library, a repressor-regulated promoter-RBS library and an activator-regulated promoter-RBS library, are constructed for systematic genetic circuit design using the identified kinetic strengths of their promoter-RBS components. According to the dynamic model of genetic transistors, a design methodology for genetic transistors via a Genetic Algorithm (GA)-based searching algorithm is developed to search for a set of promoter-RBS components and adequate concentrations of inducers to achieve the prescribed I/O characteristics of a genetic transistor. Furthermore, according to design specifications for different types of genetic transistors, a look-up table is built for genetic transistor design, from which we can easily select an adequate set of promoter-RBS components and adequate concentrations of external inducers for a specific genetic transistor. This systematic design method will reduce the time spent using trial-and-error methods in the experimental procedure for a genetic transistor with a desired I/O characteristic. We demonstrate the applicability of our design methodology to genetic transistors that have desirable linear amplification or switching by employing promoter-RBS library searching.

  8. High temporal resolution aerosol retrieval using Geostationary Ocean Color Imager: application and initial validation

    NASA Astrophysics Data System (ADS)

    Zhang, Yuhuan; Li, Zhengqiang; Zhang, Ying; Hou, Weizhen; Xu, Hua; Chen, Cheng; Ma, Yan

    2014-01-01

    The Geostationary Ocean Color Imager (GOCI) provides multispectral imagery of the East Asia region hourly from 9:00 to 16:00 local time (GMT+9) and collects multispectral imagery at eight spectral channels (412, 443, 490, 555, 660, 680, 745, and 865 nm) with a spatial resolution of 500 m. Thus, this technology brings significant advantages to high temporal resolution environmental monitoring. We present the retrieval of aerosol optical depth (AOD) in northern China based on GOCI data. Cross-calibration was performed against Moderate Resolution Imaging Spectrometer (MODIS) data in order to correct the land calibration bias of the GOCI sensor. AOD retrievals were then accomplished using a look-up table (LUT) strategy with assumptions of a quickly varying aerosol and a slowly varying surface with time. The AOD retrieval algorithm calculates AOD by minimizing the surface reflectance variations of a series of observations in a short period of time, such as several days. The monitoring of hourly AOD variations was implemented, and the retrieved AOD agreed well with AErosol RObotic NETwork (AERONET) ground-based measurements with a good R2 of approximately 0.74 at validation sites at the cities of Beijing and Xianghe, although intercept bias may be high in specific cases. The comparisons with MODIS products also show a good agreement in AOD spatial distribution. This work suggests that GOCI imagery can provide high temporal resolution monitoring of atmospheric aerosols over land, which is of great interest in climate change studies and environmental monitoring.

  9. Systematic design methodology for robust genetic transistors based on I/O specifications via promoter-RBS libraries

    PubMed Central

    2013-01-01

    Background Synthetic genetic transistors are vital for signal amplification and switching in genetic circuits. However, it is still problematic to efficiently select the adequate promoters, Ribosome Binding Sites (RBSs) and inducer concentrations to construct a genetic transistor with the desired linear amplification or switching in the Input/Output (I/O) characteristics for practical applications. Results Three kinds of promoter-RBS libraries, i.e., a constitutive promoter-RBS library, a repressor-regulated promoter-RBS library and an activator-regulated promoter-RBS library, are constructed for systematic genetic circuit design using the identified kinetic strengths of their promoter-RBS components. According to the dynamic model of genetic transistors, a design methodology for genetic transistors via a Genetic Algorithm (GA)-based searching algorithm is developed to search for a set of promoter-RBS components and adequate concentrations of inducers to achieve the prescribed I/O characteristics of a genetic transistor. Furthermore, according to design specifications for different types of genetic transistors, a look-up table is built for genetic transistor design, from which we could easily select an adequate set of promoter-RBS components and adequate concentrations of external inducers for a specific genetic transistor. Conclusion This systematic design method will reduce the time spent using trial-and-error methods in the experimental procedure for a genetic transistor with a desired I/O characteristic. We demonstrate the applicability of our design methodology to genetic transistors that have desirable linear amplification or switching by employing promoter-RBS library searching. PMID:24160305

  10. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator

    PubMed Central

    Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702

  11. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator.

    PubMed

    Wang, Runchun M; Thakur, Chetan S; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.

  12. Selective Activation of Resting-State Networks following Focal Stimulation in a Connectome-Based Network Model of the Human Brain

    PubMed Central

    2016-01-01

    Abstract When the brain is stimulated, for example, by sensory inputs or goal-oriented tasks, the brain initially responds with activities in specific areas. The subsequent pattern formation of functional networks is constrained by the structural connectivity (SC) of the brain. The extent to which information is processed over short- or long-range SC is unclear. Whole-brain models based on long-range axonal connections, for example, can partly describe measured functional connectivity dynamics at rest. Here, we study the effect of SC on the network response to stimulation. We use a human whole-brain network model comprising long- and short-range connections. We systematically activate each cortical or thalamic area, and investigate the network response as a function of its short- and long-range SC. We show that when the brain is operating at the edge of criticality, stimulation causes a cascade of network recruitments, collapsing onto a smaller space that is partly constrained by SC. We found both short- and long-range SC essential to reproduce experimental results. In particular, the stimulation of specific areas results in the activation of one or more resting-state networks. We suggest that the stimulus-induced brain activity, which may indicate information and cognitive processing, follows specific routes imposed by structural networks explaining the emergence of functional networks. We provide a lookup table linking stimulation targets and functional network activations, which potentially can be useful in diagnostics and treatments with brain stimulation. PMID:27752540

  13. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, which is often overlooked in existing real-time 3-D shape measurement systems. To this end, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. Besides, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
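
    The pre-determined pixel mapping amounts to a one-time table build plus a per-frame gather; a minimal sketch (Python/NumPy, assuming a single-coefficient radial distortion model and nearest-neighbour resampling, which need not match the authors' exact model):

    ```python
    import numpy as np

    def build_undistort_lut(h, w, k1, cx, cy, fx, fy):
        """Precompute, once, the source pixel for every corrected pixel under a
        one-parameter radial model (k1 and the intrinsics cx, cy, fx, fy are
        assumed camera calibration values)."""
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        xn, yn = (xs - cx) / fx, (ys - cy) / fy        # normalized coordinates
        r2 = xn ** 2 + yn ** 2
        xd, yd = xn * (1 + k1 * r2), yn * (1 + k1 * r2)
        map_x = np.clip((xd * fx + cx).round().astype(int), 0, w - 1)
        map_y = np.clip((yd * fy + cy).round().astype(int), 0, h - 1)
        return map_y, map_x

    def undistort(img, lut):
        map_y, map_x = lut
        return img[map_y, map_x]   # per-frame cost is a single indexed gather
    ```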

  14. Mixing weight determination for retrieving optical properties of polluted dust with MODIS and AERONET data

    NASA Astrophysics Data System (ADS)

    Chang, Kuo-En; Hsiao, Ta-Chih; Hsu, N. Christina; Lin, Neng-Huei; Wang, Sheng-Hsiang; Liu, Gin-Rong; Liu, Chian-Yi; Lin, Tang-Huang

    2016-08-01

    In this study, an approach to determining the effective mixing weight of soot aggregates in dust-soot aerosols is proposed to improve the accuracy of retrieving the properties of polluted dust by means of satellite remote sensing. Based on a pre-computed database containing several variables (such as wavelength, refractive index, soot mixing weight, surface reflectivity, observation geometries and aerosol optical depth (AOD)), fan-shaped look-up tables can be drawn accordingly to determine the mixing weight, AOD and single scattering albedo (SSA) of polluted dust simultaneously, with auxiliary regional dust properties and surface reflectivity. To validate the performance of the approach, six case studies of polluted dust (dust-soot aerosols) in Lower Egypt and Israel were examined against ground-based measurements from the AErosol RObotic NETwork (AERONET). The results show that the mean absolute differences could be reduced from 32.95% to 6.56% in AOD and from 2.67% to 0.83% in SSA retrievals for MODIS aerosol products when referenced to AERONET measurements, demonstrating the soundness of the proposed approach under different levels of dust loading, mixing weight and surface reflectivity. Furthermore, the developed algorithm is capable of providing the spatial distribution of the mixing weights, removing the need to assume that the dust plume properties are uniform. The case studies further show that the spatially variant dust-soot mixing weight would improve the retrieval accuracy of AODmixture and SSAmixture by about 10.0% and 1.4%, respectively.

  15. A chest-shape target automatic detection method based on Deformable Part Models

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction both domestically and overseas; they must rapidly search for a designated target against complex backgrounds, so fast detection of a given target is the foundation of further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper takes it as the object of interest and studies an automatic target detection method based on Deformable Part Models (DPM). The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and several part filters. Finally, the algorithm detects the target over the HOG feature pyramid with a sliding-window method. The running time of extracting the HOG pyramid can be shortened by 36% using a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with C++, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime libraries for image pyramids and convolution on the DM642 and other hardware, our detection algorithm is expected to be implementable on a hardware platform, and it has application prospects in actual systems.

  16. Magnetic tweezers with high permeability electromagnets for fast actuation of magnetic beads.

    PubMed

    Chen, La; Offenhäusser, Andreas; Krause, Hans-Joachim

    2015-04-01

    As a powerful and versatile scientific instrument, magnetic tweezers have been widely used in biophysical research areas such as mechanical cell properties and single-molecule manipulation. If one wants to steer bead position, the nonlinearity of the magnetic properties and the strong position dependence of the magnetic field in most magnetic tweezers make their control quite a challenge. In this article, we report multi-pole electromagnetic tweezers with high-permeability cores yielding high force output, good maneuverability, and flexible design. For modeling, we adopted a piecewise-linear dependence of magnetization on field to characterize the magnetic beads. We implemented a bi-linear interpolation of the magnetic field in the workspace, based on a lookup table obtained from finite element simulation. The electronics and software were custom-made to achieve high performance. In addition, the effects of the dimensions of the magnetic tips and of structural defects were also inspected. In a workspace of 0.1 × 0.1 mm², a force of up to 400 pN can be applied to a 2.8 μm superparamagnetic bead in any direction within the plane. Because a magnetic particle is always pulled towards a tip, the pulling forces from the pole tips have to be well balanced in order to control the particle's position. Active feedback control based on video tracking is implemented, which is able to work at speeds of up to 1 kHz, yielding good maneuverability of the magnetic beads.
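
    The workspace lookup reduces to standard bi-linear interpolation on the simulated grid; a sketch (Python, assuming a uniform grid and a query point strictly inside it):

    ```python
    import numpy as np

    def field_at(B, x, y, dx, dy):
        """Bi-linear interpolation of one field component stored on a regular
        grid B[iy, ix] with spacings dx, dy, standing in for the lookup table
        built from the finite-element simulation."""
        i, j = int(x // dx), int(y // dy)            # lower-left grid cell
        tx, ty = x / dx - i, y / dy - j              # fractional offsets in [0, 1)
        return ((1 - tx) * (1 - ty) * B[j, i]     + tx * (1 - ty) * B[j, i + 1]
              + (1 - tx) * ty       * B[j + 1, i] + tx * ty       * B[j + 1, i + 1])
    ```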

  17. Magnetic tweezers with high permeability electromagnets for fast actuation of magnetic beads

    NASA Astrophysics Data System (ADS)

    Chen, La; Offenhäusser, Andreas; Krause, Hans-Joachim

    2015-04-01

    As a powerful and versatile scientific instrument, magnetic tweezers have been widely used in biophysical research areas such as mechanical cell properties and single-molecule manipulation. If one wants to steer bead position, the nonlinearity of the magnetic properties and the strong position dependence of the magnetic field in most magnetic tweezers make their control quite a challenge. In this article, we report multi-pole electromagnetic tweezers with high-permeability cores yielding high force output, good maneuverability, and flexible design. For modeling, we adopted a piecewise-linear dependence of magnetization on field to characterize the magnetic beads. We implemented a bi-linear interpolation of the magnetic field in the workspace, based on a lookup table obtained from finite element simulation. The electronics and software were custom-made to achieve high performance. In addition, the effects of the dimensions of the magnetic tips and of structural defects were also inspected. In a workspace of 0.1 × 0.1 mm², a force of up to 400 pN can be applied to a 2.8 μm superparamagnetic bead in any direction within the plane. Because a magnetic particle is always pulled towards a tip, the pulling forces from the pole tips have to be well balanced in order to control the particle's position. Active feedback control based on video tracking is implemented, which is able to work at speeds of up to 1 kHz, yielding good maneuverability of the magnetic beads.

  18. A Retrieval of Tropical Latent Heating Using the 3D Structure of Precipitation Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Fiaz; Schumacher, Courtney; Feng, Zhe

    Traditionally, radar-based latent heating retrievals use rainfall to estimate the total column-integrated latent heating and then distribute that heating in the vertical using a model-based look-up table (LUT). In this study, we develop a new method that uses size characteristics of radar-observed precipitating echo (i.e., area and mean echo-top height) to estimate the vertical structure of latent heating. This technique (named the Convective-Stratiform Area [CSA] algorithm) builds on the fact that the shape and magnitude of latent heating profiles depend on the organization of convective systems, and aims to avoid some of the pitfalls involved in retrieving accurate rainfall amounts and microphysical information from radars and models. The CSA LUTs are based on a high-resolution Weather Research and Forecasting model (WRF) simulation whose domain spans much of the near-equatorial Indian Ocean. When applied to S-PolKa radar observations collected during the DYNAMO/CINDY2011/AMIE field campaign, the CSA retrieval compares well to heating profiles from a sounding-based budget analysis and improves upon a simple rain-based latent heating retrieval. The CSA LUTs also highlight the fact that convective latent heating increases in magnitude and height as cluster area and echo-top heights grow, with a notable congestus signature of cooling at mid levels. Stratiform latent heating is less dependent on echo-top height but is strongly linked to area. Unrealistic latent heating profiles in the stratiform LUT, viz., a low-level heating spike, an elevated melting layer, and net column cooling, were identified and corrected for. These issues highlight the need for improvement in model parameterizations, particularly in linking microphysical phase changes to larger mesoscale processes.
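
    Schematically, the retrieval indexes a table of model-derived profiles by feature area and echo-top height (Python; the bin edges and array shapes are illustrative, not the CSA algorithm's actual binning):

    ```python
    import numpy as np

    def heating_profile(area_km2, echo_top_km, lut, area_bins, top_bins):
        """Look up a latent-heating profile by precipitation-feature size:
        lut[i, j] holds the model-derived profile (heating rate vs. height)
        for area bin i and echo-top bin j; bins are upper edges."""
        i = min(np.searchsorted(area_bins, area_km2), len(area_bins) - 1)
        j = min(np.searchsorted(top_bins, echo_top_km), len(top_bins) - 1)
        return lut[i, j]
    ```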

  19. Validation of proton stopping power ratio estimation based on dual energy CT using fresh tissue samples

    NASA Astrophysics Data System (ADS)

    Taasti, Vicki T.; Michalak, Gregory J.; Hansen, David C.; Deisher, Amanda J.; Kruse, Jon J.; Krauss, Bernhard; Muren, Ludvig P.; Petersen, Jørgen B. B.; McCollough, Cynthia H.

    2018-01-01

    Dual energy CT (DECT) has been shown, in theoretical and phantom studies, to improve the stopping power ratio (SPR) determination used for proton treatment planning compared to the use of single energy CT (SECT). However, it has not been shown that this also extends to organic tissues. The purpose of this study was therefore to investigate the accuracy of SPR estimation for fresh pork and beef tissue samples used as surrogates of human tissues. The reference SPRs for fourteen tissue samples, which included fat, muscle and femur bone, were measured using proton pencil beams. The tissue samples were subsequently CT scanned using four different scanners with different dual energy acquisition modes, giving in total six DECT-based SPR estimations for each sample. The SPR was estimated using a proprietary algorithm (syngo.via DE Rho/Z Maps, Siemens Healthcare, Forchheim, Germany) for extracting the electron density and the effective atomic number. SECT images were also acquired, and SECT-based SPR estimations were performed using a clinical Hounsfield look-up table. The mean and standard deviation of the SPR over large volumes of interest were calculated. For the six different DECT acquisition methods, the root-mean-square errors (RMSEs) for the SPR estimates over all tissue samples were between 0.9% and 1.5%. For the SECT-based SPR estimation the RMSE was 2.8%. For one DECT acquisition method, a positive bias was seen in the SPR estimates, having a mean error of 1.3%. The largest errors were found in the very dense cortical bone from a beef femur. This study confirms the advantages of DECT-based SPR estimation, although good results were also obtained using SECT for most tissues.

  20. Depth of interaction decoding of a continuous crystal detector module.

    PubMed

    Ling, T; Lewellen, T K; Miyaoka, R S

    2007-04-21

    We present a clustering method to extract the depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUT) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses a LUT searching algorithm based on the ML method and two-dimensional mean-variance LUTs of light responses from each photomultiplier channel with respect to different gamma ray interaction positions, the position of interaction and DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two and four DOI region clustering were applied to the simulated data. Two DOI regions were used for the experimental data. The misclassification rate for simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement of the detector spatial resolution, especially for the corner region of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or detector front end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.
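
    A bare-bones version of the ML search over the mean-variance LUTs (Python; independent Gaussian channels as described, with hypothetical array shapes):

    ```python
    import numpy as np

    def sbp_estimate(event, mean_lut, var_lut):
        """Statistics-based positioning sketch: given one event's per-channel
        photomultiplier signals (shape (n_channels,)), pick the (position, DOI)
        LUT entry maximizing the Gaussian likelihood. mean_lut and var_lut have
        shape (n_positions, n_doi, n_channels)."""
        # negative log-likelihood, channels assumed independent and Gaussian
        nll = ((event - mean_lut) ** 2 / var_lut + np.log(var_lut)).sum(axis=-1)
        pos, doi = np.unravel_index(np.argmin(nll), nll.shape)
        return pos, doi
    ```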

  1. Research of spectacle frame measurement system based on structured light method

    NASA Astrophysics Data System (ADS)

    Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin

    2016-10-01

    The automatic eyeglass lens edging system is now widely used to automatically cut and polish uncut lenses based on spectacle frame shape data obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing around the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light is proposed to measure the three-dimensional (3D) data of the spectacle frame. First, we focus on the processing approach that solves the problem of deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposures, and high-dynamic-range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, the Gamma transform and median filtering are applied to enhance image contrast. In order to remove background noise from the image and extract the region of interest (ROI), an auxiliary lighting system of special design is utilized to help effectively distinguish the object from the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between the stripes and the boundary of the spectacle frame. After further fringe-center extraction and depth acquisition via a look-up table, the 3D shape of the spectacle frame is recovered.

  2. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  3. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  4. JPSS-1 VIIRS Version 2 At-Launch Relative Spectral Response Characterization and Performance

    NASA Technical Reports Server (NTRS)

    Moeller, Chris; Schwarting, Thomas; McIntire, Jeff; Moyer, Dave; Zeng, Jinan

    2017-01-01

    The relative spectral response (RSR) characterization of the JPSS-1 VIIRS spectral bands achieved at-launch status in the VIIRS Data Analysis Working Group February 2016 Version 2 RSR release. The Version 2 release improves upon the June 2015 Version 1 release by including December 2014 NIST T-SIRCUS spectral measurements of the VIIRS VisNIR bands in the analysis and by correcting the CO2 influence on the band M13 RSR. The T-SIRCUS-based characterization is merged with the summer 2014 SpMA-based characterization of the VisNIR bands (Version 1 release) to yield a fused RSR for these bands, combining the strengths of the T-SIRCUS and SpMA measurement systems. The M13 RSR is updated by applying a model-based correction to mitigate CO2 attenuation of the SpMA source signal that occurred during the M13 spectral measurements. The Version 2 release carries forward the Version 1 RSR for those bands that were not updated (M8-M12, M14-M16AB, I3-I5, DNBMGS). The Version 2 release includes band-average (over all detectors and subsamples) RSR plus supporting RSR for each detector and subsample. The at-launch band-average RSR have been used to populate Look-Up Tables supporting the sensor data record and environmental data record at-launch science products. Spectral performance metrics show that the JPSS-1 VIIRS RSR are compliant with specifications, with a few minor exceptions. The Version 2 release, which replaces the Version 1 release, is currently available on the password-protected NASA JPSS-1 eRooms under EAR99 control.

  5. A new method for calculating number concentrations of cloud condensation nuclei based on measurements of a three-wavelength humidified nephelometer system

    NASA Astrophysics Data System (ADS)

    Tao, Jiangchuan; Zhao, Chunsheng; Kuang, Ye; Zhao, Gang; Shen, Chuanyang; Yu, Yingli; Bian, Yuxuan; Xu, Wanyun

    2018-02-01

    The number concentration of cloud condensation nuclei (CCN) plays a fundamental role in cloud physics. Instruments that measure CCN number concentration (NCCN) directly, based on chamber technology, are complex and costly, so a simpler way of measuring NCCN is needed. In this study, a new method for calculating NCCN based on measurements of a three-wavelength humidified nephelometer system is proposed. A three-wavelength humidified nephelometer system measures the aerosol light-scattering coefficient (σsp) at three wavelengths and the light-scattering enhancement factor (fRH). The Ångström exponent (Å) inferred from σsp at the three wavelengths provides information on the mean predominant aerosol size, and the hygroscopicity parameter (κ) can be calculated from the combination of fRH and Å. On this basis, a lookup table relating σsp, κ and Å is established to predict NCCN. Owing to the assumptions underlying its derivation, the new method is not suitable for externally mixed particles, large particles (e.g., dust and sea salt) or fresh aerosol particles. The method is validated against direct measurements of NCCN using a CCN counter on the North China Plain. Results show that relative deviations between calculated and measured NCCN are within 30 % and confirm the robustness of the method. This method enables simpler NCCN measurements because the humidified nephelometer system is easily operated and stable. Compared with the method using a CCN counter, another advantage of this newly proposed method is that it can obtain NCCN at lower supersaturations in the ambient atmosphere.
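
    The Ångström exponent step follows from the power-law assumption that σsp is proportional to λ^(-Å). A minimal Python sketch of that fit, plus a stand-in lookup-table interpolation (the grid values are placeholders, not the paper's table):

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def angstrom_exponent(sigma_sp, wavelengths_nm):
            """Least-squares Angstrom exponent from sigma_sp at several
            wavelengths, using sigma_sp ~ lambda**(-A)."""
            slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(sigma_sp), 1)
            return -slope

        A = angstrom_exponent([120.0, 90.0, 62.0], [450.0, 550.0, 700.0])

        # Stand-in (sigma_sp, A, kappa) -> NCCN grid; a real table would be
        # precomputed from Mie and Koehler theory.
        sig_ax = np.linspace(20.0, 300.0, 15)
        ang_ax = np.linspace(0.5, 2.5, 9)
        kap_ax = np.linspace(0.1, 0.6, 6)
        S, G, K = np.meshgrid(sig_ax, ang_ax, kap_ax, indexing="ij")
        n_ccn_grid = 8.0 * S * (1.0 + 0.3 * G) * (1.0 + K)  # placeholder values
        lut = RegularGridInterpolator((sig_ax, ang_ax, kap_ax), n_ccn_grid)
        n_ccn = lut([[90.0, A, 0.3]])[0]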

  6. [A Method to Reconstruct Surface Reflectance Spectrum from Multispectral Image Based on Canopy Radiation Transfer Model].

    PubMed

    Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li

    2015-07-01

    Because a multispectral sensor offers only a few spectral bands, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information it acquires. Here, taking full account of pixel heterogeneity in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. The method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from the multispectral data using Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed different feature information for different surface types. To test the performance of the method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectrometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed ones. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated and observed bands, indicating that the reflectance spectrum was well simulated and reliable.
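
    The LUT retrieval step amounts to a nearest-match search between observed band reflectances and precomputed forward-model spectra. A minimal sketch with synthetic stand-in data; a real LUT would be filled by SLC model runs:

        import numpy as np

        def lut_invert(observed_refl, lut_refl, lut_params):
            """Return the parameter set of the LUT entry whose simulated band
            reflectances best match the observation (minimum RMSE)."""
            rmse = np.sqrt(np.mean((lut_refl - observed_refl) ** 2, axis=1))
            return lut_params[np.argmin(rmse)]

        rng = np.random.default_rng(0)
        lut_params = rng.uniform(0.0, 1.0, size=(1000, 3))  # stand-in model parameters
        lut_refl = rng.uniform(0.0, 0.6, size=(1000, 6))    # 6 ETM+-like bands
        observation = lut_refl[42] + 0.01                   # synthetic observation
        best_params = lut_invert(observation, lut_refl, lut_params)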

  7. From Ramachandran Maps to Tertiary Structures of Proteins.

    PubMed

    DasGupta, Debarati; Kaushik, Rahul; Jayaram, B

    2015-08-27

    Sequence to structure of proteins is an unsolved problem. A possible coarse-grained resolution to this entails specification of all the torsional (Φ, Ψ) angles along the backbone of the polypeptide chain. The Ramachandran map quite elegantly depicts the allowed conformational (Φ, Ψ) space of proteins, which is still very large for the purposes of accurate structure generation. We have divided the allowed (Φ, Ψ) space in Ramachandran maps into 27 distinct conformations sufficient to regenerate a structure to within 5 Å of the native, at least for small proteins, thus reducing the structure prediction problem to the specification of an alphanumeric string, i.e., the amino acid sequence together with one of the 27 conformations preferred by each amino acid residue. This still theoretically results in 27^n conformations for a protein comprising n amino acids. We then investigated the spatial correlations at the two-residue (dipeptide) and three-residue (tripeptide) levels in what may be described as higher-order Ramachandran maps, with the premise that the allowed conformational space starts to shrink as we introduce neighborhood effects. We found, for instance, that for a tripeptide, which can potentially exist in any of the 27^3 "allowed" conformations, three-fourths of these conformations are redundant at the 95% confidence level, suggesting sequence-context-dependent preferred conformations. We then created a look-up table of preferred conformations at the tripeptide level and correlated them with energetically favorable conformations. We found in particular that Boltzmann probabilities calculated from van der Waals energies for each conformation of tripeptides correlate well with the observed populations in the structural database (the average correlation coefficient is ∼0.8). An alphanumeric string, and hence the tertiary structure, can be generated for any sequence from the look-up table within minutes on a single processor, and to a higher level of accuracy if secondary structure can be specified. We tested the methodology on 100 small proteins, and in 90% of the cases a structure within 5 Å is recovered. We thus believe that the method presented here provides the missing link between Ramachandran maps and tertiary structures of proteins. A Web server to convert a tertiary structure to an alphanumeric string and to predict the tertiary structure from the sequence of a protein using the above methodology has been created and made freely accessible at http://www.scfbio-iitd.res.in/software/proteomics/rm2ts.jsp.
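
    As a toy illustration of encoding a backbone as an alphanumeric conformation string: the paper's 27 states are derived from the density of the Ramachandran map, whereas the uniform 9 x 3 (Φ, Ψ) binning below is purely hypothetical:

        LABELS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ0")  # 27 arbitrary symbols

        def conformation_label(phi, psi):
            """Map a (phi, psi) pair in degrees to one of 27 coarse states via a
            hypothetical uniform 9 x 3 grid (not the paper's partition)."""
            phi_bin = int((phi + 180.0) // 40.0) % 9
            psi_bin = int((psi + 180.0) // 120.0) % 3
            return LABELS[phi_bin * 3 + psi_bin]

        def encode_backbone(torsions):
            """Alphanumeric conformation string for a list of (phi, psi) pairs."""
            return "".join(conformation_label(phi, psi) for phi, psi in torsions)

        s = encode_backbone([(-60.0, -45.0), (-120.0, 130.0), (60.0, 40.0)])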

  8. Phantom-GRAPE: Numerical software library to accelerate collisionless N-body simulation with SIMD instruction set on x86 architecture

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Nitadori, Keigo; Okamoto, Takashi

    2013-02-01

    We have developed a numerical software library for collisionless N-body simulations named "Phantom-GRAPE", which highly accelerates force calculations among particles by use of a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). In our library, not only Newton's forces, but also central forces with an arbitrary shape f(r) that has a finite cutoff radius r_cut (i.e., f(r) = 0 at r > r_cut), can be quickly computed. In computing such central forces with an arbitrary force shape f(r), we refer to a pre-calculated look-up table. We also present a new scheme to create the look-up table whose binning is optimal to keep good accuracy in computing forces and whose size is small enough to avoid cache misses. Using an Intel Core i7-2600 processor, we measure the performance of our library for both Newton's forces and arbitrarily shaped central forces. In the case of Newton's forces, we achieve 2×10^9 interactions per second with one processor core (or 75 GFLOPS if we count 38 operations per interaction), which is 20 times higher than the performance of an implementation without any explicit use of SIMD instructions, and 2 times higher than that with the SSE instructions. With four processor cores, we obtain a performance of 8×10^9 interactions per second (or 300 GFLOPS). In the case of arbitrarily shaped central forces, we can calculate 1×10^9 and 4×10^9 interactions per second with one and four processor cores, respectively. The performance with one processor core is 6 times and 2 times higher than those of the implementations without any use of SIMD instructions and with the SSE instructions, respectively. These performances depend only weakly on the number of particles, irrespective of the force shape. This is in marked contrast with the fact that the performance of force calculations accelerated by graphics processing units (GPUs) depends strongly on the number of particles. The weak dependence of the performance on the number of particles is well suited to collisionless N-body simulations, since these simulations are usually performed with sophisticated N-body solvers such as Tree- and TreePM-methods combined with an individual timestep scheme. We conclude that collisionless N-body simulations accelerated with our library have a significant advantage over those accelerated by GPUs, especially on massively parallel environments.
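
    The library's optimized binning scheme is its own; the sketch below only illustrates the general device of tabulating a cutoff force f(r) on a grid in r^2 (so evaluation needs no square root) with linear interpolation:

        import numpy as np

        def build_force_table(f, r_cut=2.0, n_bins=1024):
            """Tabulate f(r) on a uniform grid in r**2; f is zero beyond r_cut."""
            r2_grid = np.linspace(0.0, r_cut * r_cut, n_bins)
            return r2_grid, f(np.sqrt(r2_grid))

        def force_lookup(r2_query, r2_grid, f_table):
            """Linear interpolation in r**2; outside the cutoff return 0."""
            value = np.interp(r2_query, r2_grid, f_table)
            return np.where(r2_query > r2_grid[-1], 0.0, value)

        r2_grid, f_table = build_force_table(lambda r: np.exp(-r * r))  # stand-in f(r)
        forces = force_lookup(np.array([0.5, 3.9, 5.0]), r2_grid, f_table)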

  9. A novel productivity-driven logic element for field-programmable devices

    NASA Astrophysics Data System (ADS)

    Marconi, Thomas; Bertels, Koen; Gaydadjiev, Georgi

    2014-06-01

    Although various techniques have been proposed for power reduction in field-programmable devices (FPDs), they are all still based on conventional logic elements (LEs). In the conventional LE, the output of the combinational logic (e.g. the look-up table (LUT) in many field-programmable gate arrays (FPGAs)) is connected to the input of the storage element, while the D flip-flop (DFF) is always clocked, even when not necessary. Such unnecessary transitions waste power. To address this problem, we propose a novel productivity-driven LE with a reduced number of transitions. The differences between our LE and the conventional LE lie in the type of flip-flop used and the internal LE organisation. In our LEs, DFFs have been replaced by T flip-flops with the T input permanently connected to logic value 1. Instead of connecting the output of the combinational logic to the FF input, we use it as the FF clock. The proposed LE has been validated via Simulation Program with Integrated Circuit Emphasis (SPICE) simulations for a 45-nm Complementary Metal-Oxide-Semiconductor (CMOS) technology as well as via real Computer-Aided Design (CAD) tools on a real FPGA using the standard Microelectronic Center of North Carolina (MCNC) benchmark circuits. The experimental results show that FPDs using our proposal not only have 48% lower total power but also run 17% faster than conventional FPDs on average.
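
    A toy switching-activity model of the claimed saving: the conventional DFF sees a clock event every cycle, while the T flip-flop (T = 1) clocked by the combinational output sees one only when that output changes. Real power also depends on analog and timing details this sketch ignores:

        def ff_clock_events(comb_out):
            """Count storage-element clock events for the two LE styles over a
            sampled combinational output stream (toy switching-activity model)."""
            dff_events = len(comb_out)  # conventional LE: clocked every cycle
            tff_events = sum(a != b for a, b in zip(comb_out, comb_out[1:]))
            return dff_events, tff_events

        dff, tff = ff_clock_events([0, 0, 1, 1, 1, 0, 0, 0, 1, 1])  # 10 vs 3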

  10. Final report on a pilot academic e-books project at Keio University Libraries : Potential for the scholarly use of digitized academic books

    NASA Astrophysics Data System (ADS)

    Shimada, Takashi

    This article reports on the results and significance of a pilot academic e-books project carried out at the Keio University Libraries from fiscal 2010 to 2012 to assess the viability of a new model in which the libraries provide all the campuses with access to Japanese academic books digitized jointly with academic publishers and cooperative firms. It focuses on the experimental use of digitized books, highlighting the students' attitudes toward and expectations of e-books as found in surveys. Major findings include the following. Users have a strong demand for digitized readings that are lookup-oriented rather than learning-oriented, with greater value placed on the functionalities of federated full-text searching, reading on a screen, and accessing the desired chapter directly from the table of contents. They also want an online space in which to manage different forms of digitized learning resources. Based on the results of the experiment, we investigated the potential of e-books and a new type of textbook as educational infrastructure. Japan's university libraries need to engage actively in the mass digitization of academic books to adapt to the changing ways in which research, study and teaching are conducted. We plan to start a joint experiment with other university libraries to develop a practical model for the use of e-books.

  11. Influence of aerosol estimation on coastal water products retrieved from HICO images

    NASA Astrophysics Data System (ADS)

    Patterson, Karen W.; Lamela, Gia

    2011-06-01

    The Hyperspectral Imager for the Coastal Ocean (HICO) is a hyperspectral sensor which was launched to the International Space Station in September 2009. The Naval Research Laboratory (NRL) has been developing the Coastal Water Signatures Toolkit (CWST) to estimate water depth, bottom type and water column constituents such as chlorophyll, suspended sediments and chromophoric dissolved organic matter from hyperspectral imagery. The CWST uses a look-up table approach, comparing remote sensing reflectance spectra observed in an image to a database of modeled spectra for pre-determined water column constituents, depth and bottom type. To use this approach successfully, the remote sensing reflectances must be accurate, which implies accurately correcting for the atmospheric contribution to the HICO top-of-the-atmosphere radiances. One tool the NRL is using to atmospherically correct HICO imagery is Correction of Coastal Ocean Atmospheres (COCOA), which is based on Tafkaa 6S. One of the user input parameters to COCOA is aerosol optical depth or aerosol visibility, which can vary rapidly over short distances in coastal waters. Changes to the aerosol thickness result in changes to the magnitude of the remote sensing reflectances. As such, the CWST retrievals of water constituents, depth and bottom type can be expected to vary in like fashion. This work illustrates the variability in CWST retrievals due to inaccurate aerosol thickness estimation during atmospheric correction of HICO images.

  12. RANS Simulation (Virtual Blade Model [VBM]) of Array of Three Coaxial Lab Scaled DOE RM1 MHK Turbine with 5D Spacing

    DOE Data Explorer

    Javaherchi, Teymour

    2016-06-08

    Attached are the .cas and .dat files, along with the required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients, for the Reynolds Averaged Navier-Stokes (RANS) simulation of three coaxially located lab-scaled DOE RM1 turbines implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a redesigned geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at a laboratory-achievable Reynolds number (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbines in a coaxial array is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the rotating turbine blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of each device and the structure of their turbulent far wakes. The results of these simulations were validated against in-house experimental data. Simulations for other turbine configurations are available upon request.
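
    The VBM body-force routine queries the lift/drag table for each blade section; a minimal sketch of that lookup with linear interpolation. The tabulated values below are placeholders, not the RM1 coefficients shipped with this dataset:

        import numpy as np

        # Placeholder airfoil table: angle of attack (deg) -> lift/drag coefficients
        ALPHA_DEG = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
        CL_TAB = np.array([-0.8, -0.3, 0.2, 0.8, 1.2, 1.0])
        CD_TAB = np.array([0.05, 0.02, 0.01, 0.02, 0.05, 0.12])

        def blade_coefficients(alpha_deg):
            """Linearly interpolate CL and CD at a blade section's local angle
            of attack, as a Blade Element Model body-force routine would."""
            cl = np.interp(alpha_deg, ALPHA_DEG, CL_TAB)
            cd = np.interp(alpha_deg, ALPHA_DEG, CD_TAB)
            return cl, cd

        cl, cd = blade_coefficients(7.3)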

  13. RANS Simulation (Virtual Blade Model [VBM]) of Single Lab Scaled DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph

    2014-04-15

    Attached are the .cas and .dat files for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a redesigned geometry, based on the full-scale DOE RM1 design, producing the same power output as the full-scale model while operating at matched Tip Speed Ratio values at a laboratory-achievable Reynolds number (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbine is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the rotating turbine blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of the device and the structure of its turbulent far wake. Owing to the simplifications implemented for modeling the rotating blades, VBM cannot capture details of the flow field in the near-wake region of the device. The required User Defined Functions (UDFs) and look-up table of lift and drag coefficients are included along with the .cas and .dat files.

  14. Retrieval of Winter Wheat Leaf Area Index from Chinese GF-1 Satellite Data Using the PROSAIL Model.

    PubMed

    Li, He; Liu, Gaohuan; Liu, Qingsheng; Chen, Zhongxin; Huang, Chong

    2018-04-06

    Leaf area index (LAI) is one of the key biophysical parameters of crop structure. Accurate quantitative estimation of crop LAI is essential for monitoring crop growth and health. The PROSAIL radiative transfer model (RTM) is one of the most established methods for estimating crop LAI. In this study, a look-up table (LUT) based on the PROSAIL RTM was first used to estimate winter wheat LAI from GF-1 data, accounting for available prior knowledge of the distribution of winter wheat characteristics. Next, the effects of 15 LAI-LUT strategies with reflectance bands and 10 LAI-LUT strategies with vegetation indexes on the accuracy of winter wheat LAI retrieval at different phenological stages were evaluated against in situ LAI measurements. The results showed that the LAI-GNDVI LUT strategy was optimal during the elongation stages, with the highest accuracy (root mean squared error (RMSE) of 0.34 and coefficient of determination (R²) of 0.61), and that the LAI-Green LUT strategy was optimal during the grain-filling stages, with an RMSE of 0.74 and an R² of 0.20. The results demonstrate that the PROSAIL RTM has great potential for winter wheat LAI inversion from GF-1 satellite data and that performance can be improved by selecting the appropriate LUT inversion strategy for different growth periods.
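
    The LAI-GNDVI LUT strategy reduces to computing the index and taking the nearest LUT match. A minimal sketch, with a stand-in saturation curve in place of actual PROSAIL forward runs:

        import numpy as np

        def gndvi(nir, green):
            """Green NDVI from NIR and green band reflectances."""
            return (nir - green) / (nir + green)

        def retrieve_lai(gndvi_obs, lut_lai, lut_gndvi):
            """Nearest-match LUT inversion on a single vegetation index."""
            return lut_lai[np.argmin(np.abs(lut_gndvi - gndvi_obs))]

        lut_lai = np.linspace(0.1, 7.0, 200)
        lut_gndvi = 0.9 * (1.0 - np.exp(-0.45 * lut_lai))  # stand-in for PROSAIL runs
        lai = retrieve_lai(gndvi(nir=0.42, green=0.08), lut_lai, lut_gndvi)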

  15. EPIC Radiance Simulator for Deep Space Climate ObserVatoRy (DSCOVR)

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Marshak, Alexander; Wang, Yujie; Korkin, Sergey; Herman, Jay

    2011-01-01

    The Deep Space Climate ObserVatoRy (DSCOVR) is a planned space weather mission for Sun and Earth observations from the Lagrangian L1 point. Onboard DSCOVR is the multispectral imager EPIC, designed for unique observations of the full illuminated disk of the Earth with high temporal and 10 km spatial resolution. Depending on latitude, EPIC will observe the same Earth surface area during the course of the day over a wide range of solar and view zenith angles in the backscattering view geometry, with scattering angles of 164-172°. To understand the information content of EPIC data for analysis of the Earth's clouds, aerosols and surface properties, an EPIC radiance Simulator was developed covering the UV-VIS-NIR range including the oxygen A- and B-bands (λ = 340, 388, 443, 555, 680, 779.5, 687.7, 763.3 nm). The Simulator uses ancillary data (surface pressure/height, NCEP wind speed) as well as MODIS-based geophysical fields such as spectral surface bidirectional reflectance, column water vapor, and properties of aerosols and clouds including optical depth, effective radius, phase and cloud top height. The original simulations are conducted at 1 km resolution using the look-up table approach and then averaged to 10 km EPIC radiances. This talk will give an overview of the EPIC Simulator with analysis of results over the continental USA and the northern Atlantic.
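
    The final step, averaging 1 km simulated radiances onto the 10 km EPIC grid, is a simple block mean. A sketch assuming array dimensions divisible by the averaging factor:

        import numpy as np

        def block_average(field_1km, factor=10):
            """Average a fine-resolution field onto a coarser grid; dimensions
            are assumed to be exact multiples of the averaging factor."""
            h, w = field_1km.shape
            return field_1km.reshape(h // factor, factor,
                                     w // factor, factor).mean(axis=(1, 3))

        coarse = block_average(np.random.rand(100, 120))  # (100, 120) -> (10, 12)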

  16. Wheel-Sleeper Impact Model in Rail Vehicles Analysis

    NASA Astrophysics Data System (ADS)

    Brabie, Dan

    The current paper establishes the necessary prerequisites for studying the post-derailment dynamic behavior of high-speed rail vehicles by means of multi-body system (MBS) software. A finite-element (FE) model of one rail vehicle wheel impacting a limited concrete sleeper volume is built in LS-DYNA. A novel simulation scheme is employed for obtaining the necessary wheel-sleeper impact data, transferred to the MBS code as pre-defined look-up tables of the wheel's impulse variation during impact. The FE model is tentatively validated by successfully comparing the indentation marks with a photograph from an authentic derailment for a continuous impact sequence over three subsequent sleepers. A post-derailment module is developed and implemented in the MBS simulation tool GENSYS, which detects wheel contact with sleepers and applies valid longitudinal, lateral and vertical force resultants based on the existing impact conditions. The accuracy of the MBS code in terms of the wheels' three-dimensional trajectory over 24 consecutive sleepers is successfully compared with its FE counterpart for an arbitrary impact scenario. An axle-mounted brake disc is tested as an alternative substitute guidance mechanism after flange-climbing derailments at 100 and 200 km/h on the Swedish high-speed tilting train X 2000. Certain combinations of brake disc geometrical parameters manage to stop the lateral deviation of the wheelsets in circular curve sections at high lateral track plane acceleration.

  17. Using exposure bands for rapid decision making in the ...

    EPA Pesticide Factsheets

    The ILSI Health and Environmental Sciences Institute (HESI) Risk Assessment in the 21st Century (RISK21) project was initiated to address and catalyze improvements in human health risk assessment. RISK21 is a problem-formulation-based conceptual roadmap and risk matrix visualization tool, facilitating transparent evaluation of both hazard and exposure components. The RISK21 roadmap is exposure-driven, i.e. exposure is used as the second step (after problem formulation) to define and focus the assessment. This paper describes the exposure tiers of the RISK21 matrix and the approaches to adapt readily available information to more quickly inform exposure at a screening level. In particular, exposure look-up tables were developed from available exposure tools (the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) Targeted Risk Assessment (TRA) for worker exposure, ECETOC TRA and the European Solvents Industry Group (ESIG) Generic Exposure Scenario (GES) Risk and Exposure Tool (EGRET) for consumer exposure, and USEtox for indirect exposure to humans via the environment) and were tested in a hypothetical mosquito bed netting case study. A detailed WHO risk assessment for a similar mosquito net use served as a benchmark for the performance of the RISK21 approach. The case study demonstrated that the screening methodologies provided suitably conservative exposure estimates for risk assessment. The results of this effort showed that the RISK21 approach is useful f

  18. Thin ice clouds in the Arctic: cloud optical depth and particle size retrieved from ground-based thermal infrared radiometry

    NASA Astrophysics Data System (ADS)

    Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; Turner, David D.; Eloranta, Edwin W.

    2017-06-01

    Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate the cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice cloud cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters, including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup-table retrieval method based on optimal estimation was used to successfully extract cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R² = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high-spectral-resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the sensitivity of the broadband thermal radiometer retrieval to small (TIC1) particle sizes.
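
    Stripped of the optimal-estimation machinery, the retrieval plus TIC1/TIC2 split can be illustrated as a nearest-match lookup over simulated multiband radiances followed by the 30 µm diameter threshold. The LUT values below are placeholders:

        import numpy as np

        def retrieve_cloud(rad_obs, lut_rad, lut_tau, lut_deff):
            """Nearest-match lookup over simulated band radiances, then
            classify by effective particle diameter (TIC1 <= 30 um < TIC2)."""
            i = np.argmin(np.sum((lut_rad - rad_obs) ** 2, axis=1))
            cloud_class = "TIC1" if lut_deff[i] <= 30.0 else "TIC2"
            return lut_tau[i], lut_deff[i], cloud_class

        tau_ax, deff_ax = np.meshgrid(np.linspace(0.1, 2.6, 26),
                                      np.linspace(10.0, 90.0, 17), indexing="ij")
        lut_tau, lut_deff = tau_ax.ravel(), deff_ax.ravel()
        lut_rad = np.column_stack([lut_tau * (1.0 + 0.01 * b) + 0.002 * lut_deff
                                   for b in range(3)])  # placeholder radiances
        tau, deff, cls = retrieve_cloud(lut_rad[100] + 0.001, lut_rad, lut_tau, lut_deff)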

  19. An optimal-estimation-based aerosol retrieval algorithm using OMI near-UV observations

    NASA Astrophysics Data System (ADS)

    Jeong, U.; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation (OE)-based aerosol retrieval algorithm using OMI (Ozone Monitoring Instrument) near-ultraviolet observations was developed in this study. The OE-based algorithm has the merit of providing useful error estimates simultaneously with the inversion products. Furthermore, instead of using traditional look-up tables for inversion, it performs online radiative transfer calculations with VLIDORT (a linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm show a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than that of the operational product during the campaign. The OE-based estimated error represented the variance of the actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates did. The forward-model parameter errors were analyzed separately for the AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while the FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for related studies. Detailed advantages of using the OE method are described and discussed in this paper.
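
    The OE core is a Gauss-Newton update that balances measurement misfit against departure from the prior. A minimal sketch with a toy linear forward model standing in for the VLIDORT online radiative transfer:

        import numpy as np

        def oe_step(x, y, forward, jacobian, x_a, S_a_inv, S_e_inv):
            """One Gauss-Newton step of optimal estimation; also returns the
            posterior covariance, the source of the per-retrieval error bars."""
            K = jacobian(x)
            S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)
            gradient = K.T @ S_e_inv @ (y - forward(x)) - S_a_inv @ (x - x_a)
            return x + S_hat @ gradient, S_hat

        A = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])  # toy linear model
        forward = lambda x: A @ x
        jacobian = lambda x: A
        x_a, S_a_inv, S_e_inv = np.zeros(2), np.eye(2), np.eye(3) / 0.01
        y = forward(np.array([0.7, 0.3]))
        x_hat, S_hat = oe_step(x_a.copy(), y, forward, jacobian, x_a, S_a_inv, S_e_inv)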

  20. Crystal identification for a dual-layer-offset LYSO based PET system via Lu-176 background radiation and mean shift algorithm

    NASA Astrophysics Data System (ADS)

    Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang

    2018-01-01

    Modern positron emission tomography (PET) detectors are made from pixelated scintillation crystal arrays and read out with Anger logic. The interaction position of the gamma ray must be assigned to a crystal using a crystal position map or look-up table, making crystal identification a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO-based animal PET system via Lu-176 background radiation and the mean shift algorithm. Single photon event data of the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using time information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer using the mean shift algorithm. The response of the inner layer is then removed from the SPFM by subtracting the CFM, and the peaks of the outer layer are likewise identified using the mean shift algorithm. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated using a distance criterion based on these peaks. The proposed method is verified on an animal PET system with 48 detector blocks on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for dual-layer-offset lutetium-based PET systems, performing crystal identification without external radiation sources.
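
    A plain flat-kernel variant of mean shift peak finding on flood-map events; the bandwidth, iteration count and merge criterion below are generic assumptions, not the paper's settings:

        import numpy as np

        def mean_shift_peaks(points, bandwidth=2.0, n_iter=30):
            """Flat-kernel mean shift: each seed drifts to the centroid of its
            neighbourhood until it settles on a density peak; peaks closer
            than the bandwidth are then merged."""
            modes = points.astype(float).copy()
            for _ in range(n_iter):
                for i in range(len(modes)):
                    nbrs = points[np.linalg.norm(points - modes[i], axis=1) < bandwidth]
                    if len(nbrs):
                        modes[i] = nbrs.mean(axis=0)
            peaks = []
            for m in modes:
                if all(np.linalg.norm(m - p) >= bandwidth for p in peaks):
                    peaks.append(m)
            return np.array(peaks)

        rng = np.random.default_rng(1)
        flood = np.vstack([rng.normal(c, 0.4, size=(200, 2))
                           for c in [(0.0, 0.0), (6.0, 1.0), (3.0, 7.0)]])
        peaks = mean_shift_peaks(flood, bandwidth=2.0)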
