Sample records for table look-up method

  1. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in heavy table memory access and, in turn, high table power consumption. To reduce the heavy memory access of current methods, and thereby their power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is the introduction of index search technology to reduce the memory access required for table look-up. Specifically, our scheme uses index search to cut the number of search-and-match operations for code_word by exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving table look-up power. The experimental results show that the proposed index-search table look-up algorithm reduces memory access by about 60% compared with the sequential-search scheme, saving substantial power consumption for CAVLD in H.264/AVC.
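The index-search idea described in the abstract can be sketched as follows: rather than sequentially comparing the bitstream against every codeword, the run of leading zeros in the code prefix directly indexes a small group table giving the suffix length and base symbol. This is a hypothetical illustration with a toy code table, not the paper's actual CAVLD tables.

```python
# Toy code table: leading-zero count -> (suffix length, first symbol index).
# Hypothetical structure for illustration only.
GROUPS = {
    0: (0, 0),   # codeword "1"        -> symbol 0
    1: (1, 1),   # codewords "01x"     -> symbols 1-2
    2: (2, 3),   # codewords "001xx"   -> symbols 3-6
    3: (3, 7),   # codewords "0001xxx" -> symbols 7-14
}

def decode(bits):
    """Decode one symbol from a list of 0/1 bits; return (symbol, bits_used)."""
    zeros = 0
    while bits[zeros] == 0:          # count leading zeros of the code prefix
        zeros += 1
    suffix_len, base = GROUPS[zeros]  # one look-up replaces the sequential scan
    pos = zeros + 1                   # skip the terminating '1'
    suffix = 0
    for b in bits[pos:pos + suffix_len]:
        suffix = (suffix << 1) | b
    return base + suffix, pos + suffix_len
```

Only one small table access is needed per symbol, which is the source of the memory-access saving the abstract reports.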

  2. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs, so that the decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding schemes such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  3. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yao; Wan, Liang; Chen, Kai

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.
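The matching step can be sketched as follows: the misorientation between twin and parent orientation matrices is converted to an angle-axis pair and compared against tabulated pairs within a tolerance. The table entry (60° about <111>, the cubic Σ3 twin) and the tolerances are illustrative, and the sketch omits the point-group symmetry reduction that the full method applies.

```python
import numpy as np

# Illustrative one-entry look-up table: rotation angle (deg) and unit axis.
TWIN_LUT = {
    "sigma3_twin": (60.0, np.array([1, 1, 1]) / np.sqrt(3)),  # 60 deg about <111>
}

def axis_angle(R):
    """Rotation angle (deg) and unit axis of a proper rotation matrix."""
    angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1)))
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = np.linalg.norm(axis)
    return angle, (axis / n if n > 0 else axis)

def classify(R_parent, R_twin, tol_deg=2.0):
    """Match the parent->twin misorientation against the look-up table."""
    M = R_twin @ R_parent.T
    angle, axis = axis_angle(M)
    for name, (a_ref, ax_ref) in TWIN_LUT.items():
        if abs(angle - a_ref) < tol_deg and abs(abs(axis @ ax_ref) - 1) < 0.01:
            return name
    return None
```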

  4. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    DOE PAGES

    Li, Yao; Wan, Liang; Chen, Kai

    2015-04-25

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.

  5. Spectral Retrieval of Latent Heating Profiles from TRMM PR Data: Comparison of Look-Up Tables

    NASA Technical Reports Server (NTRS)

    Shige, Shoichi; Takayabu, Yukari N.; Tao, Wei-Kuo; Johnson, Daniel E.; Shie, Chung-Lin

    2003-01-01

    The primary goal of the Tropical Rainfall Measuring Mission (TRMM) is to use information about the distribution of precipitation to determine the four-dimensional (i.e., temporal and spatial) patterns of latent heating over the whole tropical region. The Spectral Latent Heating (SLH) algorithm has been developed to estimate latent heating profiles for the TRMM Precipitation Radar (PR) with a cloud-resolving model (CRM). The method uses CRM-generated heating profile look-up tables for three rain types: convective, shallow stratiform, and anvil rain (deep stratiform with a melting level). For convective and shallow stratiform regions, the look-up table is referenced by the precipitation top height (PTH). For the anvil region, on the other hand, the look-up table is referenced by the precipitation rate at the melting level instead of PTH. For global applications, it is necessary to examine the universality of the look-up table. In this paper, we compare the look-up tables produced from numerical simulations of cloud ensembles forced with Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE) data and GARP Atlantic Tropical Experiment (GATE) data. There are some notable differences between the TOGA-COARE table and the GATE table, especially for convective heating. First, the TOGA-COARE table contains a larger number of very deep convective profiles than the GATE table, mainly due to differences in SST. Second, shallow convective heating is stronger in the TOGA-COARE table than in the GATE table, which might be attributable to differences in the strength of the low-level inversions. Third, the altitudes of the convective heating maxima are higher in the TOGA-COARE table than in the GATE table; the heating maxima are located just below the melting level, because warm-rain processes are prevalent in tropical oceanic convective systems. Differences in the levels of the convective heating maxima probably reflect differences in melting-layer heights. We are now extending our study to simulations of other field experiments (e.g., SCSMEX and ARM) in order to examine the universality of the look-up table. The impact of the look-up tables on the retrieved latent heating profiles will also be assessed.
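The table look-up step the abstract describes amounts to retrieving a vertical heating profile keyed on precipitation top height. A minimal sketch, with entirely illustrative numbers (not values from the actual TOGA-COARE or GATE tables):

```python
import numpy as np

heights_km = np.array([1.0, 3.0, 5.0, 7.0])   # vertical levels of the profiles
pth_bins = np.array([4.0, 8.0, 12.0])         # PTH bin centres (km), illustrative
# one latent-heating profile (K/day at each level) per PTH bin
profiles = np.array([
    [2.0, 1.0, 0.0, 0.0],    # shallow convection
    [3.0, 4.0, 2.0, 0.5],    # mid-level convection
    [2.0, 5.0, 6.0, 3.0],    # deep convection
])

def heating_profile(pth_km):
    """Look up a heating profile for a given PTH, interpolating between bins."""
    return np.array([np.interp(pth_km, pth_bins, profiles[:, k])
                     for k in range(len(heights_km))])
```

A radar-observed PTH between two bin centres then yields a blended profile, which is the sense in which the table is "referenced by" PTH.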

  6. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computational load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.

  7. Overview of fast algorithm in 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed and computed in real time to generate the hologram, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce the memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) algorithms among point-based methods, and the fully analytical and one-step approaches among polygon-based methods. In this presentation, we review various fast algorithms of both families, focusing on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method built on 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient at saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.

  8. Efficient generation of 3D hologram for American Sign Language using look-up table

    NASA Astrophysics Data System (ADS)

    Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo

    2010-02-01

    American Sign Language (ASL) is one of the languages that most helps hearing-impaired people communicate. Current 2-D broadcasting and 2-D movies use ASL to convey information, help viewers understand the scene, and translate foreign languages. Because of this usefulness, ASL will not disappear in future three-dimensional (3-D) broadcasting or 3-D movies. Meanwhile, several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method; however, these methods either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and no loss of computational speed. We therefore propose a method to efficiently generate holographic ASL in holographic 3DTV or 3-D movies using the look-up table method. The proposed method largely consists of five steps: construction of the LUT for the ASL images; extraction of the characters in the script or situation; retrieval of the fringe patterns for those characters from the ASL LUT; composition of the hologram pattern for the 3-D video with the hologram pattern for the ASL; and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.

  9. A VLSI architecture for performing finite field arithmetic with reduced table look-up

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Reed, I. S.

    1986-01-01

    A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element are found by the use of several smaller tables. The method is based on the Chinese Remainder Theorem, and the technique often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic, including multiplication, division, and the finding of an inverse element in the finite field.
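The abstract does not give Glover's tables, but the underlying idea can be sketched: the discrete log is recovered modulo each small prime factor of the multiplicative group order from a table of that prime's size, then recombined with the Chinese Remainder Theorem. The sketch below works in the multiplicative group mod 31 (order 30 = 2·3·5) rather than an actual finite field of characteristic 2; ten table entries replace a single 30-entry log table.

```python
P, G = 31, 3                     # group modulus and a generator (illustrative)
N = P - 1                        # multiplicative group order
FACTORS = [2, 3, 5]              # prime factors of the order

# One small table per factor: maps a^(N/q) -> log(a) mod q.
SMALL_TABLES = {
    q: {pow(G, (N // q) * j, P): j for j in range(q)}
    for q in FACTORS
}

def log_crt(a):
    """Discrete log of a to base G, via small tables + CRT recombination."""
    residues = {q: SMALL_TABLES[q][pow(a, N // q, P)] for q in FACTORS}
    x = 0
    for q, r in residues.items():
        m = N // q
        x += r * m * pow(m, -1, q)   # standard CRT term
    return x % N
```

The memory saving is the sum of the factor sizes (2 + 3 + 5 = 10 entries) versus the full group order (30 entries), which grows more dramatic for larger fields.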

  10. Table look-up estimation of signal and noise parameters from quantized observables

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1986-01-01

    A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
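The general shape of such an estimator can be sketched: a statistic of the quantized observables is precomputed over a grid of the underlying parameter, and the measured statistic is inverted by table look-up. The 4-bit quantizer step size, the choice of mean absolute value as the statistic, and all grid values below are assumptions for illustration, not the paper's design.

```python
import numpy as np

STEP, LEVELS = 0.25, 16                        # hypothetical 4-bit quantizer

def quantize(x):
    q = np.clip(np.round(x / STEP), -LEVELS // 2, LEVELS // 2 - 1)
    return q * STEP

rng = np.random.default_rng(0)
sigmas = np.linspace(0.2, 2.0, 40)             # noise-parameter grid
# Off-line Monte Carlo pre-computation of the look-up table:
# expected mean |quantized sample| for each candidate sigma.
table = np.array([np.mean(np.abs(quantize(rng.normal(0, s, 20000))))
                  for s in sigmas])
table = np.maximum.accumulate(table)           # enforce monotonicity vs MC noise

def estimate_sigma(qsamples):
    """Invert the table: observed mean |q| -> sigma, by interpolation."""
    return float(np.interp(np.mean(np.abs(qsamples)), table, sigmas))
```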

  11. Instantaneous and controllable integer ambiguity resolution: review and an alternative approach

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong

    2015-11-01

    In high-precision applications of Global Navigation Satellite Systems (GNSS), integer ambiguity resolution is the key step in realizing precise positioning and attitude determination. As a necessary part of quality control, integer aperture (IA) ambiguity resolution provides the theoretical and practical foundation for ambiguity validation, and is mainly realized by acceptance testing. Due to the correlation between ambiguities, it is impossible to control the failure rate according to an analytical formula; hence, the fixed failure rate approach is implemented by Monte Carlo sampling. However, owing to the characteristics of Monte Carlo sampling and look-up tables, creating a look-up table that covers sufficient GNSS scenarios consumes a large amount of time. This restricts the fixed failure rate approach to post-processing if a look-up table is not available. Furthermore, if not enough GNSS scenarios are considered, the table may only be valid for a specific scenario or application. Besides, the method of creating the look-up table or look-up function still needs to be designed for each specific acceptance test. To overcome these problems in the determination of critical values, this contribution proposes, for the first time, an instantaneous and CONtrollable (iCON) IA ambiguity resolution approach. The iCON approach has the following advantages: (a) the critical value of the acceptance test is determined independently, based on the required failure rate and the GNSS model, without resorting to external information such as a look-up table; (b) it can be realized instantaneously for most IA estimators that have analytical probability formulas, and the stronger the GNSS model, the less time is consumed; (c) it provides a new viewpoint for improving research on IA estimation. To verify these conclusions, multi-frequency and multi-GNSS simulation experiments were implemented. The results show that IA estimators based on the iCON approach can realize controllable ambiguity resolution. Moreover, compared with the ratio test IA based on a look-up table, the difference test IA and IA least squares based on the iCON approach have, in most cases, higher success rates and better control of failure rates.

  12. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  13. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawai, Soshi, E-mail: kawai@cfd.mech.tohoku.ac.jp; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  14. Generating functional analysis of minority games with inner product strategy definitions

    NASA Astrophysics Data System (ADS)

    Coolen, A. C. C.; Shayeghi, N.

    2008-08-01

    We use generating functional methods to solve the so-called inner product versions of the minority game (MG), with fake and/or real market histories, by generalizing the theory developed recently for look-up table MGs with real histories. The phase diagrams of the look-up table and inner product MG versions are generally found to be identical, with the exception of inner product MGs where histories are sampled linearly, which are found to be structurally critical. However, we encounter interesting differences both in the theory (where the role of the history frequency distribution in look-up table MGs is taken over by the eigenvalue spectrum of a history covariance matrix in inner product MGs) and in the static and dynamic phenomenology of the models. Our theoretical predictions are supported by numerical simulations.

  15. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
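The control flow of the patented scheme can be sketched: two engine map look-up tables supply initial control-parameter values for the detected operating point, and each table is adjusted only when the performance variable's sensitivity to that parameter exceeds a threshold. All names, gains, and numbers below are illustrative, not values from the patent.

```python
def adjust_maps(map1, map2, key, sens1, sens2, error, gain=0.1, thresh=0.5):
    """Nudge each map entry toward reducing `error`, but only for maps whose
    sensitivity exceeds the threshold (hypothetical update rule)."""
    if abs(sens1) > thresh:
        map1[key] -= gain * error / sens1
    if abs(sens2) > thresh:
        map2[key] -= gain * error / sens2
    return map1, map2

# operating point (speed_rpm, load) -> spark advance / injection timing (made up)
spark_map = {(2000, 0.5): 12.0}
fuel_map = {(2000, 0.5): 4.0}

# Performance variable overshoots its target by 1.0; only the spark map,
# whose sensitivity exceeds the threshold, gets adjusted.
adjust_maps(spark_map, fuel_map, (2000, 0.5), sens1=2.0, sens2=0.2, error=1.0)
```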

  16. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.

  17. On the look-up tables for the critical heat flux in tubes (history and problems)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirillov, P.L.; Smogalev, I.P.

    1995-09-01

    The complexity of the critical heat flux (CHF) problem for boiling in channels is caused by the large number of variable factors and the variety of two-phase flows. The existence of several hundred correlations for the prediction of CHF demonstrates the unsatisfactory state of this problem. Phenomenological CHF models can provide only qualitative predictions of CHF, primarily in annular-dispersed flow. CHF look-up tables, which cover the results of numerous experiments, have received growing recognition in the last 15 years. These tables are based on the statistical averaging of CHF values for each range of pressure, mass flux, and quality. The CHF values for regions where no experimental data are available are obtained by extrapolation. Correcting these tables to account for the diameter effect is a complicated problem; there are ranges of conditions where simple correlations cannot produce reliable results, so the diameter effect on CHF needs additional study. The modification of look-up table data for CHF in tubes to predict CHF in rod bundles must include a method that takes into account the nonuniformity of quality in the rod bundle cross section.

  18. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation generates an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory when implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly reduce the cost of the CIS. The method supports not only single image capture but also bracketing, capturing three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
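The core of the ILUT construction is histogram matching between the two preview exposures: for each short-exposure level, the table stores the long-exposure level with the nearest cumulative-histogram value, and the captured image is then corrected by a pure per-pixel look-up. This is a plain software sketch of histogram matching; the paper's hardware-specific refinements are omitted.

```python
import numpy as np

def build_lut(short_img, long_img):
    """Build a 256-entry LUT matching the short-exposure histogram to the
    long-exposure histogram (8-bit images as integer arrays)."""
    cdf_s = np.cumsum(np.bincount(short_img.ravel(), minlength=256)) / short_img.size
    cdf_l = np.cumsum(np.bincount(long_img.ravel(), minlength=256)) / long_img.size
    # for each short-exposure level, the long-exposure level with nearest CDF
    return np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)

def correct(image, lut):
    """Apply the LUT per pixel: a pure table look-up, no frame memory needed."""
    return lut[image]
```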

  19. Extension of Generalized Fluid System Simulation Program's Fluid Property Database

    NASA Technical Reports Server (NTRS)

    Patel, Kishan

    2011-01-01

    This internship focused on the development of additional capabilities for the Generalized Fluid System Simulation Program (GFSSP). GFSSP is a thermo-fluid code used to evaluate system performance by a finite volume-based network analysis method. The program was developed primarily to analyze the complex internal flow of propulsion systems and is capable of solving many problems related to thermodynamics and fluid mechanics. GFSSP is integrated with thermodynamic programs that provide fluid properties for sub-cooled, superheated, and saturation states. For fluids that are not included in the thermodynamic property programs, look-up property tables can be provided. The look-up property tables of the current release version can only handle sub-cooled and superheated states. The primary purpose of the internship was to extend the look-up tables to handle saturated states. This involved (a) generation of a property table using REFPROP, a widely used thermodynamic property program, and (b) modification of the Fortran source code to read in an additional property table containing saturation data for both saturated liquid and saturated vapor states. Also, a method was implemented to calculate the thermodynamic properties of user-defined fluids within the saturation region, given values of pressure and enthalpy. These additions required new code to be written, and older code had to be adjusted to accommodate the new capabilities. Ultimately, the changes will be incorporated in future versions of GFSSP. This paper describes the development and validation of the new capability.
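The saturation-region calculation described above follows a standard thermodynamic recipe: given pressure and enthalpy, the quality is computed from the tabulated saturated-liquid and saturated-vapor enthalpies at that pressure, and any property is then a quality-weighted mix of the two saturation values. A minimal sketch, with table numbers that are approximate steam-table values for illustration, not REFPROP output:

```python
# pressure (kPa) -> (h_f, h_g): saturated liquid/vapor enthalpy (kJ/kg)
SAT_H = {100.0: (417.4, 2675.0), 200.0: (504.7, 2706.2)}
# pressure (kPa) -> (v_f, v_g): saturated specific volume (m^3/kg)
SAT_V = {100.0: (0.001043, 1.694), 200.0: (0.001061, 0.8857)}

def quality(p, h):
    """Vapor quality x from pressure and enthalpy inside the two-phase dome."""
    hf, hg = SAT_H[p]
    return (h - hf) / (hg - hf)

def sat_property(p, h, table):
    """Quality-weighted saturation property: (1-x)*liquid + x*vapor."""
    x = quality(p, h)
    lo, hi = table[p]
    return (1 - x) * lo + x * hi
```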

  20. Optoelectronic switch matrix as a look-up table for residue arithmetic.

    PubMed

    Macdonald, R I

    1987-10-01

    The use of optoelectronic matrix switches to perform look-up table functions in residue arithmetic processors is proposed. In this application, switchable detector arrays give the advantage of a greatly reduced requirement for optical sources by comparison with previous optoelectronic residue processors.

  1. Investigating a method for estimating direct nitrous oxide emissions from grazed pasture soils in New Zealand using NZ-DNDC.

    PubMed

    Giltrap, Donna L; Ausseil, Anne-Gaelle E; Thakur, Kailash P; Sutherland, M Anne

    2013-11-01

    In this study, we developed emission factor (EF) look-up tables for calculating the direct nitrous oxide (N2O) emissions from grazed pasture soils in New Zealand. Look-up tables of long-term average direct emission factors (and their associated uncertainties) were generated using multiple simulations of the NZ-DNDC model over a representative range of major soil, climate and management conditions occurring in New Zealand using 20 years of climate data. These EFs were then combined with national activity data maps to estimate direct N2O emissions from grazed pasture in New Zealand using 2010 activity data. The total direct N2O emissions using look-up tables were 12.7±12.1 Gg N2O-N (equivalent to using a national average EF of 0.70±0.67%). This agreed with the amount calculated using the New Zealand specific EFs (95% confidence interval 7.7-23.1 Gg N2O-N), although the relative uncertainty increased. The high uncertainties in the look-up table EFs were primarily due to the high uncertainty of the soil parameters within the selected soil categories. Uncertainty analyses revealed that the uncertainty in soil parameters contributed much more to the uncertainty in N2O emissions than the inter-annual weather variability. The effect of changes to fertiliser applications was also examined and it was found that for fertiliser application rates of 0-50 kg N/ha for sheep and beef and 60-240 kg N/ha for dairy the modelled EF was within ±10% of the value simulated using annual fertiliser application rates of 15 kg N/ha and 140 kg N/ha respectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    NASA Astrophysics Data System (ADS)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
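The emulation-prevention-byte removal that the BsPU accelerates in hardware is defined by the H.264/AVC syntax: a 0x03 byte is stripped whenever it follows two consecutive zero bytes in the raw byte sequence payload. A straightforward software sketch of that operation:

```python
def remove_epb(data: bytes) -> bytes:
    """Strip H.264/AVC emulation prevention bytes (0x03 after 0x00 0x00)."""
    out, zeros = bytearray(), 0
    for b in data:
        if zeros >= 2 and b == 0x03:
            zeros = 0                 # drop the EPB and reset the zero run
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)
```

The hardware version avoids the initial delay and repeated memory accesses of this byte-by-byte scan, which is the speed-up the abstract reports.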

  3. The Development of a Motor-Free Short-Form of the Wechsler Intelligence Scale for Children-Fifth Edition.

    PubMed

    Piovesana, Adina M; Harrison, Jessica L; Ducat, Jacob J

    2017-12-01

    This study aimed to develop a motor-free short-form of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) that allows clinicians to estimate the Full Scale Intelligence Quotients of youths with motor impairments. Using the reliabilities and intercorrelations of six WISC-V motor-free subtests, psychometric methodologies were applied to develop look-up tables for four Motor-free Short-form indices: Verbal Comprehension Short-form, Perceptual Reasoning Short-form, Working Memory Short-form, and a Motor-free Intelligence Quotient. Index-level discrepancy tables were developed using the same methods to allow clinicians to statistically compare visual, verbal, and working memory abilities. The short-form indices had excellent reliabilities ( r = .92-.97) comparable to the original WISC-V. This motor-free short-form of the WISC-V is a reliable alternative for the assessment of intellectual functioning in youths with motor impairments. Clinicians are provided with user-friendly look-up tables, index level discrepancy tables, and base rates, displayed similar to those in the WISC-V manuals to enable interpretation of assessment results.

  4. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    PubMed

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.

  5. Device and circuit analysis of a sub 20 nm double gate MOSFET with gate stack using a look-up-table-based approach

    NASA Astrophysics Data System (ADS)

    Chakraborty, S.; Dasgupta, A.; Das, R.; Kar, M.; Kundu, A.; Sarkar, C. K.

    2017-12-01

    In this paper, we explore the possibility of mapping devices designed in a TCAD environment to modeled versions developed in the Cadence Virtuoso environment using a look-up table (LUT) approach. Circuit simulation of newly designed devices in a TCAD environment is a very slow and tedious process involving complex scripting. Hence, the LUT-based modeling approach has been proposed as a faster and easier alternative in the Cadence environment. The LUTs are prepared by extracting data from the device characteristics obtained from device simulation in TCAD. A comparative study between the TCAD simulation and the LUT-based alternative is shown to demonstrate the accuracy of the modeled devices. Finally, the look-up table approach is used to evaluate the performance of circuits implemented using the 14 nm nMOSFET.
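
At simulation time, a LUT model of this kind amounts to interpolating tabulated device characteristics. A minimal sketch of bilinear interpolation in a hypothetical I_D(V_GS, V_DS) table (the axis values and currents below are invented, not from the paper):

```python
import bisect

def interp_axis(axis, x):
    """Return (lower index i, fractional position t) of x on a sorted axis,
    clamped so that axis[i] and axis[i + 1] always exist."""
    i = max(0, min(bisect.bisect_right(axis, x) - 1, len(axis) - 2))
    t = (x - axis[i]) / (axis[i + 1] - axis[i])
    return i, t

def id_lut(vgs_axis, vds_axis, table, vgs, vds):
    """Bilinear look-up of drain current: table[i][j] holds I_D at
    (vgs_axis[i], vds_axis[j])."""
    i, tu = interp_axis(vgs_axis, vgs)
    j, tv = interp_axis(vds_axis, vds)
    return ((1 - tu) * (1 - tv) * table[i][j]
            + tu * (1 - tv) * table[i + 1][j]
            + (1 - tu) * tv * table[i][j + 1]
            + tu * tv * table[i + 1][j + 1])
```

A denser grid trades memory for accuracy, which is the usual tuning knob in LUT-based device modeling.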

  6. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods.

    PubMed

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-02-20

    In this paper, we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into the N-point redundancy map according to the number of adjacent object points having the same 3D value. Based on this redundancy map, N-point principal fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, the number of object points involved in calculation of the CGH pattern can be dramatically reduced and, as a result, an increase in computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
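
The RLE step above can be pictured as grouping consecutive object points that share the same value into N-point runs; a toy 1-D sketch (the actual method operates on 3-D object data):

```python
from itertools import groupby

def runlength_map(points):
    """Group consecutive object points with equal values into (value, N)
    runs, a toy version of the N-point redundancy map of the RLE step."""
    return [(v, len(list(g))) for v, g in groupby(points)]
```

Each (value, N) run is then served by one N-point PFP rather than N separate 1-point calculations.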

  7. 76 FR 8990 - Hours of Service of Drivers; Availability of Supplemental Documents and Corrections to Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-16

    .... FMCSA used a look-up table for scaling the individual hours of work in the previous week. Look-up tables...). Column C (cells C93-137) presents the values scaled to our average work (52 hours per week of work) and... Evaluation to link the hours worked in the previous week to fatigue the following week. On January 28, 2011...

  8. Implementation of a fast digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.; Collins, Stuart A., Jr.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.

  9. Implementation of a fast digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic.

    PubMed

    Habiby, S F; Collins, S A

    1987-11-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
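
The position-coded look-up tables at the heart of this design replace arithmetic with table reads: in residue arithmetic each modulus gets its own small table, so every digit of a product is found independently in a single look-up. A software analogue (the moduli below are chosen purely for illustration):

```python
def residue_mul_tables(moduli):
    """One multiplication look-up table per modulus; the optical system
    stores the analogous tables as position-coded holograms."""
    return {m: [[(a * b) % m for b in range(m)] for a in range(m)]
            for m in moduli}

def rns_mul(x, y, tables):
    """Multiply two residue-coded numbers digit-wise by pure table look-up;
    x and y map each modulus to a residue."""
    return {m: tables[m][x[m]][y[m]] for m in tables}
```

Because the per-modulus digits carry no inter-digit dependencies, all look-ups can happen in parallel, which is what lets the optical version finish in effectively one light valve response time.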

  10. Computer Vision for Artificially Intelligent Robotic Systems

    NASA Astrophysics Data System (ADS)

    Ma, Chialo; Ma, Yung-Lung

    1987-04-01

    In this paper, an Acoustic Imaging Recognition System (AIRS) is introduced, which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up table method, which saves considerable calculation time and is practicable. The AIRS consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit, and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles, and intensity of the target we can determine the target's characteristics; all of these decisions are processed by the main control unit. In the pulse-echo signal processing unit we utilize the correlation method to overcome the limitation of short ultrasonic bursts, because a correlation system can transmit large time-bandwidth signals and obtain improved resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and converted into digital data by the μ-law coding method, and these data, together with the delay time T and the angle information θH, θV, are sent to the main control unit for further analysis. For the recognition process we use a dynamic look-up table method: first, several recognition pattern tables are set up, and then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process.
All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the main control unit, which also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model we use a narrow-beam transducer with an input voltage of 50 V peak-to-peak. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image library in the robot's memory. Index terms: acoustic system, supersonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quad tree, quad approach.
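
The μ-law coding mentioned for digitizing the correlator output is the standard companding curve; its continuous form, with the usual μ = 255 (an assumption here, since the abstract gives no value), is:

```python
import math

MU = 255  # standard μ-law parameter (assumed; the abstract gives no value)

def mu_law_encode(x):
    """Continuous μ-law companding of a sample x in [-1, 1], compressing
    large amplitudes so the signal can be quantized with fewer bits."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
```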

  11. Improved Performance Characteristics For Indium Antimonide Photovoltaic Detector Arrays Using A FET-Switched Multiplexing Technique

    NASA Astrophysics Data System (ADS)

    Ma, Yung-Lung; Ma, Chialo

    1987-03-01


  12. Image processing on the image with pixel noise bits removed

    NASA Astrophysics Data System (ADS)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image under the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
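
A gray-level look-up table transformation of the kind used here simply re-maps each pixel value through a 256-entry table; a minimal sketch:

```python
def apply_lut(image, lut):
    """Re-map every pixel of an 8-bit image (list of rows) through a
    256-entry gray-level look-up table."""
    return [[lut[v] for v in row] for row in image]

# Example table: contrast inversion.
INVERT = [255 - v for v in range(256)]
```

Windowing, gamma correction, and thresholding are all expressible as different choices of the same 256-entry table.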

  13. SU-E-T-169: Characterization of Pacemaker/ICD Dose in SAVI HDR Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalavagunta, C; Lasio, G; Yi, B

    2015-06-15

    Purpose: It is important to estimate the dose to a pacemaker (PM)/implantable cardioverter defibrillator (ICD) before undertaking accelerated partial breast treatment using high dose rate (HDR) brachytherapy. Kim et al. have reported HDR PM/ICD dose using a single-source balloon applicator. To the authors' knowledge, there has so far been no published PM/ICD dosimetry literature for the Strut Adjusted Volume Implant (SAVI, Cianna Medical, Aliso Viejo, CA). This study aims to fill this gap by generating a dose look-up table (LUT) to predict the maximum dose to the PM/ICD in SAVI HDR brachytherapy. Methods: CT scans for 3D dosimetric planning were acquired for four SAVI applicators (6-1-mini, 6-1, 8-1 and 10-1) expanded to their maximum diameter in air. The CT datasets were imported into the Elekta Oncentra TPS for planning, and each applicator was digitized in a multiplanar reconstruction window. A dose of 340 cGy was prescribed to the surface of a 1 cm expansion of the SAVI applicator cavity. Cartesian coordinates of the digitized applicator were determined in the treatment plan, leading to the generation of a dose distribution and a corresponding distance-dose prediction look-up table (LUT) for distances from 2 to 15 cm (6-1-mini) and 2 to 20 cm (10-1). The deviation between the LUT doses and the dose to the cardiac device in a clinical case was evaluated. Results: The distance-dose look-up tables were compared to a clinical SAVI plan, and the discrepancy between the maximum dose predicted by the LUT and the clinical plan was found to be in the range (-0.44%, 0.74%) of the prescription dose. Conclusion: The distance-dose look-up tables for SAVI applicators can be used to estimate the maximum dose to the ICD/PM, with potential usefulness for quick assessment of the dose to the cardiac device prior to applicator placement.
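
A distance-dose LUT of the kind generated here is consulted by interpolating between tabulated distances; a sketch with invented numbers (not the published SAVI doses):

```python
import bisect

def dose_at(distance_cm, lut):
    """Linearly interpolate a {distance_cm: dose_cGy} look-up table;
    the example values used in tests are illustrative only."""
    d = sorted(lut)
    i = max(0, min(bisect.bisect_left(d, distance_cm) - 1, len(d) - 2))
    t = (distance_cm - d[i]) / (d[i + 1] - d[i])
    return lut[d[i]] + t * (lut[d[i + 1]] - lut[d[i]])
```

Dose falls off steeply with distance, so a real table would use finely spaced nodes near the applicator; the interpolation logic is unchanged.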

  14. Retrieval of aerosol optical depth from surface solar radiation measurements using machine learning algorithms, non-linear regression and a radiative transfer-based look-up table

    NASA Astrophysics Data System (ADS)

    Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti

    2016-07-01

    In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. 
This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during the observation period.
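
The LUT method compared above inverts a table of radiative-transfer-simulated SSR values: find where the observed SSR falls between table nodes and interpolate the corresponding AOD. A sketch with invented node values (real tables would also be indexed by geometry and other state):

```python
def retrieve_aod(ssr_obs, lut):
    """Invert a radiative-transfer look-up table: lut is a list of
    (aod, simulated_ssr) nodes with SSR decreasing as AOD grows.
    Returns the AOD whose simulated SSR matches the observation,
    linearly interpolated between bracketing nodes."""
    for (a0, s0), (a1, s1) in zip(lut, lut[1:]):
        if s1 <= ssr_obs <= s0:
            t = (s0 - ssr_obs) / (s0 - s1)
            return a0 + t * (a1 - a0)
    # Observation outside the table: fall back to the nearest node.
    return min(lut, key=lambda p: abs(p[1] - ssr_obs))[0]
```

Note that the table is built for one assumed single scattering albedo, which is exactly the constraint the machine learning methods in the study avoid.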

  15. Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool

    NASA Astrophysics Data System (ADS)

    Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin

    2016-02-01

    The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems involve dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques become ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST) based on promising artificial intelligence techniques that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as a part of the development of the DST. A discrete event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the corresponding job order characteristics (inter-arrival time, due date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order characteristics will be mapped using an ANN inverse model.

  16. SU-E-T-275: Dose Verification in a Small Animal Image-Guided Radiation Therapy X-Ray Machine: A Dose Comparison between TG-61 Based Look-Up Table and MOSFET Method for Various Collimator Sizes.

    PubMed

    Rodrigues, A; Nguyen, G; Li, Y; Roy Choudhury, K; Kirsch, D; Das, S; Yoshizumi, T

    2012-06-01

    To verify the accuracy of TG-61 based dosimetry against MOSFET technology using a tissue-equivalent mouse phantom. A TG-61 based look-up table was verified against MOSFET measurements. The look-up table followed a TG-61 based commissioning and used a solid water block and radiochromic film. A tissue-equivalent mouse phantom (2 cm diameter, 8 cm length) was used for the MOSFET method. Detectors were placed in the phantom at the head and the center of the body. MOSFETs were calibrated in air with an ion chamber, and an f-factor was applied to derive the dose to tissue. In CBCT mode, the phantom was positioned such that the system isocenter coincided with the center of the MOSFET, with the active volume perpendicular to the beam. The absorbed dose was measured three times for each of seven different collimators. The exposure parameters were 225 kVp, 13 mA, and an exposure time of 20 s. For the 10 mm, 15 mm, and 20 mm circular collimators, the dose measured in the phantom was 4.3%, 2.7%, and 6% lower than the TG-61 based measurements, respectively. For the 10 × 10 mm, 20 × 20 mm, and 40 × 40 mm collimators, the dose difference was 4.7%, 7.7%, and 2.9%, respectively. The MOSFET data were systematically lower than the commissioning data. The dose difference is due to the increased scatter radiation in the solid water block relative to the dimensions of the mouse phantom, leading to an overestimation of the actual dose by the solid water block. The MOSFET method with a tissue-equivalent mouse phantom provides less labor-intensive, geometry-specific dosimetry, with dose tolerances of up to ±2.7%. © 2012 American Association of Physicists in Medicine.

  17. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.

  18. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. The pilot-guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples.
This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the pilot-guided method due to the gain control circuitry, but it does not have the real-time computational complexity of the blind estimation method. Each of these methods can be used to provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
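
The pilot-guided estimator described above has a direct software rendering: amplitude from the mean inner product with the known ±1 ASM chips, variance from the second moment minus the squared amplitude, and (following the text's definition of the combining ratio as a ratio between signal amplitude and noise variance) their quotient:

```python
def pilot_guided_estimate(received, asm_chips):
    """Pilot-guided ML estimates over known ±1 ASM chips: amplitude is the
    mean inner product, variance is E[y^2] minus the squared amplitude.
    Taking amplitude/variance as the combining ratio follows the text's
    description; a real system would average over several frames of ASMs."""
    n = len(asm_chips)
    amp = sum(y * s for y, s in zip(received, asm_chips)) / n
    var = sum(y * y for y in received) / n - amp * amp
    return amp, var, amp / var
```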

  19. a Mapping Method of Slam Based on Look up Table

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Li, J.; Wang, A.; Wang, J.

    2017-09-01

    In recent years, several V-SLAM (Visual Simultaneous Localization and Mapping) approaches have appeared, showing impressive reconstructions of the world. However, these maps are built with far more than the required information; this limitation comes from processing the whole of each key-frame. In this paper we present, for the first time, a mapping method based on a look-up table (LUT) for visual SLAM that can improve mapping effectively. Because this method relies on extracting features in each cell of a grid dividing the image, it can obtain a camera pose that is more representative of the whole key-frame. The tracking direction of a key-frame is obtained by counting the parallax directions of its feature points. The LUT stores, for each tracking direction, the numbers of the cells needed for mapping, which reduces redundant information in the key-frame and makes mapping more efficient. The results show that a better map with less noise is built in less than one-third of the time. We believe that the capacity of the LUT to build maps efficiently makes it a good choice for the community to investigate in scene reconstruction problems.

  20. Digital slip frequency generator and method for determining the desired slip frequency

    DOEpatents

    Klein, Frederick F.

    1989-01-01

    The output frequency of an electric power generator is kept constant with variable rotor speed by automatic adjustment of the excitation slip frequency. The invention features a digital slip frequency generator which provides sine and cosine waveforms from a look-up table, which are combined with real and reactive power output of the power generator.
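
Generating sine and cosine from one look-up table is the classic phase-accumulator (direct digital synthesis) scheme: each sample advances a phase index by a step proportional to the desired slip frequency, and cosine is read 90° ahead in the same table. A minimal sketch (table length and step are illustrative):

```python
import math

N = 256                      # look-up table length (illustrative)
SINE_LUT = [math.sin(2 * math.pi * k / N) for k in range(N)]
QUARTER = N // 4             # cos(x) = sin(x + 90 degrees)

def slip_waveforms(phase_step, samples):
    """Phase-accumulator read-out of sine and cosine: each sample advances
    the table index by phase_step, so the output frequency is
    (phase_step / N) times the sample rate."""
    acc, out = 0, []
    for _ in range(samples):
        out.append((SINE_LUT[acc % N], SINE_LUT[(acc + QUARTER) % N]))
        acc += phase_step
    return out
```

Changing `phase_step` on the fly is what lets such a generator track a varying rotor speed while holding the output frequency constant.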

  1. A shower look-up table to trace the dynamics of meteoroid streams and their sources

    NASA Astrophysics Data System (ADS)

    Jenniskens, Petrus

    2018-04-01

    Meteor showers are caused by meteoroid streams from comets (and some primitive asteroids). They trace the comet population and its dynamical evolution, warn of dangerous long-period comets that can pass close to Earth's orbit, outline volumes of space with a higher satellite impact probability, and define how meteoroids evolve in the interplanetary medium. Ongoing meteoroid orbit surveys have mapped these showers in recent years, but the surveys are now running up against a more and more complicated scene. The IAU Working List of Meteor Showers has reached 956 entries to be investigated (per March 1, 2018). The picture is even more complicated with the discovery that radar-detected streams are often different, or differently distributed, than video-detected streams. Complicating matters even more, some meteor showers are active over many months, during which their radiant position gradually changes, which makes the use of mean orbits as a proxy for a meteoroid stream's identity meaningless. The dispersion of the stream in space and time is important to that identity and contains much information about its origin and dynamical evolution. To make sense of the meteor shower zoo, a Shower Look-Up Table was created that captures this dispersion. The Shower Look-Up Table has enabled the automated identification of showers in the ongoing CAMS video-based meteoroid orbit survey, results of which are presented now online in near-real time at http://cams.seti.org/FDL/. Visualization tools have been built that depict the streams in a planetarium setting. Examples will be presented that sample the range of meteoroid streams that this look-up table describes. Possibilities for further dynamical studies will be discussed.

  2. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in the compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied to the N-LUT for the first time, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method were found to be reduced down to 86.95%, 86.53% and 34.99%, 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  3. Fixed-Base Comb with Window-Non-Adjacent Form (NAF) Method for Scalar Multiplication

    PubMed Central

    Seo, Hwajeong; Kim, Hyunjin; Park, Taehwan; Lee, Yeoncheol; Liu, Zhe; Kim, Howon

    2013-01-01

    Elliptic curve cryptography (ECC) is one of the most promising public-key techniques in terms of short key size and various crypto protocols. For this reason, many studies on the implementation of ECC on resource-constrained devices within a practical execution time have been conducted. To this end, we must focus on scalar multiplication, which is the most expensive operation in ECC. A number of studies have proposed pre-computation and advanced scalar multiplication using a non-adjacent form (NAF) representation, and more sophisticated approaches have employed a width-w NAF representation and a modified pre-computation table. In this paper, we propose a new pre-computation method in which zero occurrences are much more frequent than in previous methods. This method can be applied to ordinary group scalar multiplication, but it requires a large pre-computation table, so we combined the previous method with ours for practical purposes. This novel structure establishes a new feature that finely adjusts speed performance and table size, so we can customize the pre-computation table for our own purposes. Finally, we can establish a customized look-up table for embedded microprocessors. PMID:23881143
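
For concreteness, the classic NAF recoding that underlies these width-w methods (shown for w = 2; the paper's contribution is a different pre-computation table, which this does not reproduce):

```python
def naf(k):
    """Non-adjacent form of a positive integer: digits in {-1, 0, 1},
    least significant first, with no two adjacent non-zero digits."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)      # odd: pick +1 or -1 so (k - d) % 4 == 0
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits
```

Because roughly two thirds of NAF digits are zero, a double-and-add scalar multiplication driven by these digits skips most additions, and wider windows trade a larger pre-computed point table for even fewer additions.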

  4. Look-up-table approach for leaf area index retrieval from remotely sensed data based on scale information

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaohua; Li, Chuanrong; Tang, Lingli

    2018-03-01

    Leaf area index (LAI) is a key structural characteristic of vegetation and plays a significant role in global change research. Several methods and types of remotely sensed data have been evaluated for LAI estimation. This study aimed to evaluate the suitability of the look-up-table (LUT) approach for crop LAI retrieval from Satellite Pour l'Observation de la Terre (SPOT)-5 data and to establish an LUT approach for LAI inversion based on scale information. The LAI inversion result was validated by in situ LAI measurements, indicating that the LUT generated with the PROSAIL (PROSPECT leaf optical properties spectra + SAIL scattering by arbitrarily inclined leaves) model was suitable for crop LAI estimation, with a root mean square error (RMSE) of ~0.31 m2/m2 and a determination coefficient (R2) of 0.65. The scale effect of crop LAI was analyzed based on Taylor expansion theory, indicating that when the SPOT data were aggregated to 200 × 200 pixels, the relative error was significant at 13.7%. Finally, an LUT method integrated with scale information was proposed, improving the inversion accuracy to an RMSE of 0.20 m2/m2 and an R2 of 0.83.
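
An LUT inversion of this kind searches a table of model-simulated reflectance spectra for the entry closest to the observation; a sketch with invented numbers standing in for PROSAIL output:

```python
def lai_from_lut(obs, lut):
    """Nearest-spectrum LUT inversion: lut maps candidate LAI values to
    simulated reflectance vectors; return the LAI whose spectrum minimizes
    RMSE against the observed bands."""
    def rmse(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    return min(lut, key=lambda lai: rmse(obs, lut[lai]))
```

Real retrievals typically average the best several candidates rather than taking the single minimum, but the table search is the same.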

  5. Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs

    PubMed Central

    Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel

    2012-01-01

    Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023

  6. ICL: The Image Composition Language

    NASA Technical Reports Server (NTRS)

    Foley, James D.; Kim, Won Chul

    1986-01-01

    The Image Composition Language (ICL) provides a convenient way for programmers of interactive graphics application programs to define how the video look-up table of a raster display system is to be loaded. The ICL allows one or several images stored in the frame buffer to be combined in a variety of ways. The ICL treats these images as variables, and provides arithmetic, relational, and conditional operators to combine the images, scalar variables, and constants in image composition expressions. The objective of ICL is to give programmers a simple way to compose images and to relieve the tedium usually associated with loading the video look-up table to obtain the desired results.

  7. A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen

    In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.

  8. Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation

    NASA Astrophysics Data System (ADS)

    Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.

    2018-04-01

    Aiming at the problem that targets may appear at different orientations in unmanned aerial vehicle (UAV) images, this paper studies target detection based on rotation-invariant features and proposes a RIFF (Rotation-Invariant Fast Features) method accelerated by a look-up table and polar coordinates for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of standard RIFF, while the operating efficiency is greatly improved.

  9. Design of Cancelable Palmprint Templates Based on Look Up Table

    NASA Astrophysics Data System (ADS)

    Qiu, Jian; Li, Hengjian; Dong, Jiwen

    2018-03-01

    A novel cancelable palmprint template generation scheme is proposed in this paper. Firstly, a Gabor filter and a chaotic matrix are used to extract palmprint features. The features are then arranged into a row vector and divided into equal-sized blocks. These blocks are converted to the corresponding decimals and mapped to look-up tables, forming the final cancelable palmprint features based on the selected check bits. Finally, collaborative representation based classification with regularized least squares is used for classification. Experimental results on the Hong Kong PolyU Palmprint Database verify that the proposed cancelable templates achieve very high performance and security levels, while also satisfying the needs of real-time applications.

  10. Efficient generation of holographic news ticker in holographic 3DTV

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-08-01

    A news ticker is used to show breaking news or news headlines in conventional 2-D broadcasting systems. Breaking news must be created quickly, because the information should be sent without delay, and news tickers will remain even if holographic 3-D broadcasting systems are introduced in the future. Several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method, but these methods either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT size and no loss of computational speed. We therefore propose a method to efficiently generate a holographic news ticker for holographic 3DTV or 3-D movies using the N-LUT method. The proposed method consists of five steps: construction of the LUT for each character, extraction of the characters in the news ticker, generation and shifting of the CGH pattern for the news ticker using the per-character LUT, composition of the hologram pattern for the 3-D video with that of the news ticker, and reconstruction of the holographic 3-D video with the news ticker. To verify the proposed method, a car moving in front of a castle was used as the 3-D video and the words 'HOLOGRAM CAPTION GENERATOR' were used as the news ticker. The simulation results confirmed the feasibility of the proposed method for fast generation of CGH patterns for holographic captions.

  11. Modeling radiative transfer with the doubling and adding approach in a climate GCM setting

    NASA Astrophysics Data System (ADS)

    Lacis, A. A.

    2017-12-01

    The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, primarily due to computational cost. The accurate multiple-scattering methods that are available are far too costly for climate GCM applications, while two-stream-type radiative transfer approximations may be fast enough but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method used in the GISS climate GCM, an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angles.

  12. Table-driven image transformation engine algorithm

    NASA Astrophysics Data System (ADS)

    Shichman, Marc

    1993-04-01

    A high-speed image transformation engine (ITE) was designed, and a prototype built, for use in a generic electronic light table and in image perspective transformation application code. The ITE takes any linear transformation, breaks it into two passes, and resamples the image appropriately for each pass. The system performance is achieved by driving the engine with a set of look-up tables, computed at start-up time, for calculating pixel output contributions. Anti-aliasing is done automatically in the image resampling process. Operations such as multiplications and trigonometric functions are minimized. This algorithm can be used for texture mapping, image perspective transformation, electronic light tables, and virtual reality.

  13. Inductive System for Reliable Magnesium Level Detection in a Titanium Reduction Reactor

    NASA Astrophysics Data System (ADS)

    Krauter, Nico; Eckert, Sven; Gundrum, Thomas; Stefani, Frank; Wondrak, Thomas; Frick, Peter; Khalilov, Ruslan; Teimurazov, Andrei

    2018-05-01

    The determination of the magnesium level in a titanium reduction retort by inductive methods is often hampered by the formation of titanium sponge rings, which disturb the propagation of electromagnetic signals between excitation and receiver coils. We present a new method for the reliable identification of the magnesium level which explicitly takes into account the presence of sponge rings with unknown geometry and conductivity. The inverse problem is solved by a look-up-table method, based on the solution of the inductive forward problem for several tens of thousands of parameter combinations.

  14. New realisation of Preisach model using adaptive polynomial approximation

    NASA Astrophysics Data System (ADS)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasingly exacting accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model for describing hysteresis and can be represented by an infinite but countable set of first-order reversal curves (FORCs). The usage of look-up tables is one way to approach the CPM in actual practice; the data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using least-squares approximation or an adaptive identification algorithm, enabling accurate tracking of hysteresis model parameters.
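
    A toy sketch of the memory trade-off (the sampled branch and polynomial degree are invented, not the article's FORC data): a least-squares polynomial fit replaces dozens of stored table samples with a handful of coefficients.

```python
import numpy as np

# Fifty "measured" samples of one first-order reversal curve (hypothetical).
x = np.linspace(0.0, 1.0, 50)
y = 0.2 + 1.5 * x - 0.7 * x ** 2

# Least-squares fit: three coefficients now stand in for the 50-entry table.
coeffs = np.polyfit(x, y, deg=2)        # highest power first
y_hat = np.polyval(coeffs, x)
max_err = np.max(np.abs(y_hat - y))     # reconstruction error of the fit
```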

  15. Radiometry simulation within the end-to-end simulation tool SENSOR

    NASA Astrophysics Data System (ADS)

    Wiest, Lorenz; Boerner, Anko

    2001-02-01

    An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor), and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
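
    A minimal sketch of LUT-plus-multilinear-interpolation radiometry (the grid axes and radiance values below are invented stand-ins for MODTRAN output, and the cache layer is omitted): radiance is pre-computed on a coarse parameter grid and bilinearly interpolated at query time.

```python
import numpy as np

# Hypothetical 2-D LUT: radiance over visibility (km) and solar zenith (deg).
vis_grid = np.array([5.0, 10.0, 20.0, 40.0])
sza_grid = np.array([0.0, 30.0, 60.0])
radiance = np.array([[9.0, 8.0, 5.0],
                     [7.0, 6.2, 4.0],
                     [6.0, 5.3, 3.4],
                     [5.5, 4.9, 3.1]])   # radiance[i, j] at (vis_i, sza_j)

def interp2(vis, sza):
    """Bilinear (2-D multilinear) interpolation in the look-up table."""
    i = int(np.clip(np.searchsorted(vis_grid, vis) - 1, 0, len(vis_grid) - 2))
    j = int(np.clip(np.searchsorted(sza_grid, sza) - 1, 0, len(sza_grid) - 2))
    tv = (vis - vis_grid[i]) / (vis_grid[i + 1] - vis_grid[i])
    ts = (sza - sza_grid[j]) / (sza_grid[j + 1] - sza_grid[j])
    lo = radiance[i, j] * (1 - tv) + radiance[i + 1, j] * tv
    hi = radiance[i, j + 1] * (1 - tv) + radiance[i + 1, j + 1] * tv
    return lo * (1 - ts) + hi * ts
```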

  16. A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static Random Access Memories

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro

    2006-04-01

    A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in the 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64 K-bit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multiphase pseudo-asynchronous operation with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. This chip operates at 33 MHz in 8-LUT cascades at 122 mW. Benchmark results show that it achieves performance comparable to field-programmable gate arrays (FPGAs).

  17. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients, reducing each block by 2/3 and yielding a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients, while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG, with quality equivalent to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.

  18. An Aerosol Extinction-to-Backscatter Ratio Database Derived from the NASA Micro-Pulse Lidar Network: Applications for Space-based Lidar Observations

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Spinhime, James D.; Berkoff, Timothy A.; Holben, Brent; Tsay, Si-Chee; Bucholtz, Anthony

    2004-01-01

    Backscatter lidar signals are a function of both backscatter and extinction. Hence, these lidar observations alone cannot separate the two quantities. The aerosol extinction-to-backscatter ratio, S, is the key parameter required to accurately retrieve extinction and optical depth from backscatter lidar observations of aerosol layers. S is commonly defined as 4π divided by the product of the single-scatter albedo and the phase function at a 180-degree scattering angle. Values of S for different aerosol types are not well known, and are even more difficult to determine when aerosols become mixed. Here we present a new lidar-sunphotometer S database derived from observations of the NASA Micro-Pulse Lidar Network (MPLNET). MPLNET is a growing worldwide network of eye-safe backscatter lidars co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Values of S for different aerosol species and geographic regions will be presented. A framework for constructing an S look-up table will be shown. Look-up tables of S are needed to calculate aerosol extinction and optical depth from space-based lidar observations in the absence of co-located AOD data. Applications for using the new S look-up table to reprocess aerosol products from NASA's Geoscience Laser Altimeter System (GLAS) will be discussed.

  19. Estimating effective dose to pediatric patients undergoing interventional radiology procedures using anthropomorphic phantoms and MOSFET dosimeters.

    PubMed

    Miksys, Nelson; Gordon, Christopher L; Thomas, Karen; Connolly, Bairbre L

    2010-05-01

    The purpose of this study was to estimate the effective doses received by pediatric patients during interventional radiology procedures and to present those doses in "look-up tables" standardized according to minute of fluoroscopy and frame of digital subtraction angiography (DSA). Organ doses were measured with metal oxide semiconductor field effect transistor (MOSFET) dosimeters inserted within three anthropomorphic phantoms, representing children at ages 1, 5, and 10 years, at locations corresponding to radiosensitive organs. The phantoms were exposed to mock interventional radiology procedures of the head, chest, and abdomen using posteroanterior and lateral geometries, varying magnification, and fluoroscopy or DSA exposures. Effective doses were calculated from organ doses recorded by the MOSFET dosimeters and are presented in look-up tables according to the different age groups. The largest effective dose burden for fluoroscopy was recorded for posteroanterior and lateral abdominal procedures (0.2-1.1 mSv/min of fluoroscopy), whereas procedures of the head resulted in the lowest effective doses (0.02-0.08 mSv/min of fluoroscopy). DSA exposures of the abdomen imparted higher doses (0.02-0.07 mSv/DSA frame) than did those involving the head and chest. Patient doses during interventional procedures vary significantly depending on the type of procedure. User-friendly look-up tables may provide a helpful tool for health care providers in estimating effective doses for an individual procedure.
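
    The published tables themselves are not reproduced in the abstract (only ranges are given), but their use reduces to a rate look-up per age group and body region; the sketch below uses invented, illustrative entries:

```python
# Hypothetical look-up tables, mSv per minute of fluoroscopy and per DSA
# frame, keyed by (age group, region).  Entries are illustrative only,
# not the published values.
FLUORO_MSV_PER_MIN = {("1y", "abdomen"): 0.9, ("1y", "head"): 0.05}
DSA_MSV_PER_FRAME  = {("1y", "abdomen"): 0.05, ("1y", "head"): 0.02}

def effective_dose_msv(age, region, fluoro_minutes, dsa_frames):
    """Estimate effective dose from the standardized per-unit table entries."""
    return (FLUORO_MSV_PER_MIN[(age, region)] * fluoro_minutes
            + DSA_MSV_PER_FRAME[(age, region)] * dsa_frames)
```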

  20. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50-degree range.

  1. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In backward-compatible HDR image/video compression, it is a general approach to reconstruct HDR from compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared with the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
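
    The pre-calculation idea can be sketched as follows (a hypothetical prefix-sum variant, not the authors' exact tables): with cumulative sums of x**k, x**k * y, and y**2 pre-computed once, every entry of each segment's normal equations is a difference of two table values, so each candidate pivot costs O(1) regardless of segment length.

```python
import numpy as np

def best_pivot(x, y):
    """Exhaustive pivot search for a 2-piece quadratic fit, O(1) per pivot."""
    n = len(x)
    # Prefix tables: S[k][i] = sum(x[:i]**k), T[k][i] = sum(x[:i]**k * y[:i]).
    S = [np.concatenate(([0.0], np.cumsum(x ** k))) for k in range(5)]
    T = [np.concatenate(([0.0], np.cumsum(x ** k * y))) for k in range(3)]
    Sy2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def seg_sse(lo, hi):
        # Normal equations A c = b for a quadratic on x[lo:hi], assembled
        # purely from table differences.
        A = np.array([[S[i + j][hi] - S[i + j][lo] for j in range(3)]
                      for i in range(3)])
        b = np.array([T[i][hi] - T[i][lo] for i in range(3)])
        c = np.linalg.solve(A, b)
        return (Sy2[hi] - Sy2[lo]) - c @ b   # SSE = y'y - c'(X'y)

    # Keep at least 3 points per segment so each quadratic is determined.
    return min(range(3, n - 2), key=lambda p: seg_sse(0, p) + seg_sse(p, n))
```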

  2. Fuzzy Rule Suram for Wood Drying

    NASA Astrophysics Data System (ADS)

    Situmorang, Zakarias

    2017-12-01

    An implementation of fuzzy rules must use a look-up table for defuzzification; the look-up table is what drives the plant actuator with the defuzzified values. Rule Suram, based on fuzzy logic with ambient temperature and ambient humidity as weather variables, is implemented here for a wood drying process. The membership functions of the state variables are represented by the error value and the change of error, with triangular and trapezoidal maps. The analysis yields 4 fuzzy rules covering 81 conditions to control the output of the system, constructed over the possible weather and air conditions. The controller is used to minimize the electric energy consumed by the heater. One cycle of the drying schedule is a series of chamber conditions for processing a given wood species.

  3. Using false colors to protect visual privacy of sensitive content

    NASA Astrophysics Data System (ADS)

    Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj

    2015-03-01

    Many privacy protection tools have been proposed for preserving privacy, but the tools for protection of visual privacy available today lack either all or some of the important properties expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior detection of faces or other sensitive regions and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on one of which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online study) assessments using faces from the FERET public dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
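
    A minimal sketch of the reversible look-up step (a keyed gray-level permutation stands in for the paper's DEF and RBS color scales, which are fixed published palettes):

```python
import random

rng = random.Random(42)          # the key; encrypting it secures the mapping
forward = list(range(256))
rng.shuffle(forward)             # bijective false-color LUT over gray levels
inverse = [0] * 256
for gray, false in enumerate(forward):
    inverse[false] = gray        # invert the permutation for recovery

def protect(pixels):
    """Apply the false-color LUT to a sequence of 8-bit values."""
    return [forward[p] for p in pixels]

def recover(pixels):
    """Invert the LUT, restoring the original values exactly."""
    return [inverse[p] for p in pixels]
```

    Because the mapping is a bijection, protection is fully reversible for key holders, which is the "reversible" property claimed above.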

  4. Assessment of the Broadleaf Crops Leaf Area Index Product from the Terra MODIS Instrument

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Hu, Jiannan; Huang, Dong; Yang, Wenze; Zhang, Ping; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.

    2005-01-01

    The first significant processing of Terra MODIS data, called Collection 3, covered the period from November 2000 to December 2002. The Collection 3 leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) products for broadleaf crops exhibited three anomalies: (a) high LAI values during the peak growing season, (b) differences in LAI seasonality between the radiative transfer-based main algorithm and the vegetation index-based back-up algorithm, and (c) too few retrievals from the main algorithm during the summer period when the crops are at full flush. The cause of these anomalies is a mismatch between reflectances modeled by the algorithm and MODIS measurements. Therefore, the look-up tables accompanying the algorithm were revised and implemented in Collection 4 processing. The main algorithm with the revised look-up tables generated retrievals for over 80% of the pixels with valid data. Retrievals from the back-up algorithm, although few, should be used with caution, as they are generated from surface reflectances with high uncertainties.

  5. Zone plate method for electronic holographic display using resolution redistribution technique.

    PubMed

    Takaki, Yasuhiro; Nakamura, Junya

    2011-07-18

    The resolution redistribution (RR) technique can increase the horizontal viewing-zone angle and screen size of electronic holographic display. The present study developed a zone plate method that would reduce hologram calculation time for the RR technique. This method enables calculation of an image displayed on a spatial light modulator by performing additions of the zone plates, while the previous calculation method required performing the Fourier transform twice. The derivation and modeling of the zone plate are shown. In addition, the look-up table approach was introduced for further reduction in computation time. Experimental verification using a holographic display module based on the RR technique is presented.

  6. Experimental method of in-vivo dosimetry without build-up device on the skin for external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Jeon, Hosang; Nam, Jiho; Lee, Jayoung; Park, Dahl; Baek, Cheol-Ha; Kim, Wontaek; Ki, Yongkan; Kim, Dongwon

    2015-06-01

    Accurate dose delivery is crucial to the success of modern radiotherapy. To evaluate the dose actually delivered to patients, in-vivo dosimetry (IVD) is generally performed during radiotherapy to measure the entrance doses. In IVD, a build-up device should be placed on top of the in-vivo dosimeter to satisfy the electron equilibrium condition. However, a build-up device made of tissue-equivalent material or metal may perturb dose delivery to the patient, and requires an additional laborious and time-consuming process. We developed a novel IVD method using a look-up table of conversion ratios instead of a build-up device. We validated this method through a Monte Carlo simulation and 31 clinical trials. The mean error of clinical IVD is 3.17% (standard deviation: 2.58%), which is comparable to that of conventional IVD methods. Moreover, the required time was greatly reduced, so the efficiency of IVD could be improved for both patients and therapists.

  7. Digital intermediate frequency QAM modulator using parallel processing

    DOEpatents

    Pao, Hsueh-Yuan [Livermore, CA]; Tran, Binh-Nien [San Ramon, CA]

    2008-05-27

    The digital intermediate frequency (IF) modulator applies to various modulation types and offers a simple, low-cost method to implement a high-speed digital IF modulator using field programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up tables (LUTs). The high-speed input data stream is processed in parallel using the corresponding LUTs, which reduces the required main processing speed and allows the use of low-cost FPGAs.
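
    A software sketch of the ROM-LUT idea (parameters invented: 4 samples per symbol at an IF that advances 90 degrees per sample, 16-QAM levels): every possible symbol's modulated samples I*cos(wn) - Q*sin(wn) are pre-computed, so modulation at run time is a pure table read.

```python
import math

SAMPLES_PER_SYMBOL = 4
LEVELS = (-3, -1, 1, 3)          # 16-QAM I and Q amplitudes

# Pre-computed "ROM": modulated IF samples for every (I, Q) symbol.
LUT = {
    (I, Q): [I * math.cos(math.pi / 2 * n) - Q * math.sin(math.pi / 2 * n)
             for n in range(SAMPLES_PER_SYMBOL)]
    for I in LEVELS for Q in LEVELS
}

def modulate(symbols):
    """Concatenate pre-computed carrier samples -- no run-time multipliers."""
    out = []
    for sym in symbols:
        out.extend(LUT[sym])
    return out
```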

  8. Quantitative Analysis of First-Pass Contrast-Enhanced Myocardial Perfusion Multidetector CT Using a Patlak Plot Method and Extraction Fraction Correction During Adenosine Stress

    NASA Astrophysics Data System (ADS)

    Ichihara, Takashi; George, Richard T.; Silva, Caterina; Lima, Joao A. C.; Lardo, Albert C.

    2011-02-01

    The purpose of this study was to develop a quantitative method for myocardial blood flow (MBF) measurement that can derive accurate myocardial perfusion measurements from dynamic multidetector computed tomography (MDCT) images by using a compartment model for calculating the first-order transfer constant (K1) with correction for the capillary transit extraction fraction (E). Six canine models of left anterior descending (LAD) artery stenosis were prepared and underwent first-pass contrast-enhanced MDCT perfusion imaging during adenosine infusion (0.14-0.21 mg/kg/min). K1, the first-order transfer constant from left ventricular (LV) blood to myocardium, was measured using the Patlak plot method applied to time-attenuation curve data of the LV blood pool and myocardium. The results were compared against microsphere MBF measurements, and the extraction fraction of the contrast agent was calculated. K1 is related to the regional MBF as K1 = EF, where E = 1 - exp(-PS/F), PS is the permeability-surface area product, and F is myocardial flow. Based on the above relationship, a look-up table from K1 to MBF can be generated, and Patlak plot-derived K1 values can be converted to the calculated MBF. The calculated MBF and microsphere MBF showed a strong linear association. The extraction fraction in dogs as a function of flow (F) was E = 1 - exp(-(0.2532F + 0.7871)/F). Regional MBF can be measured accurately using the Patlak plot method based on a compartment model and a look-up table with extraction fraction correction from K1 to MBF.
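
    The K1-to-MBF table can be sketched directly from the relations quoted above (grid spacing and range are assumptions; the extraction-fraction constants are the ones reported for dogs): tabulate K1 = E(F)*F on a flow grid, then invert by linear interpolation.

```python
import math

def extraction_fraction(F):
    """Canine E(F) from the study: E = 1 - exp(-(0.2532*F + 0.7871)/F)."""
    return 1.0 - math.exp(-(0.2532 * F + 0.7871) / F)

# Forward look-up table: K1 as a function of flow (mL/g/min).  K1(F) is
# strictly increasing here, so the table can be inverted unambiguously.
flows = [0.1 + 0.05 * i for i in range(120)]
k1s = [extraction_fraction(F) * F for F in flows]

def mbf_from_k1(k1):
    """Convert a Patlak-derived K1 to MBF by inverse table look-up."""
    for i in range(len(flows) - 1):
        if k1s[i] <= k1 <= k1s[i + 1]:
            t = (k1 - k1s[i]) / (k1s[i + 1] - k1s[i])
            return flows[i] + t * (flows[i + 1] - flows[i])
    raise ValueError("K1 outside table range")
```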

  9. Identification of sea ice types in spaceborne synthetic aperture radar data

    NASA Technical Reports Server (NTRS)

    Kwok, Ronald; Rignot, Eric; Holt, Benjamin; Onstott, R.

    1992-01-01

    This study presents an approach for identification of sea ice types in spaceborne SAR image data. The unsupervised classification approach involves cluster analysis for segmentation of the image data followed by cluster labeling based on previously defined look-up tables containing the expected backscatter signatures of different ice types measured by a land-based scatterometer. Extensive scatterometer observations and experience accumulated in field campaigns during the last 10 yr were used to construct these look-up tables. The classification approach, its expected performance, the dependence of this performance on radar system performance, and expected ice scattering characteristics are discussed. Results using both aircraft and simulated ERS-1 SAR data are presented and compared to limited field ice property measurements and coincident passive microwave imagery. The importance of an integrated postlaunch program for the validation and improvement of this approach is discussed.

  10. Correlation and prediction of dynamic human isolated joint strength from lean body mass

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Hasson, Scott M.; Aldridge, Ann M.; Maida, James C.; Woolford, Barbara J.

    1992-01-01

    A relationship between a person's lean body mass and the maximum torque that can be produced with each isolated joint of the upper extremity was investigated. The maximum dynamic isolated joint torque (upper extremity) of 14 subjects was collected using a dynamometer multi-joint testing unit. These data were reduced to a table of coefficients of second-degree polynomials, computed using a least-squares regression method. All the coefficients were then organized into look-up tables, a compact and convenient storage/retrieval mechanism for the data set. Data for each joint, direction, and velocity were normalized with respect to that joint's average and merged into files (one for each curve for a particular joint). Regression was performed on each one of these files to derive a table of normalized population curve coefficients for each joint axis, direction, and velocity. In addition, a regression table which included all upper extremity joints was built, relating average torque to lean body mass for an individual. These two tables are the basis of the regression model, which allows the prediction of dynamic isolated joint torques from an individual's lean body mass.
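
    The two-table scheme can be sketched as follows (all coefficients invented for illustration; the real tables come from the dynamometer regressions): a normalized population curve per (joint, direction, velocity) is scaled by the subject's mean torque predicted from lean body mass.

```python
# Hypothetical normalized torque curves, stored as percent of the joint's
# mean torque: a*angle**2 + b*angle + c, keyed by (joint, direction, deg/s).
CURVES = {("elbow", "flexion", 60): (-0.004, 0.6, 10.0)}

# Hypothetical regression of mean torque (N*m) on lean body mass (kg).
LBM_SLOPE, LBM_INTERCEPT = 0.9, 2.0

def predict_torque(joint, direction, velocity, angle_deg, lean_body_mass):
    """Scale the normalized curve by the LBM-predicted mean torque."""
    a, b, c = CURVES[(joint, direction, velocity)]
    normalized_pct = a * angle_deg ** 2 + b * angle_deg + c
    mean_torque = LBM_SLOPE * lean_body_mass + LBM_INTERCEPT
    return normalized_pct / 100.0 * mean_torque
```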

  11. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables to reduce the computational complexity of the field calculations. A novel technique for occlusion culling with little additional computation cost is also introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH, a speedup factor of more than 2,500 compared with the ray-tracing method can be achieved without hardware acceleration.

  12. Heuristic Modeling for TRMM Lifetime Predictions

    NASA Technical Reports Server (NTRS)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
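The spreadsheet look-up can be sketched as a small 2-D table indexed by ballistic coefficient and solar flux index, feeding a simple per-maneuver fuel model. All grid points and values below are illustrative placeholders, not TRMM data:

```python
# Maneuver frequency per month tabulated against ballistic coefficient and
# solar flux index; nearest grid node is selected, as a spreadsheet might.
BALLISTIC = [50.0, 100.0, 150.0]       # ballistic coefficient grid (kg/m^2)
FLUX = [70.0, 150.0, 230.0]            # F10.7 solar flux index grid
MANEUVERS_PER_MONTH = [                # rows follow BALLISTIC, cols follow FLUX
    [1.0, 3.0, 6.0],
    [0.5, 1.5, 3.0],
    [0.3, 1.0, 2.0],
]

def maneuver_frequency(bc, flux):
    i = min(range(len(BALLISTIC)), key=lambda k: abs(BALLISTIC[k] - bc))
    j = min(range(len(FLUX)), key=lambda k: abs(FLUX[k] - flux))
    return MANEUVERS_PER_MONTH[i][j]

def fuel_per_month(bc, flux, kg_per_maneuver):
    # simple engine model: fixed fuel cost per maneuver
    return maneuver_frequency(bc, flux) * kg_per_maneuver
```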

  13. A look-up-table digital predistortion technique for high-voltage power amplifiers in ultrasonic applications.

    PubMed

    Gao, Zheng; Gui, Ping

    2012-07-01

    In this paper, we present a digital predistortion technique to improve the linearity and power efficiency of a high-voltage class-AB power amplifier (PA) for ultrasound transmitters. The system is composed of a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and a field-programmable gate array (FPGA) in which the digital predistortion (DPD) algorithm is implemented. The DPD algorithm updates the error, which is the difference between the ideal signal and the attenuated distorted output signal, in the look-up table (LUT) memory during each cycle of a sinusoidal signal using the least-mean-square (LMS) algorithm. On the next signal cycle, the error data are used to equalize the signal with negative harmonic components to cancel the amplifier's nonlinear response. The algorithm also includes a linear interpolation method applied to the windowed sinusoidal signals for the B-mode and Doppler modes. The measurement test bench uses an arbitrary function generator as the DAC to generate the input signal, an oscilloscope as the ADC to capture the output waveform, and software to implement the DPD algorithm. The measurement results show that the proposed system is able to reduce the second-order harmonic distortion (HD2) by 20 dB and the third-order harmonic distortion (HD3) by 14.5 dB, while at the same time improving the power efficiency by 18%.
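The cycle-by-cycle LMS update of the predistortion LUT can be sketched in a few lines. The cubic-compression amplifier model, table size, and step size `mu` below are invented for illustration; they stand in for the paper's class-AB PA and FPGA implementation:

```python
import math

N = 64                      # LUT entries, one per phase bin of the sine cycle
mu = 0.2                    # LMS step size (illustrative)
lut = [0.0] * N             # additive correction stored per entry

def pa(x):
    # toy memoryless amplifier: unity gain with cubic compression
    return x - 0.1 * x ** 3

for _cycle in range(200):   # repeat over successive signal cycles
    for n in range(N):
        ideal = math.sin(2 * math.pi * n / N)
        out = pa(ideal + lut[n])          # drive the PA with a predistorted sample
        lut[n] += mu * (ideal - out)      # LMS error update into the LUT
```

After convergence the predistorted drive cancels the cubic term, so the amplifier output tracks the ideal sine to within numerical tolerance.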

  14. Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

The software interpolates tabular data, such as equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
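The look-up-and-interpolate step can be sketched with barycentric coordinates, assuming a naive linear scan in place of the package's actual triangle search (function names are illustrative):

```python
# Locate the triangle containing point p via barycentric coordinates, then
# linearly blend the nodal values. Real packages use spatial search
# structures; the scan here is deliberately naive.
def barycentric(p, a, b, c):
    (x, y), (xa, ya), (xb, yb), (xc, yc) = p, a, b, c
    det = (yb - yc) * (xa - xc) + (xc - xb) * (ya - yc)
    l1 = ((yb - yc) * (x - xc) + (xc - xb) * (y - yc)) / det
    l2 = ((yc - ya) * (x - xc) + (xa - xc) * (y - yc)) / det
    return l1, l2, 1.0 - l1 - l2

def interpolate(p, triangles, values):
    # triangles: list of (a, b, c) vertex coordinates; values: nodal n-tuples
    for (a, b, c), (va, vb, vc) in zip(triangles, values):
        l1, l2, l3 = barycentric(p, a, b, c)
        if min(l1, l2, l3) >= -1e-12:        # point lies in this triangle
            return l1 * va + l2 * vb + l3 * vc
    raise ValueError("point outside the tabulated grid")
```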

  15. TILTING TABLE AREA, PDP ROOM, LEVEL +27’, LOOKING SOUTHWEST, SHOWING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    TILTING TABLE AREA, PDP ROOM, LEVEL +27’, LOOKING SOUTHWEST, SHOWING TILTING TABLE, MARKED BY WHITE ELECTRICAL CORD - Physics Assembly Laboratory, Area A/M, Savannah River Site, Aiken, Aiken County, SC

  16. Methods for comparative evaluation of propulsion system designs for supersonic aircraft

    NASA Technical Reports Server (NTRS)

    Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.

    1976-01-01

The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The method was found to compare well with the more detailed analyses.

  17. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    PubMed Central

    Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. 
We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930

  18. TILTING TABLE AREA, PDP ROOM, LEVEL +27’, LOOKING NORTHWEST. TILTING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    TILTING TABLE AREA, PDP ROOM, LEVEL +27’, LOOKING NORTHWEST. TILTING TABLE MARKED BY WHITE ELECTRICAL CORD IN LOWER LEFT CENTER - Physics Assembly Laboratory, Area A/M, Savannah River Site, Aiken, Aiken County, SC

  19. Self-Contained Avionics Sensing and Flight Control System for Small Unmanned Aerial Vehicle

    NASA Technical Reports Server (NTRS)

    Ingham, John C. (Inventor); Shams, Qamar A. (Inventor); Logan, Michael J. (Inventor); Fox, Robert L. (Inventor); Fox, legal representative, Melanie L. (Inventor); Kuhn, III, Theodore R. (Inventor); Babel, III, Walter C. (Inventor); Fox, legal representative, Christopher L. (Inventor); Adams, James K. (Inventor); Laughter, Sean A. (Inventor)

    2011-01-01

    A self-contained avionics sensing and flight control system is provided for an unmanned aerial vehicle (UAV). The system includes sensors for sensing flight control parameters and surveillance parameters, and a Global Positioning System (GPS) receiver. Flight control parameters and location signals are processed to generate flight control signals. A Field Programmable Gate Array (FPGA) is configured to provide a look-up table storing sets of values with each set being associated with a servo mechanism mounted on the UAV and with each value in each set indicating a unique duty cycle for the servo mechanism associated therewith. Each value in each set is further indexed to a bit position indicative of a unique percentage of a maximum duty cycle for the servo mechanism associated therewith. The FPGA is further configured to provide a plurality of pulse width modulation (PWM) generators coupled to the look-up table. Each PWM generator is associated with and adapted to be coupled to one of the servo mechanisms.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alhroob, M.; Boyd, G.; Hasib, A.

Precision ultrasonic measurements in binary gas systems provide continuous real-time monitoring of mixture composition and flow. Using custom micro-controller-based electronics, we have developed an ultrasonic instrument, with numerous potential applications, capable of making continuous high-precision sound velocity measurements. The instrument measures sound transit times along two opposite directions aligned parallel to - or obliquely crossing - the gas flow. The difference between the two measured times yields the gas flow rate, while their average gives the sound velocity, which can be compared with a sound velocity vs. molar composition look-up table for the binary mixture at a given temperature and pressure. The look-up table may be generated from prior measurements in known mixtures of the two components, from theoretical calculations, or from a combination of the two. We describe the instrument and its performance within numerous applications in the ATLAS experiment at the CERN Large Hadron Collider (LHC). The instrument can be of interest in other areas where continuous in-situ binary gas analysis and flowmetry are required. (authors)
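Reading composition back from a measured sound velocity is an inversion of a monotonic table. A sketch with invented table values, assuming fixed temperature and pressure:

```python
# Sound velocity vs. molar fraction of one component at fixed T and P;
# the table values are illustrative, not measured data.
MOLAR_FRACTION = [0.0, 0.25, 0.5, 0.75, 1.0]          # fraction of component B
SOUND_VELOCITY = [350.0, 320.0, 295.0, 275.0, 260.0]  # m/s, monotone decreasing

def molar_fraction(v):
    # velocities decrease monotonically with fraction, so scan the segments
    xs, vs = MOLAR_FRACTION, SOUND_VELOCITY
    for i in range(len(vs) - 1):
        hi, lo = vs[i], vs[i + 1]
        if lo <= v <= hi:
            t = (hi - v) / (hi - lo)
            return xs[i] + t * (xs[i + 1] - xs[i])
    raise ValueError("velocity outside tabulated range")
```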

  1. Quantitative analysis of the z-spectrum using a numerically simulated look-up table: Application to the healthy human brain at 7T.

    PubMed

    Geades, Nicolas; Hunt, Benjamin A E; Shah, Simon M; Peters, Andrew; Mougin, Olivier E; Gowland, Penny A

    2017-08-01

To develop a method that fits a multipool model to z-spectra acquired from non-steady-state sequences, taking into account the effects of variations in T1 or B1 amplitude, and to estimate the parameters of a four-pool model describing the z-spectrum of the healthy brain. We compared measured spectra with a look-up table (LUT) of possible spectra and investigated the potential advantages of simultaneously considering spectra acquired at different saturation powers (coupled spectra) to provide sensitivity to a range of different physicochemical phenomena. The LUT method provided reproducible results in healthy controls. The average macromolecular pool sizes measured in white matter (WM) and gray matter (GM) of 10 healthy volunteers were 8.9% ± 0.3% (intersubject standard deviation) and 4.4% ± 0.4%, respectively; the average nuclear Overhauser effect pool sizes in WM and GM were 5% ± 0.1% and 3% ± 0.1%, respectively; and the average amide proton transfer pool sizes in WM and GM were 0.21% ± 0.03% and 0.20% ± 0.02%, respectively. The proposed method demonstrated increased robustness compared with existing methods (such as Lorentzian fitting and asymmetry analysis) while yielding fully quantitative results. The method can be adjusted to measure other parameters relevant to the z-spectrum. Magn Reson Med 78:645-655, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  2. Single-Chip Microcomputer Control Of The PWM Inverter

    NASA Astrophysics Data System (ADS)

    Morimoto, Masayuki; Sato, Shinji; Sumito, Kiyotaka; Oshitani, Katsumi

    1987-10-01

A single-chip microcomputer-based controller for a pulse-width modulated 1.7 kVA inverter of an air conditioner is presented. The PWM pattern generation and the system control of the air conditioner are achieved in software on the 8-bit single-chip microcomputer. The single-chip microcomputer has the disadvantages of low processing speed and small memory capacity, which are overcome by the magnetic flux control method. The PWM pattern is generated every 90 μs. The memory capacity of the PWM look-up table is less than 2 kbytes. Simple and reliable control is realized by the software-based implementation.
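For scale, a PWM look-up table like the one described fits comfortably in the quoted 2-kbyte budget. In the sketch below, the table length and 8-bit quantization are assumptions; one sine period packs into 256 bytes:

```python
import math

# 256 samples of one sine period, quantized to 8-bit duty-cycle counts:
# 256 bytes total, well under a 2-kbyte table budget.
TABLE_LEN = 256
pwm_table = bytes(
    round((math.sin(2 * math.pi * i / TABLE_LEN) + 1.0) / 2.0 * 255)
    for i in range(TABLE_LEN)
)

def duty_count(phase_index):
    # a timer ISR would read one entry per PWM period, wrapping the index
    return pwm_table[phase_index % TABLE_LEN]
```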

  3. Collective network routing

    DOEpatents

    Hoenicke, Dirk

    2014-12-02

Disclosed are a unified method and apparatus to classify, route, and process data packets injected into a network so as to belong to a plurality of logical networks, each implementing a specific flow of data on top of a common physical network. The method allows collectives of packets to be identified locally for local processing, such as the computation of the sum, difference, maximum, minimum, or other logical operations over the identified packet collective. Packets are injected together with a class attribute and an opcode attribute. Network routers employing the described method use the packet attributes to look up the class-specific route information from a local route table, which contains the local incoming and outgoing directions as part of the specifically implemented global data flow of the particular virtual network.

  4. An Improved Method for Real-Time 3D Construction of DTM

    NASA Astrophysics Data System (ADS)

    Wei, Yi

This paper discusses the real-time optimal construction of DTM by two measures. One is to improve the coordinate transformation of discrete points acquired from lidar: after processing a total of 10,000 data points, the direct formula calculation for the transformation costs 0.810 s, while the table look-up method costs 0.188 s, indicating that the latter is superior to the former. The other is to adjust the density of the point cloud acquired from lidar: a suitable proportion of the data points is used for 3D construction in order to meet different needs for 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.
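The table look-up transformation can be sketched by precomputing sin/cos at a fixed angular resolution, so each polar-to-Cartesian conversion becomes two table reads instead of two trig calls. The 0.1 degree resolution below is an assumption for illustration:

```python
import math

STEPS = 3600                                   # 0.1 degree angular resolution
SIN = [math.sin(2 * math.pi * i / STEPS) for i in range(STEPS)]
COS = [math.cos(2 * math.pi * i / STEPS) for i in range(STEPS)]

def polar_to_xy(r, angle_step):
    # angle_step is the quantized angle index delivered by the scanner;
    # the conversion is two table reads and two multiplies
    return r * COS[angle_step % STEPS], r * SIN[angle_step % STEPS]
```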

  5. A Novel Approach: Chemical Relational Databases, and the Role of the ISSCAN Database on Assessing Chemical Carcinogenity

    EPA Science Inventory

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did no...

  6. Development and validation of light-duty vehicle modal emissions and fuel consumption values for traffic models.

    DOT National Transportation Integrated Search

    1999-03-01

    A methodology for developing modal vehicle emissions and fuel consumption models has been developed by Oak Ridge National Laboratory (ORNL), sponsored by the Federal Highway Administration. These models, in the form of look-up tables for fuel consump...

  7. General Mission Analysis Tool (GMAT) User's Guide (Draft)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2007-01-01

The General Mission Analysis Tool (GMAT) is a space trajectory optimization and mission analysis system. This document is a draft of the user's guide for the tool. Included in the guide is information about Configuring Objects/Resources, Object Fields: Quick Look-up Tables, and Commands and Events.

  8. A short note on calculating the adjusted SAR index

    USDA-ARS?s Scientific Manuscript database

    A simple algebraic technique is presented for computing the adjusted SAR Index proposed by Suarez (1981). The statistical formula presented in this note facilitates the computation of the adjusted SAR without the use of either a look-up table, custom computer software or the need to compute exact a...

  9. Reduced-Order Model for Leakage Through an Open Wellbore from the Reservoir due to Carbon Dioxide Injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Lehua; Oldenburg, Curtis M.

Potential CO2 leakage through existing open wellbores is one of the most significant hazards that need to be addressed in geologic carbon sequestration (GCS) projects. In the framework of the National Risk Assessment Partnership (NRAP), which requires fast computations for uncertainty analysis, rigorous simulation of the coupled wellbore-reservoir system is not practical. We have developed a 7,200-point look-up table reduced-order model (ROM) for estimating the potential leakage rate up open wellbores in response to CO2 injection nearby. The ROM is based on coupled simulations using T2Well/ECO2H, which was run repeatedly for representative conditions relevant to NRAP to create a look-up table response-surface ROM. The ROM applies to a wellbore that fully penetrates a 20-m thick reservoir that is used for CO2 storage. The radially symmetric reservoir is assumed to have initially uniform pressure, temperature, gas saturation, and brine salinity, and it is assumed these conditions are held constant at the far-field boundary (100 m away from the wellbore). In such a system, the leakage can quickly reach quasi-steady state. The ROM table can be used to estimate both the free-phase CO2 and brine leakage rates through an open well as a function of wellbore and reservoir conditions. Results show that injection-induced pressure and reservoir gas saturation play important roles in controlling leakage. Caution must be used in the application of this ROM because well leakage is formally transient and the ROM look-up table was populated using quasi-steady simulation output after 1000 time steps, which may correspond to different physical times for the various parameter combinations of the coupled wellbore-reservoir system.

  10. Simulation model of a twin-tail, high performance airplane

    NASA Technical Reports Server (NTRS)

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

The mathematical model and associated computer program to simulate a twin-tailed high-performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six-degree-of-freedom rigid-body equations, an engine model, sensors, and first-order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner-loop augmentation in the up-and-away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.

  11. Looking northeast across transfer table pit at Boiler Shop (Bldg. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking northeast across transfer table pit at Boiler Shop (Bldg. 152) - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, Boiler Shop, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM

  12. Implementation of high-resolution time-to-digital converter in 8-bit microcontrollers.

    PubMed

    Bengtsson, Lars E

    2012-04-01

This paper demonstrates how a time-to-digital converter (TDC) with sub-nanosecond resolution can be implemented in an 8-bit microcontroller using so-called "direct" methods. This means that a TDC is created using only five bidirectional digital input/output pins of a microcontroller and a few passive components (two resistors, a capacitor, and a diode). We demonstrate how a TDC for the range 1-10 μs is implemented with 0.17 ns resolution. This work also shows how to linearize the output by combining look-up tables and interpolation. © 2012 American Institute of Physics.
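The LUT-plus-interpolation linearization amounts to a sparse calibration table with linear interpolation between calibration points. The code/time pairs below are invented, not the paper's measurements:

```python
# Sparse calibration points (raw TDC code -> known input interval); codes
# between calibration points are recovered by linear interpolation.
CAL_CODE = [0, 100, 210, 330, 460]        # raw converter codes (invented)
CAL_TIME_US = [1.0, 3.0, 5.0, 7.0, 9.0]   # known intervals, microseconds

def code_to_time(code):
    for i in range(len(CAL_CODE) - 1):
        if CAL_CODE[i] <= code <= CAL_CODE[i + 1]:
            span = CAL_CODE[i + 1] - CAL_CODE[i]
            frac = (code - CAL_CODE[i]) / span
            return CAL_TIME_US[i] + frac * (CAL_TIME_US[i + 1] - CAL_TIME_US[i])
    raise ValueError("code outside calibrated range")
```

Interpolation keeps the table small while correcting the converter's segment-to-segment nonlinearity.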

  13. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.

  14. xRage Equation of State

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    2016-08-16

The xRage code supports a variety of hydrodynamic equation of state (EOS) models. In practice these are generally accessed in the executing code via a pressure-temperature-based table look-up. This document describes the various models supported by these codes and provides details on the algorithms used to evaluate the equation of state.

  15. Selection Algorithm for the CALIPSO Lidar Aerosol Extinction-to-Backscatter Ratio

    NASA Technical Reports Server (NTRS)

    Omar, Ali H.; Winker, David M.; Vaughan, Mark A.

    2006-01-01

The extinction-to-backscatter ratio (S(sub a)) is an important parameter used in the determination of the aerosol extinction and, subsequently, the optical depth from lidar backscatter measurements. We outline the algorithm used to determine S(sub a) for the Cloud and Aerosol Lidar and Infrared Pathfinder Spaceborne Observations (CALIPSO) lidar. S(sub a) for the CALIPSO lidar will either be selected from a look-up table or calculated from the lidar measurements, depending on the characteristics of the aerosol layer. Whenever suitable lofted layers are encountered, S(sub a) is computed directly from the integrated backscatter and transmittance. In all other cases, the CALIPSO observables - the depolarization ratio, delta, the layer-integrated attenuated backscatter, beta, and the mean layer total attenuated color ratio, gamma - together with the surface type, are used to aid in aerosol typing. Once the type is identified, a look-up table developed primarily from worldwide observations is used to determine the S(sub a) value. The CALIPSO aerosol models include desert dust, biomass burning, background, polluted continental, polluted dust, and marine aerosols.
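Once the aerosol type is fixed, the selection reduces to a categorical look-up, with a directly retrieved value taking precedence for suitable lofted layers. The Sa values and type names below are placeholders, not the CALIPSO tables:

```python
# Type-keyed lidar-ratio look-up table (values in sr are illustrative only).
SA_TABLE = {
    "desert_dust": 40.0,
    "biomass_burning": 70.0,
    "clean_marine": 20.0,
    "polluted_continental": 70.0,
}

def select_sa(aerosol_type, lofted_layer_sa=None):
    # an Sa retrieved directly from a suitable lofted layer takes precedence
    if lofted_layer_sa is not None:
        return lofted_layer_sa
    return SA_TABLE[aerosol_type]
```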

  16. Implementation of a digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
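The residue-arithmetic look-up idea can be sketched in software: for each small modulus every product is precomputed, so a "multiplication" becomes one table read per residue channel. This is an electronic stand-in for the position-coded optical tables, and the moduli are illustrative:

```python
# Pairwise-coprime moduli give a dynamic range of 5 * 7 * 9 = 315.
MODULI = (5, 7, 9)

# Full product tables, one per modulus, precomputed once.
MUL = {m: [[(a * b) % m for b in range(m)] for a in range(m)] for m in MODULI}

def to_residues(x):
    return tuple(x % m for m in MODULI)

def residue_multiply(xr, yr):
    # one table look-up per modulus; no arithmetic multiply at run time
    return tuple(MUL[m][a][b] for m, a, b in zip(MODULI, xr, yr))
```

Each residue channel is independent, which is what lets the optical version evaluate all channels in a single light-valve response time.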

17. Looking southwest at dual-track transfer table, with Machine Shop (Bldg. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking southwest at dual-track transfer table, with Machine Shop (Bldg. 163) in background - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM

  18. Quantifying anti-gravity torques in the design of a powered exoskeleton.

    PubMed

    Ragonesi, Daniel; Agrawal, Sunil; Sample, Whitney; Rahman, Tariq

    2011-01-01

Designing an upper extremity exoskeleton for people with arm weakness requires knowledge of the passive and active residual force capabilities of users. This paper experimentally measures the passive gravitational torques of three groups of subjects: able-bodied adults, able-bodied children, and children with neurological disabilities. The experiment involves moving the arm to various positions in the sagittal plane and measuring the gravitational force at the wrist. This force is then converted to static gravitational torques at the elbow and shoulder. Data are compared between look-up table data based on anthropometry and empirical data. Results show that the look-up torques deviate from experimentally measured torques as the arm reaches up and down. This experiment informs designers of upper limb orthoses on the contribution of passive human joint torques.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andrew; Lawrence, Earl

The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", reduces the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and the pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions, with only knowledge of the object's geometry and mass.
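A bilinear read of an "RSM Area"-style table might look like the following sketch. The pitch/yaw grid and area values are invented, and the query point is assumed to lie inside the grid:

```python
# Projected area tabulated on a pitch/yaw grid, read back with bilinear
# interpolation; grid and areas are illustrative placeholders.
PITCH = [0.0, 30.0, 60.0, 90.0]          # degrees
YAW = [0.0, 45.0, 90.0]                   # degrees
AREA = [                                  # m^2, rows follow PITCH
    [1.0, 1.2, 1.5],
    [1.1, 1.4, 1.8],
    [1.3, 1.7, 2.1],
    [1.4, 1.9, 2.3],
]

def projected_area(pitch, yaw):
    # assumes PITCH[0] <= pitch <= PITCH[-1] and YAW[0] <= yaw <= YAW[-1]
    i = max(k for k in range(len(PITCH) - 1) if PITCH[k] <= pitch)
    j = max(k for k in range(len(YAW) - 1) if YAW[k] <= yaw)
    tp = (pitch - PITCH[i]) / (PITCH[i + 1] - PITCH[i])
    ty = (yaw - YAW[j]) / (YAW[j + 1] - YAW[j])
    a0 = AREA[i][j] * (1 - ty) + AREA[i][j + 1] * ty
    a1 = AREA[i + 1][j] * (1 - ty) + AREA[i + 1][j + 1] * ty
    return a0 * (1 - tp) + a1 * tp

def ballistic_coefficient(mass_kg, drag_coefficient, pitch, yaw):
    return mass_kg / (drag_coefficient * projected_area(pitch, yaw))
```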

  20. 26. WARDROOM, LOOKING TOWARDS PORT, AT TABLE, WEAPONS CLOSET, AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. WARDROOM, LOOKING TOWARDS PORT, AT TABLE, WEAPONS CLOSET, AND DESK. - U.S. Coast Guard Cutter WHITE LUPINE, U.S. Coast Guard Station Rockland, east end of Tillson Avenue, Rockland, Knox County, ME

  1. A method for the fast estimation of a battery entropy-variation high-resolution curve - Application on a commercial LiFePO4/graphite cell

    NASA Astrophysics Data System (ADS)

    Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy

    2016-11-01

    The entropy-variation of a battery is responsible for heat generation or consumption during operation and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method which is considered as a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.

  2. Simulation and mitigation of higher-order ionospheric errors in PPP

    NASA Astrophysics Data System (ADS)

    Zus, Florian; Deng, Zhiguo; Wickert, Jens

    2017-04-01

    We developed a rapid and precise algorithm to compute ionospheric phase advances in a realistic electron density field. The electron density field is derived from a plasmaspheric extension of the International Reference Ionosphere (Gulyaeva and Bilitza, 2012), and the magnetic field stems from the International Geomagnetic Reference Field. For specific station locations, elevation angles, and azimuth angles, the ionospheric phase advances are stored in a look-up table. The higher-order ionospheric residuals are computed by forming the standard linear combination of the ionospheric phase advances. In a simulation study we examine how the higher-order ionospheric residuals leak into estimated station coordinates, clocks, zenith delays, and tropospheric gradients in precise point positioning. The simulation study includes a few hundred globally distributed stations and covers the time period 1990-2015. We take a close look at the estimated zenith delays and tropospheric gradients, as they are considered a data source for meteorological and climate-related research. We also show how the by-product of this simulation study, the look-up tables, can be used to mitigate higher-order ionospheric errors in practice. Gulyaeva, T.L., and Bilitza, D., Towards ISO Standard Earth Ionosphere and Plasmasphere Model. In: New Developments in the Standard Model, edited by R.J. Larsen, pp. 1-39, NOVA, Hauppauge, New York, 2012, available at https://www.novapublishers.com/catalog/product_info.php?products_id=35812
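
    The "standard linear combination" referred to above is the ionosphere-free combination: the first-order ionospheric term scales as 1/f^2, so combining the modeled L1/L2 phase advances with the usual frequency-squared weights cancels it and leaves only the higher-order residual. An illustrative sketch (not the authors' code) using the GPS L1/L2 carrier frequencies:

```python
# GPS L1/L2 carrier frequencies in Hz
F1, F2 = 1575.42e6, 1227.60e6
A1 = F1**2 / (F1**2 - F2**2)
A2 = -F2**2 / (F1**2 - F2**2)

def higher_order_residual(phase_advance_l1_m, phase_advance_l2_m):
    """Ionosphere-free combination of modeled phase advances (metres)
    looked up for one station / elevation / azimuth entry.  A pure
    first-order (1/f^2) contribution cancels exactly; what survives
    is the higher-order residual."""
    return A1 * phase_advance_l1_m + A2 * phase_advance_l2_m
```
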

  3. A Web-Based Visualization and Animation Platform for Digital Logic Design

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi; Lu, Zheng; Huss, Sorin A.

    2015-01-01

    This paper presents a web-based education platform for the visualization and animation of the digital logic design process. This includes the design of combinatorial circuits using logic gates, multiplexers, decoders, and look-up-tables as well as the design of finite state machines. Various configurations of finite state machines can be selected…

  4. Accelerating molecular Monte Carlo simulations using distance and orientation dependent energy tables: tuning from atomistic accuracy to smoothed “coarse-grained” models

    PubMed Central

    Lettieri, S.; Zuckerman, D.M.

    2011-01-01

    Typically, the most time consuming part of any atomistic molecular simulation is due to the repeated calculation of distances, energies and forces between pairs of atoms. However, many molecules contain nearly rigid multi-atom groups such as rings and other conjugated moieties, whose rigidity can be exploited to significantly speed up computations. The availability of GB-scale random-access memory (RAM) offers the possibility of tabulation (pre-calculation) of distance and orientation-dependent interactions among such rigid molecular bodies. Here, we perform an investigation of this energy tabulation approach for a fluid of atomistic – but rigid – benzene molecules at standard temperature and density. In particular, using O(1) GB of RAM, we construct an energy look-up table which encompasses the full range of allowed relative positions and orientations between a pair of whole molecules. We obtain a hardware-dependent speed-up of a factor of 24-50 as compared to an ordinary (“exact”) Monte Carlo simulation and find excellent agreement between energetic and structural properties. Second, we examine the somewhat reduced fidelity of results obtained using energy tables based on much less memory use. Third, the energy table serves as a convenient platform to explore potential energy smoothing techniques, akin to coarse-graining. Simulations with smoothed tables exhibit near atomistic accuracy while increasing diffusivity. The combined speed-up in sampling from tabulation and smoothing exceeds a factor of 100. For future applications greater speed-ups can be expected for larger rigid groups, such as those found in biomolecules. PMID:22120971
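
    The core of the tabulation idea fits in a few lines: pre-compute pair energies on a (distance, orientation) grid once, then replace the expensive energy call inside the Monte Carlo loop with a table read. A sketch under stated assumptions: the toy one-angle Lennard-Jones-style potential below stands in for the full rigid-body benzene-benzene interaction of the paper, and nearest-bin lookup stands in for its finer-grained table:

```python
import numpy as np

def pair_energy(r, theta):
    """Stand-in pair potential: Lennard-Jones with a weak angular term."""
    return 4.0 * ((1.0 / r)**12 - (1.0 / r)**6) * (1.0 + 0.1 * np.cos(theta))

R_MIN, R_MAX, NR, NTH = 0.8, 3.0, 220, 64
r_grid = np.linspace(R_MIN, R_MAX, NR)
th_grid = np.linspace(0.0, np.pi, NTH)
TABLE = pair_energy(r_grid[:, None], th_grid[None, :])   # built once, reused

def tabulated_energy(r, theta):
    """Nearest-bin table read in place of the analytic evaluation."""
    i = int(round((r - R_MIN) / (R_MAX - R_MIN) * (NR - 1)))
    j = int(round(theta / np.pi * (NTH - 1)))
    return TABLE[min(max(i, 0), NR - 1), min(max(j, 0), NTH - 1)]
```

    The table fidelity/memory trade-off the authors explore corresponds here to shrinking NR and NTH; smoothing corresponds to filtering TABLE before use.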

  5. All-optical 10Gb/s ternary-CAM cell for routing look-up table applications.

    PubMed

    Mourgias-Alexandris, George; Vagionas, Christos; Tsakyridis, Apostolos; Maniotis, Pavlos; Pleros, Nikos

    2018-03-19

    We experimentally demonstrate the first all-optical Ternary-Content Addressable Memory (T-CAM) cell that operates at 10Gb/s and comprises two monolithically integrated InP Flip-Flops (FF) and a SOA-MZI optical XOR gate. The two FFs are responsible for storing the data bit and the ternary state 'X', respectively, with the XOR gate used for comparing the stored FF-data and the search bit. The experimental results reveal error-free operation at 10Gb/s for both Write and Ternary Content Addressing of the T-CAM cell, indicating that the proposed optical T-CAM cell could in principle lead to all-optical T-CAM-based Address Look-up memory architectures for high-end routing applications.

  6. Aerodynamic Characteristics of SC1095 and SC1094 R8 Airfoils

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2003-01-01

    Two airfoils are used on the main rotor blade of the UH-60A helicopter, the SC1095 and the SC1094 R8. Measurements of the section lift, drag, and pitching moment have been obtained in ten wind tunnel tests for the SC1095 airfoil, and in five of these tests, measurements have also been obtained for the SC1094 R8. The ten wind tunnel tests are characterized and described in the present study. A number of fundamental parameters measured in these tests are compared and an assessment is made of the adequacy of the test data for use in look-up tables required by lifting-line calculation methods.

  7. Life-table methods for detecting age-risk factor interactions in long-term follow-up studies.

    PubMed

    Logue, E E; Wing, S

    1986-01-01

    Methodological investigation has suggested that age-risk factor interactions should be more evident in age of experience life tables than in follow-up time tables due to the mixing of ages of experience over follow-up time in groups defined by age at initial examination. To illustrate the two approaches, age modification of the effect of total cholesterol on ischemic heart disease mortality in two long-term follow-up studies was investigated. Follow-up time life table analysis of 116 deaths over 20 years in one study was more consistent with a uniform relative risk due to cholesterol, while age of experience life table analysis was more consistent with a monotonic negative age interaction. In a second follow-up study (160 deaths over 24 years), there was no evidence of a monotonic negative age-cholesterol interaction by either method. It was concluded that age-specific life table analysis should be used when age-risk factor interactions are considered, but that both approaches yield almost identical results in absence of age interaction. The identification of the more appropriate life-table analysis should be ultimately guided by the nature of the age or time phenomena of scientific interest.

  8. New DICOM extensions for softcopy and hardcopy display consistency.

    PubMed

    Eichelberg, M; Riesmeier, J; Kleber, K; Grönemeyer, D H; Oosterwijk, H; Jensch, P

    2000-01-01

    The DICOM standard defines in detail how medical images can be communicated. However, the rules on how to interpret the parameters contained in a DICOM image which deal with the image presentation were either lacking or not well defined. As a result, the same image frequently looks different when displayed on different workstations or printed on a film from various printers. Three new DICOM extensions attempt to close this gap by defining a comprehensive model for the display of images on softcopy and hardcopy devices: Grayscale Standard Display Function, Grayscale Softcopy Presentation State and Presentation Look Up Table.

  9. The Periodic Table. Physical Science in Action[TM]. Schlessinger Science Library. [Videotape].

    ERIC Educational Resources Information Center

    2000

    Kids know that when they are lost, they look at a map to find their way. It's no different in the world of science, as they'll learn in The Periodic Table--a fun and engaging look at the road map of the elements. Young students will learn about key information included on the table, including atomic number, atomic mass and chemical symbol. They'll…

  10. An Up-to-Date Look at the Supply of Child Care.

    ERIC Educational Resources Information Center

    Neugebauer, Roger

    1992-01-01

    Cites recently completed child care supply and demand studies showing 400 percent growth rate in center care in past two decades, with more children enrolled in centers than in any other form of nonparental child care. Data on trends, usage by age, forms of care used, types of centers used, and a profile of centers are provided in tables. (LB)

  11. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
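
    The 3D LUT classification step can be sketched as follows. The 5-bit-per-channel quantization and the "reddish" color rule are hypothetical stand-ins for the paper's linear color models and histogram-derived LUTs; the point is that per-pixel classification becomes a single indexed read:

```python
import numpy as np

BITS = 5
N = 1 << BITS          # 32 quantization levels per channel -> 32x32x32 LUT

def build_lut(is_target):
    """Pre-fill the LUT once from a color rule (offline step)."""
    lut = np.zeros((N, N, N), dtype=np.uint8)
    for r in range(N):
        for g in range(N):
            for b in range(N):
                lut[r, g, b] = is_target(r << (8 - BITS),
                                         g << (8 - BITS),
                                         b << (8 - BITS))
    return lut

# hypothetical "red fruit" rule, for illustration only
reddish = lambda r, g, b: int(r > 120 and r > g + 40 and r > b + 40)
LUT = build_lut(reddish)

def classify(pixels):
    """pixels: (H, W, 3) uint8 image -> (H, W) 0/1 mask via one LUT read."""
    q = pixels >> (8 - BITS)
    return LUT[q[..., 0], q[..., 1], q[..., 2]]
```

    On an embedded target the same structure maps to a flat 32 KB array indexed by the packed quantized channels, which is what makes ten detections per second feasible on a Cortex-M4.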

  12. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040

  13. Non-equilibrium condensation of supercritical carbon dioxide in a converging-diverging nozzle

    NASA Astrophysics Data System (ADS)

    Ameli, Alireza; Afzalifar, Ali; Turunen-Saaresti, Teemu

    2017-03-01

    Carbon dioxide (CO2) is a promising alternative as a working fluid for future energy conversion and refrigeration cycles. CO2 has low global warming potential compared to refrigerants, and the supercritical CO2 Brayton cycle ought to have better efficiency than today’s counterparts. However, there are several issues concerning the behaviour of supercritical CO2 in the aforementioned applications. One of these issues arises due to non-equilibrium condensation of CO2 for some operating conditions in supercritical compressors. This paper investigates the non-equilibrium condensation of carbon dioxide in the course of an expansion from supercritical stagnation conditions in a converging-diverging nozzle. An external look-up table was implemented, using an in-house FORTRAN code, to calculate the fluid properties in the supercritical, metastable and saturated regions. This look-up table is coupled with the flow solver, and the non-equilibrium condensation model is introduced to the solver using user-defined expressions. Numerical results are compared with the experimental measurements. In agreement with the experiment, the distribution of Mach number in the nozzle shows that the flow becomes supersonic in the upstream region near the throat, where the speed of sound is at a minimum; equilibrium is re-established at the outlet boundary.

  14. 32. PILOT HOUSE, LOOKING TOWARDS PORT, TABLE TO LEFT IS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. PILOT HOUSE, LOOKING TOWARDS PORT, TABLE TO LEFT IS WHERE CHARTS ARE PLOTTED AT BACKGROUND LEFT IS TOP OF STAIRS DOWN TO MESS DECK. - U.S. Coast Guard Cutter WHITE HEATH, USGS Integrated Support Command Boston, 427 Commercial Street, Boston, Suffolk County, MA

  15. A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares

    PubMed Central

    Yang, Chuanlei; Wang, Yinyan; Wang, Hechun

    2018-01-01

    To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), predicting the performance of the VGC is more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build the polynomial relationship between the coefficients and the relative speed and the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients and achieve acceptable fit accuracy simultaneously. The prediction model was validated with sample data, and in order to present the superiority of compressor performance prediction, the prediction results of this model were compared with those of the look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable, and this model is much more suitable than the look-up table and the BPNN methods under the same conditions in VGC performance prediction. Moreover, the newly developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of the thermodynamic model for turbocharged diesel engines in the future. PMID:29410849

  16. PROXIMAL: a method for Prediction of Xenobiotic Metabolism.

    PubMed

    Yousofshahi, Mona; Manteiga, Sara; Wu, Charmian; Lee, Kyongbum; Hassoun, Soha

    2015-12-22

    Contamination of the environment with bioactive chemicals has emerged as a potential public health risk. These substances that may cause distress or disease in humans can be found in air, water and food supplies. An open question is whether these chemicals transform into potentially more active or toxic derivatives via xenobiotic metabolizing enzymes expressed in the body. We present a new prediction tool, which we call PROXIMAL (Prediction of Xenobiotic Metabolism) for identifying possible transformation products of xenobiotic chemicals in the liver. Using reaction data from DrugBank and KEGG, PROXIMAL builds look-up tables that catalog the sites and types of structural modifications performed by Phase I and Phase II enzymes. Given a compound of interest, PROXIMAL searches for substructures that match the sites cataloged in the look-up tables, applies the corresponding modifications to generate a panel of possible transformation products, and ranks the products based on the activity and abundance of the enzymes involved. PROXIMAL generates transformations that are specific for the chemical of interest by analyzing the chemical's substructures. We evaluate the accuracy of PROXIMAL's predictions through case studies on two environmental chemicals with suspected endocrine disrupting activity, bisphenol A (BPA) and 4-chlorobiphenyl (PCB3). Comparisons with published reports confirm 5 out of 7 and 17 out of 26 of the predicted derivatives for BPA and PCB3, respectively. We also compare biotransformation predictions generated by PROXIMAL with those generated by METEOR and Metaprint2D-react, two other prediction tools. PROXIMAL can predict transformations of chemicals that contain substructures recognizable by human liver enzymes. It also has the ability to rank the predicted metabolites based on the activity and abundance of enzymes involved in xenobiotic transformation.
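
    The look-up-table mechanism behind PROXIMAL can be sketched at toy scale. Everything below is illustrative, not from DrugBank/KEGG: plain substring matching stands in for real substructure (SMARTS) matching, and the rules and enzyme weights are invented for the example:

```python
# site -> modification catalog, as a plain dictionary standing in for the
# look-up tables PROXIMAL builds from reaction data
RULES = {
    # substructure pattern : (modification, enzyme, relative weight)
    "c1ccccc1O": ("O-glucuronidation", "UGT", 0.9),
    "C(C)O":     ("oxidation to ketone", "CYP2E1", 0.6),
}

def predict(smiles):
    """Scan the query for cataloged substructures and rank the candidate
    transformations by enzyme weight (highest first)."""
    hits = [(mod, enz, w)
            for sub, (mod, enz, w) in RULES.items() if sub in smiles]
    return sorted(hits, key=lambda h: -h[2])
```

    The real tool additionally applies the cataloged modification to the matched atoms to emit product structures; here only the matching-and-ranking skeleton is shown.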

  17. A prediction model of compressor with variable-geometry diffuser based on elliptic equation and partial least squares.

    PubMed

    Li, Xu; Yang, Chuanlei; Wang, Yinyan; Wang, Hechun

    2018-01-01

    To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), predicting the performance of the VGC is more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build the polynomial relationship between the coefficients and the relative speed and the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients and achieve acceptable fit accuracy simultaneously. The prediction model was validated with sample data, and in order to present the superiority of compressor performance prediction, the prediction results of this model were compared with those of the look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the newly developed model is acceptable, and this model is much more suitable than the look-up table and the BPNN methods under the same conditions in VGC performance prediction. Moreover, the newly developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of the thermodynamic model for turbocharged diesel engines in the future.

  18. [Cleanliness Norms 1964-1975].

    PubMed

    Noelle-Neumann, E

    1976-01-01

    In 1964 the Institut für Demoskopie Allensbach made a first survey taking stock of norms concerning cleanliness in the Federal Republic of Germany. At that time, 78% of respondents thought that the vogue among young people of cultivating an unkempt look was past or on the wane (Table 1). Today we know that this fashion was an indicator of more serious desires for change in many different areas such as politics, sexual morality and education, and that its high point was still to come. In the fall of 1975 a second survey, modelled on the one of 1964, was conducted. Again, it concentrated on norms, not on behavior. As expected, norms have changed over this period, but not in a one-directional or simple manner. In general, people are much more permissive about children's looks: neat, clean school clothes, properly combed hair, clean shoes - all this, and also keeping their things in order, had become less important by 1975 (Table 2). Carrying a clean handkerchief is becoming old-fashioned (Table 3). On the other hand, principles of bringing up children have not loosened concerning personal hygiene - brushing one's teeth, washing hands, feet, and neck, clean fingernails (Table 4). On one item related to protection of the environment, namely throwing around waste paper, standards have even become stricter (Table 5). With regard to school-leavers, norms of personal hygiene have generally become stricter (Table 6). As living standards have gone up and the number of full bathrooms has risen from 42% to 75% of households, norms of personal hygiene have also risen: one warm bath a week seemed enough to 56% of adults in 1964, but to only 32% in 1975 (Table 7). Standards for changing underwear have also changed a lot: in 1964 only 12% of respondents said "every day", while in 1975 48% said so (Table 8). Even more stringent norms are applied to young women (Tables 9/10). For comparison: in 1964 there were automatic washing machines in 16% of households, in 1975 in 79%.
Answers to questions about which qualities men value especially in women and which qualities women value especially in men show a decrease in the valuation of "cleanliness". These results can be interpreted in different ways (Tables 11/12). It seems, however, that "cleanliness" is not going out as a cultural value. We have found that young people today do not consider clean dress important, but that they are probably better washed under their purposely neglected clothing than young people were ten years ago. As a nation, Germans still consider cleanliness to be a particularly German virtue, in 1975 even more so than in 1964 (Table 13). An association test, first made in March 1976, confirms this: when they hear "Germany", 68% of Germans think of "cleanliness" (Table 14).

  19. An 18-ps TDC using timing adjustment and bin realignment methods in a Cyclone-IV FPGA

    NASA Astrophysics Data System (ADS)

    Cao, Guiping; Xia, Haojie; Dong, Ning

    2018-05-01

    The method commonly used to produce a field-programmable gate array (FPGA)-based time-to-digital converter (TDC) creates a tapped delay line (TDL) for time interpolation to yield high time precision. We conduct timing adjustment and bin realignment to implement a TDC in the Altera Cyclone-IV FPGA. The former tunes the carry look-up table (LUT) cell delay by changing the LUT's function through low-level primitives according to timing analysis results, while the latter realigns bins according to the timing result obtained by timing adjustment so as to create a uniform TDL with bins of equivalent width. The differential nonlinearity and time resolution can be improved by realigning the bins. After calibration, the TDC has an 18 ps root-mean-square timing resolution and a 45 ps least-significant-bit resolution.

  20. Combustor air flow control method for fuel cell apparatus

    DOEpatents

    Clingerman, Bruce J.; Mowery, Kenneth D.; Ripley, Eugene V.

    2001-01-01

    A method for controlling the heat output of a combustor in a fuel cell apparatus to a fuel processor, where the combustor has dual air inlet streams comprising atmospheric air and fuel cell cathode effluent containing oxygen-depleted air. In all operating modes, an enthalpy balance is provided by regulating the quantity of the air flow stream to the combustor to support fuel cell processor heat requirements. A control provides a quick feed-forward change in the air valve orifice cross section in response to a calculated predetermined air flow, the molar constituents of the air stream to the combustor, the pressure drop across the air valve, and a look-up table of the orifice cross-sectional area and valve steps. A feedback loop fine-tunes any error between the measured air flow to the combustor and the predetermined air flow.
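
    The control sequence described above, feed-forward valve positioning from an area-to-steps look-up table plus a feedback trim on the flow error, can be sketched as follows. The orifice equation constants, table entries, and gain are illustrative, not taken from the patent:

```python
# hypothetical (area in cm^2, valve steps) calibration table
AREA_TABLE = [(0.0, 0), (0.5, 40), (1.0, 75), (1.5, 105), (2.0, 130)]

def area_for_flow(mdot, rho, dp, cd=0.62):
    """Orifice area needed for a target mass flow at the measured
    pressure drop, from mdot = cd * A * sqrt(2 * rho * dp)."""
    return mdot / (cd * (2.0 * rho * dp) ** 0.5)

def steps_for_area(area):
    """Feed-forward step command: linear interpolation in the table."""
    for (a0, s0), (a1, s1) in zip(AREA_TABLE, AREA_TABLE[1:]):
        if a0 <= area <= a1:
            return s0 + (s1 - s0) * (area - a0) / (a1 - a0)
    return AREA_TABLE[-1][1]   # saturate at full open

def feedback_trim(step_cmd, flow_meas, flow_target, gain=5.0):
    """Feedback loop: trim the step command on the measured flow error."""
    return step_cmd + gain * (flow_target - flow_meas)
```

    The look-up table gives a fast first move toward the right orifice area; the feedback term then removes any residual error between measured and predetermined air flow.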

  1. Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian

    2009-07-15

    Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing by the probability density functions used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referenced ADF-PCM{chi}. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCM{chi} model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCM{chi} model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations.
Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCM{chi} reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape. (author)

  2. Self-addressed diffractive lens schemes for the characterization of LCoS displays

    NASA Astrophysics Data System (ADS)

    Zhang, Haolin; Lizana, Angel; Iemmi, Claudio; Monroy-Ramírez, Freddy A.; Marquez, Andrés.; Moreno, Ignacio; Campos, Juan

    2018-02-01

    We propose a self-calibration method to calibrate both the phase-voltage look-up table and the screen phase distribution of Liquid Crystal on Silicon (LCoS) displays by implementing different lens configurations on the studied device within the same optical scheme. On the one hand, the phase-voltage relation is determined from interferometric measurements, which are obtained by addressing split-lens phase distributions on the LCoS display. On the other hand, the surface profile is retrieved by self-addressing a diffractive micro-lens array to the LCoS display, so that we configure a Shack-Hartmann wavefront sensor that self-determines the screen's spatial variations. Moreover, both the phase-voltage response and the surface phase inhomogeneity of the LCoS are measured within the same experimental set-up, without the necessity of further adjustments. Experimental results prove the usefulness of the above-mentioned technique for LCoS display characterization.

  3. Synchronization trigger control system for flow visualization

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1987-01-01

    The use of cinematography or holographic interferometry for dynamic flow visualization in an internal combustion engine requires a control device that globally synchronizes camera and light source timing at a predefined shaft encoder angle. The device is capable of 0.35 deg resolution for rotational speeds of up to 73 240 rpm. This was achieved by implementing a shaft-encoder-signal-addressed look-up table (LUT) and appropriate latches. The developed digital signal processing technique achieves high-speed triggering-angle detection within 25 nsec by using direct parallel bit comparison of the shaft encoder digital code with a simulated angle reference code, instead of angle value comparison, which involves more complicated computation steps. In order to establish synchronization to an AC reference signal whose magnitude varies with the rotating speed, a dynamic peak follow-up synchronization technique has been devised. This method scrutinizes the reference signal and provides the right timing within 40 nsec. Two application examples are described.
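
    The direct bit-comparison idea can be sketched as follows: rather than decoding the encoder value and comparing angles numerically each sample, the bit pattern for the desired trigger angle is pre-computed once, and each incoming raw code is tested with a single equality compare. A 12-bit binary encoder is assumed here purely for illustration:

```python
BITS = 12
COUNTS = 1 << BITS      # 4096 codes per revolution (~0.088 deg per count)

def code_for_angle(angle_deg):
    """Pre-compute the raw encoder code for a trigger angle."""
    return int(round(angle_deg / 360.0 * COUNTS)) % COUNTS

def make_trigger(angle_deg):
    """Return a predicate doing one parallel-compare per encoder sample."""
    ref = code_for_angle(angle_deg)
    return lambda raw_code: raw_code == ref

fire_at_tdc = make_trigger(0.0)
```

    In hardware the same structure is a latch holding the reference code and a bank of XOR/NOR gates, which is what makes the nanosecond-scale detection possible.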

  4. A simplified up-down method (SUDO) for measuring mechanical nociception in rodents using von Frey filaments

    PubMed Central

    2014-01-01

    Background The measurement of mechanosensitivity is a key method for the study of pain in animal models. This is often accomplished with the use of von Frey filaments in an up-down testing paradigm. The up-down method described by Chaplan et al. (J Neurosci Methods 53:55–63, 1994) for mechanosensitivity testing in rodents remains one of the most widely used methods for measuring pain in animals. However, this method results in animals receiving a varying number of stimuli, which may lead to animals in different groups receiving different testing experiences that influences their later responses. To standardize the measurement of mechanosensitivity we developed a simplified up-down method (SUDO) for estimating paw withdrawal threshold (PWT) with von Frey filaments that uses a constant number of five stimuli per test. We further refined the PWT calculation to allow the estimation of PWT directly from the behavioral response to the fifth stimulus, omitting the need for look-up tables. Results The PWT estimates derived using SUDO strongly correlated (r > 0.96) with the PWT estimates determined with the conventional up-down method of Chaplan et al., and this correlation remained very strong across different levels of tester experience, different experimental conditions, and in tests from both mice and rats. The two testing methods also produced similar PWT estimates in prospective behavioral tests of mice at baseline and after induction of hyperalgesia by intraplantar capsaicin or complete Freund’s adjuvant. Conclusion SUDO thus offers an accurate, fast and user-friendly replacement for the widely used up-down method of Chaplan et al. PMID:24739328

  5. Spatial Data Mining for Estimating Cover Management Factor of Universal Soil Loss Equation

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Lin, T. C.; Chiang, S. H.; Chen, W. W.

    2016-12-01

    Universal Soil Loss Equation (USLE) is a widely used mathematical model that describes long-term soil erosion processes. Among the six different soil erosion risk factors of USLE, the cover-management factor (C-factor) is related to land-cover/land-use. The value of C-factor ranges from 0.001 to 1, so it alone might cause a thousandfold difference in a soil erosion analysis using USLE. The traditional methods for the estimation of USLE C-factor include in situ experiments, soil physical parameter models, USLE look-up tables with land use maps, and regression models between vegetation indices and C-factors. However, these methods are either difficult or too expensive to implement in large areas. In addition, the values of C-factor obtained using these methods can not be updated frequently, either. To address this issue, this research developed a spatial data mining approach to estimate the values of C-factor with assorted spatial datasets for a multi-temporal (2004 to 2008) annual soil loss analysis of a reservoir watershed in northern Taiwan. The idea is to establish the relationship between the USLE C-factor and spatial data consisting of vegetation indices and texture features extracted from satellite images, soil and geology attributes, digital elevation model, road and river distribution etc. A decision tree classifier was used to rank influential conditional attributes in the preliminary data mining. Then, factor simplification and separation were considered to optimize the model and the random forest classifier was used to analyze 9 simplified factor groups. Experimental results indicate that the overall accuracy of the data mining model is about 79% with a kappa value of 0.76. The estimated soil erosion amounts in 2004-2008 according to the data mining results are about 50.39 - 74.57 ton/ha-year after applying the sediment delivery ratio and correction coefficient. 
Compared with estimates calculated using C-factors from look-up tables, the soil erosion values estimated with C-factors generated from the spatial data mining results agree more closely with the values published by the watershed administration authority.

  6. AGILE: Autonomous Global Integrated Language Exploitation

    DTIC Science & Technology

    2008-04-01

    training is extending the pronunciation dictionary to cover any additional words. For many languages this is relatively straightforward via grapheme-to...into one or more word sequences and look up the constituent parts in the Master dictionary or apply Buckwalter to them. The Buckwalter prefix table was...errors involve the article ’Al’. As a result of this analysis, the pronunciation dictionary was extended to add alternate pronunciations for the

  7. Trip Energy Estimation Methodology and Model Based on Real-World Driving Data for Green Routing Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holden, Jacob; Van Til, Harrison J; Wood, Eric W

A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
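The category-and-average mechanism described in this abstract can be sketched in a few lines. The bin boundaries, field names, and units below are invented for illustration and are not NREL's actual categories:

```python
def categorize(avg_speed_mph, gradient_pct):
    """Bin a road segment by average speed and gradient (illustrative bins)."""
    speed_bin = min(int(avg_speed_mph // 10), 7)  # 10-mph bins, capped at 70+
    grade_bin = -1 if gradient_pct < -1 else (1 if gradient_pct > 1 else 0)
    return (speed_bin, grade_bin)

def build_rate_table(training_segments):
    """Average the observed energy rate (kWh/mi) for each category."""
    sums, counts = {}, {}
    for seg in training_segments:
        key = categorize(seg["speed"], seg["grade"])
        sums[key] = sums.get(key, 0.0) + seg["rate"]
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def estimate_trip_energy(table, trip_segments):
    """Categorize each proposed segment, append its rate from the table, integrate."""
    return sum(table[categorize(s["speed"], s["grade"])] * s["miles"]
               for s in trip_segments)
```

The appeal of the design is that the training pass and the estimation pass share one categorization function, so the table generalizes to any driving data that can be binned the same way.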

  8. Path integration of head direction: updating a packet of neural activity at the correct speed using axonal conduction delays.

    PubMed

    Walters, Daniel; Stringer, Simon; Rolls, Edmund

    2013-01-01

    The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a "look-up" table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity.

  9. Strategies to overcome photobleaching in algorithm-based adaptive optics for nonlinear in-vivo imaging.

    PubMed

    Caroline Müllenbroich, M; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith

    2014-01-01

We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing on either two-photon fluorescence or second harmonic signals as merit factors. We present here several strategies to overcome photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures and the intrinsic second harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure to the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon-starved experiments, for example fluorescence lifetime imaging, or when photobleaching is a concern.

  10. ECG compression using Slantlet and lifting wavelet transform with and without normalisation

    NASA Astrophysics Data System (ADS)

    Aggarwal, Vibha; Singh Patterh, Manjeet

    2013-05-01

This article analyses the performance of: (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within the tolerance. Then, a binary look-up table is built to store the position map for zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding. The look-up table is encoded by Huffman coding. The results show that the LWT gives better results than the SLT evaluated in this article. This transform is then considered to evaluate the effect of normalisation before thresholding. In the case of normalisation, the TC are normalised by dividing the TC by ? (where ? is the number of samples) to reduce the range of the TC. The normalised coefficients (NC) are then thresholded. After that, the procedure is the same as for coefficients without normalisation. The results show that the compression ratio (CR) for the LWT with normalisation is improved compared to that without normalisation.
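The position-map idea, a binary look-up table marking which thresholded coefficients survive, stored separately from the nonzero values, can be sketched as follows (a minimal illustration; the paper's quantisation and entropy coding stages are omitted):

```python
def threshold_and_map(coeffs, thr):
    """Zero out coefficients below thr; return a binary position map plus the
    surviving nonzero coefficients (the values the quantiser would then see)."""
    position_map = [1 if abs(c) >= thr else 0 for c in coeffs]
    nonzero = [c for c, keep in zip(coeffs, position_map) if keep]
    return position_map, nonzero

def reconstruct(position_map, nonzero):
    """Re-expand the full coefficient vector from the map and the nonzero list."""
    out, it = [], iter(nonzero)
    for keep in position_map:
        out.append(next(it) if keep else 0.0)
    return out
```

Separating the 1-bit map from the values is what lets the two streams be coded independently (Huffman for the map, Max-Lloyd plus arithmetic coding for the values, in the paper's scheme).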

  11. The compartment bag test (CBT) for enumerating fecal indicator bacteria: Basis for design and interpretation of results.

    PubMed

    Gronewold, Andrew D; Sobsey, Mark D; McMahan, Lanakila

    2017-06-01

    For the past several years, the compartment bag test (CBT) has been employed in water quality monitoring and public health protection around the world. To date, however, the statistical basis for the design and recommended procedures for enumerating fecal indicator bacteria (FIB) concentrations from CBT results have not been formally documented. Here, we provide that documentation following protocols for communicating the evolution of similar water quality testing procedures. We begin with an overview of the statistical theory behind the CBT, followed by a description of how that theory was applied to determine an optimal CBT design. We then provide recommendations for interpreting CBT results, including procedures for estimating quantiles of the FIB concentration probability distribution, and the confidence of compliance with recognized water quality guidelines. We synthesize these values in custom user-oriented 'look-up' tables similar to those developed for other FIB water quality testing methods. Modified versions of our tables are currently distributed commercially as part of the CBT testing kit. Published by Elsevier B.V.

  12. Evaluation of sea-surface photosynthetically available radiation algorithms under various sky conditions and solar elevations.

    PubMed

    Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel

    2018-04-20

    In this study, we report on the performance of satellite-based photosynthetically available radiation (PAR) algorithms used in published oceanic primary production models. The performance of these algorithms was evaluated using buoy observations under clear and cloudy skies, and for the particular case of low sun angles typically encountered at high latitudes or at moderate latitudes in winter. The PAR models consisted of (i) the standard one from the NASA-Ocean Biology Processing Group (OBPG), (ii) the Gregg and Carder (GC) semi-analytical clear-sky model, and (iii) look-up-tables based on the Santa Barbara DISORT atmospheric radiative transfer (SBDART) model. Various combinations of atmospheric inputs, empirical cloud corrections, and semi-analytical irradiance models yielded a total of 13 (11 + 2 developed in this study) different PAR products, which were compared with in situ measurements collected at high frequency (15 min) at a buoy site in the Mediterranean Sea (the "BOUée pour l'acquiSition d'une Série Optique à Long termE," or, "BOUSSOLE" site). An objective ranking method applied to the algorithm results indicated that seven PAR products out of 13 were well in agreement with the in situ measurements. Specifically, the OBPG method showed the best overall performance with a root mean square difference (RMSD) (bias) of 19.7% (6.6%) and 10% (6.3%) followed by the look-up-table method with a RMSD (bias) of 25.5% (6.8%) and 9.6% (2.6%) at daily and monthly scales, respectively. Among the four methods based on clear-sky PAR empirically corrected for cloud cover, the Dobson and Smith method consistently underestimated daily PAR while the Budyko formulation overestimated daily PAR. 
Empirically cloud-corrected methods using cloud fraction (CF) performed better under quasi-clear skies (CF < 0.3), with an RMSD (bias) of 9.7%-14.8% (3.6%-11.3%), than under partially clear to cloudy skies (0.3 < CF < 0.7). Under overcast skies (CF > 0.7), however, all methods showed larger RMSDs (biases), ranging between 32% and 80.6% (-54.5% to 8.7%). Finally, three methods tested for low sun elevations revealed systematic overestimation, and one method showed a systematic underestimation of daily PAR, with relative RMSDs as large as 50% under all sky conditions. Under partially clear to overcast conditions all the methods underestimated PAR. Model uncertainties predominantly depend on which cloud products were used.

  13. Capacitive touch sensing : signal and image processing algorithms

    NASA Astrophysics Data System (ADS)

    Baharav, Zachi; Kakarala, Ramakrishna

    2011-03-01

Capacitive touch sensors have been in use for many years, and recently gained center stage with their ubiquitous use in smart-phones. In this work we will analyze the most common method of projected capacitive sensing, that of absolute capacitive sensing, together with the most common sensing pattern, that of diamond-shaped sensors. After a brief introduction to the problem, and the reasons behind its popularity, we will formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: a circular finger on a wire grid, and a square finger on a square grid. The solutions give insight into the ambiguities of finding finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, general look-up-table, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods, and point to possible future research.

  14. Accelerated computer generated holography using sparse bases in the STFT domain.

    PubMed

    Blinder, David; Schelkens, Peter

    2018-01-22

Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU, and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.

  15. Method for routing events from key strokes in a multi-processing computer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, D.A.; Rustici, E.; Carter, K.H.

    1990-01-23

    The patent describes a method of routing user input in a computer system which concurrently runs a plurality of processes. It comprises: generating keycodes representative of keys typed by a user; distinguishing generated keycodes by looking up each keycode in a routing table which assigns each possible keycode to an individual assigned process of the plurality of processes, one of which processes being a supervisory process; then, sending each keycode to its assigned process until a keycode assigned to the supervisory process is received; sending keycodes received subsequent to the keycode assigned to the supervisory process to a buffer; next, providing additional keycodes to the supervisory process from the buffer until the supervisory process has completed operation; and sending keycodes stored in the buffer to processes assigned therewith after the supervisory process has completed operation.
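The claimed routing scheme can be sketched roughly as follows. The process names and the notion of the supervisor "completing" after consuming a fixed number of keycodes are invented for illustration; the patent's buffering is modelled here by feeding buffered input straight to the supervisor:

```python
def route_keycodes(keycodes, routing_table, supervisor_needs):
    """Deliver keycodes per the routing table. When a keycode routed to the
    'supervisor' process arrives, subsequent keycodes are diverted to the
    supervisor until it has consumed supervisor_needs of them (modelling
    'completed operation'); normal table-driven routing then resumes."""
    delivered = {p: [] for p in set(routing_table.values())}
    remaining = 0  # keycodes the supervisor still needs before completing
    for kc in keycodes:
        if remaining > 0:
            delivered["supervisor"].append(kc)  # stands in for buffer replay
            remaining -= 1
            continue
        target = routing_table[kc]
        delivered[target].append(kc)
        if target == "supervisor":
            remaining = supervisor_needs
    return delivered
```

The routing table is the look-up step: one dictionary probe per keycode decides the destination process, which is what makes the dispatch constant-time per keystroke.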

  16. The Imperial College Thermophysical Properties Data Centre

    NASA Astrophysics Data System (ADS)

    Angus, S.; Cole, W. A.; Craven, R.; de Reuck, K. M.; Trengove, R. D.; Wakeham, W. A.

    1986-07-01

    The IUPAC Thermodynamic Tables Project Centre in London has at its disposal considerable expertise on the production and utilization of high-accuracy equations of state which represent the thermodynamic properties of substances. For some years they have been content to propagate this information by the traditional method of book production, but the increasing use of the computer in industry for process design has shown that an additional method was needed. The setting up of the IUPAC Transport Properties Project Centre, also at Imperial College, whose products would also be in demand by industry, afforded the occasion for a new look at the problem. The solution has been to set up the Imperial College Thermophysical Properties Data Centre, which embraces the two IUPAC Project Centres, and for it to establish a link with the existing Physical Properties Data Service of the Institution of Chemical Engineers, thus providing for the dissemination of the available information without involving the Centres in problems such as those of marketing and advertising. This paper outlines the activities of the Centres and discusses the problems in bringing their products to the attention of industry in suitable form.

  17. A new zenith-looking narrow-band radiometer-based system (ZEN) for dust aerosol optical depth monitoring

    NASA Astrophysics Data System (ADS)

    Almansa, A. Fernando; Cuevas, Emilio; Torres, Benjamín; Barreto, África; García, Rosa D.; Cachorro, Victoria E.; de Frutos, Ángel M.; López, César; Ramos, Ramón

    2017-02-01

    A new zenith-looking narrow-band radiometer-based system (ZEN), conceived for dust aerosol optical depth (AOD) monitoring, is presented in this paper. The ZEN system comprises a new radiometer (ZEN-R41) and a methodology for AOD retrieval (ZEN-LUT). The ZEN-R41 has been designed to be stand-alone and without moving parts, making it a low-cost and robust instrument with low maintenance, appropriate for deployment in remote and unpopulated desert areas. The ZEN-LUT method is based on the comparison of the measured zenith sky radiance (ZSR) with a look-up table (LUT) of computed ZSRs. The LUT is generated with the libRadtran radiative transfer code. A sensitivity study proved that the ZEN-LUT method is appropriate for inferring AOD from ZSR measurements with an AOD standard uncertainty of up to 0.06 for AOD500 nm ~ 0.5 and up to 0.15 for AOD500 nm ~ 1.0, considering instrumental errors of 5%. The validation of the ZEN-LUT technique was performed using data from AErosol RObotic NETwork (AERONET) Cimel Electronique 318 photometers (CE318). A comparison between AOD obtained by applying the ZEN-LUT method to ZSRs (inferred from CE318 diffuse-sky measurements) and AOD provided by AERONET (derived from CE318 direct-sun measurements) was carried out at three sites characterized by a regular presence of desert mineral dust aerosols: Izaña and Santa Cruz in the Canary Islands and Tamanrasset in Algeria. The results show a coefficient of determination (R2) ranging from 0.99 to 0.97, and root mean square errors (RMSE) ranging from 0.010 at Izaña to 0.032 at Tamanrasset. The comparison of ZSR values from the ZEN-R41 and the CE318 showed an absolute relative mean bias (RMB) < 10%. ZEN-R41 AOD values inferred from the ZEN-LUT methodology were compared with AOD provided by AERONET, showing fairly good agreement at all wavelengths, with mean absolute AOD differences < 0.030 and R2 higher than 0.97.

  18. The VLSI design of a Reed-Solomon encoder using Berlekamp's bit-serial multiplier algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.

    1982-01-01

    Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A dual basis (255, 223) over a Galois field is used. The conventional RS encoder for long codes often requires look-up tables to perform the multiplication of two field elements. Berlekamp's algorithm requires only shifting and exclusive-OR operations.
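The table look-up multiplication that conventional encoders use, and that Berlekamp's shift-and-XOR scheme avoids, is typically implemented with log/antilog tables. A sketch for GF(2^8) using the common primitive polynomial 0x11d follows; the paper's dual-basis representation may use a different field generator:

```python
# Build antilog (EXP) and log (LOG) tables for GF(2^8) by repeated doubling,
# reducing modulo the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d).
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:        # overflowed 8 bits: reduce by the field polynomial
        x ^= 0x11d
for i in range(255, 512):  # doubled table avoids a modulo in gf_mul
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiply two GF(256) elements by adding their discrete logarithms."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```

The two 256-entry tables replace a full 256x256 product table, which is exactly the kind of memory the bit-serial multiplier eliminates entirely.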

  19. Puzzler Solution: Perfect Weather for a Picnic | Poster

    Cancer.gov

    It looks like we stumped you. We did not receive any correct guesses for the current Poster Puzzler, which is an image of the top of the Building 434 picnic table, with a view looking towards Building 472. This picnic table and others across campus were supplied by the NCI at Frederick Campus Improvement Committee. Building 434, located on Wood Street, is home to the staff of

  20. Becoming Reactive by Concretization

    NASA Technical Reports Server (NTRS)

    Prieditis, Armand; Janakiraman, Bhaskar

    1992-01-01

    One way to build a reactive system is to construct an action table indexed by the current situation or stimulus. The action table describes what course of action to pursue for each situation or stimulus. This paper describes an incremental approach to constructing the action table by achieving goals with a hierarchical search system. These hierarchies are generated with transformations called concretizations, which add constraints to a problem and which can reduce the search space. The basic idea is that an action for a state is looked up in the action table and executed whenever the action table has an entry for that state; otherwise, a path is found to the nearest (cost-wise, in a graph with cost-weighted arcs) state that has a mapping from a state in the next highest hierarchy. For each state along the solution path, the successor state in the path is cached in the action table entry for that state. Without caching, the hierarchical search system can logarithmically reduce search. When the table is complete, the system no longer searches: it simply reacts by proceeding to the state listed in the table for each state. Since the cached information is specific only to the nearest state in the next highest hierarchy and not the goal, inter-goal transfer of reactivity is possible. To illustrate our approach, we show how an implemented hierarchical search system can become completely reactive.
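The look-up-or-search-and-cache loop described above can be sketched as follows, with plain BFS standing in for the hierarchical search (the graph representation and single-level caching are simplifications of the paper's scheme):

```python
from collections import deque

def next_state(state, goal, neighbors, action_table):
    """Return the successor toward goal; search only on an action-table miss,
    caching the successor for every state on the found path."""
    if state in action_table:          # reactive case: pure table look-up
        return action_table[state]
    parents, frontier = {state: None}, deque([state])
    while frontier:                    # BFS stands in for hierarchical search
        s = frontier.popleft()
        if s == goal:
            # walk the path back, caching each state's successor in the table
            while parents[s] is not None:
                action_table[parents[s]] = s
                s = parents[s]
            return action_table.get(state, state)
        for n in neighbors[s]:
            if n not in parents:
                parents[n] = s
                frontier.append(n)
    return None
```

After enough queries the table covers the state space and every call returns from the first look-up, which is the "completely reactive" end state the paper describes.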

  1. Choice: 36 band feature selection software with applications to multispectral pattern recognition

    NASA Technical Reports Server (NTRS)

    Jones, W. C.

    1973-01-01

    Feature selection software was developed at the Earth Resources Laboratory that is capable of inputting up to 36 channels and selecting channel subsets according to several criteria based on divergence. One of the criteria used is compatible with the table look-up classifier requirements. The software indicates which channel subset best separates (based on average divergence) each class from all other classes. The software employs an exhaustive search technique, and computer time is not prohibitive. A typical task to select the best 4 of 22 channels for 12 classes takes 9 minutes on a Univac 1108 computer.

  2. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  3. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. 
For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on-line. The optimal avoidance trajectory is implemented as a receding-horizon model predictive control law. Therefore, at each time step, the optimal avoidance trajectory is found and the first time step of its acceleration is applied. At the next time step of the control computer, the problem is re-solved and the new first time step is again applied. This continual updating allows the RCA algorithm to adapt to a colliding spacecraft that is making erratic course changes.

  4. Determination of circumsolar radiation from Meteosat Second Generation

    NASA Astrophysics Data System (ADS)

    Reinhardt, B.; Buras, R.; Bugliaro, L.; Wilbert, S.; Mayer, B.

    2014-03-01

    Reliable data on circumsolar radiation, which is caused by scattering of sunlight by cloud or aerosol particles, is becoming more and more important for the resource assessment and design of concentrating solar technologies (CSTs). However, measuring circumsolar radiation is demanding and only very limited data sets are available. As a step to bridge this gap, a method was developed which allows for determination of circumsolar radiation from cirrus cloud properties retrieved by the geostationary satellites of the Meteosat Second Generation (MSG) family. The method takes output from the COCS algorithm to generate a cirrus mask from MSG data and then uses the retrieval algorithm APICS to obtain the optical thickness and the effective radius of the detected cirrus, which in turn are used to determine the circumsolar radiation from a pre-calculated look-up table. The look-up table was generated from extensive calculations using a specifically adjusted version of the Monte Carlo radiative transfer model MYSTIC and by developing a fast yet precise parameterization. APICS was also improved such that it determines the surface albedo, which is needed for the cloud property retrieval, in a self-consistent way instead of using external data. Furthermore, it was extended to consider new ice particle shapes to allow for an uncertainty analysis concerning this parameter. We found that the nescience of the ice particle shape leads to an uncertainty of up to 50%. A validation with 1 yr of ground-based measurements shows, however, that the frequency distribution of the circumsolar radiation can be well characterized with typical ice particle shape mixtures, which feature either smooth or severely roughened particle surfaces. However, when comparing instantaneous values, timing and amplitude errors become evident. 
For the circumsolar ratio (CSR) this is reflected in a mean absolute deviation (MAD) of 0.11 for both employed particle shape mixtures, and a bias of 4% and 11% for the mixtures with smooth and roughened particles, respectively. If measurements with sub-scale cumulus clouds within the relevant satellite pixels are manually excluded, the instantaneous agreement between satellite and ground measurements improves. For a two-month time series, for which a manual screening of all-sky images was performed, MAD values of 0.08 and 0.07 were obtained for the two employed ice particle mixtures, respectively.

  5. FIR Filter of DS-CDMA UWB Modem Transmitter

    NASA Astrophysics Data System (ADS)

    Kang, Kyu-Min; Cho, Sang-In; Won, Hui-Chul; Choi, Sang-Sung

    This letter presents low-complexity digital pulse shaping filter structures of a direct sequence code division multiple access (DS-CDMA) ultra wide-band (UWB) modem transmitter with a ternary spreading code. The proposed finite impulse response (FIR) filter structures using a look-up table (LUT) have the effect of saving the amount of memory by about 50% to 80% in comparison to the conventional FIR filter structures, and consequently are suitable for a high-speed parallel data process.
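A common way to realize such memory-saving LUT pulse-shaping filters is distributed arithmetic: for ±1 chip inputs, every output sample is a signed sum of filter taps, so all 2^N tap-sum combinations can be precomputed. The sketch below illustrates that general idea only; the letter's actual structures and its handling of the ternary spreading code are not reproduced here:

```python
from itertools import product

def build_lut(taps):
    """Precompute the tap-sum for every N-bit chip pattern
    (bit k set -> chip +1 at tap k, clear -> chip -1)."""
    n = len(taps)
    return {bits: sum(t if b else -t for t, b in zip(taps, bits))
            for bits in product((0, 1), repeat=n)}

def fir_filter(chips, taps, lut):
    """Filter a +/-1 chip stream by LUT indexing instead of multiplies."""
    n = len(taps)
    padded = [-1] * (n - 1) + list(chips)   # assume -1 history before start
    out = []
    for i in range(len(chips)):
        window = padded[i:i + n][::-1]      # newest chip aligns with taps[0]
        key = tuple(1 if c > 0 else 0 for c in window)
        out.append(lut[key])
    return out
```

Each output sample costs one table index instead of N multiply-accumulates; the memory/speed trade-off is then tuned by splitting long filters into several smaller LUTs.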

  6. Recognizing human actions by learning and matching shape-motion prototype trees.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2012-03-01

    A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
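The prototype-to-prototype distance table works as in the following sketch, with a hypothetical scalar distance function standing in for the paper's joint shape-motion distance:

```python
def build_distance_table(prototypes, dist):
    """Precompute all pairwise prototype distances once, offline."""
    n = len(prototypes)
    return [[dist(prototypes[i], prototypes[j]) for j in range(n)]
            for i in range(n)]

def sequence_distance(seq_a, seq_b, table):
    """Distance between two equal-length prototype-label sequences,
    obtained purely by table indexing (no per-frame recomputation)."""
    return sum(table[i][j] for i, j in zip(seq_a, seq_b))
```

Because test sequences are matched as label strings, every frame comparison reduces to one table index, which is the source of the order-of-magnitude speedup over brute-force frame-to-frame distances reported in the abstract.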

  7. Advanced Machine Learning Emulators of Radiative Transfer Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

    Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, and the look-up table generation, as well as for its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, providing an estimation of uncertainty, and estimations of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators in toy examples, on the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and for the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.

  8. A Dual-Wavelength Radar Technique to Detect Hydrometeor Phases

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert

    2016-01-01

    This study is aimed at investigating the feasibility of a Ku- and Ka-band space/air-borne dual-wavelength radar algorithm to discriminate various phase states of precipitating hydrometeors. A phase-state classification algorithm has been developed from the radar measurements of snow, mixed-phase and rain obtained from stratiform storms. The algorithm, presented in the form of a look-up table that links the Ku-band radar reflectivities and dual-frequency ratio (DFR) to the phase states of hydrometeors, is checked by applying it to the measurements of the Jet Propulsion Laboratory, California Institute of Technology, Airborne Precipitation Radar Second Generation (APR-2). In creating the statistically based phase look-up table, the attenuation-corrected (or true) radar reflectivity factors are employed, leading to better accuracy in determining the hydrometeor phase. In practice, however, the true radar reflectivities are not always available before the phase states of the hydrometeors are determined. Therefore, it is desirable to make use of the measured radar reflectivities in classifying the phase states. To do this, a phase-identification procedure is proposed that uses only measured radar reflectivities. The procedure is then tested using APR-2 airborne radar data. Analysis of the classification results in stratiform rain indicates that the regions of snow, mixed-phase and rain derived from the phase-identification algorithm coincide reasonably well with those determined from the measured radar reflectivities and linear depolarization ratio (LDR).
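
The look-up-table classification step can be caricatured as follows; the thresholds below are invented placeholders, not the statistically derived values from the radar measurements:

```python
# Toy look-up mapping Ku-band reflectivity (dBZ) and dual-frequency ratio
# DFR = Z_Ku - Z_Ka (dB) to a phase class. Thresholds are illustrative only.
def classify_phase(z_ku_dbz, dfr_db):
    if dfr_db > 6.0:
        return "snow"        # large DFR: non-Rayleigh scattering by snow
    if dfr_db > 2.0 and z_ku_dbz > 30.0:
        return "mixed"       # bright-band-like enhancement
    return "rain"

print(classify_phase(25.0, 8.0))   # snow
print(classify_phase(35.0, 4.0))   # mixed
print(classify_phase(28.0, 1.0))   # rain
```

The actual table in the study is built statistically from observed stratiform profiles rather than from hand-set thresholds.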

  9. Measurement of the spatially distributed temperature and soot loadings in a laminar diffusion flame using a Cone-Beam Tomography technique

    NASA Astrophysics Data System (ADS)

    Zhao, Huayong; Williams, Ben; Stone, Richard

    2014-01-01

    A new low-cost optical diagnostic technique, called Cone Beam Tomographic Three Colour Spectrometry (CBT-TCS), has been developed to measure the planar distributions of temperature, soot particle size, and soot volume fraction in a co-flow axisymmetric laminar diffusion flame. The image of a flame is recorded by a colour camera, and then, by using colour interpolation and applying a cone beam tomography algorithm, a colour map corresponding to a diametral plane can be reconstructed. Look-up tables calculated using Planck's law and different scattering models are then employed to deduce the temperature, approximate average soot particle size and soot volume fraction in each voxel (volumetric pixel). A sensitivity analysis of the look-up tables shows that the results have a high temperature resolution but a relatively low soot particle size resolution. The assumptions underlying the technique are discussed in detail. Sample data from an ethylene laminar diffusion flame are compared with data in the literature for similar flames. The comparison shows very consistent temperature and soot volume fraction profiles. Further analysis indicates that the differences seen in comparison with published results are within the measurement uncertainties. The methodology is ready to be applied to 3D measurements by capturing multiple flame images from different angles for non-axisymmetric flames.
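
The temperature part of such a look-up table follows directly from Planck's law: the two-colour intensity ratio is monotone in temperature, so a precomputed table can be inverted by nearest lookup. A sketch with illustrative wavelengths and grid (not the paper's calibration):

```python
import math

C2 = 1.4388e-2          # second radiation constant, m*K

def planck_ratio(T, lam1=650e-9, lam2=450e-9):
    # ratio of Planck spectral emissive powers at two wavelengths
    b = lambda lam: lam**-5 / (math.exp(C2 / (lam * T)) - 1.0)
    return b(lam1) / b(lam2)

temps = list(range(1200, 2601, 10))               # LUT grid, K
table = [(planck_ratio(t), t) for t in temps]

def temperature_from_ratio(r):
    # nearest-entry inversion of the precomputed table
    return min(table, key=lambda row: abs(row[0] - r))[1]

T_true = 1843.0
r = planck_ratio(T_true)
print(temperature_from_ratio(r))   # recovers T to within the 10 K grid
```

The real technique additionally folds in camera spectral response and soot scattering models, which is why full tables rather than a closed-form inversion are used.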

  10. Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target.

    PubMed

    Yin, Fang; Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song

    2018-03-28

    This paper proposes an autonomous algorithm to determine the relative pose between the chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by searching for congruent tetrahedra in the scanned point cloud and the model point cloud, on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method at arbitrary initial attitudes, a simulation system is presented. Specifically, the performance of the proposed method in providing the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted, and the results demonstrate the effectiveness of the proposed method.
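
A minimal sketch of the hashing idea (a single-level simplification of the paper's two-level index): key each tetrahedron by its sorted, quantized edge lengths, so that a congruent tetrahedron in the scan retrieves matching model candidates in constant time instead of by exhaustive comparison:

```python
from itertools import combinations

def edge_key(points, q=0.01):
    # sorted edge lengths are invariant under rigid motion; quantizing
    # by q tolerates measurement noise
    d = sorted(((ax-bx)**2 + (ay-by)**2 + (az-bz)**2) ** 0.5
               for (ax, ay, az), (bx, by, bz) in combinations(points, 2))
    return tuple(round(e / q) for e in d)

model = [(0,0,0), (1,0,0), (0,1,0), (0,0,1), (2,2,2)]
table = {}
for tet in combinations(model, 4):
    table.setdefault(edge_key(tet), []).append(tet)

# A scan tetrahedron congruent to a model one (here: identical points).
query = ((0,0,0), (1,0,0), (0,1,0), (0,0,1))
print(edge_key(query) in table)   # True
```

Once candidate correspondences are retrieved, the rigid transform they imply can be estimated and refined by ICP, as the abstract describes.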

  11. Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target

    PubMed Central

    Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song

    2018-01-01

    This paper proposes an autonomous algorithm to determine the relative pose between the chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by searching for congruent tetrahedra in the scanned point cloud and the model point cloud, on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method at arbitrary initial attitudes, a simulation system is presented. Specifically, the performance of the proposed method in providing the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted, and the results demonstrate the effectiveness of the proposed method. PMID:29597323

  12. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
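
The presumed-PDF averaging at the heart of this assessment can be sketched as follows: given the mean and variance of the mixture fraction, presume a β distribution and average a source term against it by quadrature. The source term below is a made-up placeholder, not a chemistry term:

```python
import math

def beta_params(mean, var):
    # method of moments: a/(a+b) = mean, ab/((a+b)^2 (a+b+1)) = var
    g = mean * (1 - mean) / var - 1.0
    return mean * g, (1 - mean) * g

def beta_pdf(z, a, b):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return z**(a-1) * (1-z)**(b-1) / B

def averaged_source(S, mean, var, n=2000):
    # midpoint-rule quadrature of S(Z) against the presumed beta PDF
    a, b = beta_params(mean, var)
    dz = 1.0 / n
    zs = [(i + 0.5) * dz for i in range(n)]
    return sum(S(z) * beta_pdf(z, a, b) for z in zs) * dz

S = lambda z: z * (1 - z)            # placeholder "source term"
print(round(averaged_source(S, 0.3, 0.01), 4))  # 0.2 = mean - var - mean**2
```

For this quadratic S the exact answer is mean − var − mean², which the quadrature reproduces; the paper's point is that for real, multimodal scalar PDFs a two-moment β fit is no longer adequate.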

  13. Towards Linking 3D SAR and Lidar Models with a Spatially Explicit Individual Based Forest Model

    NASA Astrophysics Data System (ADS)

    Osmanoglu, B.; Ranson, J.; Sun, G.; Armstrong, A. H.; Fischer, R.; Huth, A.

    2017-12-01

    In this study, we present a parameterization of the FORMIND individual-based gap model (IBGM) for old-growth Atlantic lowland rainforest in La Selva, Costa Rica, for the purpose of informing multisensor remote sensing techniques for aboveground biomass estimation. The model was successfully parameterized and calibrated for the study site; results show that the simulated forest reproduces the structural complexity of Costa Rican rainforest based on comparisons with CARBONO inventory plot data. Though the simulated stem numbers (378) slightly underestimated the plot data (418), particularly for canopy-dominant intermediate shade-tolerant trees and shade-tolerant understory trees, overall there was a 9.7% difference. Aboveground biomass (kg/ha) showed a 0.1% difference between the simulated forest and the inventory plot dataset. The Costa Rica FORMIND simulation was then used to parameterize spatially explicit (3D) SAR and lidar backscatter models. The simulated forest stands were used to generate a look-up table (LUT) as a tractable means to estimate aboveground forest biomass for these complex forests. Various combinations of lidar and radar variables were evaluated in the LUT inversion. To test the capability of future data for estimation of forest height and biomass, we considered: 1) L- (or P-) band polarimetric data (backscattering coefficients of HH, HV and VV); 2) L-band dual-pol repeat-pass InSAR data (HH/HV backscattering coefficients and coherences, height of scattering phase center at HH and HV using DEM or surface height from lidar data as reference); 3) P-band polarimetric InSAR data (canopy height from inversion of PolInSAR data, or the coherences and height of scattering phase center at HH, HV and VV); 4) various height indices from waveform lidar data; and 5) surface and canopy-top height from photon-counting lidar data. The methods for parameterizing the remote sensing models with the IBGM and developing look-up tables will be discussed, and results from various remote sensing scenarios will also be presented.

  14. Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS

    NASA Astrophysics Data System (ADS)

    Willison, A.; Bedard, D.

    This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogeneous surface materials. It represents the overall optical reflectance of objects as an sBRDF, a spectrometric quantity obtainable during an optical ground-truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters and then integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured of the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground-truth experiment. It is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate simulated quantities. The sBRDF was chosen to represent material reflectance as it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. These look-up tables are referenced when calculating the overall sBRDF of an object, where the contribution of each facet is proportionally integrated.
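
The integration step described above (broadband BRDF from the sBRDF, colour-filtered BRDF from the filter-weighted sBRDF) can be sketched with invented spectra:

```python
# Toy spectral BRDF on a 400-700 nm grid and an idealized red filter;
# both are illustrative, not measured material data.
lam = [400 + 10*i for i in range(31)]             # wavelengths, nm
sbrdf = [0.2 + 0.001*(l - 400) for l in lam]      # toy spectral BRDF
red_filter = [1.0 if l >= 600 else 0.0 for l in lam]

def integrate(y, x):
    # trapezoidal integration over wavelength
    return sum(0.5*(y[i]+y[i+1])*(x[i+1]-x[i]) for i in range(len(x)-1))

broadband = integrate(sbrdf, lam)                               # full band
red = integrate([s*f for s, f in zip(sbrdf, red_filter)], lam)  # filtered
print(round(broadband, 1), round(red, 1))   # 105.0 47.0
```

In the real package the sBRDF is also a function of illumination and observation geometry, looked up per facet before this wavelength integral is taken.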

  15. Six-Position, Frontal View Photography in Blepharoplasty: A Simple Method.

    PubMed

    Zhang, Cheng; Guo, Xiaoshuang; Han, Xuefeng; Tian, Yi; Jin, Xiaolei

    2018-02-26

    Photography plays a pivotal role in patient education, photo-documentation, preoperative planning and postsurgical evaluation in plastic surgery. It has long served as a bridge facilitating communication not only between patients and doctors, but also among plastic surgeons from different countries. Although several basic principles and photographic methods have been proposed, there is no internationally accepted photographic protocol that provides both static and dynamic information in blepharoplasty. In this article, we introduce a novel six-position, frontal view photography for thorough assessment in blepharoplasty. From October 2013 to January 2017, 1068 patients who underwent blepharoplasty were enrolled in our clinical research. All patients received six-position, frontal view photography. Pictures were taken of the patients looking up, looking down, squeezing, smiling, looking ahead and with closed eyes. Conventionally, frontal view photography contained only the last two positions. Then, both the novel six-position photographs and conventional two-position photographs were used to appraise postsurgical outcomes. Compared to conventional two-position, frontal view photography, six-position, frontal view photography can provide more detailed, thorough information about the eyes. It is of clinical significance in indicating underlying adhesion of skin/muscle/fat according to an individual's features and in assessing preoperative and postoperative dynamic changes and aesthetic outcomes. Six-position, frontal view photography is technically uncomplicated while exhibiting static, dynamic and detailed information of the eyes. This innovative method is favorable in eye assessment, especially for revision blepharoplasty. We suggest using six-position, frontal view photography to obtain comprehensive photographs. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  16. High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Chen, Qian; Gu, Guohua; Feng, Shijie; Feng, Fangxiaoyu; Li, Rubin; Shen, Guochen

    2013-08-01

    This paper introduces a high-speed three-dimensional (3-D) shape measurement technique for dynamic scenes using bi-frequency tripolar pulse-width-modulation (TPWM) fringe projection. Two wrapped phase maps with different wavelengths can be obtained simultaneously by our bi-frequency phase-shifting algorithm. The two phase maps are then unwrapped using a simple look-up-table-based number-theoretical approach. To guarantee the robustness of phase unwrapping as well as the high sinusoidality of the projected patterns, the TPWM technique is employed to generate ideal fringe patterns with slight defocus. We detail our technique, including its principle, pattern design, and system setup. Several experiments on dynamic scenes were performed, verifying that our method can achieve a speed of 1250 frames per second for fast, dense, and accurate 3-D measurements.
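
The paper's number-theoretical look-up-table unwrapping is not reproduced here, but the underlying two-wavelength principle can be sketched in its simplest heterodyne form, with illustrative fringe counts N1 = 9 and N2 = 8 chosen so that the beat phase spans exactly one fringe over the full range:

```python
import math

N1, N2 = 9, 8    # fringe counts of the two patterns (N1 - N2 = 1)

def wrap(x):
    return x - math.floor(x)          # fractional part in [0, 1)

def unwrap(p1, p2):
    """Recover normalized height h in [0,1) from two wrapped phases."""
    h_coarse = wrap(p1 - p2)          # beat phase: one fringe over the range
    k1 = round(N1 * h_coarse - p1)    # integer fringe order of pattern 1
    return (k1 + p1) / N1             # fine, unambiguous phase

h_true = 0.6173
p1, p2 = wrap(N1 * h_true), wrap(N2 * h_true)
print(round(unwrap(p1, p2), 4))       # 0.6173
```

A LUT variant precomputes the fringe-order decision for quantized phase pairs, trading this small amount of arithmetic for a table index.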

  17. CT Scans

    MedlinePlus

    ... cross-sectional pictures of your body. Doctors use CT scans to look for Broken bones Cancers Blood clots Signs of heart disease Internal bleeding During a CT scan, you lie still on a table. The table ...

  18. Bit-serial neuroprocessor architecture

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    2001-01-01

    A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The neuroprocessor architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up-table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a fixed pool of neurons to achieve the successive hidden layers of a recurrent network topology.
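
The sigmoid activation ROM amounts to a pre-tabulated function evaluated by indexing. A software caricature with an illustrative grid size and input range:

```python
import math

# Pre-tabulate sigmoid on a fixed grid, as a ROM look-up table would,
# and answer queries by nearest entry. Grid step and range are illustrative.
STEP, LO, HI = 1/64, -8.0, 8.0
ROM = [1.0 / (1.0 + math.exp(-(LO + i * STEP)))
       for i in range(int((HI - LO) / STEP) + 1)]

def sigmoid_lut(x):
    x = min(max(x, LO), HI)                   # saturate out-of-range inputs
    return ROM[round((x - LO) / STEP)]

err = max(abs(sigmoid_lut(x/100) - 1/(1+math.exp(-x/100)))
          for x in range(-800, 801))
print(err < STEP / 4)   # worst-case LUT error is bounded by the grid step
```

A hardware ROM would additionally quantize the stored values to fixed point; the time-multiplexing of neurons described in the abstract is orthogonal to this table.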

  19. LMDS Lightweight Modular Display System.

    DTIC Science & Technology

    1982-02-16

    based on standard functions. This means that the cost to produce a particular display function can be met in the most economical fashion and at the same...not mean that the NTDS interface would be eliminated. What is anticipated is the use of ETHERNET at a low level of system interface, ie internal to...GENERATOR dSYMBOL GEN eCOMMUNICATION 3-2 The architecture of the unit’s (fig 3-4) input circuitry is based on a video table look-up ROM. The function

  20. NRL Hyperspectral Imagery Trafficability Tool (HITT): Software andSpectral-Geotechnical Look-up Tables for Estimation and Mapping of Soil Bearing Strength from Hyperspectral Imagery

    DTIC Science & Technology

    2012-09-28

    spectral-geotechnical libraries and models developed during remote sensing and calibration/ validation campaigns conducted by NRL and collaborating...geotechnical libraries and models developed during remote sensing and calibration/ validation campaigns conducted by NRL and collaborating institutions in four...2010; Bachmann, Fry, et al, 2012a). The NRL HITT tool is a model for how we develop and validate software, and the future development of tools by

  1. Spatial Coherence Between Remotely Sensed Ocean Color Data and Vertical Distribution of Lidar Backscattering in Coastal Stratified Waters

    DTIC Science & Technology

    2010-01-01

    Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to comply with a...Laboratory, NOAA Boulder, CO 8030S USA ’ Naval Research Laboratory, Code 7330. Stennis Space Center. NASA MS 39529. USA ’ Shellfish Assessment. Alaska...of peak) could be retrieved based solely on Rn (A, 0+ ) measurements. The use of Look-Up Tables (LUTs) of regionally and seasonally averaged lOPs

  2. Literacy, Numeracy, and Problem Solving in Technology-Rich Environments among U.S. Adults: Results from the Program for the International Assessment of Adult Competencies 2012. Appendix D: Standard Error Tables. First Look. NCES 2014-008

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2013

    2013-01-01

    This paper provides Appendix D, Standard Error tables, for the full report, entitled "Literacy, Numeracy, and Problem Solving in Technology-Rich Environments among U.S. Adults: Results from the Program for the International Assessment of Adult Competencies 2012. First Look. NCES 2014-008." The full report presents results of the Program…

  3. Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose

    2018-06-01

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
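
Category (3), look-up-table inversion, reduces to a nearest-spectrum search over pre-simulated model runs. A sketch with a toy stand-in for an RTM (the model form, grid, and noise level are invented):

```python
import numpy as np

wl = np.linspace(400, 900, 50)                      # wavelengths, nm

def toy_rtm(lai):
    # toy "RTM": reflectance grows and saturates with leaf area index
    return 0.5 * (1 - np.exp(-0.4 * lai)) * (wl / 900.0)

grid = np.arange(0.0, 8.01, 0.05)                   # LUT over LAI
lut = np.stack([toy_rtm(v) for v in grid])          # (n_entries, n_bands)

def invert(measured):
    # pick the grid entry whose simulated spectrum best fits the measurement
    cost = np.sum((lut - measured)**2, axis=1)
    return grid[np.argmin(cost)]

obs = toy_rtm(3.13) + np.random.default_rng(1).normal(0, 0.002, wl.size)
print(invert(obs))   # close to 3.13
```

Operational LUT inversion also has to handle ill-posedness (multiple entries fitting equally well), typically by regularizing the cost function or averaging the best candidates.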

  4. 3. DETAIL OF STONEWORK ON ARCH, WATER TABLE AND DENTILS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. DETAIL OF STONEWORK ON ARCH, WATER TABLE AND DENTILS ON EAST ELEVATION LOOKING NORTHWEST. - Original Airport Entrance Overpass, Spanning original Airport Entrance Road at National Airport, Arlington, Arlington County, VA

  5. Integrated large view angle hologram system with multi-slm

    NASA Astrophysics Data System (ADS)

    Yang, ChengWei; Liu, Juan

    2017-10-01

    Holographic display has recently attracted much attention for its ability to generate real-time 3D reconstructed images. Computer-generated holography (CGH) provides an effective way to produce the hologram, and a spatial light modulator (SLM) is used to reconstruct the image. However, the reconstruction system is usually heavy and complex, and the view angle is limited by the pixel size and spatial bandwidth product (SBP) of the SLM. In this paper a light, portable holographic display system is proposed by integrating the optical elements and host computer units, which significantly reduces the space taken in the horizontal direction. The CGH is produced based on Fresnel diffraction and the point-source method. To reduce memory usage and image distortion, we use an optimized accurate compressed look-up table (AC-LUT) method to compute the hologram. In the system, six SLMs are concatenated into a curved plane, each loading the phase-only hologram for a different angle of the object, so the horizontal view angle of the reconstructed image can be expanded to about 21.8°.

  6. Characterisation of the n-colour printing process using the spot colour overprint model.

    PubMed

    Deshpande, Kiran; Green, Phil; Pointer, Michael R

    2014-12-29

    This paper is aimed at reproducing solid spot colours using n-colour separation. A simplified numerical method, called the spot colour overprint (SCOP) model, was used for characterising the n-colour printing process. This model was originally developed for estimating spot colour overprints. It was extended to be used as a generic forward characterisation model for the n-colour printing process. An inverse printer model based on a look-up table was implemented to obtain the colour separation for the n-colour printing process. Finally, real-world spot colours were reproduced using 7-colour separation on a lithographic offset printing process. The colours printed with 7 inks were compared against the original spot colours to evaluate the accuracy. The results show good accuracy, with a mean CIEDE2000 value between the target colours and the printed colours of 2.06. The proposed method can be used successfully to reproduce spot colours, which can potentially save significant time and cost in the printing and packaging industry.
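
The inverse-model-by-look-up-table step can be sketched as a nearest-colour search over a sampled forward model; the two-ink "printer" below is a toy stand-in, not the SCOP model:

```python
def forward(c, m):                     # toy 2-ink printer: inks -> (L, a)
    return (100 - 60*c - 30*m, 40*c - 20*m)

# Sample the forward model over ink-amount combinations (the LUT).
step = 0.05
lut = [((ci*step, mi*step), forward(ci*step, mi*step))
       for ci in range(21) for mi in range(21)]

def separate(target):
    """Invert by nearest stored colour; returns the ink amounts that made it."""
    return min(lut, key=lambda e: sum((x - y)**2 for x, y in zip(e[1], target)))[0]

c, m = separate(forward(0.35, 0.60))
print(round(c, 2), round(m, 2))   # 0.35 0.6
```

A production inverse model would interpolate between LUT nodes and use a perceptual colour difference such as CIEDE2000 rather than a Euclidean distance.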

  7. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844

  8. Quad-rotor flight path energy optimization

    NASA Astrophysics Data System (ADS)

    Kemper, Edward

    Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to solve. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used; this method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using the Ziegler-Nichols proportional-integral-derivative (PID) controller tuning technique. Finally, a brute-force look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
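
A look-up-table-based PID amounts to gain scheduling: gains pre-tuned per operating regime are fetched from a table at run time instead of being recomputed. A sketch with invented bands and gains:

```python
# Gains per altitude band; both the bands and the (Kp, Ki, Kd) triples
# are illustrative, not tuned values from the thesis.
GAIN_TABLE = [                     # (altitude upper bound m, (Kp, Ki, Kd))
    (10.0, (1.2, 0.10, 0.30)),
    (50.0, (0.9, 0.08, 0.25)),
    (1e9,  (0.7, 0.05, 0.20)),
]

def gains_for(alt):
    for bound, g in GAIN_TABLE:
        if alt <= bound:
            return g

class PID:
    def __init__(self):
        self.i = 0.0
        self.prev = 0.0
    def step(self, err, alt, dt=0.02):
        kp, ki, kd = gains_for(alt)       # table lookup, no re-tuning
        self.i += err * dt
        d = (err - self.prev) / dt
        self.prev = err
        return kp*err + ki*self.i + kd*d

pid = PID()
print(round(pid.step(1.0, alt=5.0), 3))   # first step with low-altitude gains
```

Scheduling on altitude is only one plausible choice; any measured operating-point variable (speed, battery voltage, payload) could index the table.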

  9. The vector radiative transfer numerical model of coupled ocean-atmosphere system using the matrix-operator method

    NASA Astrophysics Data System (ADS)

    Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu

    2005-10-01

    A numerical model of the vector radiative transfer of the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) splits into a set of independent equations with zenith angle as the only angular coordinate. Using the Gaussian-quadrature method, the VRTE is finally transformed into a matrix equation, which is solved using the adding-doubling method. According to the reflective and refractive properties of the ocean-atmosphere interface, the vector radiative transfer models of ocean and atmosphere are coupled in PCOART. By comparing with the exact Rayleigh scattering look-up table of MODIS (Moderate-Resolution Imaging Spectroradiometer), it is shown that PCOART is an exact numerical calculation model and that its treatments of multiple scattering and polarization are correct. Also, by validating against standard problems of radiative transfer in water, it is shown that PCOART can be used to calculate underwater radiative transfer problems. Therefore, PCOART is a useful tool for exactly calculating the vector radiative transfer of the coupled ocean-atmosphere system, which can be used to study the polarization properties of radiance in the whole ocean-atmosphere system and the remote sensing of the atmosphere and ocean.
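
The adding step at the heart of the matrix-operator/adding-doubling method has a simple scalar caricature: the reflection and transmission of two stacked layers combine through a geometric series over inter-layer bounces. The values below are illustrative:

```python
# Scalar adding rule for two homogeneous layers (the full method uses
# matrices over quadrature angles and Stokes components; this is the
# simplest one-beam analogue).
def add_layers(R1, T1, R2, T2):
    denom = 1.0 - R1 * R2               # sums the infinite bounce series
    R = R1 + T1 * R2 * T1 / denom       # reflect off layer 2, exit through 1
    T = T1 * T2 / denom                 # transmit through both, all bounces
    return R, T

R, T = add_layers(0.1, 0.8, 0.2, 0.7)
print(round(R, 4), round(T, 4))   # 0.2306 0.5714
```

"Doubling" is the special case where both layers are identical, so a thick layer is built up in log-many steps from an optically thin starting layer.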

  10. 1. DOWNRIVER VIEW OF BRIDGE, LOOKING SOUTHSOUTHWEST Peter J. Edwards, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. DOWNRIVER VIEW OF BRIDGE, LOOKING SOUTH-SOUTHWEST Peter J. Edwards, photographer, August 1988 - Four Mile Bridge, Copper Creek Road, Spans Table Rock Fork, Mollala River, Molalla, Clackamas County, OR

  11. 5. DETAIL VIEW SHOWING ARCH AND SUPPORTS, LOOKING WESTSOUTHWEST Mike ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. DETAIL VIEW SHOWING ARCH AND SUPPORTS, LOOKING WEST-SOUTHWEST Mike Hanemann, photographer, August 1988 - Four Mile Bridge, Copper Creek Road, Spans Table Rock Fork, Mollala River, Molalla, Clackamas County, OR

  12. 87. AFT CREWS' MESS DECK STARBOARD LOOKING TO PORT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    87. AFT CREWS' MESS DECK - STARBOARD LOOKING TO PORT SHOWING COFFEE MAKER, ICE CREAM FREEZER, TABLES AND SCUTTLEBUTTS. - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA

  13. A memory efficient implementation scheme of Gauss error function in a Laguerre-Volterra network for neuroprosthetic devices

    NASA Astrophysics Data System (ADS)

    Li, Will X. Y.; Cui, Ke; Zhang, Wei

    2017-04-01

    Cognitive neural prostheses are man-made devices which can be used to restore or compensate for lost human cognitive modalities. The generalized Laguerre-Volterra (GLV) network serves as a robust mathematical underpinning for the development of such prosthetic instruments. In this paper, a hardware implementation scheme of the Gauss error function for the GLV network targeting reconfigurable platforms is reported. Numerical approximations are formulated which transform the computation of the nonelementary function into combinational operations of elementary functions, so that memory-intensive look-up table (LUT) based approaches can be circumvented. The computational precision is made adjustable through an error compensation scheme, proposed based on experimental observation of the mathematical characteristics of the error trajectory. The precision can be further customized by exploiting the run-time characteristics of the reconfigurable system. Compared to the polynomial expansion based implementation scheme, the utilization of slice LUTs, occupied slices, and DSP48E1s on a Xilinx XC6VLX240T field-programmable gate array has decreased by 94.2%, 94.1%, and 90.0%, respectively, while compared to the look-up table based scheme, 1.0 × 10^17 bits of storage can be spared under the maximum allowable error of 1.0 × 10^-3. The proposed implementation scheme can be employed in the study of large-scale neural ensemble activity and in the design and development of neural prosthetic devices.
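
The LUT-free strategy of approximating erf with elementary operations can be illustrated with the classical Abramowitz and Stegun rational approximation (formula 7.1.26; a standard approximation, not necessarily the one derived in the paper):

```python
import math

def erf_approx(x):
    # A&S 7.1.26: erf(x) ~ 1 - poly(t) * exp(-x^2), t = 1/(1 + p*x),
    # using only elementary multiplies, adds, and one exp.
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
               + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))

worst = max(abs(erf_approx(i/100) - math.erf(i/100)) for i in range(-400, 401))
print(worst < 2e-7)   # True: A&S quotes a maximum error of about 1.5e-7
```

The trade-off is exactly the one the abstract describes: a handful of elementary operations replaces a table whose size grows with the required precision and input range.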

  14. The Application of LT-Table in TRIZ Contradiction Resolving Process

    NASA Astrophysics Data System (ADS)

    Wei, Zihui; Li, Qinghai; Wang, Donglin; Tian, Yumei

TRIZ is used to resolve invention problems, and ARIZ is its most powerful systematic method, integrating all of the TRIZ heuristics. Definition of the ideal final result (IFR), identification of contradictions, and resource utilization are the main lines of ARIZ. However, resource searching in ARIZ suffers from blindness. Alexandr set up a mathematical model of the transformation of hereditary information in an invention problem using catastrophe theory, and provided a method of resource searching using the LT-table. The application of the LT-table to contradiction resolving is introduced: resource utilization using the LT-table is joined into the ARIZ steps as an addition to TRIZ, and the method is applied to the design of a separator paper punching machine.

  15. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

Computer-generated holography (CGH) faces a great challenge: real-time holographic video display requires the production of a high space-bandwidth product (SBP). This paper builds on the point-cloud method and exploits two properties of Fresnel diffraction: the reversibility of propagation along the propagating direction, and the spatial symmetry of the fringe pattern of a point source, known as a Gabor zone plate, which can serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed. First, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored. Second, the Fresnel diffraction fringe pattern at the dummy plane is obtained. Finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) demonstrate that, while preserving the quality of the 3D reconstruction, the proposed method shortens the computation time and improves computational efficiency.
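The spatial symmetry that makes the point-source fringe cheap to tabulate can be sketched in a few lines. This toy (plain Python, an on-axis point with a simple cosine fringe only, not the paper's N-LUT pipeline) evaluates one quadrant of a Gabor zone plate and obtains the rest by mirroring:

```python
import math

def zone_plate_quadrant(n, wavelength, z, pitch):
    """One quadrant (including the axes) of the point-source fringe
    cos(pi * r^2 / (lambda * z)); index 0 is the optical axis."""
    k = math.pi / (wavelength * z)
    return [[math.cos(k * ((ix * pitch) ** 2 + (iy * pitch) ** 2))
             for ix in range(n)] for iy in range(n)]

def full_zone_plate(n, wavelength, z, pitch):
    """Assemble the full (2n-1) x (2n-1) fringe by mirroring the quadrant,
    so only about a quarter of the samples are actually evaluated."""
    q = zone_plate_quadrant(n, wavelength, z, pitch)
    mirror_row = lambda row: row[::-1][:-1] + row
    top = [mirror_row(r) for r in q[::-1][:-1]]
    bottom = [mirror_row(r) for r in q]
    return top + bottom
```

The same symmetry argument is what lets a LUT store only principal fringe patterns rather than the full diffraction field of every point.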

  16. 41. PATTERN STORAGE, GRIND STONE, WATER TANK, SHAFTING, AND TABLE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    41. PATTERN STORAGE, GRIND STONE, WATER TANK, SHAFTING, AND TABLE SAW (L TO R)-LOOKING WEST. - W. A. Young & Sons Foundry & Machine Shop, On Water Street along Monongahela River, Rices Landing, Greene County, PA

  17. Data Services

    Science.gov Websites

Moon Data for One Day Rise/Set/Twilight Table for an Entire Year What the Moon Looks Like Now Dates of Contact Rise/Set/Transit/Twilight Data Complete Sun and Moon Data for One Day Table of Solar System Objects and Bright Stars Duration of Daylight/Darkness Table for One Year Phases of the

  18. 117. VIEW, LOOKING NORTHWEST, OF DIESTER MODEL 6 CONCENTRATING (SHAKING) ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    117. VIEW, LOOKING NORTHWEST, OF DIESTER MODEL 6 CONCENTRATING (SHAKING) TABLE, USED FOR PRIMARY, MECHANICAL SEPARATION OF GOLD FROM ORE. - Shenandoah-Dives Mill, 135 County Road 2, Silverton, San Juan County, CO

  19. Calculation of recoil implantation profiles using known range statistics

    NASA Technical Reports Server (NTRS)

    Fung, C. D.; Avila, R. E.

    1985-01-01

    A method has been developed to calculate the depth distribution of recoil atoms that result from ion implantation onto a substrate covered with a thin surface layer. The calculation includes first order recoils considering projected range straggles, and lateral straggles of recoils but neglecting lateral straggles of projectiles. Projectile range distributions at intermediate energies in the surface layer are deduced from look-up tables of known range statistics. A great saving of computing time and human effort is thus attained in comparison with existing procedures. The method is used to calculate recoil profiles of oxygen from implantation of arsenic through SiO2 and of nitrogen from implantation of phosphorus through Si3N4 films on silicon. The calculated recoil profiles are in good agreement with results obtained by other investigators using the Boltzmann transport equation and they also compare very well with available experimental results in the literature. The deviation between calculated and experimental results is discussed in relation to lateral straggles. From this discussion, a range of surface layer thickness for which the method applies is recommended.
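The "look-up tables of known range statistics" step can be illustrated schematically. The energies and ranges below are invented placeholders, and log-linear interpolation between tabulated entries is an assumed implementation choice, not the paper's:

```python
import bisect, math

# Hypothetical look-up table: ion energy (keV) -> (projected range Rp,
# range straggle dRp) in nm. All values are illustrative placeholders.
RANGE_TABLE = [
    (10.0,  (12.0,  5.0)),
    (30.0,  (30.0, 11.0)),
    (100.0, (80.0, 26.0)),
    (300.0, (210.0, 60.0)),
]

def range_stats(energy_kev: float):
    """Interpolate (Rp, dRp) log-linearly in energy between tabulated points,
    clamping to the table ends outside the tabulated range."""
    energies = [e for e, _ in RANGE_TABLE]
    i = bisect.bisect_left(energies, energy_kev)
    if i == 0:
        return RANGE_TABLE[0][1]
    if i == len(energies):
        return RANGE_TABLE[-1][1]
    (e0, (rp0, s0)), (e1, (rp1, s1)) = RANGE_TABLE[i - 1], RANGE_TABLE[i]
    w = (math.log(energy_kev) - math.log(e0)) / (math.log(e1) - math.log(e0))
    return (rp0 + w * (rp1 - rp0), s0 + w * (s1 - s0))
```

Reading intermediate-energy range parameters off such a table is what replaces a full transport calculation and yields the reported saving in computing time.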

  20. Robust distortion correction of endoscope

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Nie, Sixiang; Soto-Thompson, Marcelo; Chen, Chao-I.; A-Rahim, Yousif I.

    2008-03-01

Endoscopic images suffer from a fundamental spatial distortion due to the wide-angle design of the endoscope lens. This barrel-type distortion is an obstacle for subsequent Computer Aided Diagnosis (CAD) algorithms and should be corrected. Various methods and research models for barrel-type distortion correction have been proposed and studied. For industrial applications, a stable, robust method with high accuracy is required to calibrate the different types of endoscopes in an easy-to-use way. The correction area shall be large enough to cover all the regions that the physicians need to see. In this paper, we present our endoscope distortion correction procedure, which includes data acquisition, distortion center estimation, distortion coefficient calculation, and look-up table (LUT) generation. We investigate different polynomial models used for modeling the distortion and propose a new one which provides correction results with better visual quality. The method has been verified with four types of colonoscopes. The correction procedure is currently being applied to human subject data, and the coefficients are being utilized in a subsequent 3D reconstruction project of the colon.
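The LUT-generation step can be sketched for a simple radial polynomial model. The model form r_d = r_u (1 + k1 r_u² + k2 r_u⁴) and the coefficients are illustrative assumptions, not the calibrated endoscope model from the paper:

```python
def build_undistort_lut(width, height, cx, cy, k1, k2):
    """For each corrected (undistorted) pixel, store the fractional source
    coordinate in the distorted image under a radial polynomial model.
    (cx, cy) is the estimated distortion center; k1, k2 are coefficients."""
    lut = {}
    for y in range(height):
        for x in range(width):
            dx, dy = x - cx, y - cy
            r2 = dx * dx + dy * dy
            scale = 1.0 + k1 * r2 + k2 * r2 * r2
            lut[(x, y)] = (cx + dx * scale, cy + dy * scale)
    return lut
```

At run time, undistortion is then a single table read plus bilinear sampling per pixel, which is why the LUT is generated once per calibrated endoscope.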

  1. Characterization of Properties of Earth Atmosphere from Multi-Angular Polarimetric Observations of Polder/Parasol Using GRASP Algorithm

    NASA Astrophysics Data System (ADS)

    Dubovik, O.; Litvinov, P.; Lapyonok, T.; Ducos, F.; Fuertes, D.; Huang, X.; Torres, B.; Aspetsberger, M.; Federspiel, C.

    2014-12-01

The POLDER imager on board the PARASOL micro-satellite is the only satellite polarimeter to have provided an extensive, ~9-year record of detailed polarimetric observations of the Earth's atmosphere from space. POLDER/PARASOL registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. Such observations have very high sensitivity to the variability of the properties of the atmosphere and underlying surface and cannot be adequately interpreted using look-up-table retrieval algorithms developed for analyzing the mono-viewing, intensity-only observations traditionally used in atmospheric remote sensing. Therefore, a new enhanced retrieval algorithm, GRASP (Generalized Retrieval of Aerosol and Surface Properties), has been developed and applied to the processing of PARASOL data. GRASP relies on highly optimized statistical fitting of observations and derives a large number of unknowns for each observed pixel. The algorithm uses an elaborate model of the atmosphere and fully accounts for all multiple interactions of scattered solar light with aerosol, gases and the underlying surface. All calculations are performed during inversion and no look-up tables are used. The algorithm is very flexible in its utilization of various types of a priori constraints on the retrieved characteristics and in its parameterization of the surface-atmosphere system. It is also optimized for high-performance calculations. The results of the PARASOL data processing will be presented, with emphasis on the transferability and adaptability of the developed retrieval concept to processing polarimetric observations of other planets. For example, the flexibility and possible alternatives in modeling the properties of aerosol polydisperse mixtures, particle composition and shape, surface reflectance, etc. will be discussed.

  2. L5 TM radiometric recalibration procedure using the internal calibration trends from the NLAPS trending database

    USGS Publications Warehouse

    Chander, G.; Haque, Md. O.; Micijevic, E.; Barsi, J.A.

    2008-01-01

From the Landsat program's inception in 1972 to the present, the earth science user community has benefited from a historical record of remotely sensed data. The multispectral data from the Landsat 5 (L5) Thematic Mapper (TM) sensor provide the backbone for this extensive archive. Historically, the radiometric calibration procedure for this imagery used the instrument's response to the Internal Calibrator (IC) on a scene-by-scene basis to determine the gain and offset for each detector. The IC system degraded with time, causing radiometric calibration errors of up to 20 percent. In May 2003, the National Landsat Archive Production System (NLAPS) was updated to use a gain model rather than the scene-acquisition-specific IC gains to calibrate TM data processed in the United States. Further modification of the gain model was performed in 2007. L5 TM data that were processed using IC prior to the calibration update do not benefit from the recent calibration revisions. A procedure has been developed to give users the ability to recalibrate their existing Level-1 products. The best recalibration results are obtained if the work order report that was originally included in the standard data product delivery is available. However, many users may not have the original work order report. In such cases, the IC gain look-up table that was generated using the radiometric gain trends recorded in the NLAPS database can be used for recalibration. This paper discusses the procedure to recalibrate L5 TM data when the work order report originally used in processing is not available. A companion paper discusses the generation of the NLAPS IC gain and bias look-up tables required to perform the recalibration.
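The recalibration idea, undo the original IC-based calibration and reapply a revised gain taken from a date-keyed look-up table, can be sketched as follows. The gains, biases, and table layout are invented for illustration and are not the actual NLAPS values:

```python
# Hypothetical NLAPS-style gain look-up keyed by acquisition date
# (gain in DN per unit radiance, bias in DN); placeholder values only.
GAIN_LUT = {
    "1995-07-01": (1.10, 2.0),
    "2000-07-01": (1.00, 2.0),
}

def recalibrate_dn(dn, old_gain, old_bias, acq_date):
    """Undo the original calibration, then reapply the revised gain/bias
    looked up for the scene's acquisition date."""
    radiance = (dn - old_bias) / old_gain      # back to at-sensor radiance
    new_gain, new_bias = GAIN_LUT[acq_date]
    return radiance * new_gain + new_bias      # forward with revised model
```

When the original work order report is unavailable, the old gain/bias themselves would also come from the trending-database LUT rather than from the product metadata.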

  3. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

An efficient algorithm for blind image deconvolution and its high-speed implementation are of great practical value. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations are: modularization of the structure for good implementation feasibility, reduction of the data computation and dependency of the 2D FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and that the data throughput of the image restoration system exceeds 7.8 Msps. The optimization proves efficient and feasible, and Fast SeDDaRA is able to support real-time applications.
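The segmented look-up table trick for the power operation can be sketched in a few lines of Python (a generic illustration of the technique, not the paper's fixed-point DSP implementation): the expensive x**p is precomputed at segment breakpoints, and run-time evaluation reduces to one table read and one linear interpolation.

```python
import math

def build_pow_lut(exponent, lo, hi, segments):
    """Precompute x**exponent at equally spaced breakpoints on [lo, hi]."""
    step = (hi - lo) / segments
    xs = [lo + i * step for i in range(segments + 1)]
    return xs, [x ** exponent for x in xs], step

def pow_lut(x, table):
    """Evaluate the power function by segment lookup + linear interpolation."""
    xs, ys, step = table
    i = min(int((x - xs[0]) / step), len(ys) - 2)  # clamp to last segment
    w = (x - xs[i]) / step
    return ys[i] + w * (ys[i + 1] - ys[i])
```

The segment count trades table memory against interpolation error, which is the knob a memory-constrained DSP implementation would tune.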

  4. A survey of southern hemisphere meteor showers

    NASA Astrophysics Data System (ADS)

    Jenniskens, Peter; Baggaley, Jack; Crumpton, Ian; Aldous, Peter; Pokorny, Petr; Janches, Diego; Gural, Peter S.; Samuels, Dave; Albers, Jim; Howell, Andreas; Johannink, Carl; Breukers, Martin; Odeh, Mohammad; Moskovitz, Nicholas; Collison, Jack; Ganju, Siddha

    2018-05-01

    Results are presented from a video-based meteoroid orbit survey conducted in New Zealand between Sept. 2014 and Dec. 2016, which netted 24,906 orbits from +5 to -5 magnitude meteors. 44 new southern hemisphere meteor showers are identified after combining this data with that of other video-based networks. Results are compared to showers reported from recent radar-based surveys. We find that video cameras and radar often see different showers and sometimes measure different semi-major axis distributions for the same meteoroid stream. For identifying showers in sparse daily orbit data, a shower look-up table of radiant position and speed as a function of time was created. This can replace the commonly used method of identifying showers from a set of mean orbital elements by using a discriminant criterion, which does not fully describe the distribution of meteor shower radiants over time.

  5. 49. COMMAND INFORMATION CENTER (CIC) AFT LOOKING FORWARD PORT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    49. COMMAND INFORMATION CENTER (CIC) - AFT LOOKING FORWARD PORT TO STARBOARD SHOWING VARIOUS TYPES OF RADAR UNITS, PLOT TABLES AND PLOTTING BOARDS. - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA

  6. Looking northeast from roof of Machine Shop (Bldg. 163) at ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking northeast from roof of Machine Shop (Bldg. 163) at transfer table pit and Boiler Shop (Bldg. 152) - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, Machine Shop, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM

  7. Performance optimization of internet firewalls

    NASA Astrophysics Data System (ADS)

    Chiueh, Tzi-cker; Ballman, Allen

    1997-01-01

Internet firewalls control the data traffic into and out of an enterprise network by checking network packets against a set of rules that embodies an organization's security policy. Because rule checking is computationally more expensive than routing-table look-up, it can become a bottleneck when scaling up the performance of IP routers, which typically implement firewall functions in software. In this paper, we analyze the performance problems associated with firewalls, particularly packet filters, propose a connection cache to amortize the costly security check over the packets of a connection, and report preliminary performance results from a trace-driven simulation showing that the average packet check time can be reduced by a factor of at least 2.5.
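The connection-cache idea can be sketched as follows: the first packet of a flow pays for the full rule scan, and every later packet with the same 5-tuple is answered from a hash table. The packet field names and rule format are illustrative assumptions:

```python
def make_firewall(rules):
    """rules: ordered list of (predicate, allow) pairs; scanning them is the
    costly check. A per-connection cache amortizes it over a whole flow."""
    cache = {}

    def check(packet):
        # The 5-tuple identifies the connection.
        key = (packet["src"], packet["dst"], packet["sport"],
               packet["dport"], packet["proto"])
        if key in cache:
            return cache[key]          # fast path: one hash lookup
        verdict = False                # default deny
        for predicate, allow in rules:
            if predicate(packet):
                verdict = allow
                break
        cache[key] = verdict           # remember for the rest of the flow
        return verdict

    return check, cache
```

A production cache would also expire entries (e.g., on TCP FIN/RST or a timeout) so that stale verdicts do not outlive their connections.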

  8. Fuel Cell Stack Testing and Durability in Support of Ion Tiger UAV

    DTIC Science & Technology

    2010-06-02

N00173-08-2-C008 specified. In June 2008, the first M250 stack 242503 data were incorporated into the PEMFC system model as a look-up data table...control and operational model which implements the operational strategy by controlling the power from the PEMFC systems and battery pack for a total...

  9. Urban Traffic Signal Control for Fuel Economy. Part 2. Extension to Small Cars (Economie D’Essence Grace a la Commande des Feux de Circulation en Zone Urbaine. Partie 2. Application aux Vehicules de Petite Cylindree)

    DTIC Science & Technology

    1981-11-01

of gasoline compared with the plan in effect. The report also mentions that the consumption of a large vehicle was calculated with the aid of a model...most of the submitted data was not readily enterable into the Vehicle Simulation program. Because of the design of the table look-ups in the program

  10. Cancer screening information at community health fairs: What the participants do with information they receive.

    PubMed

    Monrose, Erica; Ledergerber, Jessica; Acheampong, Derrick; Jandorf, Lina

    2017-09-21

To assess participants' reasons for seeking cancer screening information at community health fairs and what they do with the information they receive, a mixed quantitative and qualitative approach was used. Community health fairs are organized in underserved New York City neighbourhoods. From June 14, 2016 to August 26, 2016, cancer prevention tables providing information about various cancer screenings were set up at 12 local community health fairs in New York City. In-person and follow-up telephone surveys assessed interest in the cancer prevention table, personal cancer screening adherence rates, information-sharing behaviours and demographic variables. Statistical analyses were performed using IBM SPSS 22.0: frequencies, descriptives, cross tabulations. All qualitative data were coded by theme so that they could be analysed in SPSS; for example, "Were you interested in a specific cancer?" might be coded as 2 for "yes, breast cancer". One hundred and sixteen patrons participated in the initial survey. Of those, 88 (78%) agreed to give their contact information for the follow-up survey, and 60 follow-up surveys were completed (68%). Of those who reported reading the material, 45% shared the information; 15% subsequently spoke to a provider about cancer screenings and 40% intended to speak to a provider. Participants disseminated information without prompting, suggesting that the reach of these fairs extends beyond the people who visit our table. Future studies should look at whether patrons would share information at higher rates when they are explicitly encouraged to share it.

  11. Multi-satellites normalization of the FengYun-2s visible detectors by the MVP method

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Rong, Zhi-guo; Zhang, Li-jun; Sun, Ling; Xu, Na

    2013-08-01

After FY-2F was successfully launched on January 13, 2012, the number of FengYun-2 geostationary meteorological satellites operating in orbit reached three. For accurate and efficient application of multi-satellite observation data, a study of the cross-normalization of the visible detectors was urgently needed. The method had to be independent of in-orbit calibration, so that it could validate calibration results before and after launch, calculate the daily-updated surface bidirectional reflectance distribution function (BRDF), and track long-term decay in the detectors' linearity and responsivity. Based on a study of typical BRDF models, a normalization method was designed that effectively removes the interference of directional surface reflectance and does not rely on in-orbit calibration of the visible detectors: the Median Vertical Plane (MVP) method. The MVP method is based on the symmetry about the principal plane of the directional reflectance of typical surface targets. Taking two geostationary satellites as the endpoints of a segment, targets on the intersection of the segment's median vertical plane with the Earth's surface can be used as normalization reference targets (NRTs). Observing an NRT from both satellites at the moment the sun passes through the MVP yields identical observation zenith and solar zenith angles and opposite relative azimuth angles. At that moment, the linear regression coefficients between the two satellites' output data are the required normalization coefficients. The normalization coefficients between FY-2D, FY-2E and FY-2F were calculated, and a self-test method for the normalized results was designed and implemented.
The results showed that the differences in responsivity between satellites reached up to 10.1% (FY-2E to FY-2F); the differences in output reflectance calculated with the broadcast calibration look-up table reached up to 21.1% (FY-2D to FY-2F); and the differences in output reflectance of FY-2D and FY-2E calculated with the site experiment results were reduced to 2.9% (13.6% when using the broadcast table). The normalized relative error, calculated by the self-test method, was less than 0.2%.

  12. Rapid computation of single PET scan rest-stress myocardial blood flow parametric images by table look up.

    PubMed

    Guehl, Nicolas J; Normandin, Marc D; Wooten, Dustin W; Rozen, Guy; Ruskin, Jeremy N; Shoup, Timothy M; Woo, Jonghye; Ptaszek, Leon M; Fakhri, Georges El; Alpert, Nathaniel M

    2017-09-01

We have recently reported a method for measuring rest-stress myocardial blood flow (MBF) using a single, relatively short, PET scan session. The method requires two IV tracer injections, one to initiate rest imaging and one at peak stress. We previously validated absolute flow quantitation in mL/min/cc for standard bull's eye, segmental analysis. In this work, we extend the method for fast computation of rest-stress MBF parametric images. We provide an analytic solution to the single-scan rest-stress flow model, which is then solved using a two-dimensional table lookup method (LM). Simulations were performed to compare the accuracy and precision of the lookup method with the original nonlinear method (NLM). The method was then applied to 16 single-scan rest/stress measurements made in 12 pigs: seven studied after infarction of the left anterior descending artery (LAD) territory, and nine imaged in the native state. Parametric maps of rest and stress MBF, as well as maps of left (fLV) and right (fRV) ventricular spill-over fractions, were generated. Regions of interest (ROIs) for 17 myocardial segments were defined in bull's eye fashion on the parametric maps. The mean of each ROI was then compared to the rest (K1r) and stress (K1s) MBF estimates obtained by fitting the 17 regional TACs with the NLM. In simulation, the LM performed as well as the NLM in terms of precision and accuracy, and showed no bias introduced by the use of a predefined two-dimensional lookup table. In experimental data, the parametric maps demonstrated good statistical quality, and the LM was computationally much more efficient than the original NLM. Very good agreement was obtained between the mean MBF calculated on the parametric maps for each of the 17 ROIs and the regional MBF values estimated by the NLM (K1(map, LM) = 1.019 × K1(ROI, NLM) + 0.019, R² = 0.986; mean difference = 0.034 ± 0.036 mL/min/cc).
We developed a table lookup method for fast computation of parametric imaging of rest and stress MBF. Our results show the feasibility of obtaining good quality MBF maps using modest computational resources, thus demonstrating that the method can be applied in a clinical environment to obtain full quantitative MBF information. © 2017 American Association of Physicists in Medicine.
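The general pattern, precompute the forward model on a parameter grid once and invert each pixel by table look-up instead of a nonlinear fit, can be sketched with a toy two-parameter model. The forward model below is invented for illustration and is not the authors' rest-stress kinetic model:

```python
import math

def forward(k1, k2):
    """Toy two-output forward model standing in for the single-scan
    rest-stress flow model (two measured features per pixel)."""
    return (k1 * (1.0 - math.exp(-k2)), k1 * math.exp(-k2))

def build_lut(grid1, grid2):
    """Tabulate the forward model once over a 2-D parameter grid."""
    return [(a, b, forward(a, b)) for a in grid1 for b in grid2]

def lookup(lut, obs):
    """Per-pixel inversion: nearest tabulated model output, no iterative fit."""
    def dist2(entry):
        a, b, (u, v) = entry
        return (u - obs[0]) ** 2 + (v - obs[1]) ** 2
    best = min(lut, key=dist2)
    return best[0], best[1]
```

Because the expensive model evaluations happen once up front, the per-pixel cost collapses to a table search, which is what makes whole-image parametric mapping tractable on modest hardware.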

  13. Puzzler Solution: Perfect Weather for a Picnic | Poster

    Cancer.gov

    It looks like we stumped you. We did not receive any correct guesses for the current Poster Puzzler, which is an image of the top of the Building 434 picnic table, with a view looking towards Building 472. This picnic table and others across campus were supplied by the NCI at Frederick Campus Improvement Committee. Building 434, located on Wood Street, is home to the staff of Scientific Publications, Graphics & Media (SPGM), the Central Repository, and the NCI Experimental Therapeutics Program support group, Applied and Developmental Research Directorate.

  14. Food table on ISS

    NASA Image and Video Library

    2015-04-08

ISS043E091650 (04/08/2015) --- A view of the food table located in the Russian Zvezda service module on the International Space Station taken by Expedition 43 Flight Engineer Scott Kelly. Assorted food, drink and condiment packets are visible. Kelly tweeted this image along with the comment: "Looks messy, but it's functional. Our #food table on the @space station. What's for breakfast? #YearInSpace".

  15. What You Don't Find out about England's Educational Performance in the PISA League Table. Election Factsheet

    ERIC Educational Resources Information Center

    Burge, Bethan

    2015-01-01

    This election factsheet highlights the following points: (1) It isn't always possible to say with certainty from looking at a country's rank in the PISA educational league tables alone whether one country or economy has definitely performed better than another; (2) England's position in the league tables is dependent on which countries and…

  16. Don't Read University Rankings like Reading Football League Tables: Taking a Close Look at the Indicators

    ERIC Educational Resources Information Center

    Soh, Kay Cheng

    2011-01-01

    The outcome of university ranking is of much interest and concern to the many stakeholders, including university's sponsors, administrators, staff, current and prospective students, and the public. The results of rankings presented in the form of league tables, analogous to football league tables, attract more attention than do the processes by…

  17. Computerized organ localization in abdominal CT volume with context-driven generalized Hough transform

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Li, Qiang

    2014-03-01

Fast localization of organs is a key step in computer-aided detection of lesions and in image-guided radiation therapy. We developed a context-driven Generalized Hough Transform (GHT) for robust localization of organs of interest (OOIs) in a CT volume. Conventional GHT locates the center of an organ by looking up center locations of pre-learned organs with "matching" edges. It often suffers from mislocalization because "similar" edges in the vicinity may attract the pre-learned organs towards wrong places. The proposed method not only uses information from the organ's own shape but also takes advantage of nearby "similar" edge structures. First, multiple co-existing GHT look-up tables (cLUTs) were constructed from a set of training shapes of different organs. Each cLUT represented the spatial relationship between the center of the OOI and the shape of a co-existing organ. Second, the OOI center in a test image was determined using GHT with each cLUT separately. Third, the final localization of the OOI was based on a weighted combination of the centers obtained in the second stage. The training set consisted of 10 CT volumes with manually segmented OOIs including liver, spleen and kidneys. The method was tested on a set of 25 abdominal CT scans. Context-driven GHT correctly located all OOIs in the test images and gave localization errors of 19.5±9.0, 12.8±7.3, 9.4±4.6 and 8.6±4.1 mm for liver, spleen, left and right kidney, respectively. Conventional GHT mislocated 8 out of 100 organs, and its localization errors were 26.0±32.6, 14.1±10.6, 30.1±42.6 and 23.6±39.7 mm for liver, spleen, left and right kidney, respectively.
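The GHT look-up table at the heart of this approach can be sketched minimally: training stores displacements from template edge points to the shape center, and testing lets each edge point vote through those displacements. This toy omits the gradient-angle indexing a full R-table uses:

```python
from collections import Counter

def build_r_table(edge_points, center):
    """Training: record the displacement from every template edge point
    to the known shape center."""
    cx, cy = center
    return [(cx - x, cy - y) for x, y in edge_points]

def ght_vote(edge_points, r_table):
    """Testing: every edge point casts votes for candidate centers via the
    stored displacements; the accumulator peak is the detected center."""
    acc = Counter()
    for x, y in edge_points:
        for dx, dy in r_table:
            acc[(x + dx, y + dy)] += 1
    return acc.most_common(1)[0][0]
```

The context-driven variant described above would build one such table per co-existing organ and fuse the resulting center estimates with weights.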

  18. Mercury⊕: An evidential reasoning image classifier

    NASA Astrophysics Data System (ADS)

    Peddle, Derek R.

    1995-12-01

MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package are described for improving the classification and analysis of the multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error that cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked-list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
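The frequency-based computation of support values from training data can be sketched as a hash-table build (a schematic reading of the approach; the feature names and the simple normalization are illustrative assumptions, and Python dicts stand in for the C hash tables):

```python
from collections import defaultdict

def train_support_table(samples):
    """samples: list of (feature_dict, label) training pairs. Returns a
    knowledge look-up table mapping (variable, value) -> {label: frequency},
    so support values are read off by hashing at classification time."""
    counts = defaultdict(lambda: defaultdict(int))
    for features, label in samples:
        for var, val in features.items():
            counts[(var, val)][label] += 1
    # Normalize raw counts to relative frequencies (the support values).
    return {key: {lbl: n / sum(per.values()) for lbl, n in per.items()}
            for key, per in counts.items()}
```

A full evidential classifier would then combine the per-variable supports with Dempster's rule rather than multiplying likelihoods as a Bayesian classifier would.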

  19. 125. BENCH SHOP, LOOKING SOUTHEAST AT CENTER OF ROOM SHOWING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    125. BENCH SHOP, LOOKING SOUTHEAST AT CENTER OF ROOM SHOWING TOOL SHARPENER ON RIGHT AND ELECTRIC TABLE SAW AT CENTER. - Gruber Wagon Works, Pennsylvania Route 183 & State Hill Road at Red Bridge Park, Bernville, Berks County, PA

  20. VIEW OF PDP ROOM AT LEVEL +27’, LOOKING NORTH TOWARD ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF PDP ROOM AT LEVEL +27’, LOOKING NORTH TOWARD TILTING TABLE AREA. PART OF SHEAVE RACK FOR PDP IN LOWER LEFT - Physics Assembly Laboratory, Area A/M, Savannah River Site, Aiken, Aiken County, SC

  1. Estimating effective particle size of tropical deep convective clouds with a look-up table method using satellite measurements of brightness temperature differences

    NASA Astrophysics Data System (ADS)

    Hong, Gang; Minnis, Patrick; Doelling, David; Ayers, J. Kirk; Sun-Mack, Szedung

    2012-03-01

A method for estimating the effective ice particle radius Re at the tops of tropical deep convective clouds (DCC) is developed on the basis of precomputed look-up tables (LUTs) of brightness temperature differences (BTDs) between the 3.7 and 11.0 μm bands. A combination of discrete-ordinates radiative transfer and correlated k-distribution programs, which account for multiple scattering and monochromatic molecular absorption in the atmosphere, is utilized to compute the LUTs as functions of solar zenith angle, satellite zenith angle, relative azimuth angle, Re, cloud top temperature (CTT), and cloud visible optical thickness τ. The LUT-estimated DCC Re agrees well with the cloud retrievals of the Moderate Resolution Imaging Spectroradiometer (MODIS) for the NASA Clouds and the Earth's Radiant Energy System, with a correlation coefficient of 0.988 and differences of less than 10%. The LUTs are applied to 1 year of measurements taken from MODIS aboard Aqua in 2007 to estimate DCC Re, which is compared to a similar quantity from CloudSat over the region bounded by 140°E, 180°E, 0°N, and 20°N in the Western Pacific Warm Pool. The estimated DCC Re values are mainly concentrated in the range 25-45 μm and decrease with CTT. Matching the LUT-estimated Re to the CloudSat-retrieved Re profile, it is found that the ice cloud τ accumulated from the DCC top down to the vertical location where the two agree is mostly less than 2.5, with a mean value of about 1.3. Changes in the DCC τ result in differences of less than 10% in Re estimated from the LUTs. LUTs of the 0.65 μm bidirectional reflectance distribution function (BRDF) are also built, as functions of viewing geometry and the column amount of ozone above the upper troposphere. The 0.65 μm BRDF can eliminate some non-core portions of the DCCs detected using only 11 μm brightness temperature thresholds, which results in a mean difference of only 0.6 μm for DCC Re estimated from the BTD LUTs.
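The core LUT inversion, mapping an observed 3.7-11 μm BTD back to Re for one fixed geometry, can be sketched with a one-dimensional slice. All table numbers below are invented placeholders; the real LUTs are multi-dimensional in viewing geometry, CTT, and τ:

```python
import bisect

# Hypothetical, monotonically decreasing BTD (3.7 minus 11 um, in K) versus
# effective radius Re (um) for one fixed geometry; placeholder values only.
RE_GRID  = [10.0, 20.0, 30.0, 40.0, 50.0]
BTD_GRID = [12.0,  7.0,  4.0,  2.5,  1.8]

def re_from_btd(btd):
    """Invert the table: bracket the observed BTD, interpolate Re linearly,
    clamping to the table ends outside the tabulated range."""
    asc = BTD_GRID[::-1]                   # ascending copy for bisect
    i = bisect.bisect_left(asc, btd)
    if i == 0:
        return RE_GRID[-1]                 # BTD below table: largest Re
    if i == len(asc):
        return RE_GRID[0]                  # BTD above table: smallest Re
    j = len(BTD_GRID) - 1 - i              # index into the original ordering
    w = (btd - BTD_GRID[j + 1]) / (BTD_GRID[j] - BTD_GRID[j + 1])
    return RE_GRID[j + 1] + w * (RE_GRID[j] - RE_GRID[j + 1])
```

The operational version would first select (or interpolate between) the precomputed tables matching the pixel's solar/viewing geometry before performing this final 1-D inversion.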

  2. Estimating Demand Elasticities for Mobile Telecommunications in Austria

    DTIC Science & Technology

    2004-12-01

method to measure price elasticities relies on individual or survey data of consumer behavior. Independently of whether aggregated or individual data has...are able to distinguish between short- and long-run elasticities and to distinguish between consumer behavior on the firm level. 3 The Austrian...* Insert Table 2 about here * In order to take a closer look at consumer behavior in the Austrian mobile telephone market, we have used four different

  3. EXTENSION OF SHEAR RUNOUT TABLE INTO SHIPPING BUILDING, WHICH LAY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EXTENSION OF SHEAR RUNOUT TABLE INTO SHIPPING BUILDING, WHICH LAY PERPENDICULAR TO 8" MILL. VIEW LOOKING NORTH INCLUDES NEW BOLD PRODUCT SHEARS, STOPS LENGTH GAUGES, AND BUNDLING CRADLES. - LTV Steel, 8-inch Bar Mill, Buffalo Plant, Buffalo, Erie County, NY

  4. A Comparative Study on Safe Pile Capacity as Shown in Table 1 of IS 2911 (Part III): 1980

    NASA Astrophysics Data System (ADS)

    Pakrashi, Somdev

    2017-06-01

    Code of practice for design and construction of under-reamed pile foundations: IS 2911 (Part III)—1980 presents one table of safe loads for bored cast-in-situ under-reamed piles in sandy and clayey soils, including black cotton soils, for pile stem diameters ranging from 20 to 50 cm and an effective length of 3.50 m. A comparative study was taken up by working out the safe pile capacity for one 400 mm dia., 3.5 m long bored cast-in-situ under-reamed pile, based on subsoil properties obtained from soil investigation work as well as subsoil properties of different magnitudes for clayey and sandy soils, and comparing these with the safe pile capacity shown in Table 1 of that IS code. The study reveals that the safe pile capacity computed from subsoil properties, barring a very few cases, differs considerably from that shown in the aforesaid code, and calls for more research and study to find a conclusive explanation of this probable anomaly.

  5. Path Integration of Head Direction: Updating a Packet of Neural Activity at the Correct Speed Using Axonal Conduction Delays

    PubMed Central

    Walters, Daniel; Stringer, Simon; Rolls, Edmund

    2013-01-01

    The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a “look-up” table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity. PMID:23526976

  6. The Scaled SLW model of gas radiation in non-uniform media based on Planck-weighted moments of gas absorption cross-section

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.

    2018-02-01

    The Scaled SLW model for the prediction of radiation transfer in non-uniform gaseous media is presented. The paper considers a new approach for the construction of a Scaled SLW model. In order to maintain the SLW method as a simple and computationally efficient engineering method, special attention is paid to explicit non-iterative methods of calculating the scaling coefficient. The moments of the gas absorption cross-section weighted by the Planck blackbody emissive power (in particular, the first moment, the Planck mean, and the first inverse moment, the Rosseland mean) are used as the total characteristics of the absorption spectrum to be preserved by scaling. Generalized SLW modelling using these moments, including both discrete gray gases and the continuous formulation, is presented. Application of a line-by-line look-up table for the corresponding ALBDF and inverse ALBDF distribution functions (such that no solution of implicit equations is needed) ensures that the method is flexible and efficient. Predictions of radiative transfer using the Scaled SLW model are compared to line-by-line benchmark solutions, and to predictions using the Rank Correlated SLW model and the SLW Reference Approach. Conclusions and recommendations regarding application of the Scaled SLW model are made.

  7. A trainable decisions-in decision-out (DEI-DEO) fusion system

    NASA Astrophysics Data System (ADS)

    Dasarathy, Belur V.

    1998-03-01

    Most of the decision fusion systems proposed hitherto in the literature for multiple data source (sensor) environments operate on the basis of pre-defined fusion logic, be they crisp (deterministic), probabilistic, or fuzzy in nature, with no specific learning phase. The fusion systems that are trainable, i.e., ones that have a learning phase, mostly operate in the features-in-decision-out mode, which essentially reduces the fusion process functionally to a pattern classification task in the joint feature space. In this study, a trainable decisions-in-decision-out fusion system is described which estimates a fuzzy membership distribution spread across the different decision choices based on the performance of the different decision processors (sensors) corresponding to each training sample (object) which is associated with a specific ground truth (true decision). Based on a multi-decision space histogram analysis of the performance of the different processors over the entire training data set, a look-up table associating each cell of the histogram with a specific true decision is generated which forms the basis for the operational phase. In the operational phase, for each set of decision inputs, a pointer to the look-up table learnt previously is generated from which a fused decision is derived. This methodology, although primarily designed for fusing crisp decisions from the multiple decision sources, can be adapted for fusion of fuzzy decisions as well if such are the inputs from these sources. Examples, which illustrate the benefits and limitations of the crisp and fuzzy versions of the trainable fusion systems, are also included.
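
The two phases described above can be sketched as follows, under the simplifying assumptions that decisions are crisp class labels and that each histogram cell is identified with the tuple of processor decisions (all names are illustrative, not the author's implementation):

```python
from collections import Counter, defaultdict

def train_fusion_lut(decision_tuples, ground_truth):
    """Training phase: count ground-truth labels per cell of the
    multi-decision histogram, then map each cell to its majority
    true decision."""
    cells = defaultdict(Counter)
    for decisions, truth in zip(decision_tuples, ground_truth):
        cells[tuple(decisions)][truth] += 1
    return {cell: counts.most_common(1)[0][0]
            for cell, counts in cells.items()}

def fuse(lut, decisions, fallback=None):
    """Operational phase: the decision tuple is a pointer into the LUT;
    unseen cells fall back to a caller-supplied default."""
    return lut.get(tuple(decisions), fallback)
```

A fuzzy variant would store the full normalized label counts per cell (a membership distribution) instead of only the majority label.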

  8. Estimating skin blood saturation by selecting a subset of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Ewerlöf, Maria; Salerud, E. Göran; Strömberg, Tomas; Larsson, Marcus

    2015-03-01

    Skin blood haemoglobin saturation (s_b) can be estimated with hyperspectral imaging using the wavelength (λ) range of 450-700 nm, where haemoglobin absorption displays distinct spectral characteristics. Depending on the image size and photon transport algorithm, the computations may be demanding. Therefore, this work aims to evaluate subsets with a reduced number of wavelengths for s_b estimation. White Monte Carlo simulations are performed using a two-layered tissue model with discrete values for the epidermal thickness (T_epi) and the reduced scattering coefficient (μ's), mimicking an imaging setup. A detected-intensity look-up table is calculated for a range of model parameter values relevant to human skin, adding absorption effects in the post-processing. The skin model parameters, including absorbers, are: μ's(λ), T_epi, haemoglobin saturation (s_b), tissue fraction blood (f_blood) and tissue fraction melanin (f_mel). The skin model paired with the look-up table allows spectra to be calculated swiftly. Three inverse models with varying numbers of free parameters are evaluated: A(s_b, f_blood), B(s_b, f_blood, f_mel) and C (all parameters free). Fourteen wavelength candidates are selected by maximizing the spectral sensitivity to s_b while minimizing the sensitivity to the other model parameters. All possible combinations of these candidates with three, four and 14 wavelengths, as well as the full spectral range, are evaluated for estimating s_b for 1000 randomly generated evaluation spectra. The results show that the simplified models A and B estimate s_b accurately using four wavelengths (mean error 2.2% for model B). If the number of wavelengths is increased, the model complexity needs to be increased to avoid poor estimations.

  9. On-sky Closed-loop Correction of Atmospheric Dispersion for High-contrast Coronagraphy and Astrometry

    NASA Astrophysics Data System (ADS)

    Pathak, P.; Guyon, O.; Jovanovic, N.; Lozi, J.; Martinache, F.; Minowa, Y.; Kudo, T.; Kotani, T.; Takami, H.

    2018-02-01

    Adaptive optics (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of the telescope elevation angle. The look-up-table-based correction of atmospheric dispersion results in imperfect compensation, leaving residual dispersion in the point spread function (PSF), and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast with high-performance coronagraphs and can compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion directly using science path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) that has a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. We have demonstrated with the Subaru Coronagraphic Extreme AO (SCExAO) system on-sky closed-loop correction of residual dispersion to <1 mas across the H band. This work will aid the direct detection of habitable exoplanets with upcoming extremely large telescopes (ELTs) and also provides a diagnostic tool to test the performance of instruments that require sub-milliarcsecond correction.
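
The open-loop, look-up-table side of such a system can be sketched as piecewise-linear interpolation of tabulated dispersion against elevation. The numbers below are invented for illustration; the actual SCExAO tables and their wavelength dependence are not reproduced here:

```python
# Hypothetical ADC look-up table: dispersion (milliarcseconds across the
# band) tabulated against telescope elevation angle (degrees).
ELEV_DEG = [30.0, 45.0, 60.0, 75.0, 90.0]
DISPERSION_MAS = [38.0, 22.0, 12.7, 5.9, 0.0]   # illustrative values only

def dispersion_setting(elev):
    """Piecewise-linear interpolation of the LUT; at zenith (90 deg)
    the atmospheric dispersion vanishes."""
    if not ELEV_DEG[0] <= elev <= ELEV_DEG[-1]:
        raise ValueError("elevation outside table")
    for i in range(1, len(ELEV_DEG)):
        if elev <= ELEV_DEG[i]:
            x0, x1 = ELEV_DEG[i - 1], ELEV_DEG[i]
            y0, y1 = DISPERSION_MAS[i - 1], DISPERSION_MAS[i]
            return y0 + (y1 - y0) * (elev - x0) / (x1 - x0)
```

The closed-loop scheme in the paper replaces this static table with corrections measured directly from focal-plane speckles.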

  10. 47. INTERIOR VIEW LOOKING NORTH AT THE FRONT OF THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    47. INTERIOR VIEW LOOKING NORTH AT THE FRONT OF THE STAMP BATTERIES AND MORTAR BOXES. THE AMALGAMATION TABLES EXTEND TO THE FOREGROUND AND BOTTOM OF THE IMAGE. - Standard Gold Mill, East of Bodie Creek, Northeast of Bodie, Bodie, Mono County, CA

  11. The entropy of the life table: A reappraisal.

    PubMed

    Fernandez, Oscar E; Beltrán-Sánchez, Hiram

    2015-09-01

    The life table entropy provides useful information for understanding improvements in mortality and survival in a population. In this paper we take a closer look at the life table entropy and use advanced mathematical methods to provide additional insights into how it relates to changes in mortality and survival. By studying the entropy (H) as a functional, we show that changes in the entropy depend on both the relative change in life expectancy lost due to death (e(†)) and in life expectancy at birth (e0). We also show that changes in the entropy can be further linked to improvements in premature and older deaths. We illustrate our methods with empirical data from Latin American countries, which suggest that at high mortality levels declines in H (which are associated with survival increases) were linked with larger improvements in e0, whereas at low mortality levels e(†) made larger contributions to H. We additionally show that among countries with low mortality levels, contributions of e(†) to changes in the life table entropy resulted from averting early deaths. These findings indicate that future increases in overall survival in low mortality countries will likely result from improvements in e(†). Copyright © 2015 Elsevier Inc. All rights reserved.
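
The dependence described above can be summarized with the standard demographic identity for the life table entropy (this compact form is standard notation in the demography literature, not quoted from the paper):

```latex
% Life table entropy H: life expectancy lost due to death, e^\dagger,
% relative to life expectancy at birth, e_0.
H = \frac{e^{\dagger}}{e_{0}}
\qquad\Longrightarrow\qquad
\frac{\dot H}{H} = \frac{\dot e^{\dagger}}{e^{\dagger}} - \frac{\dot e_{0}}{e_{0}}
```

so a relative change in H is the relative change in e† minus the relative change in e0, which is why declines in H can be driven either by gains in e0 or by reductions in e†.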

  12. League tables and school effectiveness: a mathematical model.

    PubMed Central

    Hoyle, Rebecca B; Robinson, James C

    2003-01-01

    'School performance tables', an alphabetical list of secondary schools along with aggregates of their pupils' performances in national tests, have been published in the UK since 1992. Inevitably, the media have responded by publishing ranked 'league tables'. Despite concern over the potentially divisive effect of such tables, the current government has continued to publish this information in the same form. The effect of this information on standards and on the social make-up of the community has been keenly debated. Since there is no control group available that would allow us to investigate this issue directly, we present here a simple mathematical model. Our results indicate that, while random fluctuations from year to year can cause large distortions in the league-table positions, some schools still establish themselves as 'desirable'. To our surprise, we found that 'value-added' tables were no more accurate than tables based on raw exam scores, while a different method of drawing up the tables, in which exam results are averaged over a period of time, appears to give a much more reliable measure of school performance. PMID:12590748

  13. A Digital Lock-In Amplifier for Use at Temperatures of up to 200 °C

    PubMed Central

    Cheng, Jingjing; Xu, Yingjun; Wu, Lei; Wang, Guangwei

    2016-01-01

    Weak voltage signals cannot be reliably measured using currently available logging tools when these tools are subject to high-temperature (up to 200 °C) environments for prolonged periods. In this paper, we present a digital lock-in amplifier (DLIA) capable of operating at temperatures of up to 200 °C. The DLIA contains a low-noise instrumentation amplifier together with signal acquisition and the corresponding signal processing electronics. The high-temperature stability of the DLIA is achieved by designing system-in-package (SiP) and multi-chip module (MCM) components with low thermal resistances. An effective look-up-table (LUT) method was developed for the lock-in amplifier algorithm to decrease the complexity of the calculations and generate less heat than the traditional approach. The performance of the design was tested by determining the linearity, gain, Q value, and frequency characteristics of the DLIA between 25 and 200 °C. The maximal nonlinear error in the linearity of the DLIA working at 200 °C was about 1.736% when the equivalent input was a sine-wave signal with an amplitude of between 94.8 and 1896.0 nV and a frequency of 800 kHz. The tests showed that the proposed DLIA can work effectively in high-temperature environments up to 200 °C. PMID:27845710
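
The LUT idea for a digital lock-in can be sketched as follows: the reference sine and cosine values are precomputed once per period, so demodulation needs only multiplies and adds rather than per-sample trigonometric calls. This is a generic illustration, not the paper's implementation; the table length N and the sampling scheme are assumptions:

```python
import math

N = 64                                   # samples per reference period (assumed)
SIN_LUT = [math.sin(2 * math.pi * k / N) for k in range(N)]
COS_LUT = [math.cos(2 * math.pi * k / N) for k in range(N)]

def lock_in(samples):
    """Demodulate a signal sampled at exactly N points per reference period.
    Returns (amplitude, phase) of the component at the reference frequency;
    the running index into the LUT replaces per-sample trig evaluation."""
    x = sum(s * COS_LUT[k % N] for k, s in enumerate(samples)) * 2 / len(samples)
    y = sum(s * SIN_LUT[k % N] for k, s in enumerate(samples)) * 2 / len(samples)
    return math.hypot(x, y), math.atan2(y, x)
```

Averaging over an integer number of periods acts as the low-pass stage of the lock-in.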

  14. Global root zone storage capacity from satellite-based evaporation data

    NASA Astrophysics Data System (ADS)

    Wang-Erlandsson, Lan; Bastiaanssen, Wim; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel; van Dijk, Albert; Guerschman, Juan; Keys, Patrick; Gordon, Line; Savenije, Hubert

    2016-04-01

    We present an "earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale-independent. In contrast to traditional look-up table approaches, our method captures the variability in root zone storage capacity within land cover type, including in rainforests where direct measurements of root depth otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. We find that evergreen forests are able to create a large storage to buffer for extreme droughts (with a return period of up to 60 years), in contrast to short vegetation and crops (which seem to adapt to a drought return period of about 2 years). The presented method to estimate root zone storage capacity eliminates the need for soils and rooting depth information, which could be a game-changer in global land surface modelling.
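
The optimisation assumption above can be illustrated with a toy mass-balance sketch: the storage capacity vegetation needs is the largest cumulative overshoot of evaporation over precipitation during dry periods. This simplification omits the paper's return-period analysis, and the function name and units are illustrative (any consistent depth unit per time step works):

```python
def root_zone_storage_capacity(evaporation, precipitation):
    """Largest running soil-moisture deficit (same units as the inputs),
    where the deficit accumulates when evaporation exceeds precipitation
    and is reset toward zero when precipitation replenishes the store."""
    deficit = 0.0
    max_deficit = 0.0
    for e, p in zip(evaporation, precipitation):
        deficit = max(0.0, deficit + e - p)   # deficit cannot go negative
        max_deficit = max(max_deficit, deficit)
    return max_deficit
```

With monthly series, the result is the buffer (in mm) needed to bridge the worst dry spell in the record.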

  15. A Thermal Infrared Radiation Parameterization for Atmospheric Studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)

    2001-01-01

    This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996-version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to major gaseous absorption (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches of computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either using the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.

  16. RELAP-7 Progress Report. FY-2015 Optimization Activities Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Ray Alden; Zou, Ling; Andrs, David

    2015-09-01

    This report summarily documents the optimization activities on RELAP-7 for FY-2015. It includes the migration from the analytical stiffened gas equation of state for both the vapor and liquid phases to accurate and efficient property evaluations for both equilibrium and metastable (nonequilibrium) states using the Spline-Based Table Look-up (SBTL) method with the IAPWS-95 properties for steam and water. It also includes the initiation of realistic closure models based, where appropriate, on the U.S. Nuclear Regulatory Commission’s TRACE code. It also describes an improved entropy viscosity numerical stabilization method for the nonequilibrium two-phase flow model of RELAP-7. For ease of presentation to the reader, the nonequilibrium two-phase flow model used in RELAP-7 is briefly presented, though for detailed explanation the reader is referred to the RELAP-7 Theory Manual [R.A. Berry, J.W. Peterson, H. Zhang, R.C. Martineau, H. Zhao, L. Zou, D. Andrs, “RELAP-7 Theory Manual,” Idaho National Laboratory INL/EXT-14-31366 (rev. 1), February 2014].
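
The SBTL idea, evaluating properties from precomputed spline coefficients instead of the full IAPWS-95 equation of state, can be sketched generically. The routines below implement a plain 1-D natural cubic spline over a tabulated property (the real SBTL method uses transformed variables and 2-D tables; the data in the test are illustrative, not IAPWS-95 values):

```python
from bisect import bisect_right

def natural_cubic_spline(xs, ys):
    """Precompute second derivatives at the knots (natural boundary
    conditions), via the classic tridiagonal sweep."""
    n = len(xs)
    y2 = [0.0] * n
    u = [0.0] * n
    for i in range(1, n - 1):
        sig = (xs[i] - xs[i - 1]) / (xs[i + 1] - xs[i - 1])
        p = sig * y2[i - 1] + 2.0
        y2[i] = (sig - 1.0) / p
        u[i] = ((ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
                - (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1]))
        u[i] = (6.0 * u[i] / (xs[i + 1] - xs[i - 1]) - sig * u[i - 1]) / p
    for i in range(n - 2, -1, -1):          # back-substitution
        y2[i] = y2[i] * y2[i + 1] + u[i]
    return y2

def spline_eval(xs, ys, y2, x):
    """Evaluate the spline: a cheap local cubic replaces the full
    equation-of-state calculation at run time."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    h = xs[i + 1] - xs[i]
    a = (xs[i + 1] - x) / h
    b = (x - xs[i]) / h
    return (a * ys[i] + b * ys[i + 1]
            + ((a ** 3 - a) * y2[i] + (b ** 3 - b) * y2[i + 1]) * h * h / 6.0)
```

The spline coefficients are built once per table; every subsequent property call is a bracketing search plus a handful of arithmetic operations.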

  17. 7. GENERAL VIEW OF GUT SHANTY ON LEVEL 3; LOOKING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. GENERAL VIEW OF GUT SHANTY ON LEVEL 3; LOOKING SOUTHEAST; HOG VISCERA WERE SORTED AND CLEANED WITH HOT WATER ON LONG STAINLESS STEEL TABLES - Rath Packing Company, Hog Dressing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA

  18. Looking at Debit and Credit Card Fraud

    ERIC Educational Resources Information Center

    Porkess, Roger; Mason, Stephen

    2012-01-01

    This article, written jointly by a mathematician and a barrister, looks at some of the statistical issues raised by court cases based on fraud involving chip and PIN cards. It provides examples and insights that statistics teachers should find helpful. (Contains 4 tables and 1 figure.)

  19. Looking south at, left to right, Heavy Equipment Shop (Bldg. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking south at, left to right, Heavy Equipment Shop (Bldg. 188), C.W.E. Office (Bldg. 130), Boiler Shop (Bldg. 152), and canopy over drop table pits - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM

  20. Localization of small arms fire using acoustic measurements of muzzle blast and/or ballistic shock wave arrivals.

    PubMed

    Lo, Kam W; Ferguson, Brian G

    2012-11-01

    The accurate localization of small arms fire using fixed acoustic sensors is considered. First, the conventional wavefront-curvature passive ranging method, which requires only differential time-of-arrival (DTOA) measurements of the muzzle blast wave to estimate the source position, is modified to account for sensor positions that are not strictly collinear (bowed array). Second, an existing single-sensor-node ballistic model-based localization method, which requires both DTOA and differential angle-of-arrival (DAOA) measurements of the muzzle blast wave and ballistic shock wave, is improved by replacing the basic external ballistics model (which describes the bullet's deceleration along its trajectory) with a more rigorous model and replacing the look-up table ranging procedure with a nonlinear (or polynomial) equation-based ranging procedure. Third, a new multiple-sensor-node ballistic model-based localization method, which requires only DTOA measurements of the ballistic shock wave to localize the point of fire, is formulated. The first method is applicable to situations when only the muzzle blast wave is received, whereas the third method applies when only the ballistic shock wave is received. The effectiveness of each of these methods is verified using an extensive set of real data recorded during a 7 day field experiment.

  1. Color quality management in advanced flat panel display engines

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz; Neugebauer, Charles F.; Marnatti, David M.

    2003-01-01

    In recent years, color reproduction systems for consumer applications have faced various difficulties. In particular, flat panels and printers could not achieve a satisfactory color match: an RGB image stored on a retailer's Internet server did not show the desired colors on a consumer display or printer device. STMicroelectronics addresses this important color reproduction issue inside their advanced display engines using novel algorithms targeted at low-cost consumer flat panels. Using a new and genuine RGB color space transformation, which combines a gamma-correction look-up table, tetrahedrization, and linear interpolation, we satisfy market demands.
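
Tetrahedral interpolation inside one cell of a 3-D colour LUT can be sketched as follows; this is a generic textbook formulation, not STMicroelectronics' implementation, and the corner values in the test are made up:

```python
def tetrahedral(c, fx, fy, fz):
    """Interpolate inside one LUT cell. `c` maps cube corners
    (0/1, 0/1, 0/1) to output values; (fx, fy, fz) are the fractional
    coordinates of the point inside the cell."""
    # Order the axes by decreasing fractional coordinate; walking the cube
    # edges in that order selects one of the six tetrahedra that partition
    # the cell, and accumulates the corner differences along the path.
    axes = sorted(enumerate((fx, fy, fz)), key=lambda t: -t[1])
    corner = (0, 0, 0)
    value = c[corner]
    for axis, f in axes:
        nxt = list(corner)
        nxt[axis] = 1
        nxt = tuple(nxt)
        value += f * (c[nxt] - c[corner])
        corner = nxt
    return value
```

Only four of the eight corners are touched per sample, which is why tetrahedral interpolation is cheaper than trilinear in hardware LUT engines; for a linear field the result is exact.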

  2. An efficient energy response model for liquid scintillator detectors

    NASA Astrophysics Data System (ADS)

    Lebanowski, Logan; Wan, Linyan; Ji, Xiangpan; Wang, Zhe; Chen, Shaomin

    2018-05-01

    Liquid scintillator detectors are playing an increasingly important role in low-energy neutrino experiments. In this article, we describe a generic energy response model of liquid scintillator detectors that provides energy estimations of sub-percent accuracy. This model fits a minimal set of physically-motivated parameters that capture the essential characteristics of scintillator response and that can naturally account for changes in scintillator over time, helping to avoid associated biases or systematic uncertainties. The model employs a one-step calculation and look-up tables, yielding an immediate estimation of energy and an efficient framework for quantifying systematic uncertainties and correlations.

  3. Regional Input-Output Tables and Trade Flows: an Integrated and Interregional Non-survey Approach

    DOE PAGES

    Boero, Riccardo; Edwards, Brian Keith; Rivera, Michael Kelly

    2017-03-20

    Regional input–output tables and trade flows: an integrated and interregional non-survey approach. Regional Studies. Regional analyses require detailed and accurate information about dynamics happening within and between regional economies. However, regional input–output tables and trade flows are rarely observed and they must be estimated using up-to-date information. Common estimation approaches vary widely but consider tables and flows independently. Here, by using commonly used economic assumptions and available economic information, this paper presents a method that integrates the estimation of regional input–output tables and trade flows across regions. Examples of the method implementation are presented and compared with other approaches, suggesting that the integrated approach provides advantages in terms of estimation accuracy and analytical capabilities.

  5. Operational correction and validation of the VIIRS TEB longwave infrared band calibration bias during blackbody temperature changes

    NASA Astrophysics Data System (ADS)

    Wang, Wenhui; Cao, Changyong; Ignatov, Alex; Li, Zhenglong; Wang, Likun; Zhang, Bin; Blonski, Slawomir; Li, Jun

    2017-09-01

    The Suomi NPP VIIRS thermal emissive bands (TEB) have been performing very well since data became available on January 20, 2012. The longwave infrared bands at 11 and 12 μm (M15 and M16) are primarily used for sea surface temperature (SST) retrievals. A long-standing anomaly has been observed during the quarterly warm-up-cool-down (WUCD) events: during such events the daytime SST product becomes anomalous, with a warm bias appearing as a spike in the SST time series on the order of 0.2 K. A previous study (Cao et al. 2017) suggested that the VIIRS TEB calibration anomaly during WUCD is due to a flawed theoretical assumption in the calibration equation and proposed an Ltrace method to address the issue. This paper complements that study and presents the operational implementation and validation of the Ltrace method for M15 and M16. The Ltrace method applies bias correction during WUCD only. It requires a simple code change and a one-time calibration parameter look-up-table update. The method was evaluated using colocated CrIS observations and the SST algorithm. Our results indicate that the method can effectively reduce the WUCD calibration anomaly in M15, with a residual bias of 0.02 K after the correction. It works less effectively for M16, with a residual bias of 0.04 K. The Ltrace method may over-correct WUCD calibration biases, especially for M16; however, the residual WUCD biases are small in both bands. Evaluation results using the SST algorithm show that the method can effectively remove the SST anomaly during WUCD events.

  6. Enhanced visualization of abnormalities in digital-mammographic images

    NASA Astrophysics Data System (ADS)

    Young, Susan S.; Moore, William E.

    2002-05-01

    This paper describes two new presentation methods that are intended to improve the ability of radiologists to visualize abnormalities in mammograms by enhancing the appearance of the breast parenchyma pattern relative to the fatty-tissue surroundings. The first method, referred to as mountain-view, is obtained via multiscale edge decomposition through filter banks. The image is displayed in a multiscale edge domain that gives the image a topographic-like appearance. The second method displays the image in the intensity domain and is referred to as contrast-enhancement presentation. The input image is first passed through a decomposition filter bank to produce a filtered output (Id). The image at the lowest resolution is processed using a LUT (look-up table) to produce a tone-scaled image (I'). The LUT is designed to optimally map the code value range corresponding to the parenchyma pattern in the mammographic image into the dynamic range of the output medium. The algorithm uses a contrast weight control mechanism to produce the desired weight factors to enhance the edge information corresponding to the parenchyma pattern. The output image is formed using a reconstruction filter bank through I' and the enhanced Id.
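
The tone-scaling LUT described for the second method can be sketched as a simple window-and-clip mapping from the code-value range of interest onto the output dynamic range. The window limits, bit depths, and rounding below are assumptions for illustration, not the paper's actual design:

```python
def build_tone_lut(lo, hi, in_levels=4096, out_max=255):
    """LUT for `in_levels` input code values: the window [lo, hi]
    (assumed to contain the parenchyma pattern) is mapped linearly onto
    [0, out_max]; code values outside the window are clipped."""
    lut = []
    for cv in range(in_levels):
        t = (cv - lo) / (hi - lo)
        t = min(max(t, 0.0), 1.0)           # clip outside the window
        lut.append(round(t * out_max))
    return lut
```

At display time, every pixel is remapped with a single table index, so the tone scale costs one memory read per pixel.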

  7. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-cloud Aerosols over Ocean Using CALIOP and MODIS Data

    NASA Technical Reports Server (NTRS)

    Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.

    2013-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.
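
The grid-level bookkeeping described above can be sketched as a weighted sum: each joint-histogram fraction weights a precomputed DRE LUT entry for its cloud bin. The linear scaling with aerosol optical depth and all numbers are illustrative simplifications of the paper's radiative transfer LUT:

```python
def grid_mean_dre(histogram_fractions, dre_lut, aerosol_od):
    """Grid-box mean DRE (W m-2, illustrative): weight the LUT entries
    (one per cloud optical-depth bin) by the fraction of the grid box in
    that bin, then scale by the above-cloud aerosol optical depth."""
    assert abs(sum(histogram_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return aerosol_od * sum(f * d for f, d in zip(histogram_fractions, dre_lut))
```

Because only grid-level statistics enter the sum, no pixel-level radiative transfer calls are needed at run time.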

  8. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-Cloud Aerosols Using CALIOP and MODIS Data

    NASA Technical Reports Server (NTRS)

    Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.

    2014-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.

  9. Improving wavelet denoising based on an in-depth analysis of the camera color processing

    NASA Astrophysics Data System (ADS)

    Seybold, Tamara; Plichta, Mathias; Stechele, Walter

    2015-02-01

    While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set. The noise model is usually additive white Gaussian noise (AWGN). This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods, improving wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. Because the method is implemented using look-up tables that are calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.
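    The pre-computed-threshold idea can be sketched in a few lines. The Haar step below is standard; the threshold LUT values and the use of the approximation coefficient as the local intensity estimate are illustrative assumptions, not the authors' calibrated tables:

```python
import math

S = 1.0 / math.sqrt(2.0)

def haar(x):                      # one level of the orthonormal Haar transform
    approx = [(a + b) * S for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * S for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def ihaar(approx, detail):        # exact inverse of haar()
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * S, (a - d) * S]
    return out

# Hypothetical threshold LUT: camera noise is signal-dependent, so the hard
# threshold grows with local intensity. Breakpoints and values are made up.
THRESH_LUT = [(0.0, 1.0), (50.0, 2.5), (150.0, 5.0)]   # (intensity, threshold)

def threshold_for(intensity):
    t = THRESH_LUT[0][1]
    for level, thresh in THRESH_LUT:
        if intensity >= level:
            t = thresh
    return t

def denoise(x):
    """Hard-threshold detail coefficients with an intensity-dependent
    threshold read from the pre-computed LUT."""
    approx, detail = haar(x)
    detail = [d if abs(d) > threshold_for(a) else 0.0
              for a, d in zip(approx, detail)]
    return ihaar(approx, detail)

print(denoise([100.0, 103.0]))    # small bright-region detail zeroed, ~[101.5, 101.5]
```

    Because the threshold is a table read rather than a per-pixel noise-model evaluation, the per-coefficient cost is constant, which is what makes the FPGA real-time claim plausible.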

  10. 7. Credit BG. View looking west into small solid rocket ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. Credit BG. View looking west into small solid rocket motor testing bay of Test Stand 'E' (Building 4259/E-60). Motors are mounted on steel table and fired horizontally toward the east. - Jet Propulsion Laboratory Edwards Facility, Test Stand E, Edwards Air Force Base, Boron, Kern County, CA

  11. Looking north through Machine Shop (Bldg. 163) Track 409 Doors ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking north through Machine Shop (Bldg. 163) Track 409 Doors at transfer table, with Boiler Shop (Bldg. 152) at left and C.W.E. Shop No. 2 (Bldg. 47) at right - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM

  12. WASP (Write a Scientific Paper) using Excel - 10: Contingency tables.

    PubMed

    Grech, Victor

    2018-06-01

    Contingency tables may be required to perform chi-squared test analyses. This paper provides pointers on how to do this in Microsoft Excel and explains how to set up methods to calculate confidence intervals for proportions, including proportions with zero numerators. Copyright © 2018 Elsevier B.V. All rights reserved.
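    For readers outside Excel, the two computations the abstract mentions can be sketched in Python. The "rule of three" shortcut for zero-numerator proportions is one common choice, assumed here rather than taken from the article:

```python
def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table
    (list of rows of observed counts)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

def zero_numerator_ci(n):
    """'Rule of three': approximate 95% upper bound for a proportion
    observed as 0 successes in n trials."""
    return 3.0 / n

print(round(chi_squared([[10, 20], [20, 10]]), 3))   # 6.667
print(zero_numerator_ci(60))                          # 0.05
```

    The statistic would then be compared against the chi-squared distribution with (r-1)(c-1) degrees of freedom.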

  13. Report on computation of repetitive hyperbaric-hypobaric decompression tables

    NASA Technical Reports Server (NTRS)

    Edel, P. O.

    1975-01-01

    The tables were constructed specifically for NASA's simulated weightlessness training program; they provide for 8 depth ranges covering depths from 7 to 47 FSW, with exposure times of 15 to 360 minutes. These tables were based upon an 8-compartment model using tissue half-time values of 5 to 360 minutes and Workman-line M-values for control of the decompression obligation resulting from hyperbaric exposures. Supersaturation ratios of 1.55:1 to 2:1 were used for control of ascents to altitude following such repetitive dives. Adequacy of the method and the resultant tables was determined in light of past experience with decompression involving hyperbaric-hypobaric interfaces in human exposures. Using these criteria, the method showed conformity with empirically determined values. In areas where a discrepancy existed, the tables would err in the direction of safety.
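    The compartment arithmetic behind such tables can be sketched compactly. The 5-360 min half-time span follows the abstract; the specific half-times, pressures, and the ratio check below are illustrative assumptions, not the report's values:

```python
# Haldane-style exponential inert-gas loading for an 8-compartment model.
HALF_TIMES = [5.0, 10.0, 20.0, 40.0, 80.0, 160.0, 240.0, 360.0]  # minutes

def tissue_pressure(p0, p_ambient, minutes, half_time):
    """Inert-gas tension after 'minutes' at constant ambient pressure."""
    return p0 + (p_ambient - p0) * (1.0 - 2.0 ** (-minutes / half_time))

def ascent_to_altitude_ok(tissue, p_altitude, max_ratio=2.0):
    """Supersaturation-ratio control: every compartment must stay under
    the allowed tissue-to-ambient ratio at the target altitude."""
    return all(p / p_altitude <= max_ratio for p in tissue)

# One 60-min exposure at 2.0 ata, all compartments starting at 0.79:
tissue = [tissue_pressure(0.79, 2.0, 60.0, ht) for ht in HALF_TIMES]
print(round(tissue[0], 4))                 # fastest compartment nearly saturated
print(ascent_to_altitude_ok(tissue, 1.0))  # True at sea-level ambient pressure
```

    A table generator would iterate this loading over the repetitive dive/altitude schedule and tabulate the limiting exposure times.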

  14. Comparison of imaging characteristics of multiple-beam equalization and storage phosphor direct digitizer radiographic systems

    NASA Astrophysics Data System (ADS)

    Sankaran, A.; Chuang, Keh-Shih; Yonekawa, Hisashi; Huang, H. K.

    1992-06-01

    The imaging characteristics of two chest radiographic systems, Advanced Multiple Beam Equalization Radiography (AMBER) and the Konica Direct Digitizer [using a storage phosphor (SP) plate], have been compared. The variables affecting image quality and the computer display/reading systems used are detailed. Utilizing specially designed wedge, geometric, and anthropomorphic phantoms, studies were conducted on: exposure and energy response of detectors; nodule detectability; different exposure techniques; and various look-up tables (LUTs), gray scale displays and laser printers. Methods for scatter estimation and reduction were investigated. It is concluded that AMBER with screen-film and equalization techniques provides better nodule detectability than SP plates. However, SP plates have other advantages such as flexibility in the selection of exposure techniques, image processing features, and excellent sensitivity when combined with optimum reader operating modes. The equalization feature of AMBER provides better nodule detectability under the denser regions of the chest. Results of diagnostic accuracy are demonstrated with nodule detectability plots and analysis of images obtained with phantoms.

  15. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.

    PubMed

    Kalathil, Shaeen; Elias, Elizabeth

    2015-11-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs offer an easy and efficient design approach: a non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least square approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performances, which are then improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms which are modified and used in this paper are the Artificial Bee Colony algorithm, Gravitational Search algorithm, Harmony Search algorithm and Genetic algorithm, and they result in filter banks with less implementation complexity, power consumption and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB.
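    The CSD representation itself is compact enough to sketch. The recursion below computes the non-adjacent signed-digit form directly; the paper's design instead reads pre-computed digit strings from a look-up table, so treat this as an illustration of what such a table would store:

```python
def to_csd(n):
    """Canonic signed digit (non-adjacent) form of a positive integer,
    least-significant digit first; digits are -1, 0 or 1 and no two
    adjacent digits are both non-zero."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
        else:
            d = 1 if n % 4 == 1 else -1   # pick the digit that leaves n even
            digits.append(d)
            n -= d
        n //= 2
    return digits

def csd_value(digits):
    return sum(d * (1 << i) for i, d in enumerate(digits))

print(to_csd(7))              # [-1, 0, 0, 1], i.e. 8 - 1: two non-zeros, not three
print(csd_value(to_csd(7)))   # 7
```

    Fewer non-zero digits means fewer shift-add terms per coefficient, which is exactly the implementation-complexity saving the abstract claims for CSD filter banks.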

  16. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    PubMed Central

    Kalathil, Shaeen; Elias, Elizabeth

    2014-01-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs offer an easy and efficient design approach: a non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least square approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performances, which are then improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms which are modified and used in this paper are the Artificial Bee Colony algorithm, Gravitational Search algorithm, Harmony Search algorithm and Genetic algorithm, and they result in filter banks with less implementation complexity, power consumption and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB. PMID:26644921

  17. Self-Organization of Blood Pressure Regulation: Experimental Evidence

    PubMed Central

    Fortrat, Jacques-Olivier; Levrard, Thibaud; Courcinous, Sandrine; Victor, Jacques

    2016-01-01

    Blood pressure regulation is a prime example of homeostatic regulation. However, some characteristics of the cardiovascular system better match a non-linear self-organized system than a homeostatic one. To determine whether blood pressure regulation is self-organized, we repeated the seminal demonstration of self-organized control of movement, but applied it to the cardiovascular system. We looked for two distinctive features of self-organization: non-equilibrium phase transitions and hysteresis in their occurrence when the system is challenged. We challenged the cardiovascular system by means of slow, 20-min Tilt-Up and Tilt-Down tilt table tests in random order. We continuously determined the phase between oscillations at the breathing frequency of Total Peripheral Resistances and Heart Rate Variability by means of cross-spectral analysis. We looked for a significant phase drift during these procedures, which would signal a non-equilibrium phase transition, and determined at which head-up tilt angle it occurred. We checked that this angle was significantly different between Tilt-Up and Tilt-Down to demonstrate hysteresis. We observed a significant non-equilibrium phase transition in nine healthy volunteers out of 11, with significant hysteresis (48.1 ± 7.5° and 21.8 ± 3.9° during Tilt-Up and Tilt-Down, respectively, p < 0.05). Our study shows experimental evidence of self-organized short-term blood pressure regulation. It provides new insights into blood pressure regulation and its related disorders. PMID:27065880

  18. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR: granted in the 5th framework program by the European Commission). The chain starts from raw data of an image digitizer (CR, DR) or synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on a viewing box) and softcopy (monitor). A key feature of the chain is its complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR-DR) or from a pattern generator, in which the characteristics of DR-CR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and is then passed to the display stage. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the standard Grayscale Display Function (DICOM) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM-GSDF or a Kanamori look-up table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a Human Visual System model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate non-visible differences. Comparison leads to visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.

  19. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    NASA Astrophysics Data System (ADS)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS through new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS Storage Manager, and ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  20. 9. GENERAL INTERIOR VIEW OF BEEF KILLING FLOOR; LOOKING SOUTHEAST; ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. GENERAL INTERIOR VIEW OF BEEF KILLING FLOOR; LOOKING SOUTHEAST; PLATFORMS IN FOREGROUND WERE USED BY SPLITTERS, TRIMMERS AND GOVERNMENT INSPECTORS; SKINNING TABLE RAN ALONG THE WINDOWS NEAR THE CENTER OF THE PHOTO - Rath Packing Company, Beef Killing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA

  1. 37. July 1974. WOOD SHOP, VIEW LOOKING NORTHWEST, SHOWING THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    37. July 1974. WOOD SHOP, VIEW LOOKING NORTHWEST, SHOWING THE PLANER WITH ITS BELT CHASE FROM THE BASEMENT LINESHAFT AND THE BELTING SYSTEM FOR THE TABLE-SHAPER. BEYOND THE PLANER IS THE BAND SAW. - Gruber Wagon Works, Pennsylvania Route 183 & State Hill Road at Red Bridge Park, Bernville, Berks County, PA

  2. 12 CFR Appendix C to Subpart G - OCC Interpretations

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....203(a)(1)-2. Paragraph 34.203(c)(2)(iii). 1. Confirming elements in the appraisal. To confirm that the elements in appendix A to this subpart are included in the written appraisal, a creditor need not look...)(7)(viii). 1. Bureau table of rural counties The Bureau publishes on its Web site a table of rural...

  3. Adult Training and Education: Results from the National Household Education Surveys Program of 2016. First Look. NCES 2017-103rev

    ERIC Educational Resources Information Center

    Cronen, Stephanie; McQuiggan, Meghan; Isenberg, Emily

    2018-01-01

    This First Look report provides selected key findings on adults' attainment of nondegree credentials (licenses, certifications, and postsecondary certificates), and their completion of work experience programs such as apprenticeships and internships. This version of the report corrects an error in three tables in the originally released version…

  4. 7. INTERIOR, ROBERTS AND SCHAEFER SHAKER TABLE (LEFT), MARYLAND NEW ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. INTERIOR, ROBERTS AND SCHAEFER SHAKER TABLE (LEFT), MARYLAND NEW RIVER COAL COMPANY INSTALLED APRON CONVEYOR (RIGHT) USED TO CONVEY COAL TO THE BELKNAP CHORIDE WASHER, RETURN CHUTE FOR CLEANED COAL (FAR RIGHT), AND COAL STORAGE SILO (BACKGROUND), LOOKING WEST - Nuttallburg Mine Complex, Tipple, North side of New River, 2.7 miles upstream from Fayette Landing, Lookout, Fayette County, WV

  5. Method and apparatus for spur-reduced digital sinusoid synthesis

    NASA Technical Reports Server (NTRS)

    Zimmerman, George A. (Inventor); Flanagan, Michael J. (Inventor)

    1995-01-01

    A technique for reducing the spurious signal content in digital sinusoid synthesis is presented. Spur reduction is accomplished through dithering both amplitude and phase values prior to word-length reduction. The analytical approach developed for analog quantization is used to produce new bounds on spur performance in these dithered systems. Amplitude dithering allows output word-length reduction without introducing additional spurs. Effects of periodic dither similar to that produced by a pseudo-noise (PN) generator are analyzed. This phase dithering method provides a spur reduction of 6(M + 1) dB per phase bit when the dither consists of M uniform variates. While the spur reduction is at the expense of an increase in system noise, the noise power can be made white, making the power spectral density small. This technique permits the use of a smaller number of phase bits addressing sinusoid look-up tables, resulting in an exponential decrease in system complexity. Amplitude dithering allows the use of less complicated multipliers and narrower data paths in purely digital applications, as well as the use of coarse-resolution, highly-linear digital-to-analog converters (DAC's) to obtain spur performance limited by the DAC linearity rather than its resolution.
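    A behavioral model of the phase-dither half of the scheme (amplitude dither before output word-length reduction would follow the same pattern). The table size, accumulator width, and single uniform dither variate are illustrative assumptions, not the patent's parameters:

```python
import math
import random

LUT_BITS = 8                  # 256-entry sine look-up table
ACC_BITS = 24                 # phase accumulator width
SINE_LUT = [math.sin(2 * math.pi * k / (1 << LUT_BITS))
            for k in range(1 << LUT_BITS)]

def dds_samples(tuning_word, n, dither=True, seed=0):
    """Direct digital synthesis: add uniform dither to the phase bits that
    truncation will discard, spreading truncation spurs into broadband noise."""
    rng = random.Random(seed)
    acc, out = 0, []
    drop = ACC_BITS - LUT_BITS               # bits removed by truncation
    for _ in range(n):
        phase = acc
        if dither:
            phase += rng.randrange(1 << drop)    # one LUT-address LSB of dither
        out.append(SINE_LUT[(phase >> drop) & ((1 << LUT_BITS) - 1)])
        acc = (acc + tuning_word) & ((1 << ACC_BITS) - 1)
    return out

samples = dds_samples(tuning_word=100_000, n=1000)
print(len(samples))   # 1000
```

    With M uniform variates summed into the dither, the abstract's 6(M+1) dB-per-phase-bit spur improvement applies; the sketch uses M = 1.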

  6. On the suitability of the connection machine for direct particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonard

    1990-01-01

    The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data-parallel form. Some of the SPS algorithms can be directly translated to data parallel, but several of the vectorizable algorithms have no direct data-parallel equivalent, which requires the development of new, strictly data-parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table look-up. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure is provided of the performance of the Connection Machine for direct particle simulation. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data-parallel programming. An important outcome of this work has been new data-parallel algorithms specifically of use for direct particle simulation but which also expand the data-parallel diction.

  7. On the Green's function of the partially diffusion-controlled reversible ABCD reaction for radiation chemistry codes

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2015-09-01

    Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort in developing models to understand the role of chemical reactions in the radiation effects on cells and tissues, and may eventually be included in event-based models of space radiation risks. Moreover, as many reactions of this type occur in biological systems, this algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.

  8. Evolutionary Optimization of Centrifugal Nozzles for Organic Vapours

    NASA Astrophysics Data System (ADS)

    Persico, Giacomo

    2017-03-01

    This paper discusses the shape optimization of non-conventional centrifugal turbine nozzles for Organic Rankine Cycle applications. The optimal aerodynamic design is supported by the use of a non-intrusive, gradient-free technique specifically developed for shape optimization of turbomachinery profiles. The method is constructed as a combination of a geometrical parametrization technique based on B-splines, a high-fidelity and experimentally validated Computational Fluid Dynamics solver, and a surrogate-based evolutionary algorithm. The non-ideal gas behaviour featured by the flow of organic fluids in the cascades of interest is introduced via a look-up-table approach, which is rigorously applied throughout the whole optimization process. Two transonic centrifugal nozzles are considered, featuring very different loading and radial extension. The application of a systematic and automatic design method to such a non-conventional configuration highlights the character of centrifugal cascades; the blades require a specific and non-trivial definition of the shape, especially in the rear part, to avoid the onset of shock waves. It is shown that the optimization acts in a similar way for the two cascades, identifying an optimal curvature of the blade that both provides a relevant increase of cascade performance and reduces downstream gradients.

  9. GUI Type Fault Diagnostic Program for a Turboshaft Engine Using Fuzzy and Neural Networks

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Koo, Youngju

    2011-04-01

    A helicopter operated in severe flight environmental conditions must have a very reliable propulsion system. On-line condition monitoring and fault detection of the engine can promote the reliability and availability of the helicopter propulsion system. A hybrid health monitoring program using Fuzzy Logic and Neural Network algorithms is proposed. In this hybrid method, the Fuzzy Logic easily identifies the faulted components from engine measuring parameter changes, and the Neural Networks accurately quantify the identified faults. In order to use the fault diagnostic system effectively, a GUI (Graphical User Interface) type program is newly proposed, composed of a real-time monitoring part, an engine condition monitoring part and a fault diagnostic part. The real-time monitoring part displays measured parameters of the studied turboshaft engine such as power turbine inlet temperature, exhaust gas temperature, fuel flow, torque and gas generator speed. The engine condition monitoring part evaluates the engine condition through comparison between the monitored performance parameters and the base performance parameters analyzed by the base performance analysis program using look-up tables. The fault diagnostic part identifies and quantifies single and multiple faults from the monitored parameters using the hybrid method.

  10. Mapping high-resolution incident photosynthetically active radiation over land surfaces from MODIS and GOES satellite data

    NASA Astrophysics Data System (ADS)

    Liang, S.; Wang, K.; Wang, D.; Townshend, J.; Running, S.; Tsay, S.

    2008-05-01

    Incident photosynthetically active radiation (PAR) is a key variable required by almost all terrestrial ecosystem models. Many radiation efficiency models linearly relate canopy productivity to the absorbed PAR. Unfortunately, the current incident PAR products estimated from remotely sensed data or calculated by radiation models are not available at spatial and temporal resolutions sufficient for carbon cycle modeling and various applications. In this study, we aim to develop incident PAR products at one-kilometer scale from multiple satellite sensors, such as the Moderate Resolution Imaging Spectrometer (MODIS) and the Geostationary Operational Environmental Satellite (GOES) sensor. We first developed a look-up table approach to estimate an instantaneous incident PAR product from MODIS (Liang et al., 2006). The temporal observations of each pixel are used to estimate land surface reflectance, and look-up tables of both aerosol and cloud are searched, based on the top-of-atmosphere reflectance and surface reflectance, to determine incident PAR. The incident PAR product includes both the direct and diffuse components. The calculation of a daily integrated PAR using two different methods has also been developed (Wang et al., 2008a). A similar algorithm has been further extended to GOES data (Wang et al., 2008b; Zheng et al., 2008). Extensive validation activities are conducted to evaluate the algorithms and products using ground measurements from FLUXNET and other networks. They are also compared with other satellite products. The results indicate that our approaches can produce a reasonable PAR product at 1 km resolution. We have generated 1 km incident PAR products over North America for several years, which are freely available to the science community. Liang, S., T. Zheng, R. Liu, H. Fang, S. C. Tsay, S. Running (2006), Estimation of incident photosynthetically active radiation from MODIS data, Journal of Geophysical Research - Atmospheres, 111, D15208, doi:10.1029/2005JD006730. Wang, D., S. Liang, and T. Zheng (2008a), Integrated daily PAR from MODIS, International Journal of Remote Sensing, revised. Wang, K., S. Liang, T. Zheng and D. Wang (2008b), Simultaneous estimation of surface photosynthetically active radiation and albedo from GOES, Remote Sensing of Environment, revised. Zheng, T., S. Liang, K. Wang (2008), Estimation of incident PAR from GOES imagery, Journal of Applied Meteorology and Climatology, in press.
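    The "search the look-up table" step amounts to interpolating in pre-computed radiative transfer results. A one-dimensional Python sketch with made-up reflectance/PAR pairs; the real tables are multi-dimensional in sun geometry, aerosol and cloud state:

```python
import bisect

# Hypothetical pre-computed table: top-of-atmosphere reflectance vs.
# instantaneous incident PAR (W/m^2) for one geometry and aerosol model.
TOA_REFLECTANCE = [0.10, 0.20, 0.40, 0.60, 0.80]
INCIDENT_PAR    = [430.0, 400.0, 300.0, 180.0, 90.0]   # thicker cloud -> less PAR

def par_from_lut(rho):
    """Linear interpolation in the look-up table (clamped at the ends)."""
    if rho <= TOA_REFLECTANCE[0]:
        return INCIDENT_PAR[0]
    if rho >= TOA_REFLECTANCE[-1]:
        return INCIDENT_PAR[-1]
    i = bisect.bisect_right(TOA_REFLECTANCE, rho)
    x0, x1 = TOA_REFLECTANCE[i - 1], TOA_REFLECTANCE[i]
    y0, y1 = INCIDENT_PAR[i - 1], INCIDENT_PAR[i]
    return y0 + (y1 - y0) * (rho - x0) / (x1 - x0)

print(par_from_lut(0.30))   # ~350, halfway between the 0.20 and 0.40 entries
```

    Because the radiative transfer runs are done once offline, per-pixel retrieval cost is a binary search plus one interpolation.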

  11. [The influence of training on rehabilitation and keep-fit tables on the chosen parameters of body weight].

    PubMed

    Krawczyk, Joanna; Wojciechowski, Jarosław; Leszczyński, Ryszard; Błaszczyk, Jan

    2010-01-01

    More and more people in the world contend with overweight or obesity, and this phenomenon is now recognized as one of the most important problems of modern civilization in many developed countries. A change of lifestyle, from an active life to a more sedentary one, together with bad eating habits, has led to the development of overweight and obesity at an alarmingly fast rate, with a parallel growth of interest in research looking for effective methods of fighting overweight and obesity. The aim of the study was to evaluate some body-weight parameters among people undergoing health training on the Slender-Life rehabilitation and keep-fit tables. A group of 50 patients treated in a sanatorium was included in the observation. Measurements of body weight and of skin- and fat-fold thickness were performed twice, on the first and last days of the fifteen-day training on the tables. A statistically significant decrease of the examined parameters, including real body weight, fat mass, the BMI index and the thickness of the skin and fat folds, was detected. The health training on the Slender-Life rehabilitation and keep-fit tables increases the body's fat-free share of weight. The positive reception of the rehabilitation on Slender-Life tables supports its application.

  12. A Linear Programming Application to Aircrew Scheduling.

    DTIC Science & Technology

    1980-06-06

    1968, our total force -- Air National Guard (ANG), Air Force Reserve (AFRES), and Active duty -- aircraft inventory has dropped from over 15,000... Table 2 (Active Duty A-7D Training Program) lists day and night sorties for Levels A, B, and C. There are additional constraints not apparent from looking at the table. First, night events listed in Table 5 are only

  13. Trends in Student Financing of Undergraduate Education: Selected Years, 1995-96 to 2011-12. Web Tables. NCES 2014-013

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2014

    2014-01-01

    Between 1995-96 and 2011-12, the number of undergraduates attending postsecondary institutions in the United States increased from nearly 17 million to 23 million. The web tables presented in this report provide a comprehensive look over a 16-year period at the trends in how undergraduates enrolled in U.S. postsecondary institutions finance their…

  14. Automatic Extraction of Drug Adverse Effects from Product Characteristics (SPCs): A Text Versus Table Comparison.

    PubMed

    Lamy, Jean-Baptiste; Ugon, Adrien; Berthelot, Hélène

    2016-01-01

    Potential adverse effects (AEs) of drugs are described in their summary of product characteristics (SPCs), a textual document. Automatic extraction of AEs from SPCs is useful for detecting AEs and for building drug databases. However, this task is difficult because each AE is associated with a frequency that must be extracted and the presentation of AEs in SPCs is heterogeneous, consisting of plain text and tables in many different formats. We propose a taxonomy for the presentation of AEs in SPCs. We set up natural language processing (NLP) and table parsing methods for extracting AEs from texts and tables of any format, and evaluate them on 10 SPCs. Automatic extraction performed better on tables than on texts. Tables should be recommended for the presentation of the AEs section of the SPCs.

  15. The Terminal Interface Message Processor Program.

    DTIC Science & Technology

    1973-11-01

    table entry for this device to one of CONECO, CONVT, CONEEE, CONESC, IBMEEE, IBMESC, IBMECO, IBMCON, BINECO, BINCON, or HUNT... transmit on EOM, goto NOPE; EOM: set up counter to make buffer look full, goto NOPE... CONEEE: call ECHO to echo character; CONESC: mask...

  16. Use of NOAA-N satellites for land/water discrimination and flood monitoring

    NASA Technical Reports Server (NTRS)

    Tappan, G.; Horvath, N. C.; Doraiswamy, P. C.; Engman, T.; Goss, D. W. (Principal Investigator)

    1983-01-01

    A tool for monitoring the extent of major floods was developed using data collected by the NOAA-6 advanced very high resolution radiometer (AVHRR). A basic understanding of the spectral returns in AVHRR channels 1 and 2 for water, soil, and vegetation was reached using a large number of NOAA-6 scenes from different seasons and geographic locations. A look-up table classifier was developed based on analysis of the reflective channel relationships for each surface feature. The classifier automatically separated land from water and produced classification maps which were registered for a number of acquisitions, including coverage of a major flood on the Parana River of Argentina.
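    A toy two-channel look-up classifier in the spirit of this record: quantize the AVHRR channel-1 (visible) and channel-2 (near-IR) reflectances into coarse bins and label each bin pair once, so per-pixel work is a single table read. The bin edges and decision rule are illustrative assumptions, not the NOAA-6 calibration:

```python
BINS = [0.05, 0.15, 0.30]                    # reflectance bin edges (illustrative)

def bin_of(r):
    return sum(r > edge for edge in BINS)    # bin index 0..3

# LUT[ch1_bin][ch2_bin] -> class. Water is dark in the near-IR relative to
# the visible, while soil and vegetation reflect more in channel 2.
LUT = [["water" if c2 < c1 else "land" for c2 in range(4)]
       for c1 in range(4)]

def classify(ch1, ch2):
    """Per-pixel classification is just two quantizations and a table read."""
    return LUT[bin_of(ch1)][bin_of(ch2)]

print(classify(0.08, 0.03))   # water: near-IR darker than visible
print(classify(0.12, 0.35))   # land: strong near-IR (vegetation-like) return
```

    Filling the table once from training statistics, instead of evaluating a decision rule per pixel, is what made this practical for full AVHRR scenes.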

  17. Surface emissivity and temperature retrieval for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1998-12-01

    With the growing use of hyper-spectral imagers (e.g., AVIRIS) in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. The author believes that this will enable him to get around the present temperature-emissivity separation algorithms by using methods which take advantage of the many channels available in hyper-spectral imagers. A simple fact used in devising a novel algorithm is that typical surface emissivity spectra are rather smooth compared to spectral features introduced by the atmosphere. Thus, an iterative solution technique can be devised which retrieves emissivity spectra based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. One such iterative algorithm solves the radiative transfer equation for the radiance at the sensor for the unknown emissivity and uses the blackbody temperature computed in an atmospheric window as a guess for the unknown surface temperature. By varying the surface temperature over a small range, a series of emissivity spectra is calculated, and the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
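    The "vary the surface temperature, keep the smoothest spectrum" loop can be demonstrated end-to-end on synthetic data. Everything below (band set, sky radiance with artificial "lines", the smoothness metric) is an illustrative assumption; the record's algorithm uses MODTRAN-derived tables and library spectra:

```python
import math

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam_um, t):
    """Blackbody spectral radiance B(lambda, T), SI units."""
    lam = lam_um * 1e-6
    return (2 * H * C ** 2 / lam ** 5) / (math.exp(H * C / (lam * K * t)) - 1)

WL = [8.0 + 0.2 * i for i in range(26)]                    # 8-13 um bands
T_TRUE = 300.0
EPS_TRUE = [0.96 - 0.0005 * (w - 8.0) ** 2 for w in WL]    # smooth emissivity
# Downwelling sky radiance with sharp artificial "lines" every third channel:
L_DOWN = [planck(w, 260.0) * (0.2 + (0.15 if i % 3 == 0 else 0.0))
          for i, w in enumerate(WL)]
# Simulated at-sensor radiance (transmittance and path radiance folded out):
L_OBS = [e * planck(w, T_TRUE) + (1 - e) * ld
         for w, e, ld in zip(WL, EPS_TRUE, L_DOWN)]

def emissivity_for(t):
    """Invert the radiative transfer for a candidate surface temperature."""
    return [(lo - ld) / (planck(w, t) - ld)
            for w, lo, ld in zip(WL, L_OBS, L_DOWN)]

def roughness(eps):
    """Sum of squared second differences: small for smooth spectra."""
    return sum((eps[i - 1] - 2 * eps[i] + eps[i + 1]) ** 2
               for i in range(1, len(eps) - 1))

# Vary the candidate temperature; keep the smoothest retrieved spectrum.
best_t = min((296.0 + 0.5 * j for j in range(17)),
             key=lambda t: roughness(emissivity_for(t)))
print(best_t)   # 300.0: wrong temperatures leave residual "lines" in the spectrum
```

    A wrong temperature makes the reflected-sky term cancel imperfectly, so the atmospheric lines leak into the retrieved emissivity and raise the roughness score.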

  18. Full equations utilities (FEQUTL) model for the approximation of hydraulic characteristics of open channels and control structures during unsteady flow

    USGS Publications Warehouse

    Franz, Delbert D.; Melching, Charles S.

    1997-01-01

    The Full EQuations UTiLities (FEQUTL) model is a computer program for computation of tables that list the hydraulic characteristics of open channels and control structures as a function of upstream and downstream depths; these tables facilitate the simulation of unsteady flow in a stream system with the Full Equations (FEQ) model. Simulation of unsteady flow requires many iterations for each time period computed. Thus, computation of hydraulic characteristics during the simulations is impractical, and preparation of function tables and application of table look-up procedures facilitate simulation of unsteady flow. Three general types of function tables are computed: one-dimensional tables that relate hydraulic characteristics to upstream flow depth, two-dimensional tables that relate flow through control structures to upstream and downstream flow depth, and three-dimensional tables that relate flow through gated structures to upstream and downstream flow depth and gate setting. For open-channel reaches, six types of one-dimensional function tables contain different combinations of the top width of flow, area, first moment of area with respect to the water surface, conveyance, flux coefficients, and correction coefficients for channel curvilinearity. For hydraulic control structures, one type of one-dimensional function table contains relations between flow and upstream depth, and two types of two-dimensional function tables contain relations among flow and upstream and downstream flow depths. For hydraulic control structures with gates, a three-dimensional function table lists the system of two-dimensional tables that contain the relations among flow and upstream and downstream flow depths that correspond to different gate openings.
Hydraulic control structures for which function tables containing flow relations are prepared in FEQUTL include expansions, contractions, bridges, culverts, embankments, weirs, closed conduits (circular, rectangular, and pipe-arch shapes), dam failures, floodways, and underflow gates (sluice and tainter gates). The theory for computation of the hydraulic characteristics is presented for open channels and for each hydraulic control structure. For the hydraulic control structures, the theory is developed from the results of experimental tests of flow through the structure for different upstream and downstream flow depths. These tests were done to describe flow hydraulics for a single, steady-flow design condition and, thus, do not provide complete information on flow transitions (for example, between free- and submerged-weir flow) that may arise in simulation of unsteady flow. Therefore, new procedures are developed to approximate the hydraulics of flow transitions for culverts, embankments, weirs, and underflow gates.
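
    The table look-up that FEQ performs against such function tables amounts to interpolation between precomputed rows. A minimal sketch of a one-dimensional table, with made-up depth/conveyance values, might look like:

```python
import bisect

# Sketch of a 1-D function table look-up: hydraulic properties are
# precomputed at a set of depths and the unsteady-flow solver
# interpolates between rows instead of recomputing hydraulics at each
# iteration. The depth/conveyance values below are illustrative only.

DEPTHS      = [0.0, 0.5, 1.0, 2.0, 4.0]        # depth, m
CONVEYANCES = [0.0, 12.0, 35.0, 110.0, 380.0]  # arbitrary units

def conveyance(depth):
    """Linear interpolation in a one-dimensional function table."""
    if not DEPTHS[0] <= depth <= DEPTHS[-1]:
        raise ValueError("depth outside table range")
    i = bisect.bisect_right(DEPTHS, depth) - 1
    if i == len(DEPTHS) - 1:
        return CONVEYANCES[-1]
    frac = (depth - DEPTHS[i]) / (DEPTHS[i + 1] - DEPTHS[i])
    return CONVEYANCES[i] + frac * (CONVEYANCES[i + 1] - CONVEYANCES[i])
```

    Because each look-up is a binary search plus one interpolation, it is far cheaper than re-solving the channel hydraulics inside every iteration of the unsteady-flow solution.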

  19. Astronaut Jack Lousma looks at map of Earth in ward room of Skylab cluster

    NASA Image and Video Library

    1973-08-01

    S73-34193 (1 Aug. 1973) --- Astronaut Jack R. Lousma, Skylab 3 pilot, looks at a map of Earth at the food table in the ward room of the Orbital Workshop (OWS). This photographic reproduction was taken from a television transmission made by a color TV camera aboard the Skylab space station cluster in Earth orbit. Photo credit: NASA

  20. 26. July 1974. BENCH SHOP, VIEW LOOKING SOUTH, SHOWING THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. July 1974. BENCH SHOP, VIEW LOOKING SOUTH, SHOWING THE BORING MACHINE PURCHASED IN 1885. THE BIT MAY BE LOWERED BY THE HANGING LINKAGE OR THE TABLE RAISED BY THE FOOT PEDAL. NOTICE THE CHASE FOR THE BELTS, BUILT NO LESS CAREFULLY THAN THE MACHINE ITSELF. - Gruber Wagon Works, Pennsylvania Route 183 & State Hill Road at Red Bridge Park, Bernville, Berks County, PA

  1. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    NASA Technical Reports Server (NTRS)

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2016-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  2. A high-resolution programmable Vernier delay generator based on carry chains in FPGA

    NASA Astrophysics Data System (ADS)

    Cui, Ke; Li, Xiangyu; Zhu, Rihong

    2017-06-01

    This paper presents an architecture of a high-resolution delay generator implemented in a single field programmable gate array chip by exploiting the method of utilizing dedicated carry chains. It serves as the core component in various physical instruments. The proposed delay generator contains the coarse delay step and the fine delay step to guarantee both large dynamic range and high resolution. The carry chains are organized in the Vernier delay loop style to fulfill the fine delay step with high precision and high linearity. The delay generator was implemented in the EP3SE110F1152I3 Stratix III device from Altera on a self-designed test board. Test results show that the obtained resolution is 38.6 ps, and the differential nonlinearity/integral nonlinearity is in the range of [-0.18 least significant bit (LSB), 0.24 LSB]/(-0.02 LSB, 0.01 LSB) under the nominal supply voltage of 1100 mV and environmental temperature of 20°C. The delay generator is rather efficient concerning resource cost, which uses only 668 look-up tables and 146 registers in total.

  3. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    PubMed Central

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2018-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706

  4. Whole-body to tissue concentration ratios for use in biota dose assessments for animals.

    PubMed

    Yankovich, Tamara L; Beresford, Nicholas A; Wood, Michael D; Aono, Tasuo; Andersson, Pål; Barnett, Catherine L; Bennett, Pamela; Brown, Justin E; Fesenko, Sergey; Fesenko, J; Hosseini, Ali; Howard, Brenda J; Johansen, Mathew P; Phaneuf, Marcel M; Tagami, Keiko; Takata, Hyoe; Twining, John R; Uchida, Shigeo

    2010-11-01

    Environmental monitoring programs often measure contaminant concentrations in animal tissues consumed by humans (e.g., muscle). By comparison, demonstration of the protection of biota from the potential effects of radionuclides involves a comparison of whole-body doses to radiological dose benchmarks. Consequently, methods for deriving whole-body concentration ratios based on tissue-specific data are required to make best use of the available information. This paper provides a series of look-up tables with whole-body:tissue-specific concentration ratios for non-human biota. Focus was placed on relatively broad animal categories (including molluscs, crustaceans, freshwater fishes, marine fishes, amphibians, reptiles, birds and mammals) and commonly measured tissues (specifically, bone, muscle, liver and kidney). Depending upon organism, whole-body to tissue concentration ratios were derived for between 12 and 47 elements. The whole-body to tissue concentration ratios can be used to estimate whole-body concentrations from tissue-specific measurements. However, we recommend that any given whole-body to tissue concentration ratio should not be used if the value falls between 0.75 and 1.5. Instead, a value of one should be assumed.
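
    Applying the paper's recommendation is a one-line rule: multiply the tissue concentration by the tabulated whole-body:tissue ratio, except that ratios falling between 0.75 and 1.5 are replaced by one. A minimal sketch:

```python
# Sketch of the recommended use of the look-up tables: estimate a
# whole-body concentration from a tissue measurement and a tabulated
# whole-body:tissue concentration ratio (CR), substituting CR = 1 when
# the tabulated value falls between 0.75 and 1.5, per the paper.

def whole_body_concentration(tissue_conc, cr):
    """Whole-body concentration from a tissue-specific measurement."""
    if 0.75 < cr < 1.5:
        cr = 1.0
    return tissue_conc * cr
```
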

  5. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.

    PubMed

    Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong

    2016-03-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  6. A simple algorithm to compute the peak power output of GaAs/Ge solar cells on the Martian surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glueck, P.R.; Bahrami, K.A.

    1995-12-31

    The Jet Propulsion Laboratory's (JPL's) Mars Pathfinder Project will deploy a robotic "microrover" on the surface of Mars in the summer of 1997. This vehicle will derive primary power from a GaAs/Ge solar array during the day and will "sleep" at night. This strategy requires that the rover be able to (1) determine when it is necessary to save the contents of volatile memory late in the afternoon and (2) determine when sufficient power is available to resume operations in the morning. An algorithm was developed that estimates the peak power point of the solar array from the solar array short-circuit current and temperature telemetry, and provides functional redundancy for both measurements using the open-circuit voltage telemetry. The algorithm minimizes vehicle processing and memory utilization by using linear equations instead of look-up tables to estimate peak power with very little loss in accuracy. This paper describes the method used to obtain the algorithm and presents the detailed algorithm design.
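
    The trade described here, linear equations in place of a look-up table, can be sketched as a peak-power model that is linear in the measured short-circuit current and array temperature. The coefficients below are hypothetical placeholders, not the mission's calibration values.

```python
# Hedged sketch of a look-up-table replacement: peak power modeled as
# P = a*Isc + b*T + c. Coefficients are illustrative only; the actual
# Pathfinder coefficients are not given in the abstract.

A_ISC, B_TEMP, C_OFFSET = 14.2, -0.05, 0.8  # hypothetical fit values

def peak_power_estimate(isc_amps, temp_c):
    """Linear peak-power estimate from short-circuit current and temperature."""
    return A_ISC * isc_amps + B_TEMP * temp_c + C_OFFSET
```

    The appeal for a memory-constrained flight processor is that two multiplies and two adds replace table storage and interpolation logic.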

  7. Optical scatterometry of quarter-micron patterns using neural regression

    NASA Astrophysics Data System (ADS)

    Bischoff, Joerg; Bauer, Joachim J.; Haak, Ulrich; Hutschenreuther, Lutz; Truckenbrodt, Horst

    1998-06-01

    With shrinking dimensions and increasing chip areas, a rapid and non-destructive full wafer characterization after every patterning cycle is an inevitable necessity. In former publications it was shown that Optical Scatterometry (OS) has the potential to push the attainable feature limits of optical techniques from the 0.8 to 0.5 microns of imaging methods down to 0.1 micron and below. Thus the demands of future metrology can be met. Basically being a nonimaging method, OS combines light scatter (or diffraction) measurements with modern data analysis schemes to solve the inverse scatter issue. For very fine patterns with lambda-to-pitch ratios greater than one, the specular reflected light versus the incidence angle is recorded. Usually, the data analysis comprises two steps: a training cycle connected to rigorous forward modeling, and the prediction itself. Until now, two data analysis schemes have usually been applied: the multivariate regression based Partial Least Squares method (PLS) and a look-up-table technique which is also referred to as the Minimum Mean Square Error approach (MMSE). Both methods are afflicted with serious drawbacks. On the one hand, the prediction accuracy of multivariate regression schemes degrades with larger parameter ranges due to the linearization properties of the method. On the other hand, look-up-table methods are rather time consuming during prediction, thus prolonging the processing time and reducing the throughput. An alternate method is an Artificial Neural Network (ANN) based regression which combines the advantages of multivariate regression and MMSE. Due to the versatility of a neural network, not only can its structure be adapted more properly to the scatter problem, but also the nonlinearity of the neuronal transfer functions mimics the nonlinear behavior of optical diffraction processes more adequately. In spite of these pleasant properties, the prediction speed of ANN regression is comparable with that of the PLS-method.
In this paper, the viability and performance of ANN-regression will be demonstrated with the example of sub-quarter-micron resist metrology. To this end, 0.25 micrometer line/space patterns have been printed in positive photoresist by means of DUV projection lithography. In order to evaluate the total metrology chain from light scatter measurement through data analysis, a thorough modeling has been performed. Assuming a trapezoidal shape of the developed resist profile, a training data set was generated by means of the Rigorous Coupled Wave Approach (RCWA). After training the model, a second data set was computed and deteriorated by Gaussian noise to imitate real measuring conditions. Then, these data have been fed into the models established before, resulting in a Standard Error of Prediction (SEP) which corresponds to the measuring accuracy. Even with only little effort put into the design of a back-propagation network, the ANN is clearly superior to the PLS-method. Depending on whether a network with one or two hidden layers was used, accuracy gains between 2 and 5 can be achieved compared with PLS regression. Furthermore, the ANN is less noise sensitive, for there is only a doubling of the SEP at 5% noise for ANN, whereas for PLS the accuracy degrades rapidly with increasing noise. The accuracy gain also depends on the light polarization and on the measured parameters. Finally, these results have been proven experimentally, where the OS-results are in good accordance with the profiles obtained from cross-sectioning micrographs.
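
    The look-up-table (MMSE) baseline discussed above can be sketched as a minimum-mean-square-error search over a library of simulated scatter signatures; the signature values and the parameter name below are toy data invented for illustration.

```python
# Sketch of the MMSE look-up-table approach: compare a measured angular
# scatter signature against every precomputed library entry and report
# the profile parameters of the closest match. Library contents are toy.

def mmse_lookup(measured, library):
    """library: list of (params, signature) pairs; returns best params."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return min(library, key=lambda entry: mse(entry[1], measured))[0]
```

    The exhaustive search over the library is exactly the prediction-time cost the abstract criticizes, and what the trained ANN (one forward pass per measurement) avoids.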

  8. Process and representation in graphical displays

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne

    1990-01-01

    How people comprehend graphics is examined. Graphical comprehension involves the cognitive representation of information from a graphic display and the processing strategies that people apply to answer questions about graphics. Research on representation has examined both the features present in a graphic display and the cognitive representation of the graphic. The key features include the physical components of a graph, the relation between the figure and its axes, and the information in the graph. Tests of people's memory for graphs indicate that both the physical and informational aspects of a graph are important in the cognitive representation of a graph. However, the physical (or perceptual) features overshadow the information to a large degree. Processing strategies also involve a perception-information distinction. In order to answer simple questions (e.g., determining the value of a variable, comparing several variables, and determining the mean of a set of variables), people switch between two information processing strategies: (1) an arithmetic, look-up strategy in which they use a graph much like a table, looking up values and performing arithmetic calculations; and (2) a perceptual strategy in which they use the spatial characteristics of the graph to make comparisons and estimations. The user's choice of strategies depends on the task and the characteristics of the graph. A theory of graphic comprehension is presented.

  9. Development of advanced structural analysis methodologies for predicting widespread fatigue damage in aircraft structures

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.

    1995-01-01

    NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.

  10. aMCfast: automation of fast NLO computations for PDF fits

    NASA Astrophysics Data System (ADS)

    Bertone, Valerio; Frederix, Rikkert; Frixione, Stefano; Rojo, Juan; Sutton, Mark

    2014-08-01

    We present the interface between MadGraph5_aMC@NLO, a self-contained program that calculates cross sections up to next-to-leading order accuracy in an automated manner, and APPLgrid, a code that parametrises such cross sections in the form of look-up tables which can be used for the fast computations needed in the context of PDF fits. The main characteristic of this interface, which we dub aMCfast, is its being fully automated as well, which removes the need to extract manually the process-specific information for additional physics processes, as is the case with other matrix-element calculators, and renders it straightforward to include any new process in the PDF fits. We demonstrate this by studying several cases which are easily measured at the LHC, have a good constraining power on PDFs, and some of which were previously unavailable in the form of a fast interface.

  11. Development of esMOCA RULA, Motion Capture Instrumentation for RULA Assessment

    NASA Astrophysics Data System (ADS)

    Akhmad, S.; Arendra, A.

    2018-01-01

    The purpose of this research is to build motion capture instrumentation using sensor fusion of accelerometer and gyroscope data to assist in RULA assessment. Data processing of sensor orientation is done in every sensor node by a digital motion processor. Nine sensors are placed on the upper limb of the operator subject. Development of the kinematics model is done with SimMechanics in Simulink. This kinematics model receives streaming data from the sensors via a wireless sensor network. The output of the kinematics model is the relative angle between upper limb members, visualized on the monitor. This angular information is compared against the look-up table of the RULA worksheet to give the RULA score. The assessment result of the instrument is compared with the result of assessment by RULA assessors. To sum up, there is no significant difference between the assessment by the instrument and the assessment by an assessor.
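
    The worksheet look-up step can be illustrated for a single joint. The angle bands below follow the commonly published RULA upper-arm scoring ranges, but this is a simplified sketch, not the instrument's implementation, and the full worksheet combines many such partial scores.

```python
# Sketch of scoring one joint angle against a RULA-style look-up table.
# Bands modeled on the published RULA upper-arm ranges (simplified:
# arm extension beyond -20 degrees is lumped into the highest score).

UPPER_ARM_BANDS = [(-20, 20, 1), (20, 45, 2), (45, 90, 3), (90, 180, 4)]

def upper_arm_score(angle_deg):
    """Return the partial RULA score for an upper-arm flexion angle."""
    for lo, hi, score in UPPER_ARM_BANDS:
        if lo <= angle_deg <= hi:
            return score
    return 4  # extreme postures outside the tabulated bands
```
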

  12. Assessment and validation of the community radiative transfer model for ice cloud conditions

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Weng, Fuzhong; Liu, Quanhua

    2014-11-01

    The performance of the Community Radiative Transfer Model (CRTM) under ice cloud conditions is evaluated and improved with the implementation of the MODIS Collection 6 ice cloud optical property model based on the use of severely roughened solid column aggregates and a modified Gamma particle size distribution. New ice cloud bulk scattering properties (namely, the extinction efficiency, single-scattering albedo, asymmetry factor, and scattering phase function) suitable for application to the CRTM are calculated by using the most up-to-date ice particle optical property library. CRTM-based simulations illustrate reasonable accuracy in comparison with the counterparts derived from a combination of the Discrete Ordinate Radiative Transfer (DISORT) model and the Line-by-line Radiative Transfer Model (LBLRTM). Furthermore, simulations of the top-of-the-atmosphere brightness temperature with CRTM for the Cross-track Infrared Sounder (CrIS) are carried out to further evaluate the updated CRTM ice cloud optical property look-up table.

  13. A new methodology for vibration error compensation of optical encoders.

    PubMed

    Lopez, Jesus; Artes, Mariano

    2012-01-01

    Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in the position accuracy as the measurement signals depart from ideal conditions. In case the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that is added to graduation, system, and installation errors. Behavior can be improved with different techniques that compensate the error through processing of the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy.

  14. Evaluation of CFD to Determine Two-Dimensional Airfoil Characteristics for Rotorcraft Applications

    NASA Technical Reports Server (NTRS)

    Smith, Marilyn J.; Wong, Tin-Chee; Potsdam, Mark; Baeder, James; Phanse, Sujeet

    2004-01-01

    The efficient prediction of helicopter rotor performance, vibratory loads, and aeroelastic properties still relies heavily on the use of comprehensive analysis codes by the rotorcraft industry. These comprehensive codes utilize look-up tables to provide two-dimensional aerodynamic characteristics. Typically these tables are comprised of a combination of wind tunnel data, empirical data and numerical analyses. The potential to rely more heavily on numerical computations based on Computational Fluid Dynamics (CFD) simulations has become more of a reality with the advent of faster computers and more sophisticated physical models. The ability of five different CFD codes applied independently to predict the lift, drag and pitching moments of rotor airfoils is examined for the SC1095 airfoil, which is utilized in the UH-60A main rotor. Extensive comparisons with the results of ten wind tunnel tests are performed. These CFD computations are found to be as good as experimental data in predicting many of the aerodynamic performance characteristics. Four turbulence models were examined (Baldwin-Lomax, Spalart-Allmaras, Menter SST, and k-omega).
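
    The two-dimensional table look-up such comprehensive codes perform can be sketched as bilinear interpolation of a lift coefficient stored on an (angle of attack, Mach) grid; the table values below are toy numbers, not SC1095 data.

```python
# Sketch of a 2-D airfoil table look-up: cl tabulated on an
# (alpha, Mach) grid and queried by bilinear interpolation.
# Table values are illustrative only.

ALPHAS = [0.0, 4.0, 8.0]          # angle of attack, deg
MACHS  = [0.3, 0.6]
CL = [[0.00, 0.05],               # rows: alpha, cols: Mach
      [0.44, 0.50],
      [0.85, 0.95]]

def bracket(grid, x):
    """Index of the grid interval containing x."""
    for i in range(len(grid) - 1):
        if grid[i] <= x <= grid[i + 1]:
            return i
    raise ValueError("query outside table")

def cl_lookup(alpha, mach):
    """Bilinear interpolation in the (alpha, Mach) table."""
    i, j = bracket(ALPHAS, alpha), bracket(MACHS, mach)
    ta = (alpha - ALPHAS[i]) / (ALPHAS[i + 1] - ALPHAS[i])
    tm = (mach - MACHS[j]) / (MACHS[j + 1] - MACHS[j])
    lo = CL[i][j] * (1 - ta) + CL[i + 1][j] * ta
    hi = CL[i][j + 1] * (1 - ta) + CL[i + 1][j + 1] * ta
    return lo * (1 - tm) + hi * tm
```

    Whether the tabulated values come from wind tunnel data or CFD, the comprehensive code only ever sees this interpolated surface, which is why the fidelity of the underlying table matters so much.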

  15. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, P.; Beaudet, P.

    1980-01-01

    The classification of large dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of 0.76, compared to the theoretically optimum 0.79 probability of correct classification associated with a full dimensional Bayes classifier. Recommendations for future research are included.
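
    A Bayes table look-up decision rule of the kind used at each node can be sketched as a quantized two-dimensional feature space (here assumed already projected onto two canonical axes) whose cells store the most probable training class. All data below are illustrative.

```python
# Sketch of a Bayes decision table: quantize a 2-D feature into cells
# and store, per cell, the class with the highest training count, so
# that classification at a node is a single table look-up.

def build_decision_table(samples, n_bins=4):
    """samples: list of ((x, y), class_label) with x, y in [0, 1)."""
    counts = {}
    for (x, y), label in samples:
        cell = (int(x * n_bins), int(y * n_bins))
        counts.setdefault(cell, {}).setdefault(label, 0)
        counts[cell][label] += 1
    # Keep only the maximum-probability class per cell.
    return {cell: max(c, key=c.get) for cell, c in counts.items()}

def decide(table, x, y, n_bins=4, default=None):
    """Classify a point by indexing its quantized cell."""
    return table.get((int(x * n_bins), int(y * n_bins)), default)
```
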

  16. Computed Tomography (CT) -- Sinuses

    MedlinePlus Videos and Cool Tools

    ... for the moving table. Additional Information and Resources: RTAnswers.org, Radiation Therapy for Head ...

  17. Learning Object Repositories

    ERIC Educational Resources Information Center

    Lehman, Rosemary

    2007-01-01

    This chapter looks at the development and nature of learning objects, meta-tagging standards and taxonomies, learning object repositories, learning object repository characteristics, and types of learning object repositories, with type examples. (Contains 1 table.)

  18. Excellence in Physics Education Award: SCALE-UP, Student Centered Active Learning Environment with Upside-down Pedagogies

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    2016-03-01

    The Student-Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) Project combines curricula and a specially-designed instructional space to enhance learning. SCALE-UP students practice communication and teamwork skills while performing activities that enhance their conceptual understanding and problem solving skills. This can be done with small or large classes and has been implemented at more than 250 institutions. Educational research indicates that students should collaborate on interesting tasks and be deeply involved with the material they are studying. SCALE-UP class time is spent primarily on "tangibles" and "ponderables": hands-on measurements/observations and interesting questions. There are also computer simulations (called "visibles") and hypothesis-driven labs. Students sit at tables designed to facilitate group interactions. Instructors circulate and engage in Socratic dialogues. The setting looks like a banquet hall, with lively interactions nearly all the time. Impressive learning gains have been measured at institutions across the US and internationally. This talk describes today's students, how lecturing got started, what happens in a SCALE-UP classroom, and how the approach has spread. The SCALE-UP project has greatly benefitted from numerous grants made by NSF and FIPSE to NCSU and other institutions.

  19. A new method to acquire 3-D images of a dental cast

    NASA Astrophysics Data System (ADS)

    Li, Zhongke; Yi, Yaxing; Zhu, Zhen; Li, Hua; Qin, Yongyuan

    2006-01-01

    This paper introduces our newly developed method to acquire three-dimensional images of a dental cast. A rotatable table, a laser knife, a mirror, a CCD camera, and a personal computer make up the three-dimensional data acquisition system. A dental cast is placed on the table; the mirror is installed beside the table; a linear laser is projected onto the dental cast; and the CCD camera is mounted above the dental cast, where it can photograph both the dental cast and its reflection in the mirror. While the table rotates, the camera records the shape of the laser streak projected on the dental cast and transmits the data to the computer. After the table completes one revolution, the computer processes the data and calculates the three-dimensional coordinates of the dental cast's surface. In the data processing procedure, artificial neural networks are employed to calibrate the lens distortion and map coordinates from the screen coordinate system to the world coordinate system. From the three-dimensional coordinates, the computer reconstructs the stereo image of the dental cast, which is essential for computer-aided diagnosis and treatment planning in orthodontics. In comparison with other systems in service, for example laser-beam three-dimensional scanning systems, this three-dimensional data acquisition system is characterized by: (a) speed, as it takes only 1 minute to scan a dental cast; (b) compactness, as the machinery is simple and compact; and (c) no blind zone, as a mirror is cleverly introduced to reduce the blind zone.

  20. Glaucoma: Symptoms, Diagnosis, Treatment and Latest Research

    MedlinePlus

    ... Feature: Glaucoma Glaucoma: Symptoms, Diagnosis, Treatment and Latest Research Past Issues / Fall 2009 Table of Contents Symptoms ... patients may need to keep taking drugs. Latest Research Researchers are studying the causes of glaucoma, looking ...

  1. Partition-based acquisition model for speed up navigated beta-probe surface imaging

    NASA Astrophysics Data System (ADS)

    Monge, Frédéric; Shakir, Dzhoshkun I.; Navab, Nassir; Jannin, Pierre

    2016-03-01

    Although gross total resection in low-grade glioma surgery leads to a better patient outcome, the in-vivo control of resection borders remains challenging. For this purpose, navigated beta-probe systems combined with 18F-based radiotracers, relying on activity distribution surface estimation, have been proposed to generate reconstructed images. The clinical relevance has been outlined by early studies where intraoperative functional information is leveraged, albeit with low spatial resolution in the reconstruction. To improve reconstruction quality, multiple acquisition models have been proposed. They involve the definition of an attenuation matrix modeling the physics of radiation detection. Yet, they require high computational power for efficient intraoperative use. To address the problem, we propose a new acquisition model called the Partition Model (PM), building on an existing model in which the coefficients of the matrix are taken from a look-up table (LUT). Our model is based upon the division of the LUT into averaged homogeneous values for assigning attenuation coefficients. We validated our model using in vitro datasets in which tumors and peri-tumoral tissues were simulated. We compared our acquisition model with the off-the-shelf LUT and the raw method. The acquisition models outperformed the raw method in terms of tumor contrast (7.97:1 mean T:B), though with difficulty for real-time use. Both acquisition models reached the same detection performance as the references (0.8 mean AUC and 0.77 mean NCC), while PM slightly improves the mean tumor contrast, up to 10.1:1 vs. 9.9:1 with the LUT model, and, more importantly, reduces the mean computation time by 7.5%. Our model gives a faster solution for intraoperative use of a navigated beta-probe surface imaging system, with improved image quality.
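
    The core idea of the Partition Model, replacing blocks of the look-up table with averaged homogeneous values, can be sketched in one dimension; block size and table contents below are illustrative, not the paper's parameters.

```python
# Sketch of the Partition Model idea: divide a LUT into fixed-size
# blocks and replace each block by its average, trading a little
# accuracy for a smaller table and cheaper look-ups.

def partition_lut(lut, block):
    """Average a 1-D LUT over consecutive blocks of `block` entries."""
    out = []
    for start in range(0, len(lut), block):
        chunk = lut[start:start + block]
        out.append(sum(chunk) / len(chunk))
    return out

def lookup(partitioned, index, block):
    """Fetch the averaged value covering the original LUT index."""
    return partitioned[index // block]
```
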

  2. Radiometric Cross-Calibration of GAOFEN-1 Wfv Cameras with LANDSAT-8 Oli and Modis Sensors Based on Radiation and Geometry Matching

    NASA Astrophysics Data System (ADS)

    Li, J.; Wu, Z.; Wei, X.; Zhang, Y.; Feng, F.; Guo, F.

    2018-04-01

    Cross-calibration has the advantages of high precision, low resource requirements, and simple implementation, and it has been widely used in recent years. The four wide-field-of-view (WFV) cameras on board the Gaofen-1 satellite provide high spatial resolution and wide combined coverage (4 × 200 km) but have no onboard calibration. In this paper, four-band radiometric cross-calibration coefficients for the WFV1 camera were obtained through radiation and geometry matching, taking the Landsat 8 OLI (Operational Land Imager) sensor as reference. The Scale Invariant Feature Transform (SIFT) feature detection method, together with a distance and included-angle weighting method, was introduced to correct misregistration of the WFV-OLI image pairs. A radiative transfer model was used to eliminate the spectral differences between the OLI sensor and the WFV1 camera through a spectral match factor (SMF). Because the near-infrared band of the WFV1 camera encompasses water vapor absorption bands, a look-up table (LUT) of SMF as a function of water vapor amount was established to estimate water vapor effects. A surface synchronization experiment was designed to verify the reliability of the cross-calibration coefficients, which appear to perform better than the official coefficients published by the China Centre for Resources Satellite Data and Application (CCRSDA).
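    The water-vapor-dependent SMF correction amounts to a one-dimensional table look-up with interpolation. The grid points and factor values below are invented for illustration; the paper's actual LUT comes from radiative transfer runs.

    ```python
    import numpy as np

    # Hypothetical LUT: spectral match factors tabulated at a few
    # column water vapor amounts (g/cm^2); values are illustrative.
    wv_grid = np.array([0.5, 1.0, 2.0, 4.0])
    smf_grid = np.array([0.98, 0.95, 0.90, 0.82])

    def smf_for_water_vapor(wv):
        """Linearly interpolate the spectral match factor for the
        observed water vapor amount (clamped at the table ends)."""
        return float(np.interp(wv, wv_grid, smf_grid))
    ```

    At calibration time the retrieved water vapor amount indexes this table, and the interpolated SMF scales the reference radiance before coefficients are fitted.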

  3. Recognition and Quantification of Area Damaged by Oligonychus Perseae in Avocado Leaves

    NASA Astrophysics Data System (ADS)

    Díaz, Gloria; Romero, Eduardo; Boyero, Juan R.; Malpica, Norberto

    Measurement of leaf damage is a basic tool in plant epidemiology research, but manually measuring the damaged area of a large number of leaves is subjective and time-consuming. We investigate the use of machine learning approaches for the objective segmentation and quantification of leaf area damaged by mites in avocado leaves. After extraction of the leaf veins, pixels are labeled with a look-up table generated using a Support Vector Machine with a polynomial kernel of degree 3 on the chrominance components of the YCrCb color space. Spatial information is included in the segmentation process by rating the degree of membership to a certain class and the homogeneity of the classified region. Results are presented on real images with different degrees of damage.
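    The look-up-table step can be sketched independently of the classifier: once any decision rule has been trained on the two chrominance components, its output over the full (Cr, Cb) grid is precomputed into a 256×256 table, so labelling a pixel reduces to one array access. A trivial threshold stands in here for the degree-3 polynomial SVM.

    ```python
    import numpy as np

    def build_lut(classify):
        """Precompute classifier decisions for every (Cr, Cb) pair."""
        cr, cb = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
        return classify(cr, cb).astype(np.uint8)

    # Stand-in decision rule (the paper trains a polynomial SVM here):
    # call a pixel "damaged" when Cr exceeds a fixed threshold.
    toy_rule = lambda cr, cb: cr > 140
    lut = build_lut(toy_rule)

    def label_pixels(cr_img, cb_img):
        """Label every pixel with a single LUT access per pixel."""
        return lut[cr_img, cb_img]
    ```

    The expensive kernel evaluations happen once at table-build time; per-image labelling is then a pure gather, which is what makes the approach practical for large leaf images.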

  4. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations as look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for people with certain visual impairments. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
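    The LUT-driven remapping can be sketched as follows: each output pixel's source coordinates are stored in a pair of tables, so an arbitrary, operator-selectable transformation becomes a single gather per pixel. The horizontal flip below is only a toy transformation for illustration.

    ```python
    import numpy as np

    def remap(image, map_y, map_x):
        """Coordinate remapping through precomputed look-up tables:
        each output pixel fetches the input pixel the LUTs address."""
        return image[map_y, map_x]

    # Toy transformation: a horizontal flip expressed as a LUT pair.
    h, w = 3, 4
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    flip_lut_y, flip_lut_x = yy, (w - 1) - xx

    img = np.arange(h * w).reshape(h, w)
    flipped = remap(img, flip_lut_y, flip_lut_x)
    ```

    Because the transformation lives entirely in the tables, swapping in a different mapping (polar warp, magnification, scrolling) changes no per-pixel code, which is the point of the patent's selectable-transform memory.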

  5. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
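    A Bayes table look-up decision rule of the kind applied at each node can be sketched as follows: the posterior-maximizing class is precomputed for every quantized value of the extracted feature, so run-time classification is a table index rather than a density evaluation. The Gaussian class statistics below are invented for illustration.

    ```python
    import numpy as np

    def bayes_lut(class_means, class_sigmas, priors, grid):
        """Tabulate the Bayes decision (argmax posterior) for each
        quantized value of a 1-D extracted feature."""
        g = grid[:, None]
        # Gaussian class-conditional likelihoods weighted by priors.
        scores = (priors
                  * np.exp(-0.5 * ((g - class_means) / class_sigmas) ** 2)
                  / class_sigmas)
        return scores.argmax(axis=1)

    # Two toy classes with unit variance and equal priors.
    grid = np.linspace(-4.0, 4.0, 81)
    lut = bayes_lut(np.array([-1.0, 1.0]),
                    np.array([1.0, 1.0]),
                    np.array([0.5, 0.5]), grid)
    ```

    In a decision tree each node would hold one such table for its own linear feature, so traversal costs one projection and one look-up per node.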

  6. Time-dependent phase error correction using digital waveform synthesis

    DOEpatents

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to negate the error during subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
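    The pre-distortion step can be sketched as a per-sample complex rotation drawn from a phase-error LUT; the quadratic droop model below is an invented stand-in for a measured error profile.

    ```python
    import numpy as np

    def predistort(waveform, phase_error_lut):
        """Apply the complementary phase from a per-sample LUT so the
        downstream droop-induced error cancels."""
        return waveform * np.exp(-1j * phase_error_lut)

    n = np.arange(8)
    # Hypothetical quadratic phase drift modelling amplifier droop.
    phase_err = 0.01 * n ** 2
    w = np.ones(8, dtype=complex)            # flat-phase reference waveform
    corrected = predistort(w, phase_err)
    # After the droop adds +phase_err downstream, the phase is flat again.
    received = corrected * np.exp(1j * phase_err)
    ```

    In hardware the same idea applies inside the waveform phase generator: the LUT entry for sample n is simply added to the commanded phase before synthesis.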

  7. PAM-4 delivery based on pre-distortion and CMMA equalization in a ROF system at 40 GHz

    NASA Astrophysics Data System (ADS)

    Zhou, Wen; Zhang, Jiao; Han, Xifeng; Kong, Miao; Gou, Pengqi

    2018-06-01

    In this paper, we propose PAM-4 delivery in a radio-over-fiber (ROF) system at 40 GHz. The PAM-4 transmission data are generated via look-up table (LUT) pre-distortion, then delivered over 25 km of single-mode fiber and a 0.5 m wireless link. At the receiver side, the received signal is processed with cascaded multi-modulus algorithm (CMMA) equalization to improve the decision precision. Our measured results show that 10 Gbaud PAM-4 transmission in the ROF system at 40 GHz can be achieved with a BER of 1.6 × 10^-3. To our knowledge, this is the first demonstration of LUT pre-distortion and CMMA equalization in an ROF system to improve signal performance.
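    A pattern-dependent LUT pre-distorter of the general kind described can be sketched as follows: the average error of the centre symbol is tabulated for every short transmit pattern and subtracted before transmission. The memory length and training procedure here are illustrative assumptions, not the paper's exact scheme.

    ```python
    import numpy as np

    LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])   # nominal PAM-4 levels

    def build_lut(tx_syms, rx_samples, memory=1):
        """Average the centre-symbol error for every
        (2*memory+1)-symbol transmit pattern."""
        sums, counts = {}, {}
        for i in range(memory, len(tx_syms) - memory):
            pat = tuple(tx_syms[i - memory:i + memory + 1])
            err = rx_samples[i] - LEVELS[tx_syms[i]]
            sums[pat] = sums.get(pat, 0.0) + err
            counts[pat] = counts.get(pat, 0) + 1
        return {p: sums[p] / counts[p] for p in sums}

    def predistort(tx_syms, lut, memory=1):
        """Subtract the tabulated pattern error before transmission."""
        out = LEVELS[np.asarray(tx_syms)].astype(float)
        for i in range(memory, len(tx_syms) - memory):
            pat = tuple(tx_syms[i - memory:i + memory + 1])
            out[i] -= lut.get(pat, 0.0)
        return out
    ```

    Training runs once on a known sequence; thereafter pre-distortion is a dictionary look-up per symbol, which is why the technique suits real-time transmitters.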

  8. Generation and transmission of DPSK signals using a directly modulated passive feedback laser.

    PubMed

    Karar, Abdullah S; Gao, Ying; Zhong, Kang Ping; Ke, Jian Hong; Cartledge, John C

    2012-12-10

    The generation of differential-phase-shift keying (DPSK) signals is demonstrated using a directly modulated passive feedback laser at 10.709 Gb/s, 14 Gb/s, and 16 Gb/s. The quality of the DPSK signals is assessed using both noncoherent detection at a bit rate of 10.709 Gb/s and coherent detection with digital signal processing involving a look-up table pattern-dependent distortion compensator. Transmission over a passive link consisting of 100 km of single-mode fiber at a bit rate of 10.709 Gb/s is achieved with a received optical power of -45 dBm at a bit-error ratio of 3.8 × 10^-3 and a 49 dB loss margin.

  9. New Arab social order: a study of the social impact of oil wealth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, S.E.

    1982-01-01

    The skyrocketing Arab oil revenues of the 1970s triggered socio-economic forces in the Arab world. Observers have studied the financial and geopolitical aspects of Arab oil, but have generally ignored the human and social repercussions of the oil wealth. This book challenges the commonly accepted view of the impact of manpower movements across the Arab wealth divide, looking at the new social formations, class structures, value systems, and social cleavages that have been emerging in both rich and poor Arab countries. These developments may add up to a silent social revolution, and are possibly a prelude to more overt tension, conflict, and political turmoil. 136 references, 13 figures, 39 tables.

  10. Numerical solution of Space Shuttle Orbiter flow field including real gas effects

    NASA Technical Reports Server (NTRS)

    Prabhu, D. K.; Tannehill, J. C.

    1984-01-01

    The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.
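    The table look-up procedure for equilibrium-air properties typically amounts to bilinear interpolation in a precomputed 2-D table. The sketch below shows only that interpolation step, with a toy table in place of real thermodynamic data.

    ```python
    import numpy as np

    def bilinear_lookup(table, x_grid, y_grid, x, y):
        """Bilinear interpolation in a 2-D property table, the usual
        alternative to curve fits for equilibrium-air properties."""
        i = np.searchsorted(x_grid, x) - 1
        j = np.searchsorted(y_grid, y) - 1
        i = min(max(i, 0), len(x_grid) - 2)
        j = min(max(j, 0), len(y_grid) - 2)
        tx = (x - x_grid[i]) / (x_grid[i + 1] - x_grid[i])
        ty = (y - y_grid[j]) / (y_grid[j + 1] - y_grid[j])
        return ((1 - tx) * (1 - ty) * table[i, j]
                + tx * (1 - ty) * table[i + 1, j]
                + (1 - tx) * ty * table[i, j + 1]
                + tx * ty * table[i + 1, j + 1])

    # Toy 2x2 "property table" on unit grids (illustrative values only);
    # a real table would index density and internal energy.
    x_grid = np.array([0.0, 1.0])
    y_grid = np.array([0.0, 1.0])
    table = np.array([[0.0, 1.0], [2.0, 3.0]])
    ```

    The trade-off the abstract alludes to is accuracy versus speed: curve fits cost a few arithmetic operations, while a table trades memory for a cheap, uniformly accurate look-up.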

  11. Minimalist design of a robust real-time quantum random number generator

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.

    2015-08-01

    We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as a simple, straightforward hardware implementation as a stand-alone module. As its source of randomness the device uses measurements of the time intervals between clicks of a single-photon detector. The raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high-speed, on-the-fly processing without the need for extensive computation. The overall performance of the device is around 1 random bit per detector click, resulting in a 1.2 Mbit/s generation rate in our implementation.
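    The abstract does not specify the extractor's contents, but the idea of realizing a deterministic extractor as a look-up table can be illustrated with the classic von Neumann debiasing rule, precomputed so that extraction is pure table indexing:

    ```python
    # A deterministic extractor realized as a look-up table: the
    # von Neumann rule on bit pairs (01 -> 0, 10 -> 1, 00/11 -> drop),
    # chosen here purely for illustration.
    VN_LUT = {(0, 1): [0], (1, 0): [1], (0, 0): [], (1, 1): []}

    def extract(bits):
        """Map each raw bit pair through the table; no arithmetic
        is done at extraction time."""
        out = []
        for i in range(0, len(bits) - 1, 2):
            out += VN_LUT[(bits[i], bits[i + 1])]
        return out
    ```

    In hardware the same structure scales up: wider input words index a larger precomputed table, keeping the per-click work constant regardless of how elaborate the underlying extractor function is.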

  12. True 3D display and BeoWulf connectivity

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Kostrzewski, Andrew A.; Kupiec, Stephen A.; Yu, Kevin H.; Aye, Tin M.; Savant, Gajendra D.

    2003-09-01

    We propose a novel true 3-D display based on holographic optics, called HAD (Holographic Autostereoscopic Display), or, in its latest generation, Holographic Inverse Look-around and Autostereoscopic Reality (HILAR). Unlike state-of-the-art 3-D systems, it does not require goggles, and it has a table-like 360° look-around capability. Novel 3-D image-rendering software, based on Beowulf PC-cluster hardware, is also discussed.

  13. DETAIL VIEW OF CLASSIFIER, TAILINGS LAUNDER TROUGH, LINE SHAFTS, AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF CLASSIFIER, TAILINGS LAUNDER TROUGH, LINE SHAFTS, AND CONCENTRATION TABLES, LOOKING SOUTHWEST. SLURRY EXITING THE BALL MILL WAS COLLECTED IN AN AMALGAMATION BOX (MISSING) FROM THE END OF THE MILL, AND INTRODUCED INTO THE CLASSIFIER. THE TAILINGS LAUNDER IS ON THE GROUND AT LOWER RIGHT. THE LINE SHAFTING ABOVE PROVIDED POWER TO THE CONCENTRATION TABLES BELOW AT CENTER RIGHT. - Gold Hill Mill, Warm Spring Canyon Road, Death Valley Junction, Inyo County, CA

  14. Nurses and global health: 'at the table' or 'on the menu'?

    PubMed

    Scammell, Janet

    2018-01-11

    Janet Scammell, Associate Professor (Nursing), Bournemouth University, looks at the role of the nursing workforce in shaping wider global health care, and the part nurse educators play in promoting international involvement.

  15. Juvenile Delinquency: An Introduction

    ERIC Educational Resources Information Center

    Smith, Carolyn A.

    2008-01-01

    Juvenile Delinquency is a term which is often inaccurately used. This article clarifies definitions, looks at prevalence, and explores the relationship between juvenile delinquency and mental health. Throughout, differences between males and females are explored. (Contains 1 table.)

  16. From Ramachandran Maps to Tertiary Structures of Proteins.

    PubMed

    DasGupta, Debarati; Kaushik, Rahul; Jayaram, B

    2015-08-27

    Sequence to structure of proteins is an unsolved problem. A possible coarse grained resolution to this entails specification of all the torsional (Φ, Ψ) angles along the backbone of the polypeptide chain. The Ramachandran map quite elegantly depicts the allowed conformational (Φ, Ψ) space of proteins which is still very large for the purposes of accurate structure generation. We have divided the allowed (Φ, Ψ) space in Ramachandran maps into 27 distinct conformations sufficient to regenerate a structure to within 5 Å from the native, at least for small proteins, thus reducing the structure prediction problem to a specification of an alphanumeric string, i.e., the amino acid sequence together with one of the 27 conformations preferred by each amino acid residue. This still theoretically results in 27^n conformations for a protein comprising "n" amino acids. We then investigated the spatial correlations at the two-residue (dipeptide) and three-residue (tripeptide) levels in what may be described as higher order Ramachandran maps, with the premise that the allowed conformational space starts to shrink as we introduce neighborhood effects. We found, for instance, for a tripeptide which potentially can exist in any of the 27^3 "allowed" conformations, three-fourths of these conformations are redundant to the 95% confidence level, suggesting sequence context dependent preferred conformations. We then created a look-up table of preferred conformations at the tripeptide level and correlated them with energetically favorable conformations. We found in particular that Boltzmann probabilities calculated from van der Waals energies for each conformation of tripeptides correlate well with the observed populations in the structural database (the average correlation coefficient is ∼0.8).
An alpha-numeric string and hence the tertiary structure can be generated for any sequence from the look-up table within minutes on a single processor and to a higher level of accuracy if secondary structure can be specified. We tested the methodology on 100 small proteins, and in 90% of the cases, a structure within 5 Å is recovered. We thus believe that the method presented here provides the missing link between Ramachandran maps and tertiary structures of proteins. A Web server to convert a tertiary structure to an alphanumeric string and to predict the tertiary structure from the sequence of a protein using the above methodology is created and made freely accessible at http://www.scfbio-iitd.res.in/software/proteomics/rm2ts.jsp.
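    The sequence-to-string step via the tripeptide table can be sketched as a dictionary look-up over sliding three-residue windows. The table entries below are invented placeholders; the real LUT is derived from database statistics.

    ```python
    # Miniature stand-in for the tripeptide look-up table: each
    # three-residue window maps to a preferred conformation index
    # (1..27); these entries are illustrative only.
    TRIPEPTIDE_LUT = {
        ("ALA", "GLY", "SER"): 14,
        ("GLY", "SER", "THR"): 3,
        ("SER", "THR", "ALA"): 22,
    }

    def conformation_string(sequence, default=1):
        """Assign each interior residue the conformation preferred by
        its tripeptide context, falling back to a default index."""
        out = []
        for i in range(1, len(sequence) - 1):
            key = tuple(sequence[i - 1:i + 2])
            out.append(TRIPEPTIDE_LUT.get(key, default))
        return out

    confs = conformation_string(["ALA", "GLY", "SER", "THR", "ALA"])
    ```

    Because each window is an independent look-up, the whole string is produced in a single linear pass, consistent with the minutes-scale single-processor timing the abstract reports.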

  17. Phantom-GRAPE: Numerical software library to accelerate collisionless N-body simulation with SIMD instruction set on x86 architecture

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Nitadori, Keigo; Okamoto, Takashi

    2013-02-01

    We have developed a numerical software library for collisionless N-body simulations named "Phantom-GRAPE", which greatly accelerates force calculations among particles by using a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). In our library, not only Newtonian forces, but also central forces with an arbitrary shape f(r) that has a finite cutoff radius r_cut (i.e., f(r) = 0 at r > r_cut), can be computed quickly. In computing such central forces with an arbitrary force shape f(r), we refer to a pre-calculated look-up table. We also present a new scheme to create the look-up table whose binning is optimal to keep good accuracy in computing forces and whose size is small enough to avoid cache misses. Using an Intel Core i7-2600 processor, we measure the performance of our library for both Newtonian forces and arbitrarily shaped central forces. In the case of Newtonian forces, we achieve 2×10^9 interactions per second with one processor core (or 75 GFLOPS if we count 38 operations per interaction), which is 20 times higher than the performance of an implementation without any explicit use of SIMD instructions, and 2 times higher than one using the SSE instructions. With four processor cores, we obtain a performance of 8×10^9 interactions per second (or 300 GFLOPS). In the case of arbitrarily shaped central forces, we can calculate 1×10^9 and 4×10^9 interactions per second with one and four processor cores, respectively. The performance with one processor core is 6 times and 2 times higher than that of implementations without any use of SIMD instructions and with the SSE instructions, respectively. These performances depend only weakly on the number of particles, irrespective of the force shape. This contrasts with the fact that the performance of force calculations accelerated by graphics processing units (GPUs) depends strongly on the number of particles.
This weak dependence of the performance on the number of particles is well suited to collisionless N-body simulations, since such simulations are usually performed with sophisticated N-body solvers such as Tree- and TreePM-methods combined with an individual timestep scheme. We conclude that collisionless N-body simulations accelerated with our library have a significant advantage over those accelerated by GPUs, especially in massively parallel environments.
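    The look-up-table scheme for an arbitrarily shaped cutoff force can be sketched as follows: f(r) is tabulated on a grid of squared distances, so each interaction needs only a bin index and no square root. The force shape, cutoff, and bin count below are illustrative choices, not the library's tuned values.

    ```python
    import numpy as np

    R_CUT = 2.0      # assumed cutoff radius
    NBINS = 1024     # assumed table size (small enough to stay in cache)

    # Tabulate an arbitrarily shaped central force f(r) on a grid of
    # r^2, so look-up avoids the square root in the inner loop.
    r2_grid = np.linspace(0.0, R_CUT ** 2, NBINS)

    def f(r):
        """Example force shape with a finite cutoff."""
        return np.where(r < R_CUT, (R_CUT - r) ** 2, 0.0)

    force_table = f(np.sqrt(r2_grid))

    def force_lookup(r2):
        """Nearest-bin look-up of f(r) from the squared distance."""
        idx = np.minimum((r2 / R_CUT ** 2 * (NBINS - 1)).astype(int),
                         NBINS - 1)
        return force_table[idx]
    ```

    The paper's contribution is choosing the binning so accuracy stays high while the table fits in cache; here the uniform-in-r^2 grid is just the simplest such choice.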

  18. An improved parallel fuzzy connected image segmentation method based on CUDA.

    PubMed

    Wang, Liansheng; Li, Dong; Huang, Shaohui

    2016-05-12

    The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very expensive. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step for the edge points, greatly enhancing the calculation accuracy. The improved method proceeds iteratively: in the first iteration, the affinity computation strategy is changed and a look-up table is employed to reduce memory usage; in the second iteration, the voxels miscalculated because of asynchronism are updated again. Three hepatic vascular CT sequences of different sizes were used in the experiments, each with three different seeds, and an NVIDIA Tesla C2075 was used to evaluate the improved method on these three data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the method corrects the edge-point calculation error of the original CUDA-kFOE. As the experiments demonstrate, the proposed method has a comparable time cost and fewer errors than the original CUDA-kFOE. In the future, we will focus on automatic acquisition and automatic processing methods.

  19. Public School Teacher Attrition and Mobility in the First Five Years: Results from the First through Fifth Waves of the 2007-08 Beginning Teacher Longitudinal Study. First Look. NCES 2015-337

    ERIC Educational Resources Information Center

    Gray, Lucinda; Taie, Soheyla

    2015-01-01

    This First Look report provides selected findings from all five waves of the Beginning Teacher Longitudinal Study (BTLS) along with data tables and methodological information. The BTLS follows a sample of public elementary and secondary school teachers who participated in the 2007-08 Schools and Staffing Survey (SASS), and whose first year of…

  20. A new Downscaling Approach for SMAP, SMOS and ASCAT by predicting sub-grid Soil Moisture Variability based on Soil Texture

    NASA Astrophysics Data System (ADS)

    Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.

    2017-12-01

    Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP, and ASCAT is a topic of active research. Soil texture heterogeneity is known to be one of the main sources of spatial variability in soil moisture. We have developed a method that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression derived from a stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile, based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high-resolution maps of basic soil properties such as soil texture and bulk density, the information needed to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict the sub-grid soil moisture variability for each SMOS, SMAP, and ASCAT grid cell based on the SoilGrids1km data set, and we provide a look-up table that gives the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling the coarse soil moisture observations of the SMOS, SMAP, and ASCAT missions. Downscaling SMAP data by a field-capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.
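    The deliverable described, a table giving the sub-grid standard deviation for any grid-cell mean, can be sketched as a per-texture interpolated look-up. The texture classes and values below are invented for illustration and do not reproduce the paper's closed-form expression.

    ```python
    import numpy as np

    # Hypothetical look-up table: soil moisture standard deviation as
    # a function of grid-cell mean moisture, one row per texture class.
    mean_grid = np.linspace(0.05, 0.45, 9)
    std_lut = {
        "sand": 0.020 + 0.10 * mean_grid * (0.45 - mean_grid),
        "loam": 0.015 + 0.08 * mean_grid * (0.45 - mean_grid),
    }

    def subgrid_std(texture, mean_sm):
        """Interpolate the tabulated standard deviation for a cell."""
        return float(np.interp(mean_sm, mean_grid, std_lut[texture]))
    ```

    A downscaling scheme would query this table once per satellite grid cell, pairing the retrieved mean with the texture class from a soil map such as SoilGrids1km.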

  1. Statistical classification of drug incidents due to look-alike sound-alike mix-ups.

    PubMed

    Wong, Zoie Shui Yee

    2016-06-01

    It has been recognised that medication names that look or sound similar are a cause of medication errors. This study builds statistical classifiers for identifying medication incidents due to look-alike sound-alike mix-ups. A total of 227 patient safety incident advisories related to medication were obtained from the Canadian Patient Safety Institute's Global Patient Safety Alerts system. Eight feature selection strategies based on frequent terms, frequent drug terms and constituent terms were performed. Statistical text classifiers based on logistic regression, support vector machines with linear, polynomial, radial-basis and sigmoid kernels, and decision trees were trained and tested. The models developed achieved an average accuracy above 0.8 across all the model settings. The receiver operating characteristic curves indicated that the classifiers performed reasonably well. The results obtained in this study suggest that statistical text classification can be a feasible method for identifying medication incidents due to look-alike sound-alike mix-ups based on a database of advisories from Global Patient Safety Alerts. © The Author(s) 2014.

  2. After High School, Then What? A Look at the Postsecondary Sorting-Out Process for American Youth

    DTIC Science & Technology

    1991-01-01

    then remained stable from 1984 to 1987. The two time series for women show slightly different patterns, in that the college entrance rates in Table 8...standing of the sorting-out process-the process by which young people with widely differing talents and ambitions choose among competing alternatives such...Table 3.1 These differences between the male and female rates underscore the huge gender gap in college enrollment patterns that existed in 1970. Men

  3. The Periodic Round Table (by Gary Katz)

    NASA Astrophysics Data System (ADS)

    Rodgers, Reviewed By Glen E.

    2000-02-01

    Unwrapping and lifting the Periodic Round Table out of its colorful box is an exciting experience for a professional chemist or a chemistry student. Touted as a "new way of looking at the elements", it is certainly that, at least at first blush. The "table" consists of four sets of two finely finished hardwood discs, each with elemental symbols and their corresponding atomic numbers pleasingly and symmetrically wood-burned into its faces. The four sets of two discs are 1 1/2, 3, 4 1/2, and 6 in. in diameter; each disc is 3/4 in. thick, so the entire "round table" stands 6 in. high and is 6 in. in diameter at its base. The eight beautifully polished discs are held together by center dowels that allow each to be rotated separately.

  4. Depth of interaction decoding of a continuous crystal detector module.

    PubMed

    Ling, T; Lewellen, T K; Miyaoka, R S

    2007-04-21

    We present a clustering method to extract depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small-animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUTs) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses an ML-based LUT search algorithm and two-dimensional mean-variance LUTs of the light response of each photomultiplier channel with respect to different gamma-ray interaction positions, the position of interaction and the DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach, and an experiment using our cMiCE detector was designed to evaluate the performance. Clustering into two and four DOI regions was applied to the simulated data; two DOI regions were used for the experimental data. The misclassification rate for the simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement in the detector spatial resolution, especially for the corner region of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or the detector front-end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.

  5. Reviews Book: Nucleus Book: The Wonderful World of Relativity Book: Head Shot Book: Cosmos Close-Up Places to Visit: Physics DemoLab Book: Quarks, Leptons and the Big Bang EBook: Shooting Stars Equipment: Victor 70C USB Digital Multimeter Web Watch

    NASA Astrophysics Data System (ADS)

    2012-09-01

    WE RECOMMEND Nucleus: A Trip into the Heart of Matter A coffee-table book for everyone to dip into and learn from The Wonderful World of Relativity A charming, stand-out introduction to relativity The Physics DemoLab, National University of Singapore A treasure trove of physics for hands-on science experiences Quarks, Leptons and the Big Bang Perfect to polish up on particle physics for older students Victor 70C USB Digital Multimeter Equipment impresses for usability and value WORTH A LOOK Cosmos Close-Up Weighty tour of the galaxy that would make a good display Shooting Stars Encourage students to try astrophotography with this ebook HANDLE WITH CARE Head Shot: The Science Behind the JFK Assassination Exploration of the science behind the crime fails to impress WEB WATCH App-lied science for education: a selection of free Android apps are reviewed and iPhone app options are listed

  6. A special look at New Jersey's transportation system

    DOT National Transportation Integrated Search

    2000-08-01

    This document is a photographic presentation of New Jersey's transportation system. Its table of contents lists the following 8 subject headings: 1 Bridges, 2. Roadsides, 3. Rail Stations, 4. Non-motor Transport, 5. Nature, 6. History, 7. Housekeepin...

  7. Two-dimensional interpreter for field-reversed configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinhauer, Loren, E-mail: lstein@uw.edu

    2014-08-15

    An interpretive method is developed for extracting details of the fully two-dimensional (2D) “internal” structure of field-reversed configurations (FRC) from common diagnostics. The challenge is that only external and “gross” diagnostics are routinely available in FRC experiments. Inferring such critical quantities as the poloidal flux and the particle inventory has commonly relied on a theoretical construct based on a quasi-one-dimensional approximation. Such inferences sometimes differ markedly from the more accurate, fully 2D reconstructions of equilibria. An interpreter based on a fully 2D reconstruction is needed to enable realistic within-the-shot tracking of evolving equilibrium properties. Presented here is a flexible equilibrium reconstruction with which an extensive database of equilibria was constructed. An automated interpreter then uses this database as a look-up table to extract evolving properties. This tool is applied to data from the FRC facility at Tri Alpha Energy. It yields surprising results at several points, such as the inferences that the local β (plasma pressure/external magnetic pressure) of the plasma climbs well above unity and that the poloidal flux loss time is somewhat longer than previously thought, both of which arise from the full two-dimensionality of FRCs.

  8. Side-emitting fiber optic position sensor

    DOEpatents

    Weiss, Jonathan D [Albuquerque, NM

    2008-02-12

    A side-emitting fiber optic position sensor and method of determining an unknown position of an object by using the sensor. In one embodiment, a concentrated beam of light illuminates the side of a side-emitting fiber optic at an unknown axial position along the fiber's length. Some of this side-illuminated light is in-scattered into the fiber and captured. As the captured light is guided down the fiber, its intensity decreases due to loss from side-emission away from the fiber and from bulk absorption within the fiber. By measuring the intensity of light emitted from one (or both) ends of the fiber with a photodetector(s), the axial position of the light source is determined by comparing the photodetector's signal to a calibrated response curve, a look-up table, or a mathematical model. Alternatively, the side-emitting fiber is illuminated at one end, while a photodetector measures the intensity of light emitted from the side of the fiber at an unknown position. As the photodetector moves further away from the illuminated end, the detector's signal strength decreases due to loss from side-emission and/or bulk absorption. As before, the detector's signal is correlated to a unique position along the fiber.
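    The two-detector variant admits a compact closed-form inversion if one assumes purely exponential attenuation toward each end; the loss coefficient and fiber length below are invented values, and a real sensor would use the calibrated response curve or look-up table instead.

    ```python
    import math

    ALPHA = 0.05    # assumed total loss coefficient per metre
    LENGTH = 10.0   # assumed fibre length in metres

    def position_from_ends(i_left, i_right):
        """Infer the illumination point from the two end intensities.
        With I_left ~ exp(-ALPHA*x) and I_right ~ exp(-ALPHA*(L-x)),
        the unknown source strength cancels in the ratio."""
        return (LENGTH - math.log(i_left / i_right) / ALPHA) / 2.0
    ```

    Taking the ratio of the two end signals is what makes the measurement insensitive to the source brightness; only the attenuation model (or its calibrated table) enters the inversion.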

  9. The SAMI Galaxy Survey: A prototype data archive for Big Science exploration

    NASA Astrophysics Data System (ADS)

    Konstantopoulos, I. S.; Green, A. W.; Foster, C.; Scott, N.; Allen, J. T.; Fogarty, L. M. R.; Lorente, N. P. F.; Sweet, S. M.; Hopkins, A. M.; Bland-Hawthorn, J.; Bryant, J. J.; Croom, S. M.; Goodwin, M.; Lawrence, J. S.; Owers, M. S.; Richards, S. N.

    2015-11-01

    We describe the data archive and database for the SAMI Galaxy Survey, an ongoing observational program that will cover ≈3400 galaxies with integral-field (spatially resolved) spectroscopy. Amounting to some three million spectra, this is the largest sample of its kind to date. The data archive and built-in query engine use the versatile Hierarchical Data Format (HDF5), which obviates the need for external metadata tables and hence the setup and maintenance overhead they carry. The code produces simple outputs that can easily be translated into plots and tables, and the combination of these tools makes for a light system that can handle heavy data. This article acts as a contextual companion to the SAMI Survey Database source code repository, samiDB, which is freely available online and written entirely in Python. We also discuss the decisions related to the selection of tools and the creation of data visualisation modules. It is our aim that the work presented in this article (descriptions, rationale, and source code) will be of use to scientists looking to set up a maintenance-light data archive for a Big Science data load.

  10. A novel approach: chemical relational databases, and the role of the ISSCAN database on assessing chemical carcinogenicity.

    PubMed

    Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae

    2008-01-01

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originating from structure-activity relationship science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.

  11. On the design of a radix-10 online floating-point multiplier

    NASA Astrophysics Data System (ADS)

    McIlhenny, Robert D.; Ercegovac, Milos D.

    2009-08-01

    This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up tables) as well as the cycle time and total latency. The routing delay, which was not optimized, is the major component of the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.

  12. High-speed digital signal normalization for feature identification

    NASA Technical Reports Server (NTRS)

    Ortiz, J. A.; Meredith, B. D.

    1983-01-01

    A design approach for high speed normalization of digital signals was developed. A reciprocal look-up table technique is employed, in which a digital value is mapped to its reciprocal via a high-speed memory. This reciprocal is then multiplied with an input signal to obtain the normalized result. Normalization considerably improves the accuracy of certain feature identification algorithms. By using the concept of pipelining, the multispectral sensor data processing rate is limited only by the speed of the multiplier. The breadboard system was found to operate at an execution rate of five million normalizations per second. This design features high precision, a reduced hardware complexity, high flexibility, and expandability, which are very important considerations for spaceborne applications. It also accomplishes a high speed normalization rate essential for real time data processing.
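
    The reciprocal-table idea above can be sketched as follows. The 8-bit divisor range and 16-bit fixed-point scale are assumptions for illustration; the breadboard's actual word widths are not stated in the abstract.

```python
SCALE = 1 << 16                       # fixed-point scale of stored reciprocals

# Precompute ceil(SCALE / d) for every possible 8-bit divisor (0 is unused).
# The ceiling keeps the truncated product exact for 8-bit numerators.
RECIP = [0] + [-(-SCALE // d) for d in range(1, 256)]

def normalize(sample, divisor):
    """sample / divisor via one table look-up and one multiply (no divider)."""
    return (sample * RECIP[divisor]) >> 16

print(normalize(200, 50))   # → 4
```

    In hardware the table is a small ROM/RAM and the shift is free wiring, so the throughput is set by the multiplier alone, as the abstract notes.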

  13. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  14. New real-time algorithms for arbitrary, high precision function generation with applications to acoustic transducer excitation

    NASA Astrophysics Data System (ADS)

    Gaydecki, P.

    2009-07-01

    A system is described for the design, downloading and execution of arbitrary functions, intended for use with acoustic and low-frequency ultrasonic transducers in condition monitoring and materials testing applications. The instrumentation comprises a software design tool and a powerful real-time digital signal processor unit, operating at 580 million multiplication-accumulations per second (MMACs). The embedded firmware employs both an established look-up table approach and a new function interpolation technique to generate the real-time signals with very high precision and flexibility. Using total harmonic distortion (THD) analysis, the purity of the waveforms has been compared with those generated using traditional analogue function generators; this analysis has confirmed that the new instrument has a consistently superior signal-to-noise ratio.

  15. 1984-1995 Evolution of Stratospheric Aerosol Size, Surface Area, and Volume Derived by Combining SAGE II and CLAES Extinction Measurements

    NASA Technical Reports Server (NTRS)

    Russell, Philip B.; Bauman, Jill J.

    2000-01-01

    This SAGE II Science Team task focuses on the development of a multi-wavelength, multi-sensor Look-Up-Table (LUT) algorithm for retrieving information about stratospheric aerosols from global satellite-based observations of particulate extinction. The LUT algorithm combines the 4-wavelength SAGE II extinction measurements (0.385 <= lambda <= 1.02 microns) with the 7.96 micron and 12.82 micron extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument, thus increasing the information content available from either sensor alone. The algorithm uses the SAGE II/CLAES composite spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R(sub eff), surface area S, volume V and size distribution width sigma(sub g).

  16. Implementing asynchronous collective operations in a multi-node processing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip

    A method, system, and computer program product are disclosed for implementing an asynchronous collective operation in a multi-node data processing system. In one embodiment, the method comprises sending data to a plurality of nodes in the data processing system, broadcasting a remote get to the plurality of nodes, and using this remote get to implement asynchronous collective operations on the data by the plurality of nodes. In one embodiment, each of the nodes performs only one task in the asynchronous operations, and each node sets up a base address table with an entry for a base address of a memory buffer associated with said each node. In another embodiment, each of the nodes performs a plurality of tasks in said collective operations, and each task of each node sets up a base address table with an entry for a base address of a memory buffer associated with the task.

  17. Cross-Matching Source Observations from the Palomar Transient Factory (PTF)

    NASA Astrophysics Data System (ADS)

    Laher, Russ; Grillmair, C.; Surace, J.; Monkewitz, S.; Jackson, E.

    2009-01-01

    Over the four-year lifetime of the PTF project, approximately 40 billion instances of astronomical-source observations will be extracted from the image data. The instances will correspond to the same astronomical objects being observed at roughly 25-50 different times, and so a very large catalog containing important object-variability information will be the chief PTF product. Organizing astronomical-source catalogs is conventionally done by dividing the catalog into declination zones and sorting by right ascension within each zone (e.g., the USNO-A star catalog), in order to facilitate catalog searches. This method was reincarnated as the "zones" algorithm in a SQL-Server database implementation (Szalay et al., MSR-TR-2004-32), with corrections given by Gray et al. (MSR-TR-2006-52). The primary advantage of this implementation is that all of the work is done entirely on the database server and client/server communication is eliminated. We implemented the methods outlined in Gray et al. for a PostgreSQL database. We programmed the methods as database functions in PL/pgSQL procedural language. The cross-matching is currently based on source positions, but we intend to extend it to use both positions and positional uncertainties to form a chi-square statistic for optimal thresholding. The database design includes three main tables, plus a handful of internal tables. The Sources table stores the SExtractor source extractions taken at various times; the MergedSources table stores statistics about the astronomical objects, which are the result of cross-matching records in the Sources table; and the Merges table associates cross-matched primary keys in the Sources table with primary keys in the MergedSources table. Besides judicious database indexing, we have also internally partitioned the Sources table by declination zone, in order to speed up the population of Sources records and make the database more manageable.
The catalog will be accessible to the public after the proprietary period through IRSA (irsa.ipac.caltech.edu).
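
    A minimal sketch of the zones idea described above (in Python rather than PL/pgSQL): sources are bucketed into fixed-height declination zones, each zone is sorted by right ascension, and a probe position is matched against neighbouring zones only. The zone height, match radius, and flat-sky box test are illustrative simplifications of the real spherical geometry.

```python
import bisect
from collections import defaultdict

ZONE_HEIGHT = 0.5   # degrees of declination per zone (assumed)

def build_zones(sources):
    """sources: iterable of (ra, dec) degrees -> {zone_id: RA-sorted list}."""
    zones = defaultdict(list)
    for ra, dec in sources:
        zones[int(dec // ZONE_HEIGHT)].append((ra, dec))
    for rows in zones.values():
        rows.sort()
    return zones

def match(zones, ra, dec, radius):
    """Candidates within a small-angle box around (ra, dec), flat-sky approx."""
    hits = []
    z0 = int(dec // ZONE_HEIGHT)
    span = int(radius // ZONE_HEIGHT) + 1
    for z in range(z0 - span, z0 + span + 1):
        rows = zones.get(z, [])
        lo = bisect.bisect_left(rows, (ra - radius, -91.0))
        hi = bisect.bisect_right(rows, (ra + radius, 91.0))
        hits.extend(p for p in rows[lo:hi] if abs(p[1] - dec) <= radius)
    return hits

zones = build_zones([(10.0, 1.2), (10.001, 1.201), (50.0, -30.0)])
hits = match(zones, 10.0, 1.2, radius=0.01)    # finds the two nearby sources
```

    The point of the zoning is that each probe touches only a few RA-sorted runs instead of scanning the whole catalog, which is what makes the server-side SQL implementation fast.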

  18. The conformal characters

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Troost, Jan

    2018-04-01

    We revisit the study of the multiplets of the conformal algebra in any dimension. The theory of highest weight representations is reviewed in the context of the Bernstein-Gelfand-Gelfand category of modules. The Kazhdan-Lusztig polynomials code the relation between the Verma modules and the irreducible modules in the category and are the key to the characters of the conformal multiplets (whether finite dimensional, infinite dimensional, unitary or non-unitary). We discuss the representation theory and review in full generality which representations are unitarizable. The mathematical theory that allows for both the general treatment of characters and the full analysis of unitarity is made accessible. A good understanding of the mathematics of conformal multiplets renders the treatment of all highest weight representations in any dimension uniform, and provides an overarching comprehension of case-by-case results. Unitary highest weight representations and their characters are classified and computed in terms of data associated to cosets of the Weyl group of the conformal algebra. An executive summary is provided, as well as look-up tables up to and including rank four.

  19. A New Methodology for Vibration Error Compensation of Optical Encoders

    PubMed Central

    Lopez, Jesus; Artes, Mariano

    2012-01-01

    Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in the position accuracy as the measurement signals stand apart from the ideal conditions. In case the encoder is working under vibrations, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to the graduation, system, and installation errors. Behavior improvement can be based on different techniques trying to compensate the error from measurement signals processing. In this work a new “ad hoc” methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, giving as a result a compensation procedure in which a higher accuracy of the sensor is obtained. PMID:22666067
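
    A simplified sketch of the compensation idea: under gain and offset errors, the quadrature signals trace an offset, scaled Lissajous ellipse, and estimating each channel's offset and amplitude from its extremes restores a unit circle before the interpolated phase is computed. This handles axis-aligned gain/offset errors only; the paper's fitting also covers phase error, and the residual correction goes into a look-up table.

```python
import math

def compensate(sig_a, sig_b):
    """Estimate each channel's offset (ellipse centre) and amplitude from its
    extremes over one period, then rescale to a unit circle."""
    def norm(sig):
        offset = (max(sig) + min(sig)) / 2.0
        amplitude = (max(sig) - min(sig)) / 2.0
        return [(v - offset) / amplitude for v in sig]
    return norm(sig_a), norm(sig_b)

def phases(sig_a, sig_b):
    """Interpolated phase (position within one grating period) per sample."""
    return [math.atan2(b, a) for a, b in zip(sig_a, sig_b)]

# Quadrature signals over one full period, distorted by gain and offset errors.
ts = [2.0 * math.pi * (k + 0.5) / 360.0 for k in range(360)]
raw_a = [1.3 * math.cos(t) + 0.2 for t in ts]
raw_b = [0.8 * math.sin(t) - 0.1 for t in ts]
phase = phases(*compensate(raw_a, raw_b))      # ≈ true phase at every sample
```
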

  20. Effect of black carbon on dust property retrievals from satellite observations

    NASA Astrophysics Data System (ADS)

    Lin, Tang-Huang; Yang, Ping; Yi, Bingqi

    2013-01-01

    The effect of black carbon on the optical properties of polluted mineral dust is studied from a satellite remote-sensing perspective. By including the auxiliary data of surface reflectivity and aerosol mixing weight, the optical properties of mineral dust, or more specifically, the aerosol optical depth (AOD) and single-scattering albedo (SSA), can be retrieved with improved accuracy. Precomputed look-up tables based on the principle of the Deep Blue algorithm are utilized in the retrieval. The mean differences between the retrieved results and the corresponding ground-based measurements are smaller than 1% for both AOD and SSA in the case of pure dust. However, the retrievals can be underestimated by as much as 11.9% for AOD and overestimated by up to 4.1% for SSA in the case of polluted dust with an estimated 10% (in terms of the number-density mixing ratio) of soot aggregates if the black carbon effect on dust aerosols is neglected.
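
    Retrievals of this kind invert a precomputed table: simulated top-of-atmosphere (TOA) reflectance is tabulated against candidate AODs, and a measurement is mapped back through the table. A generic sketch, with invented reflectance values rather than Deep Blue's actual tables:

```python
import bisect

# LUT nodes: (AOD, simulated TOA reflectance), monotone in AOD. Values invented.
LUT = [(0.0, 0.050), (0.25, 0.080), (0.5, 0.105), (1.0, 0.150), (2.0, 0.220)]

def retrieve_aod(reflectance):
    """Piecewise-linear inversion of the LUT; clamps outside the table."""
    refs = [r for _, r in LUT]
    k = bisect.bisect_left(refs, reflectance)
    if k == 0:
        return LUT[0][0]
    if k == len(LUT):
        return LUT[-1][0]
    (a_lo, r_lo), (a_hi, r_hi) = LUT[k - 1], LUT[k]
    return a_lo + (reflectance - r_lo) / (r_hi - r_lo) * (a_hi - a_lo)
```

    In the real algorithm a separate table exists per geometry, wavelength, and aerosol model, and the surface reflectivity and mixing-weight inputs discussed above select and adjust the table before inversion.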

  1. Iterative retrieval of surface emissivity and temperature for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1997-11-01

    The central problem of temperature-emissivity separation is that we obtain N spectral measurements of radiance and need to find N + 1 unknowns (N emissivities and one temperature). To solve this problem in the presence of the atmosphere we need to find even more unknowns: N spectral transmissions τ_atmo(λ), N up-welling path radiances L↑_path(λ), and N down-welling path radiances L↓_path(λ). Fortunately there are radiative transfer codes such as MODTRAN 3 and FASCODE available to get good estimates of τ_atmo(λ), L↑_path(λ), and L↓_path(λ) on the order of a few percent. With the growing use of hyperspectral imagers, e.g. AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. We believe that this will enable us to get around using the present temperature-emissivity separation (TES) algorithms, using methods which take advantage of the many channels available in hyperspectral imagers. The first idea we had is to take advantage of the simple fact that a typical surface emissivity spectrum is rather smooth compared to spectral features introduced by the atmosphere. Thus iterative solution techniques can be devised which retrieve emissivity spectra ε based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. By varying the surface temperature over a small range, a series of emissivity spectra are calculated. The one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
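
    The smoothness criterion in the last paragraph can be sketched as a brute-force sweep. Planck's law stands in for the full radiative transfer (no atmospheric terms), and the wavelength grid, temperature range, and smoothness metric below are illustrative choices, not the paper's.

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(wavelength_um, temp_k):
    """Blackbody spectral radiance (Planck's law), arbitrary overall units."""
    lam = wavelength_um * 1e-6
    return (2.0 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * temp_k)) - 1.0)

def roughness(spectrum):
    """Smoothness metric: sum of squared second differences."""
    return sum((spectrum[i - 1] - 2.0 * spectrum[i] + spectrum[i + 1]) ** 2
               for i in range(1, len(spectrum) - 1))

def retrieve(wavelengths_um, radiance, t_min, t_max, steps=100):
    """Sweep trial temperatures; keep the one whose implied emissivity
    spectrum (radiance / blackbody radiance) is smoothest."""
    best = None
    for s in range(steps + 1):
        t = t_min + (t_max - t_min) * s / steps
        eps = [L / planck(w, t) for w, L in zip(wavelengths_um, radiance)]
        score = roughness(eps)
        if best is None or score < best[0]:
            best = (score, t, eps)
    return best[1], best[2]

# Synthetic case: a flat emissivity of 0.95 at 300 K is recovered exactly,
# because any wrong trial temperature bends the implied emissivity spectrum.
waves = [8.0, 9.0, 10.0, 11.0, 12.0]                 # µm, thermal IR
radiance = [0.95 * planck(w, 300.0) for w in waves]
temp, eps = retrieve(waves, radiance, 290.0, 310.0)
```
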

  2. Positional accuracy and geographic bias of four methods of geocoding in epidemiologic research.

    PubMed

    Schootman, Mario; Sterling, David A; Struthers, James; Yan, Yan; Laboube, Ted; Emo, Brett; Higgs, Gary

    2007-06-01

    We examined the geographic bias of four methods of geocoding addresses using ArcGIS, commercial firm, SAS/GIS, and aerial photography. We compared "point-in-polygon" (ArcGIS, commercial firm, and aerial photography) and the "look-up table" method (SAS/GIS) to allocate addresses to census geography, particularly as it relates to census-based poverty rates. We randomly selected 299 addresses of children treated for asthma at an urban emergency department (1999-2001). The coordinates of the building address side door were obtained by constant offset based on ArcGIS and a commercial firm and true ground location based on aerial photography. Coordinates were available for 261 addresses across all methods. For 24% to 30% of geocoded road/door coordinates the positional error was 51 meters or greater, which was similar across geocoding methods. The mean bearing was -26.8 degrees for the vector of coordinates based on aerial photography and ArcGIS and 8.5 degrees for the vector based on aerial photography and the commercial firm (p < 0.0001). ArcGIS and the commercial firm performed very well relative to SAS/GIS in terms of allocation to census geography. For 20%, the door location based on aerial photography was assigned to a different block group compared to SAS/GIS. The block group poverty rate varied at least two standard deviations for 6% to 7% of addresses. We found important differences in distance and bearing between geocoding relative to aerial photography. Allocation of locations based on aerial photography to census-based geographic areas could lead to substantial errors.

  3. Design and fabrication of a multi-layered solid dynamic phantom: validation platform on methods for reducing scalp-hemodynamic effect from fNIRS signal

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Tanikawa, Yukari; Yamada, Toru

    2017-02-01

    Scalp hemodynamics contaminates the signals from functional near-infrared spectroscopy (fNIRS). Numerous methods have been proposed to reduce this contamination, but no gold standard has yet been established. Here we constructed a multi-layered solid phantom to experimentally validate such methods. This phantom comprises four layers corresponding to epidermides, dermis/skull (upper dynamic layer), cerebrospinal fluid, and brain (lower dynamic layer); the thicknesses of these layers were 0.3, 10, 1, and 50 mm, respectively. The epidermides and cerebrospinal fluid layers were made of polystyrene and an acrylic board, respectively. Both dynamic layers were made of epoxy resin. An infrared dye and titanium dioxide were mixed to match their absorption and reduced scattering coefficients (μa and μs', respectively) with those of biological tissues. The bases of both upper and lower dynamic layers have a slot for laterally sliding a bar that holds an absorber piece. This bar was laterally moved using a programmable stepping motor. The optical properties of the dynamic layers were estimated based on the transmittance and reflectance using the Monte Carlo look-up table method. The estimated coefficients for the lower and upper dynamic layers approximately coincided with those for biological tissues. A preliminary fNIRS measurement using the fabricated phantom confirmed that the signals from the brain layer were recovered when those from the dermis layer were completely removed from their mixture, indicating that the phantom is useful for evaluating methods for reducing contamination of the signals from the scalp.

  4. Numerical Model Sensitivity to Heterogeneous Satellite Derived Vegetation Roughness

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael; Eastman, Joseph; Borak, Jordan

    2011-01-01

    The sensitivity of a mesoscale weather prediction model to a 1 km satellite-based vegetation roughness initialization is investigated for a domain within the south central United States. Three different roughness databases are employed: i) a control or standard lookup table roughness that is a function only of land cover type, ii) a spatially heterogeneous roughness database, specific to the domain, that was previously derived using a physically based procedure and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery, and iii) a MODIS climatologic roughness database that like (i) is a function only of land cover type, but possesses domain specific mean values from (ii). The model used is the Weather Research and Forecast Model (WRF) coupled to the Community Land Model within the Land Information System (LIS). For each simulation, a statistical comparison is made between modeled results and ground observations within a domain including Oklahoma, Eastern Arkansas, and Northwest Louisiana during a 4-day period within IHOP 2002. Sensitivity analysis compares the impact of the three roughness initializations on time-series temperature, precipitation probability of detection (POD), average wind speed, boundary layer height, and turbulent kinetic energy (TKE). Overall, the results indicate that, for the current investigation, replacement of the standard look-up table values with the satellite-derived values statistically improves model performance for most observed variables. Such natural roughness heterogeneity enhances the surface wind speed, PBL height and TKE production up to 10 percent, with a lesser effect over grassland, and greater effect over mixed land cover domains.

  5. Self-improving Inference System to Support the Intelligence Preparation of the Battlefield: Requirements, State of the Art, and Prototypes

    DTIC Science & Technology

    2014-12-01

    [Fragments of Table 1, mapping IPB/IPOE steps to functionalities such as establishing the limits of the areas of interest, determining intelligence and information gaps, describing the impact of the battlespace, and identifying critical gaps; and of Table 2, mapping tools to functionalities.] Looking at Tables 1 and 2, it would seem that taking on requirements from IPB/IPOE Steps 3 and 4, although possibly much more challenging, is likely to yield more useful results

  6. What is the Uncertainty in MODIS Aerosol Optical Depth in the Vicinity of Clouds?

    NASA Technical Reports Server (NTRS)

    Patadia, Falguni; Levy, Rob; Mattoo, Shana

    2017-01-01

    The MODIS dark-target (DT) algorithm retrieves aerosol optical depth (AOD) using a look-up table (LUT) approach. Global comparison of AOD (Collection 6) with ground-based sun photometers gives an Estimated Error (EE) of +/-(0.04 + 10%) over ocean. However, EE does not represent per-retrieval uncertainty. For retrievals that are biased high compared to AERONET, we aim here to closely examine the contribution of biases due to the presence of clouds and the per-pixel retrieval uncertainty. We have characterized AOD uncertainty at 550 nm due to the standard deviation of reflectance in the 10 km retrieval region, and uncertainty related to gas (H2O, O3) absorption, surface albedo, and aerosol models. The uncertainty in retrieved AOD seems to lie within the estimated over-ocean error envelope of +/-(0.03 + 10%). Regions between broken clouds tend to have higher uncertainty. Compared to C6 AOD, a retrieval omitting observations in the vicinity of clouds (<= 1 km) is biased by about +/- 0.05. For homogeneous aerosol distributions, clear-sky retrievals show near-zero bias. A close look at per-pixel reflectance histograms suggests that retrieval using median reflectance values may be possible.

  7. The Endangered Species Act and a Deeper Look at Extinction.

    ERIC Educational Resources Information Center

    Borowski, John F.

    1992-01-01

    Discusses the importance of saving species and dispels myths surrounding the endangered species act as background to three student activities that include a round table debate, writing to congresspeople, and a research project suggestion. Lists reference materials for endangered species. (MCO)

  8. A Look at the U.S. Commercial Building Stock: Results from EIA's 2012 Commercial Buildings Energy Consumption Survey (CBECS)

    EIA Publications

    2015-01-01

    The 2012 CBECS collected building characteristics data from more than 6,700 U.S. commercial buildings. This report highlights findings from the survey, with details presented in the Building Characteristics tables.

  9. Two Experimental Approaches of Looking at Buoyancy

    ERIC Educational Resources Information Center

    Moreira, J. Agostinho; Almeida, A.; Carvalho, P. Simeao

    2013-01-01

    In our teaching practice, we find that a large number of first-year university physics and chemistry students exhibit some difficulties with applying Newton's third law to fluids because they think fluids do not react to forces. (Contains 1 table and 3 figures.)

  10. Expedition 19 Crew Training

    NASA Image and Video Library

    2009-03-20

    Expedition 19 Commander Gennady I. Padalka, left, and Flight Engineer Michael R. Barratt listen to their mp3 players as a medical doctor looks on during the tilt table training at the Cosmonaut Hotel, Saturday, March 21, 2009 in Baikonur, Kazakhstan.(Photo Credit: NASA/Bill Ingalls)

  11. 14. DETAIL OF INCLINED CONVEYOR RAIL AT HEAD OF SKINNING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. DETAIL OF INCLINED CONVEYOR RAIL AT HEAD OF SKINNING TABLE; HEADS WERE REMOVED IN OPEN AREA AT LOWER RIGHT; LOOKING TOWARD NORTHWEST - Rath Packing Company, Beef Killing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA

  12. Global root zone storage capacity from satellite-based evaporation

    NASA Astrophysics Data System (ADS)

    Wang-Erlandsson, Lan; Bastiaanssen, Wim G. M.; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel B.; van Dijk, Albert I. J. M.; Guerschman, Juan P.; Keys, Patrick W.; Gordon, Line J.; Savenije, Hubert H. G.

    2016-04-01

    This study presents an "Earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale independent. In contrast to a traditional look-up table approach, our method captures the variability in root zone storage capacity within land cover types, including in rainforests where direct measurements of root depths otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM (Simple Terrestrial Evaporation to Atmosphere Model) improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. Our results suggest that several forest types are able to create a large storage to buffer for severe droughts (with a very long return period), in contrast to, for example, savannahs and woody savannahs (medium length return period), as well as grasslands, shrublands, and croplands (very short return period). The presented method to estimate root zone storage capacity eliminates the need for poor resolution soil and rooting depth data that form a limitation for achieving progress in the global land surface modelling community.
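
    The mass-balance core of this approach can be sketched as a running-deficit computation: the storage capacity is the largest cumulative excess of evaporation over precipitation that the root zone must bridge. The monthly numbers below are invented for illustration, and the paper's full method additionally scales the estimate by drought return periods per land cover type.

```python
def root_zone_storage_capacity(precip_mm, evap_mm):
    """Largest running deficit of (E - P); the deficit resets to zero
    whenever precipitation catches up with evaporation."""
    deficit = 0.0
    capacity = 0.0
    for p, e in zip(precip_mm, evap_mm):
        deficit = max(0.0, deficit + e - p)
        capacity = max(capacity, deficit)
    return capacity

# Monthly series (mm): a wet season followed by a four-month dry spell.
P = [120, 110, 90, 10, 0, 5, 0, 130, 125]
E = [80, 80, 80, 70, 65, 60, 55, 80, 80]
print(root_zone_storage_capacity(P, E))   # → 235.0
```

    Because only gridded P and E time series are needed, the estimate varies within a land cover class, which is exactly what a per-class look-up table cannot capture.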

  13. Near-infrared spectroscopy of renal tissue in vivo

    NASA Astrophysics Data System (ADS)

    Grosenick, Dirk; Steinkellner, Oliver; Wabnitz, Heidrun; Macdonald, Rainer; Niendorf, Thoralf; Cantow, Kathleen; Flemming, Bert; Seeliger, Erdmann

    2013-03-01

    We have developed a method to quantify hemoglobin concentration and oxygen saturation within the renal cortex by near-infrared spectroscopy. A fiber optic probe was used to transmit the radiation of three semiconductor lasers at 690 nm, 800 nm and 830 nm to the tissue, and to collect diffusely remitted light at source-detector separations from 1 mm to 4 mm. To derive tissue hemoglobin concentration and oxygen saturation of hemoglobin the spatial dependence of the measured cw intensities was fitted by a Monte Carlo model. In this model the tissue was assumed to be homogeneous. The scaling factors between measured intensities and simulated photon flux were obtained by applying the same setup to a homogeneous semi-infinite phantom with known optical properties and by performing Monte Carlo simulations for this phantom. To accelerate the fit of the tissue optical properties a look-up table of the simulated reflected intensities was generated for the needed range of absorption and scattering coefficients. The intensities at the three wavelengths were fitted simultaneously using hemoglobin concentration, oxygen saturation, the reduced scattering coefficient at 800 nm and the scatter power coefficient as fit parameters. The method was employed to study the temporal changes of renal hemoglobin concentration and blood oxygenation on an anesthetized rat during a short period of renal ischemia induced by aortic occlusion and during subsequent reperfusion.
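
    The look-up-table fit described above can be sketched with a toy forward model standing in for the Monte Carlo simulations: reflectance curves are precomputed on a grid of absorption (μa) and reduced scattering (μs') coefficients, and a measured curve is matched to the grid point with the smallest residual. The model, grids, and separations below are illustrative assumptions.

```python
import math

SEPARATIONS_MM = [1.0, 2.0, 3.0, 4.0]   # source-detector separations

def forward(mu_a, mu_s, r_mm):
    """Toy diffuse-reflectance model (NOT the paper's Monte Carlo code):
    effective attenuation with a geometric fall-off."""
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s))
    return math.exp(-mu_eff * r_mm) / r_mm ** 2

def build_lut(mu_a_grid, mu_s_grid):
    """Precompute a reflectance curve for every (mu_a, mu_s') grid point."""
    return {(a, s): [forward(a, s, r) for r in SEPARATIONS_MM]
            for a in mu_a_grid for s in mu_s_grid}

def fit(lut, measured):
    """Grid point minimizing the squared residual after normalising both
    curves to their first separation (cw intensities are relative)."""
    m = [v / measured[0] for v in measured]
    def cost(curve):
        c = [v / curve[0] for v in curve]
        return sum((x - y) ** 2 for x, y in zip(c, m))
    return min(lut, key=lambda k: cost(lut[k]))

lut = build_lut([0.005, 0.01, 0.02], [0.5, 1.0, 1.5])        # per-mm grids
measured = [forward(0.01, 1.0, r) for r in SEPARATIONS_MM]   # synthetic data
best = fit(lut, measured)
```

    With single-wavelength cw data the normalised curve mainly constrains the effective attenuation, which is one reason the instrument fits all three wavelengths simultaneously.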

  14. HIM Correlational Study

    ERIC Educational Resources Information Center

    Powell, Evan R.

    1977-01-01

    This study uses two methods of analysis to examine the degree to which items within the cells of the Hill Interaction Matrix correlate. It is found that the table of specifications does not hold up. But the author recommends caution in interpreting this finding. (Author/BP)

  15. Radiation exposure during in-situ pinning of slipped capital femoral epiphysis hips: does the patient positioning matter?

    PubMed

    Mohammed, Riazuddin; Johnson, Karl; Bache, Ed

    2010-07-01

    Multiple radiographic images may be necessary during the standard procedure of in-situ pinning of slipped capital femoral epiphysis (SCFE) hips. This procedure can be performed with the patient positioned on a fracture table or a radiolucent table. Our study aims to look at any differences in the amount and duration of radiation exposure for in-situ pinning of SCFE performed using a fracture table or a radiolucent table. Sixteen hips in 13 patients pinned on a radiolucent table were compared, in terms of cumulative radiation exposure, with 35 hips in 33 patients pinned on a fracture table during the same time period. Cumulative radiation dose was measured as dose area product in Gy cm2 and the duration of exposure was measured in minutes. Appropriate statistical tests were used to test the significance of any differences. Mean cumulative radiation dose for SCFE pinned on a radiolucent table was statistically less than for those pinned on a fracture table (P<0.05). The mean duration of radiation exposure on either table was not significantly different. Lateral projections may increase the radiation doses compared with anteroposterior projections because of the higher exposure parameters needed for side imaging. Our results showing decreased exposure doses on the radiolucent table are probably because of the ease of frog-leg lateral positioning and thereby the ease of lateral imaging. In-situ pinning of SCFE hips on a radiolucent table has the additional advantage that the radiation dose during the procedure is significantly less than when the procedure is performed on a fracture table.

  16. A computer program for multiple decrement life table analyses.

    PubMed

    Poole, W K; Cooley, P C

    1977-06-01

    Life table analysis has traditionally been the tool of choice in analyzing distributions of "survival" times when a parametric form for the survival curve could not be reasonably assumed. Chiang, in two papers [1,2], formalized the theory of life table analyses in a Markov chain framework and derived maximum likelihood estimates of the relevant parameters for the analyses. He also discussed how the techniques could be generalized to consider competing risks and follow-up studies. Although various computer programs exist for doing different types of life table analysis [3], to date there has not been a generally available, well-documented computer program to carry out multiple decrement analyses, either by Chiang's or any other method. This paper describes such a program developed by Research Triangle Institute. A user's manual, which supplements the contents of this paper with a discussion of the formulas used and the program listing, is available at printing cost.

  17. White Paper: A Defect Prioritization Method Based on the Risk Priority Number

    DTIC Science & Technology

    2013-11-01

    The Failure Modes and Effects Analysis (FMEA) method employs a measurement technique called Risk Priority Number (RPN) to quantify the… [recovered fragment of Table 1 – Time Scaling Factors: "Up to an hour" = 16-60 minutes, scaling factor 1.5; "Brief Interrupt" = 0-15 minutes, scaling factor 1] …In the FMEA formulation, RPN is a product of the three categories
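
    The product structure described above is easy to sketch. The severity/occurrence/detection scores and any cutoffs beyond the Table 1 – Time Scaling Factors fragment above are assumptions for illustration, not the white paper's actual scales.

```python
# RPN-style defect priority: classic FMEA product of severity, occurrence
# and detection, weighted here by a downtime-based time scaling factor
# (cutoffs follow the Table 1 fragment; the factor for longer outages is
# an assumption).
TIME_FACTORS = [
    (15, 1.0),   # "Brief Interrupt": 0-15 minutes
    (60, 1.5),   # "Up to an hour": 16-60 minutes
]

def time_factor(minutes):
    for limit, factor in TIME_FACTORS:
        if minutes <= limit:
            return factor
    return 2.0   # assumed factor for outages longer than an hour

def rpn(severity, occurrence, detection, downtime_min):
    return severity * occurrence * detection * time_factor(downtime_min)

print(rpn(5, 3, 4, 45))  # 5 * 3 * 4 * 1.5 = 90.0
```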

  18. Control law system for X-Wing aircraft

    NASA Technical Reports Server (NTRS)

    Lawrence, Thomas H. (Inventor); Gold, Phillip J. (Inventor)

    1990-01-01

    Control law system for the collective axis, as well as the pitch and roll axes, of an X-Wing aircraft and for the pneumatic valving controlling circulation control blowing for the rotor. As to the collective axis, the system gives the pilot single-lever direct lift control and ensures that maximum cyclic blowing control power is available in transition. Angle-of-attack de-coupling is provided in rotary wing flight, and mechanical collective is used to augment pneumatic roll control when appropriate. Automatic gain variations with airspeed and rotor speed are provided, so a unitary set of control laws works in all three X-Wing flight modes. As to the pitch and roll axes, the system produces essentially the same aircraft response regardless of flight mode or condition. Undesirable cross-couplings are compensated for in a manner unnoticeable to the pilot, without requiring pilot action as flight mode or condition is changed. A hub moment feedback scheme is implemented, utilizing a P+I controller, significantly improving bandwidth. Limits protect the aircraft structure from inadvertent damage. As to the pneumatic valving, the system automatically provides the pressure required at each valve azimuth location, as dictated by collective, cyclic and higher harmonic blowing commands. Variations in the required control phase angle are automatically introduced, and variations in plenum pressure are compensated for. The required switching for leading, trailing and dual edge blowing is automated, using a simple table look-up procedure. Non-linearities due to valve characteristics of circulation control lift are linearized by map look-ups.
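
    The table look-up pattern used here is easy to show in miniature: store breakpoints once, then interpolate between entries at run time. The breakpoints below are invented for the sketch and are unrelated to the X-Wing's actual valve schedules.

```python
import bisect

# 1D look-up with linear interpolation, of the kind used to linearize a
# non-linear valve characteristic (breakpoints invented for illustration).
cmd      = [0.0, 0.25, 0.5, 0.75, 1.0]   # commanded blowing fraction
pressure = [0.0, 0.10, 0.30, 0.65, 1.0]  # normalized plenum pressure needed

def look_up(x, xs=cmd, ys=pressure):
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(xs) - 2))          # clamp to a valid segment
    t = (x - xs[i]) / (xs[i + 1] - xs[i])    # position within the segment
    return ys[i] + t * (ys[i + 1] - ys[i])

print(look_up(0.6))  # interpolates between the 0.5 and 0.75 entries
```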

  19. TEAM: efficient two-locus epistasis tests in human genome-wide association study.

    PubMed

    Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei

    2010-06-15

    As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable to single-locus study, since many diseases are known to be complex traits. A brute-force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, genotypes may be heterozygous, and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large-sample studies. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate control. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and that it achieves at least an order of magnitude speedup over the brute-force approach.
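
    The contingency tables at the heart of such tests are easy to picture: each individual contributes one count keyed by its two genotypes and its phenotype. The toy data below are invented, and TEAM's actual contribution, updating such tables incrementally via a minimum spanning tree, is not attempted here.

```python
from collections import Counter

# Two-locus contingency table: genotypes coded as 0/1/2 copies of the minor
# allele (so heterozygotes are represented), phenotype coded 0/1.
snp_a = [0, 1, 2, 1, 0, 2, 1, 1]
snp_b = [1, 1, 0, 2, 0, 2, 1, 0]
case  = [1, 1, 0, 0, 1, 1, 0, 0]

table = Counter(zip(snp_a, snp_b, case))   # (gA, gB, phenotype) -> count
print(table[(1, 1, 1)])                    # cases with gA=1 and gB=1
```

    Any contingency-table statistic (chi-square, exact tests) can then be computed from these counts, which is why the framework supports many tests.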

  20. NORTH ELEVATION OF GOLD HILL MILL, LOOKING SOUTH. AT LEFT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    NORTH ELEVATION OF GOLD HILL MILL, LOOKING SOUTH. AT LEFT EDGE IS THE SINGLE CYLINDER “HOT SHOT” ENGINE THAT PROVIDED POWER FOR THE MILL. JUST IN FRONT OF IT IS AN ARRASTRA. AT CENTER IS THE BALL MILL AND SECONDARY ORE BIN. JUST TO THE RIGHT OF THE BALL MILL IS A RAKE CLASSIFIER, AND TO THE RIGHT ARE THE CONCENTRATION TABLES. WARM SPRINGS CAMP IS IN THE DISTANCE. SEE CA-292-4 FOR IDENTICAL B&W NEGATIVE. - Gold Hill Mill, Warm Spring Canyon Road, Death Valley Junction, Inyo County, CA

  1. NORTH ELEVATION OF GOLD HILL MILL, LOOKING SOUTH. AT LEFT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    NORTH ELEVATION OF GOLD HILL MILL, LOOKING SOUTH. AT LEFT EDGE IS THE SINGLE CYLINDER “HOT SHOT” ENGINE THAT PROVIDED POWER FOR THE MILL. JUST IN FRONT OF IT IS AN ARRASTRA. AT CENTER IS THE BALL MILL AND SECONDARY ORE BIN. JUST TO THE RIGHT OF THE BALL MILL IS A RAKE CLASSIFIER, AND TO THE RIGHT ARE THE CONCENTRATION TABLES. WARM SPRINGS CAMP IS IN THE DISTANCE. SEE CA-292-17 (CT) FOR IDENTICAL COLOR TRANSPARENCY. - Gold Hill Mill, Warm Spring Canyon Road, Death Valley Junction, Inyo County, CA

  2. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when using Hamming distance ranking. Moreover, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complementarity for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single- and multiple-table search over the state-of-the-art methods.
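
    The bitwise-weighting idea can be sketched simply: instead of counting disagreeing bits equally, each bit carries its own weight. The codes and weights below are invented; the paper derives its weights per query from hash-function quality.

```python
# Query-adaptive weighted Hamming distance in miniature: each disagreeing
# bit contributes its own weight instead of a flat 1.
def weighted_hamming(q, c, w):
    return sum(wi for qi, ci, wi in zip(q, c, w) if qi != ci)

query     = [1, 0, 1, 1]
candidate = [1, 1, 0, 1]
weights   = [0.9, 0.4, 0.7, 0.2]   # invented per-bit weights
print(weighted_hamming(query, candidate, weights))  # middle bits disagree: 0.4 + 0.7
```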

  3. Does Training in Table Creation Enhance Table Interpretation? A Quasi-Experimental Study with Follow-Up

    ERIC Educational Resources Information Center

    Karazsia, Bryan T.; Wong, Kendal

    2016-01-01

    Quantitative and statistical literacy are core domains in the undergraduate psychology curriculum. An important component of such literacy includes interpretation of visual aids, such as tables containing results from statistical analyses. This article presents results of a quasi-experimental study with longitudinal follow-up that tested the…

  4. Women in the Civil Engineer Corps.

    DTIC Science & Technology

    1986-01-01

    [Recovered survey and front-matter fragments] "…in order to 'measure up'?" (often / occasionally / rarely / never); "Have you experienced sexual harassment on the job?" (often / occasionally / rarely / never). Contents: Assignments, 23; Sexual Harassment/Discrimination, 26; Related Studies and Literature, 28; IV. Research Methodology. Tables: Table 13, Question 16, "Work Harder," 56; Table 14, Question 17, "Measure Up," 57; Table 15, Question 18, "Sexual Harassment"

  5. Research of spectacle frame measurement system based on structured light method

    NASA Astrophysics Data System (ADS)

    Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin

    2016-10-01

    Automatic eyeglass lens edging systems are now widely used to automatically cut and polish uncut lenses based on the spectacle frame shape data obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing around the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light is proposed to measure the three-dimensional (3D) data of the spectacle frame. First, we focus on the processing approach that solves the problem of deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposure and high dynamic range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, a Gamma transform and median filtering are applied to enhance image contrast. In order to remove background noise from the image and extract the region of interest (ROI), an auxiliary lighting system of special design is utilized to help effectively distinguish between the object and the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between the stripes and the boundary of the spectacle frame. By further fringe center extraction and depth information acquisition through a look-up table method, the 3D shape of the spectacle frame is recovered.
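
    Of the processing steps above, the Gamma transform is a natural fit for a look-up table on 8-bit images: compute the curve once, then remap every pixel by indexing. The gamma value below is illustrative, not the paper's.

```python
# Gamma transform via a 256-entry look-up table for 8-bit pixel values.
gamma = 0.5
lut = [round(255 * (i / 255) ** gamma) for i in range(256)]

row = [0, 64, 128, 255]            # one row of 8-bit pixels
print([lut[p] for p in row])       # remapped by a single indexing pass
```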

  6. [A Method to Reconstruct Surface Reflectance Spectrum from Multispectral Image Based on Canopy Radiation Transfer Model].

    PubMed

    Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li

    2015-07-01

    Due to the lack of enough spectral bands in a multi-spectral sensor, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. This method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectrometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated bands and observed bands, indicating that the reflectance spectrum was well simulated and reliable.
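
    The LUT retrieval step can be shown in miniature: simulate spectra offline for candidate parameter sets, then pick the entry closest to the observation. The parameter names, entries and band values below are all invented; the paper populates its LUT with SLC-model runs.

```python
# Toy look-up-table inversion: parameter sets -> simulated band reflectances;
# retrieval picks the entry with the smallest squared distance to the
# observed bands (all numbers invented for illustration).
lut = {
    ("low_LAI",  0.2): [0.08, 0.20, 0.25],
    ("high_LAI", 0.2): [0.04, 0.30, 0.45],
    ("high_LAI", 0.6): [0.06, 0.26, 0.38],
}
observed = [0.05, 0.29, 0.44]

best = min(lut, key=lambda k: sum((a - b) ** 2 for a, b in zip(lut[k], observed)))
print(best)   # parameter set whose simulated spectrum matches best
```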

  7. Noise generator for tinnitus treatment based on look-up tables

    NASA Astrophysics Data System (ADS)

    Uriz, Alejandro J.; Agüero, Pablo; Tulli, Juan C.; Castiñeira Moreira, Jorge; González, Esteban; Hidalgo, Roberto; Casadei, Manuel

    2016-04-01

    Treatment of tinnitus by means of masking sounds yields a significant improvement in the quality of life of individuals who suffer from that condition. In view of this, it is possible to develop noise synthesizers based on random number generators in digital signal processors (DSPs), which are used in almost all digital hearing aid devices. DSP architectures have limitations for implementing a pseudo-random number generator, so the noise statistics can fall short of expectations. In this paper, a technique is proposed to generate additive white Gaussian noise (AWGN), or other types of filtered noise, using coefficients stored in the program memory of the DSP. An implementation of the technique is also carried out on a dsPIC from Microchip®. Objective experiments and experimental measurements are performed to analyze the proposed technique.
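
    The stored-coefficient idea can be sketched as follows: precompute a table of Gaussian samples (as if placed in program memory) and play entries back in randomized order, so no Gaussian generator runs at synthesis time. The table size and seed are arbitrary choices for the sketch.

```python
import random

# Table-based AWGN sketch: Gaussian samples precomputed once ("stored in
# program memory"), then read back by random index at run time.
random.seed(7)
TABLE = [random.gauss(0.0, 1.0) for _ in range(1024)]

def next_sample():
    return TABLE[random.randrange(len(TABLE))]

samples = [next_sample() for _ in range(5000)]
mean = sum(samples) / len(samples)
print(round(mean, 3))   # close to 0 for zero-mean noise
```

    On a real DSP the index scrambler would itself be a cheap operation (e.g. a linear congruential step), which is the point: only table reads happen per sample.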

  8. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). The color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly assigned to a volume containing only relatively few LUT values, from which a nearest neighbor is selected. Image color values are assigned 8-bit pointers to their closest LUT value, whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
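
    The subdivision-plus-nearest-neighbor step can be illustrated with a deliberately tiny LUT: bucket the LUT entries into coarse color-space volumes, then compare an incoming pixel only against the entries sharing its volume. The 8-entry LUT and 2x2x2 grid are invented for the sketch; the patent's subdivision is adaptive.

```python
# Coarse color-space subdivision: each pixel is matched only against the
# LUT entries that fall in its own volume (toy corner-color LUT).
LUT = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255),
       (255, 255, 0), (255, 0, 255), (0, 255, 255), (255, 255, 255)]

def cell(c):                        # coarse volume index per channel
    return tuple(v // 128 for v in c)

buckets = {}
for idx, color in enumerate(LUT):
    buckets.setdefault(cell(color), []).append(idx)

def nearest(pixel):
    # fall back to a full scan if the pixel's volume holds no LUT entry
    candidates = buckets.get(cell(pixel), range(len(LUT)))
    return min(candidates,
               key=lambda i: sum((a - b) ** 2 for a, b in zip(LUT[i], pixel)))

print(nearest((250, 10, 5)))   # index of the nearest LUT color (pure red)
```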

  9. Digital to analog conversion and visual evaluation of Thematic Mapper data

    USGS Publications Warehouse

    McCord, James R.; Binnie, Douglas R.; Seevers, Paul M.

    1985-01-01

    As a part of the National Aeronautics and Space Administration Landsat D Image Data Quality Analysis Program, the Earth Resources Observation Systems Data Center (EDC) developed procedures to optimize the visual information content of Thematic Mapper data and evaluate the resulting photographic products by visual interpretation. A digital-to-analog transfer function was developed which would properly place the digital values on the most usable portion of a film response curve. Individual black-and-white transparencies generated using the resulting look-up tables were utilized in the production of color-composite images with varying band combinations. Four experienced photointerpreters ranked 2-cm-diameter (0.75 inch) chips of selected image features of each band combination for ease of interpretability. A nonparametric rank-order test determined the significance of interpreter preference for the band combinations.

  11. The Enterprise Project

    NASA Technical Reports Server (NTRS)

    Dolci, Wendy

    2003-01-01

    Let us look at this thing with two agendas in mind. Agenda number one was to give the class a problem which was challenging and stimulating. Agenda number two was to see if a bright group of people might come up with some notions about how to bridge these worlds of technology development and flight system development. Here is an opportunity to get some bright folks who bring a lot of capability to the table. Explain the problem to them and see if they can offer some fresh insights and ideas. It's a very powerful process, and one that has already been put to use in MSL in a number of different areas: getting people who haven't been in the middle of the forest, but are still very strong technically, to step in and think about the problem for a while and offer their observations.

  12. [A method to estimate one's own blood alcohol concentration when the ministerial tables are not available].

    PubMed

    Dosi, G; Taggi, F; Macchia, T

    2009-01-01

    To reduce the prevalence of driving under the influence, tables allowing drivers to estimate their own blood alcohol concentration (BAC) from the type and quantity of alcoholic drinks consumed have been enacted by decree in Italy. These tables, based on a modified Widmark formula, are now posted in all public establishments serving alcoholic beverages. The aim of this initiative is to encourage subjects who consume alcohol and then drive a vehicle to take account of their estimated BAC and, on this basis, take suitable action if needed (avoiding or limiting further consumption, waiting longer before driving, or leaving the driving to a sober companion). Nevertheless, there are many occasions on which these tables are not available. To allow anybody to roughly estimate his or her own BAC in these cases too, a simple method has been developed. Briefly, the weight (in grams) of the alcohol consumed is divided by half of body weight for a woman drinking on an empty stomach (by 90% of body weight on a full stomach), and by 70% of body weight for a man drinking on an empty stomach (by 120% of body weight on a full stomach). Agreement between BAC values estimated by the proposed method and those shown in the ministerial tables is very close: they differ by a few hundredths of a gram per liter. Unlike the ministerial tables, the proposed method requires computing the grams of ingested alcohol. This may involve some difficulty which, nevertheless, can be overcome easily. In our opinion, skill in computing the grams of alcohol consumed is of great significance, since it provides the subject with a strong signal not only in road-safety terms but also in health terms. The ministerial tables and the proposed method should be part of the teaching for issuing driving licences and for recovering licence points that have been taken away.
    In broad terms, schools should teach young people to calculate the quantity of alcohol in each drink, acquainting them with the risks and paving the way for more aware drinking when they come of age.
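
    The rule of thumb described above reduces to one division; the sketch below encodes it directly. The divisor fractions are taken from the abstract, the example drink is invented, and the result is a rough estimate in grams per liter, not a legal measurement.

```python
# Rough BAC estimate per the described rule: grams of alcohol divided by a
# sex- and stomach-dependent fraction of body weight (in kg), giving g/l.
DIVISOR_FRACTION = {
    ("F", "empty"): 0.50, ("F", "full"): 0.90,
    ("M", "empty"): 0.70, ("M", "full"): 1.20,
}

def estimate_bac(alcohol_g, weight_kg, sex, stomach):
    return alcohol_g / (DIVISOR_FRACTION[(sex, stomach)] * weight_kg)

# e.g. 15 g of alcohol (roughly one drink) for a 60 kg woman, empty stomach:
print(round(estimate_bac(15, 60, "F", "empty"), 2))  # 15 / 30 = 0.5 g/l
```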

  13. The look AHEAD trial: bone loss at four-year follow-up in type 2 diabetes

    USDA-ARS?s Scientific Manuscript database

    OBJECTIVE: To determine whether an intensive lifestyle intervention (ILI) designed to sustain weight loss and improve physical fitness in overweight or obese persons with type 2 diabetes was associated with bone loss after 4 years of follow-up. RESEARCH DESIGN AND METHODS: This randomized controlled...

  14. A New Look for Coriolis.

    ERIC Educational Resources Information Center

    Levi, F. A.

    1988-01-01

    Describes a demonstration of Coriolis acceleration. Discusses two different meanings of "Coriolis" and two causes of Coriolis acceleration. Gives a set-up method for the demonstration apparatus using a rotary disk with rubber tubing for tap water, switches, lamps, a battery, and a counterweight. Provides two pictures with the operating method.…

  15. 40 CFR 455.50 - Identification of test procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... methods cited and described in Table IG at 40 CFR 136.3(a). Pesticide manufacturers may not use the analytical method cited in Table IB, Table IC, or Table ID of 40 CFR 136.3(a) to make these determinations (except where the method cited in those tables is identical to the method specified in Table IG at 40 CFR...

  16. SU-D-206-05: A Critical Look at CBCT-Based Dose Calculation Accuracy as It Is Applied to Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bejarano Buele, A; Sperling, N; Parsai, E

    2016-06-15

    Purpose: Cone-beam CTs (CBCT) obtained from On-Board Imaging Devices (OBI) are increasingly being used for dose calculation purposes in adaptive radiotherapy. Patient and target morphology are monitored and the treatment plan is updated using CBCT. Due to the difference in image acquisition parameters, dose calculated on a CBCT can differ from the planned dose. We evaluate the difference between dose calculation on kV CBCT and simulation CT, and the effect of HU-density tables on dose discrepancies. Methods: HU values for various materials were obtained using a Catphan 504 phantom for a simulator CT (CTSIM) and two different OBI systems using three imaging protocols: Head, Thorax and Pelvis. HU-density tables were created in the TPS for each OBI imaging protocol. Treatment plans were made on each Catphan 504 dataset and on the head, thorax and pelvis sections of an anthropomorphic phantom, with and without the respective HU-density table. DVH information was compared among OBI systems and the planning CT. Results: Dose calculations carried out on the Catphan 504 CBCTs, with and without the respective CT-density table, had a maximum difference of −0.65% from the values on the planning CT. The use of the respective HU-density table decreased the percent differences from planned values by half in most of the protocols. For the anthropomorphic phantom datasets, the use of the correct HU-density table reduced differences by 0.89% on OBI1 and 0.59% on OBI2 for the head, 0.49% on OBI1 for the thorax, and 0.25% on OBI2 for the pelvis. Differences from planned values without HU-density correction ranged from 3.13% (OBI1, thorax) to 0.30% (OBI2, thorax). Conclusion: CT-density tables in the TPS yield acceptable differences when used in a partly homogeneous medium. Further corrections are needed when the medium contains pronounced density differences for accurate CBCT calculation. The current difference range (1-3%) can be clinically acceptable.

  17. D0 Solenoid Upgrade Project: Pressure Ratings for Some Chimney and Control Dewar Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucinski, R.; /Fermilab

    1993-05-25

    Pressure rating calculations were done for some of the chimney and control dewar components. This engineering note documents these calculations. The table below summarizes the components examined and what their pressure ratings are. The raw engineering calculations for each of the components are given.

  18. 2. VIEW OF LOWER MILL FLOOR FOUNDATION, SHOWING, LEFT TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VIEW OF LOWER MILL FLOOR FOUNDATION, SHOWING, LEFT TO RIGHT, EDGE OF MILLING FLOOR, TABLE FLOOR, VANNING FLOOR, LOADING LEVEL, TAILINGS POND IN RIGHT BACKGROUND. VIEW IS LOOKING FROM THE NORTHWEST - Mountain King Gold Mine & Mill, 4.3 Air miles Northwest of Copperopolis, Copperopolis, Calaveras County, CA

  19. Automated System Marketplace 1987: Maturity and Competition.

    ERIC Educational Resources Information Center

    Walton, Robert A.; Bridge, Frank R.

    1988-01-01

    This annual review of the library automation marketplace presents profiles of 15 major library automation firms and looks at emerging trends. Seventeen charts and tables provide data on market shares, number and size of installations, hardware availability, operating systems, and interfaces. A directory of 49 automation sources is included. (MES)

  20. Editors' Spring Picks

    ERIC Educational Resources Information Center

    Library Journal, 2011

    2011-01-01

    While they do not represent the rainbow of reading tastes American public libraries accommodate, Book Review editors are a wildly eclectic bunch. One look at their bedside tables and ereaders would reveal very little crossover. This article highlights an eclectic array of spring offerings ranging from print books to an audiobook to ebook apps. It…

  1. Another Look at Public Library Referenda in Illinois.

    ERIC Educational Resources Information Center

    Adams, Stanley E.

    1996-01-01

    Presents 14 tables depicting Illinois public library referenda data from fiscal year 1977/78 through November 1995. Discusses success rates in terms of even versus odd years and spring versus fall for fiscal years 1986-95. Outlines types of library referenda, including: annexation; tax increase; bond issue; establishment (district); conversion to…

  2. Life expectancy--a commentary on this life table variable.

    PubMed

    Singer, Richard B

    2005-01-01

    In 1992, I wrote an article on a method of modifying the Decennial US Life Table to accommodate any pattern of excess mortality expressed in terms of excess death rate (EDR), for the specific purpose of calculating the reduced life expectancy, e. I believe this was the first article published in the Journal of Insurance Medicine (JIM) that dealt specifically with life expectancy as an index of survival and risk appraisal, never used in the classification of extra mortality risk in applicants for life insurance. In this commentary, I discuss the 1989-91 US Decennial Life Table in detail. I link the subject matter of the 1992 article with several more recent articles that also focus on the utility of life expectancy in underwriting structured settlement annuities and preparing reports on life expectancy for an attorney in a tort case. A few references are given for further reading on life table methodology and its use in the most accurate estimate of life expectancy, given the inherent limitations of the life table and the limited duration of follow-up studies.
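
    The EDR idea the commentary builds on can be sketched numerically: add a constant excess death rate to each year's baseline mortality and recompute life expectancy. The baseline rates below are invented toy values, not the US Decennial Life Table.

```python
# Curtate life expectancy from annual death probabilities, with and without
# a constant excess death rate (EDR) added (toy rates for illustration).
baseline_q = [0.01, 0.012, 0.015, 0.02, 1.0]   # final 1.0 closes the table
EDR = 0.005

def life_expectancy(qs):
    e, surv = 0.0, 1.0
    for q in qs:
        surv *= 1.0 - min(q, 1.0)
        e += surv                   # sum of survivors l_x approximates e
    return e

print(life_expectancy(baseline_q))                      # baseline
print(life_expectancy([q + EDR for q in baseline_q]))   # reduced by the EDR
```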

  3. Performance characteristics of the EPR dosimetry system with table sugar in radiotherapy applications.

    PubMed

    Mikou, M; Ghosne, N; El Baydaoui, R; Zirari, Z; Kuntz, F

    2015-05-01

    Performance characteristics of megavoltage photon dose measurements with EPR and table sugar were analyzed. An advantage of sugar as a dosimetric material is its tissue equivalency. The minimal detectable dose was found to be 1.5 Gy for both the 6 and 18 MV photons. The dose response curves are linear up to at least 20 Gy. The energy dependence of the dose response in the megavoltage energy range is very weak and probably statistically insignificant. Reproducibility of measurements of various doses in this range performed with the peak-to-peak and double-integral methods is reported. The method can be used in real-time dosimetry in radiation therapy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Predicting the past: a simple reverse stand table projection method

    Treesearch

    Quang V. Cao; Shanna M. McCarty

    2006-01-01

    A stand table gives the number of trees in each diameter class. Future stand tables can be predicted from current stand tables using a stand table projection method. In the simplest form of this method, a future stand table can be expressed as the product of a matrix of transitional proportions (based on diameter growth rates) and a vector of the current stand table. There...
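
    The projection step is a single matrix-vector product; the sketch below uses three diameter classes with invented transition proportions (each column of T distributes one class's trees between staying put and moving up a class).

```python
# Future stand table = transition matrix x current stand table
# (three diameter classes, proportions invented for the sketch).
current = [120.0, 80.0, 40.0]        # trees per diameter class

T = [[0.50, 0.00, 0.00],             # T[i][j]: share of class j ending in i
     [0.50, 0.75, 0.00],
     [0.00, 0.25, 1.00]]

future = [sum(T[i][j] * current[j] for j in range(3)) for i in range(3)]
print(future)   # projected trees per diameter class
```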

  5. Contamination of table salts from Turkey with microplastics.

    PubMed

    Gündoğdu, Sedat

    2018-05-01

    Microplastics (MPs) pollution has become a problem that affects all aquatic, atmospheric and terrestrial environments in the world. In this study, we looked into whether MPs in seas and lakes reach consumers through table salt. For this purpose, we obtained 16 brands of table salt from the Turkish market and determined their MP content by microscopic and Raman spectroscopic examination. According to our results, the MP particle content was 16-84 items/kg in sea salt, 8-102 items/kg in lake salt and 9-16 items/kg in rock salt. The most common plastic polymers were polyethylene (22.9%) and polypropylene (19.2%). When the amounts of MPs and the amount of salt consumed by Turkish consumers per year are considered together, consumers of sea salt, lake salt or rock salt ingest an estimated 249-302, 203-247 or 64-78 items per year, respectively. This is the first report of this concerning level of MP content in table salts on the Turkish market.

  6. Early detection of Aspergillus carbonarius and A. niger on table grapes: a tool for quality improvement.

    PubMed

    Ayoub, F; Reverberi, M; Ricelli, A; D'Onghia, A M; Yaseen, T

    2010-09-01

    Aspergillus carbonarius and the A. niger aggregate are the main fungal contaminants of table grapes. Besides their ability to cause black rot, they can produce ochratoxin A (OTA), a mycotoxin that has attracted increasing attention worldwide. The objective of this work was to set up a simple and rapid molecular method for the early detection of both fungi in table grapes before fungal development becomes evident. Polymerase chain reaction (PCR)-based assays were developed by designing species-specific primers based on the polyketide synthase (PKS(S)) sequences of A. carbonarius and A. niger that have recently been demonstrated to be involved in OTA biosynthesis. Three table grape varieties (Red Globe, Crimson Seedless, and Italia) were inoculated with OTA-producing A. carbonarius and A. niger aggregate strains. The DNA extracted from control (non-inoculated) and inoculated grapes was amplified by PCR using ACPKS2F-ACPKS2R for A. carbonarius and ANPKS5-ANPKS6 for the A. niger aggregate. Both primer pairs allowed clear detection, even in symptomless samples. PCR-based methods are considered a good alternative to traditional diagnostic means for the early detection of fungi in complex matrices because of their high specificity and sensitivity. The results obtained could be useful for the definition of a 'quality label' for tested grapes to improve the safety measures taken to guarantee the production of fresh table grapes.

  7. A functional-dependencies-based Bayesian networks learning method and its application in a mobile commerce system.

    PubMed

    Liao, Stephen Shaoyi; Wang, Huai Qing; Li, Qiu Dan; Liu, Wei Yi

    2006-06-01

    This paper presents a new method for learning Bayesian networks from functional dependencies (FD) and third normal form (3NF) tables in relational databases. The method sets up a linkage between the theory of relational databases and probabilistic reasoning models, which is interesting and useful especially when data are incomplete and inaccurate. The effectiveness and practicability of the proposed method is demonstrated by its implementation in a mobile commerce system.

  8. Coronary artery stenosis detection with holographic display of 3D angiograms

    NASA Astrophysics Data System (ADS)

    Ritman, Erik L.; Schwanke, Todd D.; Simari, Robert D.; Schwartz, Robert S.; Thomas, Paul J.

    1995-05-01

    The objective of this study was to establish the accuracy of a holographic display approach for the detection of stenoses in coronary arteries. The rationale for using a holographic display approach is that multiple angles of view of the coronary arteriogram are provided by a single 'x-ray'-like film, backlit by a special light box. This should be more convenient in that the viewer does not have to page back and forth through a cine angiogram to obtain the multiple angles of view. The method used to test this technique involved viewing 100 3D coronary angiograms. These images were generated from the 3D angiographic images of nine normal coronary arterial trees obtained with the Dynamic Spatial Reconstructor (DSR) fast CT scanner. Using our image processing programs, the image of the coronary artery lumen was locally 'narrowed' by an amount and length, and at a location, determined by a random look-up table. Two independent, blinded, experienced angiographers viewed the holographic displays of these angiograms and recorded their confidence about the presence, location, and severity of the stenoses. This procedure evaluates the sensitivity and specificity of the detection of coronary artery stenoses as a function of the severity, size, and location along the arteries.

  9. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere.

    PubMed

    Vidovic, Luka; Majaron, Boris

    2014-02-01

    Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using an IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on a prior characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.

  10. Anterior chamber blood cell differentiation using spectroscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Qian, Ruobing; McNabb, Ryan P.; Kuo, Anthony N.; Izatt, Joseph A.

    2018-02-01

    There is great clinical importance in identifying cellular responses in the anterior chamber (AC) which can indicate signs of hyphema (an accumulation of red blood cells (RBCs)) or aberrant intraocular inflammation (an accumulation of white blood cells (WBCs)). These responses are difficult to diagnose and require specialized equipment such as ophthalmic microscopes and specialists trained in examining the eye. In this work, we applied spectroscopic OCT to differentiate between RBCs and subtypes of WBCs, including neutrophils, lymphocytes and monocytes, both in vitro and in ACs of porcine eyes. We located and tracked single cells in OCT volumetric images, and extracted the spectroscopic data of each cell from the detected interferograms using the short-time Fourier transform (STFT). A look-up table of Mie spectra was generated and used to correlate the spectroscopic data of single cells to their characteristic sizes. The accuracy of the method was first validated on 10 µm polystyrene microspheres. For RBCs and subtypes of WBCs, the extracted size distributions based on the best Mie spectra fit were significantly different between each cell type by using the Wilcoxon rank-sum test. A similar size distribution of neutrophils was also acquired in the measurements of cells introduced into the ACs of porcine eyes, further supporting spectroscopic OCT for potentially differentiating and quantifying blood cell types in the AC in vivo.

  11. Real-time distortion correction for visual inspection systems based on FPGA

    NASA Astrophysics Data System (ADS)

    Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2008-03-01

    Visual inspection is a new technology based on computer vision research that focuses on measuring an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. To overcome the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection, and distortion correction, can be completed in a programmable device, an FPGA. Because a wide-field-angle lens is adopted in the system, the output images are severely distorted. Limited by the computing speed of the computer, software can correct the distortion of static images but not of dynamic images. To meet the real-time requirement, we designed a distortion correction system based on an FPGA. In this hardware approach, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which data are read out to correct the gray levels. The major benefit of using an FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without modification.
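    The precompute-in-software, look-up-in-hardware split described above can be sketched as follows. This is an illustrative model, not the paper's FPGA design: the radial distortion coefficient and image size are hypothetical, and the correction map simply stores, for each corrected pixel, the flattened source address in the distorted frame.

```python
# Sketch of the precompute-then-look-up split: correction coordinates are
# computed once in "software", flattened to addresses, and each frame is then
# corrected by table reads alone (illustrative radial model, not the paper's).

def build_correction_lut(width, height, k1=-2.0e-7):
    """Precompute, for every corrected pixel, the source-pixel address
    in the distorted image (simple radial model x_d = x * (1 + k1 * r^2))."""
    cx, cy = width / 2.0, height / 2.0
    lut = []
    for y in range(height):
        for x in range(width):
            dx, dy = x - cx, y - cy
            scale = 1.0 + k1 * (dx * dx + dy * dy)
            xs = min(width - 1, max(0, int(round(cx + dx * scale))))
            ys = min(height - 1, max(0, int(round(cy + dy * scale))))
            lut.append(ys * width + xs)   # flattened storage address
    return lut

def correct_frame(frame, lut):
    """Per-frame correction is a pure table look-up: one read per pixel."""
    return [frame[addr] for addr in lut]
```

    In hardware the same table lives in block RAM, so the per-pixel cost is a single memory read regardless of the lens model used to fill it.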

  12. RELAP-7 Progress Report: A Mathematical Model for 1-D Compressible, Single-Phase Flow Through a Branching Junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, R. A.

    In the literature, the abundance of pipe network junction models, as well as the inclusion of dissipative losses between connected pipes via loss coefficients, has been treated using the incompressible flow assumption of constant density. This approach is fundamentally, physically wrong for compressible flow with density change. This report introduces a mathematical modeling approach for general junctions in piping network systems for which the transient flows are compressible and single-phase. The junction could be as simple as a 1-pipe input and 1-pipe output with differing pipe cross-sectional areas for which a dissipative loss is necessary, or it could include an active component, such as a pump or turbine, between an inlet pipe and an outlet pipe. In this report, discussion will be limited to the former. A more general branching junction connecting an arbitrary number of pipes with transient, 1-D compressible single-phase flows is also presented. These models will be developed in a manner consistent with the use of a general equation of state such as, for example, the recent Spline-Based Table Look-up method [1] for incorporating the IAPWS-95 formulation [2] to give accurate and efficient property calculations for water and steam with RELAP-7 [3].

  13. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. The level-processed values were then adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminations for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
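    The combined camera/monitor gamma correction via a look-up table can be sketched roughly as follows; the gamma values and the 8-bit quantization are assumptions for illustration, not the measured values from the paper.

```python
# Illustrative 8-bit correction LUT: linearize with the (assumed) camera gamma,
# then re-encode for the (assumed) monitor gamma, so displayed luminance tracks
# captured luminance. Per-pixel correction is then a single table read.

def build_gamma_lut(camera_gamma=2.0, monitor_gamma=2.2, levels=256):
    lut = []
    for v in range(levels):
        linear = (v / (levels - 1)) ** camera_gamma    # undo camera encoding
        out = linear ** (1.0 / monitor_gamma)          # pre-compensate monitor
        lut.append(round(out * (levels - 1)))
    return lut

lut = build_gamma_lut()
corrected = [lut[v] for v in (0, 64, 128, 255)]   # table reads, no math per pixel
```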

  14. LANDSAT-D investigations in snow hydrology

    NASA Technical Reports Server (NTRS)

    Dozier, J.

    1983-01-01

    The atmospheric radiative transfer calculation program (ATRAD) and its supporting programs (setting up atmospheric profiles, making Mie tables, and an exponential-sum-fitting table) were completed. More sophisticated treatment of aerosol scattering (including the angular phase function or asymmetry factor) and multichannel analysis of results from ATRAD are being developed. Some progress was made on a Monte Carlo program for examining two-dimensional effects, specifically a surface boundary condition that varies across a scene. The MONTE program combines ATRAD and the Monte Carlo method to produce an atmospheric point spread function. Currently the procedure passes monochromatic tests and the results are reasonable.

  15. Multi-Touch Tabletop System Using Infrared Image Recognition for User Position Identification.

    PubMed

    Suto, Shota; Watanabe, Toshiya; Shibusawa, Susumu; Kamada, Masaru

    2018-05-14

    A tabletop system can facilitate multi-user collaboration in a variety of settings, including small meetings, group work, and education and training exercises. The ability to identify the users touching the table and their positions can promote collaborative work among participants, so methods have been studied that involve attaching sensors to the table, chairs, or to the users themselves. An effective method of recognizing user actions without placing a burden on the user would be some type of visual process, so the development of a method that processes multi-touch gestures by visual means is desired. This paper describes the development of a multi-touch tabletop system using infrared image recognition for user position identification and presents the results of touch-gesture recognition experiments and a system-usability evaluation. Using an inexpensive FTIR touch panel and infrared light, this system picks up the touch areas and the shadow area of the user's hand by an infrared camera to establish an association between the hand and table touch points and estimate the position of the user touching the table. The multi-touch gestures prepared for this system include an operation to change the direction of an object to face the user and a copy operation in which two users generate duplicates of an object. The system-usability evaluation revealed that prior learning was easy and that system operations could be easily performed.

  16. Multi-Touch Tabletop System Using Infrared Image Recognition for User Position Identification

    PubMed Central

    Suto, Shota; Watanabe, Toshiya; Shibusawa, Susumu; Kamada, Masaru

    2018-01-01

    A tabletop system can facilitate multi-user collaboration in a variety of settings, including small meetings, group work, and education and training exercises. The ability to identify the users touching the table and their positions can promote collaborative work among participants, so methods have been studied that involve attaching sensors to the table, chairs, or to the users themselves. An effective method of recognizing user actions without placing a burden on the user would be some type of visual process, so the development of a method that processes multi-touch gestures by visual means is desired. This paper describes the development of a multi-touch tabletop system using infrared image recognition for user position identification and presents the results of touch-gesture recognition experiments and a system-usability evaluation. Using an inexpensive FTIR touch panel and infrared light, this system picks up the touch areas and the shadow area of the user’s hand by an infrared camera to establish an association between the hand and table touch points and estimate the position of the user touching the table. The multi-touch gestures prepared for this system include an operation to change the direction of an object to face the user and a copy operation in which two users generate duplicates of an object. The system-usability evaluation revealed that prior learning was easy and that system operations could be easily performed. PMID:29758006

  17. Experiments with Cross-Language Information Retrieval on a Health Portal for Psychology and Psychotherapy.

    PubMed

    Andrenucci, Andrea

    2016-01-01

    Few studies have been performed within cross-language information retrieval (CLIR) in the field of psychology and psychotherapy. The aim of this paper is to analyze and assess the quality of available query translation methods for CLIR on a health portal for psychology. A test base of 100 user queries, 50 Multi Word Units (WUs) and 50 Single WUs, was used. Swedish was the source language and English the target language. Query translation methods based on machine translation (MT) and dictionary look-up were utilized in order to submit query translations to two search engines: Google Site Search and Quick Ask. Standard IR evaluation measures and a qualitative analysis were utilized to assess the results. The lexicon extracted with word alignment of the portal's parallel corpus provided better statistical results among dictionary look-ups. Google Translate provided more linguistically correct translations overall and also delivered better retrieval results in MT.

  18. Are financial incentives cost-effective to support smoking cessation during pregnancy?

    PubMed

    Boyd, Kathleen A; Briggs, Andrew H; Bauld, Linda; Sinclair, Lesley; Tappin, David

    2016-02-01

    To investigate the cost-effectiveness of up to £400 worth of financial incentives for smoking cessation in pregnancy as an adjunct to routine health care. Cost-effectiveness analysis based on a Phase II randomized controlled trial (RCT) and a cost-utility analysis using a life-time Markov model. The RCT was undertaken in Glasgow, Scotland. The economic analysis was undertaken from the UK National Health Service (NHS) perspective. A total of 612 pregnant women randomized to receive usual cessation support plus or minus financial incentives of up to £400 in vouchers (US $609), contingent upon smoking cessation. Comparison of usual support and incentive interventions in terms of cotinine-validated quitters, quality-adjusted life years (QALYs) and direct costs to the NHS. The incremental cost per quitter at 34-38 weeks pregnant was £1127 ($1716). This is similar to the standard look-up value derived from Stapleton & West's published ICER tables, £1390 per quitter, by looking up the Cessation in Pregnancy Incentives Trial (CIPT) incremental cost (£157) and incremental 6-month quit outcome (0.14). The life-time model resulted in an incremental cost of £17 [95% confidence interval (CI) = -£93, £107] and a gain of 0.04 QALYs (95% CI = -0.058, 0.145), giving an ICER of £482/QALY ($734/QALY). Probabilistic sensitivity analysis indicates uncertainty in these results, particularly regarding relapse after birth. The expected value of perfect information was £30 million (at a willingness to pay of £30 000/QALY), so given current uncertainty, additional research is potentially worthwhile. Financial incentives for smoking cessation in pregnancy are highly cost-effective, with an incremental cost per quality-adjusted life year of £482, which is well below recommended decision thresholds. © 2015 Society for the Study of Addiction.
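    The headline figures above are instances of the standard incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental effect. A minimal sketch follows; the component costs and quit rates are hypothetical, chosen only so that their differences match the reported increments of £157 and 0.14.

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per additional unit of effect (per quitter, per QALY, ...)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-woman costs and 6-month quit rates whose differences equal
# the trial's reported increments (incremental cost £157, incremental quits 0.14).
cost_per_quitter = icer(cost_new=400.0, cost_old=243.0,
                        effect_new=0.24, effect_old=0.10)
```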

  19. Color management with a hammer: the B-spline fitter

    NASA Astrophysics Data System (ADS)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
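    Trilinear interpolation, the degree-one spline mentioned above, can be sketched in a few lines for a single cell of a 3D look-up table. This is a generic sketch of the standard technique, not the authors' B-spline fitter.

```python
def trilerp(c, x, y, z):
    """Trilinear interpolation inside one cell of a 3D look-up table.
    c[i][j][k] holds the eight corner values (i, j, k in {0, 1});
    x, y, z are local coordinates in [0, 1]."""
    # Interpolate along x on the four edges...
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    # ...then along y on the two faces, then along z.
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z
```

    Because the interpolant is multilinear, it reproduces any linear color transform exactly at the cell corners and in between, which is why it remains a safe final stage after nonlinear spline smoothing.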

  20. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA

    PubMed Central

    Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme. PMID:27695114
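    A software analogue of an NFA with shared prefixes and no failure transitions (not the paper's FPGA implementation) is the bit-parallel Shift-And simulation, in which all active NFA states advance in one step and a per-character mask table plays the role of the look-up tables:

```python
# Bit-parallel Shift-And: bit i of the state word means "an NFA path has
# matched pattern[:i+1] ending here". Each input character indexes a
# precomputed mask table; there are no failure transitions to follow.

def build_masks(pattern):
    masks = {}
    for i, ch in enumerate(pattern):
        masks[ch] = masks.get(ch, 0) | (1 << i)
    return masks

def shift_and_search(text, pattern):
    """Return the end positions of all matches of pattern in text."""
    masks = build_masks(pattern)
    accept = 1 << (len(pattern) - 1)
    state, hits = 0, []
    for pos, ch in enumerate(text):
        state = ((state << 1) | 1) & masks.get(ch, 0)
        if state & accept:
            hits.append(pos)
    return hits
```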

  1. A single-layer platform for Boolean logic and arithmetic through DNA excision in mammalian cells

    PubMed Central

    Weinberg, Benjamin H.; Hang Pham, N. T.; Caraballo, Leidy D.; Lozanoski, Thomas; Engel, Adrien; Bhatia, Swapnil; Wong, Wilson W.

    2017-01-01

    Genetic circuits engineered for mammalian cells often require extensive fine-tuning to perform their intended functions. To overcome this problem, we present a generalizable biocomputing platform that can engineer genetic circuits which function in human cells with minimal optimization. We used our Boolean Logic and Arithmetic through DNA Excision (BLADE) platform to build more than 100 multi-input-multi-output circuits. We devised a quantitative metric to evaluate the performance of the circuits in human embryonic kidney and Jurkat T cells. Of 113 circuits analysed, 109 functioned (96.5%) with the correct specified behavior without any optimization. We used our platform to build a three-input, two-output Full Adder and six-input, one-output Boolean Logic Look Up Table. We also used BLADE to design circuits with temporal small molecule-mediated inducible control and circuits that incorporate CRISPR/Cas9 to regulate endogenous mammalian genes. PMID:28346402

  2. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA.

    PubMed

    Kim, HyunJin; Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme.

  3. Model-Based Wavefront Control for CCAT

    NASA Technical Reports Server (NTRS)

    Redding, David; Lou, John Z.; Kissil, Andy; Bradford, Matt; Padin, Steve; Woody, David

    2011-01-01

    The 25-m aperture CCAT submillimeter-wave telescope will have a primary mirror that is divided into 162 individual segments, each of which is provided with 3 positioning actuators. CCAT will be equipped with innovative Imaging Displacement Sensors (IDS), inexpensive optical edge sensors capable of accurately measuring all segment relative motions. These measurements are used in a Kalman-filter-based Optical State Estimator to estimate wavefront errors, permitting use of a minimum-wavefront controller without direct wavefront measurement. This controller corrects the optical impact of errors in 6 degrees of freedom per segment, including lateral translations of the segments, using only the 3 actuated degrees of freedom per segment. The global motions of the Primary and Secondary Mirrors are not measured by the edge sensors. These are controlled using a gravity-sag look-up table. Predicted performance is illustrated by simulated response to errors such as gravity sag.
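    The gravity-sag look-up table idea can be sketched as a tabulated correction indexed by telescope elevation and interpolated at runtime. The elevations and correction values below are hypothetical placeholders, not CCAT data.

```python
import bisect

# Hypothetical gravity-sag table: corrections (arbitrary units) tabulated
# against elevation angle, with linear interpolation between entries.
ELEVATIONS = [0.0, 30.0, 60.0, 90.0]          # degrees
SAG_CORRECTION = [120.0, 95.0, 40.0, 0.0]     # illustrative values; zero at zenith

def sag_correction(elevation_deg):
    """Look up and linearly interpolate the sag correction for an elevation."""
    i = min(max(bisect.bisect_right(ELEVATIONS, elevation_deg) - 1, 0),
            len(ELEVATIONS) - 2)
    t = (elevation_deg - ELEVATIONS[i]) / (ELEVATIONS[i + 1] - ELEVATIONS[i])
    return SAG_CORRECTION[i] * (1 - t) + SAG_CORRECTION[i + 1] * t
```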

  4. The effect of structural design parameters on FPGA-based feed-forward space-time trellis coding-orthogonal frequency division multiplexing channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-08-01

    Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.

  5. Physiological basis for noninvasive skin cancer diagnosis using diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Yao; Markey, Mia K.; Tunnell, James W.

    2017-02-01

    Diffuse reflectance spectroscopy offers a noninvasive, fast, and low-cost alternative to visual screening and biopsy for skin cancer diagnosis. We have previously acquired reflectance spectra from 137 lesions in 76 patients and determined the capability of spectral diagnosis using principal component analysis (PCA). However, it is not well elucidated why spectral analysis enables tissue classification. To provide the physiological basis, we used the Monte Carlo look-up table (MCLUT) model to extract physiological parameters from those clinical data. The MCLUT model yields the following physiological parameters: oxygen saturation, hemoglobin concentration, melanin concentration, vessel radius, and scattering parameters. These parameters show that cancerous skin tissue has lower scattering and larger vessel radii than normal tissue. These results demonstrate the potential of diffuse reflectance spectroscopy for detection of early precancerous changes in tissue. In the future, a diagnostic algorithm that combines these physiological parameters could enable noninvasive diagnosis of skin cancer.
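    The general structure of a look-up-table inversion like MCLUT (precompute modeled spectra on a parameter grid, then pick the grid point whose spectrum best fits the measurement) can be sketched as follows. The forward model here is a toy placeholder, not a Monte Carlo simulation, and the parameter names are illustrative.

```python
# Generic LUT inversion: grid the parameter space, tabulate modeled spectra
# once, then fit measurements by least-squares search over the table.

def forward_model(mu_s, mu_a, wavelengths):
    """Toy reflectance model (illustrative only): higher scattering raises
    reflectance, higher absorption lowers it."""
    return [mu_s / (mu_s + mu_a * (1 + 0.001 * w)) for w in wavelengths]

def build_lut(grid, wavelengths):
    return {(s, a): forward_model(s, a, wavelengths) for s, a in grid}

def invert(measured, lut):
    """Return the parameter pair whose tabulated spectrum best fits (least squares)."""
    def sse(spec):
        return sum((m - s) ** 2 for m, s in zip(measured, spec))
    return min(lut, key=lambda p: sse(lut[p]))
```

    The expensive model runs only during table construction; each measurement is then fit by cheap table comparisons, which is the point of the LUT approach.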

  6. Mars Radiation Surface Model

    NASA Astrophysics Data System (ADS)

    Alzate, N.; Grande, M.; Matthiae, D.

    2017-09-01

    Planetary Space Weather Services (PSWS) within the Europlanet H2020 Research Infrastructure have been developed following protocols and standards available in Astrophysical, Solar Physics and Planetary Science Virtual Observatories. Several VO-compliant functionalities have been implemented in various tools. The PSWS extends the concepts of space weather and space situational awareness to other planets in our Solar System and in particular to spacecraft that voyage through it. One of the five toolkits developed as part of these services is a model dedicated to the Mars environment. This model has been developed at Aberystwyth University and the Institut für Luft- und Raumfahrtmedizin (DLR Cologne) using modeled average conditions available from Planetocosmics. It is available for tracing propagation of solar events through the Solar System and modeling the response of the Mars environment. The results have been synthesized into look-up tables parameterized to variable solar wind conditions at Mars.

  7. Advanced linear and nonlinear compensations for 16QAM SC-400G unrepeatered transmission system

    NASA Astrophysics Data System (ADS)

    Zhang, Junwen; Yu, Jianjun; Chien, Hung-Chang

    2018-02-01

    Digital signal processing (DSP) with both linear equalization and nonlinear compensation is studied in this paper for the single-carrier 400G system based on 65-GBaud 16-quadrature amplitude modulation (QAM) signals. The 16-QAM signals are generated and pre-processed with pre-equalization (Pre-EQ) and look-up-table (LUT) based pre-distortion (Pre-DT) at the transmitter (Tx) side. The implementation principles of training-based equalization and pre-distortion are presented with experimental studies. At the receiver (Rx) side, fiber-nonlinearity compensation based on digital backward propagation (DBP) is also utilized to further improve the transmission performance. With joint LUT-based Pre-DT and DBP-based post-compensation to mitigate impairments from opto-electronic components and fiber nonlinearity, we demonstrate the unrepeatered transmission of 1.6 Tb/s based on 4-lane 400G single-carrier PDM-16QAM over 205-km SSMF without a distributed amplifier.
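    LUT-based pre-distortion of the kind described can be sketched as follows. Real implementations index the table by a window of neighbouring symbols to capture pattern-dependent memory; this illustrative version uses the current symbol alone, and all values are toy numbers.

```python
# Toy LUT pre-distortion: during training, average the transmitter error per
# symbol pattern; afterwards, subtract the tabulated error from each symbol
# before transmission so the channel's distortion is pre-compensated.

def train_lut(sent, received):
    sums, counts = {}, {}
    for s, r in zip(sent, received):
        sums[s] = sums.get(s, 0.0) + (r - s)   # accumulated distortion per pattern
        counts[s] = counts.get(s, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

def predistort(symbols, lut):
    return [s - lut.get(s, 0.0) for s in symbols]
```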

  8. Optimization and performance evaluation of a conical mirror based fluorescence molecular tomography imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Zhang, Wei; Zhu, Dianwen; Li, Changqing

    2016-03-01

    We performed numerical simulations and phantom experiments with a conical mirror based fluorescence molecular tomography (FMT) imaging system to optimize its performance. With phantom experiments, we compared three measurement modes in FMT: the whole-surface measurement mode, the transmission mode, and the reflection mode. Our results indicated that the whole-surface measurement mode performed the best. Then, we applied two different neutral density (ND) filters to improve the measurement's dynamic range. The benefits from the ND filters were not as great as predicted. Finally, with numerical simulations, we compared two laser excitation patterns: line and point. With the same number of excitation positions, we found that line laser excitation gave slightly better FMT reconstruction results than point laser excitation. In the future, we will implement Monte Carlo ray tracing simulations to calculate multiply reflected photons, and create a look-up table accordingly for calibration.

  9. [Research on the High Efficiency Data Communication Repeater Based on STM32F103].

    PubMed

    Zhang, Yahui; Li, Zheng; Chen, Guangfei

    2015-11-01

    To improve the radio frequency (RF) transmission distance of the wireless terminals of the medical internet of things (IoT) and to realize real-time, efficient data communication, an intelligent relay system based on the STM32F103 single-chip microcomputer (SCM) is proposed. The system used an nRF905 chip to collect patients' medical and health information in the 433 MHz band, used the SCM to control a serial-port Wi-Fi module to relay the information from the 433 MHz band to the 2.4 GHz Wi-Fi band, and used a ready-list table look-up algorithm to improve the efficiency of data communications. The design realizes real-time and efficient data communication. The relay, which is easy to use and of high practical value, can extend the distance and modes of data transmission and achieve real-time transmission of data.

  10. Designing Image Operators for MRI-PET Image Fusion of the Brain

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.

    2006-09-01

    Our goal is to obtain images combining in a useful and precise way the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging or MRI) and functional information (Positron Emission Tomography or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities, and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach for image fusion taking advantage mainly from the HSL (Hue, Saturation and Luminosity) color space, in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
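    The HSL-style fusion strategy can be sketched per pixel: let the MRI (anatomy) intensity drive luminosity and the PET (function) value drive hue, so structure and activity stay visually separable. The hue range and fixed saturation below are illustrative choices, not the authors' operators; Python's standard colorsys module uses the HLS ordering of the same color space.

```python
import colorsys

# Illustrative HSL fusion: anatomy -> luminosity, function -> hue.
# pet = 0 maps to blue ("cold"), pet = 1 maps to red ("hot"); saturation fixed.

def fuse_pixel(mri, pet):
    """mri, pet in [0, 1] -> (r, g, b), each in [0, 1]."""
    hue = (1.0 - pet) * 2.0 / 3.0
    return colorsys.hls_to_rgb(hue, mri, 1.0)
```

    Because luminosity comes only from the MRI channel, dark anatomical regions stay dark regardless of PET activity, unlike simple alpha blending of the two modalities.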

  11. Synergetic use of Aerosol Robotic Network (AERONET) and Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Kaufman, Y.

    2004-01-01

    I shall describe several distinct modes in which AERONET data are used in conjunction with MODIS data to evaluate the global aerosol system and its impact on climate. These include: 1) evaluation of the aerosol diurnal cycle, not available from MODIS, and the relationship between the aerosol properties derived from MODIS and the daily averages of these properties; 2) climatology of the aerosol size distribution and single scattering albedo, which is used to formulate the assumptions in the MODIS look-up tables used in the inversion of MODIS data; 3) measurement of the aerosol effect on irradiation of the surface, used in conjunction with the MODIS evaluation of the aerosol effect at the TOA; and 4) assessment of the aerosol baseline on top of which the satellite data are used to find the amount of dust or anthropogenic aerosol.

  12. Five Years Later: A Look at the EIA Investment.

    ERIC Educational Resources Information Center

    South Carolina State Dept. of Education, Columbia. Div. of Public Accountability.

    A 5-year review of the impact of South Carolina's comprehensive reform legislation, the Education Improvement Act of 1984 (EIA), is presented. Throughout the report, comparisons of EIA program productivity in 1989 with pre-EIA performance are displayed in short program summaries, 33 graphs, and 14 tables. The EIA targeted seven major areas for…

  13. Understanding University Reform in Japan through the Prism of the Social Sciences

    ERIC Educational Resources Information Center

    Goodman, Roger

    2008-01-01

    This article looks at current university reforms in Japan through two slightly different social science prisms: how social science methodologies and theories can help us understand those reforms better and how social science teaching in universities will be affected by the current reform processes. (Contains 3 tables and 7 notes.)

  14. Condensed Proceedings of the Ad Hoc Committee on Environmental Behavior

    ERIC Educational Resources Information Center

    Cancro, Robert

    1972-01-01

    Fourteen leading behavioral scientists explore the relationship between environment and health with a focus on the following question: "As we look at health care as people receive it in their communities and the realities of America today, what can we do to improve it?" Philosophical, scientific issues discussed in round table fashion. (LK)

  15. Monsters that Eat People--Oh My! Selecting Literature to Ease Children's Fears

    ERIC Educational Resources Information Center

    Mercurio, Mia Lynn; McNamee, Abigail

    2008-01-01

    What should families and teachers look for when they choose picture books to help young children overcome their fears of imaginary monsters, dark places, thunderstorms, and dogs? This article provides criteria for assessing picture books and suggests ways to read them that support children's development. (Contains 4 tables.)

  16. A Comparative Analysis of Juvenile Book Review Media.

    ERIC Educational Resources Information Center

    Witucke, A. Virginia

    This study of book reviews takes an objective look at the major sources that review children's books. Periodicals examined are Booklist, Bulletin of the Center for Children's Books, Horn Book, New York Times Book Review, and School Library Journal. Presented in a series of eight tables, the report examines reviews of 30 titles published between…

  17. Systematic design methodology for robust genetic transistors based on I/O specifications via promoter-RBS libraries.

    PubMed

    Lee, Yi-Ying; Hsu, Chih-Yuan; Lin, Ling-Jiun; Chang, Chih-Chun; Cheng, Hsiao-Chun; Yeh, Tsung-Hsien; Hu, Rei-Hsing; Lin, Che; Xie, Zhen; Chen, Bor-Sen

    2013-10-27

    Synthetic genetic transistors are vital for signal amplification and switching in genetic circuits. However, it is still problematic to efficiently select the adequate promoters, Ribosome Binding Sites (RBSs) and inducer concentrations to construct a genetic transistor with the desired linear amplification or switching in the Input/Output (I/O) characteristics for practical applications. Three kinds of promoter-RBS libraries, i.e., a constitutive promoter-RBS library, a repressor-regulated promoter-RBS library and an activator-regulated promoter-RBS library, are constructed for systematic genetic circuit design using the identified kinetic strengths of their promoter-RBS components. According to the dynamic model of genetic transistors, a design methodology for genetic transistors via a Genetic Algorithm (GA)-based searching algorithm is developed to search for a set of promoter-RBS components and adequate concentrations of inducers to achieve the prescribed I/O characteristics of a genetic transistor. Furthermore, according to design specifications for different types of genetic transistors, a look-up table is built for genetic transistor design, from which we could easily select an adequate set of promoter-RBS components and adequate concentrations of external inducers for a specific genetic transistor. This systematic design method will reduce the time spent using trial-and-error methods in the experimental procedure for a genetic transistor with a desired I/O characteristic. We demonstrate the applicability of our design methodology to genetic transistors that have desirable linear amplification or switching by employing promoter-RBS library searching.
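
    The look-up-table selection step described in this abstract can be illustrated with a toy sketch: tabulate a predicted gain for every promoter-RBS-inducer combination under an assumed Hill-type steady-state model, then pick the entry nearest the desired I/O specification. The model, all kinetic strengths, and the names below are invented for illustration; they are not the paper's actual model or data.

```python
# Toy sketch of look-up-table-based design selection (illustrative only;
# the Hill model and all parameter values are assumptions, not the paper's).
from itertools import product

def predicted_gain(promoter_strength, rbs_strength, inducer):
    """Toy steady-state output per unit input (Hill activation, n = 2, K = 1)."""
    activation = inducer**2 / (1.0 + inducer**2)
    return promoter_strength * rbs_strength * activation

def build_lookup_table(promoters, rbss, inducers):
    """Map every (promoter, RBS, inducer) combination to a predicted gain."""
    return {
        (p, r, i): predicted_gain(ps, rs, i)
        for (p, ps), (r, rs), i in product(promoters.items(), rbss.items(), inducers)
    }

def select_design(table, desired_gain):
    """Pick the combination whose predicted gain is closest to the I/O spec."""
    return min(table, key=lambda k: abs(table[k] - desired_gain))

promoters = {"P1": 2.0, "P2": 5.0}   # illustrative kinetic strengths
rbss = {"R1": 1.0, "R2": 3.0}
inducers = [0.5, 1.0, 2.0]           # illustrative concentrations
table = build_lookup_table(promoters, rbss, inducers)
```

    A GA-based search, as the paper uses, would replace the exhaustive tabulation when the library is too large to enumerate.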

  18. Systematic design methodology for robust genetic transistors based on I/O specifications via promoter-RBS libraries

    PubMed Central

    2013-01-01

    Background Synthetic genetic transistors are vital for signal amplification and switching in genetic circuits. However, it is still problematic to efficiently select the adequate promoters, Ribosome Binding Sites (RBSs) and inducer concentrations to construct a genetic transistor with the desired linear amplification or switching in the Input/Output (I/O) characteristics for practical applications. Results Three kinds of promoter-RBS libraries, i.e., a constitutive promoter-RBS library, a repressor-regulated promoter-RBS library and an activator-regulated promoter-RBS library, are constructed for systematic genetic circuit design using the identified kinetic strengths of their promoter-RBS components. According to the dynamic model of genetic transistors, a design methodology for genetic transistors via a Genetic Algorithm (GA)-based searching algorithm is developed to search for a set of promoter-RBS components and adequate concentrations of inducers to achieve the prescribed I/O characteristics of a genetic transistor. Furthermore, according to design specifications for different types of genetic transistors, a look-up table is built for genetic transistor design, from which we could easily select an adequate set of promoter-RBS components and adequate concentrations of external inducers for a specific genetic transistor. Conclusion This systematic design method will reduce the time spent using trial-and-error methods in the experimental procedure for a genetic transistor with a desired I/O characteristic. We demonstrate the applicability of our design methodology to genetic transistors that have desirable linear amplification or switching by employing promoter-RBS library searching. PMID:24160305

  19. 20 CFR Appendix C to Part 718 - Blood-Gas Tables

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Blood-Gas Tables C Appendix C to Part 718... PNEUMOCONIOSIS Pt. 718, App. C Appendix C to Part 718—Blood-Gas Tables The following tables set forth the values... tables are met: (1) For arterial blood-gas studies performed at test sites up to 2,999 feet above sea...

  20. 20 CFR Appendix C to Part 718 - Blood-Gas Tables

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Blood-Gas Tables C Appendix C to Part 718... DUE TO PNEUMOCONIOSIS Pt. 718, App. C Appendix C to Part 718—Blood-Gas Tables The following tables set... of the following tables are met: (1) For arterial blood-gas studies performed at test sites up to 2...

  1. 20 CFR Appendix C to Part 718 - Blood-Gas Tables

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 4 2013-04-01 2013-04-01 false Blood-Gas Tables C Appendix C to Part 718... DUE TO PNEUMOCONIOSIS Pt. 718, App. C Appendix C to Part 718—Blood-Gas Tables The following tables set... of the following tables are met: (1) For arterial blood-gas studies performed at test sites up to 2...

  2. Validation of the Social Security Administration Life Tables (2004-2014) in Localized Prostate Cancer Patients within the Surveillance, Epidemiology, and End Results database.

    PubMed

    Preisser, Felix; Bandini, Marco; Mazzone, Elio; Nazzani, Sebastiano; Marchioni, Michele; Tian, Zhe; Saad, Fred; Pompe, Raisa S; Shariat, Shahrokh F; Heinzer, Hans; Montorsi, Francesco; Huland, Hartwig; Graefen, Markus; Tilki, Derya; Karakiewicz, Pierre I

    2018-05-22

    Accurate life expectancy estimation is crucial in clinical decision-making, including management and treatment of clinically localized prostate cancer (PCa). We hypothesized that survival estimates derived from Social Security Administration (SSA) life tables closely follow the observed survival of PCa patients. To test this relationship, we examined 10-yr overall survival rates in patients with clinically localized PCa and compared them with survival estimates derived from the SSA life tables. Within the Surveillance, Epidemiology, and End Results database (2004), we identified patients aged >50-<90yr. Follow-up was at least 10 yr for patients who did not die of disease or other causes. The Monte Carlo method was used to define individual survival in years, according to the SSA life tables (2004-2014). Subsequently, SSA life tables' predicted survival was compared with observed survival rates in Kaplan-Meier analyses. Subgroup analyses were stratified according to treatment type and D'Amico risk classification. Overall, 39,191 patients with localized PCa were identified. At 10-yr follow-up, the SSA life tables' predicted survival was 69.5% versus 73.1% according to the observed rate (p<0.0001). The largest differences between estimated versus observed survival rates were recorded for D'Amico low-risk PCa (8.0%), brachytherapy (9.1%), and radical prostatectomy (8.6%) patients. Conversely, the smallest differences were recorded for external beam radiotherapy (1.7%) and unknown treatment type (1.6%) patients. Overall, SSA life tables' predicted life expectancy closely approximates observed overall survival rates. However, SSA life tables' predicted rates underestimate by as much as 9.1% the survival of brachytherapy patients, as well as of D'Amico low-risk and radical prostatectomy patients. In these patient categories, an adjustment for the degree of underestimation might be required when counseling is provided in clinical practice.
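    The Monte Carlo step this record describes, drawing individual survival times from annual life-table death probabilities and estimating 10-yr overall survival, can be sketched minimally as follows. The q_x values are an invented flat mortality rate, not actual SSA life-table figures, and the function names are illustrative.

```python
# Hedged sketch: Monte Carlo survival draws from an annual-mortality table.
# The q_x values below are made-up illustrations, not actual SSA figures.
import random

def simulate_years_survived(age, qx, horizon=10, rng=random):
    """Simulate one individual's survival over `horizon` years."""
    for year in range(horizon):
        q = qx.get(age + year, 1.0)   # annual probability of death at this age
        if rng.random() < q:
            return year               # died during this year
    return horizon                    # survived the full horizon

def ten_year_survival(age, qx, n=100_000, seed=42):
    """Fraction of n simulated individuals surviving the full 10 years."""
    rng = random.Random(seed)
    alive = sum(simulate_years_survived(age, qx, 10, rng) == 10 for _ in range(n))
    return alive / n

# Illustrative flat 3% annual mortality: expected 10-yr survival ~ 0.97**10
qx = {a: 0.03 for a in range(50, 120)}
```

    The study's comparison then sets such simulated estimates against Kaplan-Meier survival observed in the SEER cohort.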

  3. Monthly analysis of PM ratio characteristics and its relation to AOD.

    PubMed

    Sorek-Hamer, Meytar; Broday, David M; Chatfield, Robert; Esswein, Robert; Stafoggia, Massimo; Lepeule, Johanna; Lyapustin, Alexei; Kloog, Itai

    2017-01-01

    Airborne particulate matter (PM) is derived from diverse sources, natural and anthropogenic. Climate change processes and remote sensing measurements are affected by the PM properties, which are often lumped into homogeneous size fractions that show spatiotemporal variation. Since different sources are attributed to different geographic locations and show specific spatial and temporal PM patterns, we explored the spatiotemporal characteristics of the PM2.5/PM10 ratio in different areas. Furthermore, we examined the statistical relationships between AERONET aerosol optical depth (AOD) products, satellite-based AOD, and the PM ratio, as well as the specific PM size fractions. PM data from the northeastern United States, from San Joaquin Valley, CA, and from Italy, Israel, and France were analyzed, as well as the spatial and temporal co-measured AOD products obtained from the MultiAngle Implementation of Atmospheric Correction (MAIAC) algorithm. Our results suggest that when both the AERONET AOD and the AERONET fine-mode AOD are available, the AERONET AOD ratio can be a fair proxy for the ground PM ratio. Therefore, we recommend incorporating the fine-mode AERONET AOD in the calibration of MAIAC. Along with a relatively large variation in the observed PM ratio (especially in the northeastern United States), this shows the need to revisit MAIAC assumptions on aerosol microphysical properties, and perhaps their seasonal variability, which are used to generate the look-up tables and conduct aerosol retrievals. Our results call for further scrutiny of satellite-borne AOD, in particular its errors, limitations, and relation to the vertical aerosol profile and the particle size, shape, and composition distribution. This work is one step of the required analyses to gain better understanding of what the satellite-based AOD represents.

  4. Remote sensing of atmospheric aerosols with the SPEX spectropolarimeter

    NASA Astrophysics Data System (ADS)

    van Harten, G.; Rietjens, J.; Smit, M.; Snik, F.; Keller, C. U.; di Noia, A.; Hasekamp, O.; Vonk, J.; Volten, H.

    2013-12-01

    Characterizing atmospheric aerosols is key to understanding their influence on climate through their direct and indirect radiative forcing. This requires long-term global coverage, at high spatial (~km) and temporal (~days) resolution, which can only be provided by satellite remote sensing. Aerosol load and properties such as particle size, shape and chemical composition can be derived from multi-wavelength radiance and polarization measurements of sunlight that is scattered by the Earth's atmosphere at different angles. The required polarimetric accuracy of ~10^(-3) is very challenging, particularly since the instrument is located on a rapidly moving platform. Our Spectropolarimeter for Planetary EXploration (SPEX) is based on a novel, snapshot spectral modulator, with the intrinsic ability to measure polarization at high accuracy. It exhibits minimal instrumental polarization and is completely solid-state and passive. An athermal set of birefringent crystals in front of an analyzer encodes the incoming linear polarization into a sinusoidal modulation in the intensity spectrum. Moreover, a dual-beam implementation yields redundancy that allows for a mutual correction in both the spectrally and spatially modulated data to increase the measurement accuracy. A partially polarized calibration stimulus has been developed, consisting of a carefully depolarized source followed by tilted glass plates to induce polarization in a controlled way. Preliminary calibration measurements show that the accuracy of SPEX is well below 10^(-3), with a sensitivity limit of 2*10^(-4). We demonstrate the potential of the SPEX concept by presenting retrievals of aerosol properties based on clear-sky measurements using a prototype satellite instrument and a dedicated ground-based SPEX. The retrieval algorithm, originally designed for POLDER data, performs iterative fitting of aerosol properties and surface albedo, where the initial guess is provided by a look-up table.
The retrieved aerosol properties, including aerosol optical thickness, single scattering albedo, size distribution and complex refractive index, will be compared with the on-site AERONET sun-photometer, lidar, particle counter and sizer, and PM10 and PM2.5 monitoring instruments. Retrievals of the aerosol layer height based on polarization measurements in the O2A absorption band will be compared with lidar profiles. Furthermore, the possibility of enhancing the retrieval accuracy by replacing the look-up table with a neural network based initial guess will be discussed, using retrievals from simulated ground-based data.

  5. Catching up with Harvard: Results from Regression Analysis of World Universities League Tables

    ERIC Educational Resources Information Center

    Li, Mei; Shankar, Sriram; Tang, Kam Ki

    2011-01-01

    This paper uses regression analysis to test if the universities performing less well according to Shanghai Jiao Tong University's world universities league tables are able to catch up with the top performers, and to identify national and institutional factors that could affect this catching up process. We have constructed a dataset of 461…

  6. Mountain Wave Analysis Using Fourier Methods

    DTIC Science & Technology

    2007-10-01

    model for altitudes up to 18 km for the same location using the Hilo , Hawaii 1200 UTC rawinsonde for the background velocity and temperature profile... Hawaii terrain and atmosphere 46 for 12 Dec 2002 vii Tables 1...20 3. Three-Layer Model Specifications for Hawaii 12 December 2002 06 UTC 22 4. Three-Layer Model

  7. Operating experience with existing light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barton, M.Q.

    It is instructive to consider what an explosive growth there has been in the development of light sources using synchrotron radiation. This is well illustrated by the list of facilities given in Table I. In many cases, synchrotron light facilities have been obtained by tacking on parasitic beam lines to rings that were built for high energy physics. Of the twenty-three facilities in this table, however, eleven were built explicitly for synchrotron radiation. Another seven have by now been converted for use as dedicated facilities, leaving only five that share time with high energy physics. These five parasitically operated facilities are still among our best sources of hard x-rays, however, and their importance to the fields of science where these x-rays are needed must be emphasized. While the number of facilities in this table is impressive, it is even more impressive to add up the total number of user beam lines. Most of these rings are absolutely surrounded by beam lines, and finding real estate on the experimental floor of one of these facilities for adding a new experiment looks about as practical as adding a farm in the middle of Manhattan. Nonetheless, the managers of these rings seem to have an attitude of ''always room for one more'' and new experimental beam lines do appear. This situation is necessary because the demand for beam time has exploded at an even faster rate than the development of the facilities. The field is not only growing, it can be expected to continue to grow for some time. Some of the explicit plans for future development will be discussed in the companion paper by Lee Teng.

  8. The look of LaTeX

    NASA Astrophysics Data System (ADS)

    This has always been the major objection to its use by those not driven by the need to typeset mathematics, since the “what-you-see-is-what-you-get” (WYSIWYG) packages offered by Microsoft Word and WordPerfect are easy to learn and use. Recently, however, commercial software companies have begun to market almost-WYSIWYG programs that create LaTeX files. Some commercial packages that create LaTeX files are listed in Table 1. EXP and SWP have some of the “look and feel” of the software that is popular in offices, and PCTeX32 allows quick and convenient previews of the translated LaTeX files.

  9. 11th Annual CMMI Technology Conference and User Group

    DTIC Science & Technology

    2011-11-17

    Examples of triggers may include: – Cost performance – Schedule performance – Results of management reviews – Occurrence of the risk • as a...Analysis (PHA) – Method 3 – Through bottom-up analysis of design data (e.g., flow diagrams, Failure Mode Effects and Criticality Analysis (FMECA...of formal reviews and the setting up of delta or follow-up reviews can be used to give the organization more places to look at the products as they

  10. Pedagogies That Explore Food Practices: Resetting the Table for Improved Eco-Justice

    ERIC Educational Resources Information Center

    Harris, Carol E.; Barter, Barbara G.

    2015-01-01

    As health threats appear with increasing regularity in our food systems and other food crises loom worldwide, we look to rural areas to provide local and nutritious foods. Educationally, we seek approaches to food studies that engage students and their communities and, ultimately, lead to positive action. Yet food studies receive only generic…

  11. Temporary Personal Radioactivity

    ERIC Educational Resources Information Center

    Myers, Fred

    2012-01-01

    As part of a bone scan procedure to look for the spread of prostate cancer, I was injected with radioactive technetium. In an effort to occupy/distract my mind, I used a Geiger counter to determine if the radioactive count obeyed the inverse-square law as a sensor was moved away from my bladder by incremental distances. (Contains 1 table and 2…

  12. A Comparison of Keyboarding Software for the Elementary Grades. A Quarterly Report.

    ERIC Educational Resources Information Center

    Nolf, Kathleen; Weaver, Dave

    This paper provides generalizations and ideas on what to look for when previewing software products designed for teaching or improving the keyboarding skills of elementary school students, a list of nine products that the MicroSIFT (Microcomputer Software and Information for Teachers) staff recommends for preview, and a table of features comparing…

  13. Montessori Infant and Toddler Programs: How Our Approach Meshes with Other Models

    ERIC Educational Resources Information Center

    Miller, Darla Ferris

    2011-01-01

    Today, Montessori infant & toddler programs around the country usually have a similar look and feel--low floor beds, floor space for movement, low shelves, natural materials, tiny wooden chairs and tables for eating, and not a highchair or swing in sight. But Montessori toddler programs seem to fall into two paradigms--one model seeming more…

  14. Oil and Natural Gas Industry Sources Covered by the 2012 New Source Performance Standards (NSPS) for Volatile Organic Compounds (VOCs) and the 2016 NSPS for Methane and VOCs, by Site

    EPA Pesticide Factsheets

    This is a 2016 table that looks at oil and natural gas industry site types and lists the applicable rules for the 2012 and 2016 new source performance standards (NSPS) and Volatile Organic Compounds (VOC) rules.

  15. Look-Listen Opinion Poll, 1983-1984. Project of the National Telemedia Council, Inc.

    ERIC Educational Resources Information Center

    National Telemedia Council, Inc., Madison, WI.

    Designed to indicate the reasons behind viewer program preferences, this report presents results of a survey which asked 1,576 television viewers (monitors) to evaluate programs they liked, programs they did not like, and/or new programs. Tables summarize the findings for why programs were chosen, their technical quality, content realism, overall quality, and…

  16. The Construction of Interculturality: A Study of Initial Encounters between Japanese and American Students.

    ERIC Educational Resources Information Center

    Mori, Junko

    2003-01-01

    Investigates how Japanese and American students initiate topical talk as they get acquainted with each other during their initial encounter at a student-organized conversation table. Looks at the observable and reportable ways in which the participants demonstrate the relevance, or the irrelevance, of interculturality in the development of the…

  17. 24. Photographic copy of undated photo; Photographer unknown; Original in ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    24. Photographic copy of undated photo; Photographer unknown; Original in Rath collection at Iowa State University Libraries, Ames, Iowa; Filed under: Rath Packing Company, Printed Photographs, Symbol M, Box 2; REMOVING HIDES ON THE MOVING SKINNING TABLE; LOOKING NORTH - Rath Packing Company, Beef Killing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA

  18. Post-tensioning and splicing of precast/prestressed bridge beams to extend spans

    NASA Astrophysics Data System (ADS)

    Collett, Brandon S.; Saliba, Joseph E.

    2002-06-01

    This paper explores the status and techniques of post-tensioning and splicing precast concrete I-beams in bridge applications. It looks at the current practices that have been used in the United States and comments on the advantages of these techniques. Representative projects are presented to demonstrate the application and success of specific methods used. To demonstrate the benefits of using post-tensioning and splicing to extend spans, multiple analyses of simple-span post-tensioned I-beams were performed, varying such characteristics as beam spacing, beam sections, beam depth and concrete strength. Tables were then developed to compare the maximum span length of a prestressed I-beam versus a one-segment or a spliced three-segment post-tensioned I-beam. The lateral stability of the beam during fabrication, transportation and erection is also examined and discussed. These tables are intended to aid designers and owners in preliminary project studies to determine if post-tensioning can be beneficial to their situation. AASHTO Standard Specifications(2) will be used as basic guidelines and specifications. In many cases, post-tensioning was found to extend the maximum span length of a typical 72-inch precast I-beam more than 40 feet over conventional prestress.

  19. Development of a generalized algorithm of satellite remote sensing using multi-wavelength and multi-pixel information (MWP method) for aerosol properties by satellite-borne imager

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Nakajima, T.; Morimoto, S.; Takenaka, H.

    2014-12-01

    We have developed a new satellite remote sensing algorithm to retrieve aerosol optical characteristics using multi-wavelength and multi-pixel information from satellite imagers (MWP method). In this algorithm, the inversion method is a combination of the maximum a posteriori (MAP) method (Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, with the progress of computing techniques, this method has been combined with direct radiative transfer calculation, numerically solved at each iteration step of the non-linear inverse problem, without using a look-up table (LUT), under several constraints. The retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine- and coarse-mode particles, the volume soot fraction in fine-mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area. Then we successively apply the retrieval method to all the sub-domains in the target area. We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. The results showed that the AOTs of fine and coarse modes, the soot fraction and the ground surface albedo are successfully retrieved within the expected accuracy. We discuss the accuracy of the algorithm for various land surface types. We then applied this algorithm to GOSAT/CAI imager data and compared retrieved and surface-observed AOTs at the CAI pixel closest to an AERONET (Aerosol Robotic Network) or SKYNET site in each region. Comparison at several sites in urban areas indicated that AOTs retrieved by our method agree with surface-observed AOTs within ±0.066. Our future work is to extend the algorithm to the analysis of ADEOS-II/GLI and GCOM-C/SGLI data.
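    The Phillips-Twomey smoothing constraint cited in this abstract can be illustrated in its classic linear form: minimize ||y - Kx||^2 + gamma ||Lx||^2 with L a second-difference operator, which has the closed-form solution x = (K^T K + gamma L^T L)^{-1} K^T y. The forward model below is a toy Gaussian smoothing kernel, not the MWP radiative-transfer operator, and all parameter values are illustrative.

```python
# Hedged sketch of a linear Phillips-Twomey (second-difference regularised)
# inversion. The forward model K is a toy smoothing kernel, not the MWP code.
import numpy as np

def second_difference(n):
    """Second-difference operator L, penalising rough state vectors."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def phillips_twomey(K, y, gamma):
    """Regularised least-squares solution of y = K x."""
    n = K.shape[1]
    L = second_difference(n)
    return np.linalg.solve(K.T @ K + gamma * (L.T @ L), K.T @ y)

# Toy example: recover a smooth profile from noisy smoothed observations.
rng = np.random.default_rng(0)
n = 50
x_true = np.sin(np.linspace(0, np.pi, n))
K = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)          # row-normalised Gaussian blur
y = K @ x_true + 0.005 * rng.standard_normal(n)
x_hat = phillips_twomey(K, y, gamma=1e-2)
```

    In the MWP algorithm this smoothness term enters the iterative non-linear MAP inversion rather than a one-shot linear solve, but the role of the constraint is the same.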

  20. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    NASA Astrophysics Data System (ADS)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a wealthy laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data that has image source density peaks of over 170,000 per field of view (500,000 deg^-2). Our analysis demonstrates that horizontal table partitions with declination widths of one degree control the query run times. Usage of an index strategy in which the partitions are densely sorted according to source declination yields another improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema the limiting cadence achieved by the pipeline when processing IPHAS data is 25 s.
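    The partitioning strategy this abstract reports (horizontal one-degree declination partitions, densely sorted by declination, searched before an exact angular-distance test) can be sketched in a few lines. The class and function names are illustrative; the actual pipeline implements this inside a column-oriented database, not in Python.

```python
# Hedged sketch: declination-partitioned positional look-up, mirroring the
# one-degree horizontal partitions described in the abstract. Illustrative API.
import bisect
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

class DecPartitionedCatalogue:
    def __init__(self, sources):
        # sources: list of (ra_deg, dec_deg) tuples
        self.partitions = {}   # integer declination band -> sorted [(dec, ra)]
        for ra, dec in sources:
            band = int(math.floor(dec))          # one-degree horizontal partition
            self.partitions.setdefault(band, []).append((dec, ra))
        for band in self.partitions:
            self.partitions[band].sort()         # densely sorted by declination

    def match(self, ra, dec, radius_deg):
        """Return catalogued sources within radius_deg of (ra, dec)."""
        hits = []
        for band in range(int(math.floor(dec - radius_deg)),
                          int(math.floor(dec + radius_deg)) + 1):
            part = self.partitions.get(band, [])
            # binary search restricts the scan to the declination strip
            i = bisect.bisect_left(part, (dec - radius_deg, -math.inf))
            j = bisect.bisect_right(part, (dec + radius_deg, math.inf))
            hits += [(r, d) for d, r in part[i:j]
                     if angular_sep(ra, dec, r, d) <= radius_deg]
        return hits
```

    The pruning logic is the point: only the one or two declination strips overlapping the search radius are scanned, so look-up cost stays roughly independent of total catalogue size.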

  1. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  2. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  3. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  4. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  5. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods 3 Table 3 of Subpart..., Subpt. AAAAAAA, Table 3 Table 3 of Subpart AAAAAAA of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in...

  6. A novel calibration method for non-orthogonal shaft laser theodolite measurement system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Bin, E-mail: wubin@tju.edu.cn, E-mail: xueting@tju.edu.cn; Yang, Fengting; Ding, Wen

    2016-03-15

    Non-orthogonal shaft laser theodolite (N-theodolite) is a new kind of large-scale metrological instrument made up of two rotary tables and one collimated laser. An N-theodolite has three axes. Following the naming conventions of the traditional theodolite, the rotary axes of the two rotary tables are called the horizontal axis and the vertical axis, respectively, and the collimated laser beam is named the sight axis. The difference between the N-theodolite and the traditional theodolite is obvious, since the former carries no orthogonality or intersection accuracy requirements. The calibration method for the traditional theodolite is therefore no longer suitable for the N-theodolite, while the calibration method currently applied is quite complicated. Thus this paper introduces a novel calibration method for the non-orthogonal shaft laser theodolite measurement system that simplifies the procedure and improves the calibration accuracy. The novel method proposes a simple two-step process: calibration of intrinsic parameters and calibration of extrinsic parameters. Experiments have shown its efficiency and accuracy.

  7. 40 CFR Table C-3 to Subpart C of... - Test Specifications for Pb in TSP and Pb in PM10 Methods

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Pb in PM10 Methods C Table C-3 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL..., Subpt. C, Table C-3 Table C-3 to Subpart C of Part 53—Test Specifications for Pb in TSP and Pb in PM10 Methods Table C-3 to Subpart C of Part 53—Test Specifications for Pb in TSP and Pb in PM10 Methods...

  8. 40 CFR Table C-3 to Subpart C of... - Test Specifications for Pb in TSP and Pb in PM10 Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Pb in PM10 Methods C Table C-3 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL..., Subpt. C, Table C-3 Table C-3 to Subpart C of Part 53—Test Specifications for Pb in TSP and Pb in PM10 Methods Table C-3 to Subpart C of Part 53—Test Specifications for Pb in TSP and Pb in PM10 Methods...

  9. Opening Up the Pandora's Box of Sustainability League Tables of Universities: A Kafkaesque Perspective

    ERIC Educational Resources Information Center

    Jones, David R.

    2017-01-01

    The aim of this paper is to explore the institutional impact of sustainability league tables on current university agendas. It focuses on a narrative critique of one such league table, the UK's "Green League Table", compiled and reported by the student campaigning NGO, "People & Planet" annually between 2007 and 2013.…

  10. Summary of Round Table Session and Appendixes

    Treesearch

    1992-01-01

    The round table session was designed for interaction between the presenters and other round table participants. Twelve round tables, each capable of holding 10 participants, were set up in one room. Presenters for the sessions were encouraged to lead discussions on one of many topics in these areas: a research idea that the presenter was just formulating; an unpublished...

  11. 78 FR 79061 - Noise Exposure Map Notice; Key West International Airport, Key West, FL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-27

    ..., Flight Track Utilization by Aircraft Category for East Flow Operations; Table 4-3, Flight Track Utilization by Aircraft Category for West Flow Operations; Table 4-4, 2013 Air Carrier Flight Operations; Table 4-5, 2013 Commuter and Air Taxi Flight Operations; Table 4-6, 2013 Average Daily Engine Run-Up...

  12. An Analysis of Class II Supplies Requisitions in the Korean Army’s Organizational Supply

    DTIC Science & Technology

    2009-03-26

    The thesis reviews five methods for qualitative research: case study, ethnography, phenomenological study, grounded theory, and content analysis. Table 9 provides a brief overview of the five methods.

  13. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, offering efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
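    The LUT staging and override behavior described above could be sketched roughly as follows. This is a minimal illustration, not the actual AeroADL scripts: the function name `stage_luts`, the directory layout `library/<name>/<version>/<name>.lut`, and the version tags are all hypothetical.

```python
import shutil
from pathlib import Path

def stage_luts(lut_library, staging_dir, required, overrides=None):
    """Stage one LUT file per entry in `required` (name -> version tag),
    copying from `lut_library/<name>/<version>/<name>.lut` unless an
    experimental override path is given for that name in `overrides`."""
    lut_library, staging_dir = Path(lut_library), Path(staging_dir)
    staging_dir.mkdir(parents=True, exist_ok=True)
    overrides = overrides or {}
    staged = {}
    for name, version in required.items():
        # An experimental LUT, if supplied, wins over the library copy.
        src = overrides.get(name, lut_library / name / version / f"{name}.lut")
        dst = staging_dir / f"{name}.lut"
        shutil.copy(src, dst)
        staged[name] = dst
    return staged
```

The point of the sketch is the precedence rule: the analyst names the versions an algorithm expects, and an override replaces exactly one table without disturbing the managed library.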

  14. Carbon cycling responses to a water table drawdown and decadal vegetation changes in a bog

    NASA Astrophysics Data System (ADS)

    Talbot, J.; Roulet, N. T.

    2009-12-01

    The quantity of carbon stored in peat depends on the imbalance between production and decomposition of organic matter. This imbalance is mainly controlled by the wetness of the peatland, usually described by the water table depth. However, long-term processes resulting from hydrological changes, such as vegetation succession, also play a major role in the biogeochemistry of peatlands. Previous studies have looked at the impact of a water table lowering on carbon fluxes in different types of peatlands. However, most of these studies were conducted within a time frame that did not allow the examination of vegetation changes due to the water table lowering. We conducted a study along a drainage gradient resulting from the digging of a drainage ditch 85 years ago in a portion of the Mer Bleue bog, located near Ottawa, Canada. According to water table reconstructions based on testate amoeba, the drainage dropped the water table by approximately 18 cm. On the upslope side of the ditch, the water table partly recovered and the vegetation changed only marginally. However, on the downslope side of the ditch, the water table stayed persistently lower and trees established (Larix and Betula). The importance of Sphagnum decreased with a lower water table, and evergreen shrubs were replaced by deciduous shrubs. The water table drop and subsequent vegetation changes had combined and individual effects on the carbon functioning of the peatland. Methane fluxes decreased because of the water table lowering, but were not affected by vegetation changes, whereas respiration and net ecosystem productivity were affected by both. The carbon storage of the system increased because of an increase in plant biomass, but the long-term carbon storage as peat decreased. The inclusion of the feedback effect that vegetation has on the carbon functioning of a peatland when a disturbance occurs is crucial to simulate the long-term carbon balance of this ecosystem.

  15. [Formaldehyde-reducing efficiency of a newly developed dissection-table-connected local ventilation system in the gross anatomy laboratory room].

    PubMed

    Shinoda, Koh; Oba, Jun

    2010-03-01

    In compliance with health and safety management guidelines against harmful formaldehyde (FA) levels in the gross anatomy laboratory, we newly developed a dissection-table-connected local ventilation system in 2006. The system was composed of (1) a simple plenum-chambered dissection table with low-cost filters, (2) a transparent vinyl flexible duct for easy mounting and removal, which connects the table and the exhaust pipe laid above the ceiling, and (3) an intake creating a downward flow of air, which was installed on the ceiling just above each table. The dissection table was also designed as a separate-component system, of which the upper plate and marginal suction inlets can be taken apart for cleaning after dissection, and equipped with opening/closing side-windows for picking up materials dropped during dissection and a container underneath the table to receive exudate from the cadaver through a waste-fluid pipe. The local ventilation system dramatically reduced FA levels to 0.01-0.03 ppm in the gross anatomy laboratory room, resulting in no discomforting FA smell or irritating sensation, while preserving the students' view of the room and line of flow as well as solving the problems of high maintenance cost, sanitation issues inside the table, and working inconvenience during dissection practice. By switching ventilation methods or power modes, the current local ventilation system was demonstrated to be more than ten times as efficient in FA reduction as the whole-room ventilation system, and the results suggested that an exhaust volume of 11 m3/min/table should decrease FA levels in both A- and B-measurements to less than 0.1 ppm in a 1000 m3 space containing thirty-one 3.5%-FA-fixed cadavers.

  16. PROJECT MERCURY SUMMARY CONFERENCE - NASA - HOUSTON, TX

    NASA Image and Video Library

    1963-10-01

    In October 1963, the Project Mercury Summary Conference was held in the Houston, TX, Coliseum. This series of 44 photos is documentation of that conference. A view of the Houston, TX, Coliseum, and parking area in front with a Mercury Redstone Rocket set up in the parking lot for display (S63-16451). A view of an Air Force Atlas Rocket, a Mercury Redstone Rocket, and a Mercury Spacecraft on a test booster on display in the front area of the Coliseum (S63-16452). A view of an Air Force Atlas Rocket and a Mercury Redstone Rocket set up for display with the Houston City Hall in the background (S63-16453). This view shows the Atlas Rocket, Mercury Redstone, and Mercury Test Rocket with the Houston, TX, Coliseum in the background (S63-16454). A balcony view, from the audience right side, of the attendees looking at the stage (S63-16455). A view of the NASA Space Science Demonstration with equipment set up on a table at center stage, and a Space Science Specialist briefing the group as he pours Liquid Oxygen into a beaker (S63-16456). View of the audience from the balcony on the audience right, showing the speaker's lectern on stage to the audience left (S63-16457). A view of attendees in the lobby. Bennet James, MSC Public Affairs Office, is seen to the left of center (S63-16458). Another view of the attendees in the lobby (S63-16459). In this view, Astronaut Neil Armstrong is seen writing as others look on (S63-16460). In this view of the attendees, Astronauts Buzz Aldrin and Walt Cunningham are seen in the center of the shot. The October Calendar of Events is visible in the background (S63-16461). Dr. Charles Berry is seen in this view to the right of center, seated in the audience (S63-16462). View of "Special Registration" and the five ladies working there (S63-16463). A view from behind the special registration table, of the attendees being registered (S63-16464). A view of a conference table with a panel seated. (R-L): Dr. Robert R. Gilruth, Hugh L. Dryden, Walter C.
Williams, and an unidentified man (S63-16465). A closeup of the panel at the table with Dr. Gilruth on the left (S63-16466). About the same shot as number S63-16462, Dr. Berry is seen in this shot as well (S63-16467). In this view the audio setup is seen. In the audience, (L-R): C. C. Kraft, Vernon E. (Buddy) Powell, Public Affairs Office (PAO); and, in the foreground mixing the audio is Art Tantillo; and, at the recorder is Doyle Hodges; both of the audio people are contractors that work for PAO at MSC (S63-16468). In this view Maxime Faget is seen speaking at the lectern (S63-16469). Unidentified person at the lectern (S63-16470). In this view the motion picture cameras and personnel are shown documenting the conference (S63-16471). A motion picture cameraman in the balcony is shown filming the audience during a break (S63-16472). Family members enjoy an exhibit (S63-16473). A young person gets a boost to look in a Gemini Capsule on display (S63-16474). A young person looks at the Gemini Capsule on display (S63-16475). Dr. Robert R. Gilruth is seen at the conference table (S63-16476). Walt Williams is seen in this view at the conference table (S63-16477). Unidentified man sitting next to Walt Williams (S63-16478). (L-R): Seated at the conference table, Dr. Robert Gilruth, Hugh L. Dryden, and Walt Williams (S63-16479). Group in lobby, faces visible, (L-R): Walt Williams, unidentified person, Dr. Robert Gilruth, Congressman (S63-16480). Man in uniform at the lectern (S63-16481). Astronaut Leroy Gordon Cooper at the lectern (S63-16482). Astronaut Cooper at the lectern with a picture on the screen with the title, "Astronaut Names for Spacecraft" (S63-16483). Dr. Gilruth at the lectern (S63-16484). Walt Williams at the lectern (S63-16485). Unidentified man at the lectern (S63-16486). John H. Boynton addresses the Summary Conference (S63-16487). (L-R): Astronaut Leroy Gordon Cooper, Mrs. Cooper, Senator Cris Cole, and Mrs. Cole (S63-16488).
In this view in the lobby, Senator and Mrs. Cris Cole, with Astronaut Gordon Cooper standing near the heatshield, and Mrs. Cooper; next, on the right is a press photographer (S63-16489). (L-R): Astronaut L. Gordon Cooper and Mrs. Cooper, unidentified man, and Senator Walter Richter (S63-16490). (L-R): Eugene Horton, partially obscured, briefs a group on the Mercury Spacecraft, an unidentified person, Harold Ogden, a female senator, Senator Chris Cole, Mrs. Cole, an unidentified female, Senator Walter Richter, Jim Bower, and an unidentified female (S63-16491). In this view, Mrs. Jim Bates is seen in the center, and Senator Walter Richter to the right (S63-16492). The next three (3) shots are 4X5 CN (S63-16493 - S63-16495). In this view a NASA Space Science Demonstration is seen (S63-16493). In this view a shot of the conference table is seen, and, (L-R): Dr. Robert R. Gilruth, Hugh L. Dryden, Mr. Walter Williams, and an unidentified man (S63-16494 - S63-16495). HOUSTON, TX

  17. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    NASA Astrophysics Data System (ADS)

    Trombetti, Tomaso

    This thesis presents an Experimental/Analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g's, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by a 407 MTS controller which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDTs (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer functions between target and actual table accelerations were identified using experimental results and spectral estimation techniques.
The power spectral density of the system input and the cross power spectral density of the table input and output were estimated using Bartlett's spectral estimation method. The experimentally-estimated table acceleration transfer functions obtained for different working conditions are correlated with their analytical counterparts. As a result of this comprehensive correlation study, a thorough understanding of the shaking table dynamics and its sensitivities to control and payload parameters is obtained. Moreover, the correlation study leads to a calibrated analytical model of the shaking table with high predictive ability. It is concluded that, in its present conditions, the Rice shaking table is able to reproduce, with a high degree of accuracy, model earthquake acceleration time histories in the frequency bandwidth from 0 to 75 Hz. Furthermore, the exhaustive analysis performed indicates that the table transfer function is not significantly affected by the presence of a large (in terms of weight) payload with a fundamental frequency up to 20 Hz. Payloads having a higher fundamental frequency do significantly affect the shaking table performance and require a modification of the table control gain setting that can be easily obtained using the predictive analytical model of the shaking table. The complete description of a structural dynamic experiment performed using the Rice shaking table facility is also reported herein. The object of this experimentation was twofold: (1) to verify the testing capability of the shaking table and, (2) to experimentally validate a simplified theory developed by the author, which predicts the maximum rotational response developed by seismic isolated building structures characterized by non-coincident centers of mass and rigidity, when subjected to strong earthquake ground motions.
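    The transfer-function identification step described above, Bartlett's method of averaging periodograms over non-overlapping segments and then taking the ratio of cross- to auto-spectrum, can be sketched as follows. The function name and segment count are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def bartlett_tf(x, y, fs, nseg=8):
    """Estimate the transfer function H(f) = Pxy(f) / Pxx(f) by Bartlett's
    method: split input x and output y into non-overlapping segments,
    average the (cross-)periodograms, then divide."""
    n = len(x) // nseg                       # samples per segment
    Pxx = np.zeros(n // 2 + 1)               # averaged auto-spectrum of x
    Pxy = np.zeros(n // 2 + 1, dtype=complex)  # averaged cross-spectrum
    for k in range(nseg):
        X = np.fft.rfft(x[k * n:(k + 1) * n])
        Y = np.fft.rfft(y[k * n:(k + 1) * n])
        Pxx += (X * np.conj(X)).real
        Pxy += np.conj(X) * Y
    f = np.fft.rfftfreq(n, d=1.0 / fs)       # frequency axis in Hz
    return f, Pxy / Pxx
```

With a known relationship such as y = 2x, the estimate recovers |H(f)| = 2 at every frequency bin; for real measurements the segment averaging trades frequency resolution for variance reduction.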

  18. Testing a satellite automatic nutation control system. [on synchronous meteorological satellite

    NASA Technical Reports Server (NTRS)

    Hrasiar, J. A.

    1974-01-01

    Testing of a particular nutation control system for the synchronous meteorological satellite (SMS) is described. The test method and principles are applicable to nutation angle control for other satellites with similar requirements. During its ascent to synchronous orbit, a spacecraft like the SMS spins about its minimum-moment-of-inertia axis. An uncontrolled spacecraft in this state is unstable because torques due to fuel motion increase the nutation angle. However, the SMS is equipped with an automatic nutation control (ANC) system which will keep the nutation angle close to zero. Because correct operation of this system is critical to mission success, it was tested on an air-bearing table. The ANC system was mounted on the three-axis air-bearing table which was scaled to the SMS and equipped with appropriate sensors and thrusters. The table was spun up in an altitude chamber and nutation induced so that table motion simulated spacecraft motion. The ANC system was used to reduce the nutation angle. This dynamic test of the ANC system met all its objectives and provided confidence that the ANC system will control the SMS nutation angle.

  19. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increases, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than when simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
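    The access pattern behind advantage (3), compressing pieces independently so one piece can be read without decompressing the whole file, can be shown with a toy sketch. This is not the FITS tiled-table convention itself, just the underlying idea, using zlib (gzip-style) compression on a dictionary of named columns.

```python
import zlib

def compress_table(columns):
    """Compress each column's raw bytes independently; any single column
    can later be decompressed without touching the others."""
    return {name: zlib.compress(data) for name, data in columns.items()}

def read_column(compressed, name):
    """Random access: decompress only the requested column."""
    return zlib.decompress(compressed[name])
```

Because each column tends to hold homogeneous values, per-column streams often compress well, and a reader pays decompression cost only for the columns it actually needs.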

  20. "We Are like Dictionaries, Miss, You Can Look Things up in Us": Evaluating Child-Centred Research Methods

    ERIC Educational Resources Information Center

    Elton-Chalcraft, Sally

    2011-01-01

    Research concerning children is often presented with only a brief comment on the research methods adopted. This paper takes a "behind the scenes" view and I discuss my adoption of a non-hierarchical "least adult role" adapted from Mandell's work in 1991 to undertake qualitative research in the sensitive area of children's…

  1. Evaluation of a local exhaust system used in the manufacture of small parts made of reinforced plastic.

    PubMed

    Lazure, L P

    2000-09-01

    Fiber-reinforced plastics are used to manufacture a large variety of products, particularly for the transportation sector. Hand lay-up molding and projection molding are the main methods of manufacture. The users of these processes are exposed to appreciable emissions of styrene; in Quebec, more than 3000 workers work in this industry. A statistical analysis of styrene concentrations measured over a five-year period by the Institut de recherche en santé et en sécurité du travail (IRSST, Occupational Health and Safety Research Institute) reveals that for all of the main manufacturing sectors involved, between 40 percent and 78 percent of the results exceed the exposure standard of 50 ppm. This study evaluated the effectiveness of a ventilated table in controlling worker exposure to styrene and acetone in a shop that manufactures fiber-reinforced plastics parts. The evaluated local extraction system consists of a ventilated table with a surface area of 1.2 m x 1.2 m. During molding, the styrene emissions are exhausted through the ventilated table as well as through the slots in a lateral hood. Replacement air, introduced vertically through a supply air shower located above the worker, limits the diffusion of contaminants toward the worker's breathing zone. The reduction in worker exposure to styrene and acetone during hand lay-up molding was measured in the breathing zone for two sizes of molds. The results show that exhaust ventilation reduced the styrene concentrations by 91 percent and that the introduction of replacement air increased the efficiency of the ventilated table to 96 percent. The evaluation performed indicates that the ventilated table adequately controls worker exposure to styrene and acetone during the molding of small components.

  2. Generalized Minimum-Time Follow-up Approaches Applied to Electro-Optical Sensor Tasking

    NASA Astrophysics Data System (ADS)

    Murphy, T. S.; Holzinger, M. J.

    This work proposes a methodology for tasking of sensors to search an area of state space for a particular object, group of objects, or class of objects. This work creates a general unified mathematical framework for analyzing reacquisition, search, scheduling, and custody operations. In particular, this work looks at searching for unknown space object(s) with prior knowledge in the form of a set, which can be defined via an uncorrelated track, region of state space, or a variety of other methods. The follow-up tasking can occur from a variable location and time, which often requires searching a large region of the sky. This work analyzes the area of a search region over time to inform a time optimal search method. Simulation work looks at analyzing search regions relative to a particular sensor, and testing a tasking algorithm to search through the region. The tasking algorithm is also validated on a reacquisition problem with a telescope system at Georgia Tech.

  3. Shhh! No Opinions in the Library: "IndyKids" and Kids' Right to an Independent Press

    ERIC Educational Resources Information Center

    Vender, Amanda

    2011-01-01

    "Nintendo Power," "Sports Illustrated for Kids," and a biography of President Obama were on prominent display as the author entered the branch library in Forest Hills, Queens. The librarian looked skeptical when the author asked the librarian if she could leave copies of "IndyKids" newspapers on the free literature table. The branch manager…

  4. Employees in Postsecondary Institutions, Fall 2005 and Salaries of Full-Time Instructional Faculty, 2005-06. First Look. NCES 2007-150

    ERIC Educational Resources Information Center

    Knapp, Laura G.; Kelly-Reid, Janice E.; Whitmore, Roy W.; Miller, Elise

    2007-01-01

    This report presents information from the Winter 2005-06 Integrated Postsecondary Education Data System (IPEDS) web-based data collection. Tabulations represent data requested from all postsecondary institutions participating in Title IV federal student financial aid programs. The tables in this publication include data on the number of staff…

  5. GEMINI-TITAN (GT)-9 TEST - ASTRONAUT BEAN, ALAN - KSC

    NASA Image and Video Library

    1973-08-14

    S73-31973 (August 1973) --- Scientist-astronaut Owen K. Garriott, Skylab 3 science pilot, looks at a map of Earth at the food table in the ward room of the Orbital Workshop (OWS). In this photographic reproduction taken from a television transmission made by a color TV camera aboard the Skylab space station cluster in Earth orbit. Photo credit: NASA

  6. Look-Listen Opinion Poll, 1984-1985. Project of the National Telemedia Council, Inc.

    ERIC Educational Resources Information Center

    Giles, Doris, Ed.; And Others

    Designed to indicate the reasons behind viewer program preferences, this 32nd report of an annual opinion poll presents the results of a survey which asked 914 participants to evaluate 3,584 television programs they liked, did not like, and/or to evaluate new programs. Tables summarize the reasons why programs were selected by viewers, their…

  7. Maintaining Multimedia Data in a Geospatial Database

    DTIC Science & Technology

    2012-09-01

    A different look at PostgreSQL and MySQL as spatial databases was offered. Given their results, as each database produced result sets from zero to 100,000, it was evident which database excelled given multiple conditions.

  8. The Community at the Bargaining Table. A Report on the Community's Role in Collective Bargaining in the Schools.

    ERIC Educational Resources Information Center

    Institute for Responsive Education, Boston, MA.

    The Institute for Responsive Education and its study team are looking at ways to widen the scope of collective bargaining to provide room for communities to participate in policy formulation in their schools. The traditional management-labor approach was designed to resolve differences about wages, fringe benefits, and the rules, rights, and…

  9. 25. Photographic copy of undated photo; Photographer unknown; Original in ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. Photographic copy of undated photo; Photographer unknown; Original in Rath collection at Iowa State University Libraries, Ames, Iowa; Filed under: Rath Packing Company, Printed Photographs, Symbol M, Box 2; REMOVING HIDES ON THE SKINNING TABLE; CARCASSES IN HALF-HOIST POSITION; LOOKING SOUTH - Rath Packing Company, Beef Killing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA

  10. Closeup view looking into the nozzle of the Space Shuttle ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Close-up view looking into the nozzle of the Space Shuttle Main Engine number 2061 looking at the cooling tubes along the nozzle wall and up towards the Main Combustion Chamber and Injector Plate - Space Transportation System, Space Shuttle Main Engine, Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX

  11. MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1994-01-01

    MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416 page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic , Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. 
It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code and 418K in total size. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.
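    The univariate table look-up and interpolation covered by the MATH77 manual can be illustrated with a short sketch of the generic technique: binary search for the bracketing interval, then linear interpolation. This is a Python approximation, not the MATH77 FORTRAN 77 interface, and it omits the error estimates MATH77 provides.

```python
import bisect

def table_interp(xs, ys, x):
    """Linear-interpolating table look-up over ascending abscissas xs.
    Locate the bracketing interval by binary search, interpolate between
    the two table entries, and clamp values outside the table range."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1       # index of left bracket point
    t = (x - xs[i]) / (xs[i + 1] - xs[i])    # fractional position in interval
    return ys[i] + t * (ys[i + 1] - ys[i])
```

The binary search keeps each look-up at O(log n) even for long tables, which is the same reason index-based search beats sequential search in the table look-up literature above.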

  12. Crystal identification for a dual-layer-offset LYSO based PET system via Lu-176 background radiation and mean shift algorithm

    NASA Astrophysics Data System (ADS)

    Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang

    2018-01-01

    Modern positron emission tomography (PET) detectors are made from pixelated scintillation crystal arrays read out by Anger logic. The interaction position of the gamma-ray should be assigned to a crystal using a crystal position map or look-up table. Crystal identification is a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO based animal PET system via Lu-176 background radiation and the mean shift algorithm. Single photon event data of the Lu-176 background radiation are acquired in list-mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using time information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer using the mean shift algorithm. The response of the inner layer is then removed from the SPFM by subtracting the CFM, and the peaks of the outer layer are identified, also using the mean shift algorithm. The automatically identified peaks are manually inspected by a graphical user interface program. Finally, a crystal position map is generated using a distance criterion based on these peaks. The proposed method is verified on the animal PET system with 48 detector blocks on a laptop with an Intel i7-5500U processor. The total runtime for whole system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for the dual-layer-offset lutetium based PET system to perform crystal identification without the use of external radiation sources.
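
    The mean shift step used above to find crystal peaks can be illustrated with a minimal flat-kernel sketch. This is not the authors' implementation: it operates on synthetic 2D event coordinates rather than the SPFM/CFM images, and the bandwidth and cluster positions are invented for the example:

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50):
    """Flat-kernel mean shift: repeatedly move each mode estimate to the
    centroid of the points within `bandwidth`; converged modes approximate
    local density peaks (here standing in for crystal peak positions)."""
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for k, m in enumerate(modes):
            d = np.linalg.norm(points - m, axis=1)
            modes[k] = points[d < bandwidth].mean(axis=0)
    return modes

# two artificial "crystal peaks" with scattered events around them
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal((2, 2), 0.1, (50, 2)),
                 rng.normal((5, 5), 0.1, (50, 2))])
modes = mean_shift(pts, bandwidth=1.0)
print(np.unique(np.round(modes), axis=0))   # ~[[2. 2.] [5. 5.]]
```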

  13. Reducing the computational footprint for real-time BCPNN learning

    PubMed Central

    Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian

    2015-01-01

    The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved first by rewriting the model, which halves the number of basic arithmetic operations per update, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
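
    The combination of an analytic (event-driven) decay update with a look-up table for the exponential can be sketched as follows. The time constant, table size, and rounding-based indexing are assumptions for the illustration, not the paper's parameters:

```python
import math

TAU = 20.0      # decay time constant (ms) -- illustrative value
DT_MAX = 100.0  # table covers elapsed times up to DT_MAX ms
N = 1024
# precomputed decay factors exp(-dt/TAU) on a uniform grid of dt
DECAY_LUT = [math.exp(-(i * DT_MAX / (N - 1)) / TAU) for i in range(N)]

def decay(value, dt):
    """Event-driven update: instead of stepping an Euler integrator many
    times, apply the analytic solution value * exp(-dt/TAU) once, with
    the exponential read from the precomputed look-up table."""
    if dt >= DT_MAX:
        return 0.0  # trace has effectively decayed away
    return value * DECAY_LUT[round(dt * (N - 1) / DT_MAX)]

# a trace decaying between two spikes 20 ms apart (one time constant)
print(round(decay(1.0, 20.0), 3))   # 0.367 (≈ exp(-1))
```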

  14. Reducing the computational footprint for real-time BCPNN learning.

    PubMed

    Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian

    2015-01-01

    The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved first by rewriting the model, which halves the number of basic arithmetic operations per update, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.

  15. Applications of the BIOPHYS Algorithm for Physically-Based Retrieval of Biophysical, Structural and Forest Disturbance Information

    NASA Technical Reports Server (NTRS)

    Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.

    2011-01-01

    Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Applications output included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondences with validation field data were obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provide essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.
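
    The look-up-table inversion underlying approaches like BIOPHYS can be illustrated with a minimal nearest-spectrum match. The toy LUT, band values, and the single "crown cover" parameter below are invented for the sketch; the real algorithm inverts full canopy reflectance models (GeoSail, GOMS) over many parameters:

```python
import numpy as np

def lut_invert(observed, lut_spectra, lut_params):
    """Look-up-table inversion: find the simulated spectrum closest
    (in RMSE) to the observed reflectances and return its parameters."""
    rmse = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
    best = int(np.argmin(rmse))
    return lut_params[best], float(rmse[best])

# toy LUT: reflectance in 3 bands simulated for 3 canopy-density values
lut_spectra = np.array([[0.05, 0.30, 0.25],
                        [0.04, 0.40, 0.35],
                        [0.03, 0.50, 0.45]])
lut_params = [0.2, 0.5, 0.8]   # hypothetical crown cover fraction
obs = np.array([0.045, 0.41, 0.34])
best, err = lut_invert(obs, lut_spectra, lut_params)
print(best)   # 0.5
```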

  16. The influence of sea fog inhomogeneity on its microphysical characteristics retrieval

    NASA Astrophysics Data System (ADS)

    Hao, Zengzhou; Pan, Delu; Gong, Fang; He, Xianqiang

    2008-10-01

    A study on the effect of sea fog inhomogeneity on the retrieval of its microphysical parameters is presented. Under the conditions that the average liquid water content varies linearly in the vertical and the power-spectrum spectral index is set to 2.0, we generate a 3D sea fog field by constraining the total liquid water content to be greater than 0.04 g/m3, based on an iterative method for generating a scaling log-normal random field with a given energy spectrum and a fragmentized cloud algorithm. Based on this fog field, the radiances at the wavelengths of 0.67 and 1.64 μm are simulated with the 3D radiative transfer model SHDOM, and the fog optical thickness and effective particle radius are then simultaneously retrieved using the generic look-up-table AVHRR cloud algorithm. By comparing the retrieved fog optical thickness and effective particle radius, the influence of sea fog inhomogeneity on the retrieval of its properties is discussed. A systematic bias appears when inferring sea fog physical properties from satellite measurements under the assumption of a plane-parallel homogeneous atmosphere, and the bias depends on the solar zenith angle. The optical thickness is overestimated while the effective particle radius is underestimated at solar zenith angles of 30° and 60°. These results show that retrieving the true characteristics of sea fog requires the development of a new algorithm based on 3D radiative transfer.

  17. Retrieval of Winter Wheat Leaf Area Index from Chinese GF-1 Satellite Data Using the PROSAIL Model.

    PubMed

    Li, He; Liu, Gaohuan; Liu, Qingsheng; Chen, Zhongxin; Huang, Chong

    2018-04-06

    Leaf area index (LAI) is one of the key biophysical parameters of crop structure. The accurate quantitative estimation of crop LAI is essential to verify crop growth and health. The PROSAIL radiative transfer model (RTM) is one of the most established methods for estimating crop LAI. In this study, a look-up table (LUT) based on the PROSAIL RTM was first used to estimate winter wheat LAI from GF-1 data, which accounted for some available prior knowledge relating to the distribution of winter wheat characteristics. Next, the effects of 15 LAI-LUT strategies with reflectance bands and 10 LAI-LUT strategies with vegetation indexes on the accuracy of the winter wheat LAI retrieval at different phenological stages were evaluated against in situ LAI measurements. The results showed that the LUT strategy of LAI-GNDVI was optimal and had the highest accuracy, with a root mean squared error (RMSE) of 0.34 and a coefficient of determination (R²) of 0.61 during the elongation stages, and the LUT strategy of LAI-Green was optimal, with an RMSE of 0.74 and an R² of 0.20, during the grain-filling stages. The results demonstrated that the PROSAIL RTM has great potential for winter wheat LAI inversion with GF-1 satellite data and that performance can be improved by selecting the appropriate LUT inversion strategies in different growth periods.
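
    The best-performing strategy above pairs LAI with GNDVI, the green normalized difference vegetation index. A minimal sketch, with hypothetical LUT entries and band reflectances (the actual LUT is generated by PROSAIL simulations):

```python
def gndvi(nir, green):
    """Green NDVI: (NIR - green) / (NIR + green)."""
    return (nir - green) / (nir + green)

# hypothetical PROSAIL-style LUT entries: (simulated GNDVI, LAI)
LUT = [(0.45, 0.5), (0.60, 1.5), (0.70, 3.0), (0.75, 5.0)]

def lai_from_gndvi(observed_gndvi):
    """LAI-GNDVI LUT strategy: pick the LAI whose simulated GNDVI
    is nearest the observed value."""
    return min(LUT, key=lambda p: abs(p[0] - observed_gndvi))[1]

obs = gndvi(0.45, 0.09)          # observed NIR and green reflectance
print(round(obs, 2), lai_from_gndvi(obs))   # 0.67 3.0
```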

  18. Influence of Dynamic Hydraulic Conditions on Nitrogen Cycling in Column Experiments

    NASA Astrophysics Data System (ADS)

    Gassen, Niklas; von Netzer, Frederick; Ryabenko, Evgenia; Lüders, Tillmann; Stumpp, Christine

    2015-04-01

    In order to improve management strategies for agricultural nitrogen input, it is of major importance to further understand which factors influence turnover processes within the nitrogen cycle. Many studies have focused on the fate of nitrate in hydrological systems, but to date little is known about the influence of dynamic hydraulic conditions on the fate of nitrate at the soil-groundwater interface. We conducted column experiments with natural sediment and compared a system with a fluctuating water table to systems with different water contents under static conditions, with a constant input of ammonia into the system. We used hydrochemical methods to trace nitrogen species, 15N isotope methods to obtain information about the dominant turnover processes, and microbial community analysis to connect the hydrochemical and microbial information. We found that added ammonia was removed more effectively under dynamic hydraulic conditions than under static conditions. Furthermore, denitrification is the dominant process under saturated, static conditions, while nitrification is more important under unsaturated, static conditions. We conclude that a fluctuating water table creates hot spots where both nitrification and denitrification processes can occur spatially close to each other and therefore remove nitrogen more effectively from the system. Furthermore, the fluctuating water table enhances the exchange of solutes and triggers hot moments of solute turnover. We therefore conclude that a fluctuating water table can amplify hot spots and trigger hot moments of nitrogen cycling.

  19. UCAC3: ASTROMETRIC REDUCTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, Charlie T.; Zacharias, Norbert; Wycoff, Gary L., E-mail: finch@usno.navy.mi

    2010-06-15

    Presented here are the details of the astrometric reductions from the x, y data to mean right ascension (R.A.), declination (decl.) coordinates of the third U.S. Naval Observatory CCD Astrograph Catalog (UCAC3). For these new reductions we used over 216,000 CCD exposures. The Two-Micron All-Sky Survey (2MASS) data are used extensively to probe for coordinate and coma-like systematic errors in UCAC data mainly caused by the poor charge transfer efficiency of the 4K CCD. Errors up to about 200 mas have been corrected using complex look-up tables handling multiple dependences derived from the residuals. Similarly, field distortions and sub-pixel phase errors have also been evaluated using the residuals with respect to 2MASS. The overall magnitude equation is derived from UCAC calibration field observations alone, independent of external catalogs. Systematic errors of positions at the UCAC observing epoch as presented in UCAC3 are better corrected than in the previous catalogs for most stars. The Tycho-2 catalog is used to obtain final positions on the International Celestial Reference Frame. Residuals of the Tycho-2 reference stars show a small magnitude equation (depending on declination zone) that might be inherent in the Tycho-2 catalog.

  20. Strategies for Proximal Femoral Nailing of Unstable Intertrochanteric Fractures: Lateral Decubitus Position or Traction Table.

    PubMed

    Sonmez, Mesut Mehmet; Camur, Savas; Erturer, Erden; Ugurlar, Meric; Kara, Adnan; Ozturk, Irfan

    2017-03-01

    The aim of this prospective randomized study was to compare the traction table and lateral decubitus position techniques in the management of unstable intertrochanteric fractures. Eighty-two patients with unstable intertrochanteric fractures between 2011 and 2013 were included in this study. All patients were treated surgically with the Proximal Femoral Nail Antirotation implant (DePuy Synthes). Patients were randomized to undergo the procedure in the lateral decubitus position (42 patients) or with the use of a traction table (40 patients). Patients whose procedure was not performed entirely with a semi-invasive method or who required the use of additional fixation materials, such as cables, were excluded from the study. The groups were compared on the basis of the setup time, surgical time, fluoroscopic exposure time, tip-to-apex distance, collodiaphyseal angle, and modified Baumgaertner criteria for radiologic reduction. The setup time, surgical time, and fluoroscopic exposure time were lower, and the differences were statistically significant, in the lateral decubitus group compared with the traction table group. The collodiaphyseal angles were significantly different between the groups in favor of the lateral decubitus method. The tip-to-apex distance and the classification of reduction according to the modified Baumgaertner criteria did not demonstrate a statistically significant difference between the groups. The lateral decubitus position is used for most open procedures of the hip. We found that this position facilitates exposure for the surgical treatment of unstable intertrochanteric fractures and has advantages over the traction table in terms of setup time, surgical time, and fluoroscopic exposure time.

  1. Distinctive Steady-State Heart Rate and Blood Pressure Responses to Passive Robotic Leg Exercise and Functional Electrical Stimulation during Head-Up Tilt.

    PubMed

    Sarabadani Tafreshi, Amirehsan; Riener, Robert; Klamroth-Marganska, Verena

    2016-01-01

    Introduction: Tilt tables enable early mobilization of patients by providing verticalization. But there is a high risk of orthostatic hypotension provoked by verticalization, especially after neurological diseases such as spinal cord injury. Robot-assisted tilt tables might be an alternative as they add passive robotic leg exercise (PE) that can be enhanced with functional electrical stimulation (FES) to the verticalization, thus reducing the risk of orthostatic hypotension. We hypothesized that the influence of PE on the cardiovascular system during verticalization (i.e., head-up tilt) depends on the verticalization angle, and FES strengthens the PE influence. To test our hypotheses, we investigated the PE effects on the cardiovascular parameters heart rate (HR), and systolic and diastolic blood pressures (sBP, dBP) at different angles of verticalization in a healthy population. Methods: Ten healthy subjects on a robot-assisted tilt table underwent four different study protocols while HR, sBP, and dBP were measured: (1) head-up tilt to 60° and 71° without PE; (2) PE at 20°, 40°, and 60° of head-up tilt; (3) PE while constant FES intensity was applied to the leg muscles, at 20°, 40°, and 60° of head-up tilt; (4) PE with variation of the applied FES intensity at 0°, 20°, 40°, and 60° of head-up tilt. Linear mixed models were used to model changes in HR, sBP, and dBP responses. Results: The models show that: (1) head-up tilt alone resulted in statistically significant increases in HR and dBP, but no change in sBP. (2) PE during head-up tilt resulted in statistically significant changes in HR, sBP, and dBP, but not at each angle and not always in the same direction (i.e., increase or decrease of cardiovascular parameters). Neither adding (3) FES at constant intensity to PE nor (4) variation of FES intensity during PE had any statistically significant effects on the cardiovascular parameters. 
Conclusion: The effect of PE on the cardiovascular system during head-up tilt is strongly dependent on the verticalization angle. Therefore, we conclude that orthostatic hypotension cannot be prevented by PE alone, but that the preventive effect depends on the verticalization angle of the robot-assisted tilt table. FES (independent of intensity) is not an important contributing factor to the PE effect.

  2. Distinctive Steady-State Heart Rate and Blood Pressure Responses to Passive Robotic Leg Exercise and Functional Electrical Stimulation during Head-Up Tilt

    PubMed Central

    Sarabadani Tafreshi, Amirehsan; Riener, Robert; Klamroth-Marganska, Verena

    2016-01-01

    Introduction: Tilt tables enable early mobilization of patients by providing verticalization. But there is a high risk of orthostatic hypotension provoked by verticalization, especially after neurological diseases such as spinal cord injury. Robot-assisted tilt tables might be an alternative as they add passive robotic leg exercise (PE) that can be enhanced with functional electrical stimulation (FES) to the verticalization, thus reducing the risk of orthostatic hypotension. We hypothesized that the influence of PE on the cardiovascular system during verticalization (i.e., head-up tilt) depends on the verticalization angle, and FES strengthens the PE influence. To test our hypotheses, we investigated the PE effects on the cardiovascular parameters heart rate (HR), and systolic and diastolic blood pressures (sBP, dBP) at different angles of verticalization in a healthy population. Methods: Ten healthy subjects on a robot-assisted tilt table underwent four different study protocols while HR, sBP, and dBP were measured: (1) head-up tilt to 60° and 71° without PE; (2) PE at 20°, 40°, and 60° of head-up tilt; (3) PE while constant FES intensity was applied to the leg muscles, at 20°, 40°, and 60° of head-up tilt; (4) PE with variation of the applied FES intensity at 0°, 20°, 40°, and 60° of head-up tilt. Linear mixed models were used to model changes in HR, sBP, and dBP responses. Results: The models show that: (1) head-up tilt alone resulted in statistically significant increases in HR and dBP, but no change in sBP. (2) PE during head-up tilt resulted in statistically significant changes in HR, sBP, and dBP, but not at each angle and not always in the same direction (i.e., increase or decrease of cardiovascular parameters). Neither adding (3) FES at constant intensity to PE nor (4) variation of FES intensity during PE had any statistically significant effects on the cardiovascular parameters. 
Conclusion: The effect of PE on the cardiovascular system during head-up tilt is strongly dependent on the verticalization angle. Therefore, we conclude that orthostatic hypotension cannot be prevented by PE alone, but that the preventive effect depends on the verticalization angle of the robot-assisted tilt table. FES (independent of intensity) is not an important contributing factor to the PE effect. PMID:28018240

  3. VizieR Online Data Catalog: ISOCAM survey of Serpens/G3-G6 (Djupvik+, 2006)

    NASA Astrophysics Data System (ADS)

    Djupvik, A. A.; Andre, P.; Bontemps, S.; Motte, F.; Olofsson, G.; Gaalfalk, M.; Floren, H.-G.

    2006-08-01

    We present results from an ISOCAM survey in the two broadband filters LW2 (5-8.5um) and LW3 (12-18um) of a 19'x16' field called Serp_NH3 centred on the optical group Serpens/G3-G6. A total of 186 sources were detected in the 6.7um band and/or the 14.3um band to a limiting sensitivity of ~2mJy. These have been cross-correlated with the 2MASS catalogue and are all listed in table1. Deep follow-up photometry in the Ks band obtained with Arnica at the Nordic Optical Telescope (NOT) is listed in table2. Deep L' band photometry of selected sources using SIRCA at the NOT is listed in table3. Continuum emission at 1.3mm and 3.6cm was observed with IRAM and VLA, respectively, and deep imaging in the 2.12um S(1) line of H2 was obtained with NOTCam at the NOT. We find strong evidence for a stellar population of 31 Class II sources (listed in table5), 5 flat-spectrum sources, 5 Class I sources (listed in table4), and two Class 0 sources. Our method does not sample the Class III sources. (3 data files).

  4. Fast generation of computer-generated hologram by graphics processing unit

    NASA Astrophysics Data System (ADS)

    Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    A cylindrical hologram is well known to be viewable over 360 deg. Such a hologram demands high pixel resolution; therefore, a Computer-Generated Cylindrical Hologram (CGCH) requires a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz. It took 480 hours to calculate a high-resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the reconstructed CGCH image, the fringe pattern requires higher spatial frequency and resolution. Therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of a CGCH (912,000 x 108,000 pixels), we employ a Graphics Processing Unit (GPU). It took 4,406 hours to calculate the high-resolution CGCH on a Xeon 3.4 GHz. Since a GPU has many streaming processors and a parallel processing structure, it works as a high-performance parallel processor. In addition, a GPU gives maximum performance on 2-dimensional data and streaming data. Recently, GPUs have also been utilized for general-purpose computation (GPGPU). For example, NVIDIA's GeForce 7 series became a programmable processor with the Cg programming language, and the subsequent GeForce 8 series has CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is stated as 500 GFLOPS. From the experimental results, we achieved calculation 47 times faster than in our previous work, which used a CPU. Therefore, the CGCH can be generated in 95 hours, for a total of 110 hours to calculate and print the CGCH.
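
    The look-up table idea mentioned above can be reduced to a much-simplified sketch: precompute the radially symmetric fringe of a point at a given depth, indexed by integer squared pixel distance, so that per-pixel evaluation becomes a table fetch instead of a cosine call. All constants (wavelength, depth, pitch, table radius) are illustrative assumptions, and the real method handles many depths and accumulates many object points:

```python
import math

WAVELEN = 0.633e-6   # laser wavelength (m) -- illustrative
Z = 0.1              # object-point depth (m) -- illustrative
PITCH = 10e-6        # hologram pixel pitch (m) -- illustrative
R_MAX = 200          # table radius in pixels

# one radially symmetric zone-plate fringe for depth Z:
# phase = pi * r^2 / (lambda * z), indexed by integer r^2 in pixel units
FRINGE_LUT = [math.cos(math.pi * (PITCH * PITCH * r2) / (WAVELEN * Z))
              for r2 in range(2 * R_MAX * R_MAX + 1)]

def fringe(dx, dy):
    """Fringe value for a pixel offset (dx, dy) from the object point,
    fetched from the precomputed table instead of evaluating cos()."""
    return FRINGE_LUT[dx * dx + dy * dy]

print(fringe(0, 0))   # 1.0 at the zone-plate centre
```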

  5. Principles in Remote Sensing of Aerosol from MODIS Over Land and Ocean

    NASA Technical Reports Server (NTRS)

    Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Chu, D. A.

    1999-01-01

    The well-calibrated spectral radiances measured by MODIS will be processed to retrieve daily aerosol properties that include optical thickness and mass loading over land and optical thickness, the mean particle size of the dominant mode and the ratio between aerosol modes over ocean. In addition, after launch, aerosol single scattering albedo will be calculated as an experimental product. The retrieval process over land is based on a dark target method that identifies appropriate targets in the mid-IR channels and uses an empirical relationship found between the mid-IR and the visible channels to estimate surface reflectance in the visible from the mid-IR reflectance measured by satellite. The method employs new aerosol models for industrial, smoke and dust aerosol. The process for retrieving aerosol over the ocean makes use of the wide spectral band from 0.55-2.13 microns and a look-up table constructed from combinations of five accumulation modes and five coarse modes. Both the over-land and over-ocean algorithms have been validated with satellite and airborne radiance measurements. We estimate that MODIS will be able to measure aerosol optical thickness (t) to within 0.05 +/- 0.2t over land and to within 0.05 +/- 0.05t over ocean. Much of the earth's surface is located far from aerosol sources and experiences very low aerosol optical thickness. Will the accuracy expected from MODIS retrievals be sufficient to measure the global aerosol direct and indirect forcing? We are attempting to answer this question using global model results and cloud climatology.

  6. Regional coupling of unsaturated and saturated flow and transport modeling - implementation at an alpine foothill aquifer in Austria

    NASA Astrophysics Data System (ADS)

    Klammler, G.; Rock, G.; Kupfersberger, H.; Fank, J.

    2012-04-01

    For many European countries nitrate leaching from the soil zone into the aquifer, due to surplus application of mineral fertilizer and animal manure by farmers, constitutes the most important threat to groundwater quality. Since this is a diffuse pollution situation, measures to change agricultural production have to be investigated at the aquifer scale. In principle, the problem could be solved by the 3-dimensional equation describing variably saturated groundwater flow and solute transport. However, this is computationally prohibitive due to the temporal and spatial scope of the task, particularly in the framework of running numerous simulations to compromise between conflicting interests (i.e., good groundwater status and high agricultural yield). For the aquifer 'Westliches Leibnitzer Feld' we break down this task into 1d vertical movement of water and nitrate mass in the unsaturated zone and 2d horizontal flow of water and solutes in the saturated compartment. The aquifer is located within the Mur Valley about 20 km south of Graz and consists of early Holocene gravel with varying amounts of sand and some silt. The unsaturated flow and nitrate leaching package SIMWASER/STOTRASIM (Stenitzer, 1988; Feichtinger, 1998) is calibrated to the lysimeter data sets and further applied to so-called hydrotopes, which are unique combinations of soil type and agricultural management. To account for the unknown regional distribution of crops grown and the amount, timing and kind of fertilizers used, a stochastic tool (Klammler et al., 2011) is developed that generates sequences of crop rotations derived from municipal statistical data. To match the observed nitrate concentrations in groundwater with a saturated nitrate transport model it is of utmost importance to apply a realistic input distribution of nitrate mass in terms of spatial and temporal characteristics.
A table is generated by running SIMWASER/STOTRASIM that consists of unsaturated water and nitrate fluxes for each 10 cm interval of every hydrotope vertical profile until the lowest observed groundwater table is reached. The fluctuation range of the phreatic surface is also discretized in 10 cm intervals and used as the outflow boundary condition. By this procedure, the influence of the groundwater table on the water and nitrate mass leaving the unsaturated zone can be taken into account for varying soil horizons. To cover saturated flow in the WLF aquifer a 2-dimensional transient horizontal flow and solute transport model is set up. A sequential coupling between the two models is implemented, i.e. a unidirectional transfer of recharge and nitrate mass outflow from the hydrotopes to the saturated compartment. For this purpose, a one-time assignment between the spatial discretization of the hydrotopes and the finite element mesh has to be set up. The resulting groundwater table computed for a given time step with the input from SIMWASER/STOTRASIM is then used to extract the corresponding water and nitrate mass values from the look-up table to be used for the consecutive time step. This process is repeated until the end of the simulation period. Within this approach there is no direct feedback between the unsaturated and the saturated aquifer compartment, i.e. there is no simultaneous (within the same time step) update of the pressure head relationship at the soil and the phreatic surface (as shown e.g. in Van Walsum and Groenendijk, 2008). For the dominating coarse sand conditions of the WLF aquifer we believe that this simplification is not of further relevance. For higher soil moisture contents (i.e. almost full saturation near the groundwater table) the retention curve returns to specific retention within a short vertical distance. Thus, there might only be mutual impact between soil and phreatic surface conditions for shallow groundwater tables.
However, it should be mentioned here that all other processes in the two compartments (including capillary rise due to clay-rich soils and groundwater withdrawal by plant roots or evaporation losses) are accordingly considered, given the capabilities of the models used. If we impose the computed groundwater table elevation as the outflow condition of the hydrotope for the next time step, we postulate that the associated water volume of the saturated storage change will lead to the same change of the phreatic surface in the hydrotope column. This is only valid if the storage characteristics of the affected unsaturated soil layers can be adequately described by the co-located porosity of the saturated model. Moreover, the current soil moisture content of the respective soil layers is not considered by the implemented new outflow boundary condition. Thus, from the perspective of continuity of mass it might be more correct to transfer the same water volume that led to the saturated change (rise and fall) of the groundwater table to the unsaturated hydrotope column and compute the adjusted outflow boundary position for use in the next time step. Due to the hydrogeological conditions in our application, for almost all hydrotopes we have the same soil type (i.e. coarse sand) in the range of groundwater table fluctuations and thus we expect no further impact of transferring the groundwater table from the saturated computation to the unsaturated domain. Summarizing, for the hydrogeologic conditions of our test site and the scope of the problem to be solved, the sequential coupling between 1d unsaturated vertical and 2d saturated horizontal simulation of water movement and solute transport is regarded as an appropriate conceptual and numerical approach.
Due to the extensive look-up table containing unsaturated water and nitrate fluxes for each hydrotope at a vertical resolution of 10 cm, no further feedback processes between the unsaturated and saturated subsurface compartments need to be considered. Feichtinger, F. (1998). STOTRASIM - Ein Modell zur Simulation der Stickstoffdynamik in der ungesättigten Zone eines Ackerstandortes. Schriftenreihe des Bundesamtes für Wasserwirtschaft, Bd. 7, 14-41. Klammler, G., Rock, G., Fank, J. & Kupfersberger, H. (2011): Generating land use information to derive diffuse water and nitrate transfer as input for groundwater modelling at the aquifer scale, Proc. of MODELCARE 2011 Models - Repository of Knowledge, Leipzig. Stenitzer, E. (1988). SIMWASER - Ein numerisches Modell zur Simulation des Bodenwasserhaushaltes und des Pflanzenertrages eines Standortes. Mitteilung Nr. 31, Bundesanstalt für Kulturtechnik und Bodenwasserhaushalt, A-3252 Petzenkirchen. Van Walsum, P.E.V. and P. Groenendijk (2008). Quasi steady-state simulation of the unsaturated zone in groundwater modeling of lowland regions. Vadose Zone J. 7:769-781, doi:10.2136/vzj2007.0146.
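The coupling step described in this record reduces to a table indexing operation: the saturated model's computed water-table position selects a pre-tabulated 10 cm interval of the hydrotope profile, whose fluxes become the next time step's outflow boundary. A minimal sketch; all names, depths and flux values are illustrative assumptions, not the SIMWASER/STOTRASIM code:

```python
# Sketch of the sequential-coupling look-up step: fluxes are pre-tabulated
# per hydrotope at 10 cm vertical resolution, and the computed water-table
# depth selects the row used as the next time step's outflow boundary.

def make_flux_table(depths_cm, water_fluxes, nitrate_fluxes):
    """Pre-computed table: depth interval (cm below surface) -> fluxes."""
    return {d: (w, n) for d, w, n in zip(depths_cm, water_fluxes, nitrate_fluxes)}

def lookup_fluxes(table, water_table_depth_cm, interval_cm=10):
    """Round the computed water-table depth to the nearest tabulated 10 cm
    interval, clamp to the tabulated fluctuation range, and return the
    corresponding (water, nitrate) outflow for the next time step."""
    key = interval_cm * round(water_table_depth_cm / interval_cm)
    key = min(max(key, min(table)), max(table))   # clamp to tabulated range
    return table[key]

# hypothetical hydrotope: depths 100..200 cm, recharge in mm/d, nitrate in kg/ha/d
depths = list(range(100, 210, 10))
table = make_flux_table(
    depths,
    [1.2, 1.1, 1.0, 0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55],
    [0.30, 0.28, 0.26, 0.25, 0.24, 0.22, 0.21, 0.20, 0.19, 0.18, 0.17])

water, nitrate = lookup_fluxes(table, 143.0)   # saturated model reports 143 cm
```

The clamp mirrors the text's restriction of the boundary condition to the observed fluctuation range of the phreatic surface.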

  7. Continuous table acquisition MRI for radiotherapy treatment planning: Distortion assessment with a new extended 3D volumetric phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Amy, E-mail: aw554@uowmail.edu.au; Metcalfe, Peter; Liney, Gary

    2015-04-15

Purpose: Accurate geometry is required for radiotherapy treatment planning (RTP). When considering the use of magnetic resonance imaging (MRI) for RTP, geometric distortions observed in the acquired images should be considered. While scanner technology and vendor-supplied correction algorithms provide some correction, large distortions are still present in images, even for considerably smaller scan lengths than those typically acquired with CT in conventional RTP. This study investigates MRI acquisition with a moving table compared with static scans for potential geometric benefits for RTP. Methods: A full field of view (FOV) phantom (diameter 500 mm; length 513 mm) was developed for measuring geometric distortions in MR images over volumes pertinent to RTP. The phantom consisted of layers of refined plastic within which vitamin E capsules were inserted. The phantom was scanned on CT to provide the geometric gold standard and on MRI, with differences in capsule location determining the distortion. MRI images were acquired with two techniques. For the first method, standard static table acquisitions were considered. Both 2D and 3D acquisition techniques were investigated. With the second technique, images were acquired with a moving table. The same sequence was acquired with a static table and then with table speeds of 1.1 mm/s and 2 mm/s. All of the MR images acquired were registered to the CT dataset using a deformable B-spline registration, with the resulting deformation fields providing the distortion information for each acquisition. Results: MR images acquired with the moving table enabled imaging of the whole phantom length, while images acquired with a static table were only able to image 50%–70% of the phantom length of 513 mm. Maximum distortion values were reduced across a larger volume when imaging with a moving table. 
Increased table speed resulted in a larger contribution of distortion from gradient nonlinearities in the through-plane direction and in increased blurring of the capsule images, resulting in an apparent capsule volume increase of up to 170% in extreme axial FOV regions. Blurring increased with table speed, and in the central regions of the phantom geometric distortion was smaller for static table acquisitions than for a table speed of 2 mm/s over the same volume. Overall, the best geometric accuracy was achieved with a table speed of 1.1 mm/s. Conclusions: The phantom designed here enables full FOV imaging for distortion assessment for the purposes of RTP. MRI acquisition with a moving table extends the imaging volume in the z direction with reduced distortions, which could be useful particularly if considering MR-only planning. If utilizing MR images to provide additional soft tissue information to the planning CT, standard acquisition sequences over a smaller volume would avoid introducing additional blurring or distortions from the through-plane table movement.

  8. TH-C-BRD-04: Beam Modeling and Validation with Triple and Double Gaussian Dose Kernel for Spot Scanning Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Takayanagi, T; Fujii, Y

    2014-06-15

Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the conformance between measurements and calculation results in absolute dose for the two types of beam kernel. Methods: A dose kernel is one of the important input data required for the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We have adopted double and triple Gaussian models as the lateral distribution in order to account for the large-angle scattering due to nuclear reactions, by fitting the simulated in-water lateral dose profile of a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R. Zhu 2013]. Results: Comparison of the absolute doses calculated by the double Gaussian model with those measured at the center of the SOBP shows the difference increasing up to 3.5% in the high-energy region, because the large-angle scattering due to nuclear reactions is not sufficiently considered at intermediate depths in the double Gaussian model. When employing triple Gaussian dose kernels, the measured absolute dose at the center of the SOBP agrees with calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated beam modeling results for dose distributions employing double and triple Gaussian dose kernels. A treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot scanning technique at the Nagoya Proton Therapy Center.
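The lateral part of such a dose kernel can be written as a weighted sum of Gaussians, with the extra wide component of the triple Gaussian representing the large-angle nuclear-interaction halo. A minimal sketch of the idea; the weights and sigmas below are illustrative placeholders, not the paper's fitted, depth-dependent parameters:

```python
import math

# Lateral dose profile as a weighted sum of radially symmetric Gaussians.
# The third, wide component stands in for the large-angle halo from nuclear
# interactions that the double Gaussian model underestimates.

def gaussian(r, sigma):
    """2D radially symmetric Gaussian, normalized to unit integral."""
    return math.exp(-r * r / (2.0 * sigma * sigma)) / (2.0 * math.pi * sigma * sigma)

def lateral_dose(r, weights, sigmas):
    """Weighted sum of Gaussians; weights should sum to 1."""
    return sum(w * gaussian(r, s) for w, s in zip(weights, sigmas))

# double vs. triple Gaussian at one (hypothetical) depth, r and sigmas in mm
double_g = lateral_dose(15.0, [0.95, 0.05], [4.0, 12.0])
triple_g = lateral_dose(15.0, [0.93, 0.05, 0.02], [4.0, 12.0, 30.0])
# the wide third component raises the predicted dose in the low-dose tail
```

In the paper's scheme the (weight, sigma) sets are fitted at many depths and stored in a per-energy look-up table, then interpolated in depth at planning time.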

  9. Look Up for Healing: Embodiment of the Heal Concept in Looking Upward

    PubMed Central

    Leitan, N. D.; Williams, B.; Murray, G.

    2015-01-01

    Objective Conceptual processing may not be restricted to the mind. The heal concept has been metaphorically associated with an “up” bodily posture. Perceptual Symbol Systems (PSS) theory suggests that this association is underpinned by bodily states which occur during learning and become instantiated as the concept. Thus the aim of this study was to examine whether processing related to the heal concept is promoted by priming the bodily state of looking upwards. Method We used a mixed 2x2 priming paradigm in which 58 participants were asked to evaluate words as either related to the heal concept or not after being primed to trigger the concept of looking up versus down (Direction – within subjects). A possible dose-response effect of priming was investigated via allocating participants to two ‘strengths’ of prime, observing an image of someone whose gaze was upward/downward (low strength) and observing an image of someone whose gaze was upward/downward while physically tilting their head upwards or downwards in accord with the image (high strength) (Strength – between subjects). Results Participants responded to words related to heal faster than words unrelated to heal across both “Strength” conditions. There was no evidence that priming was stronger in the high strength condition. Conclusion The present study found that, consistent with a PSS view of cognition, the heal concept is embodied in looking upward, which has important implications for cognition, general health, health psychology, health promotion and therapy. PMID:26161967

  10. Bathymetry and capacity of Chambers Lake, Chester County, Pennsylvania

    USGS Publications Warehouse

    Gyves, Matthew C.

    2015-10-26

    This report describes the methods used to create a bathymetric map of Chambers Lake for the computation of reservoir storage capacity as of September 2014. The product is a bathymetric map and a table showing the storage capacity of the reservoir at 2-foot increments from minimum usable elevation up to full capacity at the crest of the auxiliary spillway.

  11. The Sentinel-3 Surface Topography Mission (S-3 STM): Level 2 SAR Ocean Retracker

    NASA Astrophysics Data System (ADS)

    Dinardo, S.; Lucas, B.; Benveniste, J.

    2015-12-01

The SRAL radar altimeter, on board the ESA mission Sentinel-3 (S-3), can operate either in pulse-limited mode (also known as LRM) or in the novel Synthetic Aperture Radar (SAR) mode. Thanks to the initial results from SAR altimetry obtained by exploiting CryoSat-2 data, interest by the scientific community in this new technology has lately increased significantly, and consequently the definition of accurate processing methodologies (along with validation strategies) has become critically important. In this paper, we present the algorithm proposed to retrieve the standard ocean geophysical parameters (ocean topography, wave height and sigma nought) from S-3 STM SAR return waveforms, and the validation results achieved so far exploiting CryoSat-2 data as well as simulated data. The inversion method (retracking) used to extract the geophysical information from the return waveform is a curve best-fitting scheme based on the bounded Levenberg-Marquardt least-squares estimation method (LEVMAR-LSE). The S-3 STM SAR ocean retracking algorithm adopts, as its return waveform model, the “SAMOSA” model [Ray et al., 2014], named after the R&D project SAMOSA (led by Satoc and funded by ESA) in which it was initially developed. The SAMOSA model is a physically based model that offers a complete description of a SAR altimeter return waveform from the ocean surface, expressed in the form of maps of reflected power in Delay-Doppler space (also known as a stack) or as multilooked echoes. SAMOSA is able to account for an elliptical antenna pattern, mispointing errors in roll and yaw, the surface scattering pattern, non-linear ocean wave statistics and spherical Earth surface effects. Despite its comprehensive character, the SAMOSA model comes with a compact analytical formulation expressed in terms of modified Bessel functions. 
The specifications of the retracking algorithm have been gathered in a technical document (DPM) and delivered as the baseline for industrial implementation. For operational needs, thanks to fine tuning of the fitting library parameters and the use of a look-up table for the Bessel function computation, the CPU execution time was accelerated by over 100 times, making the execution on a par with real time. In the course of the ESA-funded project CryoSat+ for Ocean (CP4O), new technical evolutions of the algorithm have been proposed (such as the use of a PTR width look-up table and the application of a stack masking). One of the main outcomes of the CP4O project was that, with these latest evolutions, the SAMOSA SAR retracking gave results equivalent to the CNES CPP retracking prototype, which was built with a totally different approach; this reinforces the validation results. Work is currently underway to align the industrial implementation with the latest evolutions. Further, in order to test the algorithm with a dataset as realistic as possible, a simulated test data set (generated by the S-3 STM End-to-End Simulator) has been created by CLS following the specifications described in a test data set requirements document drafted by ESA. In this work, we will show the baseline algorithm details, the evolutions, the impact of the evolutions and the results obtained processing the CryoSat-2 data and the simulated test data set.
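The Bessel-function look-up table mentioned above is an instance of a generic speed-up: evaluate the expensive function once on a grid, then interpolate at run time inside the fitting loop. A sketch under simple assumptions (an order-0 modified Bessel series stands in for the SAMOSA model's actual Bessel terms, and the grid range and spacing are arbitrary):

```python
import math
from bisect import bisect_right

# Replace repeated evaluation of an expensive special function with a
# pre-computed look-up table plus linear interpolation. bessel_i0 is a
# truncated power series for the modified Bessel function I0, used here
# only as a stand-in for the (different-order) terms of the waveform model.

def bessel_i0(x, terms=30):
    """I0(x) via its power series: sum over k of (x/2)^(2k) / (k!)^2."""
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / 2.0) ** 2 / (k * k)
        total += term
    return total

class LookupTable:
    """Tabulate func on [lo, hi] with n nodes; interpolate linearly on call."""
    def __init__(self, func, lo, hi, n):
        self.xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
        self.ys = [func(x) for x in self.xs]

    def __call__(self, x):
        i = min(max(bisect_right(self.xs, x) - 1, 0), len(self.xs) - 2)
        x0, x1 = self.xs[i], self.xs[i + 1]
        t = (x - x0) / (x1 - x0)
        return (1 - t) * self.ys[i] + t * self.ys[i + 1]

i0_lut = LookupTable(bessel_i0, 0.0, 5.0, 2001)   # built once, reused per waveform
```

The table is built once at start-up; each retracking iteration then pays only a bisection and two multiplies per evaluation, which is where the reported order-of-magnitude CPU saving comes from.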

  12. Records for conversion of laser energy to nuclear energy in exploding nanostructures

    NASA Astrophysics Data System (ADS)

    Jortner, Joshua; Last, Isidore

    2017-09-01

Table-top nuclear fusion reactions in the chemical physics laboratory can be driven by the high-energy dynamics of Coulomb-exploding, multicharged, deuterium-containing nanostructures generated by ultraintense, femtosecond, near-infrared laser pulses. Theoretical-computational studies of table-top laser-driven nuclear fusion of high-energy (up to 15 MeV) deuterons with 7Li, 6Li and D nuclei demonstrate the attainment of high fusion yields within a source-target reaction design, which constitutes the highest table-top fusion efficiency obtained to date. The conversion efficiency of laser energy to nuclear energy (0.1-1.0%) for table-top fusion is comparable to that for DT fusion currently accomplished in 'big science' inertial fusion setups.

  13. Determine and Implement Updates to Be Made to MODEAR (Mission Operations Data Enterprise Architecture Repository)

    NASA Technical Reports Server (NTRS)

    Fanourakis, Sofia

    2015-01-01

    My main project was to determine and implement updates to be made to MODEAR (Mission Operations Data Enterprise Architecture Repository) process definitions to be used for CST-100 (Crew Space Transportation-100) related missions. Emphasis was placed on the scheduling aspect of the processes. In addition, I was to complete other tasks as given. Some of the additional tasks were: to create pass-through command look-up tables for the flight controllers, finish one of the MDT (Mission Operations Directorate Display Tool) displays, gather data on what is included in the CST-100 public data, develop a VBA (Visual Basic for Applications) script to create a csv (Comma-Separated Values) file with specific information from spreadsheets containing command data, create a command script for the November MCC-ASIL (Mission Control Center-Avionics System Integration Laboratory) testing, and take notes for one of the TCVB (Terminal Configured Vehicle B-737) meetings. In order to make progress in my main project I scheduled meetings with the appropriate subject matter experts, prepared material for the meetings, and assisted in the discussions in order to understand the process or processes at hand. After such discussions I made updates to various MODEAR processes and process graphics. These meetings have resulted in significant updates to the processes that were discussed. In addition, the discussions have helped the departments responsible for these processes better understand the work ahead and provided material to help document how their products are created. I completed my other tasks utilizing resources available to me and, when necessary, consulting with the subject matter experts. 
Outputs resulting from my other tasks were: two completed and one partially completed pass-through command look-up tables for the flight controllers, significant updates to one of the MDT displays, a spreadsheet containing data on what is included in the CST-100 public data, a tool to create a csv file with specific information from spreadsheets containing command data, a command script for the November MCC-ASIL testing which resulted in a successful test day identifying several potential issues, and notes from one of the TCVB meetings that were used to keep the teams up to date on what was discussed and decided. I have learned a great deal working at NASA these last four months. I was able to meet and work with amazing individuals, further develop my technical knowledge, expand my knowledge base regarding human spaceflight, and contribute to the CST-100 missions. My work at NASA has strengthened my desire to continue my education in order to make further contributions to the field, and has given me the opportunity to see the advantages of a career at NASA.

  14. SU-F-P-31: Dosimetric Effects of Roll and Pitch Corrections Using Robotic Table

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamalui, M; Su, Z; Flampouri, S

Purpose: To quantify the dosimetric effect of roll and pitch corrections performed by two types of robotic tables available at our institution: the BrainLab 5DOF robotic table installed at VERO (BrainLab & MHI), a dedicated SBRT linear accelerator, and the 6DOF robotic couch by IBA Proton Therapy with a QFix couch top. Methods: The planning study used a thorax phantom (CIRS) scanned with a 4DCT protocol; targets (IGTV, PTV) were determined according to the institutional lung site-specific standards. Twelve CT sets were generated with pitch and roll angles ranging from −4 to +4 degrees each. Two table tops were placed onto the scans according to the modality-specific patient treatment workflows. The pitched/rolled CT sets were fused to the original CT scan and verification treatment plans were generated (12 photon SBRT plans and 12 proton conventional-fractionation lung plans). Then the CT sets were fused again to simulate the effect of patient roll/pitch corrections by the robotic table. DVH sets were evaluated for all cases. Results: The effect of not correcting the phantom position for roll/pitch in the photon SBRT cases was a reduction in target coverage of at most 2%; correcting the positional errors with the robotic table varied the target coverage within 0.7%. In the case of proton treatment, not correcting the phantom position led to a coverage loss of up to 4%; applying the corrections using the robotic table reduced the coverage variation to less than 2% for the PTV and within 1% for the IGTV. Conclusion: Correcting the patient position by using robotic tables is highly preferable, despite the small dosimetric changes introduced by the devices.

  15. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research Program task 8: Survey of WEBGL Graphics Engines

    DTIC Science & Technology

    2015-01-01

3.0 Methods, Assumptions, and Procedures … 4.6.3 LineUp Web… A search of the internet looking at web sites specializing in graphics, graphics engines, web browser applications, and games was conducted to…

  16. 75 FR 62923 - WRC-07 Table Clean-up Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-13

    ..., the Commission's Allocation Table is revised by expressing frequencies in the High Frequency (HF....S. Table 28 frequencies designated for disaster communications and 40 frequencies designated for..., highlights the availability of the high frequency broadcasting (HFBC) bands 7.2-7.3 and 7.4-7.45 MHz in...

  17. INTERIOR PERSPECTIVE, LOOKING SOUTH SOUTHWEST WITH FIELD SET UP IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR PERSPECTIVE, LOOKING SOUTH SOUTHWEST WITH FIELD SET UP IN FOOTBALL CONFIGURATION. FIELD SEATING ROTATES TO ACCOMMODATE BASEBALL GAMES. - Houston Astrodome, 8400 Kirby Drive, Houston, Harris County, TX

  18. Dynamic generation of a table of contents with consumer-friendly labels.

    PubMed

    Miller, Trudi; Leroy, Gondy; Wood, Elizabeth

    2006-01-01

Consumers increasingly look to the Internet for health information, but available resources are too difficult for the majority to understand. Interactive tables of contents (TOCs) can help consumers access health information by providing an easy-to-understand structure. Using natural language processing and the Unified Medical Language System (UMLS), we have automatically generated TOCs for consumer health information. The TOCs are categorized according to consumer-friendly labels for the UMLS semantic types and semantic groups. Categorizing phrases by semantic types is significantly more correct and relevant. Greater correctness and relevance were achieved with documents that are difficult to read than with those at an easier reading level. Pruning TOCs to use categories that consumers favor further increases relevancy and correctness while reducing structural complexity.

  19. Transformational Solar Array Final Report

    NASA Technical Reports Server (NTRS)

    Gaddy, Edward; Ballarotto, Mihaela; Drabenstadt, Christian; Nichols, John; Douglas, Mark; Spence, Brian; Stall, Richard A.; Sulyma, Chris; Sharps, Paul

    2017-01-01

We have made outstanding progress in the Base Phase towards achieving the final NASA Research Announcement (NRA) goals. Progress is better than anticipated due to the lighter than predicted mass of the IMM solar cells. We look forward to further improvements in IMM cell performance during Option I and Option II, so we are confident that the first four items listed in the table will improve to better than the NRA goals. The computation of the end-of-life blanket efficiency is uncertain because we have extrapolated the radiation damage from room-temperature measurements. The last three items listed in the table were not intended to be accomplished during the Base Phase; they will be achieved during Option I and Option II.

  20. InSight Spacecraft Lift to Spin Table & Pre-Spin Processing

    NASA Image and Video Library

    2018-03-28

In the Astrotech facility at Vandenberg Air Force Base in California, technicians and engineers inspect NASA's Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, or InSight, spacecraft after it was placed on a spin table during preflight processing. InSight will be the first mission to look deep beneath the Martian surface. It will study the planet's interior by measuring its heat output and listening for marsquakes. The spacecraft will use the seismic waves generated by marsquakes to develop a map of the planet’s deep interior. The resulting insight into Mars’ formation will provide a better understanding of how other rocky planets, including Earth, were created. InSight is scheduled for liftoff May 5, 2018.

  1. The "periodic table" of the genetic code: A new way to look at the code and the decoding process.

    PubMed

    Komar, Anton A

    2016-01-01

    Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.

  2. The Link between Academies in England, Pupil Outcomes and Local Patterns of Socio-Economic Segregation between Schools

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2014-01-01

    This paper considers the pupil intakes to Academies in England, and their attainment, based on a re-analysis of figures from the Annual Schools Census 1989-2012, the Department for Education School Performance Tables 2004-2012 and the National Pupil Database. It looks at the national picture, and the situation for Local Education Authorities, and…

  3. L'alphabetisation et les femmes (Women and Literacy) Round Table (Toronto, Ontario, Canada, June 17-20, 1991). English Translation.

    ERIC Educational Resources Information Center

    Ontario Dept. of Education, Toronto.

The transcript (translated into English) of a roundtable discussion of literacy among francophone women in Canada begins with the personal narrative of one woman who gained literacy skills as an adult. The panel of three specialists in francophone women's literacy in Canada then looks at the literacy rate among Canadian women, and the demand for…

  4. Optimal CV-22 Centralized Intermediate Repair Facility Locations and Parts Repair

    DTIC Science & Technology

    2009-06-01

…and Reorder Point for TEWS … 36. Table 8. Excel Model for Safety Stock and Reorder Point for FADEC … Full Authority Digital Engine Control (FADEC), Main Wheel Assembly, Blade Fold System, Landing Gear Control Panel, Drive System Interface Unit, Main Landing Gear … Radar 4, Forward Looking Infrared System (FLIR) 4, Tactical Electronic Warfare System (TEWS) 1, Full Authority Digital Engine Control (FADEC) 2, Blade…

  5. Brain Potentials and Personality: A New Look at Stress Susceptibility.

    DTIC Science & Technology

    1987-09-01

…disinhibition (Dis) measures a hedonistic, extraverted lifestyle including drinking, parties, sex, and gambling; boredom susceptibility (BS) indicates an… adventure seeking; ES = Experience seeking; Dis = Disinhibition; BS = Boredom susceptibility. … Table 4. Correlation of Auditory Evoked… TAS = Thrill and adventure seeking; ES = Experience seeking; Dis = Disinhibition; BS = Boredom susceptibility. p < .05. … The present study…

  6. Who Wins? Who Pays? The Economic Returns and Costs of a Bachelor's Degree

    ERIC Educational Resources Information Center

    de Alva, Jorge Klor; Schneider, Mark

    2011-01-01

    Given the importance of a college education to entering and staying in the middle class and the high cost of obtaining a bachelor's degree, "Who Wins? and Who Pays?" are questions being asked today at kitchen tables and in the halls of government throughout the nation. Using publicly available data, the authors look at who wins and who pays…

  7. RANS Simulation (Virtual Blade Model [VBM]) of Single Full Scale DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Aliseda, Alberto

    2013-04-10

Attached are the .cas and .dat files along with the required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients for the Reynolds-Averaged Navier-Stokes (RANS) simulation of a single full-scale DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. In this case study the flow field around and in the wake of the full-scale DOE RM1 turbine is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the rotating turbine blades is modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of the device and the structure of its turbulent far wake. Due to the simplifications implemented for modeling the rotating blades, VBM cannot capture details of the flow field in the near-wake region of the device.

  8. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1979-01-01

The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.

  9. Development and Implementation of Software for Visualizing and Editing Multidimensional Flight Simulation Input Data

    NASA Technical Reports Server (NTRS)

    Whelan, Todd Michael

    1996-01-01

In a real-time or batch-mode simulation that is designed to model aircraft dynamics over a wide range of flight conditions, a table look-up scheme is implemented to determine the forces and moments on the vehicle based upon the values of parameters such as angle of attack, altitude, Mach number, and control surface deflections. Simulation Aerodynamic Variable Interface (SAVI) is a graphical user interface to the flight simulation input data, designed to operate on workstations that support X Windows. The purpose of the application is to provide two- and three-dimensional visualization of the data, to allow an intuitive sense of the data set. SAVI also allows the user to manipulate the data, either to conduct an interactive study of the influence of changes on the vehicle dynamics, or to make revisions to the data set based on new information such as flight test results. This paper discusses the reasons for developing the application, provides an overview of its capabilities, and outlines the software architecture and operating environment.
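The table look-up scheme described in this record is, at its core, multilinear interpolation on a grid of flight-condition parameters. A hedged sketch in two dimensions (angle of attack and Mach number; the grid and coefficient values are invented for illustration, not SAVI data — a real simulation adds more dimensions such as altitude and control deflections with the same pattern):

```python
from bisect import bisect_right

# Bilinear table look-up of an aerodynamic coefficient on a 2D grid.

def _bracket(grid, x):
    """Index i and fraction t with grid[i] <= x <= grid[i+1] (clamped)."""
    i = min(max(bisect_right(grid, x) - 1, 0), len(grid) - 2)
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return i, min(max(t, 0.0), 1.0)

def bilinear_lookup(alpha_grid, mach_grid, table, alpha, mach):
    """Interpolate table[i][j] (rows = alpha, cols = Mach) at (alpha, mach)."""
    i, u = _bracket(alpha_grid, alpha)
    j, v = _bracket(mach_grid, mach)
    return ((1 - u) * (1 - v) * table[i][j] + u * (1 - v) * table[i + 1][j]
            + (1 - u) * v * table[i][j + 1] + u * v * table[i + 1][j + 1])

# hypothetical lift-coefficient table: rows = alpha (deg), cols = Mach
alpha_grid = [0.0, 5.0, 10.0]
mach_grid = [0.3, 0.6]
cl_table = [[0.10, 0.12],
            [0.55, 0.60],
            [0.95, 1.05]]

cl = bilinear_lookup(alpha_grid, mach_grid, cl_table, 2.5, 0.45)
```

Each additional table dimension doubles the number of corner values blended, which is why visualization tools like SAVI slice the data set two or three dimensions at a time.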

  10. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent Surface Lambertian-Equivalent Reflectivity Calculations

    NASA Technical Reports Server (NTRS)

    Fasnacht, Zachary; Qin, Wenhan; Haffner, David P.; Loyola, Diego; Joiner, Joanna; Krotkov, Nickolay; Vasilkov, Alexander; Spurr, Robert

    2017-01-01

Surface Lambertian-equivalent reflectivity (LER) is important for trace gas retrievals in the direct calculation of cloud fractions and the indirect calculation of the air mass factor. Current trace gas retrievals use climatological surface LERs. Surface properties that affect the bidirectional reflectance distribution function (BRDF), as well as varying satellite viewing geometry, can be important for the retrieval of trace gases. Geometry-Dependent LER (GLER) captures these effects with its calculation of sun-normalized radiances (I/F) and can be used in current LER algorithms (Vasilkov et al. 2016). Pixel-by-pixel radiative transfer calculations are computationally expensive for large datasets. Modern satellite missions such as the Tropospheric Monitoring Instrument (TROPOMI) produce very large datasets as they take measurements at much higher spatial and spectral resolutions. Look-up table (LUT) interpolation improves the speed of radiative transfer calculations, but its complexity increases for non-linear functions. Neural networks perform fast calculations and can accurately predict both non-linear and linear functions with little effort.
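The surrogate idea in this record can be illustrated with the shape of the evaluation step: once a small feed-forward network has been trained offline against the full radiative transfer model, each pixel costs only one cheap forward pass. The architecture, inputs and weights below are illustrative placeholders, not the authors' trained model:

```python
import math

# Forward pass of a tiny fully connected network mapping (hypothetical)
# geometry/surface inputs, e.g. solar zenith, view zenith, relative azimuth,
# to a sun-normalized radiance. Weights would come from offline training
# against the radiative transfer model; here they are made up.

def forward(x, layers):
    """Apply (weights, biases) layer pairs with tanh hidden activations
    and a linear output layer."""
    for k, (W, b) in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
        if k < len(layers) - 1:               # hidden layers only
            x = [math.tanh(v) for v in x]
    return x

# 3 inputs -> 4 hidden units -> 1 output, with made-up weights
hidden = ([[0.2, -0.1, 0.05], [0.1, 0.3, -0.2], [-0.3, 0.1, 0.1], [0.05, 0.05, 0.05]],
          [0.0, 0.1, -0.1, 0.0])
output = ([[0.5, -0.4, 0.3, 0.2]], [0.05])

radiance = forward([0.6, 0.3, 0.8], [hidden, output])[0]
```

Unlike a LUT, whose node count grows multiplicatively with each input dimension, the per-evaluation cost here grows only with the layer sizes, which is the advantage the record points to for strongly non-linear responses.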

  11. Combining in situ characterization methods in one set-up: looking with more eyes into the intricate chemistry of the synthesis and working of heterogeneous catalysts.

    PubMed

    Bentrup, Ursula

    2010-12-01

    Several in situ techniques are known which allow investigations of catalysts and catalytic reactions under real reaction conditions using different spectroscopic and X-ray methods. In recent years, specific set-ups have been established which combine two or more in situ methods in order to get a more detailed understanding of catalytic systems. This tutorial review will give a summary of currently available set-ups equipped with multiple techniques for in situ catalyst characterization, catalyst preparation, and reaction monitoring. Besides experimental and technical aspects of method coupling including X-ray techniques, spectroscopic methods (Raman, UV-vis, FTIR), and magnetic resonance spectroscopies (NMR, EPR), essential results will be presented to demonstrate the added value of multitechnique in situ approaches. A special section is focussed on selected examples of use which show new developments and application fields.

  12. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Well-known mortality tables include the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, tables are constructed for different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach. The distributional assumptions are the uniform distribution of deaths (UDD) and the constant force of mortality (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimators are easier to manipulate than maximum likelihood estimators (MLE), but the moment method does not use the complete mortality data, whereas maximum likelihood exploits all available information. Some MLE equations are complicated and must be solved numerically. The article focuses on single-decrement estimation using moment and maximum likelihood estimation; some extensions to double decrement are introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
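As a minimal illustration of the two distributional assumptions discussed above, the sketch below applies the standard constant-force and UDD estimators of the one-year mortality rate to a toy cohort; the death and exposure counts are invented for illustration:

```python
import math

# Toy cohort: deaths during the year and total person-years of
# exposure in the age interval (hypothetical values).
deaths = 12
central_exposure = 970.5

# Constant force: the MLE of the force of mortality is deaths per
# person-year of central exposure, and q = 1 - exp(-mu).
mu_hat = deaths / central_exposure
q_constant_force = 1.0 - math.exp(-mu_hat)

# UDD: the classical actuarial (moment) estimator converts central
# exposure to initial exposure by crediting half a year per death.
initial_exposure = central_exposure + deaths / 2.0
q_udd = deaths / initial_exposure

print(q_constant_force, q_udd)
```

For realistic mortality levels the two estimates agree to several decimal places, which is why the choice of assumption matters more in theory than in well-populated cells of a table.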

  13. 20. View of Clark Fork Vehicle Bridge facing up. Looking ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. View of Clark Fork Vehicle Bridge facing up. Looking at understructure of northernmost span. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID

  14. Digital dissection system for medical school anatomy training

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Pawlina, Wojciech; Carmichael, Stephen W.; Korinek, Mark J.; Schroeder, Kathryn K.; Segovis, Colin M.; Robb, Richard A.

    2003-05-01

    As technology advances, new and innovative ways of viewing and visualizing the human body are developed. Medicine has benefited greatly from imaging modalities that provide ways for us to visualize anatomy that cannot be seen without invasive procedures. As long as medical procedures include invasive operations, students of anatomy will benefit from the cadaveric dissection experience. Teaching proper technique for dissection of human cadavers is a challenging task for anatomy educators. Traditional methods, which have not changed significantly for centuries, include the use of textbooks and pictures to show students what a particular dissection specimen should look like. The ability to properly carry out such highly visual and interactive procedures is significantly constrained by these methods. The student receives a single view and has no idea how the procedure was carried out. The Department of Anatomy at Mayo Medical School recently built a new, state-of-the-art teaching laboratory, including data ports and power sources above each dissection table. This feature allows students to access the Mayo intranet from a computer mounted on each table. The vision of the Department of Anatomy is to replace all paper-based resources in the laboratory (dissection manuals, anatomic atlases, etc.) with a more dynamic medium that will direct students in dissection and in learning human anatomy. Part of that vision includes the use of interactive 3-D visualization technology. The Biomedical Imaging Resource (BIR) at Mayo Clinic has developed, in collaboration with the Department of Anatomy, a system for the control and capture of high resolution digital photographic sequences which can be used to create 3-D interactive visualizations of specimen dissections. The primary components of the system include a Kodak DC290 digital camera, a motorized controller rig from Kaidan, a PC, and custom software to synchronize and control the components. 
For each dissection procedure, the images are captured automatically, and then processed to generate a Quicktime VR sequence, which permits users to view an object from multiple angles by rotating it on the screen. This provides 3-D visualizations of anatomy for students without the need for special '3-D glasses' that would be impractical to use in a laboratory setting. In addition, a digital video camera may be mounted on the rig for capturing video recordings of selected dissection procedures being carried out by expert anatomists for playback by the students. Anatomists from the Department of Anatomy at Mayo have captured several sets of dissection sequences and processed them into Quicktime VR sequences. The students are able to look at these specimens from multiple angles using this VR technology. In addition, the student may zoom in to obtain high-resolution close-up views of the specimen. They may interactively view the specimen at varying stages of dissection, providing a way to quickly and intuitively navigate through the layers of tissue. Electronic media has begun to impact all areas of education, but a 3-D interactive visualization of specimen dissections in the laboratory environment is a unique and powerful means of teaching anatomy. When fully implemented, anatomy education will be enhanced significantly by comparison to traditional methods.

  15. Long Duration Enhancement And Depletion Observed In The Topside Ionospheric Electron Content During The March 2015 Strong Storm

    NASA Astrophysics Data System (ADS)

    Zhong, J.; Wang, W.; Yue, X.; Burns, A. G.; Dou, X.; Lei, J.

    2015-12-01

    Up-looking total electron content (TEC) measurements from multiple low Earth orbit (LEO) satellites have been utilized to study the topside ionospheric response to the 17 March 2015 great storm. The combined up-looking TEC observations from these LEO satellites are valuable in addressing the local time and altitudinal dependences of the topside ionospheric response to geomagnetic storms from a global perspective, especially over the southern hemisphere and oceans. In the evening sector, the up-looking TEC showed an obvious long-duration positive storm effect during the main phase and a long-duration negative storm effect during the recovery phase of this storm. The increases of the topside TEC during the main phase were symmetric with respect to the magnetic equator, which was probably associated with penetration electric fields. Additionally, the up-looking TEC from different orbital altitudes suggested that the negative storm effect at higher altitudes was stronger in the evening sector. In the morning sector, the up-looking TEC also showed increases at low and middle latitudes during the storm main phase. Obvious TEC enhancement can also be seen over the Pacific Ocean in the topside ionosphere during the storm recovery phase. These results imply that the topside ionospheric responses depend significantly on local time. Thus, the LEO-based up-looking TEC provides an important database for studying the possible physical mechanisms of the topside ionospheric response to storms.

  16. Error compensation of IQ modulator using two-dimensional DFT

    NASA Astrophysics Data System (ADS)

    Ohshima, Takashi; Maesaka, Hirokazu; Matsubara, Shinichi; Otake, Yuji

    2016-06-01

    It is important to precisely set and keep the phase and amplitude of an rf signal in the accelerating cavity of modern accelerators, such as an X-ray Free Electron Laser (XFEL) linac. In these accelerators, an acceleration rf signal is generated or detected by an in-phase and quadrature (IQ) modulator or demodulator. If the phase and amplitude deviate from their ideal values, crosstalk arises between the phase and amplitude of the output signal of the IQ modulator or demodulator. This causes instability in the feedback controls that simultaneously stabilize both the rf phase and amplitude. To compensate for such deviations, we developed a novel compensation method using a two-dimensional Discrete Fourier Transform (DFT). Because the observed phase and amplitude deviations of an IQ modulator behave sinusoidally in the phase angle and polynomially in the amplitude of the rf vector, a DFT calculation with these basis functions gives a good approximation with a small number of compensation coefficients. It also suppresses the high-frequency noise components that arise when the deviation data are measured. These characteristics are advantages over a Look Up Table (LUT) compensation method, which usually demands many compensation elements (about 300) that are not easy to handle. We applied the DFT compensation method to the output rf signal of a C-band IQ modulator at SACLA, an XFEL facility in Japan. After DFT compensation, the peak amplitude deviation of the IQ modulator was reduced from 15.0% to less than 0.2% over an amplitude control range from 0.1 V to 0.9 V (1.0 V full scale) and a phase control range from 0 to 360 degrees. The 60 compensation coefficients are fewer than the LUT method requires and are easy to handle and maintain.
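The idea of replacing a dense look-up table with a handful of DFT coefficients can be sketched with a toy deviation map. Everything below is synthetic: the sinusoidal deviation model, the noise level, and the grid sizes are illustrative assumptions, not SACLA data or the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_phase, n_amp = 64, 9
phase = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
amp = np.linspace(0.1, 0.9, n_amp)

# Hypothetical true deviation: sinusoidal in the phase angle, with an
# amplitude-dependent strength standing in for the polynomial behaviour.
true_dev = (0.02 * amp[None, :] * np.sin(phase)[:, None]
            + 0.005 * np.cos(2.0 * phase)[:, None])
measured = true_dev + rng.normal(0.0, 0.002, true_dev.shape)

# Keep only the DC, 1st, and 2nd harmonics of each phase scan: a
# handful of coefficients instead of a dense per-point table.
spec = np.fft.rfft(measured, axis=0)
spec[3:, :] = 0.0                 # discard harmonics >= 3 (noise-dominated)
model = np.fft.irfft(spec, n=n_phase, axis=0)

err_model = np.sqrt(np.mean((model - true_dev) ** 2))
err_raw = np.sqrt(np.mean((measured - true_dev) ** 2))
print(err_model, err_raw)
```

Because the deviation is genuinely band-limited in the periodic phase direction, truncating the spectrum both shrinks the stored compensation data and filters out the measurement noise, the two advantages the abstract claims over a raw LUT.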

  17. A fast radiative transfer method for the simulation of visible satellite imagery

    NASA Astrophysics Data System (ADS)

    Scheck, Leonhard; Frèrebeau, Pascal; Buras-Schnell, Robert; Mayer, Bernhard

    2016-05-01

    A computationally efficient radiative transfer method for the simulation of visible satellite images is presented. The top-of-atmosphere reflectance is approximated by a function of the vertically integrated optical depths and effective particle sizes for water and ice clouds, the surface albedo, the sun and satellite zenith angles, and the scattering angle. A look-up table (LUT) for this reflectance function is generated by means of the discrete ordinate method (DISORT). For a constant scattering angle, the reflectance is a relatively smooth and symmetric function of the two zenith angles, which can be well approximated by the lowest-order terms of a 2D Fourier series. By storing only the lowest Fourier coefficients and adopting a non-equidistant grid for the scattering angle, the LUT is reduced to a size of 21 MB per satellite channel. The computation of the top-of-atmosphere reflectance requires only the calculation of the cloud parameters from the model state and the evaluation and interpolation of the reflectance function using the compressed LUT, and is thus orders of magnitude faster than DISORT. The accuracy of the method is tested by generating synthetic satellite images for the 0.6 μm and 0.8 μm channels of the SEVIRI instrument for operational COSMO-DE model forecasts from the German Weather Service (DWD) and comparing them to DISORT results. For a test period in June, the root mean squared absolute reflectance error is about 10⁻² and the mean relative reflectance error is less than 2% for both channels. For scattering angles larger than 170°, the rapid variation of reflectance with particle size related to the backscatter glory reduces the accuracy, and the errors increase by a factor of 3-4. The speed and accuracy of the new method are sufficient for operational data assimilation and high-resolution model verification applications.
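The core compression idea, storing only the lowest-order 2D Fourier coefficients of a smooth function of the two zenith angles, can be sketched as follows. The reflectance below is synthetic (a low-order trigonometric polynomial on a periodic parameterisation of the angles), not the DISORT-generated LUT of the paper:

```python
import numpy as np

# Synthetic smooth, symmetric "reflectance" R(theta_sun, theta_sat)
# on a periodic parameterisation of the two zenith angles.
n = 32
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
R = (0.3 + 0.08 * np.cos(t)[:, None] + 0.08 * np.cos(t)[None, :]
     + 0.02 * np.cos(t)[:, None] * np.cos(t)[None, :])

# Full 2D DFT, then keep only a small low-frequency block of
# coefficients, as the compressed LUT would.
F = np.fft.fft2(R)
k = 3                               # harmonics kept per axis
idx = np.r_[0:k, n - k + 1:n]       # low positive + low negative freqs
C = np.zeros_like(F)
C[np.ix_(idx, idx)] = F[np.ix_(idx, idx)]
R_rec = np.fft.ifft2(C).real        # reconstruction from the kept block

stored = idx.size ** 2              # coefficients kept vs. full table
total = n * n
print(stored, total, np.max(np.abs(R_rec - R)))
```

Here 25 coefficients stand in for a 1024-entry table; for a genuinely smooth reflectance surface the truncation error stays small, which is what makes the 21 MB per-channel LUT of the paper feasible.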

  18. Teachers' approaches to teaching physics

    NASA Astrophysics Data System (ADS)

    2012-12-01

    Benjamin Franklin said, "Tell me, and I forget. Teach me, and I remember. Involve me, and I learn." He would not be surprised to learn that research in physics pedagogy has consistently shown that the traditional lecture is the least effective method for teaching physics. We asked high school physics teachers which teaching activities they used in their classrooms. While almost all teachers still lecture sometimes, two-thirds use something other than lecture most of the time. The five most often-used activities are shown in the table below. In the January issue, we will look at the 2013 Nationwide Survey of High School Physics Teachers. Susan White is Research Manager in the Statistical Research Center at the American Institute of Physics; she directs the Nationwide Survey of High School Physics Teachers. If you have any questions, please contact Susan at swhite@aip.org.

  19. Altered visual focus on sensorimotor control in people with chronic ankle instability.

    PubMed

    Terada, Masafumi; Ball, Lindsay M; Pietrosimone, Brian G; Gribble, Phillip A

    2016-01-01

    The purpose of this investigation was to examine the effects of the combination of chronic ankle instability (CAI) and altered visual focus on strategies for dynamic stability during a drop-jump task. Nineteen participants with self-reported CAI and 19 healthy participants performed a drop-jump task in looking-up and looking-down conditions. For the looking-up condition, participants looked up and read a random number that flashed on a computer monitor. For the looking-down condition, participants focused their vision on the force plate. Sagittal- and frontal-plane kinematics at the hip, knee and ankle were calculated at 100 ms before initial foot contact (IC) with the ground and at IC. The resultant vector time to stabilisation was calculated with ground reaction force data. The CAI group demonstrated less hip flexion at 100 ms pre-IC (P < 0.01), and less hip flexion (P = 0.03) and less knee flexion (P = 0.047) at IC, compared to controls. No differences in kinematics or dynamic stability were observed between the looking-up and looking-down conditions (P > 0.05). Altered visual focus did not influence movement patterns during the drop-jump task, but the presence of CAI did. The current data suggest that centrally mediated changes associated with CAI may lead to global alterations in sensorimotor control.

  20. Generating constrained randomized sequences: item frequency matters.

    PubMed

    French, Robert M; Perruchet, Pierre

    2009-11-01

    All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), although designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items, over which the algorithm mentioned above provides no control. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional-probability tables when immediate repetitions are not allowed. We illustrate these difficulties, and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
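A minimal sketch of the constrained randomization discussed above: generate a sequence with no immediately repeated items by rejection sampling, then tally the resulting transition table. The item set and sequence length are arbitrary choices for illustration, and this is not the authors' Excel implementation:

```python
import random
from collections import Counter
from itertools import product

def constrained_sequence(items, length, rng):
    """Randomized sequence over `items` with no immediate repetitions,
    built by rejection sampling of each successive element."""
    seq = [rng.choice(items)]
    while len(seq) < length:
        nxt = rng.choice(items)
        if nxt != seq[-1]:        # reject immediate repeats
            seq.append(nxt)
    return seq

rng = random.Random(42)
seq = constrained_sequence(["A", "B", "C"], 3000, rng)

# Tally the transition table: counts of each ordered item pair.
transitions = Counter(zip(seq, seq[1:]))
for a, b in product("ABC", repeat=2):
    print(a, "->", b, transitions[(a, b)])
```

The diagonal entries of the table are forced to zero by the constraint, while the six allowed transitions share the remaining counts; inspecting this tallied table is exactly the check the article argues for, since item frequencies alone do not reveal transition biases.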
