NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in many table memory accesses, which in turn lead to high power consumption. To reduce the heavy memory access of current methods, and hence their power consumption, a memory-efficient table look-up algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce the memory accesses of table look-up, and thereby its power consumption. Specifically, our scheme uses index search to cut memory accesses by reducing the search-and-match operations for code_word, exploiting the internal relationship among the number of leading zeros in code_prefix, the value of code_suffix, and code_length, thus saving table look-up power. Experimental results show that the proposed index-search-based table look-up algorithm lowers memory access consumption by about 60% compared with table look-up by sequential search, and thus saves considerable power for CAVLD in H.264/AVC.
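As a minimal sketch of the idea (with a hypothetical three-entry code table, not the actual CAVLD tables), the leading-zero count of the code prefix can serve as a direct index that narrows the search to the few entries sharing that prefix length, instead of scanning the whole table sequentially:

```python
def leading_zeros(bits: str) -> int:
    """Number of '0' bits before the first '1' in the bitstream prefix."""
    n = 0
    for b in bits:
        if b == '1':
            break
        n += 1
    return n

# Hypothetical VLC table grouped by prefix length: prefix_len -> {suffix: symbol}
INDEXED_TABLE = {
    0: {'1': 'A'},            # codeword "1"
    1: {'0': 'B', '1': 'C'},  # codewords "010", "011"
    2: {'0': 'D', '1': 'E'},  # codewords "0010", "0011"
}

def decode_one(bits: str):
    """Decode one symbol; returns (symbol, bits_consumed).
    Only the subtable selected by the leading-zero count is touched."""
    k = leading_zeros(bits)
    subtable = INDEXED_TABLE[k]
    if k == 0:
        return subtable['1'], 1
    suffix = bits[k + 1]      # one suffix bit follows the terminating '1'
    return subtable[suffix], k + 2
```

Each decode touches one subtable of at most two entries rather than all five codewords, which is the memory-access saving the abstract describes.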
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs) and consumes significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs, so a codeword can be decoded without any table look-up or memory access. Experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. It also outperforms conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
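The table-free idea can be illustrated with the unsigned Exp-Golomb codes that H.264/AVC also uses: the decoded value follows from a short computation on the leading-zero count, so no VLCT access is needed. This is an illustrative analogue, not the authors' CAVLC-specific program:

```python
def decode_exp_golomb(bits: str):
    """Decode one unsigned Exp-Golomb codeword arithmetically,
    with no look-up table. Returns (value, bits_consumed)."""
    k = 0
    while bits[k] == '0':
        k += 1                  # count leading zeros
    # value = 2^k - 1 + suffix, where the suffix is the k bits after the '1'
    suffix = int(bits[k + 1:k + 1 + k], 2) if k else 0
    return (1 << k) - 1 + suffix, 2 * k + 1
```

The whole codebook collapses into one closed-form expression, which is exactly the kind of program-for-table substitution the abstract advocates.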
Instantaneous and controllable integer ambiguity resolution: review and an alternative approach
NASA Astrophysics Data System (ADS)
Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong
2015-11-01
In high-precision applications of the Global Navigation Satellite System (GNSS), integer ambiguity resolution is the key step in realizing precise positioning and attitude determination. As a necessary part of quality control, integer aperture (IA) ambiguity resolution provides the theoretical and practical foundation for ambiguity validation, and is mainly realized by acceptance testing. Because the ambiguities are mutually correlated, the failure rate cannot be controlled through an analytical formula; hence, the fixed-failure-rate approach is implemented by Monte Carlo sampling. However, owing to the characteristics of Monte Carlo sampling and of look-up tables, creating a table that covers sufficiently many GNSS scenarios is very time consuming, which restricts the fixed-failure-rate approach to post-processing whenever a look-up table is not available. Furthermore, if too few GNSS scenarios are considered, the table may be valid only for a specific scenario or application. Besides this, the method of creating the look-up table or look-up function must still be designed for each specific acceptance test. To overcome these problems in determining critical values, this contribution proposes, for the first time, an instantaneous and CONtrollable (iCON) IA ambiguity resolution approach. The iCON approach has the following advantages: (a) the critical value of the acceptance test is determined independently from the required failure rate and the GNSS model, without resorting to external information such as a look-up table; (b) it can be realized instantaneously for most IA estimators that have analytical probability formulas, and the stronger the GNSS model, the less time is consumed; (c) it provides a new viewpoint for research on IA estimation. To verify these conclusions, multi-frequency and multi-GNSS simulation experiments are implemented.
Those results show that IA estimators based on the iCON approach can realize controllable ambiguity resolution. Moreover, compared with the ratio test IA based on a look-up table, the difference test IA and IA least squares based on the iCON approach usually have higher success rates and better controllability of failure rates.
Overview of fast algorithm in 3D dynamic holographic display
NASA Astrophysics Data System (ADS)
Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian
2013-08-01
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information and data must be processed and computed in real time to generate the hologram, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) in the point-based method, and full analytical polygon-based methods and the one-step polygon-based method in the polygon-based method. In this presentation, we review various fast algorithms based on the point-based and polygon-based methods, focusing on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method derived from 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient at saving calculation time, and it is believed that these methods could be used for real-time 3D holographic display in the future.
NASA Astrophysics Data System (ADS)
Chakraborty, S.; Dasgupta, A.; Das, R.; Kar, M.; Kundu, A.; Sarkar, C. K.
2017-12-01
In this paper, we explore the possibility of mapping devices designed in a TCAD environment to modeled versions developed in the Cadence Virtuoso environment using a look-up table (LUT) approach. Circuit simulation of newly designed devices in a TCAD environment is a very slow and tedious process involving complex scripting; hence, the LUT-based modeling approach is proposed as a faster and easier alternative in the Cadence environment. The LUTs are prepared by extracting data from the device characteristics obtained from device simulation in TCAD. A comparative study between the TCAD simulation and the LUT-based alternative showcases the accuracy of the modeled devices. Finally, the look-up table approach is used to evaluate the performance of circuits implemented using a 14 nm nMOSFET.
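A hedged sketch of how such a LUT model might be evaluated inside a circuit simulator: the drain current is bilinearly interpolated from a grid of TCAD-extracted bias points. The grid values below are made up for illustration, not extracted device data:

```python
import bisect

def bilinear(xs, ys, table, x, y):
    """Bilinearly interpolate table[i][j] defined on the grid xs × ys."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]     + tx * (1 - ty) * table[i + 1][j]
          + (1 - tx) * ty       * table[i][j + 1] + tx * ty       * table[i + 1][j + 1])

# Hypothetical drain-current LUT: rows indexed by Vgs, columns by Vds (in µA)
vgs = [0.0, 0.4, 0.8]
vds = [0.0, 0.5, 1.0]
id_ua = [[0.0,  0.0,  0.0],
         [0.0,  5.0,  6.0],
         [0.0, 20.0, 28.0]]
```

A call such as `bilinear(vgs, vds, id_ua, 0.6, 0.75)` then stands in for a full TCAD solve at an off-grid bias point, which is where the claimed speed-up comes from.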
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yao; Wan, Liang; Chen, Kai
2015-04-25
An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.
Table look-up estimation of signal and noise parameters from quantized observables
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1986-01-01
A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2014-10-28
Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
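The sensitivity-gated update described above can be sketched in a few lines; the map contents, indices, and threshold below are hypothetical placeholders, not values from the patent:

```python
def adjust_maps(map1, map2, idx, sens1, sens2, delta, threshold=0.1):
    """Sketch of the sensitivity-gated update: only a map whose parameter
    measurably moves the performance variable is corrected.

    idx      -- (row, col) cell selected by the detected operating conditions
    sens1/2  -- measured sensitivity of the performance variable to each parameter
    delta    -- remaining gap between the performance variable and its target
    """
    i, j = idx
    if abs(sens1) > threshold:
        map1[i][j] += delta / sens1   # move parameter 1 toward the target
    if abs(sens2) > threshold:
        map2[i][j] += delta / sens2   # move parameter 2 toward the target
    return map1, map2
```

Gating on sensitivity avoids large corrections through a parameter that barely affects the output, which would otherwise make the update unstable.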
Spectral Retrieval of Latent Heating Profiles from TRMM PR Data: Comparison of Look-Up Tables
NASA Technical Reports Server (NTRS)
Shige, Shoichi; Takayabu, Yukari N.; Tao, Wei-Kuo; Johnson, Daniel E.; Shie, Chung-Lin
2003-01-01
The primary goal of the Tropical Rainfall Measuring Mission (TRMM) is to use the information about distributions of precipitation to determine the four-dimensional (i.e., temporal and spatial) patterns of latent heating over the whole tropical region. The Spectral Latent Heating (SLH) algorithm has been developed to estimate latent heating profiles for the TRMM Precipitation Radar (PR) with a cloud-resolving model (CRM). The method uses CRM-generated heating-profile look-up tables for three rain types: convective, shallow stratiform, and anvil rain (deep stratiform with a melting level). For convective and shallow stratiform regions, the look-up table is indexed by the precipitation top height (PTH); for the anvil region, on the other hand, it is indexed by the precipitation rate at the melting level instead of PTH. For global applications, it is necessary to examine the universality of the look-up table. In this paper, we compare the look-up tables produced from numerical simulations of cloud ensembles forced with Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE) data and GARP Atlantic Tropical Experiment (GATE) data. There are some notable differences between the TOGA-COARE and GATE tables, especially for convective heating. First, there are more deep convective profiles in the TOGA-COARE table than in the GATE table, mainly due to differences in SST. Second, shallow convective heating is stronger in the TOGA-COARE table than in the GATE table, which might be attributable to differences in the strength of the low-level inversions. Third, the altitudes of the convective heating maxima are higher in the TOGA-COARE table than in the GATE table. The levels of the convective heating maxima are located just below the melting level, because warm-rain processes are prevalent in tropical oceanic convective systems.
Differences in levels of convective heating maxima probably reflect differences in melting layer heights. We are now extending our study to simulations of other field experiments (e.g. SCSMEX and ARM) in order to examine the universality of the look-up table. The impact of look-up tables on the retrieved latent heating profiles will also be assessed.
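The table structure described above can be sketched as a mapping from (rain type, PTH bin) to a heating profile. The bin edges and profile numbers below are placeholders for illustration, not CRM output:

```python
import bisect

PTH_BINS = [2.0, 4.0, 6.0, 8.0, 10.0]   # hypothetical precipitation-top-height bin edges (km)

# Hypothetical heating profiles (K/day at 5 levels), one per (rain type, PTH bin)
HEATING_LUT = {
    ('convective', k): [0.1 * k * z for z in range(5)] for k in range(5)
}

def heating_profile(rain_type, pth_km):
    """Select the tabulated heating profile for a rain type and PTH,
    clamping to the deepest bin for very tall precipitation tops."""
    k = min(bisect.bisect_left(PTH_BINS, pth_km), len(PTH_BINS) - 1)
    return HEATING_LUT[(rain_type, k)]
```

For the anvil category the key would be a precipitation-rate bin at the melting level rather than PTH, but the look-up itself has the same shape.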
Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool
NASA Astrophysics Data System (ADS)
Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin
2016-02-01
The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems involve dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques become ineffective whenever changes occur in the system. Therefore, this research develops a decision support tool (DST) based on a promising artificial intelligence technique that can accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed in three phases: (i) look-up table generation, (ii) inverse model development, and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as part of the development of the DST. A discrete event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN, and Slack rules; the best performance measures (mean flow time, mean tardiness, and mean lateness) and the job order requirements (inter-arrival time, due date tightness, and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.
A VLSI architecture for performing finite field arithmetic with reduced table look-up
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Reed, I. S.
1986-01-01
A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element are found by the use of several smaller tables. The method is based on the Chinese Remainder Theorem and often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic, including multiplication, division, and the finding of an inverse element in the finite field.
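For context, conventional log/antilog look-up multiplication in a small field such as GF(2^4) works as below; Glover's method further splits such tables via the Chinese Remainder Theorem to shrink the memory, which this sketch does not attempt:

```python
# Log/antilog tables for GF(2^4) with primitive polynomial x^4 + x + 1.
ANTILOG = [0] * 15
v = 1
for i in range(15):
    ANTILOG[i] = v          # ANTILOG[i] = alpha^i
    v <<= 1
    if v & 0x10:
        v ^= 0b10011        # reduce modulo x^4 + x + 1
LOG = {a: i for i, a in enumerate(ANTILOG)}

def gf16_mul(a, b):
    """Multiply via two log look-ups, an integer add mod 15, one antilog look-up."""
    if a == 0 or b == 0:
        return 0
    return ANTILOG[(LOG[a] + LOG[b]) % 15]

def gf16_inv(a):
    """Inverse: alpha^(15 - log a)."""
    return ANTILOG[(15 - LOG[a]) % 15]
```

Division is then `gf16_mul(a, gf16_inv(b))`, so all three operations named in the abstract reduce to table accesses plus small integer arithmetic.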
On the look-up tables for the critical heat flux in tubes (history and problems)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirillov, P.L.; Smogalev, I.P.
1995-09-01
The complexity of the critical heat flux (CHF) problem for boiling in channels arises from the large number of variable factors and the variety of two-phase flows. The existence of several hundred correlations for predicting CHF demonstrates the unsatisfactory state of this problem. Phenomenological CHF models can provide only qualitative predictions of CHF, primarily in annular-dispersed flow. CHF look-up tables, which cover the results of numerous experiments, have gained increasing recognition over the last 15 years. These tables are based on statistical averaging of CHF values for each range of pressure, mass flux, and quality; CHF values for regions where no experimental data are available are obtained by extrapolation. Correcting these tables to account for the diameter effect is a complicated problem, and there are ranges of conditions where simple correlations cannot produce reliable results; therefore, the diameter effect on CHF needs additional study. The modification of look-up table data for CHF in tubes to predict CHF in rod bundles must include a method that takes into account the nonuniformity of quality over the rod bundle cross section.
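A CHF look-up table of this kind is typically evaluated by trilinear interpolation over the (pressure, mass flux, quality) grid; a generic sketch, with grids and values purely hypothetical:

```python
import bisect

def _locate(grid, x):
    """Return (cell index, fractional position) of x in a sorted grid."""
    i = min(max(bisect.bisect_right(grid, x) - 1, 0), len(grid) - 2)
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return i, t

def chf_lookup(p_grid, g_grid, x_grid, table, p, g, x):
    """Trilinear interpolation in a CHF table indexed by
    (pressure, mass flux, quality); table[i][j][k] in kW/m^2."""
    i, tp = _locate(p_grid, p)
    j, tg = _locate(g_grid, g)
    k, tx = _locate(x_grid, x)
    out = 0.0
    for di, wp in ((0, 1 - tp), (1, tp)):
        for dj, wg in ((0, 1 - tg), (1, tg)):
            for dk, wx in ((0, 1 - tx), (1, tx)):
                out += wp * wg * wx * table[i + di][j + dj][k + dk]
    return out
```

Diameter and bundle corrections would be applied as multiplicative factors on top of this base interpolation, which is where the difficulties the abstract describes come in.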
Design of Cancelable Palmprint Templates Based on Look Up Table
NASA Astrophysics Data System (ADS)
Qiu, Jian; Li, Hengjian; Dong, Jiwen
2018-03-01
A novel cancelable palmprint template generation scheme is proposed in this paper. First, a Gabor filter and a chaotic matrix are used to extract palmprint features, which are then arranged into a row vector and divided into equal-size blocks. These blocks are converted to the corresponding decimals and mapped through look-up tables, forming the final cancelable palmprint features based on the selected check bits. Finally, collaborative-representation-based classification with regularized least squares is used for classification. Experimental results on the Hong Kong PolyU Palmprint Database verify that the proposed cancelable templates achieve very high performance and security levels, while also satisfying the needs of real-time applications.
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo
2015-11-01
This paper addresses issues in high-fidelity numerical simulation of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method, based on the REFPROP database, for accurate estimation of the nonlinear behavior of thermodynamic and fluid transport properties at transcritical conditions. On top of the look-up table method, we propose a numerical method that achieves high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity into the continuity and momentum equations in a physically consistent manner in order to capture the steep transcritical thermodynamic variations robustly while keeping the velocity field free of spurious oscillations. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of the total energy equation, to remain free of spurious pressure oscillations with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed as numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
Optoelectronic switch matrix as a look-up table for residue arithmetic.
Macdonald, R I
1987-10-01
The use of optoelectronic matrix switches to perform look-up table functions in residue arithmetic processors is proposed. In this application, switchable detector arrays give the advantage of a greatly reduced requirement for optical sources by comparison with previous optoelectronic residue processors.
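The look-up table realized by the switch matrix can be sketched in software: each residue digit of a sum is a single table access, with no carries between digits. The moduli below are illustrative:

```python
# A residue adder as pure table look-up: for modulus m, an m-by-m table maps
# (a, b) -> (a + b) mod m, which is what a switchable detector matrix realizes.
def make_add_table(m):
    return [[(a + b) % m for b in range(m)] for a in range(m)]

MODULI = (5, 7, 8)                      # pairwise-coprime moduli; range 5*7*8 = 280
TABLES = {m: make_add_table(m) for m in MODULI}

def to_residues(x):
    """Represent an integer by its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)

def residue_add(r1, r2):
    """Carry-free addition: each digit is one independent table look-up."""
    return tuple(TABLES[m][a][b] for m, a, b in zip(MODULI, r1, r2))
```

Because the digits are independent, all three look-ups can proceed in parallel, which is the attraction of residue arithmetic for optoelectronic hardware.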
NASA Astrophysics Data System (ADS)
Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro
2006-04-01
A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in a 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64 K-bit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multiphase pseudo-asynchronous operation with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. The chip operates at 33 MHz in an 8-LUT cascade configuration at 122 mW. Benchmark results show that it achieves performance comparable to field-programmable gate arrays (FPGAs).
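The cascade principle can be sketched in a few lines: each cell is a small memory that maps (incoming rail value, next input) to an outgoing rail value, so evaluating the whole function is a chain of memory reads. Eight-input parity with one input bit per cell is used here purely as a toy example, not as a model of the actual chip:

```python
def make_parity_cell():
    """One cell's memory: address = (rail << 1) | x, content = rail XOR x."""
    return [0, 1, 1, 0]

def cascade_eval(cells, inputs):
    """Evaluate the cascade: one memory read per cell, rail value passed along."""
    rail = 0
    for cell, x in zip(cells, inputs):
        rail = cell[(rail << 1) | x]    # each step is a single LUT (SRAM) access
    return rail

cells = [make_parity_cell() for _ in range(8)]
```

A function of n inputs thus needs n small memories read in sequence instead of one 2^n-entry memory, which is the memory saving a LUT cascade trades against latency.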
Radiometry simulation within the end-to-end simulation tool SENSOR
NASA Astrophysics Data System (ADS)
Wiest, Lorenz; Boerner, Anko
2001-02-01
An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor), and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate that image cube.
A shower look-up table to trace the dynamics of meteoroid streams and their sources
NASA Astrophysics Data System (ADS)
Jenniskens, Petrus
2018-04-01
Meteor showers are caused by meteoroid streams from comets (and some primitive asteroids). They trace the comet population and its dynamical evolution, warn of dangerous long-period comets that can pass close to Earth's orbit, outline volumes of space with a higher satellite impact probability, and define how meteoroids evolve in the interplanetary medium. Ongoing meteoroid orbit surveys have mapped these showers in recent years, but the surveys are now running up against a more and more complicated scene. The IAU Working List of Meteor Showers has reached 956 entries to be investigated (as of March 1, 2018). The picture is further complicated by the discovery that radar-detected streams are often different, or differently distributed, than video-detected streams. Complicating matters even more, some meteor showers are active over many months, during which their radiant position gradually changes, which makes the use of mean orbits as a proxy for a meteoroid stream's identity meaningless. The dispersion of the stream in space and time is important to that identity and contains much information about its origin and dynamical evolution. To make sense of the meteor shower zoo, a Shower Look-Up Table was created that captures this dispersion. The Shower Look-Up Table has enabled the automated identification of showers in the ongoing CAMS video-based meteoroid orbit survey, the results of which are now presented online in near-real time at http://cams.seti.org/FDL/. Visualization tools have been built that depict the streams in a planetarium setting. Examples will be presented that sample the range of meteoroid streams that this look-up table describes, and possibilities for further dynamical studies will be discussed.
Extension of Generalized Fluid System Simulation Program's Fluid Property Database
NASA Technical Reports Server (NTRS)
Patel, Kishan
2011-01-01
This internship focused on the development of additional capabilities for the Generalized Fluid System Simulation Program (GFSSP). GFSSP is a thermo-fluid code used to evaluate system performance by a finite-volume-based network analysis method. The program was developed primarily to analyze the complex internal flow of propulsion systems and is capable of solving many problems related to thermodynamics and fluid mechanics. GFSSP is integrated with thermodynamic programs that provide fluid properties for sub-cooled, superheated, and saturation states. For fluids that are not included in the thermodynamic property programs, look-up property tables can be provided; however, the look-up tables of the current release can only handle sub-cooled and superheated states. The primary purpose of the internship was to extend the look-up tables to handle saturated states. This involved (a) generating a property table using REFPROP, a widely used thermodynamic property program, and (b) modifying the Fortran source code to read in an additional property table containing saturation data for both saturated-liquid and saturated-vapor states. A method was also implemented to calculate the thermodynamic properties of user-defined fluids within the saturation region, given values of pressure and enthalpy. These additions required new code to be written, and older code had to be adjusted to accommodate the new capabilities. Ultimately, the changes will be incorporated into future versions of GFSSP. This paper describes the development and validation of the new capability.
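Inside the saturation dome, properties follow from the quality computed from pressure and enthalpy; a minimal sketch with one hypothetical (not REFPROP-generated) isobar, and with the pressure taken as an exact table match for brevity:

```python
def saturation_props(p, h, sat_table):
    """Given pressure p and enthalpy h inside the dome, compute the quality x
    from the saturated-liquid (f) and saturated-vapor (g) values, then mix any
    tabulated property linearly: prop = prop_f + x * (prop_g - prop_f)."""
    row = sat_table[p]                 # one pressure isobar (exact match for brevity)
    hf, hg = row['hf'], row['hg']
    x = (h - hf) / (hg - hf)           # thermodynamic quality, 0..1
    props = {}
    for name in ('v', 's'):            # specific volume, specific entropy
        f, g = row[name + 'f'], row[name + 'g']
        props[name] = f + x * (g - f)
    return x, props

# Hypothetical isobar near atmospheric pressure (units: kPa, kJ/kg, m^3/kg, kJ/kg-K)
table = {101.325: {'hf': 419.0, 'hg': 2676.0, 'vf': 0.001, 'vg': 1.673,
                   'sf': 1.307, 'sg': 7.355}}
```

A production version would first interpolate the f and g rows between tabulated pressures; the quality-weighted mixing step stays the same.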
Assessment of the Broadleaf Crops Leaf Area Index Product from the Terra MODIS Instrument
NASA Technical Reports Server (NTRS)
Tan, Bin; Hu, Jiannan; Huang, Dong; Yang, Wenze; Zhang, Ping; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.
2005-01-01
The first significant processing of Terra MODIS data, called Collection 3, covered the period from November 2000 to December 2002. The Collection 3 leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) products for broadleaf crops exhibited three anomalies: (a) high LAI values during the peak growing season, (b) differences in LAI seasonality between the radiative-transfer-based main algorithm and the vegetation-index-based back-up algorithm, and (c) too few retrievals from the main algorithm during the summer period when the crops are at full flush. The cause of these anomalies is a mismatch between the reflectances modeled by the algorithm and the MODIS measurements. Therefore, the look-up tables accompanying the algorithm were revised and implemented in the Collection 4 processing. The main algorithm with the revised look-up tables generated retrievals for over 80% of the pixels with valid data. Retrievals from the back-up algorithm, although few, should be used with caution, as they are generated from surface reflectances with high uncertainties.
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Spinhime, James D.; Berkoff, Timothy A.; Holben, Brent; Tsay, Si-Chee; Bucholtz, Anthony
2004-01-01
Backscatter lidar signals are a function of both backscatter and extinction; hence, these lidar observations alone cannot separate the two quantities. The aerosol extinction-to-backscatter ratio, S, is the key parameter required to accurately retrieve extinction and optical depth from backscatter lidar observations of aerosol layers. S is commonly defined as 4π divided by the product of the single-scattering albedo and the phase function at a 180-degree scattering angle. Values of S for different aerosol types are not well known, and are even more difficult to determine when aerosols become mixed. Here we present a new lidar-sunphotometer S database derived from observations of the NASA Micro-Pulse Lidar Network (MPLNET), a growing worldwide network of eye-safe backscatter lidars co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Values of S for different aerosol species and geographic regions will be presented, and a framework for constructing an S look-up table will be shown. Look-up tables of S are needed to calculate aerosol extinction and optical depth from space-based lidar observations in the absence of co-located AOD data. Applications of the new S look-up table for reprocessing aerosol products from NASA's Geoscience Laser Altimeter System (GLAS) will be discussed.
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation generates an under-exposed image when a low-budget complementary metal-oxide-semiconductor image sensor (CIS) is used. Conventional color correction algorithms can efficiently correct under-exposed images, but they generally do not run in real time and need at least one frame memory when implemented in hardware. The authors propose a real-time look-up-table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT), and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before the captured image is processed, the method does not require frame memory to buffer image data and can therefore greatly reduce the cost of the CIS. The method supports not only single image capture but also bracketing, capturing three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5-megapixel CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
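Histogram matching of the two previews, as used to build the ILUT, can be sketched as follows for 8-bit data (a software model only; the paper's version is a hardware implementation):

```python
def build_ilut(short_img, long_img, levels=256):
    """Build a correction LUT by matching the histogram of the short-exposure
    preview to that of the long-exposure preview (both flat lists of 0..255)."""
    def cdf(img):
        hist = [0] * levels
        for v in img:
            hist[v] += 1
        total, acc, out = len(img), 0, []
        for c in hist:
            acc += c
            out.append(acc / total)
        return out

    cs, cl = cdf(short_img), cdf(long_img)
    lut, j = [], 0
    for v in range(levels):
        # smallest long-exposure level whose CDF covers the short-exposure CDF
        while j < levels - 1 and cl[j] < cs[v]:
            j += 1
        lut.append(j)
    return lut

def correct(img, lut):
    """Apply the LUT pixel by pixel; no frame buffering is required."""
    return [lut[v] for v in img]
```

Because the LUT is built from the small previews before the full-resolution capture arrives, the main image can be corrected as it streams through, which is what removes the frame-memory requirement.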
NASA Astrophysics Data System (ADS)
Jo, Hyunho; Sim, Donggyu
2014-06-01
We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
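EPB removal itself is a simple byte-level rule defined by H.264/AVC: a 0x03 byte that follows two zero bytes is an inserted escape and is discarded from the RBSP. A straightforward software model of the rule (the BsPU performs this on the fly, without the repeated memory accesses or extra buffer):

```python
def remove_epb(nal: bytes) -> bytes:
    """Strip H.264 emulation prevention bytes: a 0x03 that follows two
    consecutive zero bytes is an escape and is dropped."""
    out = bytearray()
    zeros = 0
    for b in nal:
        if zeros >= 2 and b == 0x03:
            zeros = 0              # drop the escape byte itself
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0 else 0
    return bytes(out)
```

Resetting the zero counter after each dropped escape matters: consecutive escaped sequences such as 0x00 0x00 0x03 0x00 0x00 0x03 must each lose their own 0x03.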
Generating functional analysis of minority games with inner product strategy definitions
NASA Astrophysics Data System (ADS)
Coolen, A. C. C.; Shayeghi, N.
2008-08-01
We use generating functional methods to solve the so-called inner product versions of the minority game (MG), with fake and/or real market histories, by generalizing the theory developed recently for look-up table MGs with real histories. The phase diagrams of the look-up table and inner product MG versions are generally found to be identical, with the exception of inner product MGs where histories are sampled linearly, which are found to be structurally critical. However, we encounter interesting differences both in the theory (where the role of the history frequency distribution in look-up table MGs is taken over by the eigenvalue spectrum of a history covariance matrix in inner product MGs) and in the static and dynamic phenomenology of the models. Our theoretical predictions are supported by numerical simulations.
Identification of sea ice types in spaceborne synthetic aperture radar data
NASA Technical Reports Server (NTRS)
Kwok, Ronald; Rignot, Eric; Holt, Benjamin; Onstott, R.
1992-01-01
This study presents an approach for identification of sea ice types in spaceborne SAR image data. The unsupervised classification approach involves cluster analysis for segmentation of the image data followed by cluster labeling based on previously defined look-up tables containing the expected backscatter signatures of different ice types measured by a land-based scatterometer. Extensive scatterometer observations and experience accumulated in field campaigns during the last 10 yr were used to construct these look-up tables. The classification approach, its expected performance, the dependence of this performance on radar system performance, and expected ice scattering characteristics are discussed. Results using both aircraft and simulated ERS-1 SAR data are presented and compared to limited field ice property measurements and coincident passive microwave imagery. The importance of an integrated postlaunch program for the validation and improvement of this approach is discussed.
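The cluster-labeling step lends itself to a short sketch: each segmented cluster's mean backscatter is matched against a table of expected ice-type signatures. The signature values and the nearest-neighbor rule below are illustrative assumptions, not the campaign-derived scatterometer tables used in the study.

```python
def label_clusters(cluster_means, signature_table):
    """Label each cluster with the ice type whose expected backscatter
    signature (from the look-up table) is nearest to the cluster mean.
    Illustrative 1-D distance rule; the real tables are multi-channel."""
    labels = {}
    for cid, mean in cluster_means.items():
        ice_type = min(signature_table,
                       key=lambda t: abs(signature_table[t] - mean))
        labels[cid] = ice_type
    return labels

# Hypothetical signatures in dB; not the values measured in the field campaigns.
signatures = {"multiyear": -8.0, "first-year": -14.0, "new ice": -20.0}
# label_clusters({0: -13.2, 1: -8.5}, signatures)
# -> {0: "first-year", 1: "multiyear"}
```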
Piovesana, Adina M; Harrison, Jessica L; Ducat, Jacob J
2017-12-01
This study aimed to develop a motor-free short-form of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) that allows clinicians to estimate the Full Scale Intelligence Quotients of youths with motor impairments. Using the reliabilities and intercorrelations of six WISC-V motor-free subtests, psychometric methodologies were applied to develop look-up tables for four Motor-free Short-form indices: Verbal Comprehension Short-form, Perceptual Reasoning Short-form, Working Memory Short-form, and a Motor-free Intelligence Quotient. Index-level discrepancy tables were developed using the same methods to allow clinicians to statistically compare visual, verbal, and working memory abilities. The short-form indices had excellent reliabilities ( r = .92-.97) comparable to the original WISC-V. This motor-free short-form of the WISC-V is a reliable alternative for the assessment of intellectual functioning in youths with motor impairments. Clinicians are provided with user-friendly look-up tables, index level discrepancy tables, and base rates, displayed similar to those in the WISC-V manuals to enable interpretation of assessment results.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-16
.... FMCSA used a look-up table for scaling the individual hours of work in the previous week. Look-up tables...). Column C (cells C93-137) presents the values scaled to our average work (52 hours per week of work) and... Evaluation to link the hours worked in the previous week to fatigue the following week. On January 28, 2011...
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.; Collins, Stuart A., Jr.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
Habiby, S F; Collins, S A
1987-11-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
Fuzzy Rule Suram for Wood Drying
NASA Astrophysics Data System (ADS)
Situmorang, Zakarias
2017-12-01
Implementing a fuzzy rule base requires a look-up table for the defuzzification step; the look-up table supplies the defuzzified values to the plant actuator. The suram rule, based on fuzzy logic with the weather variables of ambient temperature and ambient humidity, is implemented here for a wood drying process. The membership functions of the state variables are represented by the error value and the change of error, using typical triangular and trapezoidal maps. The analysis yields four fuzzy rules covering 81 conditions, so the output of the control system can be constructed for a range of weather and air conditions. The controller is used to minimize the electric energy consumed by the heater. One drying schedule cycle is a sequence of chamber conditions matched to the wood species being processed.
Improved look-up table method of computer-generated holograms.
Wei, Hui; Gong, Guanghong; Li, Ni
2016-11-10
Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT. Optical experiments are carried out to validate the effectiveness of the proposed method.
Rodrigues, A; Nguyen, G; Li, Y; Roy Choudhury, K; Kirsch, D; Das, S; Yoshizumi, T
2012-06-01
Purpose: to verify the accuracy of TG-61 based dosimetry with MOSFET technology using a tissue-equivalent mouse phantom. The mouse dose given by a TG-61 based look-up table was verified against MOSFET measurements. The look-up table followed a TG-61 based commissioning and used a solid water block and radiochromic film. A tissue-equivalent mouse phantom (2 cm diameter, 8 cm length) was used for the MOSFET method. Detectors were placed in the phantom at the head and center of the body. MOSFETs were calibrated in air with an ion chamber, and an f-factor was applied to derive the dose to tissue. In CBCT mode, the phantom was positioned such that the system isocenter coincided with the center of the MOSFET, with the active volume perpendicular to the beam. The absorbed dose was measured three times for each of seven different collimators. The exposure parameters were 225 kVp, 13 mA, and an exposure time of 20 s. For the 10 mm, 15 mm, and 20 mm circular collimators, the dose measured in the phantom was 4.3%, 2.7%, and 6% lower than the TG-61 based measurements, respectively. For the 10 × 10 mm, 20 × 20 mm, and 40 × 40 mm collimators, the dose difference was 4.7%, 7.7%, and 2.9%, respectively. The MOSFET data were systematically lower than the commissioning data. The dose difference is due to the increased scatter radiation in the solid water block relative to the dimensions of the mouse phantom, leading to an overestimation of the actual dose in the solid water block. The MOSFET method with a tissue-equivalent mouse phantom provides less labor-intensive, geometry-specific dosimetry, with dose tolerances of up to ± 2.7%. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.
2017-12-01
In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTMs) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate top-of-the-atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations become computationally expensive. Look-up table interpolation can reduce the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. To reduce our computational effort and improve on the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and fewer RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high-spatial-resolution satellite missions.
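The look-up table baseline being compared against can be illustrated with plain 1-D linear interpolation over precomputed nodes; the toy radiance function and node counts below are assumptions chosen only to show how node density controls interpolation error for a non-linear function:

```python
import numpy as np

def lut_interp(node_values, node_x, x):
    """1-D linear interpolation over look-up table nodes -- the baseline
    against which a neural-network surrogate is compared. Denser nodes
    reduce interpolation error for non-linear radiances at a memory cost."""
    return np.interp(x, node_x, node_values)

# Toy non-linear "radiance" (an assumption, not a VLIDORT output):
f = lambda x: np.exp(-3.0 * x)
sparse = np.linspace(0.0, 1.0, 5)     # coarse node grid
dense = np.linspace(0.0, 1.0, 101)    # dense node grid
x = 0.37
err_sparse = abs(lut_interp(f(sparse), sparse, x) - f(x))
err_dense = abs(lut_interp(f(dense), dense, x) - f(x))
# err_dense is far smaller than err_sparse, at 20x the table memory.
```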
Efficient generation of 3D hologram for American Sign Language using look-up table
NASA Astrophysics Data System (ADS)
Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo
2010-02-01
American Sign Language (ASL) is one of the languages that most helps hearing-impaired people communicate. Current 2-D broadcasting and 2-D movies use ASL to convey information, help viewers understand the situation in a scene, and translate foreign languages. Because of this usefulness, ASL will not disappear from future three-dimensional (3-D) broadcasting or 3-D movies. Several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method; however, these methods either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT size and no loss of computational speed. We therefore propose a method to efficiently generate holographic ASL for holographic 3-D TV or 3-D movies using the look-up table method. The proposed method largely consists of five steps: construction of the LUT for each ASL image, extraction of characters from scripts or the situation, retrieval of the fringe patterns for those characters from the ASL LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the ASL, and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.
A Web-Based Visualization and Animation Platform for Digital Logic Design
ERIC Educational Resources Information Center
Shoufan, Abdulhadi; Lu, Zheng; Huss, Sorin A.
2015-01-01
This paper presents a web-based education platform for the visualization and animation of the digital logic design process. This includes the design of combinatorial circuits using logic gates, multiplexers, decoders, and look-up-tables as well as the design of finite state machines. Various configurations of finite state machines can be selected…
NASA Astrophysics Data System (ADS)
Zhu, Xiaohua; Li, Chuanrong; Tang, Lingli
2018-03-01
Leaf area index (LAI) is a key structural characteristic of vegetation and plays a significant role in global change research. Several methods and sources of remotely sensed data have been evaluated for LAI estimation. This study aimed to evaluate the suitability of the look-up-table (LUT) approach for crop LAI retrieval from Satellite Pour l'Observation de la Terre (SPOT)-5 data and to establish an LUT approach for LAI inversion based on scale information. The LAI inversion result was validated by in situ LAI measurements, indicating that the LUT generated with the PROSAIL model (PROSPECT leaf optical properties spectra + SAIL scattering by arbitrarily inclined leaves) was suitable for crop LAI estimation, with a root mean square error (RMSE) of ~0.31 m2/m2 and determination coefficient (R2) of 0.65. The scale effect of crop LAI was analyzed based on Taylor expansion theory, indicating that when the SPOT data were aggregated over 200 × 200 pixel blocks, the relative error was significant, at 13.7%. Finally, an LUT method integrated with scale information is proposed in this article, improving the inversion accuracy to an RMSE of 0.20 m2/m2 and R2 of 0.83.
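Generic LUT inversion of this kind reduces to a nearest-spectrum search: simulate reflectance over a grid of LAI values, then pick the table entry closest (in RMSE) to the observation. The sketch below uses a hypothetical saturating forward model as a stand-in for PROSAIL; the model, grid, and function names are our assumptions.

```python
import numpy as np

def lut_invert(observed, lut_spectra, lut_lai):
    """LUT inversion: return the LAI whose simulated reflectance spectrum
    is closest (RMSE) to the observed spectrum. The study fills the table
    with PROSAIL runs; a toy forward model stands in here."""
    rmse = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
    return lut_lai[np.argmin(rmse)]

# Hypothetical forward model: reflectance saturates with LAI (not PROSAIL).
def toy_model(lai, wavelengths):
    return 0.5 * (1.0 - np.exp(-0.6 * lai)) + 0.01 * wavelengths

wl = np.linspace(0.0, 1.0, 4)
lai_grid = np.linspace(0.0, 6.0, 121)
spectra = np.stack([toy_model(l, wl) for l in lai_grid])
obs = toy_model(2.5, wl)
# lut_invert(obs, spectra, lai_grid) recovers an LAI of 2.5
```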
Simulation model of a twin-tail, high performance airplane
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.
1992-01-01
The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first-order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner-loop augmentation in the up-and-away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
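A 2-D table look-up with linear interpolation over angle-of-attack and sideslip, as used for the aerodynamic database, can be sketched as follows. The table layout and grids are illustrative (the grid spacing matches the stated database ranges, but the real database's breakpoints and coefficients are not given here):

```python
import numpy as np

def bilinear_lookup(table, alphas, betas, a, b):
    """Wind-tunnel-style table look-up with linear interpolation.
    table[i, j] holds a coefficient at angle-of-attack alphas[i] and
    sideslip betas[j] (illustrative layout, uniform grids assumed)."""
    i = int(np.clip(np.searchsorted(alphas, a) - 1, 0, len(alphas) - 2))
    j = int(np.clip(np.searchsorted(betas, b) - 1, 0, len(betas) - 2))
    ta = (a - alphas[i]) / (alphas[i + 1] - alphas[i])
    tb = (b - betas[j]) / (betas[j + 1] - betas[j])
    return ((1 - ta) * (1 - tb) * table[i, j]
            + ta * (1 - tb) * table[i + 1, j]
            + (1 - ta) * tb * table[i, j + 1]
            + ta * tb * table[i + 1, j + 1])

alphas = np.arange(-10.0, 91.0, 5.0)   # deg, matching the database range
betas = np.arange(-20.0, 21.0, 5.0)    # deg
```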
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grove, John W.
2016-08-16
The xRage code supports a variety of hydrodynamic equation of state (EOS) models. In practice these are generally accessed in the executing code via a pressure-temperature based table look up. This document will describe the various models supported by these codes and provide details on the algorithms used to evaluate the equation of state.
Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo
2016-01-20
A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi
2012-10-22
This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi
2012-01-01
This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040
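A 3-D RGB look-up table of the kind described above trades memory for per-pixel speed: each color channel is quantized, and a fruit/not-fruit flag is precomputed per cell so that run-time detection is one memory access per pixel. The sketch below uses a hypothetical "reddish" rule in place of the paper's linear color models and peach histograms; the quantization depth is also an assumption.

```python
import numpy as np

def build_color_lut(model, bits=4):
    """Precompute a 3-D RGB look-up table: quantize each channel to `bits`
    bits and evaluate the color model once per cell center. The model here
    is a stand-in, not the paper's peach-optimized one."""
    n = 1 << bits
    step = 256 // n
    lut = np.zeros((n, n, n), dtype=bool)
    for r in range(n):
        for g in range(n):
            for b in range(n):
                lut[r, g, b] = model(r * step + step // 2,
                                     g * step + step // 2,
                                     b * step + step // 2)
    return lut

def classify(lut, pixel, bits=4):
    r, g, b = (c >> (8 - bits) for c in pixel)
    return lut[r, g, b]     # real-time detection: one table access per pixel

# Hypothetical "reddish" rule standing in for the fruit color models:
reddish = lambda r, g, b: r > 150 and r > g + 50 and r > b + 50
lut = build_color_lut(reddish)
```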
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alhroob, M.; Boyd, G.; Hasib, A.
Precision ultrasonic measurements in binary gas systems provide continuous real-time monitoring of mixture composition and flow. Using custom micro-controller-based electronics, we have developed an ultrasonic instrument, with numerous potential applications, capable of making continuous high-precision sound velocity measurements. The instrument measures sound transit times along two opposite directions aligned parallel to - or obliquely crossing - the gas flow. The difference between the two measured times yields the gas flow rate while their average gives the sound velocity, which can be compared with a sound velocity vs. molar composition look-up table for the binary mixture at a given temperature and pressure. The look-up table may be generated from prior measurements in known mixtures of the two components, from theoretical calculations, or from a combination of the two. We describe the instrument and its performance within numerous applications in the ATLAS experiment at the CERN Large Hadron Collider (LHC). The instrument can be of interest in other areas where continuous in-situ binary gas analysis and flowmetry are required. (authors)
Giltrap, Donna L; Ausseil, Anne-Gaelle E; Thakur, Kailash P; Sutherland, M Anne
2013-11-01
In this study, we developed emission factor (EF) look-up tables for calculating the direct nitrous oxide (N2O) emissions from grazed pasture soils in New Zealand. Look-up tables of long-term average direct emission factors (and their associated uncertainties) were generated using multiple simulations of the NZ-DNDC model over a representative range of major soil, climate and management conditions occurring in New Zealand using 20 years of climate data. These EFs were then combined with national activity data maps to estimate direct N2O emissions from grazed pasture in New Zealand using 2010 activity data. The total direct N2O emissions using look-up tables were 12.7±12.1 Gg N2O-N (equivalent to using a national average EF of 0.70±0.67%). This agreed with the amount calculated using the New Zealand specific EFs (95% confidence interval 7.7-23.1 Gg N2O-N), although the relative uncertainty increased. The high uncertainties in the look-up table EFs were primarily due to the high uncertainty of the soil parameters within the selected soil categories. Uncertainty analyses revealed that the uncertainty in soil parameters contributed much more to the uncertainty in N2O emissions than the inter-annual weather variability. The effect of changes to fertiliser applications was also examined and it was found that for fertiliser application rates of 0-50 kg N/ha for sheep and beef and 60-240 kg N/ha for dairy the modelled EF was within ±10% of the value simulated using annual fertiliser application rates of 15 kg N/ha and 140 kg N/ha respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
ICL: The Image Composition Language
NASA Technical Reports Server (NTRS)
Foley, James D.; Kim, Won Chul
1986-01-01
The Image Composition Language (ICL) provides a convenient way for programmers of interactive graphics application programs to define how the video look-up table of a raster display system is to be loaded. The ICL allows one or several images stored in the frame buffer to be combined in a variety of ways. The ICL treats these images as variables, and provides arithmetic, relational, and conditional operators to combine the images, scalar variables, and constants in image composition expressions. The objective of ICL is to provide programmers with a simple way to compose images, to relieve the tedium usually associated with loading the video look-up table to obtain desired results.
A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen
In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
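The check-node core operation being tabulated is the log-domain "box-plus" of two LLRs: a main term (the signed minimum) plus a correction factor. A software model of merging both into one quantized 2D LUT, in the spirit of the scheme above, looks like this; the quantization step and table size are arbitrary choices of ours, not the paper's:

```python
import math

def boxplus_exact(a, b):
    """Exact check-node core operation of the sum-product algorithm:
    signed minimum (main term) plus the two-part correction factor."""
    return (math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))
            + math.log1p(math.exp(-abs(a + b)))
            - math.log1p(math.exp(-abs(a - b))))

# 2D LUT over quantized magnitudes, merging main term and correction factor.
# STEP and SIZE are illustrative; a hardware design would fix them by analysis.
STEP, SIZE = 0.25, 64
LUT = [[boxplus_exact(i * STEP, j * STEP) for j in range(SIZE)]
       for i in range(SIZE)]

def boxplus_lut(a, b):
    """Adder-free check-node update: one table access plus sign logic."""
    i = min(int(round(abs(a) / STEP)), SIZE - 1)
    j = min(int(round(abs(b) / STEP)), SIZE - 1)
    return math.copysign(1.0, a) * math.copysign(1.0, b) * LUT[i][j]
```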
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied for the first time to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that, compared with the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods, the average number of calculated object points of the proposed method was reduced to 86.95% and 86.53%, and the average calculation time per object point to 34.99% and 32.30%, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andrew; Lawrence, Earl
The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
a Mapping Method of Slam Based on Look up Table
NASA Astrophysics Data System (ADS)
Wang, Z.; Li, J.; Wang, A.; Wang, J.
2017-09-01
In recent years, several V-SLAM (Visual Simultaneous Localization and Mapping) approaches have appeared, showing impressive reconstructions of the world. However, these maps are built with far more than the required information, a limitation that comes from processing each key-frame in full. In this paper we present, for the first time, a mapping method for visual SLAM based on a look-up table (LUT) that improves mapping effectively. Because the method extracts features in each cell into which the image is divided, it obtains a camera pose that is more representative of the whole key-frame. The tracking direction of key-frames is obtained by counting the parallax directions of the feature points. The LUT stores, for each tracking direction, the cells needed for mapping, which reduces redundant information in the key-frame and makes mapping more efficient. The results show that a better map with less noise is built in less than one-third of the time. We believe that the LUT's capacity for efficiently building maps makes it a good choice for the community to investigate in scene reconstruction problems.
Quantifying anti-gravity torques in the design of a powered exoskeleton.
Ragonesi, Daniel; Agrawal, Sunil; Sample, Whitney; Rahman, Tariq
2011-01-01
Designing an upper extremity exoskeleton for people with arm weakness requires knowledge of the passive and active residual force capabilities of users. This paper experimentally measures the passive gravitational torques of three groups of subjects: able-bodied adults, able-bodied children, and children with neurological disabilities. The experiment involves moving the arm to various positions in the sagittal plane and measuring the gravitational force at the wrist. This force is then converted to static gravitational torques at the elbow and shoulder. Torques from anthropometry-based look-up tables are compared with the empirical data. Results show that the look-up torques deviate from the experimentally measured torques as the arm reaches up and down. This experiment informs designers of upper-limb orthoses about the contribution of passive human joint torques.
Table-driven image transformation engine algorithm
NASA Astrophysics Data System (ADS)
Shichman, Marc
1993-04-01
A high-speed image transformation engine (ITE) was designed and a prototype built for use in a generic electronic light table and in image perspective transformation application code. The ITE takes any linear transformation, breaks the transformation into two passes, and resamples the image appropriately for each pass. The system performance is achieved by driving the engine with a set of look-up tables, computed at start-up time, for the calculation of pixel output contributions. Anti-aliasing is done automatically in the image resampling process. Operations such as multiplications and trigonometric functions are minimized. This algorithm can be used for texture mapping, image perspective transformation, electronic light tables, and virtual reality.
NASA Astrophysics Data System (ADS)
Berdanier, Reid A.; Key, Nicole L.
2016-03-01
The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.
Kim, Seung-Cheol; Kim, Eun-Soo
2009-02-20
In this paper we propose a new approach for fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and the novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into the N-point redundancy map according to the number of the adjacent object points having the same 3D value. Based on this redundancy map, N-point principle fringe patterns (PFPs) are newly calculated by using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, object points to be involved in calculation of the CGH pattern can be dramatically reduced and, as a result, an increase of computational speed can be obtained. Some experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
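The N-point redundancy map rests on run-length encoding of adjacent object points that share the same value. A minimal sketch of that grouping step (function name and data are illustrative, not from the paper):

```python
def redundancy_map(points):
    """Group consecutive equal values into runs: value -> list of run lengths.

    Mirrors the idea of an N-point redundancy map: each run of N adjacent
    points with the same value can be handled by one N-point fringe pattern
    instead of N separate 1-point patterns.
    """
    runs = {}
    i = 0
    while i < len(points):
        j = i
        while j < len(points) and points[j] == points[i]:
            j += 1
        runs.setdefault(points[i], []).append(j - i)
        i = j
    return runs

row = [5, 5, 5, 2, 2, 7, 5, 5]
rmap = redundancy_map(row)
# {5: [3, 2], 2: [2], 7: [1]}
```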
All-optical 10Gb/s ternary-CAM cell for routing look-up table applications.
Mourgias-Alexandris, George; Vagionas, Christos; Tsakyridis, Apostolos; Maniotis, Pavlos; Pleros, Nikos
2018-03-19
We experimentally demonstrate the first all-optical Ternary-Content Addressable Memory (T-CAM) cell that operates at 10Gb/s and comprises two monolithically integrated InP Flip-Flops (FF) and a SOA-MZI optical XOR gate. The two FFs are responsible for storing the data bit and the ternary state 'X', respectively, with the XOR gate used for comparing the stored FF-data and the search bit. The experimental results reveal error-free operation at 10Gb/s for both Write and Ternary Content Addressing of the T-CAM cell, indicating that the proposed optical T-CAM cell could in principle lead to all-optical T-CAM-based Address Look-up memory architectures for high-end routing applications.
Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation
NASA Astrophysics Data System (ADS)
Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.
2018-04-01
Because targets may appear at arbitrary orientations in unmanned aerial vehicle (UAV) images, this paper studies target detection based on rotation-invariant features and proposes a Rotation-Invariant Fast Features (RIFF) method accelerated by look-up tables and polar coordinates for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of standard RIFF, while its computational efficiency is greatly improved.
SU-E-T-169: Characterization of Pacemaker/ICD Dose in SAVI HDR Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalavagunta, C; Lasio, G; Yi, B
2015-06-15
Purpose: It is important to estimate dose to the pacemaker (PM)/implantable cardioverter defibrillator (ICD) before undertaking accelerated partial breast treatment using high dose rate (HDR) brachytherapy. Kim et al. have reported HDR PM/ICD dose using a single-source balloon applicator. To the authors' knowledge, there has so far been no published PM/ICD dosimetry literature for the Strut Adjusted Volume Implant (SAVI, Cianna Medical, Aliso Viejo, CA). This study aims to fill this gap by generating a dose look-up table (LUT) to predict maximum dose to the PM/ICD in SAVI HDR brachytherapy. Methods: CT scans for 3D dosimetric planning were acquired for four SAVI applicators (6-1-mini, 6-1, 8-1 and 10-1) expanded to their maximum diameter in air. The CT datasets were imported into the Elekta Oncentra TPS for planning and each applicator was digitized in a multiplanar reconstruction window. A dose of 340 cGy was prescribed to the surface of a 1 cm expansion of the SAVI applicator cavity. Cartesian coordinates of the digitized applicator were determined in the treatment planning system, leading to the generation of a dose distribution and a corresponding distance-dose prediction look-up table (LUT) for distances from 2 to 15 cm (6-1-mini) and 2 to 20 cm (10-1). The deviation between the LUT doses and the dose to the cardiac device in a clinical case was evaluated. Results: The distance-dose look-up tables were compared to a clinical SAVI plan, and the discrepancy between the maximum dose predicted by the LUT and the clinical plan was found to be in the range (-0.44%, 0.74%) of the prescription dose. Conclusion: The distance-dose look-up tables for SAVI applicators can be used to estimate the maximum dose to the ICD/PM, with a potential usefulness for quick assessment of dose to the cardiac device prior to applicator placement.
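A distance-dose LUT of the kind described is typically queried by linear interpolation between tabulated distances. A sketch with made-up, non-clinical numbers following a roughly inverse-square falloff:

```python
def dose_at(lut, distance_cm):
    """Linearly interpolate a sorted (distance_cm, dose_cGy) look-up table.

    Distances outside the tabulated range are clamped to the nearest entry.
    """
    if distance_cm <= lut[0][0]:
        return lut[0][1]
    if distance_cm >= lut[-1][0]:
        return lut[-1][1]
    for (d0, v0), (d1, v1) in zip(lut, lut[1:]):
        if d0 <= distance_cm <= d1:
            t = (distance_cm - d0) / (d1 - d0)
            return v0 + t * (v1 - v0)

# Illustrative values only (cGy); not the published SAVI table.
table = [(2, 80.0), (5, 12.8), (10, 3.2), (15, 1.4)]
estimate = dose_at(table, 7.5)
```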
Choice: 36 band feature selection software with applications to multispectral pattern recognition
NASA Technical Reports Server (NTRS)
Jones, W. C.
1973-01-01
Feature selection software was developed at the Earth Resources Laboratory that is capable of inputting up to 36 channels and selecting channel subsets according to several criteria based on divergence. One of the criteria used is compatible with the table look-up classifier requirements. The software indicates which channel subset best separates (based on average divergence) each class from all other classes. The software employs an exhaustive search technique, and computer time is not prohibitive. A typical task to select the best 4 of 22 channels for 12 classes takes 9 minutes on a Univac 1108 computer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Lehua; Oldenburg, Curtis M.
Potential CO2 leakage through existing open wellbores is one of the most significant hazards that need to be addressed in geologic carbon sequestration (GCS) projects. In the framework of the National Risk Assessment Partnership (NRAP), which requires fast computations for uncertainty analysis, rigorous simulation of the coupled wellbore-reservoir system is not practical. We have developed a 7,200-point look-up table reduced-order model (ROM) for estimating the potential leakage rate up open wellbores in response to CO2 injection nearby. The ROM is based on coupled simulations using T2Well/ECO2H, which was run repeatedly for representative conditions relevant to NRAP to create a look-up table response-surface ROM. The ROM applies to a wellbore that fully penetrates a 20-m thick reservoir that is used for CO2 storage. The radially symmetric reservoir is assumed to have initially uniform pressure, temperature, gas saturation, and brine salinity, and these conditions are assumed to be held constant at the far-field boundary (100 m away from the wellbore). In such a system, the leakage can quickly reach quasi-steady state. The ROM table can be used to estimate both the free-phase CO2 and brine leakage rates through an open well as a function of wellbore and reservoir conditions. Results show that injection-induced pressure and reservoir gas saturation play important roles in controlling leakage. Caution must be used in the application of this ROM because well leakage is formally transient and the ROM look-up table was populated using quasi-steady simulation output after 1000 time steps, which may correspond to different physical times for the various parameter combinations of the coupled wellbore-reservoir system.
New realisation of Preisach model using adaptive polynomial approximation
NASA Astrophysics Data System (ADS)
Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young
2012-09-01
Modelling systems with hysteresis has received considerable attention recently due to increasingly demanding accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model for describing hysteresis, which can be represented by infinite but countable first-order reversal curves (FORCs). The use of look-up tables is one way to approach the CPM in actual practice. The data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by least-squares approximation or an adaptive identification algorithm, which also opens the possibility of accurately tracking time-varying hysteresis model parameters.
LMDS Lightweight Modular Display System.
1982-02-16
The design is based on standard functions, so that a particular display function can be produced in the most economical fashion. This does not mean that the NTDS interface would be eliminated; what is anticipated is the use of ETHERNET at a low level of system interface, i.e., internal to the unit. The architecture of the unit's input circuitry is based on a video table look-up ROM.
Advanced Machine Learning Emulators of Radiative Transfer Models
NASA Astrophysics Data System (ADS)
Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.
2017-12-01
Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, providing an estimation of uncertainty and estimations of the gradient or finite integral forms. We review the field and recent advances in emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators on toy examples, on the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and for the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J
Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct an off-angle iris image. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50-degree range.
Miksys, Nelson; Gordon, Christopher L; Thomas, Karen; Connolly, Bairbre L
2010-05-01
The purpose of this study was to estimate the effective doses received by pediatric patients during interventional radiology procedures and to present those doses in "look-up tables" standardized according to minute of fluoroscopy and frame of digital subtraction angiography (DSA). Organ doses were measured with metal oxide semiconductor field effect transistor (MOSFET) dosimeters inserted within three anthropomorphic phantoms, representing children at ages 1, 5, and 10 years, at locations corresponding to radiosensitive organs. The phantoms were exposed to mock interventional radiology procedures of the head, chest, and abdomen using posteroanterior and lateral geometries, varying magnification, and fluoroscopy or DSA exposures. Effective doses were calculated from organ doses recorded by the MOSFET dosimeters and are presented in look-up tables according to the different age groups. The largest effective dose burden for fluoroscopy was recorded for posteroanterior and lateral abdominal procedures (0.2-1.1 mSv/min of fluoroscopy), whereas procedures of the head resulted in the lowest effective doses (0.02-0.08 mSv/min of fluoroscopy). DSA exposures of the abdomen imparted higher doses (0.02-0.07 mSv/DSA frame) than did those involving the head and chest. Patient doses during interventional procedures vary significantly depending on the type of procedure. User-friendly look-up tables may provide a helpful tool for health care providers in estimating effective doses for an individual procedure.
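A look-up table of per-minute and per-frame rates reduces dose estimation to a multiply-accumulate over the procedure log. The sketch below uses single illustrative rates drawn from the ranges quoted above; these are not clinical data and ignore the age, geometry, and magnification dependence the study tabulates.

```python
# Hypothetical per-minute / per-frame effective dose rates (mSv), chosen
# from within the ranges quoted in the abstract for illustration only.
FLUORO_RATE = {"head": 0.05, "chest": 0.3, "abdomen": 0.6}   # mSv / min
DSA_RATE = {"head": 0.01, "chest": 0.015, "abdomen": 0.05}   # mSv / frame

def effective_dose(region, fluoro_min, dsa_frames):
    """Estimate effective dose (mSv) by look-up, as the tables are meant to
    be used: rate x minutes of fluoroscopy + rate x DSA frames."""
    return FLUORO_RATE[region] * fluoro_min + DSA_RATE[region] * dsa_frames

dose = effective_dose("abdomen", 10, 20)  # 0.6*10 + 0.05*20 mSv
```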
NASA Astrophysics Data System (ADS)
Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti
2016-07-01
In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. 
This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during the observation period.
Lettieri, S.; Zuckerman, D.M.
2011-01-01
Typically, the most time-consuming part of any atomistic molecular simulation is due to the repeated calculation of distances, energies and forces between pairs of atoms. However, many molecules contain nearly rigid multi-atom groups such as rings and other conjugated moieties, whose rigidity can be exploited to significantly speed up computations. The availability of GB-scale random-access memory (RAM) offers the possibility of tabulation (pre-calculation) of distance- and orientation-dependent interactions among such rigid molecular bodies. Here, we perform an investigation of this energy tabulation approach for a fluid of atomistic, but rigid, benzene molecules at standard temperature and density. In particular, using O(1) GB of RAM, we construct an energy look-up table which encompasses the full range of allowed relative positions and orientations between a pair of whole molecules. We obtain a hardware-dependent speed-up of a factor of 24-50 as compared to an ordinary ("exact") Monte Carlo simulation and find excellent agreement between energetic and structural properties. Second, we examine the somewhat reduced fidelity of results obtained using energy tables based on much less memory use. Third, the energy table serves as a convenient platform to explore potential energy smoothing techniques, akin to coarse-graining. Simulations with smoothed tables exhibit near-atomistic accuracy while increasing diffusivity. The combined speed-up in sampling from tabulation and smoothing exceeds a factor of 100. For future applications greater speed-ups can be expected for larger rigid groups, such as those found in biomolecules. PMID:22120971
NASA Astrophysics Data System (ADS)
Li, Will X. Y.; Cui, Ke; Zhang, Wei
2017-04-01
Cognitive neural prosthesis is a man-made device which can be used to restore or compensate for lost human cognitive modalities. The generalized Laguerre-Volterra (GLV) network serves as a robust mathematical underpinning for the development of such a prosthetic instrument. In this paper, a hardware implementation scheme of the Gauss error function for the GLV network targeting reconfigurable platforms is reported. Numerical approximations are formulated which transform the computation of a nonelementary function into combinational operations of elementary functions, so that memory-intensive look-up table (LUT) based approaches can be circumvented. The computational precision can be made adjustable with the utilization of an error compensation scheme, which is proposed based on the experimental observation of the mathematical characteristics of the error trajectory. The precision can be further customized by exploiting the run-time characteristics of the reconfigurable system. Compared to the polynomial expansion based implementation scheme, the utilization of slice LUTs, occupied slices, and DSP48E1s on a Xilinx XC6VLX240T field-programmable gate array has decreased by 94.2%, 94.1%, and 90.0%, respectively. Compared to the look-up table based scheme, 1.0×10^17 bits of storage can be spared under the maximum allowable error of 1.0×10^-3. The proposed implementation scheme can be employed in the study of large-scale neural ensemble activity and in the design and development of neural prosthetic devices.
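One common way to replace an erf look-up table with a handful of elementary operations is a polynomial approximation such as Abramowitz and Stegun 7.1.26 (|error| < 1.5e-7). The paper's own approximation and error compensation scheme differ; this sketch only illustrates the general LUT-free idea, well within the quoted 1.0×10^-3 error budget.

```python
import math

def erf_approx(x):
    """Gauss error function via the Abramowitz & Stegun 7.1.26 polynomial.

    Trades a memory-hungry look-up table for one exponential and a short
    Horner-form polynomial; odd symmetry handles negative arguments.
    """
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
           + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))
```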
Image processing on the image with pixel noise bits removed
NASA Astrophysics Data System (ADS)
Chuang, Keh-Shih; Wu, Christine
1992-06-01
Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
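Separating signal bits from noise bits and applying a gray-level look-up table are both simple bit-level operations. A minimal sketch (pixel values and the inverting LUT are illustrative, not from the study):

```python
def strip_noise_bits(pixels, noise_bits):
    """Zero the lowest-order (noise) bits of each pixel value."""
    mask = ~((1 << noise_bits) - 1)
    return [p & mask for p in pixels]

def apply_lut(pixels, lut):
    """Gray-level transformation by direct table indexing."""
    return [lut[p] for p in pixels]

# 8-bit example: drop 2 noise bits, then apply a contrast-inverting LUT.
lut = [255 - v for v in range(256)]
row = [0, 3, 64, 129, 255]
cleaned = strip_noise_bits(row, 2)   # [0, 0, 64, 128, 252]
inverted = apply_lut(cleaned, lut)   # [255, 255, 191, 127, 3]
```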
Recognizing human actions by learning and matching shape-motion prototype trees.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2012-03-01
A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
2010-01-01
… of peak) could be retrieved based solely on Rn(λ, 0+) measurements. The use of Look-Up Tables (LUTs) of regionally and seasonally averaged IOPs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Jacob; Van Til, Harrison J; Wood, Eric W
A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
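The categorize-then-average construction of an energy rates look-up table can be sketched as follows. Bin widths, rates, and the omission of road network geometry are hypothetical simplifications, not the report's calibrated model.

```python
def categorize(avg_speed_mph, grade_pct):
    """Bin a road segment by average speed (10 mph bins) and grade (1% bins)."""
    return (int(avg_speed_mph // 10), round(grade_pct))

def build_rate_table(segments):
    """Average observed energy rate (kWh/mi) per category from driving data."""
    sums, counts = {}, {}
    for speed, grade, rate in segments:
        key = categorize(speed, grade)
        sums[key] = sums.get(key, 0.0) + rate
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def estimate_trip(table, route):
    """Sum miles x looked-up rate over proposed (speed, grade, miles) legs."""
    return sum(miles * table[categorize(s, g)] for s, g, miles in route)

# Toy training data: (avg speed mph, grade %, observed kWh/mi)
data = [(35, 0, 0.28), (35, 0, 0.32), (65, 0, 0.36), (65, 2, 0.50)]
rates = build_rate_table(data)
trip_kwh = estimate_trip(rates, [(35, 0, 10), (65, 0, 5)])
```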
Caroline Müllenbroich, M; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith
2014-01-01
We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing on either two-photon fluorescence or second harmonic signals as merit factors. We present here several strategies to overcome photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures and the intrinsic second harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure to the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon-starved experiments, for example fluorescence lifetime imaging, or when photobleaching is a concern.
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) Code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has been shown to yield as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up Table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples.
This value is referenced in a look up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided Method due to the gain control circuitry, but does not have the real-time computation complexity of the Blind Estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
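The Pilot-Guided estimator described above is straightforward to simulate: the amplitude estimate is the mean correlation of the received samples with the known ASM symbols, and the noise variance follows from the second moment. A sketch on synthetic BPSK data; the sequence length, seed, and channel parameters are hypothetical.

```python
import random

def pilot_guided_ratio(received, known):
    """Estimate the combining ratio (amplitude / noise variance) from a
    received soft sequence and the known pilot (ASM) symbols.

    amplitude A    = mean(r_i * s_i), with s_i in {-1, +1}
    variance sig2  = mean(r_i^2) - A^2
    """
    n = len(received)
    amp = sum(r * s for r, s in zip(received, known)) / n
    var = sum(r * r for r in received) / n - amp * amp
    return amp / var

random.seed(1)
A, sigma = 1.0, 0.5
asm = [random.choice((-1, 1)) for _ in range(20000)]
rx = [A * s + random.gauss(0.0, sigma) for s in asm]
ratio = pilot_guided_ratio(rx, asm)   # should approach A / sigma^2 = 4.0
```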
Single-Chip Microcomputer Control Of The PWM Inverter
NASA Astrophysics Data System (ADS)
Morimoto, Masayuki; Sato, Shinji; Sumito, Kiyotaka; Oshitani, Katsumi
1987-10-01
A single-chip microcomputer-based controller for a pulsewidth modulated 1.7 kVA inverter of an air conditioner is presented. The PWM pattern generation and the system control of the air conditioner are achieved in software on the 8-bit single-chip microcomputer. The single-chip microcomputer has the disadvantages of low processing speed and small memory capacity, which can be overcome by the magnetic flux control method. The PWM pattern is generated every 90 μs. The memory capacity of the PWM look-up table is less than 2 kbytes. Simple and reliable control is realized by the software-based implementation.
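A PWM sine look-up table fitting the stated 2-kbyte budget can exploit quarter-wave symmetry, storing only a quarter cycle and reconstructing the rest in software. This is a generic sketch, not the paper's magnetic flux control table; the table size and 16-bit scaling are illustrative.

```python
import math

# Quarter-wave sine table: 256 entries x 2 bytes = 512 bytes, comfortably
# inside a 2-kbyte budget; symmetry reconstructs the full cycle.
TABLE_SIZE = 256
SINE_Q = [round(32767 * math.sin(math.pi / 2 * i / (TABLE_SIZE - 1)))
          for i in range(TABLE_SIZE)]

def sine_lookup(phase):
    """phase in [0, 1) of a full cycle -> signed 16-bit sine sample."""
    quadrant, frac = divmod(phase * 4.0, 1.0)
    idx = int(frac * (TABLE_SIZE - 1))
    q = int(quadrant) % 4
    if q == 0:
        return SINE_Q[idx]                    # rising, positive
    if q == 1:
        return SINE_Q[TABLE_SIZE - 1 - idx]   # falling, positive
    if q == 2:
        return -SINE_Q[idx]                   # falling, negative
    return -SINE_Q[TABLE_SIZE - 1 - idx]      # rising, negative
```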
Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian
2009-07-15
Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing by the probability density functions used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referenced ADF-PCM{chi}. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCM{chi} model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCM{chi} model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations.
Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCM{chi} reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape.
Sandia Unstructured Triangle Tabular Interpolation Package v 0.1 beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-09-24
The software interpolates tabular data, such as for equations of state, provided on an unstructured triangular grid. In particular, interpolation occurs in a two-dimensional space by looking up the triangle in which the desired evaluation point resides and then performing a linear interpolation over the n-tuples associated with the nodes of the chosen triangle. The interface to the interpolation routines allows for automated conversion of units from those tabulated to the desired output units. When multiple tables are included in a data file, new tables may be generated by on-the-fly mixing of the provided tables.
Fixed-Base Comb with Window-Non-Adjacent Form (NAF) Method for Scalar Multiplication
Seo, Hwajeong; Kim, Hyunjin; Park, Taehwan; Lee, Yeoncheol; Liu, Zhe; Kim, Howon
2013-01-01
Elliptic curve cryptography (ECC) is one of the most promising public-key techniques in terms of short key size and various crypto protocols. For this reason, many studies on the implementation of ECC on resource-constrained devices within a practical execution time have been conducted. To this end, we must focus on scalar multiplication, which is the most expensive operation in ECC. A number of studies have proposed pre-computation and advanced scalar multiplication using a non-adjacent form (NAF) representation, and more sophisticated approaches have employed a width-w NAF representation and a modified pre-computation table. In this paper, we propose a new pre-computation method in which zero occurrences are much more frequent than in previous methods. This method can be applied to ordinary group scalar multiplication, but it requires a large pre-computation table, so we combined the previous method with ours for practical purposes. This novel structure establishes a new feature that finely adjusts the trade-off between speed and table size, so we can customize the pre-computation table for our own purposes. Finally, we can establish a customized look-up table for embedded microprocessors. PMID:23881143
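The plain NAF representation underlying these methods can be illustrated with a short Python sketch (not the paper's width-w variant; the generic add/neg callbacks stand in for elliptic-curve point operations so the logic can be checked on ordinary integers):

```python
def naf(k):
    # Non-adjacent form of k: digits in {-1, 0, 1}, least significant
    # first, with no two adjacent nonzero digits.
    digits = []
    while k > 0:
        if k % 2:
            d = 2 - (k % 4)   # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

def scalar_mult_naf(k, P, add, neg, zero):
    # Left-to-right double-and-add over the NAF digits of k; with NAF,
    # roughly one third of the digits are nonzero instead of one half,
    # which is what the pre-computation tables exploit.
    Q = zero
    for d in reversed(naf(k)):
        Q = add(Q, Q)          # double
        if d == 1:
            Q = add(Q, P)
        elif d == -1:
            Q = add(Q, neg(P))
    return Q
```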
TILTING TABLE AREA, PDP ROOM, LEVEL +27, LOOKING SOUTHWEST, SHOWING ...
TILTING TABLE AREA, PDP ROOM, LEVEL +27, LOOKING SOUTHWEST, SHOWING TILTING TABLE, MARKED BY WHITE ELECTRICAL CORD - Physics Assembly Laboratory, Area A/M, Savannah River Site, Aiken, Aiken County, SC
TILTING TABLE AREA, PDP ROOM, LEVEL +27, LOOKING NORTHWEST. TILTING ...
TILTING TABLE AREA, PDP ROOM, LEVEL +27, LOOKING NORTHWEST. TILTING TABLE MARKED BY WHITE ELECTRICAL CORD IN LOWER LEFT CENTER - Physics Assembly Laboratory, Area A/M, Savannah River Site, Aiken, Aiken County, SC
Inductive System for Reliable Magnesium Level Detection in a Titanium Reduction Reactor
NASA Astrophysics Data System (ADS)
Krauter, Nico; Eckert, Sven; Gundrum, Thomas; Stefani, Frank; Wondrak, Thomas; Frick, Peter; Khalilov, Ruslan; Teimurazov, Andrei
2018-05-01
The determination of the magnesium level in a titanium reduction retort by inductive methods is often hampered by the formation of titanium sponge rings, which disturb the propagation of electromagnetic signals between excitation and receiver coils. We present a new method for the reliable identification of the magnesium level which explicitly takes into account the presence of sponge rings with unknown geometry and conductivity. The inverse problem is solved by a look-up-table method, based on the solution of the inductive forward problem for several tens of thousands of parameter combinations.
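The look-up-table inversion can be pictured as a nearest-neighbour search over precomputed forward solutions. A toy sketch, assuming the table maps parameter tuples (e.g. magnesium level, ring geometry, conductivity) to predicted receiver-coil signals:

```python
def nearest_lookup(measurement, table):
    # Brute-force inversion: return the parameter combination whose
    # precomputed forward solution best matches the measured signals
    # (squared-error metric).
    def dist(signature):
        return sum((a - b) ** 2 for a, b in zip(signature, measurement))
    return min(table, key=lambda params: dist(table[params]))
```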
Looking at Debit and Credit Card Fraud
ERIC Educational Resources Information Center
Porkess, Roger; Mason, Stephen
2012-01-01
This article, written jointly by a mathematician and a barrister, looks at some of the statistical issues raised by court cases based on fraud involving chip and PIN cards. It provides examples and insights that statistics teachers should find helpful. (Contains 4 tables and 1 figure.)
Self-Contained Avionics Sensing and Flight Control System for Small Unmanned Aerial Vehicle
NASA Technical Reports Server (NTRS)
Ingham, John C. (Inventor); Shams, Qamar A. (Inventor); Logan, Michael J. (Inventor); Fox, Robert L. (Inventor); Fox, legal representative, Melanie L. (Inventor); Kuhn, III, Theodore R. (Inventor); Babel, III, Walter C. (Inventor); Fox, legal representative, Christopher L. (Inventor); Adams, James K. (Inventor); Laughter, Sean A. (Inventor)
2011-01-01
A self-contained avionics sensing and flight control system is provided for an unmanned aerial vehicle (UAV). The system includes sensors for sensing flight control parameters and surveillance parameters, and a Global Positioning System (GPS) receiver. Flight control parameters and location signals are processed to generate flight control signals. A Field Programmable Gate Array (FPGA) is configured to provide a look-up table storing sets of values with each set being associated with a servo mechanism mounted on the UAV and with each value in each set indicating a unique duty cycle for the servo mechanism associated therewith. Each value in each set is further indexed to a bit position indicative of a unique percentage of a maximum duty cycle for the servo mechanism associated therewith. The FPGA is further configured to provide a plurality of pulse width modulation (PWM) generators coupled to the look-up table. Each PWM generator is associated with and adapted to be coupled to one of the servo mechanisms.
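The indexing scheme of the patent's look-up table can be sketched in software (a minimal illustration only: the servo names, PWM period, and step count are assumptions, and the real table lives in an FPGA feeding hardware PWM generators):

```python
MAX_DUTY_TICKS = 20000   # assumed: 20 ms servo PWM period at 1 microsecond resolution
N_STEPS = 8              # assumed number of indexed bit positions

def build_servo_table(servo_ids):
    # One set of duty-cycle values per servo; the value at bit position i
    # encodes (i + 1) / N_STEPS of that servo's maximum duty cycle, so each
    # bit position corresponds to a unique percentage of the maximum.
    return {s: [round(MAX_DUTY_TICKS * (i + 1) / N_STEPS)
                for i in range(N_STEPS)]
            for s in servo_ids}
```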
A Dual-Wavelength Radar Technique to Detect Hydrometeor Phases
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert
2016-01-01
This study is aimed at investigating the feasibility of a Ku- and Ka-band space/air-borne dual wavelength radar algorithm to discriminate various phase states of precipitating hydrometeors. A phase-state classification algorithm has been developed from the radar measurements of snow, mixed-phase and rain obtained from stratiform storms. The algorithm, presented in the form of the look-up table that links the Ku-band radar reflectivities and dual-frequency ratio (DFR) to the phase states of hydrometeors, is checked by applying it to the measurements of the Jet Propulsion Laboratory, California Institute of Technology, Airborne Precipitation Radar Second Generation (APR-2). In creating the statistically-based phase look-up table, the attenuation corrected (or true) radar reflectivity factors are employed, leading to better accuracy in determining the hydrometeor phase. In practice, however, the true radar reflectivities are not always available before the phase states of the hydrometeors are determined. Therefore, it is desirable to make use of the measured radar reflectivities in classifying the phase states. To do this, a phase-identification procedure is proposed that uses only measured radar reflectivities. The procedure is then tested using APR-2 airborne radar data. Analysis of the classification results in stratiform rain indicates that the regions of snow, mixed-phase and rain derived from the phase-identification algorithm coincide reasonably well with those determined from the measured radar reflectivities and linear depolarization ratio (LDR).
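The table-driven classification can be caricatured in a few lines of Python. The thresholds below are purely hypothetical placeholders; the actual look-up table is derived statistically from the stratiform-storm measurements:

```python
def classify_phase(z_ku_dbz, dfr_db):
    # Toy phase classifier from Ku-band reflectivity (dBZ) and the
    # dual-frequency ratio (dB). Illustrative thresholds only.
    if dfr_db > 5.0 and z_ku_dbz < 30.0:
        return "snow"
    if dfr_db > 2.0:
        return "mixed"
    return "rain"
```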
Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did no...
DOT National Transportation Integrated Search
1999-03-01
A methodology for developing modal vehicle emissions and fuel consumption models has been developed by Oak Ridge National Laboratory (ORNL), sponsored by the Federal Highway Administration. These models, in the form of look-up tables for fuel consump...
7. Credit BG. View looking west into small solid rocket ...
7. Credit BG. View looking west into small solid rocket motor testing bay of Test Stand 'E' (Building 4259/E-60). Motors are mounted on steel table and fired horizontally toward the east. - Jet Propulsion Laboratory Edwards Facility, Test Stand E, Edwards Air Force Base, Boron, Kern County, CA
A trainable decisions-in decision-out (DEI-DEO) fusion system
NASA Astrophysics Data System (ADS)
Dasarathy, Belur V.
1998-03-01
Most of the decision fusion systems proposed hitherto in the literature for multiple data source (sensor) environments operate on the basis of pre-defined fusion logic, be they crisp (deterministic), probabilistic, or fuzzy in nature, with no specific learning phase. The fusion systems that are trainable, i.e., ones that have a learning phase, mostly operate in the features-in-decision-out mode, which essentially reduces the fusion process functionally to a pattern classification task in the joint feature space. In this study, a trainable decisions-in-decision-out fusion system is described which estimates a fuzzy membership distribution spread across the different decision choices based on the performance of the different decision processors (sensors) corresponding to each training sample (object) which is associated with a specific ground truth (true decision). Based on a multi-decision space histogram analysis of the performance of the different processors over the entire training data set, a look-up table associating each cell of the histogram with a specific true decision is generated which forms the basis for the operational phase. In the operational phase, for each set of decision inputs, a pointer to the look-up table learnt previously is generated from which a fused decision is derived. This methodology, although primarily designed for fusing crisp decisions from the multiple decision sources, can be adapted for fusion of fuzzy decisions as well if such are the inputs from these sources. Examples, which illustrate the benefits and limitations of the crisp and fuzzy versions of the trainable fusion systems, are also included.
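The training and operational phases described above amount to building a map from histogram cells (tuples of processor decisions) to ground-truth decisions. A minimal sketch, in which the cell layout and majority tie-breaking are assumptions:

```python
from collections import Counter, defaultdict

def train_fusion_table(decisions, truths):
    # Training phase: histogram the joint decisions of all processors and
    # associate each cell with the ground truth seen most often there.
    hist = defaultdict(Counter)
    for cell, truth in zip(decisions, truths):
        hist[tuple(cell)][truth] += 1
    return {cell: counts.most_common(1)[0][0]
            for cell, counts in hist.items()}

def fuse(table, cell, fallback=None):
    # Operational phase: the decision inputs form a pointer into the
    # learnt look-up table; unseen cells fall back to a default.
    return table.get(tuple(cell), fallback)
```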
Using false colors to protect visual privacy of sensitive content
NASA Astrophysics Data System (ADS)
Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj
2015-03-01
Many tools have been proposed for preserving visual privacy, but those available today lack either all or some of the important properties that are expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior face detection or other sensitive-region detection and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure as the look-up tables can be encrypted, reversible as table look-ups can be inverted, flexible as it is independent of format or encoding, adjustable as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation, less distracting as it does not create visually unpleasant artifacts, and selective as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on one of which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online-based study) assessments using faces from the FERET public dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
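The core false-color operation is a per-pixel palette look-up. A simplified sketch without the compressive point transformation or table encryption; the two-color palette is illustrative:

```python
def make_false_color_lut(palette):
    # Build a 256-entry look-up table mapping grayscale intensity to RGB,
    # linearly resampling the given palette. An injective (invertible)
    # mapping is what makes the protection reversible.
    lut = []
    last = len(palette) - 1
    for v in range(256):
        t = v / 255 * last
        i = int(t)
        f = t - i
        j = min(i + 1, last)
        lut.append(tuple(round(palette[i][c] * (1 - f) + palette[j][c] * f)
                         for c in range(3)))
    return lut

def apply_lut(gray_pixels, lut):
    # The protected image is just a table look-up per pixel.
    return [lut[v] for v in gray_pixels]
```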
NASA Astrophysics Data System (ADS)
Pathak, P.; Guyon, O.; Jovanovic, N.; Lozi, J.; Martinache, F.; Minowa, Y.; Kudo, T.; Kotani, T.; Takami, H.
2018-02-01
Adaptive optics (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of telescope elevation angle. The look-up table-based correction of atmospheric dispersion results in imperfect compensation leading to the presence of residual dispersion in the point spread function (PSF) and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast while employing high-performance coronagraphs or can compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion by directly using science path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) that has a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. We have demonstrated with the Subaru Coronagraphic Extreme AO (SCExAO) system on-sky closed-loop correction of residual dispersion to <1 mas across H-band. This work will aid in the direct detection of habitable exoplanets with upcoming extremely large telescopes (ELTs) and also provide a diagnostic tool to test the performance of instruments which require sub-milliarcsecond correction.
General Mission Analysis Tool (GMAT) User's Guide (Draft)
NASA Technical Reports Server (NTRS)
Hughes, Steven P.
2007-01-01
The General Mission Analysis Tool (GMAT) is a space trajectory optimization and mission analysis system. This document is a draft of the user's guide for the tool. Included in the guide is information about Configuring Objects/Resources, Object Fields: Quick Look-up Tables, and Commands and Events.
A short note on calculating the adjusted SAR index
USDA-ARS?s Scientific Manuscript database
A simple algebraic technique is presented for computing the adjusted SAR Index proposed by Suarez (1981). The statistical formula presented in this note facilitates the computation of the adjusted SAR without the use of either a look-up table, custom computer software or the need to compute exact a...
Computer Vision for Artificially Intelligent Robotic Systems
NASA Astrophysics Data System (ADS)
Ma, Chialo; Ma, Yung-Lung
1987-04-01
In this paper an Acoustic Imaging Recognition System (AIRS) is introduced which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up table method, which saves considerable calculation time and is practicable. AIRS consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit, and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically separately; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles, and intensity of the target, the characteristics of the target can be decided; all these decisions are processed by the main control unit. In the pulse-echo signal processing unit, the correlation method is used to overcome the limitation of short ultrasonic bursts, because a correlation system can transmit large time-bandwidth signals and obtain improved resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and transferred into digital data by μ-law coding, and these data, together with the delay time T and the angle information θH, θV, are sent to the main control unit for further analysis. For the recognition process, a dynamic look-up table method is used: first, several recognition pattern tables are set up, and then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process.
All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the main control unit, which also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model, a narrow-beam transducer with an input voltage of 50 V peak-to-peak is used. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image lab of the robot's memory. Index terms: acoustic system, ultrasonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quad tree, quad approach.
A new methodology for vibration error compensation of optical encoders.
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in the position accuracy as the measurement signals depart from ideal conditions. In case the encoder is working under vibrations, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter to be added to graduation, system and installation errors. Behavior improvement can be based on different techniques that try to compensate the error from measurement signal processing. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure in which a higher accuracy of the sensor is obtained.
Looking northeast across transfer table pit at Boiler Shop (Bldg. ...
Looking northeast across transfer table pit at Boiler Shop (Bldg. 152) - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, Boiler Shop, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM
Selection Algorithm for the CALIPSO Lidar Aerosol Extinction-to-Backscatter Ratio
NASA Technical Reports Server (NTRS)
Omar, Ali H.; Winker, David M.; Vaughan, Mark A.
2006-01-01
The extinction-to-backscatter ratio (S(sub a)) is an important parameter used in the determination of the aerosol extinction and subsequently the optical depth from lidar backscatter measurements. We outline the algorithm used to determine S(sub a) for the Cloud and Aerosol Lidar and Infrared Pathfinder Spaceborne Observations (CALIPSO) lidar. S(sub a) for the CALIPSO lidar will either be selected from a look-up table or calculated using the lidar measurements, depending on the characteristics of the aerosol layer. Whenever suitable lofted layers are encountered, S(sub a) is computed directly from the integrated backscatter and transmittance. In all other cases, the CALIPSO observables: the depolarization ratio, delta, the layer integrated attenuated backscatter, beta, and the mean layer total attenuated color ratio, gamma, together with the surface type, are used to aid in aerosol typing. Once the type is identified, a look-up table, developed primarily from worldwide observations, is used to determine the S(sub a) value. The CALIPSO aerosol models include desert dust, biomass burning, background, polluted continental, polluted dust, and marine aerosols.
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
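Residue arithmetic, which lets each small-modulus channel be handled by an independent position-coded look-up, can be sketched numerically. The moduli below are chosen only for illustration (the sketch needs Python 3.8+ for the three-argument pow modular inverse):

```python
from math import prod

MODULI = (5, 7, 9)            # pairwise coprime; dynamic range = 5*7*9 = 315

def to_residue(x):
    # Represent x by its remainders modulo each channel.
    return tuple(x % m for m in MODULI)

def add_residue(a, b):
    # Each channel is independent, so in an optical processor every
    # per-modulus addition can be realized as a small look-up table.
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def from_residue(r):
    # Chinese Remainder Theorem reconstruction of the integer result.
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
    return x % M
```

The carry-free, channel-parallel structure is what makes the representation attractive for a matrix-vector multiplier whose "additions" are table look-ups.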
Looking southwest at dualtrack transfer table, with Machine Shop (Bldg. ...
Looking southwest at dual-track transfer table, with Machine Shop (Bldg. 163) in background - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM
A Comparative Study on Safe Pile Capacity as Shown in Table 1 of IS 2911 (Part III): 1980
NASA Astrophysics Data System (ADS)
Pakrashi, Somdev
2017-06-01
Code of practice for design and construction of under-reamed pile foundations IS 2911 (Part III)-1980 presents one table of safe loads for bored cast in situ under-reamed piles in sandy and clayey soils, including black cotton soils, with pile stem diameters ranging from 20 to 50 cm and an effective length of 3.50 m. A comparative study was taken up by working out the safe pile capacity for one 400 mm dia., 3.5 m long bored cast in situ under-reamed pile, based on subsoil properties obtained from soil investigation work as well as subsoil properties of different magnitudes for clayey and sandy soils, and comparing the results with the safe pile capacity shown in Table 1 of that IS code. The study reveals that the safe pile capacity computed from subsoil properties, barring a very few cases, differs considerably from that shown in the aforesaid code, and calls for further research and study to find a conclusive explanation of this probable anomaly.
NASA Astrophysics Data System (ADS)
Ichihara, Takashi; George, Richard T.; Silva, Caterina; Lima, Joao A. C.; Lardo, Albert C.
2011-02-01
The purpose of this study was to develop a quantitative method for myocardial blood flow (MBF) measurement that can be used to derive accurate myocardial perfusion measurements from dynamic multidetector computed tomography (MDCT) images by using a compartment model for calculating the first-order transfer constant (K1) with correction for the capillary transit extraction fraction (E). Six canine models of left anterior descending (LAD) artery stenosis were prepared and underwent first-pass contrast-enhanced MDCT perfusion imaging during adenosine infusion (0.14-0.21 mg/kg/min). K1, which is the first-order transfer constant from left ventricular (LV) blood to myocardium, was measured using the Patlak plot method applied to time-attenuation curve data of the LV blood pool and myocardium. The results were compared against microsphere MBF measurements, and the extraction fraction of contrast agent was calculated. K1 is related to the regional MBF as K1=EF, E=(1-exp(-PS/F)), where PS is the permeability-surface area product and F is myocardial flow. Based on the above relationship, a look-up table from K1 to MBF can be generated, and Patlak plot-derived K1 values can be converted to the calculated MBF. The calculated MBF and microsphere MBF showed a strong linear association. The extraction fraction in dogs as a function of flow (F) was E=(1-exp(-(0.2532F+0.7871)/F)). Regional MBF can be measured accurately using the Patlak plot method based on a compartment model and a look-up table with extraction-fraction correction from K1 to MBF.
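The K1-to-MBF conversion can be sketched by tabulating K1 = E·F over a range of flows and inverting by interpolation. The extraction-fraction fit is the canine relation quoted above; the grid range and spacing are assumptions:

```python
from math import exp

def extraction_fraction(F):
    # E = 1 - exp(-PS/F) with PS = 0.2532*F + 0.7871 (canine fit from the study);
    # F is myocardial flow.
    return 1.0 - exp(-(0.2532 * F + 0.7871) / F)

def build_table(f_min=0.1, f_max=8.0, n=400):
    # Tabulate (K1, F) pairs; K1 = E*F increases monotonically with F,
    # so the table is invertible.
    flows = [f_min + (f_max - f_min) * i / (n - 1) for i in range(n)]
    return [(extraction_fraction(F) * F, F) for F in flows]

def mbf_from_k1(k1_meas, table):
    # Convert a Patlak-derived K1 to MBF by linear interpolation in the table.
    for (ka, fa), (kb, fb) in zip(table, table[1:]):
        if ka <= k1_meas <= kb:
            t = (k1_meas - ka) / (kb - ka)
            return fa + t * (fb - fa)
    raise ValueError("K1 outside the tabulated range")
```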
PROXIMAL: a method for Prediction of Xenobiotic Metabolism.
Yousofshahi, Mona; Manteiga, Sara; Wu, Charmian; Lee, Kyongbum; Hassoun, Soha
2015-12-22
Contamination of the environment with bioactive chemicals has emerged as a potential public health risk. These substances that may cause distress or disease in humans can be found in air, water and food supplies. An open question is whether these chemicals transform into potentially more active or toxic derivatives via xenobiotic metabolizing enzymes expressed in the body. We present a new prediction tool, which we call PROXIMAL (Prediction of Xenobiotic Metabolism) for identifying possible transformation products of xenobiotic chemicals in the liver. Using reaction data from DrugBank and KEGG, PROXIMAL builds look-up tables that catalog the sites and types of structural modifications performed by Phase I and Phase II enzymes. Given a compound of interest, PROXIMAL searches for substructures that match the sites cataloged in the look-up tables, applies the corresponding modifications to generate a panel of possible transformation products, and ranks the products based on the activity and abundance of the enzymes involved. PROXIMAL generates transformations that are specific for the chemical of interest by analyzing the chemical's substructures. We evaluate the accuracy of PROXIMAL's predictions through case studies on two environmental chemicals with suspected endocrine disrupting activity, bisphenol A (BPA) and 4-chlorobiphenyl (PCB3). Comparisons with published reports confirm 5 out of 7 and 17 out of 26 of the predicted derivatives for BPA and PCB3, respectively. We also compare biotransformation predictions generated by PROXIMAL with those generated by METEOR and Metaprint2D-react, two other prediction tools. PROXIMAL can predict transformations of chemicals that contain substructures recognizable by human liver enzymes. It also has the ability to rank the predicted metabolites based on the activity and abundance of enzymes involved in xenobiotic transformation.
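The catalog-then-match idea can be caricatured with a dictionary look-up. The site patterns and modifications below are invented placeholders, not PROXIMAL's actual DrugBank/KEGG-derived tables:

```python
# Hypothetical look-up table: substructure pattern -> (modification, phase).
RULES = {
    "c-OH": ("O-glucuronidation", "phase II"),
    "C=C": ("epoxidation", "phase I"),
}

def predict_transformations(substructures):
    # Scan the compound's substructures against the cataloged sites and
    # collect the matching modifications; a real tool would then rank
    # the products by enzyme activity and abundance.
    hits = []
    for sub in substructures:
        if sub in RULES:
            hits.append((sub,) + RULES[sub])
    return hits
```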
26. WARDROOM, LOOKING TOWARDS PORT, AT TABLE, WEAPONS CLOSET, AND ...
26. WARDROOM, LOOKING TOWARDS PORT, AT TABLE, WEAPONS CLOSET, AND DESK. - U.S. Coast Guard Cutter WHITE LUPINE, U.S. Coast Guard Station Rockland, east end of Tillson Avenue, Rockland, Knox County, ME
True 3D display and BeoWulf connectivity
NASA Astrophysics Data System (ADS)
Jannson, Tomasz P.; Kostrzewski, Andrew A.; Kupiec, Stephen A.; Yu, Kevin H.; Aye, Tin M.; Savant, Gajendra D.
2003-09-01
We propose a novel true 3-D display based on holographic optics, called HAD (Holographic Autostereoscopic Display), or, in its latest generation, HILAR (Holographic Inverse Look-around and Autostereoscopic Reality). Unlike state-of-the-art 3-D systems, which do not work without goggles, it requires no goggles, and it has a table-like 360° look-around capability. Novel 3-D image-rendering software, based on Beowulf PC cluster hardware, is also discussed.
Simulation and mitigation of higher-order ionospheric errors in PPP
NASA Astrophysics Data System (ADS)
Zus, Florian; Deng, Zhiguo; Wickert, Jens
2017-04-01
We developed a rapid and precise algorithm to compute ionospheric phase advances in a realistic electron density field. The electron density field is derived from a plasmaspheric extension of the International Reference Ionosphere (Gulyaeva and Bilitza, 2012), and the magnetic field stems from the International Geomagnetic Reference Field. For specific station locations, elevation and azimuth angles, the ionospheric phase advances are stored in a look-up table. The higher-order ionospheric residuals are computed by forming the standard linear combination of the ionospheric phase advances. In a simulation study we examine how the higher-order ionospheric residuals leak into estimated station coordinates, clocks, zenith delays and tropospheric gradients in precise point positioning. The simulation study includes a few hundred globally distributed stations and covers the time period 1990-2015. We take a close look at the estimated zenith delays and tropospheric gradients as they are considered a data source for meteorological and climate related research. We also show how the by-product of this simulation study, the look-up tables, can be used to mitigate higher-order ionospheric errors in practice. Gulyaeva, T.L., and Bilitza, D. Towards ISO Standard Earth Ionosphere and Plasmasphere Model. In: New Developments in the Standard Model, edited by R.J. Larsen, pp. 1-39, NOVA, Hauppauge, New York, 2012, available at https://www.novapublishers.com/catalog/product_info.php?products_id=35812
Digital slip frequency generator and method for determining the desired slip frequency
Klein, Frederick F.
1989-01-01
The output frequency of an electric power generator is kept constant with variable rotor speed by automatic adjustment of the excitation slip frequency. The invention features a digital slip frequency generator which provides sine and cosine waveforms from a look-up table, which are combined with real and reactive power output of the power generator.
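The sine/cosine look-up scheme resembles a direct digital synthesizer: a phase accumulator steps through a sine table, with the cosine read a quarter of the table ahead. The table size and floating-point phase below are assumptions; a hardware implementation would use a fixed-point accumulator:

```python
from math import sin, pi

LUT_SIZE = 256
SINE_LUT = [sin(2 * pi * i / LUT_SIZE) for i in range(LUT_SIZE)]

def slip_waveforms(slip_freq_hz, sample_rate_hz, n_samples):
    # DDS-style generation of (sine, cosine) samples at the desired slip
    # frequency from a single look-up table; cosine leads sine by 90 degrees,
    # i.e. a quarter-table offset.
    phase, out = 0.0, []
    step = slip_freq_hz / sample_rate_hz * LUT_SIZE
    for _ in range(n_samples):
        i = int(phase) % LUT_SIZE
        s = SINE_LUT[i]
        c = SINE_LUT[(i + LUT_SIZE // 4) % LUT_SIZE]
        out.append((s, c))
        phase += step
    return out
```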
Assessment and validation of the community radiative transfer model for ice cloud conditions
NASA Astrophysics Data System (ADS)
Yi, Bingqi; Yang, Ping; Weng, Fuzhong; Liu, Quanhua
2014-11-01
The performance of the Community Radiative Transfer Model (CRTM) under ice cloud conditions is evaluated and improved with the implementation of MODIS collection 6 ice cloud optical property model based on the use of severely roughened solid column aggregates and a modified Gamma particle size distribution. New ice cloud bulk scattering properties (namely, the extinction efficiency, single-scattering albedo, asymmetry factor, and scattering phase function) suitable for application to the CRTM are calculated by using the most up-to-date ice particle optical property library. CRTM-based simulations illustrate reasonable accuracy in comparison with the counterparts derived from a combination of the Discrete Ordinate Radiative Transfer (DISORT) model and the Line-by-line Radiative Transfer Model (LBLRTM). Furthermore, simulations of the top of the atmosphere brightness temperature with CRTM for the Crosstrack Infrared Sounder (CrIS) are carried out to further evaluate the updated CRTM ice cloud optical property look-up table.
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new dual look-up table hardware architecture that performs the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of an electrocorticography (ECoG) signal by using a Xilinx Zynq-7000 FPGA board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
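Idea (ii), a multiplierless DCT via look-up tables, can be illustrated by precomputing every basis-coefficient times sample product so the transform needs only look-ups and additions. A sketch under assumed parameters (an 8-point DCT-II on 8-bit samples; the paper's dual-LUT organization and reduced resolution are not reproduced here):

```python
import math

N = 8         # transform length
LEVELS = 256  # 8-bit input samples

# DCT-II basis and a product table PROD[k][n][x] = basis(k, n) * x.
# With the products precomputed, the transform below uses only
# table look-ups and additions: no explicit multiplication.
BASIS = [[math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N)]
         for k in range(N)]
PROD = [[[BASIS[k][n] * x for x in range(LEVELS)] for n in range(N)]
        for k in range(N)]

def dct_lut(samples):
    """Unnormalized 1-D DCT-II of N 8-bit samples via look-ups only."""
    assert len(samples) == N and all(0 <= s < LEVELS for s in samples)
    return [sum(PROD[k][n][samples[n]] for n in range(N)) for k in range(N)]
```

In hardware the same trade applies: table memory is spent to remove multipliers from the datapath.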
Zone plate method for electronic holographic display using resolution redistribution technique.
Takaki, Yasuhiro; Nakamura, Junya
2011-07-18
The resolution redistribution (RR) technique can increase the horizontal viewing-zone angle and screen size of an electronic holographic display. The present study developed a zone plate method that reduces hologram calculation time for the RR technique. This method enables calculation of an image displayed on a spatial light modulator by performing additions of the zone plates, whereas the previous calculation method required performing the Fourier transform twice. The derivation and modeling of the zone plate are shown. In addition, the look-up table approach was introduced to further reduce computation time. Experimental verification using a holographic display module based on the RR technique is presented.
New DICOM extensions for softcopy and hardcopy display consistency.
Eichelberg, M; Riesmeier, J; Kleber, K; Grönemeyer, D H; Oosterwijk, H; Jensch, P
2000-01-01
The DICOM standard defines in detail how medical images can be communicated. However, the rules on how to interpret the parameters contained in a DICOM image which deal with the image presentation were either lacking or not well defined. As a result, the same image frequently looks different when displayed on different workstations or printed on a film from various printers. Three new DICOM extensions attempt to close this gap by defining a comprehensive model for the display of images on softcopy and hardcopy devices: Grayscale Standard Display Function, Grayscale Softcopy Presentation State and Presentation Look Up Table.
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC coefficients, reducing each block by 2/3 and producing a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC coefficients, while the DC components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG, with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
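Step (4), the delta or differential operator on the DC components, is the simplest piece of the pipeline to sketch. A hedged Python illustration (the paper's exact coding of the differences is not specified in the abstract):

```python
def delta_encode(dc_components):
    """Step (4): keep the first DC value, then store successive differences,
    which are typically small and compress well under arithmetic coding."""
    out = [dc_components[0]]
    for prev, cur in zip(dc_components, dc_components[1:]):
        out.append(cur - prev)
    return out

def delta_decode(deltas):
    """Inverse operation: a running sum recovers the original DC list."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

Neighboring blocks usually have similar mean brightness, so the differences cluster near zero, which is exactly what the arithmetic coder in step (5) exploits.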
The Periodic Table. Physical Science in Action[TM]. Schlessinger Science Library. [Videotape].
ERIC Educational Resources Information Center
2000
Kids know that when they are lost, they look at a map to find their way. It's no different in the world of science, as they'll learn in The Periodic Table--a fun and engaging look at the road map of the elements. Young students will learn about key information included on the table, including atomic number, atomic mass and chemical symbol. They'll…
An Up-to-Date Look at the Supply of Child Care.
ERIC Educational Resources Information Center
Neugebauer, Roger
1992-01-01
Cites recently completed child care supply and demand studies showing 400 percent growth rate in center care in past two decades, with more children enrolled in centers than in any other form of nonparental child care. Data on trends, usage by age, forms of care used, types of centers used, and a profile of centers are provided in tables. (LB)
Non-equilibrium condensation of supercritical carbon dioxide in a converging-diverging nozzle
NASA Astrophysics Data System (ADS)
Ameli, Alireza; Afzalifar, Ali; Turunen-Saaresti, Teemu
2017-03-01
Carbon dioxide (CO2) is a promising alternative working fluid for future energy conversion and refrigeration cycles. CO2 has a low global warming potential compared to refrigerants, and the supercritical CO2 Brayton cycle ought to have better efficiency than today's counterparts. However, there are several issues concerning the behaviour of supercritical CO2 in the aforementioned applications. One of these issues arises due to non-equilibrium condensation of CO2 for some operating conditions in supercritical compressors. This paper investigates the non-equilibrium condensation of carbon dioxide in the course of an expansion from supercritical stagnation conditions in a converging-diverging nozzle. An external look-up table was implemented, using an in-house FORTRAN code, to calculate the fluid properties in the supercritical, metastable and saturated regions. This look-up table is coupled with the flow solver, and the non-equilibrium condensation model is introduced to the solver using user-defined expressions. Numerical results are compared with the experimental measurements. In agreement with the experiment, the distribution of Mach number in the nozzle shows that the flow becomes supersonic in the upstream region near the throat, where the speed of sound is at a minimum; the re-establishment of equilibrium occurs at the outlet boundary.
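A property look-up table of this kind is typically queried by interpolating between tabulated grid points rather than evaluating an equation of state at every solver iteration. A generic Python sketch of bilinear interpolation on a 2-D property grid (the actual code was in-house FORTRAN; the grid variables and table contents here are assumptions):

```python
import bisect

def bilinear_lookup(table, xs, ys, x, y):
    """Interpolate table[i][j], tabulated on the grid xs-by-ys, at (x, y).

    xs and ys must be ascending; queries are clamped to the outer cells.
    """
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])
```

For a property such as density tabulated against, say, pressure and temperature, each solver call then costs two binary searches and four multiplications, regardless of how expensive the underlying property model is.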
Towards Linking 3D SAR and Lidar Models with a Spatially Explicit Individual Based Forest Model
NASA Astrophysics Data System (ADS)
Osmanoglu, B.; Ranson, J.; Sun, G.; Armstrong, A. H.; Fischer, R.; Huth, A.
2017-12-01
In this study, we present a parameterization of the FORMIND individual-based gap model (IBGM) for old-growth Atlantic lowland rainforest in La Selva, Costa Rica, for the purpose of informing multisensor remote sensing techniques for aboveground biomass estimation. The model was successfully parameterized and calibrated for the study site; results show that the simulated forest reproduces the structural complexity of the Costa Rican rainforest, based on comparisons with CARBONO inventory plot data. Though the simulated stem numbers (378) slightly underestimated the plot data (418), particularly for canopy-dominant intermediate shade-tolerant trees and shade-tolerant understory trees, overall there was a 9.7% difference. Aboveground biomass (kg/ha) showed a 0.1% difference between the simulated forest and the inventory plot dataset. The Costa Rica FORMIND simulation was then used to parameterize spatially explicit (3D) SAR and lidar backscatter models. The simulated forest stands were used to generate a look-up table as a tractable means to estimate aboveground forest biomass for these complex forests. Various combinations of lidar and radar variables were evaluated in the LUT inversion. To test the capability of future data for estimation of forest height and biomass, we considered 1) L- (or P-) band polarimetric data (backscattering coefficients of HH, HV and VV); 2) L-band dual-pol repeat-pass InSAR data (HH/HV backscattering coefficients and coherences, height of scattering phase center at HH and HV using a DEM or surface height from lidar data as reference); 3) P-band polarimetric InSAR data (canopy height from inversion of PolInSAR data, or the coherences and height of scattering phase center at HH, HV and VV); 4) various height indices from waveform lidar data; and 5) surface and canopy-top height from photon-counting lidar data. The methods for parameterizing the remote sensing models with the IBGM and developing look-up tables will be discussed.
Results from various remote sensing scenarios will also be presented.
Advanced linear and nonlinear compensations for 16QAM SC-400G unrepeatered transmission system
NASA Astrophysics Data System (ADS)
Zhang, Junwen; Yu, Jianjun; Chien, Hung-Chang
2018-02-01
Digital signal processing (DSP) with both linear equalization and nonlinear compensation is studied in this paper for a single-carrier 400G system based on 65-GBaud 16-quadrature amplitude modulation (QAM) signals. The 16-QAM signals are generated and pre-processed with pre-equalization (Pre-EQ) and look-up table (LUT) based pre-distortion (Pre-DT) at the transmitter (Tx) side. The implementation principle of training-based equalization and pre-distortion is presented with experimental studies. At the receiver (Rx) side, fiber-nonlinearity compensation based on digital backward propagation (DBP) is also utilized to further improve the transmission performance. With joint LUT-based Pre-DT and DBP-based post-compensation to mitigate the impairments from opto-electronic components and fiber nonlinearity, we demonstrate unrepeatered transmission of 1.6 Tb/s based on 4-lane 400G single-carrier PDM-16QAM over 205-km SSMF without a distributed amplifier.
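LUT-based pre-distortion of the kind used at the Tx side is commonly built by averaging the pattern-dependent error over a training sequence, then subtracting that stored error before transmission. A simplified Python sketch (the memory length, indexing scheme, and symbol alphabet are illustrative assumptions, not the paper's parameters):

```python
from collections import defaultdict

def build_lut(tx_symbols, rx_symbols, mem=1):
    """Average the received-minus-ideal error, indexed by a window of
    transmitted symbols, capturing pattern-dependent distortion."""
    acc = defaultdict(lambda: [0.0, 0])
    for i in range(mem, len(tx_symbols) - mem):
        key = tuple(tx_symbols[i - mem:i + mem + 1])
        acc[key][0] += rx_symbols[i] - tx_symbols[i]
        acc[key][1] += 1
    return {k: s / n for k, (s, n) in acc.items()}

def predistort(tx_symbols, lut, mem=1):
    """Subtract the stored pattern error before transmission so the channel
    distortion pushes the symbol back toward its ideal level."""
    out = list(tx_symbols)
    for i in range(mem, len(tx_symbols) - mem):
        key = tuple(tx_symbols[i - mem:i + mem + 1])
        out[i] = tx_symbols[i] - lut.get(key, 0.0)
    return out
```

The table is trained once on a known sequence and then applied symbol by symbol, which is why this style of compensation is cheap enough to run at line rate.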
A New Methodology for Vibration Error Compensation of Optical Encoders
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy, as the measurement signals depart from ideal conditions. When the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Behavior can be improved with different techniques that compensate the error through processing of the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy. PMID:22666067
32. PILOT HOUSE, LOOKING TOWARDS PORT, TABLE TO LEFT IS ...
32. PILOT HOUSE, LOOKING TOWARDS PORT. TABLE TO LEFT IS WHERE CHARTS ARE PLOTTED; AT BACKGROUND LEFT IS TOP OF STAIRS DOWN TO MESS DECK. - U.S. Coast Guard Cutter WHITE HEATH, USCG Integrated Support Command Boston, 427 Commercial Street, Boston, Suffolk County, MA
[Cleanliness Norms 1964-1975].
Noelle-Neumann, E
1976-01-01
In 1964 the Institut für Demoskopie Allensbach conducted a first survey taking stock of norms concerning cleanliness in the Federal Republic of Germany. At that time, 78% of respondents thought that the vogue among young people of cultivating an unkempt look was past or on the wane (Table 1). Today we know that this fashion was an indicator of more serious desires for change in many different areas such as politics, sexual morality, and education, and that its high point was still to come. In the fall of 1975 a second survey, modelled on the one of 1964, was conducted. Again, it concentrated on norms, not on behavior. As expected, norms have changed over this period, but not in a one-directional or simple manner. In general, people are much more broad-minded about children's looks: neat, clean school dress, properly combed hair, clean shoes, all this and also keeping their things in order had become less important by 1975 (Table 2). To carry a clean handkerchief is becoming old-fashioned (Table 3). On the other hand, principles of bringing up children have not loosened concerning personal hygiene: brushing one's teeth, washing hands, feet, and neck, clean fingernails (Table 4). On one item related to protection of the environment, namely throwing around waste paper, standards have even become more strict (Table 5). With regard to school-leavers, norms of personal hygiene have generally become more strict (Table 6). As living standards have gone up and the number of full bathrooms has risen from 42% to 75% of households, norms of personal hygiene have also increased: one warm bath a week seemed enough to 56% of adults in 1964, but to only 32% in 1975 (Table 7). Standards for changing underwear have also changed a lot: in 1964 only 12% of respondents said "every day", while in 1975 48% said so (Table 8). Even more stringent norms are applied to young women (Tables 9/10). For comparison: in 1964 there were automatic washing machines in 16% of households, in 1975 in 79%.
Answers to questions about which qualities men value especially in women and which qualities women value especially in men show a decrease in the valuation of "cleanliness". These results can be interpreted in different ways (Tables 11/12). It seems, however, that "cleanliness" is not going out as a cultural value. We have found that young people today do not consider clean dress important, but that they are probably better washed under their purposely neglected clothing than young people were ten years ago. As a nation, Germans still consider cleanliness to be a particularly German virtue, in 1975 even more so than in 1964 (Table 13). An association test, first made in March 1976, confirms this: when they hear "Germany", 68% of Germans think of "cleanliness" (Table 14).
Use of NOAA-N satellites for land/water discrimination and flood monitoring
NASA Technical Reports Server (NTRS)
Tappan, G.; Horvath, N. C.; Doraiswamy, P. C.; Engman, T.; Goss, D. W. (Principal Investigator)
1983-01-01
A tool for monitoring the extent of major floods was developed using data collected by the NOAA-6 advanced very high resolution radiometer (AVHRR). A basic understanding of the spectral returns in AVHRR channels 1 and 2 for water, soil, and vegetation was reached using a large number of NOAA-6 scenes from different seasons and geographic locations. A look-up table classifier was developed based on analysis of the reflective channel relationships for each surface feature. The classifier automatically separated land from water and produced classification maps which were registered for a number of acquisitions, including coverage of a major flood on the Parana River of Argentina.
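A look-up table classifier of this kind can be sketched as a coarse 2-D histogram over the two reflective channels, with each cell storing its majority class. A hypothetical Python illustration (the bin width and class labels are assumptions, not values from the study):

```python
from collections import Counter, defaultdict

BIN = 16  # quantization step for each reflective channel (assumed)

def build_classifier(samples):
    """samples: list of ((ch1, ch2), label). Quantize the two AVHRR
    reflective channels into coarse bins; each bin stores its majority
    label, forming the look-up table."""
    bins = defaultdict(Counter)
    for (ch1, ch2), label in samples:
        bins[(ch1 // BIN, ch2 // BIN)][label] += 1
    return {cell: counts.most_common(1)[0][0] for cell, counts in bins.items()}

def classify(lut, ch1, ch2, default="unknown"):
    """Classify one pixel with a single table look-up."""
    return lut.get((ch1 // BIN, ch2 // BIN), default)
```

Once trained, per-pixel classification is a constant-time dictionary access, which is what made LUT classifiers attractive for full AVHRR scenes.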
Kim, HyunJin; Choi, Kang-Il
2016-01-01
This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme. PMID:27695114
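The trie-like automaton with shared common prefixes and per-state transition tables can be modeled in software; in the FPGA each such table would map onto LUTs. A Python illustration (a behavioral software model only, not the pipelined hardware or its merged transitions):

```python
def build_nfa(patterns):
    """Trie-style automaton: shared prefixes share states, and each state's
    outgoing transitions form one look-up table (character -> next state)."""
    transitions = [{}]  # state 0 is the start state
    accepting = {}
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in transitions[state]:
                transitions.append({})
                transitions[state][ch] = len(transitions) - 1
            state = transitions[state][ch]
        accepting[state] = pat
    return transitions, accepting

def match(transitions, accepting, text):
    """Report (end_index, pattern) hits. There are no failure transitions;
    instead the start state is re-activated at every input position."""
    active, hits = set(), []
    for i, ch in enumerate(text):
        active.add(0)
        nxt = {transitions[s][ch] for s in active if ch in transitions[s]}
        hits.extend((i, accepting[s]) for s in nxt if s in accepting)
        active = nxt
    return hits
```

The absence of failure transitions is what makes the hardware simple: every state's next-state function depends only on the current character, so each transition is a pure combinational look-up between pipeline registers.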
Li, Xu; Yang, Chuanlei; Wang, Yinyan; Wang, Hechun
2018-01-01
To achieve a much more extensive intake air flow range of the diesel engine, a variable-geometry compressor (VGC) is introduced into a turbocharged diesel engine. However, due to the variable diffuser vane angle (DVA), the prediction for the performance of the VGC becomes more difficult than for a normal compressor. In the present study, a prediction model comprising an elliptical equation and a PLS (partial least-squares) model was proposed to predict the performance of the VGC. The speed lines of the pressure ratio map and the efficiency map were fitted with the elliptical equation, and the coefficients of the elliptical equation were introduced into the PLS model to build the polynomial relationship between the coefficients and the relative speed, the DVA. Further, the maximal order of the polynomial was investigated in detail to reduce the number of sub-coefficients and achieve acceptable fit accuracy simultaneously. The prediction model was validated with sample data and in order to present the superiority of compressor performance prediction, the prediction results of this model were compared with those of the look-up table and back-propagation neural networks (BPNNs). The validation and comparison results show that the prediction accuracy of the new developed model is acceptable, and this model is much more suitable than the look-up table and the BPNN methods under the same condition in VGC performance prediction. Moreover, the new developed prediction model provides a novel and effective prediction solution for the VGC and can be used to improve the accuracy of the thermodynamic model for turbocharged diesel engines in the future. PMID:29410849
Effect of black carbon on dust property retrievals from satellite observations
NASA Astrophysics Data System (ADS)
Lin, Tang-Huang; Yang, Ping; Yi, Bingqi
2013-01-01
The effect of black carbon on the optical properties of polluted mineral dust is studied from a satellite remote-sensing perspective. By including the auxiliary data of surface reflectivity and aerosol mixing weight, the optical properties of mineral dust, or more specifically, the aerosol optical depth (AOD) and single-scattering albedo (SSA), can be retrieved with improved accuracy. Precomputed look-up tables based on the principle of the Deep Blue algorithm are utilized in the retrieval. The mean differences between the retrieved results and the corresponding ground-based measurements are smaller than 1% for both AOD and SSA in the case of pure dust. However, the retrievals can be underestimated by as much as 11.9% for AOD and overestimated by up to 4.1% for SSA in the case of polluted dust with an estimated 10% (in terms of the number-density mixing ratio) of soot aggregates if the black carbon effect on dust aerosols is neglected.
AGILE: Autonomous Global Integrated Language Exploitation
2008-04-01
training is extending the pronunciation dictionary to cover any additional words. For many languages this is relatively straightforward via grapheme-to...into one or more word sequences and look up the constituent parts in the Master dictionary or apply Buckwalter to them. The Buckwalter prefix table was...errors involve the article ’Al’. As a result of this analysis, the pronunciation dictionary was extended to add alternate pronunciations for the
Modeling radiative transfer with the doubling and adding approach in a climate GCM setting
NASA Astrophysics Data System (ADS)
Lacis, A. A.
2017-12-01
The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, primarily because of computational cost. The accurate multiple-scattering methods that are available are computationally far too expensive for climate GCM applications. Two-stream-type radiative transfer approximations may be fast enough, but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is used in the GISS climate GCM: an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.
Walters, Daniel; Stringer, Simon; Rolls, Edmund
2013-01-01
The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a "look-up" table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity.
ECG compression using Slantlet and lifting wavelet transform with and without normalisation
NASA Astrophysics Data System (ADS)
Aggarwal, Vibha; Singh Patterh, Manjeet
2013-05-01
This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within the tolerance. A binary look-up table is then made to store the position map for zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding. The look-up table is encoded by Huffman coding. The results show that the LWT gives the best result compared with the SLT evaluated in this article. This transform is then considered to evaluate the effect of normalisation before thresholding. In the case of normalisation, the TC are normalised by dividing the TC by N (where N is the number of samples) to reduce the range of the TC. The normalised coefficients (NC) are then thresholded. After that, the procedure is the same as in the case of coefficients without normalisation. The results show that the compression ratio (CR) in the case of LWT with normalisation is improved compared with that without normalisation.
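The bisection search for a threshold that meets a user-specified PRD can be sketched directly. A Python illustration (the PRD follows the usual root-mean-square definition; tolerance handling is simplified to a fixed iteration count):

```python
def prd(original, reconstructed):
    """Percentage root-mean-square difference between two signals."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * (num / den) ** 0.5

def threshold_for_prd(coeffs, target_prd, iters=40):
    """Bisect a hard threshold so that zeroing the coefficients below it
    meets the user-specified PRD as closely as possible from below."""
    lo, hi = 0.0, max(abs(c) for c in coeffs)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        kept = [c if abs(c) >= mid else 0.0 for c in coeffs]
        if prd(coeffs, kept) > target_prd:
            hi = mid  # too much distortion: lower the threshold
        else:
            lo = mid  # distortion within budget: try a larger threshold
    return lo
```

Because the distortion grows monotonically with the threshold, bisection converges to the largest threshold (and hence the most zeros, the best compression) that still respects the quality target.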
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.
1995-01-01
NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-08-01
Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.
A survey of southern hemisphere meteor showers
NASA Astrophysics Data System (ADS)
Jenniskens, Peter; Baggaley, Jack; Crumpton, Ian; Aldous, Peter; Pokorny, Petr; Janches, Diego; Gural, Peter S.; Samuels, Dave; Albers, Jim; Howell, Andreas; Johannink, Carl; Breukers, Martin; Odeh, Mohammad; Moskovitz, Nicholas; Collison, Jack; Ganju, Siddha
2018-05-01
Results are presented from a video-based meteoroid orbit survey conducted in New Zealand between Sept. 2014 and Dec. 2016, which netted 24,906 orbits from +5 to -5 magnitude meteors. 44 new southern hemisphere meteor showers are identified after combining this data with that of other video-based networks. Results are compared to showers reported from recent radar-based surveys. We find that video cameras and radar often see different showers and sometimes measure different semi-major axis distributions for the same meteoroid stream. For identifying showers in sparse daily orbit data, a shower look-up table of radiant position and speed as a function of time was created. This can replace the commonly used method of identifying showers from a set of mean orbital elements by using a discriminant criterion, which does not fully describe the distribution of meteor shower radiants over time.
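A radiant look-up table of this kind can be queried by date, radiant direction, and speed. A toy Python sketch (the tolerances and the sample table row are illustrative assumptions, not the survey's values):

```python
import math

def radiant_distance_deg(ra1, dec1, ra2, dec2):
    """Angular separation (degrees) between two radiants."""
    r = math.radians
    c = (math.sin(r(dec1)) * math.sin(r(dec2))
         + math.cos(r(dec1)) * math.cos(r(dec2)) * math.cos(r(ra1 - ra2)))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def identify_shower(table, day, ra, dec, speed, max_angle=3.0, max_dv=10.0):
    """table rows: (name, day_of_year, ra, dec, speed); radiant drift is
    handled by storing one row per day. Returns the closest matching
    shower name, or None for a likely sporadic meteor."""
    best, best_d = None, max_angle
    for name, t_day, t_ra, t_dec, t_v in table:
        if t_day != day or abs(t_v - speed) > max_dv:
            continue
        d = radiant_distance_deg(ra, dec, t_ra, t_dec)
        if d <= best_d:
            best, best_d = name, d
    return best
```

Indexing by day sidesteps the mean-orbit discriminant criterion the abstract mentions: the table already encodes how the radiant and speed drift through the activity period.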
PAM-4 delivery based on pre-distortion and CMMA equalization in a ROF system at 40 GHz
NASA Astrophysics Data System (ADS)
Zhou, Wen; Zhang, Jiao; Han, Xifeng; Kong, Miao; Gou, Pengqi
2018-06-01
In this paper, we propose PAM-4 delivery in a ROF system at 40 GHz. The PAM-4 transmission data are generated via look-up table (LUT) pre-distortion, then delivered over 25-km single-mode fiber and a 0.5-m wireless link. At the receiver side, the received signal is processed with cascaded multi-modulus algorithm (CMMA) equalization to improve the decision precision. Our measured results show that 10-Gbaud PAM-4 transmission in a ROF system at 40 GHz can be achieved with a BER of 1.6 × 10^-3. To our knowledge, this is the first demonstration of LUT pre-distortion and CMMA equalization in a ROF system to improve signal performance.
Noise generator for tinnitus treatment based on look-up tables
NASA Astrophysics Data System (ADS)
Uriz, Alejandro J.; Agüero, Pablo; Tulli, Juan C.; Castiñeira Moreira, Jorge; González, Esteban; Hidalgo, Roberto; Casadei, Manuel
2016-04-01
Treatment of tinnitus by means of masking sounds can significantly improve the quality of life of individuals who suffer from this condition. In view of that, it is possible to develop noise synthesizers based on random number generators in digital signal processors (DSPs), which are used in almost all digital hearing aid devices. DSP architectures have limitations in implementing a pseudo-random number generator; because of this, the noise statistics may fall short of expectations. In this paper, a technique is proposed to generate additive white Gaussian noise (AWGN) or other types of filtered noise using coefficients stored in the program memory of the DSP. An implementation of the technique is also carried out on a dsPIC from Microchip®. Objective experiments and experimental measurements are performed to analyze the proposed technique.
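The idea, reading precomputed noise samples from program memory instead of running a pseudo-random generator on the DSP, can be sketched in Python (the table size, seed, and stride-based readout are assumptions; on the dsPIC the table would be generated off-line and stored in program memory):

```python
import random

def make_noise_table(size=1024, seed=12345):
    """Precompute zero-mean, unit-variance Gaussian samples off-line; at
    run time only this stored table would be read back."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

def noise_stream(table, n, stride=17):
    """Read the table with a stride coprime to its length, so all entries
    are visited before any repeats and the audible period is maximized."""
    size = len(table)
    return [table[(i * stride) % size] for i in range(n)]
```

Because the samples are generated with a full-quality Gaussian routine at build time, the run-time statistics are as good as the table itself, at the cost of program memory rather than DSP cycles.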
The VLSI design of a Reed-Solomon encoder using Berlekamps bit-serial multiplier algorithm
NASA Technical Reports Server (NTRS)
Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.
1982-01-01
Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A dual-basis representation of the (255, 223) RS code over the Galois field GF(2^8) is used. The conventional RS encoder for long codes often requires look-up tables to perform the multiplication of two field elements. Berlekamp's algorithm requires only shifting and exclusive-OR operations.
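The trade-off the abstract describes can be illustrated in Python — a sketch assuming the field polynomial 0x11D commonly used for (255, 223) RS codes, not the chip's actual circuitry:

```python
# The conventional RS encoder multiplies field elements with log/antilog
# look-up tables; Berlekamp's bit-serial scheme needs only shifts and XORs.
# Minimal sketch in GF(2^8) with the polynomial x^8+x^4+x^3+x^2+1 (0x11D).

PRIM = 0x11D

def gf_mul_shift_xor(a, b):
    """Shift-and-XOR multiplication: no tables, only the operations a
    bit-serial circuit implements."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:          # reduce modulo the field polynomial
            a ^= PRIM
    return result

# Build the log/antilog tables a table-driven encoder would use.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x = gf_mul_shift_xor(x, 2)   # multiply by the generator alpha = 2
for i in range(255, 512):
    EXP[i] = EXP[i - 255]        # fold so LOG[a] + LOG[b] needs no modulo

def gf_mul_lut(a, b):
    """Table-driven multiplication: two log look-ups plus one antilog look-up."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```

Both routines compute the same products; the first trades table memory for shift/XOR logic, which is exactly the saving the bit-serial design exploits.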
Puzzler Solution: Perfect Weather for a Picnic | Poster
It looks like we stumped you. We did not receive any correct guesses for the current Poster Puzzler, which is an image of the top of the Building 434 picnic table, with a view looking towards Building 472. This picnic table and others across campus were supplied by the NCI at Frederick Campus Improvement Committee. Building 434, located on Wood Street, is home to the staff of
Becoming Reactive by Concretization
NASA Technical Reports Server (NTRS)
Prieditis, Armand; Janakiraman, Bhaskar
1992-01-01
One way to build a reactive system is to construct an action table indexed by the current situation or stimulus. The action table describes what course of action to pursue for each situation or stimulus. This paper describes an incremental approach to constructing the action table through achieving goals with a hierarchical search system. These hierarchies are generated with transformations called concretizations, which add constraints to a problem and which can reduce the search space. The basic idea is that an action for a state is looked up in the action table and executed whenever the action table has an entry for that state; otherwise, a path is found to the nearest (cost-wise in a graph with cost-weighted arcs) state that has a mapping from a state in the next highest hierarchy. For each state along the solution path, the successor state in the path is cached in the action table entry for that state. Without caching, the hierarchical search system can logarithmically reduce search. When the table is complete, the system no longer searches: it simply reacts by proceeding to the state listed in the table for each state. Since the cached information is specific only to the nearest state in the next highest hierarchy and not the goal, inter-goal transfer of reactivity is possible. To illustrate our approach, we show how an implemented hierarchical search system can become completely reactive.
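The look-up-then-search-and-cache loop can be sketched as follows. The 6-node graph, the node names, and the `next_state` helper are invented for illustration; the paper's system searches within concretization hierarchies rather than a flat graph:

```python
from collections import deque

# React when the action table has an entry for the current state;
# otherwise search for a path and cache each state's successor,
# so later visits react without search.
graph = {
    'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'],
    'D': ['goal'], 'E': ['goal'], 'goal': [],
}
action_table = {}
searches = 0

def next_state(state):
    global searches
    if state in action_table:            # reactive case: table hit
        return action_table[state]
    searches += 1                        # deliberative case: search, then cache
    parent = {state: None}
    frontier = deque([state])
    while frontier:                      # breadth-first search to 'goal'
        s = frontier.popleft()
        if s == 'goal':
            while parent[s] is not None: # cache each state's successor
                action_table[parent[s]] = s
                s = parent[s]
            return action_table[state]
        for n in graph[s]:
            if n not in parent:
                parent[n] = s
                frontier.append(n)
    return None

# First run searches once; subsequent runs are pure table look-ups.
path = []
s = 'A'
while s != 'goal':
    s = next_state(s)
    path.append(s)
```

After the first episode, re-running the loop from 'A' triggers no further search: the system has become completely reactive for that goal.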
Evaluation of CFD to Determine Two-Dimensional Airfoil Characteristics for Rotorcraft Applications
NASA Technical Reports Server (NTRS)
Smith, Marilyn J.; Wong, Tin-Chee; Potsdam, Mark; Baeder, James; Phanse, Sujeet
2004-01-01
The efficient prediction of helicopter rotor performance, vibratory loads, and aeroelastic properties still relies heavily on the use of comprehensive analysis codes by the rotorcraft industry. These comprehensive codes utilize look-up tables to provide two-dimensional aerodynamic characteristics. Typically these tables are composed of a combination of wind tunnel data, empirical data, and numerical analyses. The potential to rely more heavily on numerical computations based on Computational Fluid Dynamics (CFD) simulations has become more of a reality with the advent of faster computers and more sophisticated physical models. The ability of five different CFD codes, applied independently, to predict the lift, drag, and pitching moments of rotor airfoils is examined for the SC1095 airfoil, which is utilized in the UH-60A main rotor. Extensive comparisons with the results of ten wind tunnel tests are performed. These CFD computations are found to be as good as experimental data in predicting many of the aerodynamic performance characteristics. Four turbulence models were examined (Baldwin-Lomax, Spalart-Allmaras, Menter SST, and k-omega).
An automated approach to the design of decision tree classifiers
NASA Technical Reports Server (NTRS)
Argentiero, P.; Chin, P.; Beaudet, P.
1980-01-01
The classification of large-dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of 0.76, compared to the theoretically optimum 0.79 probability of correct classification associated with a full-dimensional Bayes classifier. Recommendations for future research are included.
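A Bayes table look-up decision rule at a single node can be sketched as follows. The two-class Gaussian statistics below are illustrative, not the paper's LANDSAT values: the 2-D canonical feature space is quantized into cells, each cell stores the maximum-likelihood class offline, and classification becomes a single table index:

```python
import math

# Illustrative class statistics: mean and (diagonal) variance per class.
classes = {
    'water':  ((0.2, 0.3), (0.02, 0.02)),
    'forest': ((0.6, 0.7), (0.03, 0.03)),
}

def log_likelihood(x, mean, var):
    """Log density of an axis-aligned 2-D Gaussian."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, mean, var))

# Offline: fill the decision table with the most likely class per cell.
BINS = 32
table = [[None] * BINS for _ in range(BINS)]
for i in range(BINS):
    for j in range(BINS):
        x = ((i + 0.5) / BINS, (j + 0.5) / BINS)   # cell centre
        table[i][j] = max(classes,
                          key=lambda c: log_likelihood(x, *classes[c]))

def classify(x):
    """Online: quantize the feature vector and read the table."""
    i = min(int(x[0] * BINS), BINS - 1)
    j = min(int(x[1] * BINS), BINS - 1)
    return table[i][j]
```

All likelihood arithmetic happens offline; at classification time each sample costs two quantizations and one table read, which is what makes the per-node rule cheap enough for a tree of such nodes.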
Remote sensing of atmospheric aerosols with the SPEX spectropolarimeter
NASA Astrophysics Data System (ADS)
van Harten, G.; Rietjens, J.; Smit, M.; Snik, F.; Keller, C. U.; di Noia, A.; Hasekamp, O.; Vonk, J.; Volten, H.
2013-12-01
Characterizing atmospheric aerosols is key to understanding their influence on climate through their direct and indirect radiative forcing. This requires long-term global coverage, at high spatial (~km) and temporal (~days) resolution, which can only be provided by satellite remote sensing. Aerosol load and properties such as particle size, shape and chemical composition can be derived from multi-wavelength radiance and polarization measurements of sunlight that is scattered by the Earth's atmosphere at different angles. The required polarimetric accuracy of ~10^(-3) is very challenging, particularly since the instrument is located on a rapidly moving platform. Our Spectropolarimeter for Planetary EXploration (SPEX) is based on a novel, snapshot spectral modulator, with the intrinsic ability to measure polarization at high accuracy. It exhibits minimal instrumental polarization and is completely solid-state and passive. An athermal set of birefringent crystals in front of an analyzer encodes the incoming linear polarization into a sinusoidal modulation in the intensity spectrum. Moreover, a dual beam implementation yields redundancy that allows for a mutual correction in both the spectrally and spatially modulated data to increase the measurement accuracy. A partially polarized calibration stimulus has been developed, consisting of a carefully depolarized source followed by tilted glass plates to induce polarization in a controlled way. Preliminary calibration measurements show an accuracy of SPEX of well below 10^(-3), with a sensitivity limit of 2*10^(-4). We demonstrate the potential of the SPEX concept by presenting retrievals of aerosol properties based on clear sky measurements using a prototype satellite instrument and a dedicated ground-based SPEX. The retrieval algorithm, originally designed for POLDER data, performs iterative fitting of aerosol properties and surface albedo, where the initial guess is provided by a look-up table. 
The retrieved aerosol properties, including aerosol optical thickness, single scattering albedo, size distribution and complex refractive index, will be compared with the on-site AERONET sun-photometer, lidar, particle counter and sizer, and PM10 and PM2.5 monitoring instruments. Retrievals of the aerosol layer height based on polarization measurements in the O2A absorption band will be compared with lidar profiles. Furthermore, the possibility of enhancing the retrieval accuracy by replacing the look-up table with a neural network based initial guess will be discussed, using retrievals from simulated ground-based data.
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on-line. 
The optimal avoidance trajectory is implemented as a receding-horizon model predictive control law. Therefore, at each time step, the optimal avoidance trajectory is found and the first time step of its acceleration is applied. At the next time step of the control computer, the problem is re-solved and the new first time step is again applied. This continual updating allows the RCA algorithm to adapt to a colliding spacecraft that is making erratic course changes.
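The offline/online split of the look-up-table idea can be sketched in one dimension. The acceleration limit, the grids, and the burn-then-coast geometry below are illustrative assumptions, not the flight algorithm: for each quantized time-to-collision and required miss distance, the shortest burn time is precomputed, so the online step is a single table index:

```python
import math

A_MAX = 0.1                      # m/s^2, assumed lateral acceleration limit

def burn_time(t_c, r):
    """Smallest burn time t_b of a burn-then-coast maneuver achieving lateral
    displacement r by time t_c, i.e. A*t_b*t_c - 0.5*A*t_b^2 >= r (or None)."""
    disc = t_c ** 2 - 2.0 * r / A_MAX
    if disc < 0:
        return None              # not reachable even burning the whole time
    return t_c - math.sqrt(disc)

# Offline: tabulate burn times over quantized collision geometries.
T_GRID = [float(t) for t in range(5, 65, 5)]    # time-to-collision buckets, s
R_GRID = [5.0, 10.0, 20.0]                      # miss-distance buckets, m
lut = {(t, r): burn_time(t, r) for t in T_GRID for r in R_GRID}

def lookup(t_c, r):
    """Online: round the collision geometry to the nearest table entry."""
    t_key = min(T_GRID, key=lambda t: abs(t - t_c))
    r_key = min(R_GRID, key=lambda g: abs(g - r))
    return lut[(t_key, r_key)]
```

The real algorithm parameterizes full bang-off-bang trajectories in three dimensions, but the structure is the same: all optimization lives in the offline table build, and the real-time path costs only a look-up.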
Numerical Model Sensitivity to Heterogeneous Satellite Derived Vegetation Roughness
NASA Technical Reports Server (NTRS)
Jasinski, Michael; Eastman, Joseph; Borak, Jordan
2011-01-01
The sensitivity of a mesoscale weather prediction model to a 1 km satellite-based vegetation roughness initialization is investigated for a domain within the south central United States. Three different roughness databases are employed: i) a control or standard look-up table roughness that is a function only of land cover type, ii) a spatially heterogeneous roughness database, specific to the domain, that was previously derived using a physically based procedure and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery, and iii) a MODIS climatologic roughness database that, like (i), is a function only of land cover type, but possesses domain-specific mean values from (ii). The model used is the Weather Research and Forecast Model (WRF) coupled to the Community Land Model within the Land Information System (LIS). For each simulation, a statistical comparison is made between modeled results and ground observations within a domain including Oklahoma, Eastern Arkansas, and Northwest Louisiana during a 4-day period within IHOP 2002. Sensitivity analysis compares the impact of the three roughness initializations on time-series temperature, precipitation probability of detection (POD), average wind speed, boundary layer height, and turbulent kinetic energy (TKE). Overall, the results indicate that, for the current investigation, replacement of the standard look-up table values with the satellite-derived values statistically improves model performance for most observed variables. Such natural roughness heterogeneity enhances the surface wind speed, PBL height, and TKE production by up to 10 percent, with a lesser effect over grassland and a greater effect over mixed land cover domains.
Correlation and prediction of dynamic human isolated joint strength from lean body mass
NASA Technical Reports Server (NTRS)
Pandya, Abhilash K.; Hasson, Scott M.; Aldridge, Ann M.; Maida, James C.; Woolford, Barbara J.
1992-01-01
A relationship between a person's lean body mass and the maximum torque that can be produced with each isolated joint of the upper extremity was investigated. The maximum dynamic isolated joint torque (upper extremity) of 14 subjects was collected using a dynamometer multi-joint testing unit. These data were reduced to a table of coefficients of second-degree polynomials, computed using a least-squares regression method. All the coefficients were then organized into look-up tables, a compact and convenient storage/retrieval mechanism for the data set. Data for each joint, direction, and velocity were normalized with respect to that joint's average and merged into files (one for each curve for a particular joint). Regression was performed on each of these files to derive a table of normalized population curve coefficients for each joint axis, direction, and velocity. In addition, a regression table covering all upper extremity joints was built, which relates average torque to lean body mass for an individual. These two tables are the basis of the regression model, which allows the prediction of dynamic isolated joint torques from an individual's lean body mass.
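The two-table prediction scheme can be sketched as follows. The coefficient values, joint names, and the `predict_torque` helper are invented for illustration; the study's actual regression values are not reproduced here:

```python
# Table 1: normalized torque-curve shape per (joint, direction, velocity),
# stored as quadratic coefficients in joint angle (degrees). Invented values.
shape = {('elbow', 'flexion', 60): (0.8, 0.004, -0.00005)}   # a0, a1, a2

# Table 2: regression of average torque on lean body mass per joint.
lbm_scale = {'elbow': (5.0, 1.2)}        # average torque = b0 + b1 * LBM

def predict_torque(joint, direction, velocity, angle_deg, lbm_kg):
    """Scale the normalized population curve by the subject's predicted
    average torque, recovering an individual torque estimate."""
    a0, a1, a2 = shape[(joint, direction, velocity)]
    b0, b1 = lbm_scale[joint]
    avg = b0 + b1 * lbm_kg               # subject-specific average torque
    return avg * (a0 + a1 * angle_deg + a2 * angle_deg ** 2)
```

The point of the two-table split is that the population curve shapes are measured once, while personalization needs only the subject's lean body mass.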
FIR Filter of DS-CDMA UWB Modem Transmitter
NASA Astrophysics Data System (ADS)
Kang, Kyu-Min; Cho, Sang-In; Won, Hui-Chul; Choi, Sang-Sung
This letter presents low-complexity digital pulse shaping filter structures for a direct sequence code division multiple access (DS-CDMA) ultra-wideband (UWB) modem transmitter with a ternary spreading code. The proposed finite impulse response (FIR) filter structures using a look-up table (LUT) reduce the required memory by about 50% to 80% in comparison with conventional FIR filter structures, and are consequently suitable for high-speed parallel data processing.
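The memory/complexity trade can be sketched for a small filter: with ternary symbols, a length-N input window takes only 3^N values, so every output sample can be precomputed and the run-time multiply-accumulate replaced by one table read. The 4-tap coefficients below are assumptions, not the letter's design:

```python
import itertools

TAPS = [0.1, 0.4, 0.4, 0.1]          # assumed shaping-filter coefficients
SYMS = (-1, 0, 1)                    # ternary spreading-code alphabet

# Offline: one table entry per possible symbol window (3^4 = 81 entries).
lut = {}
for window in itertools.product(SYMS, repeat=len(TAPS)):
    lut[window] = sum(s * t for s, t in zip(window, TAPS))

def filter_lut(symbols):
    """FIR filtering by table look-up over a sliding symbol window."""
    padded = (0,) * (len(TAPS) - 1) + tuple(symbols)
    return [lut[padded[i:i + len(TAPS)][::-1]]
            for i in range(len(symbols))]

def filter_direct(symbols):
    """Reference direct-form convolution for comparison."""
    padded = (0,) * (len(TAPS) - 1) + tuple(symbols)
    return [sum(TAPS[k] * padded[i + len(TAPS) - 1 - k]
                for k in range(len(TAPS)))
            for i in range(len(symbols))]
```

The letter's structures exploit additional symmetries to shrink the table further; this sketch shows only the basic substitution of arithmetic by indexing.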
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward-compatible HDR image/video compression, a general approach is to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order or a 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared with the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
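The prefix-sum idea can be sketched on a 1-D curve, a simplified stand-in for the LDR-to-HDR mapping (the piecewise test signal and ranges are our own, and the continuity constraint is omitted): the normal-equation entries of a 2nd-order least-squares fit are sums of x^k and x^k·y, so cumulative-sum tables make the fit for any candidate pivot an O(1) computation:

```python
import random

random.seed(1)
N = 256
xs = [i / (N - 1) for i in range(N)]
# Piecewise test curve with a break near x = 0.4, plus mild noise.
ys = [(2 * x * x if x < 0.4 else 0.5 + 0.8 * x) + random.gauss(0, 0.01)
      for x in xs]

# Prefix-sum tables: S[k][i] = sum of x^k over the first i samples,
# T[k][i] = sum of x^k * y, Y2[i] = sum of y^2.
S = [[0.0] * (N + 1) for _ in range(5)]
T = [[0.0] * (N + 1) for _ in range(3)]
Y2 = [0.0] * (N + 1)
for i, (x, y) in enumerate(zip(xs, ys)):
    for k in range(5):
        S[k][i + 1] = S[k][i] + x ** k
    for k in range(3):
        T[k][i + 1] = T[k][i] + x ** k * y
    Y2[i + 1] = Y2[i] + y * y

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_sse(lo, hi):
    """SSE of the best quadratic on samples lo..hi-1, from the tables alone:
    SSE = y'y - 2 c'v + c'Mc with M = X'X, v = X'y."""
    m = [[S[i + j][hi] - S[i + j][lo] for j in range(3)] for i in range(3)]
    v = [T[i][hi] - T[i][lo] for i in range(3)]
    d = det3(m)
    c = []
    for col in range(3):             # Cramer's rule for the 3x3 normal equations
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        c.append(det3(mc) / d)
    sse = Y2[hi] - Y2[lo]
    for i in range(3):
        sse -= 2 * c[i] * v[i]
        for j in range(3):
            sse += c[i] * m[i][j] * c[j]
    return sse

# Exhaustive pivot search, each candidate costing O(1) table arithmetic.
best = min(range(8, N - 7), key=lambda p: fit_sse(0, p) + fit_sse(p, N))
```

Because each candidate pivot costs a fixed amount of table arithmetic, searching every pivot is barely more expensive than searching one, which is the "near constant time" property of the paper.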
Gao, Zheng; Gui, Ping
2012-07-01
In this paper, we present a digital predistortion technique to improve the linearity and power efficiency of a high-voltage class-AB power amplifier (PA) for ultrasound transmitters. The system is composed of a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and a field-programmable gate array (FPGA) in which the digital predistortion (DPD) algorithm is implemented. The DPD algorithm updates the error, which is the difference between the ideal signal and the attenuated distorted output signal, in the look-up table (LUT) memory during each cycle of a sinusoidal signal using the least-mean-square (LMS) algorithm. On the next signal cycle, the error data are used to equalize the signal with negative harmonic components to cancel the amplifier's nonlinear response. The algorithm also includes a linear interpolation method applied to the windowed sinusoidal signals for the B-mode and Doppler modes. The measurement test bench uses an arbitrary function generator as the DAC to generate the input signal, an oscilloscope as the ADC to capture the output waveform, and software to implement the DPD algorithm. The measurement results show that the proposed system is able to reduce the second-order harmonic distortion (HD2) by 20 dB and the third-order harmonic distortion (HD3) by 14.5 dB, while at the same time improving the power efficiency by 18%.
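The cycle-by-cycle LMS table update can be sketched with a toy memoryless amplifier model. The cubic nonlinearity, table size, and step size below are assumptions, not the measured class-AB PA; the structure shown — one LUT entry per sample index of the periodic signal, updated from the ideal-minus-output error — mirrors the described loop:

```python
import math

N = 64                                   # samples per sinusoid cycle
MU = 0.5                                 # LMS step size (assumed)

def pa(v):
    """Toy amplifier: mild cubic compression produces harmonic distortion."""
    return v - 0.2 * v ** 3

ideal = [0.7 * math.sin(2 * math.pi * n / N) for n in range(N)]
lut = [0.0] * N                          # additive correction table

for _ in range(200):                     # run many cycles of adaptation
    for n in range(N):
        out = pa(ideal[n] + lut[n])      # predistorted drive through the PA
        err = ideal[n] - out             # error against the ideal signal
        lut[n] += MU * err               # LMS update of the table entry

residual = max(abs(ideal[n] - pa(ideal[n] + lut[n])) for n in range(N))
```

At convergence the table holds exactly the pre-inverse of the nonlinearity at each sample, so the cascade of LUT and amplifier reproduces the ideal sinusoid and its harmonics are suppressed.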
NASA Astrophysics Data System (ADS)
Zhao, Huayong; Williams, Ben; Stone, Richard
2014-01-01
A new low-cost optical diagnostic technique, called Cone Beam Tomographic Three Colour Spectrometry (CBT-TCS), has been developed to measure the planar distributions of temperature, soot particle size, and soot volume fraction in a co-flow axisymmetric laminar diffusion flame. The image of a flame is recorded by a colour camera, and then, by using colour interpolation and applying a cone beam tomography algorithm, a colour map can be reconstructed that corresponds to a diametral plane. Look-up tables calculated using Planck's law and different scattering models are then employed to deduce the temperature, approximate average soot particle size, and soot volume fraction in each voxel (volumetric pixel). A sensitivity analysis of the look-up tables shows that the results have a high temperature resolution but a relatively low soot particle size resolution. The assumptions underlying the technique are discussed in detail. Sample data from an ethylene laminar diffusion flame are compared with data in the literature for similar flames. The comparison shows very consistent temperature and soot volume fraction profiles. Further analysis indicates that the differences seen in comparison with published results are within the measurement uncertainties. This methodology is ready to be applied to measure 3D data by capturing multiple flame images from different angles for non-axisymmetric flames.
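The Planck-law look-up step can be sketched for simple two-colour ratio thermometry. The wavelengths and grid below are illustrative and no soot scattering model is included, so this is only the skeleton of the paper's three-colour tables:

```python
import bisect, math

C2 = 1.4388e-2                       # second radiation constant, m*K
LAM1, LAM2 = 450e-9, 650e-9          # assumed effective channel wavelengths

def planck(lam, T):
    """Blackbody spectral intensity up to a constant factor."""
    return 1.0 / (lam ** 5 * (math.exp(C2 / (lam * T)) - 1.0))

# Offline: tabulate the blue/red intensity ratio over a temperature grid.
temps = [1000.0 + 2.0 * i for i in range(501)]      # 1000..2000 K grid
ratios = [planck(LAM1, T) / planck(LAM2, T) for T in temps]

def temperature_from_ratio(r):
    """Invert a measured colour ratio by binary search of the table
    (the ratio increases monotonically with temperature)."""
    i = bisect.bisect_left(ratios, r)
    return temps[min(i, len(temps) - 1)]
```

The table resolution directly sets the temperature resolution, which is why the sensitivity analysis in the paper reports a high temperature resolution but a coarser one for particle size, where the ratio varies more weakly.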
Model-Based Wavefront Control for CCAT
NASA Technical Reports Server (NTRS)
Redding, David; Lou, John Z.; Kissil, Andy; Bradford, Matt; Padin, Steve; Woody, David
2011-01-01
The 25-m aperture CCAT submillimeter-wave telescope will have a primary mirror that is divided into 162 individual segments, each of which is provided with 3 positioning actuators. CCAT will be equipped with innovative Imaging Displacement Sensors (IDS), inexpensive optical edge sensors capable of accurately measuring all segment relative motions. These measurements are used in a Kalman-filter-based Optical State Estimator to estimate wavefront errors, permitting use of a minimum-wavefront controller without direct wavefront measurement. This controller corrects the optical impact of errors in 6 degrees of freedom per segment, including lateral translations of the segments, using only the 3 actuated degrees of freedom per segment. The global motions of the Primary and Secondary Mirrors are not measured by the edge sensors. These are controlled using a gravity-sag look-up table. Predicted performance is illustrated by simulated response to errors such as gravity sag.
NASA Astrophysics Data System (ADS)
Zhao, Yue; Zhang, Wei; Zhu, Dianwen; Li, Changqing
2016-03-01
We performed numerical simulations and phantom experiments with a conical-mirror-based fluorescence molecular tomography (FMT) imaging system to optimize its performance. With phantom experiments, we compared three measurement modes in FMT: the whole-surface measurement mode, the transmission mode, and the reflection mode. Our results indicated that the whole-surface measurement mode performed the best. Then, we applied two different neutral density (ND) filters to improve the measurement's dynamic range. The benefits from the ND filters were smaller than predicted. Finally, with numerical simulations, we compared two laser excitation patterns: line and point. With the same number of excitation positions, we found that line laser excitation gave slightly better FMT reconstruction results than point laser excitation. In the future, we will implement Monte Carlo ray tracing simulations to calculate multiply reflected photons, and create a look-up table accordingly for calibration.
[Research on the High Efficiency Data Communication Repeater Based on STM32F103].
Zhang, Yahui; Li, Zheng; Chen, Guangfei
2015-11-01
To improve the radio frequency (RF) transmission distance of the wireless terminals of the medical internet of things (IoT) and to realize real-time and efficient data communication, an intelligent relay system based on the STM32F103 single-chip microcomputer (SCM) is proposed. The system used an nRF905 chip to collect patients' medical and health information in the 433 MHz band, used the SCM to control a serial-port-to-Wi-Fi module to transfer the information from the 433 MHz band to the 2.4 GHz wireless Wi-Fi band, and used a ready-list table look-up algorithm to improve the efficiency of data communications. The design realizes real-time and efficient data communication. The relay, which is easy to use and of high practical value, can extend the distance and mode of data transmission and achieve real-time transmission of data.
Accelerated computer generated holography using sparse bases in the STFT domain.
Blinder, David; Schelkens, Peter
2018-01-22
Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU, and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.
The Periodic Round Table (by Gary Katz)
NASA Astrophysics Data System (ADS)
Rodgers, Reviewed By Glen E.
2000-02-01
Unwrapping and lifting the Periodic Round Table out of its colorful box is an exciting experience for a professional chemist or a chemistry student. Touted as a "new way of looking at the elements", it is certainly that, at least at first blush. The "table" consists of four sets of two finely finished hardwood discs, each with elemental symbols and their corresponding atomic numbers pleasingly and symmetrically wood-burned into their faces. The four sets of two discs are 1 1/2, 3, 4 1/2, and 6 in. in diameter, each disc is 3/4 in. thick, and therefore the entire "round table" stands 6 in. high and is 6 in. in diameter at its base. The eight beautifully polished discs are held together by center dowels that allow each to be rotated separately.
NASA Technical Reports Server (NTRS)
Russell, Philip B.; Bauman, Jill J.
2000-01-01
This SAGE II Science Team task focuses on the development of a multi-wavelength, multi-sensor Look-Up-Table (LUT) algorithm for retrieving information about stratospheric aerosols from global satellite-based observations of particulate extinction. The LUT algorithm combines the 4-wavelength SAGE II extinction measurements (0.385 <= lambda <= 1.02 microns) with the 7.96 micron and 12.82 micron extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument, thus increasing the information content available from either sensor alone. The algorithm uses the SAGE II/CLAES composite spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R(sub eff), surface area S, volume V and size distribution width sigma(sub g).
NASA Astrophysics Data System (ADS)
Almansa, A. Fernando; Cuevas, Emilio; Torres, Benjamín; Barreto, África; García, Rosa D.; Cachorro, Victoria E.; de Frutos, Ángel M.; López, César; Ramos, Ramón
2017-02-01
A new zenith-looking narrow-band radiometer-based system (ZEN), conceived for dust aerosol optical depth (AOD) monitoring, is presented in this paper. The ZEN system comprises a new radiometer (ZEN-R41) and a methodology for AOD retrieval (ZEN-LUT). ZEN-R41 has been designed to be stand-alone and without moving parts, making it a low-cost and robust instrument with low maintenance, appropriate for deployment in remote and unpopulated desert areas. The ZEN-LUT method is based on the comparison of the measured zenith sky radiance (ZSR) with a look-up table (LUT) of computed ZSRs. The LUT is generated with the LibRadtran radiative transfer code. The sensitivity study showed that the ZEN-LUT method is appropriate for inferring AOD from ZSR measurements with an AOD standard uncertainty up to 0.06 for AOD500 nm ˜ 0.5 and up to 0.15 for AOD500 nm ˜ 1.0, considering instrumental errors of 5 %. The validation of the ZEN-LUT technique was performed using data from AErosol RObotic NETwork (AERONET) Cimel Electronique 318 photometers (CE318). A comparison between AOD obtained by applying the ZEN-LUT method on ZSRs (inferred from CE318 diffuse-sky measurements) and AOD provided by AERONET (derived from CE318 direct-sun measurements) was carried out at three sites characterized by a regular presence of desert mineral dust aerosols: Izaña and Santa Cruz in the Canary Islands and Tamanrasset in Algeria. The results show a coefficient of determination (R2) ranging from 0.99 to 0.97, and root mean square errors (RMSE) ranging from 0.010 at Izaña to 0.032 at Tamanrasset. The comparison of ZSR values from ZEN-R41 and the CE318 showed absolute relative mean bias (RMB) < 10 %. ZEN-R41 AOD values inferred from the ZEN-LUT methodology were compared with AOD provided by AERONET, showing fairly good agreement at all wavelengths, with mean absolute AOD differences < 0.030 and R2 higher than 0.97.
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
Bit-serial neuroprocessor architecture
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
2001-01-01
A neuroprocessor architecture employs a combination of bit-serial and serial-parallel techniques for implementing the neurons of the neuroprocessor. The neuroprocessor architecture includes a neural module containing a pool of neurons, a global controller, a sigmoid activation ROM look-up-table, a plurality of neuron state registers, and a synaptic weight RAM. The neuroprocessor reduces the number of neurons required to perform the task by time multiplexing groups of neurons from a fixed pool of neurons to achieve the successive hidden layers of a recurrent network topology.
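The sigmoid ROM look-up can be sketched as follows. The 256-entry table and the input range are illustrative, not the chip's actual word lengths: the activation input is clamped, quantized to the nearest table entry, and read out, trading a small quantization error for the removal of all run-time transcendental arithmetic:

```python
import math

ENTRIES = 256
X_MIN, X_MAX = -8.0, 8.0
STEP = (X_MAX - X_MIN) / (ENTRIES - 1)

# Offline: fill the "ROM" with sigmoid values on a uniform input grid.
rom = [1.0 / (1.0 + math.exp(-(X_MIN + i * STEP))) for i in range(ENTRIES)]

def sigmoid_lut(x):
    """Clamp, quantize to the nearest table entry, and look up."""
    x = max(X_MIN, min(X_MAX, x))
    return rom[min(ENTRIES - 1, int(round((x - X_MIN) / STEP)))]

# Worst-case quantization error over the table's input range.
worst = max(abs(sigmoid_lut(x / 100.0) - 1.0 / (1.0 + math.exp(-x / 100.0)))
            for x in range(-800, 801))
```

With this table spacing the worst-case error stays below 0.01, since the sigmoid's slope never exceeds 0.25; a hardware ROM simply stores the same values at a fixed word length.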
Digital intermediate frequency QAM modulator using parallel processing
Pao, Hsueh-Yuan (Livermore, CA); Tran, Binh-Nien (San Ramon, CA)
2008-05-27
The digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple and low cost method to implement a high-speed digital IF modulator using field programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up-tables (LUTs). The high-speed input data stream is parallel processed using the corresponding LUTs, which reduces the main processing speed, allowing the use of low cost FPGAs.
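The multiplier-free structure can be sketched as follows. The sample count and 16-QAM levels are assumptions; the point is that every output sample is a table read and a subtraction, because the level-scaled carriers are precomputed:

```python
import math

SAMPLES = 16                     # samples per carrier period (assumed)
LEVELS = (-3, -1, 1, 3)          # 16-QAM amplitude levels per rail (assumed)

# Offline LUTs indexed by (level, phase): the pre-multiplied carriers.
cos_lut = {a: [a * math.cos(2 * math.pi * n / SAMPLES) for n in range(SAMPLES)]
           for a in LEVELS}
sin_lut = {a: [a * math.sin(2 * math.pi * n / SAMPLES) for n in range(SAMPLES)]
           for a in LEVELS}

def modulate(symbols):
    """IF output = I*cos - Q*sin, built entirely from table reads."""
    out = []
    phase = 0
    for i_level, q_level in symbols:
        for _ in range(SAMPLES):     # one carrier period per symbol here
            out.append(cos_lut[i_level][phase] - sin_lut[q_level][phase])
            phase = (phase + 1) % SAMPLES
    return out
```

In the FPGA version the per-level tables become ROM LUTs addressed by the symbol bits and phase counter, and several phases are read in parallel to lower the clock rate of each path.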
2012-09-28
spectral-geotechnical libraries and models developed during remote sensing and calibration/validation campaigns conducted by NRL and collaborating institutions in four... (2010; Bachmann, Fry, et al, 2012a). The NRL HITT tool is a model for how we develop and validate software, and the future development of tools by
ERIC Educational Resources Information Center
National Center for Education Statistics, 2013
2013-01-01
This paper provides Appendix D, Standard Error tables, for the full report, entitled "Literacy, Numeracy, and Problem Solving in Technology-Rich Environments among U.S. Adults: Results from the Program for the International Assessment of Adult Competencies 2012. First Look. NCES 2014-008." The full report presents results of the Program…
3. DETAIL OF STONEWORK ON ARCH, WATER TABLE AND DENTILS ...
3. DETAIL OF STONEWORK ON ARCH, WATER TABLE AND DENTILS ON EAST ELEVATION LOOKING NORTHWEST. - Original Airport Entrance Overpass, Spanning original Airport Entrance Road at National Airport, Arlington, Arlington County, VA
Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS
NASA Astrophysics Data System (ADS)
Willison, A.; Bedard, D.
This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogenous surface materials. It represents the overall optical reflectance of objects as a sBRDF, a spectrometric quantity, obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters, and integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured of the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. It is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate simulated quantities. The sBRDF was chosen to represent material reflectance as it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all possible illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. 
These look-up tables are referenced when calculating the overall sBRDF of objects, where the contribution of each facet is proportionally integrated.
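The integration step described above (sBRDF to broadband BRDF, and sBRDF times a colour filter to a colour-filtered BRDF) can be sketched as follows. This is a minimal illustration on a toy spectrum; the wavelength grid, flat sBRDF value, and box filter are assumptions for demonstration, not the paper's data.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration over an ordered grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def broadband_brdf(wavelengths_nm, sbrdf):
    """Integrate the sBRDF over the optical band -> broadband BRDF."""
    return trapezoid(sbrdf, wavelengths_nm)

def filtered_brdf(wavelengths_nm, sbrdf, filter_transmission):
    """Multiply the sBRDF by a colour-filter curve, then integrate."""
    return trapezoid(sbrdf * filter_transmission, wavelengths_nm)

wl = np.linspace(450.0, 700.0, 251)              # optical range, 1 nm steps
sbrdf = np.full_like(wl, 0.02)                   # flat toy spectrum, sr^-1
box_filter = ((wl >= 500.0) & (wl <= 600.0)).astype(float)

bb = broadband_brdf(wl, sbrdf)                   # whole-band reflectance
fb = filtered_brdf(wl, sbrdf, box_filter)        # colour-filtered reflectance
```

In the full simulation the same integrals would be evaluated per facet, with each facet's contribution weighted by its projected area before summation.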
1. DOWNRIVER VIEW OF BRIDGE, LOOKING SOUTH-SOUTHWEST Peter J. Edwards, photographer, August 1988 - Four Mile Bridge, Copper Creek Road, Spans Table Rock Fork, Mollala River, Molalla, Clackamas County, OR
5. DETAIL VIEW SHOWING ARCH AND SUPPORTS, LOOKING WEST-SOUTHWEST Mike Hanemann, photographer, August 1988 - Four Mile Bridge, Copper Creek Road, Spans Table Rock Fork, Mollala River, Molalla, Clackamas County, OR
87. AFT CREWS' MESS DECK - STARBOARD LOOKING TO PORT SHOWING COFFEE MAKER, ICE CREAM FREEZER, TABLES AND SCUTTLEBUTTS. - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA
Gangadari, Bhoopal Rao; Rafi Ahamed, Shaik
2016-09-01
Data security is the most expensive resource in biomedical wireless body area network applications. Cryptographic algorithms are used to protect the information against unauthorised access. The advanced encryption standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of the substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA2-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA2-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, the RCA2-based S-Box is found to perform comparatively better than the conventional LUT-based S-Box.
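The general second-order construction behind a reversible cellular automaton can be sketched as below. This is a hedged illustration of the reversibility property only: the update rule, ring size, and seed bytes are arbitrary choices, not the paper's RCA2 parameters or its S-Box mapping. Second-order means the next state depends on both the current and previous states, which makes the automaton invertible for any local rule f, since s[t-1][i] = f(neighbourhood of s[t] at i) XOR s[t+1][i].

```python
def step(prev, curr, rule):
    """One second-order CA step on a ring of bits (one byte per state).

    next[i] = elementary_rule(curr neighbourhood at i) XOR prev[i]
    """
    n = len(curr)
    out = []
    for i in range(n):
        neigh = (curr[(i - 1) % n] << 2) | (curr[i] << 1) | curr[(i + 1) % n]
        out.append(((rule >> neigh) & 1) ^ prev[i])
    return out

def step_back(nxt, curr, rule):
    """Inverse step: recovers the previous state from (curr, next)."""
    return step(nxt, curr, rule)     # same XOR structure runs backwards

bits = lambda b: [(b >> k) & 1 for k in range(8)]
prev, curr = bits(0x53), bits(0xCA)  # arbitrary seed bytes
rule = 30                            # any elementary rule number works
nxt = step(prev, curr, rule)
assert step_back(nxt, curr, rule) == prev   # reversible by construction
```

The invertibility holds regardless of how nonlinear the chosen rule is, which is what lets a CA-based S-Box stay bijective while its rule is picked for cryptographic strength.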
41. PATTERN STORAGE, GRIND STONE, WATER TANK, SHAFTING, AND TABLE SAW (L TO R)-LOOKING WEST. - W. A. Young & Sons Foundry & Machine Shop, On Water Street along Monongahela River, Rices Landing, Greene County, PA
An 18-ps TDC using timing adjustment and bin realignment methods in a Cyclone-IV FPGA
NASA Astrophysics Data System (ADS)
Cao, Guiping; Xia, Haojie; Dong, Ning
2018-05-01
The method commonly used to produce a field-programmable gate array (FPGA)-based time-to-digital converter (TDC) creates a tapped delay line (TDL) for time interpolation to yield high time precision. We conduct timing adjustment and bin realignment to implement a TDC in the Altera Cyclone-IV FPGA. The former tunes the carry look-up table (LUT) cell delay by changing the LUT's function through low-level primitives according to timing analysis results, while the latter realigns bins according to the timing result obtained by timing adjustment so as to create a uniform TDL with bins of equivalent width. The differential nonlinearity and time resolution can be improved by realigning the bins. After calibration, the TDC has an 18 ps root-mean-square timing resolution and a 45 ps least-significant-bit resolution.
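The calibration idea behind a TDL TDC can be illustrated with a code density test, a standard software-side technique: uniformly random hit times make each bin's count proportional to its width, from which a per-code correction table is built. This is a hedged numerical sketch, not the paper's hardware-level realignment of carry-chain LUT delays; all bin widths and counts are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
true_widths = rng.uniform(5.0, 25.0, size=64)      # ps, non-uniform TDL bins
edges = np.concatenate([[0.0], np.cumsum(true_widths)])

# uniformly distributed hits sample the delay line's bin structure
hits = rng.uniform(0.0, edges[-1], size=200_000)
codes = np.searchsorted(edges, hits, side="right") - 1
counts = np.bincount(codes, minlength=64)

clock_period = edges[-1]
est_widths = counts / counts.sum() * clock_period   # width proportional to count

# calibrated timestamp for each output code: centre of its estimated bin
lut = np.concatenate([[0.0], np.cumsum(est_widths)])[:-1] + est_widths / 2
```

Replacing raw code values with `lut[code]` linearizes the converter, which is what reduces differential nonlinearity after the hardware bins have been made as uniform as possible.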
NASA Astrophysics Data System (ADS)
Zuo, Chao; Chen, Qian; Gu, Guohua; Feng, Shijie; Feng, Fangxiaoyu; Li, Rubin; Shen, Guochen
2013-08-01
This paper introduces a high-speed three-dimensional (3-D) shape measurement technique for dynamic scenes by using bi-frequency tripolar pulse-width-modulation (TPWM) fringe projection. Two wrapped phase maps with different wavelengths can be obtained simultaneously by our bi-frequency phase-shifting algorithm. The two phase maps are then unwrapped using a simple look-up-table based number-theoretical approach. To guarantee the robustness of phase unwrapping as well as the high sinusoidality of the projected patterns, the TPWM technique is employed to generate ideal fringe patterns with slight defocus. We detail our technique, including its principle, pattern design, and system setup. Several experiments on dynamic scenes were performed, verifying that our method can achieve a speed of 1250 frames per second for fast, dense, and accurate 3-D measurements.
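The dual-wavelength unwrapping step can be sketched with the classical equivalent-wavelength scheme, a close relative of the paper's number-theoretical LUT approach (the exact table construction is not reproduced here; fringe pitches and the measurement range are illustrative assumptions). The beat of the two wrapped phases is unambiguous over the equivalent wavelength, and it fixes the fringe order of the fine phase.

```python
import numpy as np

lam1, lam2 = 21.0, 24.0                       # fringe pitches in pixels
lam_eq = lam1 * lam2 / (lam2 - lam1)          # equivalent wavelength: 168 px

def wrap(phi):
    return np.mod(phi, 2 * np.pi)

def unwrap_two_wavelength(phi1_w, phi2_w):
    beat = wrap(phi1_w - phi2_w)              # phase of the beat fringe
    x_coarse = lam_eq * beat / (2 * np.pi)    # rough absolute position
    # integer fringe order of the fine phase from the coarse estimate
    k1 = np.round((2 * np.pi * x_coarse / lam1 - phi1_w) / (2 * np.pi))
    return phi1_w + 2 * np.pi * k1            # unwrapped fine phase

x = np.linspace(0.0, 160.0, 400)              # true positions, < lam_eq
phi1 = 2 * np.pi * x / lam1
phi2 = 2 * np.pi * x / lam2
x_rec = unwrap_two_wavelength(wrap(phi1), wrap(phi2)) * lam1 / (2 * np.pi)
```

In a real system the rounding step is what a precomputed LUT replaces, trading the per-pixel arithmetic for a table indexed by the quantized beat phase.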
117. VIEW, LOOKING NORTHWEST, OF DIESTER MODEL 6 CONCENTRATING (SHAKING) TABLE, USED FOR PRIMARY, MECHANICAL SEPARATION OF GOLD FROM ORE. - Shenandoah-Dives Mill, 135 County Road 2, Silverton, San Juan County, CO
NASA Astrophysics Data System (ADS)
Dubovik, O.; Litvinov, P.; Lapyonok, T.; Ducos, F.; Fuertes, D.; Huang, X.; Torres, B.; Aspetsberger, M.; Federspiel, C.
2014-12-01
The POLDER imager on board the PARASOL micro-satellite is the only satellite polarimeter to have provided an extensive (~9-year) record of detailed polarimetric observations of the Earth's atmosphere from space. POLDER/PARASOL registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. Such observations are highly sensitive to variability in the properties of the atmosphere and underlying surface and cannot be adequately interpreted using look-up-table retrieval algorithms developed for analyzing the mono-viewing, intensity-only observations traditionally used in atmospheric remote sensing. Therefore, a new enhanced retrieval algorithm, GRASP (Generalized Retrieval of Aerosol and Surface Properties), has been developed and applied to the processing of PARASOL data. GRASP relies on highly optimized statistical fitting of the observations and derives a large number of unknowns for each observed pixel. The algorithm uses an elaborate model of the atmosphere and fully accounts for all multiple interactions of scattered solar light with aerosol, gases and the underlying surface. All calculations are performed during inversion and no look-up tables are used. The algorithm is very flexible in its use of various types of a priori constraints on the retrieved characteristics and in the parameterization of the surface-atmosphere system. It is also optimized for high-performance computing. The results of the PARASOL data processing will be presented, with emphasis on the transferability and adaptability of the developed retrieval concept to processing polarimetric observations of other planets. For example, flexibility and possible alternatives in modeling the properties of aerosol polydisperse mixtures, particle composition and shape, surface reflectance, etc. will be discussed.
Chander, G.; Haque, Md. O.; Micijevic, E.; Barsi, J.A.
2008-01-01
From the Landsat program's inception in 1972 to the present, the earth science user community has benefited from a historical record of remotely sensed data. The multispectral data from the Landsat 5 (L5) Thematic Mapper (TM) sensor provide the backbone for this extensive archive. Historically, the radiometric calibration procedure for this imagery used the instrument's response to the Internal Calibrator (IC) on a scene-by-scene basis to determine the gain and offset for each detector. The IC system degraded with time, causing radiometric calibration errors of up to 20 percent. In May 2003 the National Landsat Archive Production System (NLAPS) was updated to use a gain model rather than the scene-acquisition-specific IC gains to calibrate TM data processed in the United States. Further modification of the gain model was performed in 2007. L5 TM data that were processed using the IC prior to the calibration update do not benefit from the recent calibration revisions. A procedure has been developed to give users the ability to recalibrate their existing Level-1 products. The best recalibration results are obtained if the work order report that was originally included in the standard data product delivery is available. However, many users may not have the original work order report. In such cases, the IC gain look-up table that was generated using the radiometric gain trends recorded in the NLAPS database can be used for recalibration. This paper discusses the procedure to recalibrate L5 TM data when the work order report originally used in processing is not available. A companion paper discusses the generation of the NLAPS IC gain and bias look-up tables required to perform the recalibration.
Transformational Solar Array Final Report
NASA Technical Reports Server (NTRS)
Gaddy, Edward; Ballarotto, Mihaela; Drabenstadt, Christian; Nichols, John; Douglas, Mark; Spence, Brian; Stall, Richard A.; Sulyma, Chris; Sharps, Paul
2017-01-01
We have made outstanding progress in the Base Phase towards achieving the final NASA Research Announcement (NRA) goals. Progress is better than anticipated due to the lighter-than-predicted mass of the IMM solar cells. We look forward to further improvements in IMM cell performance during Option I and Option II; so, we are confident that the first four items listed in the table will improve to better than the NRA goals. The computation of the end-of-life blanket efficiency is uncertain because we have extrapolated the radiation damage from room-temperature measurements. The last three items listed in the table were not intended to be accomplished during the Base Phase; they will be achieved during Option I and Option II.
The "periodic table" of the genetic code: A new way to look at the code and the decoding process.
Komar, Anton A
2016-01-01
Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.
What is the Uncertainty in MODIS Aerosol Optical Depth in the Vicinity of Clouds?
NASA Technical Reports Server (NTRS)
Patadia, Falguni; Levy, Rob; Mattoo, Shana
2017-01-01
The MODIS dark-target (DT) algorithm retrieves aerosol optical depth (AOD) using a look-up table (LUT) approach. Global comparison of AOD (Collection 6) with ground-based sun photometers gives an estimated error (EE) of +/-(0.04 + 10%) over ocean. However, the EE does not represent per-retrieval uncertainty. For retrievals that are biased high compared to AERONET, we aim here to closely examine the contribution of biases due to the presence of clouds and the per-pixel retrieval uncertainty. We have characterized the AOD uncertainty at 550 nm due to the standard deviation of reflectance in the 10 km retrieval region, and the uncertainties related to gas (H2O, O3) absorption, surface albedo, and aerosol models. The uncertainty in retrieved AOD appears to lie within the estimated over-ocean error envelope of +/-(0.03 + 10%). Regions between broken clouds tend to have higher uncertainty. Compared to C6 AOD, a retrieval omitting observations in the vicinity of clouds (≤ 1 km) is biased by about +/- 0.05. For a homogeneous aerosol distribution, clear-sky retrievals show near-zero bias. A close look at per-pixel reflectance histograms suggests the possibility of retrieving with median reflectance values.
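The core of a LUT retrieval of this kind can be sketched in a few lines: a radiative-transfer code precomputes top-of-atmosphere reflectance versus AOD for a fixed geometry and aerosol model, and the retrieval inverts that monotonic table for the observed reflectance. The node values below are illustrative stand-ins, not the MODIS LUT.

```python
import numpy as np

# Illustrative LUT nodes: TOA reflectance grows monotonically with AOD
aod_nodes = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0])
toa_refl_nodes = np.array([0.02, 0.06, 0.10, 0.17, 0.28, 0.36])

def retrieve_aod(observed_reflectance):
    """Invert the monotonic LUT by linear interpolation."""
    return float(np.interp(observed_reflectance, toa_refl_nodes, aod_nodes))

aod_at_node = retrieve_aod(0.10)       # falls on a table node
aod_between = retrieve_aod(0.08)       # interpolated between nodes
```

Per-pixel uncertainty then follows by propagating the reflectance spread (for example, the 10 km standard deviation) through the same inverse interpolation.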
49. COMMAND INFORMATION CENTER (CIC) - AFT LOOKING FORWARD PORT TO STARBOARD SHOWING VARIOUS TYPES OF RADAR UNITS, PLOT TABLES AND PLOTTING BOARDS. - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA
Looking northeast from roof of Machine Shop (Bldg. 163) at transfer table pit and Boiler Shop (Bldg. 152) - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, Machine Shop, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM
Performance optimization of internet firewalls
NASA Astrophysics Data System (ADS)
Chiueh, Tzi-cker; Ballman, Allen
1997-01-01
Internet firewalls control the data traffic in and out of an enterprise network by checking network packets against a set of rules that embodies an organization's security policy. Because rule checking is computationally more expensive than routing-table look-up, it could become a bottleneck for scaling up the performance of IP routers, which typically implement firewall functions in software. In this paper, we analyze the performance problems associated with firewalls, particularly packet filters; propose a connection cache to amortize the costly security check over the packets in a connection; and report preliminary performance results of a trace-driven simulation showing that the average packet check time can be reduced by a factor of at least 2.5.
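The connection-cache idea can be sketched as follows: the full rule scan runs once per connection (keyed by the 5-tuple), and every subsequent packet of the same connection hits the cache. The rule format, field names, and default-deny policy below are illustrative assumptions, not the paper's implementation.

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    src: str; dst: str; sport: int; dport: int; proto: str

def check_rules(pkt, rules):
    """Linear scan of the rule list -- the expensive path."""
    for match, verdict in rules:
        if match(pkt):
            return verdict
    return "deny"                      # assumed default-deny policy

class CachingFirewall:
    def __init__(self, rules):
        self.rules = rules
        self.cache = {}                # 5-tuple -> cached verdict

    def filter(self, pkt):
        key = FiveTuple(*pkt)
        if key not in self.cache:      # first packet of the connection
            self.cache[key] = check_rules(key, self.rules)
        return self.cache[key]         # later packets: one dict look-up

rules = [(lambda p: p.dport == 80 and p.proto == "tcp", "allow")]
fw = CachingFirewall(rules)
fw.filter(("10.0.0.1", "10.0.0.2", 1234, 80, "tcp"))              # scan + fill
verdict = fw.filter(("10.0.0.1", "10.0.0.2", 1234, 80, "tcp"))    # cache hit
```

The amortization is exactly the factor the paper measures: an n-rule scan per connection instead of per packet, with a constant-time hash look-up for everything after the first packet.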
Fuel Cell Stack Testing and Durability in Support of Ion Tiger UAV
2010-06-02
N00173-08-2-C008 specified. In June 2008, the first M250 stack 242503 data were incorporated into the PEMFC system model as a look-up data table... control and operational model which implements the operational strategy by controlling the power from the PEMFC systems and battery pack for a total...
1981-11-01
...of gasoline compared with the plan in force. The report also mentions that the fuel consumption of a large vehicle was calculated using a model... most of the submitted data was not readily enterable into the Vehicle Simulation program. Because of the design of the table look-ups in the program...
Monrose, Erica; Ledergerber, Jessica; Acheampong, Derrick; Jandorf, Lina
2017-09-21
To assess participants' reasons for seeking cancer screening information at community health fairs and what they do with the information they receive, a mixed quantitative and qualitative approach was used. Community health fairs were organized in underserved New York City neighbourhoods. From June 14, 2016 to August 26, 2016, cancer prevention tables providing information about various cancer screenings were set up at 12 local community health fairs in New York City. In-person and follow-up telephone surveys assessed interest in the cancer prevention table, personal cancer screening adherence rates, information-sharing behaviours and demographic variables. Statistical analyses were performed using IBM SPSS 22.0: frequencies, descriptives and cross-tabulations. All qualitative data were coded by theme so that they could be analysed through SPSS; for example, "Were you interested in a specific cancer?" might be coded as 2 for "yes, breast cancer". One hundred and sixteen patrons participated in the initial survey. Of those, 88 (78%) agreed to give their contact information for the follow-up survey, and 60 follow-up surveys were completed (68%). Of those who reported reading the material, 45% shared the information; 15% subsequently spoke to a provider about cancer screenings and 40% intended to speak to a provider. Participants disseminated information without prompting, suggesting the reach of these fairs extends beyond the people who visit our table. Future studies should look at whether patrons would share information at higher rates when they are explicitly encouraged to share it.
Puzzler Solution: Perfect Weather for a Picnic | Poster
It looks like we stumped you. We did not receive any correct guesses for the current Poster Puzzler, which is an image of the top of the Building 434 picnic table, with a view looking towards Building 472. This picnic table and others across campus were supplied by the NCI at Frederick Campus Improvement Committee. Building 434, located on Wood Street, is home to the staff of Scientific Publications, Graphics & Media (SPGM), the Central Repository, and the NCI Experimental Therapeutics Program support group, Applied and Developmental Research Directorate.
2015-04-08
ISS043E091650 (04/08/2015) --- A view of the food table located in the Russian Zvezda service module on the International Space Station taken by Expedition 43 Flight Engineer Scott Kelly. Assorted food, drink and condiment packets are visible. Kelly tweeted this image along with the comment: "Looks messy, but it's functional. Our #food table on the @space station. What's for breakfast? #YearInSpace".
ERIC Educational Resources Information Center
Burge, Bethan
2015-01-01
This election factsheet highlights the following points: (1) It isn't always possible to say with certainty from looking at a country's rank in the PISA educational league tables alone whether one country or economy has definitely performed better than another; (2) England's position in the league tables is dependent on which countries and…
ERIC Educational Resources Information Center
Soh, Kay Cheng
2011-01-01
The outcome of university ranking is of much interest and concern to the many stakeholders, including university's sponsors, administrators, staff, current and prospective students, and the public. The results of rankings presented in the form of league tables, analogous to football league tables, attract more attention than do the processes by…
ERIC Educational Resources Information Center
Knapp, Laura G.; Kelly-Reid, Janice E.; Whitmore, Roy W.; Miller, Elise
2007-01-01
This report presents information from the Winter 2005-06 Integrated Postsecondary Education Data System (IPEDS) web-based data collection. Tabulations represent data requested from all postsecondary institutions participating in Title IV federal student financial aid programs. The tables in this publication include data on the number of staff…
125. BENCH SHOP, LOOKING SOUTHEAST AT CENTER OF ROOM SHOWING TOOL SHARPENER ON RIGHT AND ELECTRIC TABLE SAW AT CENTER. - Gruber Wagon Works, Pennsylvania Route 183 & State Hill Road at Red Bridge Park, Bernville, Berks County, PA
VIEW OF PDP ROOM AT LEVEL +27, LOOKING NORTH TOWARD TILTING TABLE AREA. PART OF SHEAVE RACK FOR PDP IN LOWER LEFT - Physics Assembly Laboratory, Area A/M, Savannah River Site, Aiken, Aiken County, SC
NASA Technical Reports Server (NTRS)
Whelan, Todd Michael
1996-01-01
In a real-time or batch mode simulation that is designed to model aircraft dynamics over a wide range of flight conditions, a table look-up scheme is implemented to determine the forces and moments on the vehicle based upon the values of parameters such as angle of attack, altitude, Mach number, and control surface deflections. Simulation Aerodynamic Variable Interface (SAVI) is a graphical user interface to the flight simulation input data, designed to operate on workstations that support X Windows. The purpose of the application is to provide two- and three-dimensional visualization of the data, to allow an intuitive sense of the data set. SAVI also allows the user to manipulate the data, either to conduct an interactive study of the influence of changes on the vehicle dynamics, or to make revisions to the data set based on new information such as flight test results. This paper discusses the reasons for developing the application, provides an overview of its capabilities, and outlines the software architecture and operating environment.
EXTENSION OF SHEAR RUNOUT TABLE INTO SHIPPING BUILDING, WHICH LAY PERPENDICULAR TO 8" MILL. VIEW LOOKING NORTH INCLUDES NEW BOLD PRODUCT SHEARS, STOPS LENGTH GAUGES, AND BUNDLING CRADLES. - LTV Steel, 8-inch Bar Mill, Buffalo Plant, Buffalo, Erie County, NY
Walters, Daniel; Stringer, Simon; Rolls, Edmund
2013-01-01
The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a “look-up” table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity. PMID:23526976
Estimating skin blood saturation by selecting a subset of hyperspectral imaging data
NASA Astrophysics Data System (ADS)
Ewerlöf, Maria; Salerud, E. Göran; Strömberg, Tomas; Larsson, Marcus
2015-03-01
Skin blood haemoglobin saturation (s_b) can be estimated with hyperspectral imaging using the wavelength (λ) range of 450-700 nm, where haemoglobin absorption displays distinct spectral characteristics. Depending on the image size and the photon transport algorithm, computations may be demanding. Therefore, this work aims to evaluate subsets with a reduced number of wavelengths for s_b estimation. White Monte Carlo simulations are performed using a two-layered tissue model with discrete values for epidermal thickness (t_epi) and the reduced scattering coefficient (μ's), mimicking an imaging setup. A detected-intensity look-up table is calculated for a range of model parameter values relevant to human skin, with absorption effects added in post-processing. Skin model parameters, including absorbers, are: μ's(λ), t_epi, haemoglobin saturation (s_b), tissue fraction blood (f_b) and tissue fraction melanin (f_mel). The skin model paired with the look-up table allows spectra to be calculated swiftly. Three inverse models with varying numbers of free parameters are evaluated: A(s_b, f_b), B(s_b, f_b, f_mel) and C (all parameters free). Fourteen wavelength candidates are selected by analysing the maximal spectral sensitivity to s_b and minimizing the sensitivity to f_b. All possible combinations of these candidates with three, four and 14 wavelengths, as well as the full spectral range, are evaluated for estimating s_b for 1000 randomly generated evaluation spectra. The results show that the simplified models A and B estimated s_b accurately using four wavelengths (mean error 2.2% for model B). If the number of wavelengths was increased, the model complexity needed to be increased to avoid poor estimations.
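The general scheme (a forward model tabulated over a parameter grid, inverted by least-squares fitting of a measured spectrum) can be sketched as below. The Beer-Lambert-style forward model and the cosine/sine "absorption spectra" are toy stand-ins for the paper's Monte-Carlo-derived look-up table; only the grid-search inversion pattern carries over.

```python
import numpy as np

wl = np.linspace(450, 700, 26)
# toy chromophore spectra standing in for oxy/deoxy haemoglobin absorption
eps_oxy = 1.0 + 0.5 * np.cos(wl / 40.0)
eps_deoxy = 1.0 + 0.5 * np.sin(wl / 40.0)

def forward(s_b, f_b):
    """Detected intensity for saturation s_b and blood fraction f_b."""
    mua = f_b * (s_b * eps_oxy + (1 - s_b) * eps_deoxy)
    return np.exp(-mua)                       # Beer-Lambert-like toy model

# precompute the LUT over a (saturation, blood fraction) grid
sat_grid = np.linspace(0.0, 1.0, 101)
fb_grid = np.linspace(0.005, 0.05, 10)
lut = np.array([[forward(s, f) for f in fb_grid] for s in sat_grid])

def invert(spectrum):
    """Least-squares grid search over the precomputed LUT."""
    err = ((lut - spectrum) ** 2).sum(axis=2)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return sat_grid[i], fb_grid[j]

s_est, f_est = invert(forward(0.7, 0.02))     # noise-free round trip
```

Restricting `wl` to a selected wavelength subset before building `lut` is the computational saving the paper evaluates: the same inversion runs on far fewer spectral channels.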
Efficient generation of holographic news ticker in holographic 3DTV
NASA Astrophysics Data System (ADS)
Kim, Seung-Cheol; Kim, Eun-Soo
2009-08-01
A news ticker is used to show breaking news or news headlines in conventional 2-D broadcasting systems. Breaking news must be created quickly, because the information should be sent out without delay, and news tickers will remain if holographic 3-D broadcasting arrives in the future. Several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method, but these methods either need much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and no loss of computational speed. We therefore propose a method to efficiently generate a holographic news ticker in holographic 3DTV or 3-D movies using the N-LUT method. The proposed method consists of five steps: construction of the LUT for each character; extraction of the characters in the news ticker; generation and shifting of the CGH pattern for the news ticker using the per-character LUT; composition of the hologram pattern for the 3-D video with the hologram pattern for the news ticker; and reconstruction of the holographic 3-D video with the news ticker. To test the proposed method, a car moving in front of a castle is used as the 3-D video and the words 'HOLOGRAM CAPTION GENERATOR' are used as the news ticker. The simulation results confirm the feasibility of the proposed method for fast generation of CGH patterns for holographic captions.
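The per-character LUT idea rests on shift invariance: a Fresnel fringe pattern at fixed depth translates with the source, so each character's CGH can be computed once and merely shifted as the ticker scrolls. The sketch below demonstrates that property with a toy point-source fringe; the pattern generator, grid size, and single-character "LUT" are illustrative assumptions, not the N-LUT itself.

```python
import numpy as np

H, W = 64, 256
yy, xx = np.mgrid[0:H, 0:W]

def char_cgh(cx):
    """Toy Fresnel-like fringe for a 'character' (point source) at column cx."""
    r2 = (xx - cx) ** 2 + (yy - H // 2) ** 2
    return np.cos(0.05 * r2)

lut = {"A": char_cgh(W // 2)}           # computed once, stored in the LUT

def ticker_frame(char, shift):
    """Reuse the cached pattern; translate instead of recomputing."""
    return np.roll(lut[char], shift, axis=1)

direct = char_cgh(W // 2 + 10)          # full recomputation for comparison
shifted = ticker_frame("A", 10)         # LUT reuse + shift
```

Away from the wrap-around columns the shifted cached pattern matches the recomputed one exactly, which is why a scrolling caption costs one table look-up and one shift per frame instead of a full CGH computation.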
NASA Astrophysics Data System (ADS)
Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy
2016-11-01
The entropy-variation of a battery is responsible for heat generation or consumption during operation and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method which is considered as a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.
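The separation idea can be written down in a few lines: over the same SoC window, irreversible (Joule) heat has the same sign on charge and discharge while the entropic heat flips sign, so the half-sum and half-difference of the two measured heats separate the contributions. This is a hedged sketch of that arithmetic only; the sign convention, symbol names, and toy wattages are illustrative, and in the paper the heats themselves come from inverting a thermal model.

```python
def separate_heats(q_charge, q_discharge):
    """Split measured heat rates (W) into Joule and entropic parts."""
    q_irr = (q_charge + q_discharge) / 2.0    # same sign both directions
    q_ent = (q_discharge - q_charge) / 2.0    # flips sign with current
    return q_irr, q_ent

def entropy_variation(q_ent, current_a, temperature_k):
    """dU/dT (V/K) from the entropic heat rate, q_ent = I * T * dU/dT."""
    return q_ent / (current_a * temperature_k)

q_irr, q_ent = separate_heats(q_charge=0.8, q_discharge=1.2)   # toy values
dudt = entropy_variation(q_ent, current_a=2.0, temperature_k=298.0)
```

Because the heats are measured continuously while the cell sweeps SoC, this yields a nearly continuous dU/dT curve, rather than the 5-10% SoC grid of the potentiometric method.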
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images combining, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities, combining anatomy (Magnetic Resonance Imaging, MRI) and functional information (Positron Emission Tomography, PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach to image fusion, taking advantage mainly of the HSL (Hue, Saturation and Luminosity) color space to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
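One natural HSL assignment, sketched below under stated assumptions, maps MRI intensity to luminosity (so anatomy stays readable) and PET uptake to hue (so function reads as colour) with saturation held fixed. The hue range and scaling are illustrative choices, not the operators formulated in the paper.

```python
import colorsys

def fuse_pixel(mri, pet, saturation=0.8):
    """mri, pet normalised to [0, 1]; returns an (r, g, b) triple.

    Anatomy drives lightness; function drives hue (blue = cold, red = hot).
    """
    hue = (1.0 - pet) * 2.0 / 3.0     # pet = 0 -> blue (2/3), pet = 1 -> red (0)
    lightness = mri
    # note: colorsys uses HLS argument order (hue, lightness, saturation)
    return colorsys.hls_to_rgb(hue, lightness, saturation)

hot = fuse_pixel(mri=0.5, pet=1.0)    # mid-grey anatomy, high uptake -> reddish
cold = fuse_pixel(mri=0.5, pet=0.0)   # same anatomy, no uptake -> bluish
```

Applying `fuse_pixel` element-wise over co-registered MRI and PET slices yields the fused image; contour or gradient operators would then modulate the MRI channel before fusion.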
47. INTERIOR VIEW LOOKING NORTH AT THE FRONT OF THE STAMP BATTERIES AND MORTAR BOXES. THE AMALGAMATION TABLES EXTEND TO THE FOREGROUND AND BOTTOM OF THE IMAGE. - Standard Gold Mill, East of Bodie Creek, Northeast of Bodie, Bodie, Mono County, CA
ERIC Educational Resources Information Center
Gorard, Stephen
2014-01-01
This paper considers the pupil intakes to Academies in England, and their attainment, based on a re-analysis of figures from the Annual Schools Census 1989-2012, the Department for Education School Performance Tables 2004-2012 and the National Pupil Database. It looks at the national picture, and the situation for Local Education Authorities, and…
7. GENERAL VIEW OF GUT SHANTY ON LEVEL 3; LOOKING SOUTHEAST; HOG VISCERA WERE SORTED AND CLEANED WITH HOT WATER ON LONG STAINLESS STEEL TABLES - Rath Packing Company, Hog Dressing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA
Looking south at, left to right, Heavy Equipment Shop (Bldg. 188), C.W.E. Office (Bldg. 130), Boiler Shop (Bldg. 152), and canopy over drop table pits - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM
Monthly analysis of PM ratio characteristics and its relation to AOD.
Sorek-Hamer, Meytar; Broday, David M; Chatfield, Robert; Esswein, Robert; Stafoggia, Massimo; Lepeule, Johanna; Lyapustin, Alexei; Kloog, Itai
2017-01-01
Airborne particulate matter (PM) is derived from diverse sources, natural and anthropogenic. Climate change processes and remote sensing measurements are affected by PM properties, which are often lumped into homogeneous size fractions that show spatiotemporal variation. Since different sources are attributed to different geographic locations and show specific spatial and temporal PM patterns, we explored the spatiotemporal characteristics of the PM2.5/PM10 ratio in different areas. Furthermore, we examined the statistical relationships between AERONET aerosol optical depth (AOD) products, satellite-based AOD, and the PM ratio, as well as the specific PM size fractions. PM data from the northeastern United States, the San Joaquin Valley, CA, and Italy, Israel, and France were analyzed, along with the spatially and temporally co-measured AOD products obtained from the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm. Our results suggest that when both the AERONET AOD and the AERONET fine-mode AOD are available, the AERONET AOD ratio can be a fair proxy for the ground PM ratio. Therefore, we recommend incorporating the fine-mode AERONET AOD in the calibration of MAIAC. Along with the relatively large variation in the observed PM ratio (especially in the northeastern United States), this shows the need to revisit MAIAC's assumptions on aerosol microphysical properties, and perhaps their seasonal variability, which are used to generate the look-up tables and conduct aerosol retrievals. Our results call for further scrutiny of satellite-borne AOD, in particular its errors, limitations, and relation to the vertical aerosol profile and the distribution of particle size, shape, and composition. This work is one step of the analyses required to gain a better understanding of what satellite-based AOD represents.
Specifically, they indicate the need to revisit MAIAC regional aerosol microphysical model assumptions used to generate look-up tables (LUTs) and conduct retrievals. Furthermore, relatively large variations in measured PM ratio shows that adding seasonality in aerosol microphysics used in LUTs, which is currently static, could also help improve accuracy of MAIAC retrievals. These results call for further scrutiny of satellite-borne AOD for better understanding of its limitations and relation to the vertical aerosol profile and particle size, shape, and composition.
Color quality management in advanced flat panel display engines
NASA Astrophysics Data System (ADS)
Lebowsky, Fritz; Neugebauer, Charles F.; Marnatti, David M.
2003-01-01
During recent years, color reproduction systems for consumer needs have experienced various difficulties. In particular, flat panels and printers could not reach a satisfactory color match: the RGB image stored on a retailer's Internet server did not show the desired colors on a consumer display or printer. STMicroelectronics addresses this important color reproduction issue inside their advanced display engines using novel algorithms targeted at low-cost consumer flat panels. Using a new RGB color space transformation, which combines a gamma-correction look-up table, tetrahedrization, and linear interpolation, we satisfy these market demands.
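The combination described, a per-channel gamma look-up table followed by a tetrahedrized 3D LUT with linear interpolation, can be sketched as follows. This is a minimal illustration, not STMicroelectronics' implementation: the 17-node grid, the identity LUT, and the gamma value are assumptions for demonstration; a real engine would store the characterized device response at the lattice nodes.

```python
def make_identity_lut(n):
    # n x n x n lattice; each node stores its own normalized RGB, so the
    # LUT encodes the identity transform (real LUTs hold measured colors)
    return [[[(r / (n - 1), g / (n - 1), b / (n - 1))
              for b in range(n)] for g in range(n)] for r in range(n)]

def gamma_lut(gamma, size=256):
    # 1D gamma-correction look-up table on [0, 1]
    return [(i / (size - 1)) ** gamma for i in range(size)]

def tetra_interp(lut, rgb):
    # tetrahedral interpolation: split the enclosing lattice cube into
    # six tetrahedra according to the ordering of the fractional parts
    n = len(lut)
    def locate(v):
        x = min(max(v, 0.0), 1.0) * (n - 1)
        i = min(int(x), n - 2)
        return i, x - i
    (ri, fr), (gi, fg), (bi, fb) = locate(rgb[0]), locate(rgb[1]), locate(rgb[2])
    c = lambda dr, dg, db: lut[ri + dr][gi + dg][bi + db]
    if fr >= fg >= fb:
        w = [(1 - fr, c(0, 0, 0)), (fr - fg, c(1, 0, 0)), (fg - fb, c(1, 1, 0)), (fb, c(1, 1, 1))]
    elif fr >= fb >= fg:
        w = [(1 - fr, c(0, 0, 0)), (fr - fb, c(1, 0, 0)), (fb - fg, c(1, 0, 1)), (fg, c(1, 1, 1))]
    elif fb >= fr >= fg:
        w = [(1 - fb, c(0, 0, 0)), (fb - fr, c(0, 0, 1)), (fr - fg, c(1, 0, 1)), (fg, c(1, 1, 1))]
    elif fg >= fr >= fb:
        w = [(1 - fg, c(0, 0, 0)), (fg - fr, c(0, 1, 0)), (fr - fb, c(1, 1, 0)), (fb, c(1, 1, 1))]
    elif fg >= fb >= fr:
        w = [(1 - fg, c(0, 0, 0)), (fg - fb, c(0, 1, 0)), (fb - fr, c(0, 1, 1)), (fr, c(1, 1, 1))]
    else:  # fb >= fg >= fr
        w = [(1 - fb, c(0, 0, 0)), (fb - fg, c(0, 0, 1)), (fg - fr, c(0, 1, 1)), (fr, c(1, 1, 1))]
    return tuple(sum(wi * node[ch] for wi, node in w) for ch in range(3))
```

Since the interpolation is affine within each tetrahedron and exact at the lattice nodes, the identity LUT reproduces any input, which is a convenient sanity check for an implementation.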
An efficient energy response model for liquid scintillator detectors
NASA Astrophysics Data System (ADS)
Lebanowski, Logan; Wan, Linyan; Ji, Xiangpan; Wang, Zhe; Chen, Shaomin
2018-05-01
Liquid scintillator detectors are playing an increasingly important role in low-energy neutrino experiments. In this article, we describe a generic energy response model of liquid scintillator detectors that provides energy estimations of sub-percent accuracy. This model fits a minimal set of physically-motivated parameters that capture the essential characteristics of scintillator response and that can naturally account for changes in scintillator over time, helping to avoid associated biases or systematic uncertainties. The model employs a one-step calculation and look-up tables, yielding an immediate estimation of energy and an efficient framework for quantifying systematic uncertainties and correlations.
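The one-step, table-based energy estimation can be illustrated with a toy nonlinearity. The quenching function below is a stand-in with made-up constants, not the model fitted in the paper; the point is the pattern: tabulate visible energy against true energy once, then invert each observation with a single interpolated look-up.

```python
import bisect

def visible_energy(e_true, q=0.5):
    # toy scintillator response (hypothetical): light yield is suppressed
    # at low energy and approaches linearity at high energy
    return e_true * e_true / (e_true + q)

def build_lut(e_max=10.0, n=1000):
    # tabulate (visible, true) pairs once; visible_energy is monotonic,
    # so the table can be searched by the visible-energy column
    return [(visible_energy(e_max * i / n), e_max * i / n) for i in range(1, n + 1)]

def estimate_energy(e_vis, lut):
    # one-step inversion: bracket e_vis in the table, then interpolate
    vis = [v for v, _ in lut]
    j = bisect.bisect_left(vis, e_vis)
    if j == 0:
        return lut[0][1]
    if j == len(lut):
        return lut[-1][1]
    (v0, e0), (v1, e1) = lut[j - 1], lut[j]
    return e0 + (e_vis - v0) / (v1 - v0) * (e1 - e0)
```

Because only the few fitted parameters enter the tabulated function, the table can be rebuilt cheaply when the scintillator response drifts over time.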
Determination of circumsolar radiation from Meteosat Second Generation
NASA Astrophysics Data System (ADS)
Reinhardt, B.; Buras, R.; Bugliaro, L.; Wilbert, S.; Mayer, B.
2014-03-01
Reliable data on circumsolar radiation, which is caused by scattering of sunlight by cloud or aerosol particles, is becoming more and more important for the resource assessment and design of concentrating solar technologies (CSTs). However, measuring circumsolar radiation is demanding and only very limited data sets are available. As a step to bridge this gap, a method was developed which allows for determination of circumsolar radiation from cirrus cloud properties retrieved by the geostationary satellites of the Meteosat Second Generation (MSG) family. The method takes output from the COCS algorithm to generate a cirrus mask from MSG data and then uses the retrieval algorithm APICS to obtain the optical thickness and the effective radius of the detected cirrus, which in turn are used to determine the circumsolar radiation from a pre-calculated look-up table. The look-up table was generated from extensive calculations using a specifically adjusted version of the Monte Carlo radiative transfer model MYSTIC and by developing a fast yet precise parameterization. APICS was also improved such that it determines the surface albedo, which is needed for the cloud property retrieval, in a self-consistent way instead of using external data. Furthermore, it was extended to consider new ice particle shapes to allow for an uncertainty analysis concerning this parameter. We found that the nescience of the ice particle shape leads to an uncertainty of up to 50%. A validation with 1 yr of ground-based measurements shows, however, that the frequency distribution of the circumsolar radiation can be well characterized with typical ice particle shape mixtures, which feature either smooth or severely roughened particle surfaces. However, when comparing instantaneous values, timing and amplitude errors become evident. 
For the circumsolar ratio (CSR), this is reflected in a mean absolute deviation (MAD) of 0.11 for both employed particle shape mixtures, and biases of 4% and 11% for the mixtures with smooth and roughened particles, respectively. If measurements with sub-scale cumulus clouds within the relevant satellite pixels are manually excluded, the instantaneous agreement between satellite and ground measurements improves. For a two-month time series, for which a manual screening of all-sky images was performed, MAD values of 0.08 and 0.07 were obtained for the two ice particle mixtures, respectively.
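Reading a pre-calculated look-up table of the kind described, indexed here by cirrus optical thickness and effective radius, typically reduces to bilinear interpolation between the four surrounding grid nodes. The sketch below is a generic interpolator with illustrative axes, not the actual MSG/MYSTIC table.

```python
import bisect

def bilinear(xs, ys, table, x, y):
    # xs, ys: ascending grid axes (e.g. optical thickness, effective
    # radius); table[i][j] holds the pre-computed value at (xs[i], ys[j])
    def bracket(axis, v):
        i = min(max(bisect.bisect_right(axis, v) - 1, 0), len(axis) - 2)
        t = (v - axis[i]) / (axis[i + 1] - axis[i])
        return i, min(max(t, 0.0), 1.0)  # clamp queries outside the grid
    i, tx = bracket(xs, x)
    j, ty = bracket(ys, y)
    return (table[i][j] * (1 - tx) * (1 - ty) + table[i + 1][j] * tx * (1 - ty)
            + table[i][j + 1] * (1 - tx) * ty + table[i + 1][j + 1] * tx * ty)
```

Because bilinear interpolation is exact for functions linear in each axis, a table filled from such a function reproduces it, which makes a simple correctness check.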
18. CROWS NEST ATOP SUPERSTRUCTURE. Looking up from northeast corner ...
18. CROWS NEST ATOP SUPERSTRUCTURE. Looking up from northeast corner of run line deck. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Test Stand 1-A, Test Area 1-120, north end of Jupiter Boulevard, Boron, Kern County, CA
10. DETAIL SHOWING THRUST MEASURING SYSTEM. Looking up from the ...
10. DETAIL SHOWING THRUST MEASURING SYSTEM. Looking up from the test stand deck to east. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Test Stand 1-A, Test Area 1-120, north end of Jupiter Boulevard, Boron, Kern County, CA
NASA Astrophysics Data System (ADS)
Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.
2017-12-01
Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP and ASCAT is a topic of active research. Soil texture heterogeneity is known to be one of the main sources of soil moisture spatial variability. A method has been developed that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression derived from a stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile, based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high-resolution maps of basic soil properties such as soil texture and bulk density, relevant information to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict for each SMOS, SMAP and ASCAT grid cell the sub-grid soil moisture variability based on the SoilGrids1km data set. We provide a look-up table that indicates the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling coarse soil moisture observations of the SMOS, SMAP and ASCAT missions. Downscaling SMAP data by a field capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.
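A look-up table of this kind can be consumed as a simple binned mapping from mean soil moisture to its sub-grid standard deviation, keyed per grid cell or, as in this sketch, per texture class. All numbers below are invented for illustration; the real table is derived from SoilGrids1km and the closed-form stochastic model.

```python
SM_STD_LUT = {
    # hypothetical values: (upper edge of mean-soil-moisture bin, std)
    "sand": [(0.10, 0.015), (0.20, 0.030), (0.35, 0.040), (1.00, 0.030)],
    "loam": [(0.10, 0.010), (0.20, 0.020), (0.35, 0.035), (1.00, 0.025)],
}

def subgrid_std(texture, mean_sm):
    # return the tabulated sub-grid standard deviation for the bin
    # containing the observed mean soil moisture
    for upper, std in SM_STD_LUT[texture]:
        if mean_sm <= upper:
            return std
    return SM_STD_LUT[texture][-1][1]
```

The non-monotonic shape (variability peaking at intermediate wetness and shrinking near saturation) mirrors the qualitative behaviour such closed-form models predict.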
Color management with a hammer: the B-spline fitter
NASA Astrophysics Data System (ADS)
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
Looking north through Machine Shop (Bldg. 163) Track 409 Doors ...
Looking north through Machine Shop (Bldg. 163) Track 409 Doors at transfer table, with Boiler Shop (Bldg. 152) at left and C.W.E. Shop No. 2 (Bldg. 47) at right - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM
League tables and school effectiveness: a mathematical model.
Hoyle, Rebecca B; Robinson, James C
2003-01-01
'School performance tables', an alphabetical list of secondary schools along with aggregates of their pupils' performances in national tests, have been published in the UK since 1992. Inevitably, the media have responded by publishing ranked 'league tables'. Despite concern over the potentially divisive effect of such tables, the current government has continued to publish this information in the same form. The effect of this information on standards and on the social make-up of the community has been keenly debated. Since there is no control group available that would allow us to investigate this issue directly, we present here a simple mathematical model. Our results indicate that, while random fluctuations from year to year can cause large distortions in the league-table positions, some schools still establish themselves as 'desirable'. To our surprise, we found that 'value-added' tables were no more accurate than tables based on raw exam scores, while a different method of drawing up the tables, in which exam results are averaged over a period of time, appears to give a much more reliable measure of school performance. PMID:12590748
15. View looking up Dramp from middle floor level showing ...
15. View looking up D-ramp from middle floor level showing lighting conduits and manometer panel on wall of decontamination area. Building 501, October 2, 1956 - Offutt Air Force Base, Strategic Air Command Headquarters & Command Center, Command Center, 901 SAC Boulevard, Bellevue, Sarpy County, NE
Self-Organization of Blood Pressure Regulation: Experimental Evidence
Fortrat, Jacques-Olivier; Levrard, Thibaud; Courcinous, Sandrine; Victor, Jacques
2016-01-01
Blood pressure regulation is a prime example of homeostatic regulation. However, some characteristics of the cardiovascular system better match a non-linear self-organized system than a homeostatic one. To determine whether blood pressure regulation is self-organized, we repeated the seminal demonstration of self-organized control of movement, but applied it to the cardiovascular system. We looked for two distinctive features peculiar to self-organization: non-equilibrium phase transitions and hysteresis in their occurrence when the system is challenged. We challenged the cardiovascular system by means of slow, 20-min Tilt-Up and Tilt-Down tilt table tests in random order. We continuously determined the phase between oscillations at the breathing frequency of Total Peripheral Resistances and Heart Rate Variability by means of cross-spectral analysis. We looked for a significant phase drift during these procedures, which signed a non-equilibrium phase transition. We determined at which head-up tilt angle it occurred. We checked that this angle was significantly different between Tilt-Up and Tilt-Down to demonstrate hysteresis. We observed a significant non-equilibrium phase transition in nine healthy volunteers out of 11 with significant hysteresis (48.1 ± 7.5° and 21.8 ± 3.9° during Tilt-Up and Tilt-Down, respectively, p < 0.05). Our study shows experimental evidence of self-organized short-term blood pressure regulation. It provides new insights into blood pressure regulation and its related disorders. PMID:27065880
Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel
2018-04-20
In this study, we report on the performance of satellite-based photosynthetically available radiation (PAR) algorithms used in published oceanic primary production models. The performance of these algorithms was evaluated using buoy observations under clear and cloudy skies, and for the particular case of low sun angles typically encountered at high latitudes or at moderate latitudes in winter. The PAR models consisted of (i) the standard one from the NASA-Ocean Biology Processing Group (OBPG), (ii) the Gregg and Carder (GC) semi-analytical clear-sky model, and (iii) look-up-tables based on the Santa Barbara DISORT atmospheric radiative transfer (SBDART) model. Various combinations of atmospheric inputs, empirical cloud corrections, and semi-analytical irradiance models yielded a total of 13 (11 + 2 developed in this study) different PAR products, which were compared with in situ measurements collected at high frequency (15 min) at a buoy site in the Mediterranean Sea (the "BOUée pour l'acquiSition d'une Série Optique à Long termE," or, "BOUSSOLE" site). An objective ranking method applied to the algorithm results indicated that seven PAR products out of 13 were well in agreement with the in situ measurements. Specifically, the OBPG method showed the best overall performance with a root mean square difference (RMSD) (bias) of 19.7% (6.6%) and 10% (6.3%) followed by the look-up-table method with a RMSD (bias) of 25.5% (6.8%) and 9.6% (2.6%) at daily and monthly scales, respectively. Among the four methods based on clear-sky PAR empirically corrected for cloud cover, the Dobson and Smith method consistently underestimated daily PAR while the Budyko formulation overestimated daily PAR. Empirically cloud-corrected methods using cloud fraction (CF) performed better under quasi-clear skies (CF<0.3) with an RMSD (bias) of 9.7%-14.8% (3.6%-11.3%) than under partially clear to cloudy skies (0.3
NASA Astrophysics Data System (ADS)
Jeon, Hosang; Nam, Jiho; Lee, Jayoung; Park, Dahl; Baek, Cheol-Ha; Kim, Wontaek; Ki, Yongkan; Kim, Dongwon
2015-06-01
Accurate dose delivery is crucial to the success of modern radiotherapy. To evaluate the dose actually delivered to patients, in-vivo dosimetry (IVD) is generally performed during radiotherapy to measure the entrance doses. In IVD, a build-up device should be placed on top of an in-vivo dosimeter to satisfy the electron equilibrium condition. However, a build-up device made of tissue-equivalent material or metal may perturb dose delivery to a patient, and requires an additional laborious and time-consuming process. We developed a novel IVD method that uses a look-up table of conversion ratios instead of a build-up device. We validated this method through a Monte Carlo simulation and 31 clinical trials. The mean error of the clinical IVD was 3.17% (standard deviation: 2.58%), which is comparable to that of conventional IVD methods. Moreover, the required time was greatly reduced, so the efficiency of IVD could be improved for both patients and therapists.
A virtual image chain for perceived image quality of medical display
NASA Astrophysics Data System (ADS)
Marchessoux, Cédric; Jung, Jürgen
2006-03-01
This paper describes a virtual image chain for medical display (project VICTOR, granted in the 5th Framework Programme by the European Commission). The chain starts from raw data of an image digitizer (CR, DR) or from synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities: hardcopy (film on a viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR, DR) or from a pattern generator in which the characteristics of CR/DR systems are introduced via their MTF and their dose-dependent Poisson noise. The image undergoes enhancement and is then displayed. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the DICOM Grayscale Standard Display Function (GSDF) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing conditions is modeled. The image is up-sampled and the DICOM GSDF or a Kanamori look-up table is applied. An anisotropic model of the printer MTF is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a human visual system model is applied to the intensity images (XYZ in cd/m2) in order to eliminate non-visible differences. Comparison yields visible differences, which are quantified by higher-order image quality metrics. A dedicated image viewer is used to visualize the intensity image and the visual difference maps.
Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.
2017-01-01
Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. 
We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930
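The two families can be contrasted on the simplest possible case, the passive membrane decay of a leaky integrate-and-fire neuron: the event-driven version reads a pre-compiled exponential table and jumps from one input event to the next, while the time-driven version iterates fixed Euler steps. The constants and resolutions below are illustrative, not those of the simulators studied.

```python
import math

TAU = 20.0       # membrane time constant in ms (illustrative)
RES = 0.01       # look-up-table time resolution in ms
LUT_LEN = 10000  # covers inter-event intervals up to 100 ms

# event-driven family: pre-compile the membrane decay into a table once
DECAY_LUT = [math.exp(-i * RES / TAU) for i in range(LUT_LEN)]

def decay_event_driven(v, elapsed_ms):
    # advance the state directly across the whole interval via the table
    idx = min(int(round(elapsed_ms / RES)), LUT_LEN - 1)
    return v * DECAY_LUT[idx]

def decay_time_driven(v, elapsed_ms, dt=0.001):
    # iterate dv/dt = -v/tau with explicit Euler steps
    for _ in range(int(round(elapsed_ms / dt))):
        v += dt * (-v / TAU)
    return v
```

The trade-off the paper studies is visible even here: the table gives the analytic answer in one memory access, while the iterative version pays per-step cost and accumulates integration error, which worsens as the model dynamics become stiffer.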
NASA Astrophysics Data System (ADS)
Viegas, Jaime; Mayeh, Mona; Srinivasan, Pradeep; Johnson, Eric G.; Marques, Paulo V. S.; Farahi, Faramarz
2017-02-01
In this work, a silicon oxynitride-on-silica refractometer is presented, based on sub-wavelength coupled arrayed-waveguide interference, and capable of low-cost, high-resolution, large-scale deployment. The sensor has an experimental spectral sensitivity as high as 3200 nm/RIU, covering refractive indices ranging from 1 (air) up to 1.43 (oils). The sensor readout can be performed by standard spectrometer techniques or by pattern projection onto a camera, followed by optical pattern recognition. Positive identification of the refractive index of an unknown species is obtained by cross-correlating its pattern with a look-up calibration table. Given the lower contrast between core and cladding in such devices, a higher mode overlap with single-mode fiber is achieved, leading to a larger coupling efficiency and more relaxed alignment requirements compared with a silicon photonics platform. Also, the optical transparency of the sensor in the visible range allows operation with visible light sources and camera detectors, at much lower capital cost for a complete sensor system. Furthermore, the choice of refractive indices of core and cladding in the sensor head with integrated readout allows the same device to be fabricated in polymers, for mass-production replication of disposable sensors.
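Identification by cross-correlation against a look-up calibration table can be sketched as an argmax of zero-lag normalized correlation over stored calibration patterns. The sinusoidal patterns and refractive-index keys below are invented stand-ins for the projected camera images.

```python
import math

def normalize(pattern):
    # zero-mean, unit-norm version of a 1D intensity pattern
    n = len(pattern)
    mu = sum(pattern) / n
    d = [x - mu for x in pattern]
    norm = sum(x * x for x in d) ** 0.5
    return [x / norm for x in d] if norm else d

def identify(measured, calibration):
    # calibration: {refractive_index: stored pattern}; return the key
    # whose pattern maximizes the zero-lag normalized cross-correlation
    m = normalize(measured)
    corr = lambda pat: sum(a * b for a, b in zip(m, normalize(pat)))
    return max(calibration, key=lambda ri: corr(calibration[ri]))

def toy_pattern(ri, n=200):
    # hypothetical interference pattern whose phase shifts with index
    return [math.sin(0.1 * i + 10.0 * ri) for i in range(n)]
```

Normalization makes the match insensitive to overall illumination and offset, so only the fringe structure decides which table entry wins.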
7. VIEW OF ESCAPE TRAINING TANK, LOOKING UP SOUTH SIDE ...
7. VIEW OF ESCAPE TRAINING TANK, LOOKING UP SOUTH SIDE FROM 50-FOOT PASSAGEWAY, SHOWING 25-FOOT BLISTER AT LEFT, 18-FOOT PASSAGEWAY AND PLATFORM AT RIGHT - U.S. Naval Submarine Base, New London Submarine Escape Training Tank, Albacore & Darter Roads, Groton, New London County, CT
InSight Spacecraft Lift to Spin Table & Pre-Spin Processing
2018-03-28
In the Astrotech facility at Vandenberg Air Force Base in California, technicians and engineers inspect NASA's Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, or InSight, spacecraft after it was placed on a spin table during preflight processing. InSight will be the first mission to look deep beneath the Martian surface. It will study the planet's interior by measuring its heat output and listening for marsquakes. The spacecraft will use the seismic waves generated by marsquakes to develop a map of the planet's deep interior. The resulting insight into Mars' formation will provide a better understanding of how other rocky planets, including Earth, were created. InSight is scheduled for liftoff May 5, 2018.
A fast point-cloud computing method based on spatial symmetry of Fresnel field
NASA Astrophysics Data System (ADS)
Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui
2017-10-01
Computer-generated holography (CGH) faces a great challenge in real-time holographic video display systems due to the high spatial-bandwidth product (SBP) required. This paper builds on the point-cloud method and takes advantage of two properties: the reversibility of Fresnel diffraction along the propagation direction, and the spatial symmetry of the fringe pattern of a point source, known as a Gabor zone plate, which can serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) approach is proposed: first, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at a dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on liquid crystal on silicon (LCOS) demonstrate that, while preserving the quality of the 3D reconstruction, the proposed method shortens computation time and improves computational efficiency.
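The shift-and-add use of principal fringe patterns in N-LUT-style methods can be illustrated in miniature: the Gabor zone plate of one point is computed once, and every object point then contributes only a translated, amplitude-scaled copy of it. The grid size, wavelength, pixel pitch, and depth below are invented values, and only a single depth plane is tabulated.

```python
import math

N = 64         # hologram size in pixels
WL = 0.000633  # wavelength in mm (633 nm); illustrative
PITCH = 0.01   # pixel pitch in mm; illustrative
Z = 100.0      # depth of the object plane in mm; illustrative

# principal fringe pattern (Gabor zone plate) of one on-axis point,
# sampled on a double-size grid so every shifted window stays inside it
M = 2 * N
PFP = [[math.cos(math.pi / (WL * Z) * (((i - N) * PITCH) ** 2 + ((j - N) * PITCH) ** 2))
        for j in range(M)] for i in range(M)]

def hologram(points):
    # points: (ix, iy, amplitude) with pixel offsets from the centre;
    # each point adds a translated copy of the one pre-computed PFP,
    # so no cosine is evaluated per object point
    h = [[0.0] * N for _ in range(N)]
    for ix, iy, amp in points:
        for u in range(N):
            row = PFP[u - ix + N // 2]
            for v in range(N):
                h[u][v] += amp * row[v - iy + N // 2]
    return h
```

Exploiting the zone plate's translational symmetry turns per-point fringe synthesis into pure table reads and additions, which is the source of the speed-up such methods report.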
Two-dimensional interpreter for field-reversed configurations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinhauer, Loren, E-mail: lstein@uw.edu
2014-08-15
An interpretive method is developed for extracting details of the fully two-dimensional (2D) “internal” structure of field-reversed configurations (FRC) from common diagnostics. The challenge is that only external and “gross” diagnostics are routinely available in FRC experiments. Inferring such critical quantities as the poloidal flux and the particle inventory has commonly relied on a theoretical construct based on a quasi-one-dimensional approximation. Such inferences sometimes differ markedly from the more accurate, fully 2D reconstructions of equilibria. An interpreter based on a fully 2D reconstruction is needed to enable realistic within-the-shot tracking of evolving equilibrium properties. Presented here is a flexible equilibrium reconstruction with which an extensive database of equilibria was constructed. An automated interpreter then uses this database as a look-up table to extract evolving properties. This tool is applied to data from the FRC facility at Tri Alpha Energy. It yields surprising results at several points, such as the inferences that the local β (plasma pressure/external magnetic pressure) of the plasma climbs well above unity and that the poloidal flux loss time is somewhat longer than previously thought, both of which arise from the full two-dimensionality of FRCs.
NASA Astrophysics Data System (ADS)
Atzberger, C.; Richter, K.
2009-09-01
The robust and accurate retrieval of vegetation biophysical variables using radiative transfer models (RTM) is seriously hampered by the ill-posedness of the inverse problem. With this research we further develop our previously published (object-based) inversion approach [Atzberger (2004)]. The object-based RTM inversion takes advantage of the geostatistical fact that the biophysical characteristics of nearby pixels are generally more similar than those at a larger distance. A two-step inversion based on PROSPECT+SAIL generated look-up tables is presented that can be easily implemented and adapted to other radiative transfer models. The approach takes into account the spectral signatures of neighboring pixels and optimizes a common value of the average leaf angle (ALA) for all pixels of a given image object, such as an agricultural field. Using a large set of leaf area index (LAI) measurements (n = 58) acquired over six different crops of the Barrax test site (Spain), we demonstrate that the proposed geostatistical regularization yields in most cases more accurate and spatially consistent results compared with the traditional (pixel-based) inversion. Pros and cons of the approach are discussed and possible future extensions presented.
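The two-step, object-based inversion can be sketched with a toy two-band forward model standing in for PROSPECT+SAIL: for each candidate average leaf angle (ALA), every pixel in the object picks its best-fitting LAI from the look-up table, and the ALA with the lowest residual summed over the whole object is retained as the common value. The forward-model coefficients below are invented for illustration.

```python
import math

def forward(lai, ala):
    # stand-in for the PROSPECT+SAIL forward model: toy red/NIR
    # reflectances with invented coefficients
    red = 0.30 * math.exp(-0.6 * lai) + 0.0005 * ala
    nir = 0.55 * (1.0 - math.exp(-0.45 * lai)) + 0.001 * ala
    return (red, nir)

def invert_object(pixels, lais, alas):
    # pixels: (red, nir) spectra of all pixels in one image object
    def sq_err(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = None
    for ala in alas:                        # step 1: candidate common ALA
        lut = [(lai, forward(lai, ala)) for lai in lais]
        fits = [min(lut, key=lambda e: sq_err(e[1], px)) for px in pixels]
        total = sum(sq_err(f[1], px) for f, px in zip(fits, pixels))
        if best is None or total < best[0]:
            best = (total, ala, [f[0] for f in fits])
    return best[1], best[2]                 # object ALA, per-pixel LAI
```

Forcing one ALA per object is the regularization: it removes a free parameter from each pixel's ill-posed inversion while still letting LAI vary pixel by pixel.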
9. GENERAL INTERIOR VIEW OF BEEF KILLING FLOOR; LOOKING SOUTHEAST; ...
9. GENERAL INTERIOR VIEW OF BEEF KILLING FLOOR; LOOKING SOUTHEAST; PLATFORMS IN FOREGROUND WERE USED BY SPLITTERS, TRIMMERS AND GOVERNMENT INSPECTORS; SKINNING TABLE RAN ALONG THE WINDOWS NEAR THE CENTER OF THE PHOTO - Rath Packing Company, Beef Killing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA
37. July 1974. WOOD SHOP, VIEW LOOKING NORTHWEST, SHOWING THE ...
37. July 1974. WOOD SHOP, VIEW LOOKING NORTHWEST, SHOWING THE PLANER WITH ITS BELT CHASE FROM THE BASEMENT LINESHAFT AND THE BELTING SYSTEM FOR THE TABLE-SHAPER. BEYOND THE PLANER IS THE BAND SAW. - Gruber Wagon Works, Pennsylvania Route 183 & State Hill Road at Red Bridge Park, Bernville, Berks County, PA
Global root zone storage capacity from satellite-based evaporation data
NASA Astrophysics Data System (ADS)
Wang-Erlandsson, Lan; Bastiaanssen, Wim; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel; van Dijk, Albert; Guerschman, Juan; Keys, Patrick; Gordon, Line; Savenije, Hubert
2016-04-01
We present an "earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale-independent. In contrast to traditional look-up table approaches, our method captures the variability in root zone storage capacity within land cover type, including in rainforests where direct measurements of root depth otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. We find that evergreen forests are able to create a large storage to buffer for extreme droughts (with a return period of up to 60 years), in contrast to short vegetation and crops (which seem to adapt to a drought return period of about 2 years). The presented method to estimate root zone storage capacity eliminates the need for soils and rooting depth information, which could be a game-changer in global land surface modelling.
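The core assumption, that vegetation sizes its root zone storage to bridge the largest cumulative dry-period deficit, reduces to a running water-balance maximum over the evaporation and precipitation series. The sketch below uses made-up daily values in mm; the actual method works on satellite-based evaporation fields and considers drought return periods.

```python
def root_zone_capacity(evap, precip):
    # accumulate the running soil-water deficit (E - P, floored at 0);
    # the maximum deficit reached is the storage the vegetation must
    # have bridged, i.e. the estimated root zone storage capacity
    deficit, capacity = 0.0, 0.0
    for e, p in zip(evap, precip):
        deficit = max(0.0, deficit + e - p)
        capacity = max(capacity, deficit)
    return capacity
```

Because only evaporation and precipitation enter, the estimate needs no soil maps or rooting-depth observations, which is what makes the approach attractive for land cover types where root measurements are scarce.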
12 CFR Appendix C to Subpart G - OCC Interpretations
Code of Federal Regulations, 2014 CFR
2014-01-01
....203(a)(1)-2. Paragraph 34.203(c)(2)(iii). 1. Confirming elements in the appraisal. To confirm that the elements in appendix A to this subpart are included in the written appraisal, a creditor need not look...)(7)(viii). 1. Bureau table of rural counties The Bureau publishes on its Web site a table of rural...
[Estimation of forest canopy chlorophyll content based on PROSPECT and SAIL models].
Yang, Xi-guang; Fan, Wen-yi; Yu, Ying
2010-11-01
The forest canopy chlorophyll content directly reflects the health and stress of a forest. Accurate estimation of the forest canopy chlorophyll content is a significant foundation for researching forest ecosystem cycle models. In the present paper, the inversion of the forest canopy chlorophyll content was based on the PROSPECT and SAIL models, i.e., on the physical mechanism. First, leaf and canopy spectra were simulated by the PROSPECT and SAIL models, respectively, and a leaf chlorophyll content look-up table was established for leaf chlorophyll content retrieval. Then leaf chlorophyll content was converted into canopy chlorophyll content by the Leaf Area Index (LAI). Finally, canopy chlorophyll content was estimated from a Hyperion image. The results indicated that the main effective bands for chlorophyll content were 400-900 nm; the leaf and canopy spectra simulated by the PROSPECT and SAIL models fitted the measured spectra well, with 7.06% and 16.49% relative error, respectively; the RMSE of the LAI inversion was 0.5426; and the forest canopy chlorophyll content was estimated well by the PROSPECT and SAIL models, with a precision of 77.02%.
NASA Technical Reports Server (NTRS)
Joyce, A. T.
1974-01-01
Significant progress has been made in the classification of surface conditions (land uses) with computer-implemented techniques based on the use of ERTS digital data and pattern recognition software. The supervised technique presently used at the NASA Earth Resources Laboratory is based on maximum likelihood ratioing with a digital table look-up approach to classification. After classification, colors are assigned to the various surface conditions (land uses) classified, and the color-coded classification is film-recorded on either positive or negative 9 1/2 in. film at the desired scale. Prints of the film strips are then mosaicked and photographed to produce a land use map in the desired format. Computer extraction of statistical information is performed to show the extent of each surface condition (land use) within any given land unit that can be identified in the image. Evaluations of the product indicate that classification accuracy is well within the limits for use by land resource managers and administrators. Classifications performed with digital data acquired during different seasons indicate that the combination of two or more classifications offers even better accuracy.
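Maximum-likelihood classification with a digital table look-up amounts to pre-computing, for a quantized grid of spectral values, which class wins the likelihood comparison, and then classifying each pixel by indexing the table rather than re-evaluating the likelihoods. The single-band Gaussian classes below are illustrative only; the ERTS processing described used multiband class statistics.

```python
import math

def gauss_ll(x, mu, var):
    # log-likelihood of value x under a 1D Gaussian class model
    return -0.5 * ((x - mu) ** 2 / var + math.log(2 * math.pi * var))

def build_decision_table(classes, n_bins=64, lo=0.0, hi=255.0):
    # classes: {name: (mean, variance)} for a single band (illustrative);
    # pre-compute the winning class for each quantized spectral value
    step = (hi - lo) / (n_bins - 1)
    table = [max(classes, key=lambda c: gauss_ll(lo + i * step, *classes[c]))
             for i in range(n_bins)]
    return table, lo, step

def classify(x, table, lo, step):
    # per-pixel cost is one quantization and one table index
    i = min(max(int(round((x - lo) / step)), 0), len(table) - 1)
    return table[i]
```

On 1970s hardware this trade, a one-time likelihood sweep in exchange for constant-time per-pixel decisions, is what made full-scene supervised classification practical.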
A Fast Visible-Infrared Imaging Radiometer Suite Simulator for Cloudy Atmospheres
NASA Technical Reports Server (NTRS)
Liu, Chao; Yang, Ping; Nasiri, Shaima L.; Platnick, Steven; Meyer, Kerry G.; Wang, Chen Xi; Ding, Shouguo
2015-01-01
A fast instrument simulator is developed to simulate the observations made in cloudy atmospheres by the Visible Infrared Imaging Radiometer Suite (VIIRS). The correlated k-distribution (CKD) technique is used to compute the transmissivity of absorbing atmospheric gases. The bulk scattering properties of ice clouds used in this study are based on the ice model used for the MODIS Collection 6 ice cloud products. Two fast radiative transfer models based on pre-computed ice cloud look-up tables are used for the VIIRS solar and infrared channels. The accuracy and efficiency of the fast simulator are quantified by comparison with a combination of the rigorous line-by-line (LBLRTM) and discrete ordinate radiative transfer (DISORT) models. Relative errors are less than 2% for simulated TOA reflectances in the solar channels, and brightness temperature differences in the infrared channels are less than 0.2 K. The simulator is over three orders of magnitude faster than the benchmark LBLRTM+DISORT model. Furthermore, the cloudy-atmosphere reflectances and brightness temperatures from the fast VIIRS simulator compare favorably with those from VIIRS observations.
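As a sketch of the CKD idea above: band transmissivity is approximated as a weighted sum of Beer-Lambert exponentials over a few representative absorption coefficients (quadrature k points), replacing a costly line-by-line spectral integration. The k values and weights below are made-up placeholders, not the simulator's actual gas tables.

```python
import math

def ckd_transmissivity(k_values, weights, path):
    """Correlated k-distribution transmissivity: a weighted sum of
    Beer-Lambert exponentials exp(-k * path) over quadrature k points."""
    return sum(w * math.exp(-k * path) for k, w in zip(k_values, weights))

# Hypothetical 3-point quadrature for one band (weights sum to 1).
K_POINTS = [0.01, 0.5, 4.0]   # representative absorption coefficients
WEIGHTS  = [0.6, 0.3, 0.1]    # quadrature weights
band_transmissivity = ckd_transmissivity(K_POINTS, WEIGHTS, path=2.0)
```

A handful of exponentials per band is what makes such simulators orders of magnitude faster than line-by-line models.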
Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S
2015-02-09
A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
RELAP-7 Progress Report. FY-2015 Optimization Activities Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Ray Alden; Zou, Ling; Andrs, David
2015-09-01
This report summarily documents the optimization activities on RELAP-7 for FY-2015. It includes the migration from the analytical stiffened gas equation of state for both the vapor and liquid phases to accurate and efficient property evaluations for both equilibrium and metastable (nonequilibrium) states using the Spline-Based Table Look-up (SBTL) method with the IAPWS-95 properties for steam and water. It also includes the initiation of realistic closure models based, where appropriate, on the U.S. Nuclear Regulatory Commission's TRACE code, and describes an improved entropy viscosity numerical stabilization method for the nonequilibrium two-phase flow model of RELAP-7. For ease of presentation to the reader, the nonequilibrium two-phase flow model used in RELAP-7 is briefly presented; for a detailed explanation the reader is referred to the RELAP-7 Theory Manual [R.A. Berry, J.W. Peterson, H. Zhang, R.C. Martineau, H. Zhao, L. Zou, D. Andrs, "RELAP-7 Theory Manual," Idaho National Laboratory INL/EXT-14-31366 (rev. 1), February 2014].
Implementation of high-resolution time-to-digital converter in 8-bit microcontrollers.
Bengtsson, Lars E
2012-04-01
This paper demonstrates how a time-to-digital converter (TDC) with sub-nanosecond resolution can be implemented in an 8-bit microcontroller using so-called "direct" methods. This means that a TDC is created using only five bidirectional digital input-output pins of a microcontroller and a few passive components (two resistors, a capacitor, and a diode). We demonstrate how a TDC for the range 1-10 μs is implemented with 0.17 ns resolution. This work also shows how to linearize the output by combining look-up tables and interpolation. © 2012 American Institute of Physics
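The look-up-table-plus-interpolation linearization mentioned in the abstract can be sketched as follows. The calibration pairs are invented for illustration, not measured TDC data: the technique stores a sparse table of (raw code, true time) points and interpolates linearly between them.

```python
# Hypothetical calibration pairs: raw TDC code -> true time in ns.
CAL_TABLE = [
    (0,   1000.0),
    (64,  3100.0),
    (128, 5400.0),
    (192, 7600.0),
    (255, 10000.0),
]

def linearize(raw):
    """Map a raw TDC code to time using the calibration table,
    interpolating linearly between the two nearest entries and
    clamping at the table ends."""
    if raw <= CAL_TABLE[0][0]:
        return CAL_TABLE[0][1]
    for (x0, y0), (x1, y1) in zip(CAL_TABLE, CAL_TABLE[1:]):
        if raw <= x1:
            frac = (raw - x0) / (x1 - x0)
            return y0 + frac * (y1 - y0)
    return CAL_TABLE[-1][1]
```

On an 8-bit target the same scheme would typically use fixed-point arithmetic, but the table-plus-interpolation structure is identical.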
Statistical classification of drug incidents due to look-alike sound-alike mix-ups.
Wong, Zoie Shui Yee
2016-06-01
It has been recognised that medication names that look or sound similar are a cause of medication errors. This study builds statistical classifiers for identifying medication incidents due to look-alike sound-alike mix-ups. A total of 227 patient safety incident advisories related to medication were obtained from the Canadian Patient Safety Institute's Global Patient Safety Alerts system. Eight feature selection strategies based on frequent terms, frequent drug terms and constituent terms were performed. Statistical text classifiers based on logistic regression, support vector machines with linear, polynomial, radial-basis and sigmoid kernels and decision tree were trained and tested. The models developed achieved an average accuracy of above 0.8 across all the model settings. The receiver operating characteristic curves indicated the classifiers performed reasonably well. The results obtained in this study suggest that statistical text classification can be a feasible method for identifying medication incidents due to look-alike sound-alike mix-ups based on a database of advisories from Global Patient Safety Alerts. © The Author(s) 2014.
Are financial incentives cost-effective to support smoking cessation during pregnancy?
Boyd, Kathleen A; Briggs, Andrew H; Bauld, Linda; Sinclair, Lesley; Tappin, David
2016-02-01
To investigate the cost-effectiveness of up to £400 worth of financial incentives for smoking cessation in pregnancy as an adjunct to routine health care. Cost-effectiveness analysis based on a Phase II randomized controlled trial (RCT) and a cost-utility analysis using a life-time Markov model. The RCT was undertaken in Glasgow, Scotland. The economic analysis was undertaken from the UK National Health Service (NHS) perspective. A total of 612 pregnant women randomized to receive usual cessation support plus or minus financial incentives of up to £400 vouchers (US $609), contingent upon smoking cessation. Comparison of usual support and incentive interventions in terms of cotinine-validated quitters, quality-adjusted life years (QALYs) and direct costs to the NHS. The incremental cost per quitter at 34-38 weeks pregnant was £1127 ($1716).This is similar to the standard look-up value derived from Stapleton & West's published ICER tables, £1390 per quitter, by looking up the Cessation in Pregnancy Incentives Trial (CIPT) incremental cost (£157) and incremental 6-month quit outcome (0.14). The life-time model resulted in an incremental cost of £17 [95% confidence interval (CI) = -£93, £107] and a gain of 0.04 QALYs (95% CI = -0.058, 0.145), giving an ICER of £482/QALY ($734/QALY). Probabilistic sensitivity analysis indicates uncertainty in these results, particularly regarding relapse after birth. The expected value of perfect information was £30 million (at a willingness to pay of £30 000/QALY), so given current uncertainty, additional research is potentially worthwhile. Financial incentives for smoking cessation in pregnancy are highly cost-effective, with an incremental cost per quality-adjusted life years of £482, which is well below recommended decision thresholds. © 2015 Society for the Study of Addiction.
ERIC Educational Resources Information Center
Cronen, Stephanie; McQuiggan, Meghan; Isenberg, Emily
2018-01-01
This First Look report provides selected key findings on adults' attainment of nondegree credentials (licenses, certifications, and postsecondary certificates), and their completion of work experience programs such as apprenticeships and internships. This version of the report corrects an error in three tables in the originally released version…
Quad-rotor flight path energy optimization
NASA Astrophysics Data System (ADS)
Kemper, Edward
Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to solve. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used; this method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional-integral-derivative (PID) controller tuning technique. Finally, a brute-force look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
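The Ziegler-Nichols tuning step can be sketched as follows: given a measured ultimate gain Ku and oscillation period Tu, the classic rules yield PID gains. This is a textbook sketch, not the thesis code; the gains and time step in the usage are arbitrary.

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols PID tuning from the ultimate gain Ku
    and the sustained-oscillation period Tu."""
    Kp = 0.6 * Ku
    Ki = 2.0 * Kp / Tu     # equivalently, integral time Ti = Tu / 2
    Kd = Kp * Tu / 8.0     # derivative time Td = Tu / 8
    return Kp, Ki, Kd

class PID:
    """Minimal discrete PID controller with rectangular integration."""
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.Kp * err + self.Ki * self.integral + self.Kd * deriv
```

A look-up-table variant of the thesis would select (Kp, Ki, Kd) from a precomputed table indexed by flight condition rather than computing them online.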
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Integrated large view angle hologram system with multi-slm
NASA Astrophysics Data System (ADS)
Yang, ChengWei; Liu, Juan
2017-10-01
Recently, holographic display has attracted much attention for its ability to generate real-time 3D reconstructed images. CGH provides an effective way to produce holograms, and a spatial light modulator (SLM) is used to reconstruct the image. However, the reconstruction system is usually heavy and complex, and the view angle is limited by the pixel size and spatial bandwidth product (SBP) of the SLM. In this paper a lightweight, portable holographic display system is proposed by integrating the optical elements and host computer units, which significantly reduces the space occupied in the horizontal direction. The CGH is produced based on Fresnel diffraction and the point source method. To reduce memory usage and image distortion, we use an optimized accurate compressed look-up table (AC-LUT) method to compute the hologram. In the system, six SLMs are concatenated into a curved plane, each one loading the phase-only hologram for a different angle of the object; the horizontal view angle of the reconstructed image can be expanded to about 21.8°.
Characterisation of the n-colour printing process using the spot colour overprint model.
Deshpande, Kiran; Green, Phil; Pointer, Michael R
2014-12-29
This paper is aimed at reproducing solid spot colours using n-colour separation. A simplified numerical method, called the spot colour overprint (SCOP) model, was used for characterising the n-colour printing process. This model was originally developed for estimating spot colour overprints, and was extended to serve as a generic forward characterisation model for the n-colour printing process. An inverse printer model based on a look-up table was implemented to obtain the colour separation for the n-colour printing process. Finally, real-world spot colours were reproduced using a 7-colour separation on a lithographic offset printing process. The colours printed with 7 inks were compared against the original spot colours to evaluate the accuracy. The results show good accuracy, with a mean CIEDE2000 value between the target colours and the printed colours of 2.06. The proposed method can be used successfully to reproduce spot colours, which can potentially save significant time and cost in the printing and packaging industry.
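An inverse printer model built on a forward look-up table can be sketched as a nearest-colour search: for a target Lab colour, pick the ink combination whose forward-modelled Lab is closest. The miniature two-ink LUT below is invented, and CIE76 distance stands in for the CIEDE2000 metric the paper actually uses, to keep the sketch short.

```python
def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in Lab space."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

def invert_by_lut(target_lab, lut):
    """Inverse printer model: return the ink combination whose
    forward-modelled colour minimizes the difference to the target.
    `lut` maps ink-value tuples (percent) -> predicted (L, a, b)."""
    return min(lut, key=lambda inks: delta_e76(lut[inks], target_lab))

# Hypothetical miniature forward LUT for a 2-ink process.
LUT = {
    (0, 0):     (95.0,   0.0,  0.0),
    (100, 0):   (55.0,  60.0, 40.0),
    (0, 100):   (60.0, -40.0, 30.0),
    (100, 100): (30.0,  10.0, 20.0),
}
```

A production model would interpolate within a dense multidimensional LUT rather than snap to the nearest node, but the search structure is the same.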
Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner
Yu, Chengyi; Chen, Xiaobo; Xi, Juntong
2017-01-01
A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844
7. INTERIOR, ROBERTS AND SCHAEFER SHAKER TABLE (LEFT), MARYLAND NEW RIVER COAL COMPANY INSTALLED APRON CONVEYOR (RIGHT) USED TO CONVEY COAL TO THE BELKNAP CHLORIDE WASHER, RETURN CHUTE FOR CLEANED COAL (FAR RIGHT), AND COAL STORAGE SILO (BACKGROUND), LOOKING WEST - Nuttallburg Mine Complex, Tipple, North side of New River, 2.7 miles upstream from Fayette Landing, Lookout, Fayette County, WV
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
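The approximately-linear FPN calibration described above can be sketched as a per-pixel least-squares fit mapping each pixel's response onto a reference response, after which correction is pure arithmetic. The response values below are invented; a real sensor would calibrate against uniform illumination at several levels.

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b (pure Python, no NumPy)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def calibrate_fpn(pixel_responses, ref_response):
    """Per-pixel FPN calibration: fit each pixel's response to a
    reference so that corrected outputs agree across the array.
    Returns one (gain, offset) pair per pixel."""
    return [fit_linear(p, ref_response) for p in pixel_responses]

def correct(raw, coeff):
    """Apply a pixel's (gain, offset) correction; in hardware this
    would be fixed-point arithmetic with a user-chosen bit width."""
    a, b = coeff
    return a * raw + b
```

The photometric step in the paper additionally maps corrected responses to light level through a monotonic spline, implemented at run time as a look-up table.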
Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing
NASA Astrophysics Data System (ADS)
McCaffrey, Nathaniel J.; Pantuso, Francis P.
1998-03-01
A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to qualify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm × 6.4 cm × 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
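A minimal histogram-based LUT enhancer in the spirit described: build a LUT from a (possibly subsampled) frame, then stream pixels through it. Plain histogram equalization is assumed here; the actual Sarnoff algorithm is not specified in the abstract.

```python
def build_equalization_lut(image, levels=256):
    """Build a histogram-equalization look-up table: the cumulative
    histogram scaled to the full output range. `image` is a flat
    sequence of integer pixel values in [0, levels)."""
    hist = [0] * levels
    for v in image:
        hist[v] += 1
    total = len(image)
    lut, cum = [0] * levels, 0
    for i in range(levels):
        cum += hist[i]
        lut[i] = round((levels - 1) * cum / total)
    return lut

def enhance(image, lut):
    """Stream pixels through the LUT. In the described hardware this
    happens in real time while the next LUT is computed from a
    subsampled frame and swapped in during blanking."""
    return [lut[v] for v in image]
```

Computing the LUT from a subsampled frame, as the abstract notes, cuts the per-frame division work without changing this structure.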
5. Credit BG. This interior view shows the weigh room, ...
5. Credit BG. This interior view shows the weigh room, looking west (240°): Electric lighting and scale read-outs (boxes with circular windows on the wall) are fitted with explosion-proof enclosures; these enclosures prevent malfunctioning electrical parts from sparking and starting fires or explosions. One marble table and scale have been removed at the extreme left of the view. Two remaining scales handle small and large quantities of propellants and additives. Marble tables do not absorb chemicals or conduct electricity; their mass also prevents vibration from upsetting the scales. The floor has an electrically conductive coating to dissipate static electric charges, thus preventing sparks which might ignite propellants. - Jet Propulsion Laboratory Edwards Facility, Weigh & Control Building, Edwards Air Force Base, Boron, Kern County, CA
AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System
NASA Astrophysics Data System (ADS)
Wang, R.; Harris, C.; Wicenec, A.
2016-07-01
In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS by implementing new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS Storage Manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.
A Linear Programming Application to Aircrew Scheduling.
1980-06-06
Since 1968, our total force -- Air National Guard (ANG), Air Force Reserve (AFRES), and Active duty -- aircraft inventory has dropped from over 15,000... [garbled scan: fragments of Table 2, "Active Duty A-7D Training Program," listing day/night sortie requirements (e.g., WD/SAT, Maverick) at training Levels A, B, and C] ...Levels A, B, and C are listed in Table 5. There are additional constraints not apparent from looking at the table. First, night events listed in Table 5 are only...
ERIC Educational Resources Information Center
National Center for Education Statistics, 2014
2014-01-01
Between 1995-96 and 2011-12, the number of undergraduates attending postsecondary institutions in the United States increased from nearly 17 million to 23 million. The web tables presented in this report provide a comprehensive look over a 16-year period at the trends in how undergraduates enrolled in U.S. postsecondary institutions finance their…
Heuristic Modeling for TRMM Lifetime Predictions
NASA Technical Reports Server (NTRS)
Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.
1996-01-01
Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measuring Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
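The look-up-table-plus-engine-model idea can be sketched as below. The table values, delta-V per maneuver, and engine parameters are placeholders, not TRMM data: the point is the structure, a 2-D table look-up for maneuver frequency followed by a simple rocket-equation fuel estimate.

```python
import math

# Hypothetical look-up table: maneuvers per month indexed by
# ballistic coefficient (kg/m^2) and 10.7 cm solar flux index.
BC_AXIS   = [50.0, 100.0, 200.0]
FLUX_AXIS = [70.0, 150.0, 230.0]
MANEUVERS_PER_MONTH = [
    [8.0, 14.0, 22.0],   # BC = 50
    [4.0,  7.0, 11.0],   # BC = 100
    [2.0,  3.5,  5.5],   # BC = 200
]

def nearest_index(axis, value):
    """Index of the axis entry closest to `value` (nearest-node lookup)."""
    return min(range(len(axis)), key=lambda i: abs(axis[i] - value))

def monthly_fuel(bc, flux, dv_per_maneuver, mass, isp, g0=9.80665):
    """Fuel per month: look up maneuver frequency, then apply the
    rocket equation as a simple engine model for each burn."""
    n = MANEUVERS_PER_MONTH[nearest_index(BC_AXIS, bc)][nearest_index(FLUX_AXIS, flux)]
    fuel_per_burn = mass * (1.0 - math.exp(-dv_per_maneuver / (isp * g0)))
    return n * fuel_per_burn
```

Summing this over months, with flux drawn from a prediction series, gives the kind of lifetime estimate a spreadsheet implements cell by cell.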
NASA Astrophysics Data System (ADS)
Ding, Y. H.; Hu, S. X.
2017-10-01
Beryllium has been considered a superior ablator material for inertial confinement fusion target designs. Based on density-functional-theory calculations, we have established a wide-range beryllium equation-of-state (EOS) table covering densities ρ = 0.001 to 500 g/cm³ and temperatures T = 2000 to 10^8 K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME2023) than the average-atom INFERNO model and the Purgatorio model. For the principal Hugoniot, our FPEOS prediction shows 10% stiffer behavior than the last two models at maximum compression. Comparisons between FPEOS and SESAME for off-Hugoniot conditions show that both the pressure and internal energy differences are within 20% between the two EOS tables. By implementing the FPEOS table into the 1-D radiation-hydrodynamics code LILAC, we studied the EOS effects on beryllium target-shell implosions. The FPEOS simulation predicts up to a 15% higher neutron yield compared to the simulation using the SESAME2023 EOS table. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
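Tabular EOS evaluation of the kind a hydrocode performs can be sketched with bilinear interpolation on a (density, temperature) grid. Real tables use far denser grids, logarithmic axes, and (for SBTL-style methods) splines; the 2x2 grid and values below are made up purely to show the mechanics.

```python
import bisect

def bilinear(x_axis, y_axis, table, x, y):
    """Bilinear interpolation into a tabulated function:
    table[i][j] holds the property at (x_axis[i], y_axis[j]).
    Queries are clamped to the table's cells."""
    i = max(1, min(bisect.bisect_right(x_axis, x), len(x_axis) - 1))
    j = max(1, min(bisect.bisect_right(y_axis, y), len(y_axis) - 1))
    tx = (x - x_axis[i - 1]) / (x_axis[i] - x_axis[i - 1])
    ty = (y - y_axis[j - 1]) / (y_axis[j] - y_axis[j - 1])
    f00, f10 = table[i - 1][j - 1], table[i][j - 1]
    f01, f11 = table[i - 1][j], table[i][j]
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)
```

Inside an implosion simulation this look-up runs once per cell per step, which is why table structure and interpolation order matter for both speed and thermodynamic consistency.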
The Terminal Interface Message Processor Program.
1973-11-01
...sets the table entry for this device to one of CONECO, CONVT, CONEEE, CONESC, IBMEEE, IBMESC, IBMECO, IBMCON, BINECO, BINCON, or HUNT... [garbled scan: fragments of the routine listings (e.g., CONEEE calls ECHO to echo the character; CONESC masks...; EOM handling sets up a counter to make the buffer look full) and of the report's symbol index (CCHAR, CLOCK, CLOCKA, CONEEE, CONESC with section numbers)]
Aerodynamic Characteristics of SC1095 and SC1094 R8 Airfoils
NASA Technical Reports Server (NTRS)
Bousman, William G.
2003-01-01
Two airfoils are used on the main rotor blade of the UH-60A helicopter, the SC1095 and the SC1094 R8. Measurements of the section lift, drag, and pitching moment have been obtained in ten wind tunnel tests for the SC1095 airfoil, and in five of these tests, measurements have also been obtained for the SC1094 R8. The ten wind tunnel tests are characterized and described in the present study. A number of fundamental parameters measured in these tests are compared and an assessment is made of the adequacy of the test data for use in look-up tables required by lifting-line calculation methods.
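Lifting-line codes consume such measurements as interpolated look-up tables of the section coefficients. A one-dimensional Cl(α) sketch with invented values (a real table would also be indexed by Mach number, giving a 2-D look-up per airfoil):

```python
import bisect

# Hypothetical lift-coefficient table for one airfoil:
# Cl versus angle of attack (degrees) at a fixed Mach number.
ALPHA = [-4.0, 0.0, 4.0, 8.0, 12.0]
CL    = [-0.2, 0.2, 0.65, 1.05, 1.3]

def cl_lookup(alpha):
    """Linear interpolation into the (alpha, Cl) table, clamping at
    the table ends rather than extrapolating."""
    if alpha <= ALPHA[0]:
        return CL[0]
    if alpha >= ALPHA[-1]:
        return CL[-1]
    i = bisect.bisect_right(ALPHA, alpha)
    t = (alpha - ALPHA[i - 1]) / (ALPHA[i] - ALPHA[i - 1])
    return CL[i - 1] + t * (CL[i] - CL[i - 1])
```

Drag and pitching-moment tables are handled identically, which is why the adequacy assessment in the paper focuses on how well the wind tunnel data cover the (α, Mach) domain.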
An Improved Method for Real-Time 3D Construction of DTM
NASA Astrophysics Data System (ADS)
Wei, Yi
This paper discusses the real-time optimal construction of DTM through two measures. One is to improve the coordinate transformation of the discrete points acquired from lidar: after processing 10,000 data points, the direct formula calculation for the transformation costs 0.810 s, while the table look-up method costs 0.188 s, indicating that the latter is superior to the former. The other is to adjust the density of the point cloud acquired from lidar: a suitable proportion of the data points is used for 3D construction to meet different needs for 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.
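The table look-up speed-up for coordinate transformation can be illustrated with a precomputed sine table replacing repeated trig evaluation for each lidar point. This is an assumed illustration of the technique, not the paper's actual tables; the 0.1-degree resolution is arbitrary.

```python
import math

# Pre-computed sine table at 0.1-degree resolution, covering 0-360
# degrees. Built once; each point transform then costs one indexed
# read instead of a trig call.
STEP_DEG = 0.1
SIN_TABLE = [math.sin(math.radians(i * STEP_DEG)) for i in range(3601)]

def fast_sin(angle_deg):
    """Nearest-entry table look-up for sin(angle_deg)."""
    idx = int(round((angle_deg % 360.0) / STEP_DEG)) % 3600
    return SIN_TABLE[idx]
```

A cosine table (or the identity cos θ = sin(θ + 90°)) completes a 2-D rotation; interpolating between adjacent entries would trade a little speed for accuracy beyond the table's quantization.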
NASA Astrophysics Data System (ADS)
Liang, S.; Wang, K.; Wang, D.; Townshend, J.; Running, S.; Tsay, S.
2008-05-01
Incident photosynthetically active radiation (PAR) is a key variable required by almost all terrestrial ecosystem models. Many radiation-efficiency models linearly relate canopy productivity to the absorbed PAR. Unfortunately, the spatial and temporal resolutions of current incident PAR products, whether estimated from remotely sensed data or calculated by radiation models, are not sufficient for carbon cycle modeling and various applications. In this study, we aim to develop incident PAR products at one-kilometer scale from multiple satellite sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Geostationary Operational Environmental Satellite (GOES) sensor. We first developed a look-up table approach to estimate an instantaneous incident PAR product from MODIS (Liang et al., 2006). The temporal observations of each pixel are used to estimate land surface reflectance, and look-up tables for both aerosol and cloud are searched, based on the top-of-atmosphere reflectance and surface reflectance, to determine incident PAR. The incident PAR product includes both the direct and diffuse components. The calculation of daily integrated PAR using two different methods has also been developed (Wang et al., 2008a). A similar algorithm has been further extended to GOES data (Wang et al., 2008b; Zheng et al., 2008). Extensive validation activities were conducted to evaluate the algorithms and products using ground measurements from FLUXNET and other networks; they were also compared with other satellite products. The results indicate that our approaches can produce a reasonable PAR product at 1 km resolution. We have generated 1 km incident PAR products over North America for several years, which are freely available to the science community. References: Liang, S., T. Zheng, R. Liu, H. Fang, S. C. Tsay, S. Running (2006), Estimation of incident photosynthetically active radiation from MODIS data, Journal of Geophysical Research: Atmospheres, 111, D15208, doi:10.1029/2005JD006730. Wang, D., S. Liang, and T. Zheng (2008a), Integrated daily PAR from MODIS, International Journal of Remote Sensing, revised. Wang, K., S. Liang, T. Zheng and D. Wang (2008b), Simultaneous estimation of surface photosynthetically active radiation and albedo from GOES, Remote Sensing of Environment, revised. Zheng, T., S. Liang, K. Wang (2008), Estimation of incident PAR from GOES imagery, Journal of Applied Meteorology and Climatology, in press.
Carbon cycling responses to a water table drawdown and decadal vegetation changes in a bog
NASA Astrophysics Data System (ADS)
Talbot, J.; Roulet, N. T.
2009-12-01
The quantity of carbon stored in peat depends on the imbalance between production and decomposition of organic matter. This imbalance is mainly controlled by the wetness of the peatland, usually described by the water table depth. However, long-term processes resulting from hydrological changes, such as vegetation succession, also play a major role in the biogeochemistry of peatlands. Previous studies have looked at the impact of a water table lowering on carbon fluxes in different types of peatlands. However, most of these studies were conducted within a time frame that did not allow the examination of vegetation changes due to the water table lowering. We conducted a study along a drainage gradient resulting from the digging of a drainage ditch 85 years ago in a portion of the Mer Bleue bog, located near Ottawa, Canada. According to water table reconstructions based on testate amoeba, the drainage dropped the water table by approximately 18 cm. On the upslope side of the ditch, the water table partly recovered and the vegetation changed only marginally. However, on the downslope side of the ditch, the water table stayed persistently lower and trees established (Larix and Betula). The importance of Sphagnum decreased with a lower water table, and evergreen shrubs were replaced by deciduous shrubs. The water table drop and subsequent vegetation changes had combined and individual effects on the carbon functioning of the peatland. Methane fluxes decreased because of the water table lowering, but were not affected by vegetation changes, whereas respiration and net ecosystem productivity were affected by both. The carbon storage of the system increased because of an increase in plant biomass, but the long-term carbon storage as peat decreased. The inclusion of the feedback effect that vegetation has on the carbon functioning of a peatland when a disturbance occurs is crucial to simulate the long-term carbon balance of this ecosystem.
NASA Astrophysics Data System (ADS)
Liu, Jing; Li, Qiang
2014-03-01
Fast localization of organs is a key step in computer-aided detection of lesions and in image-guided radiation therapy. We developed a context-driven Generalized Hough Transform (GHT) for robust localization of organs of interest (OOIs) in a CT volume. Conventional GHT locates the center of an organ by looking up center locations of pre-learned organs with "matching" edges. It often suffers from mislocalization because "similar" edges in the vicinity may attract the pre-learned organs towards wrong places. The proposed method not only uses information from the organ's own shape but also takes advantage of nearby "similar" edge structures. First, multiple GHT co-existing look-up tables (cLUTs) were constructed from a set of training shapes of different organs. Each cLUT represented the spatial relationship between the center of the OOI and the shape of a co-existing organ. Second, the OOI center in a test image was determined using GHT with each cLUT separately. Third, the final localization of the OOI was based on a weighted combination of the centers obtained in the second stage. The training set consisted of 10 CT volumes with manually segmented OOIs including liver, spleen and kidneys. The method was tested on a set of 25 abdominal CT scans. Context-driven GHT correctly located all OOIs in the test images and gave localization errors of 19.5±9.0, 12.8±7.3, 9.4±4.6 and 8.6±4.1 mm for liver, spleen, left and right kidney, respectively. Conventional GHT mislocated 8 out of 100 organs and its localization errors were 26.0±32.6, 14.1±10.6, 30.1±42.6 and 23.6±39.7 mm for liver, spleen, left and right kidney, respectively.
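The classic GHT machinery underlying the method can be sketched with an R-table keyed by quantized edge orientation: training stores offsets from each edge point to the shape center, and detection lets every test edge point vote for candidate centers. The edge points and integer orientations below are toy data, and the real method builds one such table per co-existing organ.

```python
from collections import defaultdict

def build_r_table(edges, center):
    """GHT look-up table (R-table): for each training edge point
    (x, y, theta), store the offset to the shape center, indexed by
    the quantized edge orientation theta."""
    table = defaultdict(list)
    cx, cy = center
    for x, y, theta in edges:
        table[theta].append((cx - x, cy - y))
    return table

def vote(edges, table):
    """Accumulate center votes from test edge points; the accumulator
    peak is the detected center."""
    acc = defaultdict(int)
    for x, y, theta in edges:
        for dx, dy in table.get(theta, ()):
            acc[(x + dx, y + dy)] += 1
    return max(acc, key=acc.get)
```

The paper's context-driven extension runs this vote once per co-existing look-up table and then combines the resulting centers with weights, rather than relying on the organ's own edges alone.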
NASA Astrophysics Data System (ADS)
Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose
2018-06-01
An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
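The look-up-table inversion strategy in category (3) can be sketched as a minimum-cost search over pre-simulated RTM spectra: simulate reflectance for many parameter combinations offline, then pick the combination whose spectrum best matches an observation. The parameter tuples and two-band reflectances below are hypothetical; operational LUTs hold thousands of entries and full spectra.

```python
def rtm_invert_lut(observed, lut):
    """Look-up-table inversion of a radiative transfer model: return
    the parameter set whose simulated spectrum minimizes the sum of
    squared differences against the observed spectrum.
    `lut` maps parameter tuples -> simulated reflectance lists."""
    def cost(sim):
        return sum((o - s) ** 2 for o, s in zip(observed, sim))
    return min(lut, key=lambda params: cost(lut[params]))

# Hypothetical LUT: (chlorophyll, LAI) -> simulated two-band reflectance.
LUT = {
    (20, 1.0): [0.10, 0.30],
    (40, 2.0): [0.05, 0.45],
    (60, 3.0): [0.03, 0.50],
}
```

Cost functions other than least squares, regularization against ill-posedness, and averaging over the best-matching entries are the usual refinements discussed in the LUT-inversion literature the review surveys.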
Six-Position, Frontal View Photography in Blepharoplasty: A Simple Method.
Zhang, Cheng; Guo, Xiaoshuang; Han, Xuefeng; Tian, Yi; Jin, Xiaolei
2018-02-26
Photography plays a pivotal role in patient education, photo-documentation, preoperative planning and postsurgical evaluation in plastic surgery. It has long served as a bridge facilitating communication not only between patients and doctors, but also among plastic surgeons from different countries. Although several basic principles and photographic methods have been proposed, there is no internationally accepted photographic protocol that provides both static and dynamic information in blepharoplasty. In this article, we introduce a novel six-position, frontal view photography for thorough assessment in blepharoplasty. From October 2013 to January 2017, 1068 patients who underwent blepharoplasty were enrolled in our clinical research. All patients received six-position, frontal view photography. Pictures were taken of the patients looking up, looking down, squeezing, smiling, looking ahead and with closed eyes. Conventionally, frontal view photography contains only the last two positions. Both the novel six-position photographs and the conventional two-position photographs were then used to appraise postsurgical outcomes. Compared with conventional two-position, frontal view photography, six-position, frontal view photography provides more detailed, thorough information about the eyes. It is of clinical significance in indicating underlying adhesion of skin/muscle/fat according to an individual's features and in assessing preoperative and postoperative dynamic changes and aesthetic outcomes. Six-position, frontal view photography is technically uncomplicated while exhibiting static, dynamic and detailed information about the eyes. This innovative method is favorable in eye assessment, especially for revision blepharoplasty. We suggest using six-position, frontal view photography to obtain comprehensive photographs.
Electrostatic polymer-based microdeformable mirror for adaptive optics
NASA Astrophysics Data System (ADS)
Zamkotsian, Frederic; Conedera, Veronique; Granier, Hugues; Liotard, Arnaud; Lanzoni, Patrick; Salvagnac, Ludovic; Fabre, Norbert; Camon, Henri
2007-02-01
Future adaptive optics (AO) systems require deformable mirrors with very challenging parameters, up to 250,000 actuators and inter-actuator spacing around 500 μm. MOEMS-based devices are promising for the development of a complete generation of new deformable mirrors. Our micro-deformable mirror (MDM) is based on an array of electrostatic actuators with attachments to a continuous mirror on top. The originality of our approach lies in the elaboration of layers made of polymer materials. Mirror layers and active actuators have been demonstrated. Based on the design of this actuator and our polymer process, a complete polymer MDM has been realized using two process flows: the first involves exclusively polymer materials, while the second uses SU8 polymer for the structural layers and SiO2 and sol-gel for the sacrificial layers. The latter shows a better capability to produce completely released structures. The electrostatic force provides a non-linear actuation, whereas AO systems are based on linear matrix operations. We have therefore developed dedicated 14-bit electronics to "linearize" the actuation, using a calibration and a sixth-order polynomial fitting strategy. The response is nearly perfect over our 3×3 MDM prototype, with a standard deviation of 3.5 nm; the influence function of the central actuator has been measured. A first evaluation of the cross non-linearities has also been performed on an OKO mirror, and a simple look-up table is sufficient for determining the location of each actuator whatever the locations of the neighboring actuators. Electrostatic MDMs are particularly well suited for open-loop AO applications.
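The calibration-plus-polynomial linearization idea can be sketched as follows; the voltage range, deflection law, and units below are toy stand-ins, not the device's actual response. A polynomial is fitted to calibration data, then inverted numerically into a deflection-to-voltage mapping.

```python
import numpy as np

# Hypothetical calibration: electrostatic deflection grows roughly with V^2.
v_cal = np.linspace(0.0, 10.0, 50)            # drive voltage (V), toy range
d_cal = 5.0 * v_cal**2 + 2.0 * v_cal          # measured deflection (nm), toy law

# Fit a sixth-order polynomial to the calibration curve (the paper's strategy).
coeffs = np.polyfit(v_cal, d_cal, deg=6)

# "Linearize": build an inverse mapping desired deflection -> voltage by
# densely sampling the fitted polynomial and interpolating the inverse
# (valid because the response is monotone over the calibrated range).
v_dense = np.linspace(0.0, 10.0, 2001)
d_dense = np.polyval(coeffs, v_dense)

def voltage_for(deflection):
    """Voltage command that produces the requested deflection."""
    return np.interp(deflection, d_dense, v_dense)

# Requesting the deflection measured at 6 V should return ~6 V.
v = voltage_for(5.0 * 6.0**2 + 2.0 * 6.0)
```

The real electronics quantize this mapping to 14 bits per channel; the sketch keeps everything in floating point for clarity.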
Franz, Delbert D.; Melching, Charles S.
1997-01-01
The Full EQuations UTiLities (FEQUTL) model is a computer program for computation of tables that list the hydraulic characteristics of open channels and control structures as a function of upstream and downstream depths; these tables facilitate the simulation of unsteady flow in a stream system with the Full Equations (FEQ) model. Simulation of unsteady flow requires many iterations for each time period computed. Thus, computation of hydraulic characteristics during the simulations is impractical, and preparation of function tables and application of table look-up procedures facilitate simulation of unsteady flow. Three general types of function tables are computed: one-dimensional tables that relate hydraulic characteristics to upstream flow depth, two-dimensional tables that relate flow through control structures to upstream and downstream flow depth, and three-dimensional tables that relate flow through gated structures to upstream and downstream flow depth and gate setting. For open-channel reaches, six types of one-dimensional function tables contain different combinations of the top width of flow, area, first moment of area with respect to the water surface, conveyance, flux coefficients, and correction coefficients for channel curvilinearity. For hydraulic control structures, one type of one-dimensional function table contains relations between flow and upstream depth, and two types of two-dimensional function tables contain relations among flow and upstream and downstream flow depths. For hydraulic control structures with gates, a three-dimensional function table lists the system of two-dimensional tables that contain the relations among flow and upstream and downstream flow depths that correspond to different gate openings. 
Hydraulic control structures for which function tables containing flow relations are prepared in FEQUTL include expansions, contractions, bridges, culverts, embankments, weirs, closed conduits (circular, rectangular, and pipe-arch shapes), dam failures, floodways, and underflow gates (sluice and tainter gates). The theory for computation of the hydraulic characteristics is presented for open channels and for each hydraulic control structure. For the hydraulic control structures, the theory is developed from the results of experimental tests of flow through the structure for different upstream and downstream flow depths. These tests were done to describe flow hydraulics for a single, steady-flow design condition and, thus, do not provide complete information on flow transitions (for example, between free- and submerged-weir flow) that may result in simulation of unsteady flow. Therefore, new procedures are developed to approximate the hydraulics of flow transitions for culverts, embankments, weirs, and underflow gates.
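The role of a one-dimensional function table can be sketched in a few lines: instead of re-integrating the cross-section geometry at every unsteady-flow iteration, the model interpolates a precomputed depth-versus-characteristic table. The depths and conveyance values below are invented for illustration.

```python
import numpy as np

# Hypothetical one-dimensional function table: conveyance K tabulated
# against flow depth y, as FEQUTL would precompute for a channel reach.
depth_tab  = np.array([0.0, 0.5, 1.0, 2.0, 3.0])        # depth (m)
convey_tab = np.array([0.0, 12.0, 35.0, 110.0, 220.0])  # conveyance (m^3/s)

def conveyance(depth):
    """Table look-up with linear interpolation, replacing a costly
    re-computation of cross-section hydraulics at every iteration."""
    return np.interp(depth, depth_tab, convey_tab)

k = conveyance(1.5)   # halfway between the 1.0 m and 2.0 m entries
```

Two- and three-dimensional tables extend the same idea with bilinear interpolation over upstream/downstream depth and gate setting.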
Astronaut Jack Lousma looks at map of Earth in ward room of Skylab cluster
1973-08-01
S73-34193 (1 Aug. 1973) --- Astronaut Jack R. Lousma, Skylab 3 pilot, looks at a map of Earth at the food table in the ward room of the Orbital Workshop (OWS), in this photographic reproduction taken from a television transmission made by a color TV camera aboard the Skylab space station cluster in Earth orbit. Photo credit: NASA
26. July 1974. BENCH SHOP, VIEW LOOKING SOUTH, SHOWING THE ...
26. July 1974. BENCH SHOP, VIEW LOOKING SOUTH, SHOWING THE BORING MACHINE PURCHASED IN 1885. THE BIT MAY BE LOWERED BY THE HANGING LINKAGE OR THE TABLE RAISED BY THE FOOT PEDAL. NOTICE THE CHASE FOR THE BELTS, BUILT NO LESS CAREFULLY THAN THE MACHINE ITSELF. - Gruber Wagon Works, Pennsylvania Route 183 & State Hill Road at Red Bridge Park, Bernville, Berks County, PA
Process and representation in graphical displays
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert; Rudisill, Marianne
1990-01-01
How people comprehend graphics is examined. Graphical comprehension involves the cognitive representation of information from a graphic display and the processing strategies that people apply to answer questions about graphics. Research on representation has examined both the features present in a graphic display and the cognitive representation of the graphic. The key features include the physical components of a graph, the relation between the figure and its axes, and the information in the graph. Tests of people's memory for graphs indicate that both the physical and informational aspects of a graph are important in the cognitive representation of a graph. However, the physical (or perceptual) features overshadow the information to a large degree. Processing strategies also involve a perception-information distinction. In order to answer simple questions (e.g., determining the value of a variable, comparing several variables, and determining the mean of a set of variables), people switch between two information processing strategies: (1) an arithmetic, look-up strategy in which they use a graph much like a table, looking up values and performing arithmetic calculations; and (2) a perceptual strategy in which they use the spatial characteristics of the graph to make comparisons and estimations. The user's choice of strategies depends on the task and the characteristics of the graph. A theory of graphic comprehension is presented.
NASA Astrophysics Data System (ADS)
Zhong, J.; Wang, W.; Yue, X.; Burns, A. G.; Dou, X.; Lei, J.
2015-12-01
Up-looking total electron content (TEC) measurements from multiple low Earth orbit (LEO) satellites have been utilized to study the topside ionospheric response to the 17 March 2015 great storm. The combined up-looking TEC observations from these LEO satellites are valuable in addressing the local time and altitudinal dependences of the topside ionospheric response to geomagnetic storms from a global perspective, especially over the southern hemisphere and oceans. In the evening sector, the up-looking TEC showed a pronounced, long-duration positive storm effect during the main phase and a long-duration negative storm effect during the recovery phase of this storm. The increases of the topside TEC during the main phase were symmetric with respect to the magnetic equator, which was probably associated with penetration electric fields. Additionally, the up-looking TEC from different orbital altitudes suggested that the negative storm effect at higher altitudes was stronger in the evening sector. In the morning sector, the up-looking TEC also showed increases at low and middle latitudes during the storm main phase. Obvious TEC enhancement can also be seen over the Pacific Ocean in the topside ionosphere during the storm recovery phase. These results imply that the topside ionospheric responses significantly depend on local time. Thus, the LEO-based up-looking TEC provides an important database to study the possible physical mechanisms of the topside ionospheric response to storms.
Parallel processor for real-time structural control
NASA Astrophysics Data System (ADS)
Tise, Bert L.
1993-07-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
Self-addressed diffractive lens schemes for the characterization of LCoS displays
NASA Astrophysics Data System (ADS)
Zhang, Haolin; Lizana, Angel; Iemmi, Claudio; Monroy-Ramírez, Freddy A.; Marquez, Andrés.; Moreno, Ignacio; Campos, Juan
2018-02-01
We propose a self-calibration method that calibrates both the phase-voltage look-up table and the screen phase distribution of Liquid Crystal on Silicon (LCoS) displays by implementing different lens configurations on the studied device within the same optical scheme. On the one hand, the phase-voltage relation is determined from interferometric measurements, which are obtained by addressing split-lens phase distributions on the LCoS display. On the other hand, the surface profile is retrieved by self-addressing a diffractive micro-lens array to the LCoS display, so that we configure a Shack-Hartmann wavefront sensor that self-determines the screen spatial variations. Moreover, both the phase-voltage response and the surface phase inhomogeneity of the LCoS are measured within the same experimental set-up, without the need for further adjustments. Experimental results prove the usefulness of the above-mentioned technique for LCoS display characterization.
Synchronization trigger control system for flow visualization
NASA Technical Reports Server (NTRS)
Chun, K. S.
1987-01-01
The use of cinematography or holographic interferometry for dynamic flow visualization in an internal combustion engine requires a control device that globally synchronizes camera and light source timing at a predefined shaft-encoder angle. The device is capable of 0.35 deg resolution for rotational speeds of up to 73 240 rpm. This was achieved by implementing a look-up table (LUT) addressed by the shaft-encoder signal, together with appropriate latches. The digital signal processing technique developed achieves high-speed trigger-angle detection within 25 ns by using direct parallel bit comparison of the shaft-encoder digital code with a simulated angle reference code, instead of angle-value comparison, which involves more complicated computation steps. In order to establish synchronization to an AC reference signal whose magnitude varies with the rotating speed, a dynamic peak follow-up synchronization technique has been devised. This method scrutinizes the reference signal and provides the correct timing within 40 ns. Two application examples are described.
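The direct code-comparison idea can be sketched in a few lines; the encoder resolution and trigger angle below are illustrative, not the device's actual parameters. The point is that the raw encoder code is matched bit-for-bit against a precomputed reference code, avoiding any angle arithmetic in the trigger path.

```python
# Toy sketch: rather than converting the shaft-encoder code to an angle and
# comparing numeric values, compare the raw digital code directly against a
# precomputed reference code for the target angle.
ENCODER_BITS = 10                       # hypothetical resolution
COUNTS = 1 << ENCODER_BITS

def angle_to_code(angle_deg):
    """Reference code that the hardware comparator would be loaded with."""
    return int(angle_deg / 360.0 * COUNTS) & (COUNTS - 1)

def trigger(encoder_code, reference_code):
    """Fire when the parallel bit comparison finds an exact match."""
    return encoder_code == reference_code

ref = angle_to_code(90.0)               # trigger at 90 degrees
fired = [trigger(c, ref) for c in range(COUNTS)].count(True)
```

In hardware this comparison is a bank of XOR gates and a wide AND, which is why it resolves in tens of nanoseconds.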
aMCfast: automation of fast NLO computations for PDF fits
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Frederix, Rikkert; Frixione, Stefano; Rojo, Juan; Sutton, Mark
2014-08-01
We present the interface between MadGraph5_aMC@NLO, a self-contained program that calculates cross sections up to next-to-leading order accuracy in an automated manner, and APPLgrid, a code that parametrises such cross sections in the form of look-up tables which can be used for the fast computations needed in the context of PDF fits. The main characteristic of this interface, which we dub aMCfast, is that it is fully automated as well, which removes the need to extract manually the process-specific information for additional physics processes, as is the case with other matrix-element calculators, and renders it straightforward to include any new process in the PDF fits. We demonstrate this by studying several cases which are easily measured at the LHC, have a good constraining power on PDFs, and some of which were previously unavailable in the form of a fast interface.
Development of esMOCA RULA, Motion Capture Instrumentation for RULA Assessment
NASA Astrophysics Data System (ADS)
Akhmad, S.; Arendra, A.
2018-01-01
The purpose of this research is to build motion capture instrumentation using fused accelerometer and gyroscope sensors to assist in RULA assessment. Sensor orientation data are processed at every sensor node by a digital motion processor. Nine sensors are placed on the upper limb of the operator subject. A kinematic model is developed with SimMechanics in Simulink. This kinematic model receives streaming data from the sensors via a wireless sensor network. The output of the kinematic model is the relative angle between upper-limb members, visualized on the monitor. This angular information is compared with the look-up table of the RULA worksheet to give the RULA score. The assessment results of the instrument are compared with those of RULA assessors. In conclusion, there is no significant difference between assessment by the instrument and assessment by a human assessor.
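The worksheet look-up step can be illustrated with a simplified table. The ranges below follow the commonly published RULA upper-arm flexion scoring, but this is a sketch, not the instrument's actual implementation, and it omits the adjustments (shoulder raise, abduction, arm support) that the full worksheet applies.

```python
# Simplified RULA-style look-up: map a measured upper-arm flexion angle
# (degrees, positive = forward flexion) to a posture score.
UPPER_ARM_TABLE = [
    ((-20.0, 20.0), 1),     # near neutral
    ((20.0, 45.0), 2),
    ((45.0, 90.0), 3),
    ((90.0, 180.0), 4),     # arm raised above shoulder level
]

def upper_arm_score(flexion_deg):
    """Return the first matching score for the measured angle."""
    for (lo, hi), score in UPPER_ARM_TABLE:
        if lo <= flexion_deg <= hi:
            return score
    raise ValueError("angle outside table range")

s = upper_arm_score(60.0)   # flexion between 45 and 90 degrees
```

The instrument described above does the analogous comparison for each of the nine tracked segments and then combines the per-segment scores per the worksheet.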
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on the assessment of accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function that remains unchanged in the presence of heat release. We show that this assumption is not accurate.
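The presumed-PDF averaging that such look-up tables rely on can be sketched for the β distribution case. This is a generic illustration, not the authors' code: the shape parameters of the β PDF are recovered from the first two moments of the mixture fraction, and the laminar source term is averaged over that PDF.

```python
import numpy as np
from scipy.stats import beta as beta_dist
from scipy.integrate import trapezoid

def beta_params(z_mean, z_var):
    """Shape parameters of the presumed beta PDF from the first two moments.
    Requires 0 < z_var < z_mean * (1 - z_mean)."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    return z_mean * gamma, (1.0 - z_mean) * gamma

def averaged_source(omega, z_mean, z_var, n=2001):
    """Average a laminar source term omega(Z) over the presumed beta PDF."""
    a, b = beta_params(z_mean, z_var)
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    pdf = beta_dist.pdf(z, a, b)
    return trapezoid(omega(z) * pdf, z)

# Sanity check with a linear source term: the average must equal omega(z_mean).
avg = averaged_source(lambda z: 2.0 * z, z_mean=0.3, z_var=0.01)
```

For a strongly nonlinear Arrhenius-type source term the averaged value differs markedly from evaluating the source at the mean, which is exactly why the presumed-PDF shape matters.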
Geades, Nicolas; Hunt, Benjamin A E; Shah, Simon M; Peters, Andrew; Mougin, Olivier E; Gowland, Penny A
2017-08-01
To develop a method that fits a multipool model to z-spectra acquired from non-steady-state sequences, taking into account the effects of variations in T1 or B1 amplitude, and to report results estimating the parameters of a four-pool model describing the z-spectrum of the healthy brain. We compared measured spectra with a look-up table (LUT) of possible spectra and investigated the potential advantages of simultaneously considering spectra acquired at different saturation powers (coupled spectra) to provide sensitivity to a range of different physicochemical phenomena. The LUT method provided reproducible results in healthy controls. The average macromolecular pool sizes measured in white matter (WM) and gray matter (GM) of 10 healthy volunteers were 8.9% ± 0.3% (intersubject standard deviation) and 4.4% ± 0.4%, respectively; the average nuclear Overhauser effect pool sizes in WM and GM were 5% ± 0.1% and 3% ± 0.1%, respectively; and the average amide proton transfer pool sizes in WM and GM were 0.21% ± 0.03% and 0.20% ± 0.02%, respectively. The proposed method demonstrated increased robustness when compared with existing methods (such as Lorentzian fitting and asymmetry analysis) while yielding fully quantitative results. The method can be adjusted to measure other parameters relevant to the z-spectrum. Magn Reson Med 78:645-655, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Gronewold, Andrew D; Sobsey, Mark D; McMahan, Lanakila
2017-06-01
For the past several years, the compartment bag test (CBT) has been employed in water quality monitoring and public health protection around the world. To date, however, the statistical basis for the design and recommended procedures for enumerating fecal indicator bacteria (FIB) concentrations from CBT results have not been formally documented. Here, we provide that documentation following protocols for communicating the evolution of similar water quality testing procedures. We begin with an overview of the statistical theory behind the CBT, followed by a description of how that theory was applied to determine an optimal CBT design. We then provide recommendations for interpreting CBT results, including procedures for estimating quantiles of the FIB concentration probability distribution, and the confidence of compliance with recognized water quality guidelines. We synthesize these values in custom user-oriented 'look-up' tables similar to those developed for other FIB water quality testing methods. Modified versions of our tables are currently distributed commercially as part of the CBT testing kit. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Wei-Kuo; Takayabu, Yukari N.; Lang, Steve
Yanai et al. (1973) utilized the meteorological data collected from a sounding network to present a pioneering work on thermodynamic budgets, which are referred to as the apparent heat source (Q1) and apparent moisture sink (Q2). Latent heating (LH) is one of the most dominant terms in Q1. Yanai's paper motivated the development of satellite-based LH algorithms and provided a theoretical background for imposing large-scale advective forcing into cloud-resolving models (CRMs). These CRM-simulated LH and Q1 data have been used to generate the look-up tables in Tropical Rainfall Measuring Mission (TRMM) LH algorithms. A set of algorithms developed for retrieving LH profiles from TRMM-based rainfall profiles is described and evaluated, including details concerning their intrinsic space-time resolutions. Included in the paper are results from a variety of validation analyses that define the uncertainty of the LH profile estimates. Also, examples of how TRMM-retrieved LH profiles have been used to understand the lifecycle of the MJO and improve the predictions of global weather and climate models, as well as comparisons with large-scale analyses, are provided. Areas for further improvement of the TRMM products are discussed.
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc
2015-09-01
Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort in developing models to understand the role of chemical reactions in the radiation effects on cells and tissues, and may eventually be included in event-based models of space radiation risks. Moreover, as many reactions of this type occur in biological systems, this algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.
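The trade-off between table look-up and direct sampling can be illustrated generically. The sketch below samples an exponential distribution two ways, once through a precomputed inverse-CDF table and once by analytic inversion; it is not the authors' GFDE algorithm, only a stand-in for the idea that a closed-form sampler needs no stored table.

```python
import numpy as np

rng = np.random.default_rng(42)

# Table-based sampler: precompute the inverse CDF of an exponential
# distribution on a grid and interpolate (the conventional look-up approach).
u_grid = np.linspace(1e-6, 1.0 - 1e-6, 4096)
x_grid = -np.log(1.0 - u_grid)            # exact inverse CDF, rate = 1

def sample_lut(n):
    return np.interp(rng.random(n), u_grid, x_grid)

# Direct sampler: invert the CDF analytically; no table, no stored memory.
def sample_exact(n):
    return -np.log(1.0 - rng.random(n))

m_lut = sample_lut(200_000).mean()
m_exact = sample_exact(200_000).mean()    # both should be close to 1
```

The table version also truncates the distribution's tail at the last grid point, a systematic error the direct sampler does not have.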
Evolutionary Optimization of Centrifugal Nozzles for Organic Vapours
NASA Astrophysics Data System (ADS)
Persico, Giacomo
2017-03-01
This paper discusses the shape optimization of non-conventional centrifugal turbine nozzles for Organic Rankine Cycle applications. The optimal aerodynamic design is supported by the use of a non-intrusive, gradient-free technique specifically developed for shape optimization of turbomachinery profiles. The method is constructed as a combination of a geometrical parametrization technique based on B-splines, a high-fidelity and experimentally validated computational fluid dynamics solver, and a surrogate-based evolutionary algorithm. The non-ideal gas behaviour featured by the flow of organic fluids in the cascades of interest is introduced via a look-up-table approach, which is rigorously applied throughout the whole optimization process. Two transonic centrifugal nozzles are considered, featuring very different loading and radial extension. The application of a systematic and automatic design method to such a non-conventional configuration highlights the character of centrifugal cascades; the blades require a specific and non-trivial definition of the shape, especially in the rear part, to avoid the onset of shock waves. It is shown that the optimization acts in a similar way for the two cascades, identifying an optimal curvature of the blade that provides both a relevant increase in cascade performance and a reduction in downstream gradients.
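The look-up-table treatment of non-ideal gas properties can be sketched as interpolation on a precomputed property grid. The property function, grid ranges, and values below are toy stand-ins for real fluid tables, which would be generated once from an accurate equation of state and then queried cheaply by the CFD solver.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical look-up table of a non-ideal gas property (here a toy
# compressibility-like quantity) tabulated on a pressure-temperature grid.
p_grid = np.linspace(1e5, 1e6, 10)        # pressure (Pa)
t_grid = np.linspace(300.0, 600.0, 16)    # temperature (K)
P, T = np.meshgrid(p_grid, t_grid, indexing="ij")
prop_tab = 1.0 - 0.05 * (P / 1e6) * (600.0 / T)   # toy property values

# Bilinear interpolation stands in for the solver's per-cell property query.
lookup = RegularGridInterpolator((p_grid, t_grid), prop_tab)

val = lookup([[5e5, 450.0]])[0]           # query inside the grid
exact = 1.0 - 0.05 * 0.5 * (600.0 / 450.0)
```

Production tables trade grid density against memory and accuracy; near the critical point, where properties vary steeply, a much finer (or adaptive) grid is needed.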
GUI Type Fault Diagnostic Program for a Turboshaft Engine Using Fuzzy and Neural Networks
NASA Astrophysics Data System (ADS)
Kong, Changduk; Koo, Youngju
2011-04-01
A helicopter operated in a severe flight environment must have a very reliable propulsion system. On-line condition monitoring and fault detection of the engine can promote reliability and availability of the helicopter propulsion system. A hybrid health monitoring program using fuzzy logic and neural network algorithms is proposed. In this hybrid method, the fuzzy logic readily identifies the faulted components from changes in the engine's measured parameters, and the neural networks accurately quantify the identified faults. In order to use the fault diagnostic system effectively, a GUI (graphical user interface) type program is newly proposed. This program is composed of a real-time monitoring part, an engine condition monitoring part and a fault diagnostic part. The real-time monitoring part can display measured parameters of the studied turboshaft engine such as power turbine inlet temperature, exhaust gas temperature, fuel flow, torque and gas generator speed. The engine condition monitoring part can evaluate the engine condition through comparison between the monitored performance parameters and the base performance parameters analyzed by the base performance analysis program using look-up tables. The fault diagnostic part can identify and quantify single and multiple faults from the monitored parameters using the hybrid method.
Computed Tomography (CT) -- Sinuses
MedlinePlus Videos and Cool Tools
ERIC Educational Resources Information Center
Lehman, Rosemary
2007-01-01
This chapter looks at the development and nature of learning objects, meta-tagging standards and taxonomies, learning object repositories, learning object repository characteristics, and types of learning object repositories, with type examples. (Contains 1 table.)
Changing the surgical dogma in frontal sinus trauma: transnasal endoscopic repair.
Grayson, Jessica W; Jeyarajan, Hari; Illing, Elisa A; Cho, Do-Yeon; Riley, Kristen O; Woodworth, Bradford A
2017-05-01
Management of frontal sinus trauma includes coronal or direct open approaches through skin incisions to either ablate or obliterate the frontal sinus for posterior table fractures and to openly reduce and internally fixate fractured anterior tables. The objective of this prospective case-series study was to evaluate outcomes of frontal sinus anterior and posterior table trauma using endoscopic techniques. Prospective evaluation of patients undergoing surgery for frontal sinus fractures was performed. Data were collected regarding demographics, etiology, technique, operative site, length of posterior table involvement, size of skull base defects, complications, and clinical follow-up. Forty-six patients (average age, 42 years) with frontal sinus fractures were treated using endoscopic techniques from 2008 to 2016. Mean follow-up was 26 (range, 0.5 to 79) months. Patients were treated primarily with Draf IIb frontal sinusotomies. Draf III was used in 8 patients. The average fracture defect (length × width) was 17.1 × 9.1 mm, and the average length involving the posterior table was 13.1 mm. Skull base defects were covered with either nasoseptal flaps or free tissue grafts. One individual required Draf IIb revision, but all sinuses were patent on final examination and all closed reductions of anterior table defects resulted in cosmetically acceptable outcomes. Frontal sinus trauma has traditionally been treated using open approaches. Our findings show that endoscopic management should become part of the management algorithm for frontal sinus trauma, which challenges current surgical dogma regarding mandatory open approaches. © 2017 ARS-AAOA, LLC.
Partition-based acquisition model to speed up navigated beta-probe surface imaging
NASA Astrophysics Data System (ADS)
Monge, Frédéric; Shakir, Dzhoshkun I.; Navab, Nassir; Jannin, Pierre
2016-03-01
Although gross total resection in low-grade glioma surgery leads to a better patient outcome, the in-vivo control of resection borders remains challenging. For this purpose, navigated beta-probe systems combined with 18F-based radiotracers, relying on activity distribution surface estimation, have been proposed to generate reconstructed images. Early studies have outlined the clinical relevance of the intraoperative functional information these systems provide, although the reconstructions suffer from low spatial resolution. To improve reconstruction quality, multiple acquisition models have been proposed. They involve the definition of an attenuation matrix describing the physics of radiation detection, yet they require high computational power for efficient intraoperative use. To address this problem, we propose a new acquisition model called the Partition Model (PM), building on an existing model in which the coefficients of the matrix are taken from a look-up table (LUT). Our model is based upon dividing the LUT into averaged homogeneous values for assigning attenuation coefficients. We validated our model using in vitro datasets in which tumors and peri-tumoral tissues were simulated, comparing our acquisition model with the off-the-shelf LUT and the raw method. The acquisition models outperformed the raw method in terms of tumor contrast (7.97:1 mean T:B) but are difficult to use in real time. Both acquisition models reached the same detection performance as the references (0.8 mean AUC and 0.77 mean NCC), while PM slightly improves the mean tumor contrast to 10.1:1 versus 9.9:1 with the LUT model and, more importantly, reduces the mean computation time by 7.5%. Our model offers a faster solution for intraoperative use of a navigated beta-probe surface imaging system, with improved image quality.
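The partition idea of replacing a dense LUT by block-averaged homogeneous values can be sketched as follows; the table size, block size, and contents are illustrative, not the system's actual attenuation data.

```python
import numpy as np

# Toy sketch: replace a dense attenuation look-up table by block-averaged
# homogeneous values, trading a little accuracy for fewer distinct
# coefficients and cheaper access during reconstruction.
rng = np.random.default_rng(0)
lut = rng.random((64, 64))              # hypothetical dense attenuation LUT

def partition_average(table, block):
    """Average the table over non-overlapping block x block partitions."""
    h, w = table.shape
    assert h % block == 0 and w % block == 0
    return table.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

coarse = partition_average(lut, 8)      # 64x64 -> 8x8 averaged partitions
```

Because the blocks are equal-sized and non-overlapping, the overall mean of the table is preserved exactly; only within-block variation is discarded.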
NASA Astrophysics Data System (ADS)
Beichner, Robert
2016-03-01
The Student-Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) Project combines curricula and a specially designed instructional space to enhance learning. SCALE-UP students practice communication and teamwork skills while performing activities that enhance their conceptual understanding and problem-solving skills. This can be done with small or large classes and has been implemented at more than 250 institutions. Educational research indicates that students should collaborate on interesting tasks and be deeply involved with the material they are studying. SCALE-UP class time is spent primarily on ``tangibles'' and ``ponderables''--hands-on measurements/observations and interesting questions. There are also computer simulations (called ``visibles'') and hypothesis-driven labs. Students sit at tables designed to facilitate group interactions. Instructors circulate and engage in Socratic dialogues. The setting looks like a banquet hall, with lively interactions nearly all the time. Impressive learning gains have been measured at institutions across the US and internationally. This talk describes today's students, how lecturing got started, what happens in a SCALE-UP classroom, and how the approach has spread. The SCALE-UP project has greatly benefitted from numerous grants made by NSF and FIPSE to NCSU and other institutions.
3. CABLE TUNNEL TO TEST STAND 1A, LOOKING SOUTH TO ...
3. CABLE TUNNEL TO TEST STAND 1-A, LOOKING SOUTH TO STAIRS LEADING UP TO CONTROL CENTER. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Control Center, Test Area 1-115, near Altair & Saturn Boulevards, Boron, Kern County, CA
19. TRAVELING CRANE ATOP SUPERSTRUCTURE, FROM RUN LINE DECK. Looking ...
19. TRAVELING CRANE ATOP SUPERSTRUCTURE, FROM RUN LINE DECK. Looking up to north northeast. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Test Stand 1-A, Test Area 1-120, north end of Jupiter Boulevard, Boron, Kern County, CA
Surface emissivity and temperature retrieval for a hyperspectral sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borel, C.C.
1998-12-01
With the growing use of hyper-spectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. The author believes that this will make it possible to move beyond the present temperature-emissivity separation algorithms by using methods which take advantage of the many channels available in hyper-spectral imagers. A simple fact used in coming up with a novel algorithm is that a typical surface emissivity spectrum is rather smooth compared to the spectral features introduced by the atmosphere. Thus, an iterative solution technique can be devised which retrieves emissivity spectra based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. One such iterative algorithm solves the radiative transfer equation for the radiance at the sensor for the unknown emissivity and uses the blackbody temperature computed in an atmospheric window to get a guess for the unknown surface temperature. By varying the surface temperature over a small range, a series of emissivity spectra are calculated, and the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
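The smoothness-driven temperature sweep described above can be sketched as follows (a toy illustration with a simplified stand-in for the Planck function and no atmospheric terms; all names and values are assumptions):

```python
import numpy as np

def planck_like(T, wl):
    # Simplified stand-in for the Planck radiance (illustrative only).
    return 1.0 / (np.exp(1.0 / (wl * T)) - 1.0)

def roughness(e):
    # Smoothness metric: energy of the second difference of the spectrum.
    return np.sum(np.diff(e, 2) ** 2)

def retrieve_emissivity(radiance, wl, T_candidates):
    # For each candidate surface temperature, derive an emissivity
    # spectrum and keep the smoothest one.
    best = min(T_candidates,
               key=lambda T: roughness(radiance / planck_like(T, wl)))
    return best, radiance / planck_like(best, wl)

wl = np.linspace(8.0, 12.0, 20)            # wavelengths, illustrative units
radiance = 0.95 * planck_like(300.0, wl)   # flat emissivity 0.95 at 300 K
T_best, e_best = retrieve_emissivity(radiance, wl, [290.0, 300.0, 310.0])
```

The correct temperature yields a flat (maximally smooth) emissivity spectrum, so the sweep recovers it.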
Glaucoma: Symptoms, Diagnosis, Treatment and Latest Research
... Feature: Glaucoma Glaucoma: Symptoms, Diagnosis, Treatment and Latest Research Past Issues / Fall 2009 Table of Contents Symptoms ... patients may need to keep taking drugs. Latest Research Researchers are studying the causes of glaucoma, looking ...
Lo, Kam W; Ferguson, Brian G
2012-11-01
The accurate localization of small arms fire using fixed acoustic sensors is considered. First, the conventional wavefront-curvature passive ranging method, which requires only differential time-of-arrival (DTOA) measurements of the muzzle blast wave to estimate the source position, is modified to account for sensor positions that are not strictly collinear (bowed array). Second, an existing single-sensor-node ballistic model-based localization method, which requires both DTOA and differential angle-of-arrival (DAOA) measurements of the muzzle blast wave and ballistic shock wave, is improved by replacing the basic external ballistics model (which describes the bullet's deceleration along its trajectory) with a more rigorous model and replacing the look-up table ranging procedure with a nonlinear (or polynomial) equation-based ranging procedure. Third, a new multiple-sensor-node ballistic model-based localization method, which requires only DTOA measurements of the ballistic shock wave to localize the point of fire, is formulated. The first method is applicable to situations when only the muzzle blast wave is received, whereas the third method applies when only the ballistic shock wave is received. The effectiveness of each of these methods is verified using an extensive set of real data recorded during a 7 day field experiment.
Improving wavelet denoising based on an in-depth analysis of the camera color processing
NASA Astrophysics Data System (ADS)
Seybold, Tamara; Plichta, Mathias; Stechele, Walter
2015-02-01
While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set, and the noise model is usually additive white Gaussian noise (AWGN). This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation for the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods. We improve wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it can be implemented with very low computational complexity and can process HD video sequences in real time on an FPGA.
Real-time distortion correction for visual inspection systems based on FPGA
NASA Astrophysics Data System (ADS)
Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin
2008-03-01
Visual inspection is a kind of new technology based on research in computer vision, which focuses on the measurement of an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection and distortion correction, can be completed in a programmable device (FPGA). As a wide-field-angle lens is adopted in the system, the output images have serious distortion. Limited by the computing speed of the computer, software can only correct the distortion of static images, not of dynamic images. To meet the real-time requirement, we design a distortion correction system based on FPGA. In the hardware distortion correction method, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which data are read out to correct the gray levels. The major benefit of using FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without being modified.
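The described split, computing spatial correction data once in software and then correcting frames by pure table look-up, can be sketched as follows (a toy one-coefficient radial model; the coefficient and image size are illustrative, not the system's actual calibration):

```python
import numpy as np

def build_correction_lut(h, w, k1=-2e-7):
    # Offline (software) step: for each output pixel, compute the source
    # pixel that undoes a radially symmetric distortion.
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    r2 = (ys - cy) ** 2 + (xs - cx) ** 2
    scale = 1.0 + k1 * r2            # simple one-coefficient radial model
    src_y = np.clip(np.round(cy + (ys - cy) * scale), 0, h - 1).astype(int)
    src_x = np.clip(np.round(cx + (xs - cx) * scale), 0, w - 1).astype(int)
    return src_y, src_x

def correct(frame, lut):
    # Run-time step: pure table look-up, no per-pixel arithmetic.
    src_y, src_x = lut
    return frame[src_y, src_x]

lut = build_correction_lut(480, 640)  # computed once, stored as the LUT
```

In the hardware version the precomputed source coordinates become memory addresses, so per-frame correction reduces to reads from the look-up table.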
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-01-01
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
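The fixed-point arithmetic idea, quantizing correction coefficients once so that per-pixel correction needs only integer operations, can be sketched for the linear FPN case (illustrative bit width and coefficients; the paper derives an optimal fixed-point design, which this sketch does not reproduce):

```python
def make_fixedpoint_corrector(gain, offset, frac_bits=12):
    # Quantize the correction coefficients once; after that, correcting
    # a pixel value needs only integer multiply, add, and shift.
    g = round(gain * (1 << frac_bits))
    o = round(offset * (1 << frac_bits))
    def correct(x):
        return (g * x + o) >> frac_bits
    return correct

fix = make_fixedpoint_corrector(1.05, -3.0)   # hypothetical per-pixel gain/offset
print(fix(100))  # -> 102  (close to 1.05 * 100 - 3 = 102)
```

The shift by `frac_bits` replaces a floating-point divide, which is what makes the correction cheap enough to run per pixel.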
Recognition and Quantification of Area Damaged by Oligonychus Perseae in Avocado Leaves
NASA Astrophysics Data System (ADS)
Díaz, Gloria; Romero, Eduardo; Boyero, Juan R.; Malpica, Norberto
The measure of leaf damage is a basic tool in plant epidemiology research. Measuring the area of a great number of leaves is subjective and time consuming. We investigate the use of machine learning approaches for the objective segmentation and quantification of leaf area damaged by mites in avocado leaves. After extraction of the leaf veins, pixels are labeled with a look-up table generated using a Support Vector Machine with a polynomial kernel of degree 3, on the chrominance components of YCrCb color space. Spatial information is included in the segmentation process by rating the degree of membership to a certain class and the homogeneity of the classified region. Results are presented on real images with different degrees of damage.
Hoenicke, Dirk
2014-12-02
Disclosed are a unified method and apparatus to classify, route, and process data packets injected into a network so that they belong to a plurality of logical networks, each implementing a specific flow of data on top of a common physical network. The method allows local identification of collectives of packets for local processing, such as the computation of the sum, difference, maximum, minimum, or other logical operations among the identified packet collective. Packets are injected together with a class attribute and an opcode attribute. Network routers employing the described method use the packet attributes to look up the class-specific route information from a local route table, which contains the local incoming and outgoing directions as part of the specifically implemented global data flow of the particular virtual network.
Programmable remapper for image processing
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)
1991-01-01
A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
An automated approach to the design of decision tree classifiers
NASA Technical Reports Server (NTRS)
Argentiero, P.; Chin, R.; Beaudet, P.
1982-01-01
An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
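The pre-distortion principle, applying the complement of a known time-dependent phase error so the downstream error cancels, can be sketched as follows (a toy quadratic phase drift standing in for power droop; all values are illustrative):

```python
import numpy as np

n = 64
t = np.arange(n)
phase_error = 1e-3 * t**2        # assumed droop-induced quadratic phase drift
# The correction LUT holds the complementary (negated) phase per sample.
correction_lut = -phase_error

tx = np.ones(n, dtype=complex) * np.exp(1j * correction_lut)  # pre-distort
received = tx * np.exp(1j * phase_error)   # downstream component re-applies the error
```

After the downstream phase error is applied, the pre-distorted waveform comes out clean; in hardware the `correction_lut` values would live in the phase generator's look-up table.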
Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.
Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter
2015-08-24
We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity in the calculation of the fields. Also, a novel technique for occlusion culling with little additional computation cost is introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.
Generation and transmission of DPSK signals using a directly modulated passive feedback laser.
Karar, Abdullah S; Gao, Ying; Zhong, Kang Ping; Ke, Jian Hong; Cartledge, John C
2012-12-10
The generation of differential-phase-shift keying (DPSK) signals is demonstrated using a directly modulated passive feedback laser at 10.709-Gb/s, 14-Gb/s and 16-Gb/s. The quality of the DPSK signals is assessed using both noncoherent detection for a bit rate of 10.709-Gb/s and coherent detection with digital signal processing involving a look-up table pattern-dependent distortion compensator. Transmission over a passive link consisting of 100 km of single mode fiber at a bit rate of 10.709-Gb/s is achieved with a received optical power of -45 dBm at a bit-error-ratio of 3.8 × 10⁻³ and a 49 dB loss margin.
New Arab social order: a study of the social impact of oil wealth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, S.E.
1982-01-01
The skyrocketing Arab oil revenues of the 1970s have triggered socio-economic forces in the Arab world. Observers have studied the financial and geopolitical aspects of Arab oil, but generally have ignored the human and social repercussions stimulated by the oil wealth. This book challenges the commonly accepted view of the impact of manpower movements across the Arab wealth divide, looking at the new social formations, class structures, value systems, and social cleavages that have been emerging in both rich and poor Arab countries. These developments may add up to a silent social revolution, and are possibly a prelude to more overt tension, conflict, and political turmoil. 136 references, 13 figures, 39 tables.
Numerical solution of Space Shuttle Orbiter flow field including real gas effects
NASA Technical Reports Server (NTRS)
Prabhu, D. K.; Tannehill, J. C.
1984-01-01
The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.
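A table look-up procedure for equilibrium-air properties typically reduces to interpolation on a precomputed grid; here is a generic bilinear sketch (an illustration of the idea, not the code's actual curve fits or table structure):

```python
import numpy as np

def lut_lookup(p_grid, e_grid, table, p, e):
    # Bilinear interpolation in a precomputed property table,
    # indexed by e.g. pressure (p) and internal energy (e).
    i = int(np.clip(np.searchsorted(p_grid, p) - 1, 0, len(p_grid) - 2))
    j = int(np.clip(np.searchsorted(e_grid, e) - 1, 0, len(e_grid) - 2))
    fp = (p - p_grid[i]) / (p_grid[i + 1] - p_grid[i])
    fe = (e - e_grid[j]) / (e_grid[j + 1] - e_grid[j])
    return ((1 - fp) * (1 - fe) * table[i, j]
            + fp * (1 - fe) * table[i + 1, j]
            + (1 - fp) * fe * table[i, j + 1]
            + fp * fe * table[i + 1, j + 1])

# Toy grid where the tabulated property is simply T = p + e.
p_grid = np.array([0.0, 0.5, 1.0])
e_grid = np.array([0.0, 0.5, 1.0])
table = p_grid[:, None] + e_grid[None, :]
```

For a linear property the interpolation is exact; a real equilibrium-air table trades table size against interpolation error.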
Minimalist design of a robust real-time quantum random number generator
NASA Astrophysics Data System (ADS)
Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.
2015-08-01
We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
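A deterministic randomness extractor realized as a look-up table can be illustrated with the classic von Neumann unbiasing step on bit pairs (the paper does not specify its extractor, so this is only an analogous sketch):

```python
# Look-up table mapping 2-bit raw inputs to output bits: the classic
# von Neumann unbiasing step; (0,0) and (1,1) produce no output.
EXTRACTOR_LUT = {(0, 1): 0, (1, 0): 1}

def extract(bits):
    # Filter a raw (possibly biased) bit sequence through the table.
    out = []
    for i in range(0, len(bits) - 1, 2):
        pair = (bits[i], bits[i + 1])
        if pair in EXTRACTOR_LUT:
            out.append(EXTRACTOR_LUT[pair])
    return out

print(extract([0, 1, 1, 1, 1, 0, 0, 0]))  # -> [0, 1]
```

Because the mapping is a fixed table, extraction is a constant-time look-up per input, which is what enables on-the-fly processing without heavy computation.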
Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target.
Yin, Fang; Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song
2018-03-28
This paper proposes an autonomous algorithm to determine the relative pose between the chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by looking for congruent tetrahedra in the scanning point cloud and the model point cloud, on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method for arbitrary initial attitudes, a simulated system is presented. Specifically, the ability of the proposed method to provide the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted and the results demonstrate the effectiveness of the proposed method.
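The hash-table indexing idea, keying tetrahedra by a rigid-motion invariant so that congruent ones collide, can be sketched as follows (a single-level index on quantized sorted edge lengths; the paper's two-level index is not reproduced, and the quantum is an assumption):

```python
import itertools
import math
from collections import defaultdict

def edge_key(points, quantum=0.05):
    # Quantized, sorted edge lengths of a tetrahedron: invariant under
    # rigid motion and vertex ordering, so congruent tetrahedra collide.
    dists = sorted(math.dist(a, b)
                   for a, b in itertools.combinations(points, 2))
    return tuple(round(d / quantum) for d in dists)

# Index every model tetrahedron once ...
model_index = defaultdict(list)
model_tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
model_index[edge_key(model_tet)].append(model_tet)

# ... then each scan tetrahedron is matched by a single look-up.
scan_tet = [(1, 0, 0), (0, 0, 0), (0, 0, 1), (0, 1, 0)]  # same shape, reordered
matches = model_index[edge_key(scan_tet)]
```

The look-up replaces an exhaustive comparison against every model tetrahedron, which is where the search speed-up comes from.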
Fast simulation tool for ultraviolet radiation at the earth's surface
NASA Astrophysics Data System (ADS)
Engelsen, Ola; Kylling, Arve
2005-04-01
FastRT is a fast, yet accurate, UV simulation tool that computes downward surface UV doses, UV indices, and irradiances in the spectral range 290 to 400 nm with a resolution as small as 0.05 nm. It computes a full UV spectrum within a few milliseconds on a standard PC, and enables the user to convolve the spectrum with user-defined and built-in spectral response functions including the International Commission on Illumination (CIE) erythemal response function used for UV index calculations. The program accounts for the main radiative input parameters, i.e., instrumental characteristics, solar zenith angle, ozone column, aerosol loading, clouds, surface albedo, and surface altitude. FastRT is based on look-up tables of carefully selected entries of atmospheric transmittances and spherical albedos, and exploits the smoothness of these quantities with respect to atmospheric, surface, geometrical, and spectral parameters. An interactive site, http://nadir.nilu.no/~olaeng/fastrt/fastrt.html, enables the public to run the FastRT program with most input options. This page also contains updated information about FastRT and links to freely downloadable source codes and binaries.
Monitoring Error Rates In Illumina Sequencing.
Manley, Leigh J; Ma, Duanduan; Levine, Stuart S
2016-12-01
Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted.
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996-version of the Air Force Geophysical Laboratory HITRAN data, the parameterization includes the absorption due to major gaseous absorption (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, various approaches of computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed either using the k-distribution method or the table look-up method. To include the effect of scattering due to clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of the high spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
Fiori, Simone
2007-01-01
Bivariate statistical modeling from incomplete data is a useful statistical tool that makes it possible to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Statistical modeling is also useful when the amount of available data is enough to show the relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641
Zhang, Yong; Li, Yuan; Rong, Zhi-Guo
2010-06-01
A remote sensor's channel spectral response function (SRF) is one of the key factors influencing the inversion algorithms, accuracy, and geophysical characteristics of quantitative products. To assess the adjustments of FY-2E's split window channels' SRF, detailed comparisons of the SRF differences between the corresponding FY-2E and FY-2C channels were carried out based on three data collections: the NOAA AVHRR corresponding channels' calibration look-up tables, field-measured water surface radiance and atmospheric profiles at Lake Qinghai, and radiance calculated from the Planck function over the full dynamic range of FY-2E/C. The results showed that the adjustments of FY-2E's split window channels' SRF shift the spectral range and influence the inversion algorithms of some ground quantitative products. On the other hand, these adjustments of the FY-2E SRFs increase the brightness temperature differences between FY-2E's two split window channels over the full dynamic range relative to FY-2C's. This would improve the inversion ability of FY-2E's split window channels.
A high-resolution programmable Vernier delay generator based on carry chains in FPGA
NASA Astrophysics Data System (ADS)
Cui, Ke; Li, Xiangyu; Zhu, Rihong
2017-06-01
This paper presents an architecture of a high-resolution delay generator implemented in a single field programmable gate array chip by exploiting the method of utilizing dedicated carry chains. It serves as the core component in various physical instruments. The proposed delay generator contains the coarse delay step and the fine delay step to guarantee both large dynamic range and high resolution. The carry chains are organized in the Vernier delay loop style to fulfill the fine delay step with high precision and high linearity. The delay generator was implemented in the EP3SE110F1152I3 Stratix III device from Altera on a self-designed test board. Test results show that the obtained resolution is 38.6 ps, and the differential nonlinearity/integral nonlinearity is in the range of [-0.18 least significant bit (LSB), 0.24 LSB]/(-0.02 LSB, 0.01 LSB) under the nominal supply voltage of 1100 mV and environmental temperature of 20 °C. The delay generator is rather efficient concerning resource cost, which uses only 668 look-up tables and 146 registers in total.
Analytical modeling of the temporal evolution of hot spot temperatures in silicon solar cells
NASA Astrophysics Data System (ADS)
Wasmer, Sven; Rajsrima, Narong; Geisemeyer, Ino; Fertig, Fabian; Greulich, Johannes Michael; Rein, Stefan
2018-03-01
We present an approach to predict the equilibrium temperature of hot spots in crystalline silicon solar cells based on the analysis of their temporal evolution right after turning on a reverse bias. To this end, we derive an analytical expression for the time-dependent heat diffusion of a breakdown channel that is assumed to be cylindrical. We validate this by means of thermography imaging of hot spots right after turning on a reverse bias. The expression can be used to extract hot spot powers and radii from short-term measurements, targeting application in inline solar cell characterization. The extracted hot spot powers are validated against long-term dark lock-in thermography imaging. Using a look-up table of expected equilibrium temperatures determined by numerical and analytical simulations, we utilize the determined hot spot properties to predict the equilibrium temperatures of about 100 industrial aluminum back-surface field solar cells and achieve a high correlation coefficient of 0.86 and a mean absolute error of only 3.3 K.
Whole-body to tissue concentration ratios for use in biota dose assessments for animals.
Yankovich, Tamara L; Beresford, Nicholas A; Wood, Michael D; Aono, Tasuo; Andersson, Pål; Barnett, Catherine L; Bennett, Pamela; Brown, Justin E; Fesenko, Sergey; Fesenko, J; Hosseini, Ali; Howard, Brenda J; Johansen, Mathew P; Phaneuf, Marcel M; Tagami, Keiko; Takata, Hyoe; Twining, John R; Uchida, Shigeo
2010-11-01
Environmental monitoring programs often measure contaminant concentrations in animal tissues consumed by humans (e.g., muscle). By comparison, demonstration of the protection of biota from the potential effects of radionuclides involves a comparison of whole-body doses to radiological dose benchmarks. Consequently, methods for deriving whole-body concentration ratios based on tissue-specific data are required to make best use of the available information. This paper provides a series of look-up tables with whole-body:tissue-specific concentration ratios for non-human biota. Focus was placed on relatively broad animal categories (including molluscs, crustaceans, freshwater fishes, marine fishes, amphibians, reptiles, birds and mammals) and commonly measured tissues (specifically, bone, muscle, liver and kidney). Depending upon organism, whole-body to tissue concentration ratios were derived for between 12 and 47 elements. The whole-body to tissue concentration ratios can be used to estimate whole-body concentrations from tissue-specific measurements. However, we recommend that any given whole-body to tissue concentration ratio should not be used if the value falls between 0.75 and 1.5. Instead, a value of one should be assumed.
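Applying such a look-up table together with the 0.75-1.5 rule recommended above can be sketched as follows (the ratio values are illustrative placeholders, not taken from the paper's tables):

```python
# Hypothetical whole-body:muscle concentration ratios; the values are
# illustrative placeholders, not the paper's derived ratios.
WB_TO_MUSCLE = {"Cs": 1.1, "Sr": 4.0, "Zn": 1.3}

def whole_body_concentration(element, muscle_conc):
    # Apply the recommended rule: ratios between 0.75 and 1.5 are not
    # used directly; a value of one is assumed instead.
    ratio = WB_TO_MUSCLE[element]
    if 0.75 < ratio < 1.5:
        ratio = 1.0
    return muscle_conc * ratio

print(whole_body_concentration("Sr", 2.0))  # -> 8.0
print(whole_body_concentration("Cs", 2.0))  # -> 2.0 (ratio 1.1 treated as 1)
```

This converts a tissue-specific monitoring measurement into the whole-body estimate needed for comparison with radiological dose benchmarks.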
NASA Astrophysics Data System (ADS)
Arif, C.; Fauzan, M. I.; Satyanto, K. S.; Budi, I. S.; Masaru, M.
2018-05-01
The water table in rice fields plays an important role in mitigating greenhouse gas (GHG) emissions from paddy fields. Continuous flooding, maintaining the water table 2-5 cm above the soil surface, is not effective and releases more GHG emissions. The System of Rice Intensification (SRI), an alternative rice farming method, applies intermittent irrigation that maintains a lower water table and has been shown to reduce GHG emissions without reducing productivity significantly. The objectives of this study were to develop an automatic water table control system for SRI application and to evaluate its performance. The control system was developed based on fuzzy logic algorithms using a Raspberry Pi mini PC. Based on laboratory and field tests, the developed system worked well, as indicated by low MAPE (mean absolute percentage error) values: 16.88% for the simulation tests and 15.80% for the field tests. The system can save up to 42.54% of irrigation water without reducing productivity significantly when compared to manual irrigation systems.
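The MAPE figure quoted above is the standard mean absolute percentage error; a minimal sketch (names are illustrative):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    n = len(actual)
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / n
```

A MAPE near 16% means the controller's water table tracked the target level with an average relative error of about one sixth.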
DETAIL VIEW OF CLASSIFIER, TAILINGS LAUNDER TROUGH, LINE SHAFTS, AND ...
DETAIL VIEW OF CLASSIFIER, TAILINGS LAUNDER TROUGH, LINE SHAFTS, AND CONCENTRATION TABLES, LOOKING SOUTHWEST. SLURRY EXITING THE BALL MILL WAS COLLECTED IN AN AMALGAMATION BOX (MISSING) FROM THE END OF THE MILL, AND INTRODUCED INTO THE CLASSIFIER. THE TAILINGS LAUNDER IS ON THE GROUND AT LOWER RIGHT. THE LINE SHAFTING ABOVE PROVIDED POWER TO THE CONCENTRATION TABLES BELOW AT CENTER RIGHT. - Gold Hill Mill, Warm Spring Canyon Road, Death Valley Junction, Inyo County, CA
Nurses and global health: 'at the table' or 'on the menu'?
Scammell, Janet
2018-01-11
Janet Scammell, Associate Professor (Nursing), Bournemouth University, looks at the role of the nursing workforce in shaping wider global health care, and the part nurse educators play in promoting international involvement.
Juvenile Delinquency: An Introduction
ERIC Educational Resources Information Center
Smith, Carolyn A.
2008-01-01
Juvenile Delinquency is a term which is often inaccurately used. This article clarifies definitions, looks at prevalence, and explores the relationship between juvenile delinquency and mental health. Throughout, differences between males and females are explored. (Contains 1 table.)
Life-table methods for detecting age-risk factor interactions in long-term follow-up studies.
Logue, E E; Wing, S
1986-01-01
Methodological investigation has suggested that age-risk factor interactions should be more evident in age-of-experience life tables than in follow-up-time tables, due to the mixing of ages of experience over follow-up time in groups defined by age at initial examination. To illustrate the two approaches, age modification of the effect of total cholesterol on ischemic heart disease mortality was investigated in two long-term follow-up studies. Follow-up-time life table analysis of 116 deaths over 20 years in one study was more consistent with a uniform relative risk due to cholesterol, while age-of-experience life table analysis was more consistent with a monotonic negative age interaction. In a second follow-up study (160 deaths over 24 years), there was no evidence of a monotonic negative age-cholesterol interaction by either method. It was concluded that age-specific life table analysis should be used when age-risk factor interactions are considered, but that both approaches yield almost identical results in the absence of an age interaction. The choice of the more appropriate life-table analysis should ultimately be guided by the nature of the age or time phenomena of scientific interest.
MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1994-01-01
MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416 page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language.
It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code and 418K bytes. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.
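The univariate table look-up and interpolation MATH77 provides can be illustrated with a simple linear version (a sketch only; MATH77's routines also handle multivariate and "ragged" tables and return error estimates, none of which is attempted here):

```python
import bisect

def table_interp(x, xs, ys):
    """Linear table look-up: locate the bracketing interval by binary
    search, then interpolate. xs must be strictly increasing; values
    outside the table are clamped to the end points (one common
    convention, assumed here)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1      # index of left bracket point
    t = (x - xs[i]) / (xs[i + 1] - xs[i])   # fractional position in interval
    return ys[i] + t * (ys[i + 1] - ys[i])
```

For instance, with the table xs = [0, 1, 2], ys = [0, 10, 20], a query at x = 1.5 returns 15.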
An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator
Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André
2018-01-01
This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702
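A back-of-envelope calculation shows why explicit point-to-point connection look-up tables are prohibitive at the scales quoted (the 100 M neurons and 200 k fan-out come from the abstract; the 4-byte index per connection is an assumption):

```python
neurons = 100_000_000   # simplified auditory cortex from the abstract
fan_out = 200_000       # maximum fan-out per neuron
bytes_per_entry = 4     # assumed 32-bit target-neuron index

table_bytes = neurons * fan_out * bytes_per_entry
# ~80 terabytes of connection table -- far beyond any on-chip memory,
# which is why the simulator stores structural connectivity rules
# (minicolumns/hypercolumns) instead of explicit connection lists.
```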
Implementing asynchronous collective operations in a multi-node processing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Eisley, Noel A.; Heidelberger, Philip
A method, system, and computer program product are disclosed for implementing an asynchronous collective operation in a multi-node data processing system. In one embodiment, the method comprises sending data to a plurality of nodes in the data processing system, broadcasting a remote get to the plurality of nodes, and using this remote get to implement asynchronous collective operations on the data by the plurality of nodes. In one embodiment, each of the nodes performs only one task in the asynchronous operations, and each node sets up a base address table with an entry for a base address of a memory buffer associated with that node. In another embodiment, each of the nodes performs a plurality of tasks in said collective operations, and each task of each node sets up a base address table with an entry for a base address of a memory buffer associated with the task.
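The base address table idea can be sketched with a dictionary standing in for the hardware table (node IDs and addresses below are made up for illustration):

```python
# Each node registers the base address of its memory buffer; a broadcast
# "remote get" can then resolve (node, offset) pairs against the table.
base_address_table = {}

def register_buffer(node_id, base_address):
    base_address_table[node_id] = base_address

def remote_get_address(node_id, offset):
    """Absolute address of a datum in another node's buffer."""
    return base_address_table[node_id] + offset

register_buffer(0, 0x1000)
register_buffer(1, 0x8000)
```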
When Does Model-Based Control Pay Off?
Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J
2016-08-01
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.
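The "look-up table constructed through trial-and-error" that model-free strategies inspect is, in the simplest case, a table of action values updated by a delta rule (an illustrative sketch; the learning rate and state/action labels are arbitrary):

```python
alpha = 0.5                                       # learning rate (assumed)
q = {("s0", "left"): 0.0, ("s0", "right"): 0.0}   # the look-up table

def update(state, action, reward):
    """Model-free delta-rule update: nudge the stored value toward the
    observed reward; no model of the environment is required."""
    key = (state, action)
    q[key] += alpha * (reward - q[key])

update("s0", "left", 1.0)
update("s0", "left", 1.0)
# the stored value converges toward the mean observed reward
```

Reading an action value is then a single table access, which is why the strategy is computationally cheap but only as accurate as the experience baked into the table.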
ERIC Educational Resources Information Center
Gray, Lucinda; Taie, Soheyla
2015-01-01
This First Look report provides selected findings from all five waves of the Beginning Teacher Longitudinal Study (BTLS) along with data tables and methodological information. The BTLS follows a sample of public elementary and secondary school teachers who participated in the 2007-08 Schools and Staffing Survey (SASS), and whose first year of…
NASA Technical Reports Server (NTRS)
Fanourakis, Sofia
2015-01-01
My main project was to determine and implement updates to be made to MODEAR (Mission Operations Data Enterprise Architecture Repository) process definitions to be used for CST-100 (Crew Space Transportation-100) related missions. Emphasis was placed on the scheduling aspect of the processes. In addition, I was to complete other tasks as given. Some of the additional tasks were: to create pass-through command look-up tables for the flight controllers, finish one of the MDT (Mission Operations Directorate Display Tool) displays, gather data on what is included in the CST-100 public data, develop a VBA (Visual Basic for Applications) script to create a csv (Comma-Separated Values) file with specific information from spreadsheets containing command data, create a command script for the November MCC-ASIL (Mission Control Center-Avionics System Integration Laboratory) testing, and take notes for one of the TCVB (Terminal Configured Vehicle B-737) meetings. In order to make progress in my main project I scheduled meetings with the appropriate subject matter experts, prepared material for the meetings, and assisted in the discussions in order to understand the process or processes at hand. After such discussions I made updates to various MODEAR processes and process graphics. These meetings have resulted in significant updates to the processes that were discussed. In addition, the discussions have helped the departments responsible for these processes better understand the work ahead and provided material to help document how their products are created. I completed my other tasks utilizing resources available to me and, when necessary, consulting with the subject matter experts. 
Outputs resulting from my other tasks were: two completed and one partially completed pass-through command look-up tables for the flight controllers, significant updates to one of the MDT displays, a spreadsheet containing data on what is included in the CST-100 public data, a tool to create a csv file with specific information from spreadsheets containing command data, a command script for the November MCC-ASIL testing which resulted in a successful test day identifying several potential issues, and notes from one of the TCVB meetings that were used to keep the teams up to date on what was discussed and decided. I have learned a great deal working at NASA these last four months. I was able to meet and work with amazing individuals, further develop my technical knowledge, expand my knowledge base regarding human spaceflight, and contribute to the CST-100 missions. My work at NASA has strengthened my desire to continue my education in order to make further contributions to the field, and has given me the opportunity to see the advantages of a career at NASA.
After High School, Then What? A Look at the Postsecondary Sorting-Out Process for American Youth
1991-01-01
then remained stable from 1984 to 1987. The two time series for women show slightly different patterns, in that the college entrance rates in Table 8...standing of the sorting-out process-the process by which young people with widely differing talents and ambitions choose among competing alternatives such...Table 3.1 These differences between the male and female rates underscore the huge gender gap in college enrollment patterns that existed in 1970. Men
NASA Technical Reports Server (NTRS)
Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.
2011-01-01
Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Applications output included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondences with validation field data were obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provide essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.
Parallel processor for real-time structural control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tise, B.L.
1992-01-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An Open Windows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
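Actuator linearization via a look-up table can be sketched as follows (a toy example assuming a square-root actuator response; the real processor's tables and actuator characteristics are not described in the abstract):

```python
N = 1024  # table length (assumed)

# If the actuator output goes as sqrt(drive), the inverse map is
# drive = desired**2; precompute it once so the real-time control path
# is a single table read instead of a runtime function evaluation.
lut = [(i / (N - 1)) ** 2 for i in range(N)]

def linearized_drive(desired):
    """Drive value producing a (roughly) linear output, desired in [0, 1]."""
    return lut[round(desired * (N - 1))]
```

The table resolution bounds the residual nonlinearity; 1024 entries keep the index quantization error below 0.1% of full scale.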
Report on computation of repetitive hyperbaric-hypobaric decompression tables
NASA Technical Reports Server (NTRS)
Edel, P. O.
1975-01-01
The tables were constructed specifically for NASA's simulated weightlessness training program; they provide for 8 depth ranges covering depths from 7 to 47 FSW, with exposure times of 15 to 360 minutes. These tables were based upon an 8-compartment model using tissue half-time values of 5 to 360 minutes and Workman-line M-values for control of the decompression obligation resulting from hyperbaric exposures. Supersaturation ratios of 1.55:1 to 2:1 were used for control of ascents to altitude following such repetitive dives. Adequacy of the method and the resultant tables were determined in light of past experience with decompression involving hyperbaric-hypobaric interfaces in human exposures. Using these criteria, the method showed conformity with empirically determined values. In areas where a discrepancy existed, the tables would err in the direction of safety.
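The compartment model described takes the classic exponential (Haldanean) gas-loading form; one compartment can be sketched as follows (illustrative only, with assumed names, and of course not usable for actual decompression planning):

```python
def tissue_tension(p_initial, p_ambient, minutes, half_time):
    """Inert gas tension of one compartment after `minutes` at constant
    ambient pressure, for a compartment with the given half-time:
        P(t) = Pamb + (P0 - Pamb) * 2**(-t / half_time)
    """
    return p_ambient + (p_initial - p_ambient) * 2.0 ** (-minutes / half_time)
```

After one half-time the compartment has closed half the gap to ambient pressure; M-values and supersaturation ratios then cap how high each compartment's tension may be relative to the pressure being ascended to.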
NASA Astrophysics Data System (ADS)
2012-09-01
WE RECOMMEND
Nucleus: A Trip into the Heart of Matter. A coffee-table book for everyone to dip into and learn from.
The Wonderful World of Relativity. A charming, stand-out introduction to relativity.
The Physics DemoLab, National University of Singapore. A treasure trove of physics for hands-on science experiences.
Quarks, Leptons and the Big Bang. Perfect to polish up on particle physics for older students.
Victor 70C USB Digital Multimeter. Equipment impresses for usability and value.
WORTH A LOOK
Cosmos Close-Up. Weighty tour of the galaxy that would make a good display.
Shooting Stars. Encourage students to try astrophotography with this ebook.
HANDLE WITH CARE
Head Shot: The Science Behind the JFK Assassination. Exploration of the science behind the crime fails to impress.
WEB WATCH
App-lied science for education: a selection of free Android apps are reviewed and iPhone app options are listed.
A special look at New Jersey's transportation system
DOT National Transportation Integrated Search
2000-08-01
This document is a photographic presentation of New Jersey's transportation system. Its table of contents lists the following 8 subject headings: 1 Bridges, 2. Roadsides, 3. Rail Stations, 4. Non-motor Transport, 5. Nature, 6. History, 7. Housekeepin...
NASA Astrophysics Data System (ADS)
Flatt, H.; Tarnowsky, A.; Blume, H.; Pirsch, P.
2010-10-01
This paper presents the mapping of a video-based approach for real-time evaluation of angular histograms onto a modular coprocessor architecture. The architecture comprises several dedicated processing elements for parallel processing of computation-intensive image processing tasks and is coupled with a RISC processor. A configurable architecture extension with a processing element for evaluating angular histograms of objects, in conjunction with the RISC processor, provides real-time classification. Depending on the configuration, the architecture extension requires between 3300 and 12,000 look-up tables on a Xilinx Virtex-5 FPGA. Running at a clock frequency of 100 MHz and independently of the image resolution, up to 100 objects of size 256×256 pixels per frame can be analyzed in a 25 Hz video stream.
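An angular histogram of the kind the coprocessor evaluates can be sketched in software (a minimal sketch; the bin count and the centroid-relative formulation are assumptions, not details from the paper):

```python
import math

def angular_histogram(points, centroid, bins=16):
    """Histogram of point angles about a centroid -- a simple rotational
    shape descriptor for an object's pixels or contour points."""
    cx, cy = centroid
    hist = [0] * bins
    for x, y in points:
        ang = math.atan2(y - cy, x - cx) % (2.0 * math.pi)  # angle in [0, 2pi)
        hist[min(int(ang / (2.0 * math.pi) * bins), bins - 1)] += 1
    return hist
```

The hardware version streams object pixels through a dedicated processing element that performs this binning, leaving only the classification of the resulting histogram to the RISC processor.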
Interior. Looking from balance room to the front entrance. Chemicals ...
Interior. Looking from balance room to the front entrance. Chemicals related to Edison's experiments on the extraction of latex for rubber from the goldenrod plant. Room is set up based on reconstruction research done in 1972. - Thomas A. Edison Laboratories, Building No. 2, Main Street & Lakeside Avenue, West Orange, Essex County, NJ
The SAMI Galaxy Survey: A prototype data archive for Big Science exploration
NASA Astrophysics Data System (ADS)
Konstantopoulos, I. S.; Green, A. W.; Foster, C.; Scott, N.; Allen, J. T.; Fogarty, L. M. R.; Lorente, N. P. F.; Sweet, S. M.; Hopkins, A. M.; Bland-Hawthorn, J.; Bryant, J. J.; Croom, S. M.; Goodwin, M.; Lawrence, J. S.; Owers, M. S.; Richards, S. N.
2015-11-01
We describe the data archive and database for the SAMI Galaxy Survey, an ongoing observational program that will cover ≈3400 galaxies with integral-field (spatially-resolved) spectroscopy. Amounting to some three million spectra, this is the largest sample of its kind to date. The data archive and built-in query engine use the versatile Hierarchical Data Format (HDF5), which precludes the need for external metadata tables and hence the setup and maintenance overhead those carry. The code produces simple outputs that can easily be translated to plots and tables, and the combination of these tools makes for a light system that can handle heavy data. This article acts as a contextual companion to the SAMI Survey Database source code repository, samiDB, which is freely available online and written entirely in Python. We also discuss the decisions related to the selection of tools and the creation of data visualisation modules. It is our aim that the work presented in this article (descriptions, rationale, and source code) will be of use to scientists looking to set up a maintenance-light data archive for a Big Science data load.
Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae
2008-01-01
Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originated from the structure-activity relationships science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.
Methods for comparative evaluation of propulsion system designs for supersonic aircraft
NASA Technical Reports Server (NTRS)
Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.
1976-01-01
The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The method was found to compare well with the more detailed analyses.
On the design of a radix-10 online floating-point multiplier
NASA Astrophysics Data System (ADS)
McIlhenny, Robert D.; Ercegovac, Milos D.
2009-08-01
This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.
High-speed digital signal normalization for feature identification
NASA Technical Reports Server (NTRS)
Ortiz, J. A.; Meredith, B. D.
1983-01-01
A design approach for high-speed normalization of digital signals was developed. A reciprocal look-up table technique is employed, where a digital value is mapped to its reciprocal via a high-speed memory. This reciprocal is then multiplied with an input signal to obtain the normalized result. Normalization considerably improves the accuracy of certain feature identification algorithms. By using the concept of pipelining, the multispectral sensor data processing rate is limited only by the speed of the multiplier. The breadboard system was found to operate at an execution rate of five million normalizations per second. This design features high precision, reduced hardware complexity, high flexibility, and expandability, which are very important considerations for spaceborne applications. It also accomplishes the high-speed normalization rate essential for real-time data processing.
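The reciprocal look-up idea replaces a runtime division with one table read and one multiply; a fixed-point sketch for 8-bit data follows (table size, scaling, and rounding are illustrative assumptions):

```python
SCALE_BITS = 16
SCALE = 1 << SCALE_BITS

# Precompute rounded fixed-point reciprocals for every possible 8-bit value,
# exactly once; index 0 is a placeholder since 1/0 is undefined.
recip_lut = [0] + [(SCALE + v // 2) // v for v in range(1, 256)]

def normalize(signal, reference):
    """signal / reference via the table: one read, one multiply, one shift."""
    return (signal * recip_lut[reference] + (1 << (SCALE_BITS - 1))) >> SCALE_BITS
```

In hardware the table read, the multiply, and the shift each occupy a pipeline stage, so a new normalization completes every cycle once the pipeline is full.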
Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha
2012-11-01
Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.
NASA Astrophysics Data System (ADS)
Gaydecki, P.
2009-07-01
A system is described for the design, downloading and execution of arbitrary functions, intended for use with acoustic and low-frequency ultrasonic transducers in condition monitoring and materials testing applications. The instrumentation comprises a software design tool and a powerful real-time digital signal processor unit, operating at 580 million multiplication-accumulations per second (MMACs). The embedded firmware employs both an established look-up table approach and a new function interpolation technique to generate the real-time signals with very high precision and flexibility. Using total harmonic distortion (THD) analysis, the purity of the waveforms has been compared with that of waveforms generated using traditional analogue function generators; this analysis confirmed that the new instrument has a consistently superior signal-to-noise ratio.
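The combination of a stored look-up table with function interpolation can be sketched as follows; the 256-entry sine table and linear interpolation are illustrative assumptions, not the instrument's actual firmware.

```python
import math

# Sketch of LUT-based waveform generation with linear interpolation between
# table entries: a coarse stored table plus interpolation yields samples at
# arbitrary fractional phases. Table length (256) is an assumption.

N = 256
SINE_LUT = [math.sin(2 * math.pi * i / N) for i in range(N)]

def sample(phase: float) -> float:
    """Evaluate the waveform at a fractional phase in [0, 1)."""
    pos = (phase % 1.0) * N
    i = int(pos)
    frac = pos - i
    a, b = SINE_LUT[i], SINE_LUT[(i + 1) % N]
    return a + frac * (b - a)  # linear interpolation between neighbours
```

Interpolation suppresses the staircase distortion of a raw table read, which is one route to the improved THD the abstract reports.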
Formal verification of a set of memory management units
NASA Technical Reports Server (NTRS)
Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.
1992-01-01
This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.
NASA Astrophysics Data System (ADS)
Bourget, Antoine; Troost, Jan
2018-04-01
We revisit the study of the multiplets of the conformal algebra in any dimension. The theory of highest weight representations is reviewed in the context of the Bernstein-Gelfand-Gelfand category of modules. The Kazhdan-Lusztig polynomials code the relation between the Verma modules and the irreducible modules in the category and are the key to the characters of the conformal multiplets (whether finite dimensional, infinite dimensional, unitary or non-unitary). We discuss the representation theory and review in full generality which representations are unitarizable. The mathematical theory that allows for both the general treatment of characters and the full analysis of unitarity is made accessible. A good understanding of the mathematics of conformal multiplets renders the treatment of all highest weight representations in any dimension uniform, and provides an overarching comprehension of case-by-case results. Unitary highest weight representations and their characters are classified and computed in terms of data associated to cosets of the Weyl group of the conformal algebra. An executive summary is provided, as well as look-up tables up to and including rank four.
Image mosaic and topographic map of the moon
Hare, Trent M.; Hayward, Rosalyn K.; Blue, Jennifer S.; Archinal, Brent A.
2015-01-01
Sheet 2: This map is based on data from the Lunar Orbiter Laser Altimeter (LOLA; Smith and others, 2010), an instrument on the National Aeronautics and Space Administration (NASA) Lunar Reconnaissance Orbiter (LRO) spacecraft (Tooley and others, 2010). The image used for the base of this map represents more than 6.5 billion measurements gathered between July 2009 and July 2013, adjusted for consistency in the coordinate system described below, and then converted to lunar radii (Mazarico and others, 2012). For the Mercator portion, these measurements were converted into a digital elevation model (DEM) with a resolution of 0.015625 degrees per pixel, or 64 pixels per degree. In projection, the pixels are 473.8 m in size at the equator. For the polar portion, the LOLA elevation points were used to create a DEM at 240 meters per pixel. A shaded relief map was generated from each DEM with a sun angle of 45° from horizontal and a sun azimuth of 270°, measured clockwise from north, with no vertical exaggeration. The DEM values were then mapped to a global color look-up table, with each color representing a range of 1 km of elevation. For this map sheet, only larger feature names are shown. For references listed above, please open the full PDF.
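The elevation-to-color mapping described above (one color per 1 km band) can be sketched as a simple look-up; the color ramp and its lower bound below are illustrative assumptions, not the map's actual palette.

```python
# Sketch of mapping DEM elevations to a color look-up table in which each
# color covers a 1 km band of elevation. The specific colors and the
# lower bound of the ramp are made-up placeholders.

COLOR_LUT = ["blue", "cyan", "green", "yellow", "orange", "red"]
MIN_ELEV_KM = -3.0  # assumed lower bound of the color ramp

def elevation_color(elev_km: float) -> str:
    """Return the color band for an elevation given in kilometres."""
    band = int(elev_km - MIN_ELEV_KM)  # 1 km of elevation per band
    band = max(0, min(band, len(COLOR_LUT) - 1))  # clamp out-of-range values
    return COLOR_LUT[band]
```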
NASA Astrophysics Data System (ADS)
Li, J.; Wu, Z.; Wei, X.; Zhang, Y.; Feng, F.; Guo, F.
2018-04-01
Cross-calibration has the advantages of high precision, low resource requirements and simple implementation, and it has been widely used in recent years. The four wide-field-of-view (WFV) cameras on board the Gaofen-1 satellite provide high spatial resolution and wide combined coverage (4 × 200 km) but carry no onboard calibration. In this paper, the four-band radiometric cross-calibration coefficients of the WFV1 camera were obtained through radiometric and geometric matching, taking the Landsat 8 OLI (Operational Land Imager) sensor as reference. The Scale Invariant Feature Transform (SIFT) feature detection method and a distance and included-angle weighting method were introduced to correct misregistration of WFV-OLI image pairs. A radiative transfer model was used to eliminate the spectral response difference between the OLI sensor and the WFV1 camera through a spectral match factor (SMF). Because the near-infrared band of the WFV1 camera encompasses water vapor absorption bands, a look-up table (LUT) of SMF as a function of water vapor amount was established to estimate water vapor effects. A surface synchronization experiment was designed to verify the reliability of the cross-calibration coefficients, which appear to perform better than the official coefficients published by the China Centre for Resources Satellite Data and Application (CCRSDA).
NASA Astrophysics Data System (ADS)
Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu
2005-10-01
A numerical model of the vector radiative transfer of the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) is decomposed into a set of independent equations with zenith angle as the only angular coordinate. Using Gaussian quadrature, the VRTE is then transformed into a matrix equation, which is solved using the adding-doubling method. The ocean and atmosphere models are coupled in PCOART according to the reflective and refractive properties of the ocean-atmosphere interface. Comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate-resolution Imaging Spectroradiometer) shows that PCOART is an exact numerical model and that its treatment of multiple scattering and polarization is correct. Validation against standard problems of radiative transfer in water further shows that PCOART can be used to calculate underwater radiative transfer problems. PCOART is therefore a useful tool for exact calculation of the vector radiative transfer of the coupled ocean-atmosphere system, applicable to studying the polarization properties of radiance throughout the ocean-atmosphere system and to remote sensing of the atmosphere and ocean.
NASA Astrophysics Data System (ADS)
Zhang, Minwei; Hu, Chuanmin; Cannizzaro, Jennifer; Kowalewski, Matthew G.; Janz, Scott J.
2018-01-01
Using hyperspectral data collected by the Airborne Compact Atmospheric Mapper (ACAM) and a shipborne radiometer in Chesapeake Bay in July-August 2011, this study investigates diurnal changes of surface remote sensing reflectance (Rrs). Atmospheric correction of ACAM data is performed using the traditional "black pixel" approach through radiative-transfer-based look-up tables (LUTs), with non-zero Rrs in the near-infrared (NIR) accounted for by iteration. The ACAM-derived Rrs was first evaluated through comparison with Rrs derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite measurements, and then validated against in situ Rrs using a time window of ±1 h or ±3 h. Results suggest that the uncertainties in ACAM-derived Rrs are generally comparable to those from MODIS satellite measurements over coastal waters, and may therefore be used to assess whether Rrs diurnal changes observed by ACAM are realistic (i.e., changes > 2 × uncertainties). Diurnal changes observed by repeated ACAM measurements reach up to 66.8%, depending on wavelength and location, and are consistent with those from repeated in situ Rrs measurements. These findings suggest that once airborne data are processed using proper algorithms and validated using in situ data, they are suitable for assessing diurnal changes in moderately turbid estuaries such as Chesapeake Bay. The findings also support future geostationary satellite missions, which are particularly useful for assessing short-term changes.
NASA Astrophysics Data System (ADS)
Cox, M.; Shirono, K.
2017-10-01
A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM's Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.
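As a point of reference for the factor described above, a minimal sketch of the current GUM's frequentist Type A evaluation, where the standard uncertainty of the mean is s/√n; the paper's closed-form Bayesian multiplier itself is not reproduced here.

```python
import math

# Sketch of a Type A evaluation as in the current GUM: the standard
# uncertainty of the mean of n observations is s / sqrt(n), with s the
# sample standard deviation. The Bayesian analysis described above
# multiplies this quantity by a closed-form factor (not reproduced here).

def type_a_uncertainty(observations):
    """Return (mean, standard uncertainty of the mean) for n >= 2 readings."""
    n = len(observations)
    if n < 2:
        raise ValueError("Type A evaluation needs at least two observations")
    mean = sum(observations) / n
    s2 = sum((x - mean) ** 2 for x in observations) / (n - 1)  # sample variance
    return mean, math.sqrt(s2 / n)
```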
The influence of sea fog inhomogeneity on its microphysical characteristics retrieval
NASA Astrophysics Data System (ADS)
Hao, Zengzhou; Pan, Delu; Gong, Fang; He, Xianqiang
2008-10-01
A study of the effect of sea fog inhomogeneity on retrieval of its microphysical parameters is presented. Assuming that the average liquid water content varies linearly with height and that the power spectrum spectral index is set to 2.0, we generate a 3D sea fog field, constraining the total liquid water content to be greater than 0.04 g/m³, using an iterative method for generating a scaling log-normal random field with a prescribed energy spectrum together with a fragmented-cloud algorithm. Based on this fog field, the radiances at wavelengths of 0.67 and 1.64 μm are simulated with the 3D radiative transfer model SHDOM, and the fog optical thickness and effective particle radius are then simultaneously retrieved using the generic look-up-table AVHRR cloud algorithm. By comparing the retrieved optical thickness and effective particle radius with their input values, the influence of sea fog inhomogeneity on the retrieval is discussed. The results exhibit a systematic bias when sea fog physical properties are inferred from satellite measurements under the assumption of a plane-parallel homogeneous atmosphere, and the bias depends on the solar zenith angle: at solar zenith angles of 30° and 60°, the optical thickness is overestimated while the effective particle radius is underestimated. These results show that retrieving the true characteristics of sea fog requires a new algorithm based on 3D radiative transfer.
Easy Volcanic Aerosol (EVA v1.0): an idealized forcing generator for climate simulations
NASA Astrophysics Data System (ADS)
Toohey, Matthew; Stevens, Bjorn; Schmidt, Hauke; Timmreck, Claudia
2016-11-01
Stratospheric sulfate aerosols from volcanic eruptions have a significant impact on the Earth's climate. To include the effects of volcanic eruptions in climate model simulations, the Easy Volcanic Aerosol (EVA) forcing generator provides stratospheric aerosol optical properties as a function of time, latitude, height, and wavelength for a given input list of volcanic eruption attributes. EVA is based on a parameterized three-box model of stratospheric transport and simple scaling relationships used to derive mid-visible (550 nm) aerosol optical depth and aerosol effective radius from stratospheric sulfate mass. Precalculated look-up tables computed from Mie theory are used to produce wavelength-dependent aerosol extinction, single scattering albedo, and scattering asymmetry factor values. The structural form of EVA and the tuning of its parameters are chosen to produce the best agreement with the satellite-based reconstruction of stratospheric aerosol properties following the 1991 Pinatubo eruption, and with prior millennial-timescale forcing reconstructions, including the 1815 eruption of Tambora. EVA can be used to produce volcanic forcing for climate models that is based on recent observations and physical understanding but internally self-consistent over any timescale of choice. In addition, EVA is constructed so as to allow for easy modification of different aspects of aerosol properties, in order to be used in model experiments to help advance understanding of which aspects of the volcanic aerosol are important for the climate system.
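The role of a precomputed Mie look-up table can be sketched as linear interpolation over a tabulated grid; the radius grid and extinction values below are made-up placeholders, not EVA data.

```python
# Sketch of reading a precomputed Mie look-up table: an optical property is
# tabulated against aerosol effective radius, and linearly interpolated at
# query time instead of re-running a Mie calculation. Values are made up.

RADII_UM = [0.1, 0.2, 0.4, 0.8]    # effective radius grid (microns), assumed
EXTINCTION = [1.2, 2.9, 3.4, 2.1]  # tabulated extinction values, assumed

def lookup_extinction(radius_um: float) -> float:
    """Linearly interpolate the tabulated extinction at a given radius."""
    if radius_um <= RADII_UM[0]:
        return EXTINCTION[0]
    if radius_um >= RADII_UM[-1]:
        return EXTINCTION[-1]
    for i in range(len(RADII_UM) - 1):
        if RADII_UM[i] <= radius_um <= RADII_UM[i + 1]:
            t = (radius_um - RADII_UM[i]) / (RADII_UM[i + 1] - RADII_UM[i])
            return EXTINCTION[i] + t * (EXTINCTION[i + 1] - EXTINCTION[i])
```

In practice such tables are multi-dimensional (radius and wavelength), but the interpolation principle is the same along each axis.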
A new fiducial marker for Image-guided radiotherapy of prostate cancer: clinical experience.
Carl, Jesper; Nielsen, Jane; Holmberg, Mats; Højkjaer Larsen, Erik; Fabrin, Knud; Fisker, Rune V
2008-01-01
A new fiducial marker for image-guided radiotherapy (IGRT), based on a removable prostate stent made of nickel-titanium (NiTi), was developed during two previous clinical feasibility studies and is currently being evaluated for IGRT treatment in a third clinical study. The new marker is used to co-register MR and planning CT scans with high accuracy in the region around the prostate, and the co-registered MR-CT volumes are used for delineation of the GTV before planning. In each treatment session the IGRT system is used to position the patient before treatment: it uses a stereo pair of kV images matched, via mutual gray-scale information, to corresponding digitally reconstructed radiographs (DRRs) from the planning CT scan. The pair of DRRs for positioning is created in the IGRT system with a threshold in the look-up table (LUT). The resulting match provides the couch shifts needed to position the stent within 1-2 mm of the planned position. To date, 39 patients have received the new marker; in one of the 39, the stent migrated to the bladder. Deviations of more than 5 mm between CTV outlined on CT and MR are seen in several cases, in the anterior-posterior (AP), left-right (LR) and cranial-caudal (CC) directions, and intra-fraction translational movements of up to ±3 mm are seen as well. As the stent is also clearly visible on images taken with high-voltage x-rays using electronic portal imaging devices (EPIDs), the positioning has been verified independently of the IGRT system. The preliminary result of an ongoing clinical study of a NiTi prostate stent, potentially a new fiducial marker for image-guided radiotherapy, looks promising; the risk of migration appears to be much lower than with previous designs.
2014-12-01
[Fragmented table extract (DRDC-RDDC-2014-R136): checklist items such as "Establish the limits of the areas of interest", "Determine intelligence and information gaps", "Describe the impact of the battlespace on...", and "Identify critical gaps"; Table 2: Mapping tools and functionalities.] Looking at Tables 1 and 2, it would seem that taking on requirements from IPB/IPOE Steps 3 and 4, although possibly much more challenging, is likely to yield more useful results.
The Endangered Species Act and a Deeper Look at Extinction.
ERIC Educational Resources Information Center
Borowski, John F.
1992-01-01
Discusses the importance of saving species and dispels myths surrounding the Endangered Species Act as background to three student activities: a round-table debate, writing to congresspeople, and a research project suggestion. Lists reference materials for endangered species. (MCO)
2015-01-01
The 2012 CBECS collected building characteristics data from more than 6,700 U.S. commercial buildings. This report highlights findings from the survey, with details presented in the Building Characteristics tables.
Two Experimental Approaches of Looking at Buoyancy
ERIC Educational Resources Information Center
Moreira, J. Agostinho; Almeida, A.; Carvalho, P. Simeao
2013-01-01
In our teaching practice, we find that a large number of first-year university physics and chemistry students exhibit some difficulties with applying Newton's third law to fluids because they think fluids do not react to forces. (Contains 1 table and 3 figures.)
2009-03-20
Expedition 19 Commander Gennady I. Padalka, left, and Flight Engineer Michael R. Barratt listen to their mp3 players as a medical doctor looks on during the tilt table training at the Cosmonaut Hotel, Saturday, March 21, 2009 in Baikonur, Kazakhstan.(Photo Credit: NASA/Bill Ingalls)
14. DETAIL OF INCLINED CONVEYOR RAIL AT HEAD OF SKINNING ...
14. DETAIL OF INCLINED CONVEYOR RAIL AT HEAD OF SKINNING TABLE; HEADS WERE REMOVED IN OPEN AREA AT LOWER RIGHT; LOOKING TOWARD NORTHWEST - Rath Packing Company, Beef Killing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA
Spatial Data Mining for Estimating Cover Management Factor of Universal Soil Loss Equation
NASA Astrophysics Data System (ADS)
Tsai, F.; Lin, T. C.; Chiang, S. H.; Chen, W. W.
2016-12-01
Universal Soil Loss Equation (USLE) is a widely used mathematical model that describes long-term soil erosion processes. Among the six different soil erosion risk factors of USLE, the cover-management factor (C-factor) is related to land-cover/land-use. The value of C-factor ranges from 0.001 to 1, so it alone might cause a thousandfold difference in a soil erosion analysis using USLE. The traditional methods for the estimation of USLE C-factor include in situ experiments, soil physical parameter models, USLE look-up tables with land use maps, and regression models between vegetation indices and C-factors. However, these methods are either difficult or too expensive to implement in large areas. In addition, the values of C-factor obtained using these methods can not be updated frequently, either. To address this issue, this research developed a spatial data mining approach to estimate the values of C-factor with assorted spatial datasets for a multi-temporal (2004 to 2008) annual soil loss analysis of a reservoir watershed in northern Taiwan. The idea is to establish the relationship between the USLE C-factor and spatial data consisting of vegetation indices and texture features extracted from satellite images, soil and geology attributes, digital elevation model, road and river distribution etc. A decision tree classifier was used to rank influential conditional attributes in the preliminary data mining. Then, factor simplification and separation were considered to optimize the model and the random forest classifier was used to analyze 9 simplified factor groups. Experimental results indicate that the overall accuracy of the data mining model is about 79% with a kappa value of 0.76. The estimated soil erosion amounts in 2004-2008 according to the data mining results are about 50.39 - 74.57 ton/ha-year after applying the sediment delivery ratio and correction coefficient. 
Compared with estimates calculated using C-factors from look-up tables, the soil erosion values estimated with C-factors generated from the spatial data mining results are in closer agreement with the values published by the watershed administration authority.
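The two accuracy measures quoted above (overall accuracy and kappa) can be reproduced from a confusion matrix of predicted versus observed C-factor classes; a minimal sketch, in which the matrix values are made up for illustration and are not the study's data:

```python
# Sketch of overall accuracy and Cohen's kappa computed from a confusion
# matrix (rows = observed class, columns = predicted class). The matrix
# used in the test below is an illustrative placeholder.

def overall_accuracy(conf):
    """Fraction of samples on the diagonal (correctly classified)."""
    total = sum(sum(row) for row in conf)
    correct = sum(conf[i][i] for i in range(len(conf)))
    return correct / total

def cohens_kappa(conf):
    """Agreement corrected for chance, from row/column marginals."""
    n = len(conf)
    total = sum(sum(row) for row in conf)
    po = overall_accuracy(conf)  # observed agreement
    pe = sum(                     # expected agreement by chance
        sum(conf[i]) * sum(conf[r][i] for r in range(n)) for i in range(n)
    ) / total ** 2
    return (po - pe) / (1 - pe)
```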
Viscosity of NaCl and other solutions up to 350 °C and 50 MPa pressures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, S.L.; Ozbek, H.; Igbene, A.
1980-11-01
Experimental values for the viscosity of sodium chloride solutions are critically reviewed for application to geothermal energy. Data published recently by Kestin, Los, Pepinov, and Semenyuk, as well as earlier data, are included. A theoretically based equation for calculating relative viscosity was developed and used to generate tables of smoothed values over the ranges 20 °C to 350 °C, 0 to 5 m, and pressures up to 50 MPa. The equation reproduces selected data to an average of better than 2 percent over the entire range of temperatures and pressures. Selected tables of data are included for KCl up to 150 °C, CaCl₂ solutions up to 100 °C, and for mixtures of NaCl with KCl and CaCl₂. Recommendations are given for additional data needs.
Mohammed, Riazuddin; Johnson, Karl; Bache, Ed
2010-07-01
Multiple radiographic images may be necessary during the standard procedure of in-situ pinning of slipped capital femoral epiphysis (SCFE) hips. This procedure can be performed with the patient positioned on a fracture table or a radiolucent table. Our study aims to identify any differences in the amount and duration of radiation exposure for in-situ pinning of SCFE performed on a traction table versus a radiolucent table. Sixteen hips in 13 patients pinned on a radiolucent table were compared, in terms of cumulative radiation exposure, with 35 hips in 33 patients pinned on a fracture table during the same time period. Cumulative radiation dose was measured as dose-area product in Gy·cm², and the duration of exposure was measured in minutes. Appropriate statistical tests were used to assess the significance of any differences. The mean cumulative radiation dose for SCFE pinned on the radiolucent table was statistically significantly lower than for those pinned on the fracture table (P<0.05). The mean duration of radiation exposure on either table was not significantly different. Lateral projections may increase radiation doses compared with anteroposterior projections because of the higher exposure parameters needed for side imaging. Our results showing decreased exposure doses on the radiolucent table are probably due to the ease of frog-leg lateral positioning and thereby the ease of lateral imaging. In-situ pinning of SCFE hips on a radiolucent table has the additional advantage that the radiation dose during the procedure is significantly less than for the same procedure performed on a fracture table.
Positional accuracy and geographic bias of four methods of geocoding in epidemiologic research.
Schootman, Mario; Sterling, David A; Struthers, James; Yan, Yan; Laboube, Ted; Emo, Brett; Higgs, Gary
2007-06-01
We examined the geographic bias of four methods of geocoding addresses: ArcGIS, a commercial firm, SAS/GIS, and aerial photography. We compared the "point-in-polygon" methods (ArcGIS, the commercial firm, and aerial photography) and the "look-up table" method (SAS/GIS) for allocating addresses to census geography, particularly as it relates to census-based poverty rates. We randomly selected 299 addresses of children treated for asthma at an urban emergency department (1999-2001). The coordinates of the building's address-side door were obtained by constant offset for ArcGIS and the commercial firm, and by true ground location for aerial photography. Coordinates were available for 261 addresses across all methods. For 24% to 30% of geocoded road/door coordinates, the positional error was 51 meters or greater; this was similar across geocoding methods. The mean bearing was -26.8 degrees for the vector of coordinates based on aerial photography and ArcGIS, and 8.5 degrees for the vector based on aerial photography and the commercial firm (p < 0.0001). ArcGIS and the commercial firm performed very well relative to SAS/GIS in terms of allocation to census geography. For 20% of addresses, the door location based on aerial photography was assigned to a different block group than with SAS/GIS. The block group poverty rate varied by at least two standard deviations for 6% to 7% of addresses. We found important differences in distance and bearing between geocoding methods relative to aerial photography. Allocation of locations based on aerial photography to census-based geographic areas could lead to substantial errors.
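The "point-in-polygon" allocation contrasted above with the look-up-table method can be sketched with a standard ray-casting test; the square polygon below is a hypothetical stand-in for a census block-group boundary, not data from the study.

```python
# Sketch of point-in-polygon allocation by ray casting: a point is inside a
# polygon if a horizontal ray from the point crosses the boundary an odd
# number of times. The polygon here is an illustrative placeholder.

def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the polygon (list of vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:       # crossing lies to the right of the point
                inside = not inside
    return inside
```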
Fast associative memory + slow neural circuitry = the computational model of the brain.
NASA Astrophysics Data System (ADS)
Berkovich, Simon; Berkovich, Efraim; Lapir, Gennady
1997-08-01
We propose a computational model of the brain based on a fast associative memory and relatively slow neural processors. In this model, processing time is expensive but memory access is not, so most algorithmic tasks would be accomplished by using large look-up tables rather than by calculating. The essential feature of an associative memory in this context (characteristic of a holographic-type memory) is that it works without an explicit mechanism for resolution of multiple responses. As a result, the slow neuronal processing elements, overwhelmed by the flow of information, operate as a set of templates for ranking the retrieved information. This structure addresses the primary controversy in brain architecture: distributed organization of memory vs. localization of processing centers. The computational model offers an intriguing explanation of many paradoxical features of the brain architecture, such as integration of sensors (through a DMA mechanism), subliminal perception, universality of software, interrupts, fault tolerance, certain bizarre possibilities for rapid arithmetic, etc. In conventional computer science this type of computational model has not attracted attention, as it goes against the technological grain by using a working memory faster than its processing elements.
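The "look up rather than calculate" principle of the model can be sketched as a precomputed table consulted before any computation; the squaring function and table size below are illustrative assumptions, not part of the proposed model.

```python
# Sketch of the look-up-instead-of-calculate principle: precompute an
# operation's results once into a fast associative store (here a dict),
# then answer queries with a single memory access. Squaring is just an
# illustrative operation; the table bound (1000) is an assumption.

LIMIT = 1000
SQUARE_LUT = {n: n * n for n in range(LIMIT)}  # the "fast associative memory"

def square(n: int) -> int:
    """Look the answer up if tabulated; fall back to slow computation."""
    return SQUARE_LUT.get(n, n * n)
```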
Age Differences in Memory Retrieval Shift: Governed by Feeling-of-Knowing?
Hertzog, Christopher; Touron, Dayna R.
2010-01-01
The noun-pair lookup (NP) task was used to evaluate strategic shift from visual scanning to retrieval. We investigated whether age differences in feeling-of-knowing (FOK) account for older adults' delayed retrieval shift. Participants were randomly assigned to one of three conditions: (1) standard NP learning, (2) fast binary FOK judgments, or (3) Choice, where participants had to choose in advance whether to see the look-up table or respond from memory. We found small age differences in FOK magnitudes, but major age differences in memory retrieval choices that mirrored retrieval use in the standard NP task. Older adults showed lower resolution in their confidence judgments (CJs) for recognition memory tests on the NP items, and this difference appeared to influence rates of retrieval shift, given that retrieval use was correlated with CJ magnitudes in both age groups. Older adults had particular difficulty with accuracy and confidence for rearranged pairs, relative to intact pairs. Older adults' slowed retrieval shift appears to be due to (a) impaired associative learning early in practice, not just a lower FOK; but also (b) retrieval reluctance later in practice after the degree of associative learning would afford memory-based responding. PMID:21401263
Grunert, Klaus G; Wills, Josephine M; Fernández-Celemín, Laura
2010-10-01
Based on in-store observations in three major UK retailers, in-store interviews (2019) and questionnaires filled out at home and returned (921), use of nutrition information on food labels and its understanding were investigated. Respondents' nutrition knowledge was also measured, using a comprehensive instrument covering knowledge of expert recommendations, nutrient content in different food products, and calorie content in different food products. Across six product categories, 27% of shoppers were found to have looked at nutrition information on the label, with guideline daily amount (GDA) labels and the nutrition grid/table as the main sources consulted. Respondents' understanding of major front-of-pack nutrition labels was measured using a variety of tasks dealing with conceptual understanding, substantial understanding and health inferences. Understanding was high, with up to 87.5% of respondents being able to identify the healthiest product in a set of three. Differences between level of understanding and level of usage are explained by different causal mechanisms. Regression analysis showed that usage is mainly related to interest in healthy eating, whereas understanding of nutrition information on food labels is mainly related to nutrition knowledge. Both are in turn affected by demographic variables, but in different ways.
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data-parallel form. Some of the SPS algorithms can be directly translated to data-parallel form, but several of the vectorizable algorithms have no direct data-parallel equivalent, requiring the development of new, strictly data-parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table look-ups. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure of the performance of the Connection Machine for direct particle simulation is provided. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data-parallel programming. An important outcome of this work has been new data-parallel algorithms specifically of use for direct particle simulation but which also expand the data-parallel diction.
NASA Technical Reports Server (NTRS)
Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.
2013-01-01
This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.
NASA Technical Reports Server (NTRS)
Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.
2014-01-01
This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.
2018-02-01
The Scaled SLW model for prediction of radiation transfer in non-uniform gaseous media is presented. The paper considers a new approach to the construction of a Scaled SLW model. In order to maintain the SLW method as a simple and computationally efficient engineering method, special attention is paid to explicit, non-iterative methods of calculating the scaling coefficient. The moments of the gas absorption cross-section weighted by the Planck blackbody emissive power (in particular, the first moment, the Planck mean, and the first inverse moment, the Rosseland mean) are used as the total characteristics of the absorption spectrum to be preserved by scaling. Generalized SLW modelling using these moments, including both discrete gray gases and the continuous formulation, is presented. Application of a line-by-line look-up table for the corresponding ALBDF and inverse ALBDF distribution functions (such that no solution of implicit equations is needed) ensures that the method is flexible and efficient. Predictions for radiative transfer using the Scaled SLW model are compared to line-by-line benchmark solutions, and to predictions using the Rank Correlated SLW model and the SLW Reference Approach. Conclusions and recommendations regarding application of the Scaled SLW model are made.
Ultra-Low Power Dynamic Knob in Adaptive Compressed Sensing Towards Biosignal Dynamics.
Wang, Aosen; Lin, Feng; Jin, Zhanpeng; Xu, Wenyao
2016-06-01
Compressed sensing (CS) is an emerging sampling paradigm in data acquisition. Its integrated analog-to-information structure can perform simultaneous data sensing and compression with low-complexity hardware. To date, most of the existing CS implementations have a fixed architectural setup, which lacks flexibility and adaptivity for efficient dynamic data sensing. In this paper, we propose a dynamic knob (DK) design to effectively reconfigure the CS architecture by recognizing the biosignals. Specifically, the dynamic knob design is a template-based structure that comprises a supervised learning module and a look-up table module. We model the DK performance in a closed analytic form and optimize the design via a dynamic programming formulation. We present the design on a 130 nm process, with a 0.058 mm² footprint and an energy consumption of 187.88 nJ/event. Furthermore, we benchmark the design performance using a publicly available dataset. Given the energy constraint in wireless sensing, the adaptive CS architecture can consistently improve the signal reconstruction quality by more than 70% compared with traditional CS. The experimental results indicate that the ultra-low power dynamic knob can provide effective adaptivity and improve signal quality in compressed sensing towards biosignal dynamics.
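The template-based structure pairs a supervised-learning module with a look-up table of CS configurations. A toy sketch of that control path, with an invented threshold classifier and invented table entries (not the paper's actual design):

```python
def dynamic_knob(classify, config_lut, window):
    """Pick a compressed-sensing configuration for an incoming window.

    classify:   supervised-learning module returning a signal class label
    config_lut: look-up table mapping class label -> CS configuration
                (e.g. compression ratio); entries here are illustrative
    """
    label = classify(window)
    return config_lut[label]

# Illustrative use: a trivial range-based 'classifier' and a two-entry table.
lut = {"quiet": {"ratio": 8}, "active": {"ratio": 2}}
classify = lambda w: "active" if max(w) - min(w) > 1.0 else "quiet"
```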
Improvement in absolute calibration accuracy of Landsat-5 TM with Landsat-7 ETM+ data
Chander, G.; Markham, B.L.; Micijevic, E.; Teillet, P.M.; Helder, D.L.; ,
2005-01-01
The ability to detect and quantify changes in the Earth's environment depends on satellite sensors that can provide calibrated, consistent measurements of Earth's surface features through time. A critical step in this process is to put image data from subsequent generations of sensors onto a common radiometric scale. To evaluate the Landsat-5 (L5) Thematic Mapper's (TM) utility in this role, image pairs from the L5 TM and Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors were compared. This approach involves comparison of surface observations based on image statistics from large common areas observed eight days apart by the two sensors. The results indicate a significant improvement in the consistency of L5 TM data with respect to L7 ETM+ data, achieved using a revised Look-Up-Table (LUT) procedure as opposed to the historical Internal Calibrator (IC) procedure previously used in the L5 TM product generation system. Reflectance estimates obtained from the L5 TM agree with those from the L7 ETM+ to within four percent in the Visible and Near Infrared (VNIR) bands and to within six percent in the Short Wave Infrared (SWIR) bands.
Control law system for X-Wing aircraft
NASA Technical Reports Server (NTRS)
Lawrence, Thomas H. (Inventor); Gold, Phillip J. (Inventor)
1990-01-01
Control law system for the collective axis, as well as pitch and roll axes, of an X-Wing aircraft and for the pneumatic valving controlling circulation control blowing for the rotor. As to the collective axis, the system gives the pilot single-lever direct lift control and ensures that maximum cyclic blowing control power is available in transition. Angle-of-attack de-coupling is provided in rotary wing flight, and mechanical collective is used to augment pneumatic roll control when appropriate. Automatic gain variations with airspeed and rotor speed are provided, so a unitary set of control laws works in all three X-Wing flight modes. As to pitch and roll axes, the system produces essentially the same aircraft response regardless of flight mode or condition. Undesirable cross-couplings are compensated for in a manner unnoticeable to the pilot, without requiring pilot action, as flight mode or condition is changed. A hub moment feedback scheme is implemented, utilizing a P+I controller, significantly improving bandwidth. Limits protect aircraft structure from inadvertent damage. As to pneumatic valving, the system automatically provides the pressure required at each valve azimuth location, as dictated by collective, cyclic, and higher harmonic blowing commands. Variations in the required control phase angle are automatically introduced, and variations in plenum pressure are compensated for. The required switching for leading, trailing, and dual edge blowing is automated using a simple table look-up procedure. Non-linearities due to valve characteristics of circulation control lift are linearized by map look-ups.
NORTH ELEVATION OF GOLD HILL MILL, LOOKING SOUTH. AT LEFT ...
NORTH ELEVATION OF GOLD HILL MILL, LOOKING SOUTH. AT LEFT EDGE IS THE SINGLE CYLINDER HOT SHOT ENGINE THAT PROVIDED POWER FOR THE MILL. JUST IN FRONT OF IT IS AN ARRASTRA. AT CENTER IS THE BALL MILL AND SECONDARY ORE BIN. JUST TO THE RIGHT OF THE BALL MILL IS A RAKE CLASSIFIER, AND TO THE RIGHT ARE THE CONCENTRATION TABLES. WARM SPRINGS CAMP IS IN THE DISTANCE. SEE CA-292-4 FOR IDENTICAL B&W NEGATIVE. - Gold Hill Mill, Warm Spring Canyon Road, Death Valley Junction, Inyo County, CA
NORTH ELEVATION OF GOLD HILL MILL, LOOKING SOUTH. AT LEFT ...
NORTH ELEVATION OF GOLD HILL MILL, LOOKING SOUTH. AT LEFT EDGE IS THE SINGLE CYLINDER HOT SHOT ENGINE THAT PROVIDED POWER FOR THE MILL. JUST IN FRONT OF IT IS AN ARRASTRA. AT CENTER IS THE BALL MILL AND SECONDARY ORE BIN. JUST TO THE RIGHT OF THE BALL MILL IS A RAKE CLASSIFIER, AND TO THE RIGHT ARE THE CONCENTRATION TABLES. WARM SPRINGS CAMP IS IN THE DISTANCE. SEE CA-292-17 (CT) FOR IDENTICAL COLOR TRANSPARENCY. - Gold Hill Mill, Warm Spring Canyon Road, Death Valley Junction, Inyo County, CA
ERIC Educational Resources Information Center
Karazsia, Bryan T.; Wong, Kendal
2016-01-01
Quantitative and statistical literacy are core domains in the undergraduate psychology curriculum. An important component of such literacy includes interpretation of visual aids, such as tables containing results from statistical analyses. This article presents results of a quasi-experimental study with longitudinal follow-up that tested the…
Women in the Civil Engineer Corps.
1986-01-01
Excerpts from the survey instrument: Question 17 asks how often respondents feel they must work harder in order to "measure up" (often / occasionally / rarely / never); Question 18 asks whether they have experienced sexual harassment on the job (often / occasionally / rarely / never). Contents include sections on Assignments, Sexual Harassment/Discrimination, Related Studies and Literature, and Research Methodology, as well as Table 13 (Question 16, Work Harder), Table 14 (Question 17, Measure Up), and Table 15 (Question 18, Sexual Harassment).
K-12 Accreditation's Next Move: A Storied Guarantee Looks to Accountability 2.0
ERIC Educational Resources Information Center
Oldham, Jennifer
2018-01-01
The Current Generation of American public-school students has grown up in the era of centralized, standardized data. Anyone curious about how local schools were doing could look at pass rates on annual exams in math and reading, the foundation of federally mandated, test-based accountability. New rules are poised to change this system. The federal…
Population attribute compression
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1995-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
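A simplified sketch of the successive subdivision step (a median-cut style split on the widest axis; the patent's exact splitting rule may differ): once every volume holds at most the preselected maximum number of LUT colors, a pixel's nearest neighbor need only be searched among the few colors in its volume.

```python
import numpy as np

def subdivide(colors, max_per_box):
    """Recursively split color space until each volume holds at most
    max_per_box LUT colors (simplified, hypothetical variant of the
    subdivision described in the record)."""
    boxes = [np.asarray(colors, dtype=float)]
    out = []
    while boxes:
        box = boxes.pop()
        if len(box) <= max_per_box:
            out.append(box)
            continue
        # Split along the axis with the widest spread of color values.
        axis = int(np.argmax(box.max(axis=0) - box.min(axis=0)))
        box = box[box[:, axis].argsort()]
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]
    return out
```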
Digital to analog conversion and visual evaluation of Thematic Mapper data
McCord, James R.; Binnie, Douglas R.; Seevers, Paul M.
1985-01-01
As a part of the National Aeronautics and Space Administration Landsat D Image Data Quality Analysis Program, the Earth Resources Observation Systems Data Center (EDC) developed procedures to optimize the visual information content of Thematic Mapper data and evaluate the resulting photographic products by visual interpretation. A digital-to-analog transfer function was developed which would properly place the digital values on the most useable portion of a film response curve. Individual black-and-white transparencies generated using the resulting look-up tables were utilized in the production of color-composite images with varying band combinations. Four experienced photointerpreters ranked 2-cm-diameter (0.75 inch) chips of selected image features of each band combination for ease of interpretability. A nonparametric rank-order test determined the significance of interpreter preference for the band combinations.
Digital to Analog Conversion and Visual Evaluation of Thematic Mapper Data
McCord, James R.; Binnie, Douglas R.; Seevers, Paul M.
1985-01-01
As a part of the National Aeronautics and Space Administration Landsat D Image Data Quality Analysis Program, the Earth Resources Observation Systems Data Center (EDC) developed procedures to optimize the visual information content of Thematic Mapper data and evaluate the resulting photographic products by visual interpretation. A digital-to-analog transfer function was developed which would properly place the digital values on the most useable portion of a film response curve. Individual black-and-white transparencies generated using the resulting look-up tables were utilized in the production of color-composite images with varying band combinations. Four experienced photointerpreters ranked 2-cm-diameter (0.75 inch) chips of selected image features of each band combination for ease of interpretability. A nonparametric rank-order test determined the significance of interpreter preference for the band combinations.
NASA Technical Reports Server (NTRS)
Dolci, Wendy
2003-01-01
Let us look at this thing with two agendas in mind. Agenda number one was to give the class a problem which was challenging and stimulating. Agenda number two was to see if a bright group of people might come up with some notions about how to bridge these worlds of technology development and flight system development. Here is an opportunity to get some bright folks who bring a lot of capability to the table. Explain the problem to them and see if they can offer some fresh insights and ideas. It's a very powerful process and one that has already been put to use on MSL in a number of different areas: getting people who haven't been in the middle of the forest, but are still very strong technically, to step in, think about the problem for a while, and offer their observations.
Combustor air flow control method for fuel cell apparatus
Clingerman, Bruce J.; Mowery, Kenneth D.; Ripley, Eugene V.
2001-01-01
A method for controlling the heat output of a combustor in a fuel cell apparatus to a fuel processor, where the combustor has dual air inlet streams including atmospheric air and fuel cell cathode effluent containing oxygen-depleted air. In all operating modes, an enthalpy balance is provided by regulating the quantity of the air flow stream to the combustor to support fuel cell processor heat requirements. A control provides a quick feed-forward change in air valve orifice cross-section in response to a calculated predetermined air flow, the molar constituents of the air stream to the combustor, the pressure drop across the air valve, and a look-up table of orifice cross-sectional area versus valve steps. A feedback loop fine-tunes any error between the measured air flow to the combustor and the predetermined air flow.
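The two-stage control (feed-forward look-up followed by a feedback trim) can be sketched as follows; the table values and gain are illustrative, not from the patent:

```python
import numpy as np

def valve_steps(target_flow, flow_points, step_points,
                measured_flow=None, gain=0.5):
    """Feed-forward valve command from a flow -> steps look-up table,
    with an optional feedback trim on the measured-flow error.
    (Table values and gain are illustrative.)"""
    steps = np.interp(target_flow, flow_points, step_points)  # feed-forward
    if measured_flow is not None:
        steps += gain * (target_flow - measured_flow)         # feedback trim
    return float(steps)
```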
Iterative retrieval of surface emissivity and temperature for a hyperspectral sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borel, C.C.
1997-11-01
The central problem of temperature-emissivity separation is that we obtain N spectral measurements of radiance and need to find N + 1 unknowns (N emissivities and one temperature). To solve this problem in the presence of the atmosphere we need to find even more unknowns: N spectral transmissions τ_atmo(λ), N up-welling path radiances L_path↑(λ), and N down-welling path radiances L_path↓(λ). Fortunately, radiative transfer codes such as MODTRAN 3 and FASCODE are available to estimate τ_atmo(λ), L_path↑(λ), and L_path↓(λ) to within a few percent. With the growing use of hyperspectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. We believe that this will enable us to move beyond the present temperature-emissivity separation (TES) algorithms by using methods which take advantage of the many channels available in hyperspectral imagers. The first idea is to take advantage of the simple fact that a typical surface emissivity spectrum is rather smooth compared to spectral features introduced by the atmosphere. Thus iterative solution techniques can be devised which retrieve emissivity spectra ε based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. By varying the surface temperature over a small range, a series of emissivity spectra are calculated, and the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
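The smoothness-based iteration can be sketched in a toy form that omits the atmospheric terms: for each trial surface temperature, invert the measured radiance for emissivity and keep the spectrum with the smallest second differences. This is an illustration of the principle, not the actual retrieval code.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wl_um, T):
    """Blackbody spectral radiance (W m^-2 sr^-1 um^-1)."""
    wl = wl_um * 1e-6
    return (2 * H * C**2 / wl**5) / (np.exp(H * C / (wl * K * T)) - 1) * 1e-6

def smoothest_emissivity(radiance, wl_um, t_range):
    """Toy smoothness-based TES: for each trial surface temperature,
    invert the measured radiance for emissivity and keep the spectrum
    with the smallest summed squared second differences (atmospheric
    transmission and path radiances omitted for brevity)."""
    best = None
    for T in t_range:
        eps = radiance / planck(wl_um, T)
        roughness = np.sum(np.diff(eps, n=2) ** 2)
        if best is None or roughness < best[0]:
            best = (roughness, T, eps)
    return best[1], best[2]
```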
1982-10-01
All buoy and many shore-based systems employ back-up systems. In the case of buoys, when a lamp fails to emit, a new lamp is... The list of tables includes Table 1, "Incandescent Source Table of Information," and Table 2, "Background Information on Subjects." The principal component of a functional aids-to-navigation system is the lighted aid, which exists in many forms, from the wind and wave
Topological study of the periodic system.
Restrepo, Guillermo; Mesa, Héber; Llanos, Eugenio J; Villaveces, José L
2004-01-01
We carried out a topological study of the Space of Chemical Elements, SCE, based on a clustering analysis of 72 elements, each one defined by a vector of 31 properties. We looked for neighborhoods, boundaries, and other topological properties of the SCE. Among the results one sees the well-known patterns of the Periodic Table and relationships such as the Singularity Principle and the Diagonal Relationship, but there appears also a robustness property of some of the better-known families of elements. Alkaline metals and Noble Gases are sets whose neighborhoods have no other elements besides themselves, whereas the topological boundary of the set of metals is formed by semimetallic elements.
Doğanay Erdoğan, Beyza; Elhan, Atilla Halİl; Kaskatı, Osman Tolga; Öztuna, Derya; Küçükdeveci, Ayşe Adile; Kutlay, Şehim; Tennant, Alan
2017-10-01
This study aimed to explore the potential of an inclusive and fully integrated measurement system for the Activities component of the International Classification of Functioning, Disability and Health (ICF), incorporating four classical scales, including the Health Assessment Questionnaire (HAQ), and Computerized Adaptive Testing (CAT). Three hundred patients with rheumatoid arthritis (RA) answered relevant questions from four questionnaires. Rasch analysis was performed to create an item bank from this item pool. A further 100 RA patients were recruited for a CAT application. Both real and simulated CATs were applied, and the agreement between the CAT-based scores and 'paper-pencil' scores was evaluated with the intraclass correlation coefficient (ICC). Anchoring strategies were used to obtain a direct translation from the item bank common metric to the HAQ score. The mean age of the 300 patients was 52.3 ± 11.7 years; disease duration was 11.3 ± 8.0 years; 74.7% were women. After testing the assumptions of Rasch analysis, a 28-item Activities item bank was created. The agreement between CAT-based scores and paper-pencil scores was high (ICC = 0.993). Using the HAQ items in the item bank as anchoring items, another Rasch analysis was performed with HAQ-8 scores as separate items together with the anchoring items. Finally, a conversion table between the item bank common metric and the HAQ scores was created. A fully integrated and inclusive health assessment system, illustrating the Activities component of the ICF, was built to assess RA patients. Raw score to metric conversions and vice versa were available, giving access to the metric by a simple look-up table. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
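A raw-score-to-metric conversion of this kind amounts to a monotonic look-up table that can be interpolated in either direction. A sketch with invented table values (not the study's actual conversion):

```python
import numpy as np

# Hypothetical anchored conversion table: Rasch metric (logits) vs HAQ score.
METRIC = np.array([-3.0, -1.5, 0.0, 1.5, 3.0])
HAQ = np.array([0.0, 0.5, 1.0, 2.0, 3.0])

def metric_to_haq(logit):
    """Convert an item-bank metric score to a HAQ score by table look-up
    with linear interpolation (table values are illustrative only)."""
    return float(np.interp(logit, METRIC, HAQ))

def haq_to_metric(score):
    """Inverse conversion; valid because the table is monotonic."""
    return float(np.interp(score, HAQ, METRIC))
```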
VizieR Online Data Catalog: 27 pulsating DA WDs follow-up observations (Hermes+, 2017)
NASA Astrophysics Data System (ADS)
Hermes, J. J.; Gansicke, B. T.; Kawaler, S. D.; Greiss, S.; Tremblay, P.-E.; Gentile Fusillo, N. P.; Raddi, R.; Fanale, S. M.; Bell, K. J.; Dennihy, E.; Fuchs, J. T.; Dunlap, B. H.; Clemens, J. C.; Montgomery, M. H.; Winget, D. E.; Chote, P.; Marsh, T. R.; Redfield, S.
2017-11-01
All observations analyzed here were collected by the Kepler spacecraft with short-cadence exposures from 2012 to 2016. Full details of the raw and processed Kepler and K2 observations are summarized in Table 2. We complemented our space-based photometry of these 27 pulsating hydrogen-atmosphere white dwarfs (DAVs) by determining their atmospheric parameters based on model-atmosphere fits to follow-up spectroscopy obtained from two 4m class, ground-based facilities. Spectra taken with the 4.2m William Herschel Telescope (WHT) on the island of La Palma cover roughly 3800-5100Å at roughly 2.0Å resolution; spanning 2013 Jun 06 to 2014 Jul 25. Spectra taken with the 4.1m Southern Astrophysical Research (SOAR) telescope on Cerro Pachon in Chile cover roughly 3600-5200Å; spanning 2014 Oct 13 to 2017 Apr 21. We detail these spectroscopic observations and their resultant fits in Table 3. (4 data files).
D0 Solenoid Upgrade Project: Pressure Ratings for Some Chimney and Control Dewar Components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucinski, R.; /Fermilab
1993-05-25
Pressure rating calculations were done for some of the chimney and control dewar components. This engineering note documents these calculations. The table below summarizes the components looked at and what their pressure ratings are. The raw engineering calculations for each of the components are given.
2. VIEW OF LOWER MILL FLOOR FOUNDATION, SHOWING, LEFT TO ...
2. VIEW OF LOWER MILL FLOOR FOUNDATION, SHOWING, LEFT TO RIGHT, EDGE OF MILLING FLOOR, TABLE FLOOR, VANNING FLOOR, LOADING LEVEL, TAILINGS POND IN RIGHT BACKGROUND. VIEW IS LOOKING FROM THE NORTHWEST - Mountain King Gold Mine & Mill, 4.3 Air miles Northwest of Copperopolis, Copperopolis, Calaveras County, CA
Automated System Marketplace 1987: Maturity and Competition.
ERIC Educational Resources Information Center
Walton, Robert A.; Bridge, Frank R.
1988-01-01
This annual review of the library automation marketplace presents profiles of 15 major library automation firms and looks at emerging trends. Seventeen charts and tables provide data on market shares, number and size of installations, hardware availability, operating systems, and interfaces. A directory of 49 automation sources is included. (MES)
ERIC Educational Resources Information Center
Library Journal, 2011
2011-01-01
While they do not represent the rainbow of reading tastes American public libraries accommodate, Book Review editors are a wildly eclectic bunch. One look at their bedside tables and ereaders would reveal very little crossover. This article highlights an eclectic array of spring offerings ranging from print books to an audiobook to ebook apps. It…
Another Look at Public Library Referenda in Illinois.
ERIC Educational Resources Information Center
Adams, Stanley E.
1996-01-01
Presents 14 tables depicting Illinois public library referenda data from fiscal year 1977/78 through November 1995. Discusses success rates in terms of even versus odd years and spring versus fall for fiscal years 1986-95. Outlines types of library referenda, including: annexation; tax increase; bond issue; establishment (district); conversion to…
Ayoub, F; Reverberi, M; Ricelli, A; D'Onghia, A M; Yaseen, T
2010-09-01
Aspergillus carbonarius and the A. niger aggregate are the main fungal contaminants of table grapes. Besides their ability to cause black rot, they can produce ochratoxin A (OTA), a mycotoxin that has attracted increasing attention worldwide. The objective of this work was to set up a simple and rapid molecular method for the early detection of both fungi in table grapes before fungal development becomes evident. Polymerase chain reaction (PCR)-based assays were developed by designing species-specific primers based on the polyketide synthase (PKS) sequences of A. carbonarius and A. niger that have recently been demonstrated to be involved in OTA biosynthesis. Three table grape varieties (Red Globe, Crimson Seedless, and Italia) were inoculated with OTA-producing A. carbonarius and A. niger aggregate strains. The DNA extracted from control (non-inoculated) and inoculated grapes was amplified by PCR using ACPKS2F-ACPKS2R for A. carbonarius and ANPKS5-ANPKS6 for the A. niger aggregate. Both primer pairs allowed clear detection, even in symptomless samples. PCR-based methods are considered a good alternative to traditional diagnostic means for the early detection of fungi in complex matrices, owing to their high specificity and sensitivity. The results obtained could be useful for the definition of a 'quality label' for tested grapes, improving the safety measures taken to guarantee the production of fresh table grapes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Y. H.; Hu, S. X.
2017-06-06
Beryllium has been considered a superior ablator material for inertial confinement fusion (ICF) target designs. An accurate equation of state (EOS) for beryllium under extreme conditions is essential for reliable ICF designs. Based on density-functional theory (DFT) calculations, we have established a wide-range beryllium EOS table covering densities ρ = 0.001 to 500 g/cm³ and temperatures T = 2000 to 10⁸ K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction is ~10% stiffer than the last two models at maximum compression. Although the existing experimental data (only up to 17 Mbar) cannot distinguish these EOS models, we anticipate that high-pressure experiments in the maximum-compression region should differentiate our FPEOS from the INFERNO and Purgatorio models. Comparisons between FPEOS and SESAME EOS for off-Hugoniot conditions show that the differences in pressure and internal energy are within ~20%. By implementing the FPEOS table into the 1-D radiation-hydrodynamic code LILAC, we studied the EOS effects on beryllium-shell-target implosions. The FPEOS simulation predicts a higher neutron yield (~15%) than the simulation using the SESAME 2023 EOS table.
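A tabulated EOS such as FPEOS or SESAME is typically queried by interpolation on the (density, temperature) grid. A generic bilinear look-up sketch in log space, with invented grid values (not the actual tables):

```python
import numpy as np

def eos_lookup(rho, T, rho_grid, T_grid, P_table):
    """Bilinear interpolation of a tabulated EOS in (log rho, log T).

    rho_grid, T_grid: 1-D grids (g/cm^3, K); P_table[i, j]: pressure at
    (rho_grid[i], T_grid[j]). Grids and values here are illustrative.
    """
    x, y = np.log10(rho), np.log10(T)
    xs, ys = np.log10(rho_grid), np.log10(T_grid)
    # Locate the enclosing grid cell, clamped to the table edges.
    i = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2))
    j = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * P_table[i, j]
            + tx * (1 - ty) * P_table[i + 1, j]
            + (1 - tx) * ty * P_table[i, j + 1]
            + tx * ty * P_table[i + 1, j + 1])
```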
Contamination of table salts from Turkey with microplastics.
Gündoğdu, Sedat
2018-05-01
Microplastics (MPs) pollution has become a problem that affects all aquatic, atmospheric, and terrestrial environments in the world. In this study, we looked into whether MPs in seas and lakes reach consumers through table salt. For this purpose, we obtained 16 brands of table salt from the Turkish market and determined their MPs content with microscopic and Raman spectroscopic examination. According to our results, the MP particle content was 16-84 items/kg in sea salt, 8-102 items/kg in lake salt, and 9-16 items/kg in rock salt. The most common plastic polymers were polyethylene (22.9%) and polypropylene (19.2%). When the amounts of MPs and the amount of salt consumed by Turkish consumers per year are considered together, consumers of sea salt, lake salt, or rock salt ingest 249-302, 203-247, or 64-78 items per year, respectively. This is the first report of this concerning level of MP content in table salts on the Turkish market.
Far Ultraviolet Imaging from the Image Spacecraft
NASA Technical Reports Server (NTRS)
Mende, S. B.; Heetderks, H.; Frey, H. U.; Lampton, M.; Geller, S. P.; Stock, J. M.; Abiad, R.; Siegmund, O. H. W.; Tremsin, A. S.; Habraken, S.
2000-01-01
Direct imaging of the magnetosphere by the IMAGE spacecraft will be supplemented by observation of the global aurora. The IMAGE satellite instrument complement includes three Far Ultraviolet (FUV) instruments. The Wideband Imaging Camera (WIC) will provide broadband ultraviolet images of the aurora for maximum spatial and temporal resolution by imaging the LBH N2 bands of the aurora. The Spectrographic Imager (SI), a novel form of monochromatic imager, will image the aurora, filtered by wavelength. The proton-induced component of the aurora will be imaged separately by measuring the Doppler-shifted Lyman-α emission. Finally, the GEO instrument will observe the distribution of the geocoronal emission to obtain the neutral background density source for charge exchange in the magnetosphere. The FUV instrument complement looks radially outward from the rotating IMAGE satellite and therefore spends only a short time observing the aurora and the Earth during each spin. To maximize photon collection efficiency and use the short time available for exposures efficiently, the FUV auroral imagers WIC and SI both have wide fields of view and take data continuously as the auroral region proceeds through the field of view. To minimize data volume, the set of multiple images is electronically co-added by suitably shifting each image to compensate for the spacecraft rotation. In order to minimize resolution loss, the images have to be distortion-corrected in real time. The distortion correction is accomplished using high-speed look-up tables that are pre-generated by the on-orbit processor by least-squares fitting to polynomial functions. The instruments were calibrated individually while on stationary platforms, mostly in vacuum chambers. Extensive ground-based testing was performed with visible and near-UV simulators mounted on a rotating platform to emulate performance on a rotating spacecraft.
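The real-time correction described above amounts to a look-up-table remap (no per-pixel arithmetic) followed by a shift and co-add. A schematic sketch with hypothetical tables and shifts:

```python
import numpy as np

def correct_and_coadd(frames, src_rows, src_cols, shifts):
    """Distortion-correct each frame via pre-generated look-up tables
    (src_rows/src_cols give, for every output pixel, the raw pixel to
    sample), shift to undo spacecraft rotation, and co-add.
    Tables and shifts here are illustrative."""
    acc = np.zeros(src_rows.shape)
    for frame, shift in zip(frames, shifts):
        corrected = frame[src_rows, src_cols]      # LUT remap, no arithmetic
        acc += np.roll(corrected, shift, axis=1)   # compensate rotation
    return acc
```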
NASA Astrophysics Data System (ADS)
Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.
2015-12-01
Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.
Quantification of effective plant rooting depth: advancing global hydrological modelling
NASA Astrophysics Data System (ADS)
Yang, Y.; Donohue, R. J.; McVicar, T.
2017-12-01
Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Moreover, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modelling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales and provides improved model outputs when compared to BCP model results from two already existing global Zr datasets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.
Medical table: A major tool for antimicrobial stewardship policy.
Roger, P-M; Demonchy, E; Risso, K; Courjon, J; Leroux, S; Leroux, E; Cua, É
2017-09-01
Infectious diseases are unpredictable, with heterogeneous clinical presentations, diverse pathogens, and various susceptibility rates to anti-infective agents. These features lead to a wide variety of clinical practices, which in turn strongly limits their evaluation. We have been using a medical table since 2005 to monitor the medical activity in our department. The observation of heterogeneous therapeutic practices led us to draw up our own antibiotic guidelines and to implement a continuous evaluation of compliance with them and of their impact on the morbidity and mortality associated with infectious diseases, including adverse effects of antibiotics, duration of hospital stay, use of intensive care, and deaths. The 10-year analysis of medical practices using the medical table is based on more than 10,000 hospitalizations. It shows simplified antibiotic therapies and a reduction in infection-related morbidity and mortality. The medical table is a major tool for antimicrobial stewardship, leading to constant benefits for patients. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
The Imperial College Thermophysical Properties Data Centre
NASA Astrophysics Data System (ADS)
Angus, S.; Cole, W. A.; Craven, R.; de Reuck, K. M.; Trengove, R. D.; Wakeham, W. A.
1986-07-01
The IUPAC Thermodynamic Tables Project Centre in London has at its disposal considerable expertise on the production and utilization of high-accuracy equations of state which represent the thermodynamic properties of substances. For some years they have been content to propagate this information by the traditional method of book production, but the increasing use of the computer in industry for process design has shown that an additional method was needed. The setting up of the IUPAC Transport Properties Project Centre, also at Imperial College, whose products would also be in demand by industry, afforded the occasion for a new look at the problem. The solution has been to set up the Imperial College Thermophysical Properties Data Centre, which embraces the two IUPAC Project Centres, and for it to establish a link with the existing Physical Properties Data Service of the Institution of Chemical Engineers, thus providing for the dissemination of the available information without involving the Centres in problems such as those of marketing and advertising. This paper outlines the activities of the Centres and discusses the problems in bringing their products to the attention of industry in suitable form.
MODIS Solar Diffuser On-Orbit Degradation Characterization Using Improved SDSM Screen Modeling
NASA Technical Reports Server (NTRS)
Chen, H.; Xiong, Xiaoxiong; Angal, Amit Avinash; Wang, Z.; Wu, A.
2016-01-01
The Solar Diffuser (SD) is used for the MODIS reflective solar bands (RSB) calibration. An on-board Solar Diffuser Stability Monitor (SDSM) tracks the degradation of its on-orbit bi-directional reflectance factor (BRF). To best match the SDSM detector signals from its Sun view and SD view, a fixed attenuation screen is placed in the Sun-view path, where the responses show ripples of up to 10%, much larger than the design expectation. Algorithms have been developed since the beginning of the mission to mitigate the impacts of these ripples. In recent years, a look-up-table (LUT) based approach has been implemented to account for them. The LUT modeling of the elevation and azimuth angles is constructed from detector 9 (D9) of the SDSM observations in the early MODIS mission. The responses of the other detectors are normalized to D9 to reduce the ripples observed in the Sun-view data. The accuracy of the degradation estimates for all detectors therefore depends on how well D9 is approximated. After multiple years of operation (Terra: 16 years; Aqua: 14 years), the degradation behavior of each detector can be monitored on its own. This paper revisits the LUT modeling and proposes a dynamic scheme to build a LUT independently for each detector. Further refinement of the Sun-view screen characterization is highlighted to ensure the accuracy of the degradation estimates. Results for both Terra and Aqua SD on-orbit degradation are derived from the improved modeling and curve-fitting strategy.
Li, Ya Ni; Lu, Lei; Liu, Yong
2017-12-01
The tasseled cap triangle (TCT)-leaf area index (LAI) isoline is a model that reflects the distribution of LAI isolines in the spectral space constituted by the reflectances of the red and near-infrared (NIR) bands, and LAI retrieval models developed on this basis are more accurate than the commonly used statistical relationship models. This study used ground-based measurements of a rice field to validate the applicability of the PROSAIL model in simulating rice canopy reflectance and to calibrate the model's input parameters. The ranges of the PROSAIL input parameters for simulating rice canopy reflectance were determined. On this basis, the TCT-LAI isoline model of the rice field was established, and a look-up table (LUT) required for remote sensing retrieval of LAI was developed. The LUT was then applied to Landsat 8 and WorldView 3 data to retrieve the LAI of the rice field. The results showed that the LAI retrieved using the LUT developed from the TCT-LAI isoline model had a good linear relationship with the measured LAI (R² = 0.76, RMSE = 0.47). Compared with the LAI retrieved from Landsat 8, the LAI values retrieved from WorldView 3 varied over a wider range and were more scattered. When the Landsat 8 and WorldView 3 reflectance data were resampled to 1 km to retrieve LAI, the MODIS LAI product was significantly underestimated compared to the retrieved LAI.
Global root zone storage capacity from satellite-based evaporation
NASA Astrophysics Data System (ADS)
Wang-Erlandsson, Lan; Bastiaanssen, Wim G. M.; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel B.; van Dijk, Albert I. J. M.; Guerschman, Juan P.; Keys, Patrick W.; Gordon, Line J.; Savenije, Hubert H. G.
2016-04-01
This study presents an "Earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale independent. In contrast to a traditional look-up table approach, our method captures the variability in root zone storage capacity within land cover types, including in rainforests where direct measurements of root depths otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM (Simple Terrestrial Evaporation to Atmosphere Model) improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. Our results suggest that several forest types are able to create a large storage to buffer for severe droughts (with a very long return period), in contrast to, for example, savannahs and woody savannahs (medium length return period), as well as grasslands, shrublands, and croplands (very short return period). The presented method to estimate root zone storage capacity eliminates the need for poor resolution soil and rooting depth data that form a limitation for achieving progress in the global land surface modelling community.
Dynamic alarm response procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, J.; Gordon, P.; Fitch, K.
2006-07-01
The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical displays or panels, and by reducing the maintenance of the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache®, IIS®, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports JavaScript and Scalable Vector Graphics (SVG), such as Netscape®, Microsoft Internet Explorer®, Mozilla Firefox®, Opera®, and others. (authors)
A Digital Lock-In Amplifier for Use at Temperatures of up to 200 °C
Cheng, Jingjing; Xu, Yingjun; Wu, Lei; Wang, Guangwei
2016-01-01
Weak voltage signals cannot be reliably measured using currently available logging tools when these tools are subject to high-temperature (up to 200 °C) environments for prolonged periods. In this paper, we present a digital lock-in amplifier (DLIA) capable of operating at temperatures of up to 200 °C. The DLIA contains a low-noise instrumentation amplifier together with signal acquisition and the corresponding signal processing electronics. The high-temperature stability of the DLIA is achieved by designing system-in-package (SiP) and multi-chip module (MCM) components with low thermal resistances. An effective look-up-table (LUT) method was developed for the lock-in amplifier algorithm to decrease the complexity of the calculations and generate less heat than the traditional approach. The performance of the design was tested by determining the linearity, gain, Q value, and frequency characteristic of the DLIA between 25 and 200 °C. The maximal nonlinear error in the linearity of the DLIA working at 200 °C was about 1.736% when the equivalent input was a sine-wave signal with an amplitude of between 94.8 and 1896.0 nV and a frequency of 800 kHz. The tests showed that the proposed DLIA can work effectively in high-temperature environments up to 200 °C. PMID:27845710
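The LUT idea in this abstract replaces per-sample trigonometric evaluation of the reference signal with indexed table reads. A minimal sketch under assumed parameters (the table size and phase-accumulator indexing are illustrative; the paper's fixed-point hardware details are not reproduced):

```python
import math

LUT_SIZE = 1024  # assumed table length, one full sine period
SINE_LUT = [math.sin(2 * math.pi * i / LUT_SIZE) for i in range(LUT_SIZE)]

def lock_in(samples, f_ref, f_samp):
    # Digital lock-in demodulation: multiply the input by reference
    # sin/cos values taken from SINE_LUT via a phase accumulator,
    # average, and return the recovered amplitude. No per-sample trig
    # calls are made, which is the motivation for the LUT method.
    step = f_ref / f_samp * LUT_SIZE          # phase increment per sample
    i_acc = q_acc = 0.0
    phase = 0.0
    for s in samples:
        idx = int(phase) % LUT_SIZE
        i_acc += s * SINE_LUT[idx]                               # in-phase
        q_acc += s * SINE_LUT[(idx + LUT_SIZE // 4) % LUT_SIZE]  # quadrature
        phase += step
    n = len(samples)
    return 2.0 * math.hypot(i_acc / n, q_acc / n)
```

A quarter-table offset supplies the cosine reference, so one table serves both demodulation channels.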
A single-layer platform for Boolean logic and arithmetic through DNA excision in mammalian cells
Weinberg, Benjamin H.; Hang Pham, N. T.; Caraballo, Leidy D.; Lozanoski, Thomas; Engel, Adrien; Bhatia, Swapnil; Wong, Wilson W.
2017-01-01
Genetic circuits engineered for mammalian cells often require extensive fine-tuning to perform their intended functions. To overcome this problem, we present a generalizable biocomputing platform that can engineer genetic circuits which function in human cells with minimal optimization. We used our Boolean Logic and Arithmetic through DNA Excision (BLADE) platform to build more than 100 multi-input-multi-output circuits. We devised a quantitative metric to evaluate the performance of the circuits in human embryonic kidney and Jurkat T cells. Of 113 circuits analysed, 109 functioned (96.5%) with the correct specified behavior without any optimization. We used our platform to build a three-input, two-output Full Adder and six-input, one-output Boolean Logic Look Up Table. We also used BLADE to design circuits with temporal small molecule-mediated inducible control and circuits that incorporate CRISPR/Cas9 to regulate endogenous mammalian genes. PMID:28346402
Physiological basis for noninvasive skin cancer diagnosis using diffuse reflectance spectroscopy
NASA Astrophysics Data System (ADS)
Zhang, Yao; Markey, Mia K.; Tunnell, James W.
2017-02-01
Diffuse reflectance spectroscopy offers a noninvasive, fast, and low-cost alternative to visual screening and biopsy for skin cancer diagnosis. We have previously acquired reflectance spectra from 137 lesions in 76 patients and determined the capability of spectral diagnosis using principal component analysis (PCA). However, it is not well elucidated why spectral analysis enables tissue classification. To provide the physiological basis, we used the Monte Carlo look-up table (MCLUT) model to extract physiological parameters from those clinical data. The MCLUT model yields the following physiological parameters: oxygen saturation, hemoglobin concentration, melanin concentration, vessel radius, and scattering parameters. The physiological parameters show that cancerous skin tissue has lower scattering and larger vessel radii compared to normal tissue. These results demonstrate the potential of diffuse reflectance spectroscopy for detection of early precancerous changes in tissue. In the future, a diagnostic algorithm that combines these physiological parameters could enable noninvasive diagnosis of skin cancer.
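At its core, an MCLUT-style inversion is a table search: a measured spectrum is matched against pre-simulated spectra and the parameters of the best-matching entry are returned. A generic sketch with hypothetical inputs (the clinical model's Monte Carlo forward simulation and parameter grid are not reproduced here):

```python
import numpy as np

def lut_invert(measured, lut_spectra, lut_params):
    # lut_spectra: (n_entries, n_wavelengths) array of pre-simulated
    # reflectance spectra; lut_params: the parameter set behind each entry.
    # Return the parameters of the entry with the smallest sum-of-squares
    # residual against the measured spectrum.
    residuals = np.sum((lut_spectra - measured) ** 2, axis=1)
    return lut_params[int(np.argmin(residuals))]
```

A real inversion would interpolate between grid entries rather than take the nearest one, but the look-up step itself is the same.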
NASA Astrophysics Data System (ADS)
Alzate, N.; Grande, M.; Matthiae, D.
2017-09-01
Planetary Space Weather Services (PSWS) within the Europlanet H2020 Research Infrastructure have been developed following protocols and standards available in astrophysics, solar physics and planetary science virtual observatories. Several VO-compliant functionalities have been implemented in various tools. The PSWS extend the concepts of space weather and space situational awareness to other planets in our Solar System and in particular to spacecraft that voyage through it. One of the five toolkits developed as part of these services is a model dedicated to the Mars environment. This model has been developed at Aberystwyth University and the Institut für Luft- und Raumfahrtmedizin (DLR Cologne) using modeled average conditions available from Planetocosmics. It is available for tracing the propagation of solar events through the Solar System and modeling the response of the Mars environment. The results have been synthesized into look-up tables parameterized to variable solar wind conditions at Mars.
Capacitive touch sensing : signal and image processing algorithms
NASA Astrophysics Data System (ADS)
Baharav, Zachi; Kakarala, Ramakrishna
2011-03-01
Capacitive touch sensors have been in use for many years and recently gained center stage with their ubiquitous use in smartphones. In this work we analyze the most common method of projected capacitive sensing, that of absolute capacitive sensing, together with the most common sensing pattern, that of diamond-shaped sensors. After a brief introduction to the problem and the reasons behind its popularity, we formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: a circular finger on a wire grid, and a square finger on a square grid. The solutions give insight into the ambiguities of finding the finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms, including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, a general look-up table, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods and point to possible future research.
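Of the interpolation algorithms listed, parabolic curve fitting has a closed form: fit a parabola through the strongest sensor reading and its two neighbours and take the vertex as the finger position. A sketch under that standard formula (the function name and pitch units are illustrative, not from the paper):

```python
def parabolic_peak(readings):
    # Sub-sensor finger-position estimate from a line of capacitive
    # sensor readings. A parabola through the peak sample and its two
    # neighbours has its vertex at offset (s_m - s_p) / (2*(s_m - 2*s_0 + s_p))
    # from the peak index. Result is in units of sensor pitch.
    k = max(range(1, len(readings) - 1), key=lambda i: readings[i])
    s_m, s_0, s_p = readings[k - 1], readings[k], readings[k + 1]
    denom = s_m - 2 * s_0 + s_p
    if denom == 0:
        return float(k)          # flat top: fall back to the peak index
    return k + 0.5 * (s_m - s_p) / denom
```

For readings that really are parabolic in position, the estimate is exact; for other profiles (hence the Gaussian and LUT variants in the paper) it carries a systematic bias.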
Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Qiheng; Zhang, Jianlin
2011-11-01
Efficient algorithms for blind image deconvolution and their high-speed implementation are of great practical value. A further optimization of SeDDaRA is developed, from algorithm structure to numerical calculation methods. The main optimizations are: modularizing the structure for good implementation feasibility, reducing the data computation and dependency of the 2D FFT/IFFT, and accelerating the power operation with a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system exceeds 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time application.
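A segmented look-up table for a power operation can be sketched as follows: pre-compute the function at segment endpoints, then reduce each run-time call to an index computation plus one linear interpolation (the segment count and domain here are assumptions; the paper's table layout is not given):

```python
def build_pow_lut(gamma, n_segments=64, x_max=1.0):
    # Pre-compute x**gamma at equally spaced segment endpoints over
    # [0, x_max]. Done once, offline.
    xs = [i * x_max / n_segments for i in range(n_segments + 1)]
    return xs, [x ** gamma for x in xs]

def pow_lut(x, xs, ys):
    # Run-time evaluation: locate the segment, then linearly interpolate
    # between its pre-computed endpoint values. No pow() call needed.
    n = len(xs) - 1
    i = min(int(x / xs[-1] * n), n - 1)        # segment index
    t = (x - xs[i]) / (xs[i + 1] - xs[i])      # position within segment
    return ys[i] + t * (ys[i + 1] - ys[i])
```

On a DSP the interpolation costs one multiply-accumulate, which is the point of trading table memory for arithmetic.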
Method for routing events from key strokes in a multi-processing computer system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhodes, D.A.; Rustici, E.; Carter, K.H.
1990-01-23
The patent describes a method of routing user input in a computer system which concurrently runs a plurality of processes. It comprises: generating keycodes representative of keys typed by a user; distinguishing generated keycodes by looking up each keycode in a routing table which assigns each possible keycode to an individual assigned process of the plurality of processes, one of which is a supervisory process; then, sending each keycode to its assigned process until a keycode assigned to the supervisory process is received; sending keycodes received subsequent to the keycode assigned to the supervisory process to a buffer; next, providing additional keycodes to the supervisory process from the buffer until the supervisory process has completed operation; and sending keycodes stored in the buffer to the processes assigned therewith after the supervisory process has completed operation.
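The claimed steps can be sketched as follows, with process delivery modelled as per-process lists and all names illustrative (the patent specifies no implementation language):

```python
from collections import deque

def route_keycodes(keycodes, routing_table, supervisor, supervisor_done):
    # Dispatch keycodes via the routing table. Once a supervisor keycode
    # arrives, later keycodes go to a buffer; the supervisory process
    # draws from that buffer until supervisor_done() reports completion,
    # and any leftovers are then routed to their assigned processes.
    delivered = {p: [] for p in set(routing_table.values())}
    buffer = deque()
    supervising = False
    for kc in keycodes:
        if supervising:
            buffer.append(kc)                 # hold input during supervision
        elif routing_table[kc] == supervisor:
            delivered[supervisor].append(kc)
            supervising = True                # supervisory process takes over
        else:
            delivered[routing_table[kc]].append(kc)
    # feed the supervisor from the buffer until it has completed operation
    while buffer and not supervisor_done(delivered[supervisor]):
        delivered[supervisor].append(buffer.popleft())
    # route remaining buffered keycodes to their assigned processes
    while buffer:
        kc = buffer.popleft()
        delivered[routing_table[kc]].append(kc)
    return delivered
```

The completion test is abstracted as a callback here; in the patented system it is the supervisory process itself signalling that it is finished.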
Synergetic use of Aerosol Robotic Network (AERONET) and Moderate Image Spectrometer (MODIS)
NASA Technical Reports Server (NTRS)
Kaufman, Y.
2004-01-01
I shall describe several distinct modes in which AERONET data are used in conjunction with MODIS data to evaluate the global aerosol system and its impact on climate. These include: 1) evaluation of the aerosol diurnal cycle, which is not available from MODIS, and of the relationship between the aerosol properties derived from MODIS and the daily averages of these properties; 2) climatology of the aerosol size distribution and single scattering albedo, used to formulate the assumptions in the MODIS look-up tables used in the inversion of MODIS data; 3) measurement of the aerosol effect on irradiation of the surface, used in conjunction with the MODIS evaluation of the aerosol effect at the TOA; and 4) assessment of the aerosol baseline on top of which the satellite data are used to find the amount of dust or anthropogenic aerosol.
NASA Technical Reports Server (NTRS)
Favor, R. J.; Maykuth, D. J.; Bartlett, E. S.; Mindlin, H.
1972-01-01
A program to determine the characteristics of two coated columbium alloy systems for spacecraft structures is discussed. The alloy was evaluated as coated base material, coated butt-welded material, and material thermal/pressure cycled prior to testing up to 30 cycles. Evaluation was by means of tensile tests covering the temperature range to 2400 F. Design allowables were computed and are presented as tables of data. The summary includes a room temperature property table, effect of temperature curves, and typical stress-strain curves.
Five Years Later: A Look at the EIA Investment.
ERIC Educational Resources Information Center
South Carolina State Dept. of Education, Columbia. Div. of Public Accountability.
A 5-year review of the impact of South Carolina's comprehensive reform legislation, the Education Improvement Act of 1984 (EIA), is presented. Throughout the report, comparisons of EIA program productivity in 1989 with pre-EIA performance are displayed in short program summaries, 33 graphs, and 14 tables. The EIA targeted seven major areas for…
Understanding University Reform in Japan through the Prism of the Social Sciences
ERIC Educational Resources Information Center
Goodman, Roger
2008-01-01
This article looks at current university reforms in Japan through two slightly different social science prisms: how social science methodologies and theories can help us understand those reforms better and how social science teaching in universities will be affected by the current reform processes. (Contains 3 tables and 7 notes.)
Condensed Proceedings of the Ad Hoc Committee on Environmental Behavior
ERIC Educational Resources Information Center
Cancro, Robert
1972-01-01
Fourteen leading behavioral scientists explore the relationship between environment and health with a focus on the following question: "As we look at health care as people receive it in their communities and the realities of America today, what can we do to improve it?" Philosophical and scientific issues are discussed in round-table fashion. (LK)
Monsters that Eat People--Oh My! Selecting Literature to Ease Children's Fears
ERIC Educational Resources Information Center
Mercurio, Mia Lynn; McNamee, Abigail
2008-01-01
What should families and teachers look for when they choose picture books to help young children overcome their fears of imaginary monsters, dark places, thunderstorms, and dogs? This article provides criteria for assessing picture books and suggests ways to read them in ways that support children's development. (Contains 4 tables.)
A Comparative Analysis of Juvenile Book Review Media.
ERIC Educational Resources Information Center
Witucke, A. Virginia
This study of book reviews takes an objective look at the major sources that review children's books. Periodicals examined are Booklist, Bulletin of the Center for Children's Books, Horn Book, New York Times Book Review, and School Library Journal. Presented in a series of eight tables, the report examines reviews of 30 titles published between…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bejarano Buele, A; Sperling, N; Parsai, E
2016-06-15
Purpose: Cone-beam CTs (CBCT) obtained from On-Board Imaging devices (OBI) are increasingly being used for dose calculation purposes in adaptive radiotherapy. Patient and target morphology are monitored and the treatment plan is updated using CBCT. Due to differences in image acquisition parameters, dose calculated on a CBCT can differ from the planned dose. We evaluate the difference between dose calculation on kV CBCT and on the simulation CT, and the effect of HU-density tables on dose discrepancies. Methods: HU values for various materials were obtained using a Catphan 504 phantom for a simulator CT (CTSIM) and two different OBI systems using three imaging protocols: Head, Thorax and Pelvis. HU-density tables were created in the TPS for each OBI imaging protocol. Treatment plans were made on each Catphan 504 dataset and on the head, thorax and pelvis sections of an anthropomorphic phantom, with and without the respective HU-density table. DVH information was compared among the OBI systems and the planning CT. Results: Dose calculations carried out on the Catphan 504 CBCTs, with and without the respective CT-density table, had a maximum difference of −0.65% from the values on the planning CT. The use of the respective HU-density table halved the percent differences from planned values in most of the protocols. For the anthropomorphic phantom datasets, the use of the correct HU-density table reduced differences by 0.89% on OBI1 and 0.59% on OBI2 for the head, 0.49% on OBI1 for the thorax, and 0.25% on OBI2 for the pelvis. Differences from planned values without HU-density correction ranged from 3.13% (OBI1, thorax) to 0.30% (OBI2, thorax). Conclusion: CT-density tables in the TPS yield acceptable differences when used in a partly homogeneous medium. Further corrections are needed for accurate CBCT calculation when the medium contains pronounced density differences. The current difference range (1-3%) can be clinically acceptable.
Multi-satellites normalization of the FengYun-2s visible detectors by the MVP method
NASA Astrophysics Data System (ADS)
Li, Yuan; Rong, Zhi-guo; Zhang, Li-jun; Sun, Ling; Xu, Na
2013-08-01
After FY-2F was successfully launched on January 13, 2012, the number of FengYun-2 geostationary meteorological satellites operating in orbit reached three. For accurate and efficient application of multi-satellite observation data, a study of the normalization of the visible detectors across satellites was urgently needed. The method was required not to rely on in-orbit calibration, so that it could validate the calibration results from before and after launch, calculate a daily updated surface bidirectional reflectance distribution function (BRDF), and track the long-term decay of the detectors' linearity and responsivity. Based on research into typical BRDF models, a normalization method was designed that effectively removes the interference of directional surface reflectance characteristics and does not rely on in-orbit calibration of the visible detectors: the Median Vertical Plane (MVP) method. The MVP method is based on the symmetry about the principal plane of the directional reflectance properties of typical surface targets. Taking two geostationary satellites as the endpoints of a segment, targets on the intersection line of the segment's median vertical plane and the Earth's surface can be used as normalization reference targets (NRT). Observations of an NRT by the two satellites at the moment the sun passes through the MVP have the same observation zenith and solar zenith angles and opposite relative azimuth angles. At that time, the linear regression coefficients of the satellites' output data are the required normalization coefficients. The normalization coefficients between FY-2D, FY-2E and FY-2F were calculated, and a self-test method for the normalized results was designed and implemented.
The results showed that the differences in responsivity between satellites reached up to 10.1% (FY-2E to FY-2F); the differences in output reflectance calculated with the broadcast calibration look-up table reached up to 21.1% (FY-2D to FY-2F); and the differences in output reflectance from FY-2D and FY-2E calculated with the site experiment results were reduced to 2.9% (13.6% when using the broadcast table). The normalized relative error, calculated by the self-test method, was less than 0.2%.
A 3D simulation look-up library for real-time airborne gamma-ray spectroscopy
NASA Astrophysics Data System (ADS)
Kulisek, Jonathan A.; Wittman, Richard S.; Miller, Erin A.; Kernan, Warnick J.; McCall, Jonathon D.; McConn, Ron J.; Schweppe, John E.; Seifert, Carolyn E.; Stave, Sean C.; Stewart, Trevor N.
2018-01-01
A three-dimensional look-up library consisting of simulated gamma-ray spectra was developed to leverage, in real-time, the abundance of data provided by a helicopter-mounted gamma-ray detection system consisting of 92 CsI-based radiation sensors and exhibiting a highly angular-dependent response. We have demonstrated how this library can be used to help effectively estimate the terrestrial gamma-ray background, develop simulated flight scenarios, and to localize radiological sources. Source localization accuracy was significantly improved, particularly for weak sources, by estimating the entire gamma-ray spectra while accounting for scattering in the air, and especially off the ground.
Real-time processing of radar return on a parallel computer
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1992-01-01
NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low-altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of the radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC-based parallel computer, called the transputer, is used to investigate issues in real-time concurrent processing of radar signals. A transputer network is made up of an array of single-instruction-stream processors that can be networked in a variety of ways. They are easily reconfigured, and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the Fast Fourier Transform (FFT), pulse-pair, and autoregressive modeling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken to this problem, the first and most conventional of which is to use the FFT. By using table look-ups for the basis functions and other optimizing techniques, an algorithm has been developed that is sufficient for real time. The other approach is to model the signal as an autoregressive process and estimate the spectrum from the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
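The table look-up optimization for the FFT mentioned above pre-computes the twiddle factors (the complex basis-function values) once, so each butterfly does a table read instead of a trigonometric call. An illustrative radix-2 sketch, in Python rather than OCCAM, for a power-of-two input length:

```python
import cmath
import math

def fft_with_twiddle_lut(x):
    # Recursive radix-2 decimation-in-time FFT. The n/2 twiddle factors
    # exp(-2*pi*i*k/n) are pre-computed once; every recursion level reuses
    # the same table with a stride, so no trig is evaluated per butterfly.
    n = len(x)
    twiddle = [cmath.exp(-2j * math.pi * k / n) for k in range(n // 2)]

    def fft(a, stride):
        if len(a) == 1:
            return a
        even = fft(a[0::2], stride * 2)
        odd = fft(a[1::2], stride * 2)
        half = len(a) // 2
        out = [0j] * len(a)
        for k in range(half):
            w = twiddle[k * stride] * odd[k]   # table look-up, no trig call
            out[k] = even[k] + w
            out[k + half] = even[k] - w
        return out

    return fft(list(x), 1)
```

On fixed-rate radar data the transform length is constant, so the table is built once at startup and amortized over every dwell.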
Satellite aerosol retrieval using dark target algorithm by coupling BRDF effect over AERONET site
NASA Astrophysics Data System (ADS)
Yang, Leiku; Xue, Yong; Guang, Jie; Li, Chi
2012-11-01
For most satellite aerosol retrieval algorithms, even for multi-angle instruments, a simple forward model (FM) based on a Lambertian surface assumption is employed to simulate top-of-atmosphere (TOA) spectral reflectance, which does not fully consider the surface bidirectional reflectance distribution function (BRDF) effect. The approximate forward model greatly simplifies the radiative transfer model, reduces the size of the look-up tables, and yields a faster algorithm. At the same time, it creates systematic biases in the aerosol optical depth (AOD) retrieval. The AOD product derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data with the dark target algorithm is considered one of the more accurate satellite aerosol products at present. Though it performs well at the global scale, uncertainties are still found at regional scales in many studies. The Lambertian surface assumption employed in the retrieval algorithm may be one of the uncertain factors. In this study, we first use radiative transfer simulations over dark targets to assess the extent of the uncertainty introduced by the Lambertian surface assumption. The result shows that the uncertainty of the AOD retrieval can reach up to ±0.3. Then the Lambertian FM (L_FM) and the BRDF FM (BRDF_FM) are each employed in AOD retrieval using the dark target algorithm on MODARNSS (MODIS/Terra and MODIS/Aqua Atmosphere AERONET Subsetting Product) data over the Beijing AERONET site. The validation shows that the accuracy of the AOD retrieval is improved by employing the BRDF_FM accounting for the surface BRDF effect: the regression slope of the scatter plot of retrieved AOD against AERONET AOD increases from 0.7163 (for L_FM) to 0.7776 (for BRDF_FM) and the intercept decreases from 0.0778 (for L_FM) to 0.0627 (for BRDF_FM).
Automated spot defect characterization in a field portable night vision goggle test set
NASA Astrophysics Data System (ADS)
Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume
2018-05-01
This paper discusses a new capability developed for, and results from, a field portable test set for Gen 2 and Gen 3 Image Intensifier (I²) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a knife-edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I² tubes with the same test set hardware previously presented. The original and still common spot defect test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The advanced automated spot defect test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results across several NVGs will be presented.
20 CFR Appendix C to Part 718 - Blood-Gas Tables
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Blood-Gas Tables C Appendix C to Part 718... PNEUMOCONIOSIS Pt. 718, App. C Appendix C to Part 718—Blood-Gas Tables The following tables set forth the values... tables are met: (1) For arterial blood-gas studies performed at test sites up to 2,999 feet above sea...
20 CFR Appendix C to Part 718 - Blood-Gas Tables
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Blood-Gas Tables C Appendix C to Part 718... DUE TO PNEUMOCONIOSIS Pt. 718, App. C Appendix C to Part 718—Blood-Gas Tables The following tables set... of the following tables are met: (1) For arterial blood-gas studies performed at test sites up to 2...
20 CFR Appendix C to Part 718 - Blood-Gas Tables
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 4 2013-04-01 2013-04-01 false Blood-Gas Tables C Appendix C to Part 718... DUE TO PNEUMOCONIOSIS Pt. 718, App. C Appendix C to Part 718—Blood-Gas Tables The following tables set... of the following tables are met: (1) For arterial blood-gas studies performed at test sites up to 2...
Wehmiller, J.F.; Harris, W.B.; Boutin, B.S.; Farrell, K.M.
2012-01-01
The use of amino acid racemization (AAR) for estimating ages of Quaternary fossils usually requires a combination of kinetic and effective temperature modeling or independent age calibration of analyzed samples. Because of limited availability of calibration samples, age estimates are often based on model extrapolations from single calibration points over wide ranges of D/L values. Here we present paired AAR and 87Sr/86Sr results for Pleistocene mollusks from the North Carolina Coastal Plain, USA. 87Sr/86Sr age estimates, derived from the lookup table of McArthur et al. [McArthur, J.M., Howarth, R.J., Bailey, T.R., 2001. Strontium isotopic stratigraphy: LOWESS version 3: best fit to the marine Sr-isotopic curve for 0-509 Ma and accompanying look-up table for deriving numerical age. Journal of Geology 109, 155-169], provide independent age calibration over the full range of amino acid D/L values, thereby allowing comparisons of alternative kinetic models for seven amino acids. The often-used parabolic kinetic model is found to be insufficient to explain the pattern of racemization, although the kinetic pathways for valine racemization and isoleucine epimerization can be closely approximated with this function. Logarithmic and power law regressions more accurately represent the racemization pathways for all amino acids. The reliability of a non-linear model for leucine racemization, developed and refined over the past 20 years, is confirmed by the 87Sr/86Sr age results. This age model indicates that the subsurface record (up to 80 m thick) of the North Carolina Coastal Plain spans the entire Quaternary, back to ~2.5 Ma. The calibrated kinetics derived from this age model yield an estimate of the effective temperature for the study region of 11 ± 2 °C, from which we estimate full glacial (Last Glacial Maximum - LGM) temperatures for the region on the order of 7-10 °C cooler than present.
These temperatures compare favorably with independent paleoclimate information for the region. © 2011 Elsevier B.V.
A Depolarisation Lidar Based Method for the Determination of Liquid-Cloud Microphysical Properties.
NASA Astrophysics Data System (ADS)
Donovan, D. P.; Klein Baltink, H.; Henzing, J. S.; De Roode, S. R.; Siebesma, P.
2014-12-01
The fact that polarisation lidars measure a multiple-scattering induced depolarisation signal in liquid clouds is well known. The depolarisation signal depends on the lidar characteristics (e.g. wavelength and field-of-view) as well as the cloud properties (e.g. liquid water content (LWC) and cloud droplet number concentration (CDNC)). Previous efforts seeking to use depolarisation information in a quantitative manner to retrieve cloud properties have been undertaken with, arguably, limited practical success. In this work we present a retrieval procedure applicable to clouds with (quasi-)linear LWC profiles and (quasi-)constant CDNC in the cloud-base region. Limiting the applicability of the procedure in this manner allows us to reduce the cloud variables to two parameters (namely the liquid water content lapse rate and the CDNC). This simplification, in turn, allows us to employ a robust optimal-estimation inversion using pre-computed look-up tables produced with lidar Monte-Carlo multiple-scattering simulations. Here, we describe the theory behind the inversion procedure and apply it to simulated observations based on large-eddy simulation model output. The inversion procedure is then applied to actual depolarisation lidar data covering a range of cases taken from the Cabauw measurement site in the central Netherlands. The lidar results were then used to predict the corresponding cloud-base region radar reflectivities. In non-drizzling conditions, it was found that the lidar inversion results can be used to predict the observed radar reflectivities with an accuracy within the radar calibration uncertainty (2-3 dBZ). This result strongly supports the accuracy of the lidar inversion results. Results of a comparison between ground-based aerosol number concentration and lidar-derived CDNC are also presented. The results are seen to be consistent with previous studies based on aircraft-based in situ measurements.
Perceptions of a mobile technology on learning strategies in the anatomy laboratory.
Mayfield, Chandler H; Ohara, Peter T; O'Sullivan, Patricia S
2013-01-01
Mobile technologies offer new opportunities to improve dissection learning. This study examined the effect of using an iPad-based multimedia dissection manual during anatomy laboratory instruction on learner's perception of anatomy dissection activities and use of time. Three experimental dissection tables used iPads and three tables served as a control for two identical sessions. Trained, non-medical school anatomy faculty observers recorded use of resources at two-minute intervals for 20 observations per table. Students completed pre- and post-perception questionnaires. We used descriptive and inferential analyses. Twenty-one control and 22 experimental students participated. Compared with controls, experimental students reported significantly (P < 0.05) less reliance on paper and instructor resources, greater ability to achieve anatomy laboratory objectives, and clarity of the role of dissection in learning anatomy. Experimental students indicated that the iPad helped them in dissection. We observed experimental students more on task (93% vs. 83% of the time) and less likely to be seeking an instructor (2% vs. 32%). The groups received similar attention from instructors (33% vs. 37%). Fifty-nine percent of the time at least one student was looking at the iPad. Groups clustered around the iPad a third of their time. We conclude that the iPad-manual aided learner engagement, achieved instructional objectives, and enhanced the effectiveness and efficiency of dissection education. Copyright © 2012 American Association of Anatomists.
Satellite estimation of surface spectral ultraviolet irradiance using OMI data in East Asia
NASA Astrophysics Data System (ADS)
Lee, H.; Kim, J.; Jeong, U.
2017-12-01
Owing to its strong influence on human health and ecosystems, continuous monitoring of surface ultraviolet (UV) irradiance is important. The amount of UVA (320-400 nm) and UVB (290-320 nm) radiation at the Earth's surface depends on the extent of Rayleigh scattering by atmospheric gas molecules, radiative absorption by ozone, radiative scattering by clouds, and both absorption and scattering by airborne aerosols. Careful treatment of these factors is therefore essential for estimating UV irradiance. The UV index (UVI) is a simple parameter expressing the strength of surface UV irradiance and has been widely used for UV monitoring. In this study, we estimate surface UV irradiance over East Asia using realistic inputs based on OMI total ozone and reflectivity, and then validate the estimates against UV irradiance from the World Ozone and Ultraviolet Radiation Data Centre (WOUDC). We also develop our own retrieval algorithm for better estimation of surface irradiance. We use the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model version 2.6 for the UV irradiance calculation. The inputs to the VLIDORT radiative transfer calculations are the total ozone column (TOMS V7 climatology), the surface albedo (Herman and Celarier, 1997), and the cloud optical depth. From these, the UV irradiance is calculated using a look-up table (LUT) approach. To correct for absorbing aerosols, the algorithm incorporates climatological aerosol information (Arola et al., 2009). In further work, we will carry out a comprehensive uncertainty analysis based on the LUT and all input parameters.
Applicability of aquifer impact models to support decisions at CO 2 sequestration sites
Keating, Elizabeth; Bacon, Diana; Carroll, Susan; ...
2016-07-25
The National Risk Assessment Partnership has developed a suite of tools to assess and manage risk at CO2 sequestration sites. This capability includes polynomial or look-up table based reduced-order models (ROMs) that predict the impact of CO2 and brine leaks on overlying aquifers. The development of these computationally efficient models and the underlying reactive transport simulations they emulate has been documented elsewhere (Carroll et al., 2014a; Carroll et al., 2014b; Dai et al., 2014; Keating et al., 2016). Here we seek to demonstrate the applicability of ROM-based analysis by considering which types of decisions and aquifer types would benefit from it. We present four hypothetical examples where applying ROMs, in ensemble mode, could support decisions during a geologic CO2 sequestration project. These decisions pertain to site selection, site characterization, monitoring network evaluation, and health impacts. In all cases, we consider potential brine/CO2 leak rates at the base of the aquifer to be uncertain. We show that the derived probabilities provide information relevant to the decision at hand. Although the ROMs were developed using site-specific data from two aquifers (High Plains and Edwards), the models accept aquifer characteristics as variable inputs, so they may have broader applicability. Based on an analysis of nine water quality metrics (pH, TDS, four trace metals, three organic compounds), we conclude that the pH and TDS predictions are the most transferable to other aquifers. Guidelines are presented for determining the aquifer types for which the ROMs should be applicable.
NASA Astrophysics Data System (ADS)
Hong, Gang; Minnis, Patrick; Doelling, David; Ayers, J. Kirk; Sun-Mack, Szedung
2012-03-01
A method for estimating effective ice particle radius Re at the tops of tropical deep convective clouds (DCC) is developed on the basis of precomputed look-up tables (LUTs) of brightness temperature differences (BTDs) between the 3.7 and 11.0 μm bands. A combination of discrete ordinates radiative transfer and correlated k distribution programs, which account for the multiple scattering and monochromatic molecular absorption in the atmosphere, is utilized to compute the LUTs as functions of solar zenith angle, satellite zenith angle, relative azimuth angle, Re, cloud top temperature (CTT), and cloud visible optical thickness τ. The LUT-estimated DCC Re agrees well with the cloud retrievals of the Moderate Resolution Imaging Spectroradiometer (MODIS) for the NASA Clouds and Earth's Radiant Energy System with a correlation coefficient of 0.988 and differences of less than 10%. The LUTs are applied to 1 year of measurements taken from MODIS aboard Aqua in 2007 to estimate DCC Re and are compared to a similar quantity from CloudSat over the region bounded by 140°E, 180°E, 0°N, and 20°N in the Western Pacific Warm Pool. The estimated DCC Re values are mainly concentrated in the range of 25-45 μm and decrease with CTT. Matching the LUT-estimated Re with ice cloud Re retrieved by CloudSat, it is found that the ice cloud τ values from DCC top to the vertical location where LUT-estimated Re is located at the CloudSat-retrieved Re profile are mostly less than 2.5 with a mean value of about 1.3. Changes in the DCC τ can result in differences of less than 10% for Re estimated from LUTs. The LUTs of 0.65 μm bidirectional reflectance distribution function (BRDF) are built as functions of viewing geometry and column amount of ozone above upper troposphere. The 0.65 μm BRDF can eliminate some noncore portions of the DCCs detected using only 11 μm brightness temperature thresholds, which result in a mean difference of only 0.6 μm for DCC Re estimated from BTD LUTs.
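As a minimal illustration of the LUT inversion step, the sketch below inverts a hypothetical one-dimensional slice of a BTD table (fixed viewing geometry, CTT, and τ) by linear interpolation; the grid values are invented for illustration and are not from the paper:

```python
from bisect import bisect_left

# Hypothetical 1-D slice of a BTD look-up table: for a fixed viewing
# geometry, the 3.7 - 11.0 um brightness temperature difference (K)
# decreases as effective radius Re (um) increases.
RE_GRID = [10.0, 20.0, 30.0, 40.0, 50.0]
BTD_GRID = [8.0, 5.0, 3.0, 2.0, 1.5]  # monotonically decreasing

def estimate_re(btd_obs):
    """Invert the LUT by linear interpolation: observed BTD -> Re,
    clamping to the table edges outside the tabulated range."""
    asc = BTD_GRID[::-1]   # ascending BTD axis so bisect applies
    res = RE_GRID[::-1]    # Re values reordered to match
    i = bisect_left(asc, btd_obs)
    if i == 0:
        return res[0]
    if i == len(asc):
        return res[-1]
    frac = (btd_obs - asc[i - 1]) / (asc[i] - asc[i - 1])
    return res[i - 1] + frac * (res[i] - res[i - 1])
```

The full LUTs in the paper interpolate over several more dimensions (solar and satellite zenith angles, relative azimuth, CTT, τ), but each axis is handled with the same table look-up idea.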
NASA Astrophysics Data System (ADS)
Shiri, Jalal; Kisi, Ozgur; Yoon, Heesung; Lee, Kang-Kun; Hossein Nazemi, Amir
2013-07-01
The knowledge of groundwater table fluctuations is important in agricultural lands as well as in studies of groundwater utilization and management. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Networks (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting at horizons from the following day up to 7 days ahead. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008), were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the last two years of data were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records.
Price current-meter standard rating development by the U.S. geological survey
Hubbard, E.F.; Schwarz, G.E.; Thibodeaux, K.G.; Turcios, L.M.
2001-01-01
The U.S. Geological Survey has developed new standard rating tables for use with Price type AA and pygmy current meters, which are employed to measure streamflow velocity. Current-meter calibration data, consisting of the rates of rotation of meters at several different constant water velocities, have shown that the original rating tables are no longer representative of the average responsiveness of newly purchased meters or meters in the field. The new rating tables are based on linear regression equations that are weighted to reflect the population mix of current meters in the field and weighted inversely to the variability of the data at each calibration velocity. For calibration velocities of 0.3 m/s and faster, at which most streamflow measurements are made, the new AA-rating predicts the true velocities within 1.5% and the new pygmy-meter rating within 2.0% for more than 95% of the meters. At calibration velocities, the new AA-meter rating is up to 1.4% different from the original rating, and the new pygmy-meter rating is up to 1.6% different.
Catching up with Harvard: Results from Regression Analysis of World Universities League Tables
ERIC Educational Resources Information Center
Li, Mei; Shankar, Sriram; Tang, Kam Ki
2011-01-01
This paper uses regression analysis to test if the universities performing less well according to Shanghai Jiao Tong University's world universities league tables are able to catch up with the top performers, and to identify national and institutional factors that could affect this catching up process. We have constructed a dataset of 461…
A Temperature Sensor using a Silicon-on-Insulator (SOI) Timer for Very Wide Temperature Measurement
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad; Elbuluk, Malik; Culley, Dennis E.
2008-01-01
A temperature sensor based on a commercial-off-the-shelf (COTS) Silicon-on-Insulator (SOI) Timer was designed for extreme temperature applications. The sensor can operate under a wide temperature range from hot jet engine compartments to cryogenic space exploration missions. For example, in Jet Engine Distributed Control Architecture, the sensor must be able to operate at temperatures exceeding 150 C. For space missions, extremely low cryogenic temperatures need to be measured. The output of the sensor, which consisted of a stream of digitized pulses whose period was proportional to the sensed temperature, can be interfaced with a controller or a computer. The data acquisition system would then give a direct readout of the temperature through the use of a look-up table, a built-in algorithm, or a mathematical model. Because of the wide range of temperature measurement and because the sensor is made of carefully selected COTS parts, this work is directly applicable to the NASA Fundamental Aeronautics/Subsonic Fixed Wing Program--Jet Engine Distributed Engine Control Task and to the NASA Electronic Parts and Packaging (NEPP) Program. In the past, a temperature sensor was designed and built using an SOI operational amplifier, and a report was issued. This work used an SOI 555 timer as its core and is completely new work.
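The period-to-temperature conversion mentioned above can use a look-up table, a built-in algorithm, or a mathematical model. Below is a minimal sketch of the model approach: fitting a straight line T = a·period + b to calibration points by least squares. The calibration numbers are illustrative placeholders, not measured data from the sensor:

```python
def fit_linear(periods_us, temps_c):
    """Least-squares fit of T = a*period + b from calibration pairs,
    serving as the 'mathematical model' alternative to a look-up table."""
    n = len(periods_us)
    sx = sum(periods_us)
    sy = sum(temps_c)
    sxx = sum(p * p for p in periods_us)
    sxy = sum(p * t for p, t in zip(periods_us, temps_c))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical calibration: pulse period (us) vs. temperature (deg C),
# spanning a cryogenic-to-hot range as in the abstract.
a, b = fit_linear([100.0, 200.0, 300.0], [-150.0, 0.0, 150.0])
```

A data acquisition system would apply `a * period + b` to each measured pulse period to read out temperature directly.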
Assessment of Biases in MODIS Surface Reflectance Due to Lambertian Approximation
NASA Technical Reports Server (NTRS)
Wang, Yujie; Lyapustin, Alexei I.; Privette, Jeffrey L.; Cook, Robert B.; SanthanaVannan, Suresh K.; Vermote, Eric F.; Schaaf, Crystal
2010-01-01
Using MODIS data and the AERONET-based Surface Reflectance Validation Network (ASRVN), this work studies errors of MODIS atmospheric correction caused by the Lambertian approximation. On one hand, this approximation greatly simplifies the radiative transfer model, reduces the size of the look-up tables, and makes the operational algorithm faster. On the other hand, uncompensated atmospheric scattering caused by the Lambertian model systematically biases the results. For example, for a typical bowl-shaped bidirectional reflectance distribution function (BRDF), the derived reflectance is underestimated at high solar or view zenith angles, where BRDF is high, and is overestimated at low zenith angles where BRDF is low. The magnitude of the biases grows with the amount of scattering in the atmosphere, i.e., at shorter wavelengths and at higher aerosol concentration. The slope of the regression of Lambertian surface reflectance vs. ASRVN bidirectional reflectance factor (BRF) is about 0.85 in the red and 0.6 in the green bands. This error propagates into the MODIS BRDF/albedo algorithm, slightly reducing the magnitude of overall reflectance and the anisotropy of BRDF. This results in a small negative bias of spectral surface albedo. An assessment for the GSFC (Greenbelt, USA) validation site shows an albedo reduction of 0.004 in the near infrared, 0.005 in the red, and 0.008 in the green MODIS bands.
Vidovic, Luka; Majaron, Boris
2014-02-01
Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on advance characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.
Anterior chamber blood cell differentiation using spectroscopic optical coherence tomography
NASA Astrophysics Data System (ADS)
Qian, Ruobing; McNabb, Ryan P.; Kuo, Anthony N.; Izatt, Joseph A.
2018-02-01
There is great clinical importance in identifying cellular responses in the anterior chamber (AC), which can indicate signs of hyphema (an accumulation of red blood cells (RBCs)) or aberrant intraocular inflammation (an accumulation of white blood cells (WBCs)). These responses are difficult to diagnose and require specialized equipment such as ophthalmic microscopes and specialists trained in examining the eye. In this work, we applied spectroscopic OCT to differentiate between RBCs and subtypes of WBCs, including neutrophils, lymphocytes and monocytes, both in vitro and in the ACs of porcine eyes. We located and tracked single cells in OCT volumetric images, and extracted the spectroscopic data of each cell from the detected interferograms using a short-time Fourier transform (STFT). A look-up table of Mie spectra was generated and used to correlate the spectroscopic data of single cells to their characteristic sizes. The accuracy of the method was first validated on 10 μm polystyrene microspheres. For RBCs and subtypes of WBCs, the extracted size distributions based on the best Mie-spectra fit were significantly different between each cell type according to the Wilcoxon rank-sum test. A similar size distribution of neutrophils was also acquired in measurements of cells introduced into the ACs of porcine eyes, further supporting spectroscopic OCT as a means of differentiating and quantifying blood cell types in the AC in vivo.
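The look-up step that pairs each cell's spectroscopic profile with its best-fitting Mie spectrum can be sketched as a least-squares search over a table of precomputed spectra. The table entries below are invented placeholders, not actual Mie calculations:

```python
def best_fit_size(observed, lut):
    """Return the characteristic size whose precomputed spectrum
    minimizes the sum-of-squares misfit to the observed profile."""
    def sse(model):
        return sum((o - m) ** 2 for o, m in zip(observed, model))
    return min(lut, key=lambda size: sse(lut[size]))

# Hypothetical look-up table: size (um) -> sampled spectral profile.
MIE_LUT = {
    6.0:  [0.9, 0.7, 0.4, 0.2],
    8.0:  [0.8, 0.6, 0.5, 0.3],
    10.0: [0.7, 0.5, 0.6, 0.4],
}
```

Applied per tracked cell, this yields a size estimate whose distribution can then be compared across cell types, as in the abstract.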
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, R. A.
In the literature, the abundance of pipe network junction models, as well as the inclusion of dissipative losses between connected pipes via loss coefficients, has been treated using the incompressible flow assumption of constant density. This approach is fundamentally, physically wrong for compressible flow with density change. This report introduces a mathematical modeling approach for general junctions in piping network systems for which the transient flows are compressible and single-phase. The junction could be as simple as a 1-pipe input and 1-pipe output with differing pipe cross-sectional areas for which a dissipative loss is necessary, or it could include an active component, between an inlet pipe and an outlet pipe, such as a pump or turbine. In this report, discussion will be limited to the former. A more general branching junction connecting an arbitrary number of pipes with transient, 1-D compressible single-phase flows is also presented. These models will be developed in a manner consistent with the use of a general equation of state such as, for example, the recent Spline-Based Table Look-up method [1] for incorporating the IAPWS-95 formulation [2] to give accurate and efficient property calculations for water and steam with RELAP-7 [3].
Operating experience with existing light sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barton, M.Q.
It is instructive to consider what an explosive growth there has been in the development of light sources using synchrotron radiation. This is well illustrated by the list of facilities given in Table I. In many cases, synchrotron light facilities have been obtained by tacking parasitic beam lines onto rings that were built for high energy physics. Of the twenty-three facilities in this table, however, eleven were built explicitly for synchrotron radiation. Another seven have by now been converted for use as dedicated facilities, leaving only five that share time with high energy physics. These five parasitically operated facilities are still among our best sources of hard x-rays, however, and their importance to the fields of science where these x-rays are needed must be emphasized. While the number of facilities in this table is impressive, it is even more impressive to add up the total number of user beam lines. Most of these rings are absolutely surrounded by beam lines, and finding real estate on the experimental floor of one of these facilities for adding a new experiment looks about as practical as adding a farm in the middle of Manhattan. Nonetheless, the managers of these rings seem to have an attitude of ''always room for one more,'' and new experimental beam lines do appear. This situation is necessary because the demand for beam time has exploded at an even faster rate than the development of the facilities. The field is not only growing, it can be expected to continue to grow for some time. Some of the explicit plans for future development will be discussed in the companion paper by Lee Teng.
NASA Astrophysics Data System (ADS)
This has always been the major objection to its use by those not driven by the need to typeset mathematics, since the “what-you-see-is-what-you-get” (WYSIWYG) packages offered by Microsoft Word and WordPerfect are easy to learn and use. Recently, however, commercial software companies have begun to market almost-WYSIWYG programs that create LaTeX files. Some commercial software packages that create LaTeX files are listed in Table 1. EXP and SWP have some of the “look and feel” of the software that is popular in offices, and PCTeX32 allows quick and convenient previews of the translated LaTeX files.
Pedagogies That Explore Food Practices: Resetting the Table for Improved Eco-Justice
ERIC Educational Resources Information Center
Harris, Carol E.; Barter, Barbara G.
2015-01-01
As health threats appear with increasing regularity in our food systems and other food crises loom worldwide, we look to rural areas to provide local and nutritious foods. Educationally, we seek approaches to food studies that engage students and their communities and, ultimately, lead to positive action. Yet food studies receive only generic…
Temporary Personal Radioactivity
ERIC Educational Resources Information Center
Myers, Fred
2012-01-01
As part of a bone scan procedure to look for the spread of prostate cancer, I was injected with radioactive technetium. In an effort to occupy/distract my mind, I used a Geiger counter to determine if the radioactive count obeyed the inverse-square law as a sensor was moved away from my bladder by incremental distances. (Contains 1 table and 2…
A Comparison of Keyboarding Software for the Elementary Grades. A Quarterly Report.
ERIC Educational Resources Information Center
Nolf, Kathleen; Weaver, Dave
This paper provides generalizations and ideas on what to look for when previewing software products designed for teaching or improving the keyboarding skills of elementary school students, a list of nine products that the MicroSIFT (Microcomputer Software and Information for Teachers) staff recommends for preview, and a table of features comparing…
Montessori Infant and Toddler Programs: How Our Approach Meshes with Other Models
ERIC Educational Resources Information Center
Miller, Darla Ferris
2011-01-01
Today, Montessori infant & toddler programs around the country usually have a similar look and feel--low floor beds, floor space for movement, low shelves, natural materials, tiny wooden chairs and tables for eating, and not a highchair or swing in sight. But Montessori toddler programs seem to fall into two paradigms--one model seeming more…
This is a 2016 table that looks at oil and natural gas industry site types and lists the applicable rules for the 2012 and 2016 new source performance standards (NSPS) and Volatile Organic Compounds (VOC) rules.
Look-Listen Opinion Poll, 1983-1984. Project of the National Telemedia Council, Inc.
ERIC Educational Resources Information Center
National Telemedia Council, Inc., Madison, WI.
Designed to indicate the reasons behind viewer program preferences, this report presents results of a survey which asked 1,576 television viewers (monitors) to evaluate programs they liked, did not like, and/or new programs. Tables summarize the findings for why programs were chosen, their technical quality, content realism, overall quality, and…
ERIC Educational Resources Information Center
Mori, Junko
2003-01-01
Investigates how Japanese and American students initiate topical talk as they get acquainted with each other during their initial encounter at a student-organized conversation table. Looks at the observable and reportable ways in which the participants demonstrate the relevance, or the irrelevance, of interculturality in the development of the…
24. Photographic copy of undated photo; Photographer unknown; Original in ...
24. Photographic copy of undated photo; Photographer unknown; Original in Rath collection at Iowa State University Libraries, Ames, Iowa; Filed under: Rath Packing Company, Printed Photographs, Symbol M, Box 2; REMOVING HIDES ON THE MOVING SKINNING TABLE; LOOKING NORTH - Rath Packing Company, Beef Killing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA
NASA Astrophysics Data System (ADS)
Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.
2018-04-01
Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data with image source density peaks over 170,000 per field of view (500,000 deg⁻²). Our analysis demonstrates that horizontal table partitions of one-degree declination widths control the query run times. An index strategy in which the partitions are densely sorted by source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema the limiting cadence the pipeline achieved when processing IPHAS data is 25 s.
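The one-degree declination partitioning described above can be illustrated with a toy in-memory structure: sources are bucketed by integer declination so that a positional look-up only scans the neighbouring bands rather than the whole table. The real pipeline does this inside a column-store database; this Python sketch only mirrors the idea:

```python
import math
from collections import defaultdict

class DecPartitionedCatalogue:
    """Toy analogue of one-degree declination partitions: a cone
    search touches only the bands that can contain matches."""
    def __init__(self):
        self.partitions = defaultdict(list)  # int dec band -> [(ra, dec, id)]

    def add(self, ra, dec, src_id):
        self.partitions[math.floor(dec)].append((ra, dec, src_id))

    def match(self, ra, dec, radius_deg):
        # Scan only the declination bands overlapping the search cone.
        lo = math.floor(dec - radius_deg)
        hi = math.floor(dec + radius_deg)
        hits = []
        for band in range(lo, hi + 1):
            for r, d, sid in self.partitions.get(band, []):
                # Small-angle flat-sky distance; adequate for a sketch.
                dra = (r - ra) * math.cos(math.radians(dec))
                if math.hypot(dra, d - dec) <= radius_deg:
                    hits.append(sid)
        return hits
```

With one-degree bands, a typical arcsecond-scale cross-match radius means at most two partitions are ever scanned per source.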
Utilization of ontology look-up services in information retrieval for biomedical literature.
Vishnyakova, Dina; Pasche, Emilie; Lovis, Christian; Ruch, Patrick
2013-01-01
Given the vast amount of biomedical data, information retrieval processes in the biomedical domain need to be improved. The use of biomedical ontologies has facilitated the combination of various data sources (e.g. scientific literature, clinical data repositories) by increasing the quality of information retrieval and reducing maintenance efforts. In this context, we developed Ontology Look-up Services (OLS) based on the NEWT and MeSH vocabularies. Our services were used in information retrieval tasks such as gene/disease normalization. The implementation of the OLS services significantly accelerated the extraction of particular biomedical facts by structuring and enriching the data context. Precision in the normalization tasks was boosted by about 20%.
Setting up an Online Panel Representative of the General Population: The German Internet Panel
ERIC Educational Resources Information Center
Blom, Annelies G.; Gathmann, Christina; Krieger, Ulrich
2015-01-01
This article looks into the processes and outcomes of setting up and maintaining a probability-based longitudinal online survey, which is recruited face-to-face and representative of both the online and the offline population aged 16-75 in Germany. This German Internet Panel studies political and economic attitudes and reform preferences through…
ERIC Educational Resources Information Center
Jones, David R.
2017-01-01
The aim of this paper is to explore the institutional impact of sustainability league tables on current university agendas. It focuses on a narrative critique of one such league table, the UK's "Green League Table", compiled and reported by the student campaigning NGO, "People & Planet" annually between 2007 and 2013.…
Summary of Round Table Session and Appendixes
1992-01-01
The round table session was designed for interaction between the presenters and other round table participants. Twelve round tables, each capable of holding 10 participants, were set up in one room. Presenters for the sessions were encouraged to lead discussions on one of many topics in these areas: a research idea that the presenter was just formulating; an unpublished...
78 FR 79061 - Noise Exposure Map Notice; Key West International Airport, Key West, FL
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-27
..., Flight Track Utilization by Aircraft Category for East Flow Operations; Table 4-3, Flight Track Utilization by Aircraft Category for West Flow Operations; Table 4-4, 2013 Air Carrier Flight Operations; Table 4-5, 2013 Commuter and Air Taxi Flight Operations; Table 4-6, 2013 Average Daily Engine Run-Up...
NASA Astrophysics Data System (ADS)
Zimmer, A. L.; Minsker, B. S.; Schmidt, A. R.; Ostfeld, A.
2011-12-01
Real-time mitigation of combined sewer overflows (CSOs) requires evaluation of multiple operational strategies during rapidly changing rainfall events. Simulation models for hydraulically complex systems can effectively provide decision support for short time intervals when coupled with efficient optimization. This work seeks to reduce CSOs for a test case roughly based on the North Branch of the Chicago Tunnel and Reservoir Plan (TARP), which is operated by the Metropolitan Water Reclamation District of Greater Chicago (MWRDGC). The North Branch tunnel flows to a junction with the main TARP system. The Chicago combined sewer system alleviates potential CSOs by directing high interceptor flows through sluice gates and dropshafts to a deep tunnel. Decision variables to control CSOs consist of sluice gate positions that control water flow to the tunnel as well as a treatment plant pumping rate that lowers interceptor water levels. A physics-based numerical model is used to simulate the hydraulic effects of changes in the decision variables. The numerical model is step-wise steady and conserves water mass and momentum at each time step by iterating through a series of look-up tables. The look-up tables are constructed offline to avoid extensive real-time calculations, and describe conduit storage and water elevations as a function of flow. A genetic algorithm (GA) is used to minimize CSOs at each time interval within a moving horizon framework. Decision variables are coded at 15-minute increments and GA solutions are two hours in duration. At each 15-minute interval, the algorithm identifies a good solution for a two-hour rainfall forecast. Three GA modifications help reduce optimization time. The first adjustment reduces the search alphabet by eliminating sluice gate positions that do not influence overflow volume. 
The second modification retains knowledge of the best decision at the previous interval by shifting the genes of the best previous sequence to initialize the search at the new interval. The third is a micro-GA with a small population size and high diversity. Current tunnel operations attempt to avoid dropshaft geysers by simultaneously closing all sluice gates when the downstream end of the deep tunnel pressurizes. In an effort to further reduce CSOs, this research introduces a constraint that specifies a maximum allowable tunnel flow to prevent pressurization. The downstream junction depth is bounded by two flow conditions: a low tunnel water level represents inflow from the main system only, while a higher level includes main system flow as well as all possible North Branch inflow. If the lower of the two tunnel levels is pressurized, no North Branch flow is allowed to enter the junction. If only the higher level pressurizes, a linear rating is used to restrict the total North Branch flow below the volume that pressurizes the boundary. The numerical model is successfully calibrated against EPA SWMM and efficiently portrays system hydraulics in real time. Results for the three GA approaches, as well as the impacts of various policies for the downstream constraint, will be presented at the conference.
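The offline look-up idea above — tabulate each conduit's hydraulics once, then interpolate at every 15-minute step instead of re-solving — can be sketched as follows. The function names and table values are invented for illustration; the real tables relate conduit storage and water elevation to flow.

```python
from bisect import bisect_right

def build_rating(flows, elevations):
    """Offline: store a monotone flow -> water-elevation rating table."""
    assert all(a < b for a, b in zip(flows, flows[1:])), "flows must increase"
    return (list(flows), list(elevations))

def elevation_at(rating, q):
    """Run time: clamped linear interpolation in the look-up table,
    replacing an expensive hydraulic solve inside the step-wise steady model."""
    flows, elevs = rating
    if q <= flows[0]:
        return elevs[0]
    if q >= flows[-1]:
        return elevs[-1]
    i = bisect_right(flows, q) - 1
    frac = (q - flows[i]) / (flows[i + 1] - flows[i])
    return elevs[i] + frac * (elevs[i + 1] - elevs[i])
```

Iterating such table look-ups until mass and momentum balance at each time step is what keeps the model fast enough for real-time GA optimization.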
PROJECT MERCURY SUMMARY CONFERENCE - NASA - HOUSTON, TX
1963-10-01
In October 1963, the Project Mercury Summary Conference was held in the Houston, TX, Coliseum. This series of 44 photos is documentation of that conference. A view of the Houston, TX, Coliseum, and the parking area in front, with a Mercury Redstone Rocket set up in the parking lot for display (S63-16451). A view of an Air Force Atlas Rocket, a Mercury Redstone Rocket, and a Mercury Spacecraft on a test booster on display in the front area of the Coliseum (S63-16452). A view of an Air Force Atlas Rocket and a Mercury Redstone Rocket set up for display with the Houston City Hall in the background (S63-16453). This view shows the Atlas Rocket, Mercury Redstone, and Mercury Test Rocket with the Houston, TX, Coliseum in the background (S63-16454). A balcony view, from the audience right side, of the attendees looking at the stage (S63-16455). A view of the NASA Space Science Demonstration with equipment set up on a table at center stage and a Space Science Specialist briefing the group as he pours liquid oxygen into a beaker (S63-16456). View of the audience from the balcony on the audience right, showing the speaker's lectern on stage to the audience left (S63-16457). A view of attendees in the lobby. Bennet James, MSC Public Affairs Office, is seen to the left of center (S63-16458). Another view of the attendees in the lobby (S63-16459). In this view, Astronaut Neil Armstrong is seen writing as others look on (S63-16460). In this view of the attendees, Astronauts Buzz Aldrin and Walt Cunningham are seen in the center of the shot. The October Calendar of Events is visible in the background (S63-16461). Dr. Charles Berry is seen in this view to the right of center, seated in the audience (S63-16462). View of "Special Registration" and the five ladies working there (S63-16463). A view from behind the special registration table of the attendees being registered (S63-16464). A view of a conference table with a panel seated. (R-L): Dr. Robert R. Gilruth, Hugh L. Dryden, Walter C.
Williams, and an unidentified man (S63-16465). A closeup of the panel at the table with Dr. Gilruth on the left (S63-16466). About the same shot as number S63-16462, Dr. Berry is seen in this shot as well (S63-16467). In this view the audio setup is seen. In the audience, (L-R): C. C. Kraft and Vernon E. (Buddy) Powell, Public Affairs Office (PAO); in the foreground mixing the audio is Art Tantillo; and at the recorder is Doyle Hodges; both of the audio people are contractors who work for PAO at MSC (S63-16468). In this view Maxime Faget is seen speaking at the lectern (S63-16469). Unidentified person at the lectern (S63-16470). In this view the motion picture cameras and personnel are shown documenting the conference (S63-16471). A motion picture cameraman in the balcony is shown filming the audience during a break (S63-16472). Family members enjoy an exhibit (S63-16473). A young person gets a boost to look in a Gemini Capsule on display (S63-16474). A young person looks at the Gemini Capsule on display (S63-16475). Dr. Robert R. Gilruth is seen at the conference table (S63-16476). Walt Williams is seen in this view at the conference table (S63-16477). Unidentified man sitting next to Walt Williams (S63-16478). (L-R): Seated at the conference table, Dr. Robert Gilruth, Hugh L. Dryden, and Walt Williams (S63-16479). Group in the lobby, faces visible, (L-R): Walt Williams, unidentified person, Dr. Robert Gilruth, Congressman (S63-16480). Man in uniform at the lectern (S63-16481). Astronaut Leroy Gordon Cooper at the lectern (S63-16482). Astronaut Cooper at the lectern with a picture on the screen with the title "Astronaut Names for Spacecraft" (S63-16483). Dr. Gilruth at the lectern (S63-16484). Walt Williams at the lectern (S63-16485). Unidentified man at the lectern (S63-16486). John H. Boynton addresses the Summary Conference (S63-16487). (L-R): Astronaut Leroy Gordon Cooper, Mrs. Cooper, Senator Cris Cole, and Mrs. Cole (S63-16488).
In this view in the lobby, Senator and Mrs. Cris Cole, with Astronaut Gordon Cooper standing near the heatshield, and Mrs. Cooper; next, on the right, is a press photographer (S63-16489). (L-R): Astronaut L. Gordon Cooper and Mrs. Cooper, an unidentified man, and Senator Walter Richter (S63-16490). (L-R): Eugene Horton, partially obscured, briefs a group on the Mercury Spacecraft, an unidentified person, Harold Ogden, a female senator, Senator Chris Cole, Mrs. Cole, an unidentified female, Senator Walter Richter, Jim Bower, and an unidentified female (S63-16491). In this view, Mrs. Jim Bates is seen in the center, and Senator Walter Richter to the right (S63-16492). The next three (3) shots are 4X5 CN (S63-16493 - S63-16495). In this view a NASA Space Science Demonstration is seen (S63-16493). In this view a shot of the conference table is seen, and, (L-R): Dr. Robert R. Gilruth, Hugh L. Dryden, Mr. Walter Williams, and an unidentified man (S63-16494 - S63-16495). HOUSTON, TX
NASA Astrophysics Data System (ADS)
Srinivasa, K. G.; Shree Devi, B. N.
2017-10-01
String searching in documents has become a tedious task with the evolution of Big Data. The generation of large data sets demands a high-performance search algorithm in areas such as text mining, information retrieval and many others. The popularity of GPUs for general-purpose computing has been increasing for various applications, so it is of great interest to exploit the thread feature of a GPU to provide a high-performance search algorithm. This paper proposes an optimized new approach to the N-gram model for string search in a number of lengthy documents and its GPU implementation. The algorithm exploits GPGPUs for searching strings in many documents, employing character-level N-gram matching with a parallel Score Table approach and search using the CUDA API. The new approach of a Score Table, used for frequency storage of N-grams in a document, makes the search independent of the document's length and allows faster access to the frequency values, thus decreasing the search complexity. The extensive thread feature of a GPU has been exploited to enable parallel pre-processing of trigrams in a document for Score Table creation and parallel search in a huge number of documents, speeding up the whole search process even for a large pattern size. Experiments were carried out on many documents of varied length, with search strings from the standard Lorem Ipsum text, on NVIDIA's GeForce GT 540M GPU with 96 cores. The results show that the parallel approach to Score Table creation and searching gives a good speed-up over the same approach executed serially.
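The Score Table idea — pre-count character-level n-grams per document so a later search only touches the pattern's n-grams — reduces to the following serial sketch; the paper's contribution is doing both stages in parallel on the GPU, and the function names here are illustrative.

```python
from collections import Counter

def build_score_table(doc, n=3):
    """Pre-processing: frequency of every character-level n-gram
    (trigram by default); built once per document."""
    return Counter(doc[i:i + n] for i in range(len(doc) - n + 1))

def score(table, pattern, n=3):
    """Search: sum the stored frequencies of the pattern's n-grams.
    Cost depends on the pattern length, not the document length."""
    return sum(table[pattern[i:i + n]] for i in range(len(pattern) - n + 1))
```

Because each look-up is an O(1) hash access, scoring a pattern against a pre-built table is independent of how long the underlying document is — the property the abstract highlights.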
A Retrieval of Tropical Latent Heating Using the 3D Structure of Precipitation Features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Fiaz; Schumacher, Courtney; Feng, Zhe
Traditionally, radar-based latent heating retrievals use rainfall to estimate the total column-integrated latent heating and then distribute that heating in the vertical using a model-based look-up table (LUT). In this study, we develop a new method that uses size characteristics of radar-observed precipitating echo (i.e., area and mean echo-top height) to estimate the vertical structure of latent heating. This technique (named the Convective-Stratiform Area [CSA] algorithm) builds on the fact that the shape and magnitude of latent heating profiles depend on the organization of convective systems, and aims to avoid some of the pitfalls involved in retrieving accurate rainfall amounts and microphysical information from radars and models. The CSA LUTs are based on a high-resolution Weather Research and Forecasting model (WRF) simulation whose domain spans much of the near-equatorial Indian Ocean. When applied to S-PolKa radar observations collected during the DYNAMO/CINDY2011/AMIE field campaign, the CSA retrieval compares well to heating profiles from a sounding-based budget analysis and improves upon a simple rain-based latent heating retrieval. The CSA LUTs also highlight the fact that convective latent heating increases in magnitude and height as cluster area and echo-top heights grow, with a notable congestus signature of cooling at mid levels. Stratiform latent heating is less dependent on echo-top height, but is strongly linked to area. Unrealistic latent heating profiles in the stratiform LUT, viz., a low-level heating spike, an elevated melting layer, and net column cooling, were identified and corrected for. These issues highlight the need for improvement in model parameterizations, particularly in linking microphysical phase changes to larger mesoscale processes.
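A toy version of the CSA look-up step — bin a precipitating feature by area and echo-top height and return the stored heating profile for the nearest bin — might look like this; the bin centres and profiles below are placeholders, not the WRF-derived values.

```python
def nearest_bin(value, centres):
    """Index of the closest bin centre."""
    return min(range(len(centres)), key=lambda i: abs(centres[i] - value))

def heating_profile(lut, area_bins, eth_bins, area_km2, echo_top_km):
    """CSA-style retrieval: the feature's size characteristics,
    not its rain rate, select the latent-heating profile."""
    return lut[(nearest_bin(area_km2, area_bins),
                nearest_bin(echo_top_km, eth_bins))]
```

In the real algorithm each LUT entry is a full vertical heating profile built from the model simulation, retrieved separately for convective and stratiform echo.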
NASA Astrophysics Data System (ADS)
Taasti, Vicki T.; Michalak, Gregory J.; Hansen, David C.; Deisher, Amanda J.; Kruse, Jon J.; Krauss, Bernhard; Muren, Ludvig P.; Petersen, Jørgen B. B.; McCollough, Cynthia H.
2018-01-01
Dual energy CT (DECT) has been shown, in theoretical and phantom studies, to improve the stopping power ratio (SPR) determination used for proton treatment planning compared to the use of single energy CT (SECT). However, it has not been shown that this also extends to organic tissues. The purpose of this study was therefore to investigate the accuracy of SPR estimation for fresh pork and beef tissue samples used as surrogates of human tissues. The reference SPRs for fourteen tissue samples, which included fat, muscle and femur bone, were measured using proton pencil beams. The tissue samples were subsequently CT scanned using four different scanners with different dual energy acquisition modes, giving in total six DECT-based SPR estimations for each sample. The SPR was estimated using a proprietary algorithm (syngo.via DE Rho/Z Maps, Siemens Healthcare, Forchheim, Germany) for extracting the electron density and the effective atomic number. SECT images were also acquired, and SECT-based SPR estimations were performed using a clinical Hounsfield look-up table. The mean and standard deviation of the SPR over large volumes of interest were calculated. For the six different DECT acquisition methods, the root-mean-square errors (RMSEs) for the SPR estimates over all tissue samples were between 0.9% and 1.5%. For the SECT-based SPR estimation the RMSE was 2.8%. For one DECT acquisition method, a positive bias was seen in the SPR estimates, having a mean error of 1.3%. The largest errors were found in the very dense cortical bone from a beef femur. This study confirms the advantages of DECT-based SPR estimation, although good results were also obtained using SECT for most tissues.
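The RMSE figures quoted above are root-mean-square relative errors of the estimated SPR against the beam-measured reference, taken over the tissue samples; a minimal sketch of that metric (the sample values in the usage line are invented, not the study's data):

```python
def rmse_percent(estimated, reference):
    """Root-mean-square of the per-sample relative SPR error, in percent."""
    rel = [100.0 * (e - r) / r for e, r in zip(estimated, reference)]
    return (sum(x * x for x in rel) / len(rel)) ** 0.5
```

For example, two samples estimated at 1.01 and 0.99 against references of 1.00 give an RMSE of 1%.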
Andrenucci, Andrea
2016-01-01
Few studies have been performed within cross-language information retrieval (CLIR) in the field of psychology and psychotherapy. The aim of this paper is to analyze and assess the quality of available query translation methods for CLIR on a health portal for psychology. A test base of 100 user queries, 50 Multi-Word Units (WUs) and 50 Single WUs, was used. Swedish was the source language and English the target language. Query translation methods based on machine translation (MT) and dictionary look-up were used to submit query translations to two search engines: Google Site Search and Quick Ask. Standard IR evaluation measures and a qualitative analysis were used to assess the results. The lexicon extracted with word alignment of the portal's parallel corpus provided better statistical results among the dictionary look-ups. Google Translate provided more linguistically correct translations overall and also delivered better retrieval results in MT.
Determining Greenland Ice Sheet Accumulation Rates from Radar Remote Sensing
NASA Technical Reports Server (NTRS)
Jezek, Kenneth C.
2002-01-01
An important component of NASA's Program for Arctic Regional Climate Assessment (PARCA) is a mass balance investigation of the Greenland Ice Sheet. The mass balance is calculated by taking the difference between the areally integrated snow accumulation and the net ice discharge of the ice sheet. Uncertainties in this calculation include the snow accumulation rate, which has traditionally been determined by interpolating data from ice core samples taken at isolated spots across the ice sheet. The sparse data associated with ice cores, juxtaposed against the high spatial and temporal resolution provided by remote sensing, has motivated scientists to investigate relationships between accumulation rate and microwave observations as an option for obtaining spatially contiguous estimates. The objective of this PARCA continuation proposal was to complete an estimate of the surface accumulation rate on the Greenland Ice Sheet derived from C-band radar backscatter data compiled in the ERS-1 SAR mosaic of data acquired during September-November 1992. An empirical equation, based on elevation and latitude, is used to determine the mean annual temperature. We examine the influence of accumulation rate and mean annual temperature on C-band radar backscatter using a forward model, which incorporates snow metamorphosis and radar backscatter components. Our model is run over a range of accumulation and temperature conditions. Based on the model results, we generate a look-up table, which uniquely maps the measured radar backscatter and mean annual temperature to accumulation rate. Our results compare favorably with in situ accumulation rate measurements falling within our study area.
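The look-up-table inversion step — run the forward model over a grid of accumulation and temperature values, then map a measured backscatter plus temperature back to the nearest tabulated accumulation — can be sketched as below. The linear forward model is a stand-in for illustration only, not the snow-metamorphosis/backscatter model used in the work.

```python
def forward_model(accum_mm, temp_c):
    """Stand-in forward model: modelled C-band backscatter (dB) as a toy
    linear function of accumulation rate and mean annual temperature."""
    return -10.0 + 0.02 * accum_mm - 0.1 * temp_c

def build_lut(accum_grid, temp_grid):
    """Tabulate modelled backscatter over the whole parameter grid."""
    return [(forward_model(a, t), t, a) for a in accum_grid for t in temp_grid]

def invert(lut, sigma0_db, temp_c):
    """Map a measured backscatter and mean annual temperature to the
    accumulation rate of the nearest table entry."""
    return min(lut, key=lambda e: (e[0] - sigma0_db) ** 2
                                + (e[1] - temp_c) ** 2)[2]
```

The uniqueness of the mapping described in the abstract corresponds to the forward model being monotone in accumulation at fixed temperature, so each (backscatter, temperature) pair selects a single accumulation value.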
Lee, Yi-Ying; Hsu, Chih-Yuan; Lin, Ling-Jiun; Chang, Chih-Chun; Cheng, Hsiao-Chun; Yeh, Tsung-Hsien; Hu, Rei-Hsing; Lin, Che; Xie, Zhen; Chen, Bor-Sen
2013-10-27
Synthetic genetic transistors are vital for signal amplification and switching in genetic circuits. However, it is still problematic to efficiently select the adequate promoters, Ribosome Binding Sites (RBSs) and inducer concentrations to construct a genetic transistor with the desired linear amplification or switching in the Input/Output (I/O) characteristics for practical applications. Three kinds of promoter-RBS libraries, i.e., a constitutive promoter-RBS library, a repressor-regulated promoter-RBS library and an activator-regulated promoter-RBS library, are constructed for systematic genetic circuit design using the identified kinetic strengths of their promoter-RBS components. According to the dynamic model of genetic transistors, a design methodology for genetic transistors via a Genetic Algorithm (GA)-based searching algorithm is developed to search for a set of promoter-RBS components and adequate concentrations of inducers to achieve the prescribed I/O characteristics of a genetic transistor. Furthermore, according to design specifications for different types of genetic transistors, a look-up table is built for genetic transistor design, from which we can easily select an adequate set of promoter-RBS components and adequate concentrations of external inducers for a specific genetic transistor. This systematic design method will reduce the time spent on trial-and-error methods in the experimental procedure for a genetic transistor with a desired I/O characteristic. We demonstrate the applicability of our design methodology to genetic transistors that have desirable linear amplification or switching by employing promoter-RBS library searching.
NASA Astrophysics Data System (ADS)
Zhang, Yuhuan; Li, Zhengqiang; Zhang, Ying; Hou, Weizhen; Xu, Hua; Chen, Cheng; Ma, Yan
2014-01-01
The Geostationary Ocean Color Imager (GOCI) provides multispectral imagery of the East Asia region hourly from 9:00 to 16:00 local time (GMT+9) and collects multispectral imagery at eight spectral channels (412, 443, 490, 555, 660, 680, 745, and 865 nm) with a spatial resolution of 500 m. This technology thus brings significant advantages to high temporal resolution environmental monitoring. We present the retrieval of aerosol optical depth (AOD) in northern China based on GOCI data. Cross-calibration was performed against Moderate Resolution Imaging Spectroradiometer (MODIS) data in order to correct the land calibration bias of the GOCI sensor. AOD retrievals were then accomplished using a look-up table (LUT) strategy with the assumptions of a quickly varying aerosol and a slowly varying surface with time. The AOD retrieval algorithm calculates AOD by minimizing the surface reflectance variations of a series of observations over a short period of time, such as several days. The monitoring of hourly AOD variations was implemented, and the retrieved AOD agreed well with AErosol RObotic NETwork (AERONET) ground-based measurements, with a good R² of approximately 0.74 at validation sites in the cities of Beijing and Xianghe, although the intercept bias may be high in specific cases. Comparisons with MODIS products also show good agreement in AOD spatial distribution. This work suggests that GOCI imagery can provide high temporal resolution monitoring of atmospheric aerosols over land, which is of great interest in climate change studies and environmental monitoring.
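The retrieval principle above — aerosol varies hour to hour while the surface is nearly static, so choose the AOD that keeps the implied surface reflectance constant across a short series of observations — can be sketched as follows. The linear "atmospheric correction" stands in for the radiative-transfer look-up table, and all function names and numbers are illustrative assumptions.

```python
from statistics import median

def surface_reflectance(rho_toa, aod):
    """Toy atmospheric correction: subtract an aerosol path term.
    A real retrieval would interpolate this from a radiative-transfer LUT."""
    return rho_toa - 0.1 * aod

def retrieve_aod(rho_toa_series, aod_grid):
    """Pick, per observation, the grid AOD whose corrected surface
    reflectance is closest to the series' median corrected value,
    i.e. minimise surface variation over the short time window."""
    guess = aod_grid[len(aod_grid) // 2]
    target = median(surface_reflectance(r, guess) for r in rho_toa_series)
    return [min(aod_grid,
                key=lambda a: abs(surface_reflectance(r, a) - target))
            for r in rho_toa_series]
```

With a static surface, observation-to-observation changes in top-of-atmosphere reflectance are attributed entirely to the aerosol term, which is what makes the hourly AOD time series recoverable.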
Top of the charts: download versus citations in the International Journal of Cardiology.
Coats, Andrew J S
2005-11-02
The medical literature is growing at an alarming rate. Research assessment exercises, research quality frameworks, league tables and the like have attempted to quantify the volume, quality and impact of research. Yet the established measures (such as citation rates) are being challenged by the sheer number of journals, variability in the "gold standard" of peer review and the emergence of open-source or web-based journals. In the last few years, we have seen growth in downloads of individual journal articles that now easily exceed formal journal subscriptions. We recorded the top 10 cited articles over a 12-month period and compared them to the 10 most downloaded articles over the same time period. The citation-based listing included basic and applied, observational and interventional original research reports. Downloads for the International Journal of Cardiology have increased dramatically, from 48,000 in 2002 to 120,000 in 2003 to 200,000 in 2004, and the most downloaded articles over the same period are very different, dominated by up-to-date reviews of either cutting-edge topics (such as the potential of stem cells) or the management of rare or unusual conditions. There is no overlap between the two lists despite their covering exactly the same 12-month period and both being measures of peer esteem. Perhaps the time has come to look at the usage of articles rather than, or in addition to, their referencing.
NASA Astrophysics Data System (ADS)
Chang, Kuo-En; Hsiao, Ta-Chih; Hsu, N. Christina; Lin, Neng-Huei; Wang, Sheng-Hsiang; Liu, Gin-Rong; Liu, Chian-Yi; Lin, Tang-Huang
2016-08-01
In this study, an approach to determining the effective mixing weight of soot aggregates in dust-soot aerosols is proposed to improve the accuracy of retrieving the properties of polluted dust by means of satellite remote sensing. Based on a pre-computed database containing several variables (such as wavelength, refractive index, soot mixing weight, surface reflectivity, observation geometries and aerosol optical depth (AOD)), fan-shaped look-up tables can be drawn accordingly for determining the mixing weights, AOD and single scattering albedo (SSA) of polluted dusts simultaneously, with auxiliary regional dust properties and surface reflectivity. To validate the performance of the approach, six case studies of polluted dusts (dust-soot aerosols) in Lower Egypt and Israel were examined against ground-based measurements from the AErosol RObotic NETwork (AERONET). The results show that the mean absolute differences could be reduced from 32.95% to 6.56% in AOD and from 2.67% to 0.83% in SSA retrievals for MODIS aerosol products when referenced to AERONET measurements, demonstrating the soundness of the proposed approach under different levels of dust loading, mixing weight and surface reflectivity. Furthermore, the developed algorithm is capable of providing the spatial distribution of the mixing weights, removing the requirement to assume that the dust plume properties are uniform. The case studies further show that the spatially variant dust-soot mixing weight improves the retrieval accuracy of the AOD and SSA of the mixture by about 10.0% and 1.4%, respectively.
Shhh! No Opinions in the Library: "IndyKids" and Kids' Right to an Independent Press
ERIC Educational Resources Information Center
Vender, Amanda
2011-01-01
"Nintendo Power," "Sports Illustrated for Kids," and a biography of President Obama were on prominent display as the author entered the branch library in Forest Hills, Queens. The librarian looked skeptical when the author asked to leave copies of "IndyKids" newspapers on the free literature table. The branch manager…
GEMINI-TITAN (GT)-9 TEST - ASTRONAUT BEAN, ALAN - KSC
1973-08-14
S73-31973 (August 1973) --- Scientist-astronaut Owen K. Garriott, Skylab 3 science pilot, looks at a map of Earth at the food table in the ward room of the Orbital Workshop (OWS). This photographic reproduction was taken from a television transmission made by a color TV camera aboard the Skylab space station cluster in Earth orbit. Photo credit: NASA
Look-Listen Opinion Poll, 1984-1985. Project of the National Telemedia Council, Inc.
ERIC Educational Resources Information Center
Giles, Doris, Ed.; And Others
Designed to indicate the reasons behind viewer program preferences, this 32nd report of an annual opinion poll presents the results of a survey which asked 914 participants to evaluate 3,584 television programs they liked, did not like, and/or to evaluate new programs. Tables summarize the reasons why programs were selected by viewers, their…
Maintaining Multimedia Data in a Geospatial Database
2012-09-01
A different look at PostgreSQL and MySQL as spatial databases was offered. Given their results, as each database produced result sets from zero to 100,000, it was ... excelled given multiple conditions.
ERIC Educational Resources Information Center
Institute for Responsive Education, Boston, MA.
The Institute for Responsive Education and its study team are looking at ways to widen the scope of collective bargaining to provide room for communities to participate in policy formulation in their schools. The traditional management-labor approach was designed to resolve differences about wages, fringe benefits, and the rules, rights, and…
25. Photographic copy of undated photo; Photographer unknown; Original in ...
25. Photographic copy of undated photo; Photographer unknown; Original in Rath collection at Iowa State University Libraries, Ames, Iowa; Filed under: Rath Packing Company, Printed Photographs, Symbol M, Box 2; REMOVING HIDES ON THE SKINNING TABLE; CARCASSES IN HALF-HOIST POSITION; LOOKING SOUTH - Rath Packing Company, Beef Killing Building, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA
A Monte Carlo investigation of lung brachytherapy treatment planning
NASA Astrophysics Data System (ADS)
Sutherland, J. G. H.; Furutani, K. M.; Thomson, R. M.
2013-07-01
Iodine-125 (125I) and Caesium-131 (131Cs) brachytherapy have been used in conjunction with sublobar resection to reduce the local recurrence of stage I non-small cell lung cancer compared with resection alone. Treatment planning for this procedure is typically performed using only a seed activity nomogram or look-up table to determine seed strand spacing for the implanted mesh. Since the post-implant seed geometry is difficult to predict, the nomogram is calculated using the TG-43 formalism for seeds in a planar geometry. In this work, the EGSnrc user-code BrachyDose is used to recalculate nomograms using a variety of tissue models for 125I and 131Cs seeds. Calculated prescription doses are compared to those calculated using TG-43. Additionally, patient CT and contour data are used to generate virtual implants to study the effects that post-implant deformation and patient-specific tissue heterogeneity have on perturbing nomogram-derived dose distributions. Differences of up to 25% in calculated prescription dose are found between TG-43 and Monte Carlo calculations with the TG-43 formalism underestimating prescription doses in general. Differences between the TG-43 formalism and Monte Carlo calculated prescription doses are greater for 125I than for 131Cs seeds. Dose distributions are found to change significantly based on implant deformation and tissues surrounding implants for patient-specific virtual implants. Results suggest that accounting for seed grid deformation and the effects of non-water media, at least approximately, are likely required to reliably predict dose distributions in lung brachytherapy patients.
The Micro-Pulse Lidar Network (MPL-Net)
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Berkoff, Timothy A.; Spinhirne, James D.; Tsay, Si-Chee; Holben, Brent; Shiobara, Masataka; Starr, David OC. (Technical Monitor)
2002-01-01
In the early 1990s, the first small, eye-safe, and autonomous lidar system was developed, the Micro-pulse Lidar (MPL). The MPL has proven to be useful in the field because it can be automated, runs continuously (day and night), is eye-safe, can easily be transported and set up, and has a small field-of-view which limits multiple scattering concerns. The MPL acquires signal profiles of backscattered laser light from aerosols and clouds. The signals are analyzed to yield multiple layer heights, optical depths of each layer, the average extinction-to-backscatter ratio of each layer, and profiles of extinction in each layer. The MPL has been used in a wide variety of field studies over the past 10 years, leading to nearly 20 papers and many conference presentations. In 2000, a new project using MPL systems was started at NASA Goddard Space Flight Center. The MPL-Net project is currently working to establish a worldwide network of MPL systems, all co-located with NASA's AERONET sunphotometers for joint measurements of optical depth and sky radiance. Automated processing algorithms have been developed to produce data products on a next-day basis for all sites and some field experiments. Initial results from the first several sites are shown, along with aerosol data collected during several major field campaigns. Measurements of the aerosol extinction-to-backscatter ratio at several different geographic regions, and for various aerosol types, are shown. This information is used to improve the construction of look-up tables of the ratio, needed to process aerosol profiles acquired with satellite-based lidars.
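The extinction-to-backscatter (lidar) ratio tabulated above is what ties a backscatter profile to an optical depth: given a layer's backscatter profile and an assumed layer-average ratio S, the layer optical depth follows directly. A minimal sketch under that assumption (the helper name and all numbers are illustrative, not MPL-Net processing code):

```python
import numpy as np

def layer_optical_depth(beta, dz, lidar_ratio):
    """Layer optical depth tau = S * integral(beta dz), assuming one
    layer-average extinction-to-backscatter ratio S (the 'lidar ratio')."""
    sigma = lidar_ratio * np.asarray(beta)   # extinction profile [1/m]
    return float(np.sum(sigma) * dz)         # simple rectangle-rule integral

beta = np.full(100, 2.0e-6)   # toy layer backscatter [1/(m sr)], 100 x 10 m bins
dz = 10.0                     # vertical bin size [m]
S = 50.0                      # assumed lidar ratio [sr]
tau = layer_optical_depth(beta, dz, S)
```

With a 1 km layer of constant backscatter this gives tau = 50 x 2e-6 x 1000 = 0.1, which is why an accurate regional S matters for satellite lidar look-up tables.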
The performance of DC restoration function for MODIS thermal emissive bands
NASA Astrophysics Data System (ADS)
Wang, Zhipeng; Xiong, Xiaoxiong Jack; Shrestha, Ashish
2017-09-01
The DC restore (DCR) process of the MODIS instrument maintains the output of a detector at the focal plane assembly (FPA) within the dynamic range of the subsequent analog-to-digital converter by adding a specific offset voltage to the output. The DCR offset value is adjusted per scan, based on the comparison of the detector response in digital number (DN) collected from the blackbody (BB) view with a target DN saved in an on-board look-up table. In this work, the MODIS DCR mechanism is revisited, with the trends of DCR offset being provided for the thermal emissive bands (TEB). Noticeable changes have occasionally been found which coincide with significant detector gain changes due to various instrumental events such as safe-mode anomalies or FPA temperature fluctuations. In general, MODIS DCR functionality has been effective and the change of DCR offset has no impact on the quality of MODIS data. One exception is the Earth view (EV) data saturation of Aqua MODIS LWIR bands 33, 35 and 36 during the BB warm-up cool-down (WUCD) cycle, which has been observed since 2008. The BB view of their detectors saturates when the BB temperature is above a certain threshold, so the DCR cannot work as designed. Therefore, the dark signal DN fluctuates with the cold FPA (CFPA) temperature and saturates for a few hours per WUCD cycle, which also saturates the EV data sector within the scan. The CFPA temperature fluctuation peaked in 2012 and has been reduced in recent years, and the saturation phenomenon has been easing accordingly. This study demonstrates the importance of DCR to data generation.
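The per-scan adjustment can be pictured as a small feedback loop: compare the BB-view DN against the stored target DN and nudge the offset voltage one step in the compensating direction. A toy sketch (the step size, detector gain, and function names are assumptions for illustration, not the actual MODIS on-board logic):

```python
def update_dcr_offset(offset_mv, bb_dn, target_dn, gain_dn_per_mv, step_mv=1.0):
    """One DCR iteration: move the offset voltage one step in the direction
    that drives the blackbody-view response toward the target DN."""
    error = bb_dn - target_dn
    if abs(error) <= gain_dn_per_mv * step_mv:   # within one step of target: hold
        return offset_mv
    return offset_mv - step_mv if error > 0 else offset_mv + step_mv

# toy detector: response rises linearly with the applied offset voltage
response = lambda off: 1000.0 + 5.0 * off
offset, target = 0.0, 950.0
for _scan in range(50):                          # one update per scan
    offset = update_dcr_offset(offset, response(offset), target, gain_dn_per_mv=5.0)
```

The loop also shows the failure mode described above: if the BB view itself saturates, `bb_dn` no longer tracks the true signal and the feedback can no longer hold the dark level in range.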
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paige, Karen Schultz; Gomez, Penelope E.
This document describes the approach Waste and Environmental Services - Environmental Data and Analysis plans to take to resolve the issues presented in a recent audit of the WES-EDA Environmental Database relative to the RACER database. A majority of the issues discovered in the audit will be resolved in May 2011 when the WES-EDA Environmental Database, along with other LANL databases, is integrated and moved to a new vendor providing an Environmental Information Management (EIM) system that allows reporting capabilities for all users directly from the database. The EIM system will reside in a publicly accessible LANL cloud-based software system. When this transition occurs, the data quality, completeness, and access will change significantly. In the remainder of this document, this new structure will be referred to as the LANL Cloud System. In general, our plan is to address the issues brought up in this audit in three ways: (1) Data quality issues such as units and detection status, which impinge upon data usability, will be resolved as soon as possible so that data quality is maintained. (2) Issues requiring data cleanup, such as look-up tables, legacy data, locations, codes, and significant data discrepancies, will be addressed as resources permit. (3) Issues associated with data feed problems will be eliminated by the LANL Cloud System, because there will be no data feed. As discussed above, in the future the data will reside in a publicly accessible system. Note that report writers may choose to convert, adapt, or simplify the information they receive officially through our database, thereby introducing data discrepancies between the database and the public report. It is not always possible to incorporate and/or correct these errors when they occur. Issues in the audit will be discussed in the order in which they are presented in the audit report.
Clarifications will also be noted, as the audit report was a draft document at the time of this response.
Depth of interaction decoding of a continuous crystal detector module.
Ling, T; Lewellen, T K; Miyaoka, R S
2007-04-21
We present a clustering method to extract the depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUT) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses a LUT searching algorithm based on the ML method and two-dimensional mean-variance LUTs of light responses from each photomultiplier channel with respect to different gamma ray interaction positions, the position of interaction and DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two and four DOI region clustering were applied to the simulated data. Two DOI regions were used for the experimental data. The misclassification rate for simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement of the detector spatial resolution, especially for the corner region of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or detector front end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.
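The SBP idea sketched above can be illustrated with a toy mean-variance LUT: for each candidate interaction position (or DOI region) the LUT stores the expected mean and variance of every photomultiplier channel, and the estimate is the entry maximizing the Gaussian log-likelihood of the observed signals. A sketch under those assumptions (the LUT values are invented, not cMiCE calibration data):

```python
import numpy as np

def sbp_estimate(signals, mean_lut, var_lut):
    """Statistics-based positioning sketch: score every LUT entry (candidate
    interaction position / DOI region) by the Gaussian log-likelihood of the
    observed PMT channel signals and return the best index."""
    s = np.asarray(signals, dtype=float)
    log_like = -0.5 * np.sum((s - mean_lut) ** 2 / var_lut + np.log(var_lut), axis=1)
    return int(np.argmax(log_like))

# invented LUT: 3 candidate positions x 4 PMT channels
mean_lut = np.array([[10.0, 2.0, 2.0, 1.0],
                     [4.0, 8.0, 4.0, 2.0],
                     [1.0, 2.0, 2.0, 10.0]])
var_lut = np.ones_like(mean_lut)     # unit variance for simplicity
best = sbp_estimate([4.2, 7.5, 4.1, 2.0], mean_lut, var_lut)
```

Extending the LUT with separate entries per DOI region, as the paper does, lets the same search return position and depth simultaneously.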
Research of spectacle frame measurement system based on structured light method
NASA Astrophysics Data System (ADS)
Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin
2016-10-01
Automatic eyeglass lens edging systems are now widely used to automatically cut and polish the uncut lens based on the spectacle frame shape data obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing around the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light to measure the three-dimensional (3D) data of the spectacle frame is proposed. First, we focus on the processing approach to the problem of deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposure, and high dynamic range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, the gamma transform and median filtering are applied to enhance image contrast. In order to remove background noise from the image and extract the region of interest (ROI), an auxiliary lighting system of special design is utilized to help effectively distinguish between the object and the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between the stripes and the boundary of the spectacle frame. Fringe centers are then extracted and depth information is acquired through a look-up table, recovering the 3D shape of the spectacle frame.
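Two of the preprocessing steps named above, the gamma transform and median filtering, are standard image operations; a minimal NumPy-only sketch (generic versions with invented parameters, since the system's actual implementation is not given in the abstract):

```python
import numpy as np

def gamma_transform(img, gamma):
    """Gamma correction on a normalized image: out = in ** gamma.
    gamma < 1 brightens dark fringe regions; gamma > 1 suppresses glare."""
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    return img ** gamma

def median3x3(img):
    """Minimal 3x3 median filter (edge pixels left unchanged), the kind of
    filter used to knock down speckle between projected fringes."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

spike = np.zeros((3, 3)); spike[1, 1] = 1.0   # single hot pixel
filtered = median3x3(spike)                    # median removes the spike
bright = gamma_transform(np.array([[0.25]]), 0.5)
```

A production pipeline would use a vectorized filter (e.g. from an image library) rather than the explicit loops, but the behavior is the same.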
JPSS-1 VIIRS Version 2 At-Launch Relative Spectral Response Characterization and Performance
NASA Technical Reports Server (NTRS)
Moeller, Chris; Schwarting, Thomas; McIntire, Jeff; Moyer, Dave; Zeng, Jinan
2017-01-01
The relative spectral response (RSR) characterization of the JPSS-1 VIIRS spectral bands achieved at-launch status in the VIIRS Data Analysis Working Group February 2016 Version 2 RSR release. The Version 2 release improves upon the June 2015 Version 1 release by including December 2014 NIST T-SIRCUS spectral measurements of the VIIRS VisNIR bands in the analysis, plus correcting the CO2 influence on the band M13 RSR. The T-SIRCUS-based characterization is merged with the summer 2014 SpMA-based characterization of the VisNIR bands (Version 1 release) to yield a fused RSR for these bands, combining the strengths of the T-SIRCUS and SpMA measurement systems. The M13 RSR is updated by applying a model-based correction to mitigate CO2 attenuation of the SpMA source signal that occurred during the M13 spectral measurements. The Version 2 release carries forward the Version 1 RSR for those bands that were not updated (M8-M12, M14-M16AB, I3-I5, DNBMGS). The Version 2 release includes band-average (over all detectors and subsamples) RSR plus supporting RSR for each detector and subsample. The at-launch band-average RSR have been used to populate Look-Up Tables supporting the sensor data record and environmental data record at-launch science products. Spectral performance metrics show that the JPSS-1 VIIRS RSR are compliant with specifications, with a few minor exceptions. The Version 2 release, which replaces the Version 1 release, is currently available on the password-protected NASA JPSS-1 eRooms under EAR99 control.
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the lack of enough spectral bands in a multi-spectral sensor, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiative transfer model. This method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from the multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed distinct spectral features for the different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectroradiometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated bands and observed bands, indicating that the simulated reflectance spectrum was reliable.
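The LUT retrieval step described above, in its simplest form, amounts to finding the pre-computed parameter set whose simulated multispectral reflectance best matches the observation. A nearest-neighbor sketch (spectra, band count, and parameter labels are all invented, not actual SLC model output):

```python
import numpy as np

def lut_retrieve(observed, lut_spectra, lut_params):
    """Look-up-table inversion sketch: return the parameter set whose
    simulated band reflectances best match the observation (lowest RMSE)."""
    obs = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((lut_spectra - obs) ** 2, axis=1))
    return lut_params[int(np.argmin(rmse))]

# invented 3-band LUT rows: dense vegetation, mixed pixel, bare soil
lut_spectra = np.array([[0.05, 0.08, 0.30],
                        [0.15, 0.18, 0.22],
                        [0.25, 0.30, 0.28]])
lut_params = ["LAI=4", "LAI=1.5", "LAI=0"]
best = lut_retrieve([0.14, 0.19, 0.21], lut_spectra, lut_params)
```

The retrieved parameters (here a single label; in the paper, the sensitive SLC parameters plus the soil ratio factor) are then fed back into the forward model to generate the full 400-2400 nm spectrum.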
"Hyperstat": an educational and working tool in epidemiology.
Nicolosi, A
1995-01-01
The work of a researcher in epidemiology is based on studying the literature, planning studies, gathering data, analyzing data, and writing up results. The researcher therefore needs to perform more or less simple calculations, to consult or quote the literature, to consult textbooks about certain issues or procedures, and to look up specific formulas. There are no programs conceived as a workstation to assist the different aspects of a researcher's work in an integrated fashion. A hypertextual system was developed which supports the different stages of the epidemiologist's work. It combines database management, statistical analysis and planning, and literature searches. The software was developed on the Apple Macintosh using HyperCard 2.1 as a database and HyperTalk as a programming language. The program is structured in 7 "stacks" or files: Procedures; Statistical Tables; Graphs; References; Text; Formulas; Help. Each stack has its own management system with an automated table of contents. Stacks contain "cards" which make up the databases and carry executable programs. The programs are of four kinds: association; statistical procedure; formatting (input/output); database management. The system performs general statistical procedures, procedures applicable to epidemiological studies only (follow-up and case-control), and procedures for clinical trials. All commands are given by clicking the mouse on self-explanatory "buttons". In order to perform calculations, the user only needs to enter the data into the appropriate cells and then click on the selected procedure's button. The system has a hypertextual structure. The user can go from a procedure to other cards following a preferred order of succession and according to built-in associations. The user can access different levels of knowledge or information from any stack he is consulting or operating.
From every card, the user can go to a selected procedure to perform statistical calculations, to the reference database management system, to the textbook in which all procedures and issues are discussed in detail, to the database of statistical formulas with an automated table of contents, to statistical tables with an automated table of contents, or to the help module. The program has a very user-friendly interface and leaves the user free to use the same format he would use on paper. The interface does not require special skills. It reflects the Macintosh philosophy of using windows, buttons, and the mouse. This allows the user to perform complicated calculations without losing the "feel" of the data, to weigh alternatives, and to run simulations. This program shares many features in common with hypertexts. It has an underlying network database where the nodes consist of text, graphics, executable procedures, and combinations of these; the nodes in the database correspond to windows on the screen; the links between the nodes in the database are visible as "active" text or icons in the windows; the text is read by following links and opening new windows. The program is especially useful as an educational tool, directed to medical and epidemiology students. The combination of computing capabilities with a textbook and databases of formulas and literature references makes the program versatile and attractive as a learning tool. The program is also helpful in the work done at the desk, where the researcher examines results, consults the literature, explores different analytic approaches, plans new studies, or writes grant proposals and scientific articles.
Closeup view looking into the nozzle of the Space Shuttle ...
Close-up view looking into the nozzle of the Space Shuttle Main Engine number 2061 looking at the cooling tubes along the nozzle wall and up towards the Main Combustion Chamber and Injector Plate - Space Transportation System, Space Shuttle Main Engine, Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finch, Charlie T.; Zacharias, Norbert; Wycoff, Gary L., E-mail: finch@usno.navy.mi
2010-06-15
Presented here are the details of the astrometric reductions from the x, y data to mean right ascension (R.A.) and declination (decl.) coordinates of the third U.S. Naval Observatory CCD Astrograph Catalog (UCAC3). For these new reductions we used over 216,000 CCD exposures. The Two-Micron All-Sky Survey (2MASS) data are used extensively to probe for coordinate and coma-like systematic errors in UCAC data, mainly caused by the poor charge transfer efficiency of the 4K CCD. Errors up to about 200 mas have been corrected using complex look-up tables handling multiple dependences derived from the residuals. Similarly, field distortions and sub-pixel phase errors have also been evaluated using the residuals with respect to 2MASS. The overall magnitude equation is derived from UCAC calibration field observations alone, independent of external catalogs. Systematic errors of positions at the UCAC observing epoch as presented in UCAC3 are better corrected than in the previous catalogs for most stars. The Tycho-2 catalog is used to obtain final positions on the International Celestial Reference Frame. Residuals of the Tycho-2 reference stars show a small magnitude equation (depending on declination zone) that might be inherent in the Tycho-2 catalog.
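The residual-based correction described above amounts to tabulating mean residuals against an instrumental coordinate and subtracting the interpolated value from each measurement. A one-dimensional sketch (the real UCAC3 tables handle multiple dependences simultaneously; all numbers here are invented):

```python
import numpy as np

def apply_lut_correction(x_pix, bin_centers, mean_residuals):
    """Subtract the binned, interpolated mean residual (here vs. a single
    pixel coordinate) from each measured position."""
    corr = np.interp(x_pix, bin_centers, mean_residuals)
    return x_pix - corr

bins = np.array([0.0, 1000.0, 2000.0, 3000.0, 4000.0])  # pixel-coordinate bins
resid = np.array([0.00, 0.05, 0.20, 0.10, 0.00])        # mean residual [pix] vs. 2MASS
x_corr = apply_lut_correction(np.array([1500.0]), bins, resid)
```

In practice the residual tables would be built from many exposures so that random errors average out and only the systematic signature (e.g. the CCD charge-transfer pattern) remains.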
Kellman, Peter; Hansen, Michael S; Nielles-Vallespin, Sonia; Nickander, Jannike; Themudo, Raquel; Ugander, Martin; Xue, Hui
2017-04-07
Quantification of myocardial blood flow requires knowledge of the amount of contrast agent in the myocardial tissue and the arterial input function (AIF) driving the delivery of this contrast agent. Accurate quantification is challenged by the lack of linearity between the measured signal and contrast agent concentration. This work characterizes sources of non-linearity and presents a systematic approach to accurate measurements of contrast agent concentration in both blood and myocardium. A dual sequence approach with separate pulse sequences for AIF and myocardial tissue allowed separate optimization of parameters for blood and myocardium. A systems approach to the overall design was taken to achieve linearity between signal and contrast agent concentration. Conversion of signal intensity values to contrast agent concentration was achieved through a combination of surface coil sensitivity correction, Bloch-simulation-based look-up table correction, and, in the case of the AIF measurement, correction of T2* losses. Validation of signal correction was performed in phantoms, and values for peak AIF concentration and myocardial flow are provided for 29 normal subjects for rest and adenosine stress. For phantoms, the measured fits were within 5% for both AIF and myocardium. In healthy volunteers the peak [Gd] was 3.5 ± 1.2 mmol/L for stress and 4.4 ± 1.2 mmol/L for rest. The T2* in the left ventricle blood pool at peak AIF was approximately 10 ms. The peak-to-valley ratio was 5.6 for the raw signal intensities without correction, and 8.3 for the look-up-table (LUT) corrected AIF, which represents approximately a 48% correction. Without T2* correction the myocardial blood flow estimates are overestimated by approximately 10%. The signal-to-noise ratio of the myocardial signal at peak enhancement (1.5 T) was 17.7 ± 6.6 at stress and the peak [Gd] was 0.49 ± 0.15 mmol/L.
The estimated perfusion flow was 3.9 ± 0.38 and 1.03 ± 0.19 ml/min/g using the BTEX model and 3.4 ± 0.39 and 0.95 ± 0.16 using a Fermi model, for stress and rest, respectively. A dual sequence for myocardial perfusion cardiovascular magnetic resonance and AIF measurement has been optimized for quantification of myocardial blood flow. A validation in phantoms was performed to confirm that the signal conversion to gadolinium concentration was linear. The proposed sequence was integrated with a fully automatic in-line solution for pixel-wise mapping of myocardial blood flow and evaluated in adenosine stress and rest studies on N = 29 normal healthy subjects. Reliable perfusion mapping was demonstrated and produced estimates with low variability.
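The LUT correction at the heart of this pipeline is the inversion of a monotone signal-versus-concentration curve. A sketch with a toy saturating curve standing in for the Bloch-simulated table (the curve, its parameters, and the function name are illustrative assumptions):

```python
import numpy as np

def signal_to_concentration(signal, lut_conc, lut_signal):
    """Invert a monotone signal-vs-concentration look-up table (standing in
    for a Bloch-simulated curve) to map measured signal to [Gd]."""
    return float(np.interp(signal, lut_signal, lut_conc))

# toy saturating signal model; the real LUT comes from Bloch simulation of
# the actual pulse sequence parameters
lut_conc = np.linspace(0.0, 6.0, 61)                   # [mmol/L]
lut_signal = 100.0 * (1.0 - np.exp(-0.8 * lut_conc))   # arbitrary units
c = signal_to_concentration(55.0, lut_conc, lut_signal)
```

The saturation of the toy curve mirrors the non-linearity the paper corrects: near peak AIF, equal signal increments correspond to increasingly large concentration increments, which is why uncorrected signals underestimate the peak.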
NASA Astrophysics Data System (ADS)
Klammler, G.; Rock, G.; Kupfersberger, H.; Fank, J.
2012-04-01
For many European countries, nitrate leaching from the soil zone into the aquifer due to surplus application of mineral fertilizer and animal manure by farmers constitutes the most important threat to groundwater quality. Since this is a diffuse pollution situation, measures to change agricultural production have to be investigated at the aquifer scale. In principle, the problem could be solved by the three-dimensional equations describing variably saturated groundwater flow and solute transport. However, this is computationally prohibitive due to the temporal and spatial scope of the task, particularly in the framework of running numerous simulations to compromise between conflicting interests (i.e. good groundwater status and high agricultural yield). For the aquifer 'Westliches Leibnitzer Feld' we break this task down into 1d vertical movement of water and nitrate mass in the unsaturated zone and 2d horizontal flow of water and solutes in the saturated compartment. The aquifer is located within the Mur Valley about 20 km south of Graz and consists of early Holocene gravel with varying amounts of sand and some silt. The unsaturated flow and nitrate leaching package SIMWASER/STOTRASIM (Stenitzer, 1988; Feichtinger, 1998) is calibrated to the lysimeter data sets and then applied to so-called hydrotopes, which are unique combinations of soil type and agricultural management. To account for the unknown regional distribution of crops grown and the amount, timing, and kind of fertilizers used, a stochastic tool (Klammler et al., 2011) is developed that generates sequences of crop rotations derived from municipal statistical data. To match the observed nitrate concentrations in groundwater with a saturated nitrate transport model, it is of utmost importance to apply a realistic input distribution of nitrate mass in terms of spatial and temporal characteristics.
A table is generated by running SIMWASER/STOTRASIM that consists of unsaturated water and nitrate fluxes for each 10 cm interval of every hydrotope vertical profile until the lowest observed groundwater table is reached. The fluctuation range of the phreatic surface is also discretized in 10 cm intervals and used as the outflow boundary condition. By this procedure, the influence of the groundwater table on the water and nitrate mass leaving the unsaturated zone can be considered, taking into account varying soil horizons. To cover saturated flow in the WLF aquifer, a 2-dimensional transient horizontal flow and solute transport model is set up. A sequential coupling between the two models is implemented, i.e. a unidirectional transfer of recharge and nitrate mass outflow from the hydrotopes to the saturated compartment. For this purpose, a one-time assignment between the spatial discretization of the hydrotopes and the finite element mesh has to be set up. The resulting groundwater table computed for a given time step with the input from SIMWASER/STOTRASIM is then used to extract the corresponding water and nitrate mass values from the look-up table to be used for the consecutive time step. This process is repeated until the end of the simulation period. Within this approach there is no direct feedback between the unsaturated and the saturated aquifer compartment, i.e. there is no simultaneous (within the same time step) update of the pressure head - unsaturated head relationship at the soil and the phreatic surface (as shown e.g. in Van Walsum and Groenendijk, 2008). For the dominating coarse sand conditions of the WLF aquifer we believe that this simplification is not of further relevance. For higher soil moisture contents (i.e. almost full saturation near the groundwater table) the retention curve returns to specific retention within a short vertical distance. Thus, there might only be a mutual impact between soil and phreatic surface conditions for shallow groundwater tables.
However, it should be mentioned here that all other processes in the two compartments (including capillary rise due to clay-rich soils and groundwater withdrawn by plant roots or evaporation losses) are considered accordingly, given the capabilities of the models used. If we impose the computed groundwater table elevation as the outflow condition of the hydrotope for the next time step, we postulate that the associated water volume of the saturated storage change will lead to the same change of the phreatic surface in the hydrotope column. This is only valid if the storage characteristics of the affected unsaturated soil layers can be adequately described by the co-located porosity of the saturated model. Moreover, the current soil moisture content of the respective soil layers is not considered by the implemented new outflow boundary condition. Thus, from the perspective of continuity of mass it might be more correct to transfer the same water volume that led to the saturated change (rise and fall) of the groundwater table to the unsaturated hydrotope column and compute the adjusted outflow boundary position for use in the next time step. Due to the hydrogeological conditions in our application, for almost all hydrotopes we have the same soil type (i.e. coarse sand) in the range of groundwater table fluctuations, and thus we expect no further impact of transferring the groundwater table from the saturated computation to the unsaturated domain. Summarizing, for the hydrogeologic conditions of our test site and the scope of the problem to be solved, the sequential coupling between 1d unsaturated vertical and 2d saturated horizontal simulation of water movement and solute transport is regarded as an appropriate conceptual and numerical approach.
Due to the extensive look-up table containing unsaturated water and nitrate fluxes for each hydrotope at a vertical resolution of 10 cm, no further feedback processes between the unsaturated and saturated subsurface compartments need to be considered. Feichtinger, F. (1998). STOTRASIM - Ein Modell zur Simulation der Stickstoffdynamik in der ungesättigten Zone eines Ackerstandortes. Schriftenreihe des Bundesamtes für Wasserwirtschaft, Bd. 7, 14-41. Klammler, G., Rock, G., Fank, J. & Kupfersberger, H. (2011): Generating land use information to derive diffuse water and nitrate transfer as input for groundwater modelling at the aquifer scale, Proc. of MODELCARE 2011 Models - Repository of Knowledge, Leipzig. Stenitzer, E. (1988). SIMWASER - Ein numerisches Modell zur Simulation des Bodenwasserhaushaltes und des Pflanzenertrages eines Standortes. Mitteilung Nr. 31, Bundesanstalt für Kulturtechnik und Bodenwasserhaushalt, A-3252 Petzenkirchen. Van Walsum, P.E.V. and P. Groenendijk (2008). Quasi steady-state simulation of the unsaturated zone in groundwater modeling of lowland regions. Vadose Zone J. 7:769-781, doi:10.2136/vzj2007.0146.
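The table-driven coupling described above reduces, per time step and hydrotope, to a lookup keyed by the groundwater-table depth snapped to the 10 cm discretization. A sketch (hydrotope names, fluxes, and units are invented for illustration):

```python
def lut_flux(table, hydrotope, gw_depth_m):
    """Coupling sketch: fetch the pre-computed unsaturated outflow (recharge,
    nitrate) for a hydrotope, keyed by the groundwater-table depth rounded
    to the 10 cm discretization of the vertical profile."""
    key = round(gw_depth_m * 10) / 10.0      # snap to the 10 cm interval
    return table[(hydrotope, key)]

# toy table: (hydrotope id, depth [m]) -> (water flux [mm/d], N flux [kg/ha/d])
table = {("H1", 2.0): (1.2, 0.04),
         ("H1", 2.1): (1.1, 0.03),
         ("H1", 2.2): (1.0, 0.03)}
recharge, n_flux = lut_flux(table, "H1", 2.13)
```

The saturated model would call this once per hydrotope and time step, using the groundwater table it computed in the previous step, which is exactly the unidirectional (no within-step feedback) transfer the abstract describes.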
Using Firefly Tools to Enhance Archive Web Pages
NASA Astrophysics Data System (ADS)
Roby, W.; Wu, X.; Ly, L.; Goldina, T.
2013-10-01
Astronomy web developers are looking for fast and powerful HTML 5/AJAX tools to enhance their web archives. We are exploring ways to make this easier for the developer. How could you have a full FITS visualizer or a Web 2.0 table that supports paging, sorting, and filtering in your web page in 10 minutes? Can it be done without even installing any software or maintaining a server? Firefly is a powerful, configurable system for building web-based user interfaces to access astronomy science archives. It has been in production for the past three years. Recently, we have made some of the advanced components available through very simple JavaScript calls. This allows a web developer, without any significant knowledge of Firefly, to have FITS visualizers, advanced table display, and spectrum plots on their web pages with minimal learning curve. Because we use cross-site JSONP, installing a server is not necessary. Web sites that use these tools can be created in minutes. Firefly was created in IRSA, the NASA/IPAC Infrared Science Archive (http://irsa.ipac.caltech.edu). We are using Firefly to serve many projects including Spitzer, Planck, WISE, PTF, LSST and others.
The Calculator of Anti-Alzheimer’s Diet. Macronutrients
Studnicki, Marcin; Woźniak, Grażyna; Stępkowski, Dariusz
2016-01-01
The opinions about optimal proportions of macronutrients in a healthy diet have changed significantly over the last century. At the same time nutritional sciences failed to provide strong evidence backing up any of the variety of views on macronutrient proportions. Herein we present an idea of how these proportions can be calculated to find an optimal balance of macronutrients with respect to prevention of Alzheimer’s Disease (AD) and dementia. These calculations are based on our published observation that per capita personal income (PCPI) in the USA correlates with age-adjusted death rates for AD (AADR). We have previously reported that PCPI through the period 1925–2005 correlated with AADR in 2005 in a remarkable, statistically significant oscillatory manner, as shown by changes in the correlation coefficient R (Roriginal). A question thus arises: what caused the oscillatory behavior of Roriginal? What historical events in the life of 2005 AD victims had shaped their future with AD? Looking for the answers we found that, considering changes in the per capita availability of macronutrients in the USA in the period 1929–2005, we can mathematically explain the variability of Roriginal for each quarter of a human life. On the basis of multiple regression of Roriginal with regard to the availability of three macronutrients: carbohydrates, total fat, and protein, with or without alcohol, we propose seven equations (referred to as “the calculator” throughout the text) which allow calculating optimal changes in the proportions of macronutrients to reduce the risk of AD for each age group: youth, early middle age, late middle age and late age. The results obtained with the use of “the calculator” are grouped in a table (Table 4) of macronutrient proportions optimal for reducing the risk of AD in each age group through minimizing Rpredicted, i.e., minimizing the strength of correlation between PCPI and future AADR. PMID:27992612
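The paper's seven equations are not reproduced in the abstract; the underlying operation, regressing a response on several macronutrient-availability predictors, is ordinary multiple regression. A generic OLS sketch only, with invented data (not the authors' actual model or coefficients):

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares with an intercept: the generic form of a
    multiple regression of a response on macronutrient availabilities.
    Returns the coefficient vector [b0, b1, b2, ...]."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# invented data: exact linear dependence on two 'availability' predictors
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = 0.5 + 2.0 * X[:, 0] - 1.0 * X[:, 1]
coef = fit_multiple_regression(X, y)
```

Fitted coefficients of this kind are what would then be inverted to ask which changes in the predictors minimize the predicted response, the logic behind "the calculator".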
An optimal-estimation-based aerosol retrieval algorithm using OMI near-UV observations
NASA Astrophysics Data System (ADS)
Jeong, U.; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.
2016-01-01
An optimal-estimation (OE)-based aerosol retrieval algorithm using OMI (Ozone Monitoring Instrument) near-ultraviolet observations was developed in this study. The OE-based algorithm has the merit of providing useful error estimates simultaneously with the inversion products. Furthermore, instead of using the traditional look-up tables for inversion, it performs online radiative transfer calculations with VLIDORT (the linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single-scattering albedo (SSA). The retrieved AOT and SSA at 388 nm correlate with the Aerosol Robotic Network (AERONET) products comparably to, or better than, the operational product during the campaign. The OE-based estimated error represented the variance of the actual biases of AOT at 388 nm between the retrieval and the AERONET measurements better than the operational error estimates did. The forward-model parameter errors were analyzed separately for the AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and the degrees of freedom, is expected to be valuable for related studies. Detailed advantages of using the OE method are described and discussed in this paper.
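The OE inversion at the heart of such a retrieval can be sketched as a single Gauss-Newton step of the standard (Rodgers-style) update, which also yields the retrieval error covariance the abstract refers to as the "estimated error". This is a generic textbook sketch, not the operational OMI code; the forward model, Jacobian, and covariances in any real run come from the radiative transfer calculation.

```python
import numpy as np

def gauss_newton_step(x, x_a, y, F, K, S_a, S_e):
    """One optimal-estimation iteration:
    x_next = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(x) + K (x - x_a))
    Returns the updated state and the retrieval error covariance."""
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    Kx = K(x)                              # Jacobian at the current state
    A = Kx.T @ S_e_inv @ Kx + S_a_inv      # inverse retrieval covariance
    b = Kx.T @ S_e_inv @ (y - F(x) + Kx @ (x - x_a))
    x_next = x_a + np.linalg.solve(A, b)
    S_hat = np.linalg.inv(A)               # "estimated error" of the retrieval
    return x_next, S_hat
```

For a linear forward model with a weak prior, one step already lands on the maximum a posteriori solution, which is a convenient sanity check for an implementation.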
NASA Astrophysics Data System (ADS)
Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang
2018-01-01
Modern positron emission tomography (PET) detectors are made from pixelated scintillation crystal arrays and read out by Anger logic. The interaction position of a gamma ray must be assigned to a crystal using a crystal position map or look-up table, so crystal identification is a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset, LYSO-based animal PET system that uses Lu-176 background radiation and the mean shift algorithm. Single-photon event data of the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using timing information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer with the mean shift algorithm. The response of the inner layer is then removed from the SPFM by subtracting the CFM, and the peaks of the outer layer are identified, again using the mean shift algorithm. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated from these peaks using a distance criterion. The proposed method was verified on an animal PET system with 48 detector blocks, running on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification achieves 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for performing crystal identification on dual-layer-offset, lutetium-based PET systems without external radiation sources.
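The mean-shift step used for peak finding on a flood map can be sketched as follows. This is a minimal, generic sketch on a synthetic 2-D histogram, assuming a Gaussian kernel and a single seed per crystal; the actual system's flood maps, bandwidth, and seeding strategy are not reproduced here.

```python
import numpy as np

def mean_shift_peak(points, weights, seed, bandwidth=3.0, iters=50, tol=1e-4):
    """Shift `seed` toward the kernel-weighted local mean of a weighted
    point cloud (here: flood-map bin centers weighted by counts) until it
    converges on a density peak."""
    x = np.asarray(seed, dtype=float)
    for _ in range(iters):
        d2 = np.sum((points - x) ** 2, axis=1)
        k = weights * np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel
        x_new = (points * k[:, None]).sum(axis=0) / k.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```

Seeding one such search near each expected crystal position, then deduplicating converged peaks, is the usual way mean shift is applied to a flood histogram.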
Records for conversion of laser energy to nuclear energy in exploding nanostructures
NASA Astrophysics Data System (ADS)
Jortner, Joshua; Last, Isidore
2017-09-01
Table-top nuclear fusion reactions in the chemical physics laboratory can be driven by the high-energy dynamics of Coulomb-exploding, multicharged, deuterium-containing nanostructures generated by ultraintense, femtosecond, near-infrared laser pulses. Theoretical-computational studies of table-top laser-driven nuclear fusion of high-energy (up to 15 MeV) deuterons with 7Li, 6Li, and D nuclei demonstrate the attainment of high fusion yields within a source-target reaction design, which constitutes the highest table-top fusion efficiencies obtained to date. The conversion efficiency of laser energy to nuclear energy (0.1-1.0%) for table-top fusion is comparable to that of the DT fusion currently accomplished in 'big science' inertial fusion setups.
75 FR 62923 - WRC-07 Table Clean-up Order
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-13
..., the Commission's Allocation Table is revised by expressing frequencies in the High Frequency (HF) ... U.S. Table ... 28 frequencies designated for disaster communications and 40 frequencies designated for ..., highlights the availability of the high frequency broadcasting (HFBC) bands 7.2-7.3 and 7.4-7.45 MHz in ...
Intracardiac ultrasound scanner using a micromachine (MEMS) actuator.
Zara, J M; Bobbio, S M; Goodwin-Johansson, S; Smith, S W
2000-01-01
Catheter-based intracardiac ultrasound offers the potential for improved guidance of interventional cardiac procedures. The objective of this research is the development of catheter-based mechanical sector scanners incorporating high-frequency ultrasound transducers operating at frequencies up to 20 MHz. The authors' current transducer assembly consists of a single 1.75 mm by 1.75 mm, 20 MHz PZT element mounted on a 2 mm by 2 mm square, 75 μm thick polyimide table that pivots on 3 μm thick gold-plated polyimide hinges. The hinges also serve as the electrical connections to the transducer. This table-mounted transducer is tilted by a miniature linear actuator to produce a sector scan. The linear actuator is an integrated force array (IFA), an example of a micromachine, i.e., a microelectromechanical system (MEMS). The IFA is a thin (2.2 μm) polyimide membrane consisting of a network of hundreds of thousands of micron-scale deformable capacitors made from pairs of metallized polyimide plates. IFAs contract with an applied voltage of 30-120 V and have been shown to produce strains as large as 20% and forces of up to 8 dynes. The prototype transducer and actuator assembly was fabricated and interfaced with a GagePCI analog-to-digital conversion board, housed in a personal computer and digitizing 12-bit samples at a rate of 100 MSamples/s, to create a single-channel ultrasound scanner. The deflection of the table transducer in a low-viscosity insulating fluid (HFE 7100, 3M) is up to ±10° at scan rates of 10-60 Hz. Software has been developed to produce real-time sector scans on the PC monitor.
INTERIOR PERSPECTIVE, LOOKING SOUTH SOUTHWEST WITH FIELD SET UP IN ...
INTERIOR PERSPECTIVE, LOOKING SOUTH SOUTHWEST WITH FIELD SET UP IN FOOTBALL CONFIGURATION. FIELD SEATING ROTATES TO ACCOMMODATE BASEBALL GAMES. - Houston Astrodome, 8400 Kirby Drive, Houston, Harris County, TX
Dynamic generation of a table of contents with consumer-friendly labels.
Miller, Trudi; Leroy, Gondy; Wood, Elizabeth
2006-01-01
Consumers increasingly look to the Internet for health information, but the available resources are too difficult for the majority to understand. Interactive tables of contents (TOCs) can help consumers access health information by providing an easy-to-understand structure. Using natural language processing and the Unified Medical Language System (UMLS), we have automatically generated TOCs for consumer health information. The TOCs are categorized according to consumer-friendly labels for the UMLS semantic types and semantic groups. Categorizing phrases by semantic types proved significantly more correct and relevant. Greater correctness and relevance were achieved with documents that are difficult to read than with those at an easier reading level. Pruning the TOCs to use the categories that consumers favor further increases relevancy and correctness while reducing structural complexity.
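The grouping step such a TOC generator performs can be illustrated with a toy example. The semantic-type codes, label map, and phrases below are invented stand-ins, not the paper's actual UMLS data or its consumer-friendly label set.

```python
# Hypothetical mapping from UMLS semantic-type codes to consumer-friendly
# labels; the codes and labels here are illustrative only.
friendly_label = {
    "T047": "Diseases and Conditions",
    "T121": "Drugs and Medications",
    "T061": "Treatments and Procedures",
}

# Extracted (phrase, semantic type) pairs, as an NLP pipeline might emit.
phrases = [("diabetes mellitus", "T047"), ("insulin", "T121"),
           ("dialysis", "T061"), ("hypertension", "T047")]

# Build the table of contents: one labeled section per semantic type.
toc = {}
for phrase, sem_type in phrases:
    toc.setdefault(friendly_label[sem_type], []).append(phrase)
```

Pruning, as described above, would then amount to dropping sections whose labels consumers do not favor.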
ERIC Educational Resources Information Center
Ontario Dept. of Education, Toronto.
The transcript (translated into English) of a roundtable discussion of literacy among francophone women in Canada begins with the personal narrative of one woman who gained literacy skills as an adult. The panel of three specialists in francophone women's literacy in Canada then looks at the literacy rate among Canadian women, and the demand for…
Optimal CV-22 Centralized Intermediate Repair Facility Locations and Parts Repair
2009-06-01
Snippet fragments from the report reference a table of Safety Stock and Reorder Point for TEWS (p. 36) and Table 8, "Excel Model for Safety Stock and Reorder Point for FADEC," and list CV-22 repairable items including the Main Wheel Assembly, Blade Fold System, Landing Gear Control Panel, Drive System Interface Unit, Main Landing Gear, Radar (4), Forward Looking Infrared System (FLIR) (4), Tactical Electronic Warfare System (TEWS) (1), Full Authority Digital Engine Control (FADEC) (2), and Blade…
Brain Potentials and Personality: A New Look at Stress Susceptibility.
1987-09-01
Snippet fragments from the report define the sensation-seeking subscales: thrill and adventure seeking (TAS); experience seeking (ES); disinhibition (Dis), which measures a hedonistic, extraverted lifestyle including drinking, parties, sex, and gambling; and boredom susceptibility (BS), which indicates an… Table 4 reports correlations of auditory evoked responses with these subscales (p < .05).
Who Wins? Who Pays? The Economic Returns and Costs of a Bachelor's Degree
ERIC Educational Resources Information Center
de Alva, Jorge Klor; Schneider, Mark
2011-01-01
Given the importance of a college education to entering and staying in the middle class and the high cost of obtaining a bachelor's degree, "Who Wins? and Who Pays?" are questions being asked today at kitchen tables and in the halls of government throughout the nation. Using publicly available data, the authors look at who wins and who pays…
RANS Simulation (Virtual Blade Model [VBM]) of Single Full Scale DOE RM1 MHK Turbine
Javaherchi, Teymour; Aliseda, Alberto
2013-04-10
Attached are the .cas and .dat files, along with the required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients, for a Reynolds-Averaged Navier-Stokes (RANS) simulation of a single full-scale DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. In this case study the flow field around, and in the wake of, the full-scale DOE RM1 turbine is simulated with the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that the actual geometry of the rotor blade is not modeled in this simulation; the effect of the rotating turbine blades is modeled using Blade Element Theory. The simulation provides an accurate estimate of the device's performance and of the structure of its turbulent far wake. Owing to the simplifications used to model the rotating blades, the VBM cannot capture details of the flow field in the near-wake region of the device.
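The way a blade-element model consumes such a lift/drag look-up table can be sketched in a few lines: coefficients are interpolated in angle of attack and converted to per-element forces. The table values and element parameters below are hypothetical illustrations, not the actual DOE RM1 airfoil data shipped with the case files.

```python
import numpy as np

# Hypothetical airfoil look-up table: angle of attack (deg) vs. lift and
# drag coefficients. Real VBM tables also vary with Reynolds number.
alpha_deg = np.array([-10., -5., 0., 5., 10., 15.])
cl_table  = np.array([-0.8, -0.3, 0.2, 0.8, 1.2, 1.3])
cd_table  = np.array([0.05, 0.02, 0.01, 0.02, 0.04, 0.10])

def blade_element_force(alpha, v_rel, chord, dr, rho=1025.0):
    """Lift and drag (N) on one blade element of span `dr` and chord
    `chord`, at relative flow speed `v_rel`; seawater density by default."""
    cl = np.interp(alpha, alpha_deg, cl_table)   # linear interpolation
    cd = np.interp(alpha, alpha_deg, cd_table)
    q = 0.5 * rho * v_rel ** 2 * chord * dr      # dynamic pressure x area
    return q * cl, q * cd
```

Summing these element forces over the rotor, after projecting them into thrust and torque components, is what lets the VBM represent the rotor without resolving the blade geometry.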
NASA Astrophysics Data System (ADS)
Sankaran, A.; Chuang, Keh-Shih; Yonekawa, Hisashi; Huang, H. K.
1992-06-01
The imaging characteristics of two chest radiographic systems, Advanced Multiple Beam Equalization Radiography (AMBER) and the Konica Direct Digitizer [using a storage phosphor (SP) plate], have been compared. The variables affecting image quality and the computer display/reading systems used are detailed. Utilizing specially designed wedge, geometric, and anthropomorphic phantoms, studies were conducted on: exposure and energy response of detectors; nodule detectability; different exposure techniques; and various look-up tables (LUTs), gray-scale displays, and laser printers. Methods for scatter estimation and reduction were investigated. It is concluded that AMBER, with screen-film and equalization techniques, provides better nodule detectability than SP plates. However, SP plates have other advantages, such as flexibility in the selection of exposure techniques, image processing features, and excellent sensitivity when combined with optimum reader operating modes. The equalization feature of AMBER provides better nodule detectability under the denser regions of the chest. Results of diagnostic accuracy are demonstrated with nodule detectability plots and analysis of images obtained with the phantoms.
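The role a display look-up table plays in such a comparison can be illustrated with a minimal window/level LUT: raw detector values are mapped through a precomputed curve before display, so the mapping costs one array-indexing step per image. The bit depths and window settings below are arbitrary examples, not the study's actual LUTs.

```python
import numpy as np

def windowing_lut(level, width, bits_in=12, bits_out=8):
    """Build a linear window/level look-up table mapping `bits_in`-bit raw
    detector values to `bits_out`-bit display gray levels. Values below the
    window map to black, above it to white."""
    raw = np.arange(2 ** bits_in)
    lo = level - width / 2
    scaled = np.clip((raw - lo) / width, 0.0, 1.0)
    return np.round(scaled * (2 ** bits_out - 1)).astype(np.uint8)

# Applying the LUT to a whole image is a single fancy-indexing operation.
lut = windowing_lut(level=2048, width=1024)
image = np.array([[0, 2048, 4095]], dtype=np.uint16)  # toy 12-bit pixels
display = lut[image]
```

Swapping LUTs (e.g., to favor lung versus mediastinal detail) changes only the precomputed table, which is why LUT choice was one of the display variables compared in the study.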