Sample records for Lasentec refining sensor

  1. Evaluation and Refinement of a Field-Portable Drinking Water Toxicity Sensor Utilizing Electric Cell-Substrate Impedance Sensing and a Fluidic Biochip

    DTIC Science & Technology

    2014-01-01

    Potential interferences tested were chlorine and chloramine (commonly used for drinking water disinfection), geosmin and 2-methyl-isoborneol (MIB) ... Protection Agency maximum residual disinfectant level for chlorine and chloramine is set at 4 mg L-1 under the Safe Drinking Water Act and thus would ... Evaluation and refinement of a field-portable drinking water toxicity sensor utilizing electric cell–substrate impedance sensing and a fluidic biochip

  2. Close-range sensors for small unmanned bottom vehicles: update

    NASA Astrophysics Data System (ADS)

    Bernstein, Charles L.

    2000-07-01

    The Surf Zone Reconnaissance Project is developing sensors for small, autonomous, Underwater Bottom-crawling Vehicles. The objective is to enable small, crawling robots to autonomously detect and classify mines and obstacles on the ocean bottom in depths between 0 and 10 feet. We have identified a promising set of techniques that will exploit the electromagnetic, shape, texture, image, and vibratory-modal features of these targets. During FY99 and FY00 we have worked toward refining these techniques. Signature data sets have been collected for a standard target set to facilitate the development of sensor fusion and target detection and classification algorithms. Specific behaviors, termed microbehaviors, are developed to utilize the robot's mobility to position and operate the sensors. A first generation, close-range sensor suite, composed of 5 sensors, will be completed and tested on a crawling platform in FY00, and will be further refined and demonstrated in FY01 as part of the Mine Countermeasures 6.3 core program sponsored by the Office of Naval Research.

  3. Advanced vehicle emission reduction sensor program (FED-SAVER).

    DOT National Transportation Integrated Search

    2008-09-01

    The FED-SAVER program refined and continued the development of an in-cylinder, high temperature pressure sensor by demonstrating that it can be successfully inserted into diesel engines for routine feedback control of each individual cylinder. There ...

  4. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
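
    For readers who want to experiment with the general idea, the sketch below estimates a frame-to-frame displacement from an FFT cross-correlation peak and then refines it with a second-order (Taylor/parabolic) fit around that peak. It is an illustrative baseline in the spirit of the abstract, not the authors' algorithms; the function name, the plain FFT correlation, and the per-axis quadratic refinement are assumptions.

```python
# Hedged sketch: subpixel displacement between two grayscale frames (NumPy arrays).
# Integer shift from the cross-correlation peak, then a quadratic (Taylor) refinement.
import numpy as np

def subpixel_shift(ref, cur):
    """Estimate the (dy, dx) displacement of `cur` relative to `ref`."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c_m, c_0, c_p):
        # Vertex of the parabola through (-1, c_m), (0, c_0), (+1, c_p).
        denom = c_m - 2.0 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

    ny, nx = corr.shape
    ddy = refine(corr[(dy - 1) % ny, dx], corr[dy, dx], corr[(dy + 1) % ny, dx])
    ddx = refine(corr[dy, (dx - 1) % nx], corr[dy, dx], corr[dy, (dx + 1) % nx])

    # Map peak indices into the signed (-N/2, N/2] shift range.
    dy = dy - ny if dy > ny // 2 else dy
    dx = dx - nx if dx > nx // 2 else dx
    return dy + ddy, dx + ddx
```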

  5. Sensors and devices containing ultra-small nanowire arrays

    DOEpatents

    Xiao, Zhili

    2014-09-23

    A network of nanowires may be used for a sensor. The nanowires are metallic, each nanowire has a thickness of at most 20 nm, and each nanowire has a width of at most 20 nm. The sensor may include nanowires comprising Pd, and the sensor may sense a change in hydrogen concentration from 0 to 100%. A device may include the hydrogen sensor, such as a vehicle, a fuel cell, a hydrogen storage tank, a facility for manufacturing steel, or a facility for refining petroleum products.

  6. Sensors and devices containing ultra-small nanowire arrays

    DOEpatents

    Xiao, Zhili

    2017-04-11

    A network of nanowires may be used for a sensor. The nanowires are metallic, each nanowire has a thickness of at most 20 nm, and each nanowire has a width of at most 20 nm. The sensor may include nanowires comprising Pd, and the sensor may sense a change in hydrogen concentration from 0 to 100%. A device may include the hydrogen sensor, such as a vehicle, a fuel cell, a hydrogen storage tank, a facility for manufacturing steel, or a facility for refining petroleum products.

  7. Development of an Integrated ISFET pH Sensor for High Pressure Applications in the Deep-Sea

    DTIC Science & Technology

    2012-09-30

    Measurements in the upper ocean suggest that sensor precision is comparable to the annual pH change due to ocean acidification (Fig. 2). An array of ... profiling floats equipped with pH sensors would be capable of directly monitoring the process of ocean acidification. Further refinement of the sensor ... Quality of Life: The high pressure pH sensor will have direct applications to our understanding of ocean acidification and the impacts on ecosystem

  8. Fiber optic photoelastic pressure sensor for high temperature gases

    NASA Technical Reports Server (NTRS)

    Wesson, Laurence N.; Redner, Alex S.; Baumbick, Robert J.

    1990-01-01

    A novel fiber optic pressure sensor based on the photoelastic effect has been developed for extremely high temperature gases. At temperatures varying from 25 to 650 C, the sensor experiences no change in the peak pressure of the transfer function and only a 10 percent drop in dynamic range. Refinement of the sensor has resulted in an optoelectronic interface and processor software which can calculate pressure values within 1 percent of full scale at any temperature within the full calibrated temperature range.

  9. Designs and test results for three new rotational sensors

    USGS Publications Warehouse

    Jedlicka, P.; Kozak, J.T.; Evans, J.R.; Hutt, C.R.

    2012-01-01

    We discuss the designs and testing of three rotational seismometer prototypes developed at the Institute of Geophysics, Academy of Sciences (Prague, Czech Republic). Two of these designs consist of a liquid-filled toroidal tube with the liquid as the proof mass and providing damping; we tested the piezoelectric and pressure transduction versions of this torus. The third design is a wheel-shaped solid metal inertial sensor with capacitive sensing and magnetic damping. Our results from testing of transfer functions and cross-axis sensitivities in Prague and at the Albuquerque Seismological Laboratory of the US Geological Survey are good enough to justify the refinement and subsequent testing of advanced prototypes. These refinements and the new round of testing are well underway.

  10. Designs and test results for three new rotational sensors

    NASA Astrophysics Data System (ADS)

    Jedlička, P.; Kozák, J. T.; Evans, J. R.; Hutt, C. R.

    2012-10-01

    We discuss the designs and testing of three rotational seismometer prototypes developed at the Institute of Geophysics, Academy of Sciences (Prague, Czech Republic). Two of these designs consist of a liquid-filled toroidal tube with the liquid as the proof mass and providing damping; we tested the piezoelectric and pressure transduction versions of this torus. The third design is a wheel-shaped solid metal inertial sensor with capacitive sensing and magnetic damping. Our results from testing of transfer functions and cross-axis sensitivities in Prague and at the Albuquerque Seismological Laboratory of the US Geological Survey are good enough to justify the refinement and subsequent testing of advanced prototypes. These refinements and the new round of testing are well underway.

  11. Application of historical mobility testing to sensor-based robotic performance

    NASA Astrophysics Data System (ADS)

    Willoughby, William E.; Jones, Randolph A.; Mason, George L.; Shoop, Sally A.; Lever, James H.

    2006-05-01

    The US Army Engineer Research and Development Center (ERDC) has conducted on-/off-road experimental field testing with full-sized and scale-model military vehicles for more than fifty years. Some 4000 acres of local terrain are available for tailored field evaluations or verification/validation of future robotic designs in a variety of climatic regimes. Field testing and data collection procedures, as well as techniques for quantifying terrain in engineering terms, have been developed and refined into algorithms and models for predicting vehicle-terrain interactions and resulting forces or speeds of military-sized vehicles. Based on recent experiments with Matilda, Talon, and Pacbot, these predictive capabilities appear to be relevant to most robotic systems currently in development. Utilization of current testing capabilities with sensor-based vehicle drivers, or use of the procedures for terrain quantification from sensor data, would immediately apply some fifty years of historical knowledge to the development, refinement, and implementation of future robotic systems. Additionally, translation of sensor-collected terrain data into engineering terms would allow assessment of robotic performance prior to deployment of the actual system and ensure maximum system performance in the theater of operation.

  12. High resolution hybrid optical and acoustic sea floor maps (Invited)

    NASA Astrophysics Data System (ADS)

    Roman, C.; Inglis, G.

    2013-12-01

    This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two stage pipeline to create such maps. The steps can be broken down as navigation refinement and map construction. During navigation refinement a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end results of the navigation step are a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, and the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor some specific map quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted in the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross modality checks are made to reject outliers in a robust manner. The final hybrid map retains the strengths of both sensors and shows improvement over the single modality maps and a naively assembled multi-modal map where all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.
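
    The per-cell selection logic described in the mapping step can be summarized in a few lines. The sketch below is a simplified illustration under assumed thresholds (minimum stereo point count and spread); the actual pipeline, its quality criteria, and its cross-modality checks are more elaborate.

```python
# Hedged sketch of per-cell data selection for a hybrid optical/acoustic map.
# Thresholds and the median depth statistic are illustrative assumptions.
import numpy as np

def fuse_cell(stereo_depths, sonar_depths, min_stereo_pts=20, max_spread=0.05):
    """Return one depth estimate (m) for a grid cell, or None if the cell stays empty."""
    if len(stereo_depths) >= min_stereo_pts and np.std(stereo_depths) <= max_spread:
        return float(np.median(stereo_depths))   # texture-rich cell: prefer dense stereo
    if len(sonar_depths) > 0:
        return float(np.median(sonar_depths))     # low-texture cell: fall back to sonar
    return None
```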

  13. Adaptive Multi-Sensor Interrogation of Targets Embedded in Complex Environments

    DTIC Science & Technology

    2010-06-09

    to efficient refinement of data from distributed networked sensor systems for interpretation by both machines and humans in a low latency and ... of a DP draw: π_k = V_k ∏_{l<k} (1 − V_l), V_k ~ Beta(1, α), θ*_k ~ H, where δ_{θ*_k} is a point measure concentrated at θ*_k (each θ*_k is termed an atom

  14. Pressure Measurement Sensor

    NASA Technical Reports Server (NTRS)

    1997-01-01

    FFPI Industries Inc. is the manufacturer of fiber-optic sensors that furnish accurate pressure measurements in internal combustion chambers. Such an assessment can help reduce pollution emitted by these engines. A chief component in the sensor owes its seven-year-long development to Lewis Research Center funding to embed optical fibers and sensors in metal parts. NASA support to Texas A&M University played a critical role in developing this fiber optic technology and led to the formation of FFPI Industries and the production of fiber sensor products. The simple, rugged design of the sensor offers the potential for mass production at low cost. Widespread application of the new technology is foreseen, from natural gas transmission, oil refining and electrical power generation to rail transport and the petrochemical and paper products industries.

  15. Comparison of different classification methods for analyzing electronic nose data to characterize sesame oils and blends.

    PubMed

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-10-21

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO and RF, were used to analyze the e-nose data. The classification accuracy and MAUC were employed to evaluate the performance of these methods. The results indicated that sesame oils processed with different methods resulted in different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by the hot-pressed sesame oil. The blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than the blends of pressed sesame oils and refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing sesame oil blends. KNN, LASSO, PLS, SVM (with linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends. Among the prediction models, KNN with k = 1 and 2 yielded the best prediction results.
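
    A comparison of this kind is straightforward to reproduce with off-the-shelf tools. The sketch below cross-validates a few of the listed classifiers on an e-nose feature matrix with scikit-learn; the feature layout, the chosen subset of models, and the 5-fold scheme are assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: compare LDA, KNN, and linear SVM on e-nose data with scikit-learn.
# X: (n_samples, n_sensors) steady-state responses; y: oil class labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_classifiers(X, y):
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "KNN (k=1)": KNeighborsClassifier(n_neighbors=1),
        "SVM (linear)": SVC(kernel="linear"),
    }
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)   # scale sensors before classifying
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```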

  16. Scene-based nonuniformity corrections for optical and SWIR pushbroom sensors.

    PubMed

    Leathers, Robert; Downes, Trijntje; Priest, Richard

    2005-06-27

    We propose and evaluate several scene-based methods for computing nonuniformity corrections for visible or near-infrared pushbroom sensors. These methods can be used to compute new nonuniformity correction values or to repair or refine existing radiometric calibrations. For a given data set, the preferred method depends on the quality of the data, the type of scenes being imaged, and the existence and quality of a laboratory calibration. We demonstrate our methods with data from several different sensor systems and provide a generalized approach to be taken for any new data set.
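
    One common scene-based idea, equalizing each cross-track detector's along-track statistics, is sketched below. This generic moment-matching correction is offered only as an illustration of the class of methods the abstract refers to; it is not necessarily one of the specific methods proposed in the paper.

```python
# Hedged sketch: scene-based nonuniformity correction for a pushbroom band by
# matching every detector's (column's) along-track mean and spread to the ensemble.
import numpy as np

def scene_based_nuc(image):
    """image: (lines, samples) array; returns corrected image and per-detector gain/offset."""
    col_mean = image.mean(axis=0)              # per-detector mean over the scene
    col_std = image.std(axis=0) + 1e-12        # per-detector spread (avoid divide-by-zero)
    gain = col_std.mean() / col_std            # equalize detector spreads
    offset = col_mean.mean() - gain * col_mean # equalize detector means
    return gain * image + offset, gain, offset
```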

  17. NOx Sensor for Direct Injection Emission Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betteridge, William J

    2006-02-28

    The Electricore/Delphi team continues to leverage the electrochemical planar sensor technology that has produced stoichiometric planar and wide range oxygen sensors as the basis for development of a NOx sensor. Zirconia cell technology with an integrated heater will provide the foundation for the sensor structure. Proven materials and packaging technology will help to ensure a cost-effective approach to the manufacture of this sensor. The electronics technique and interface are considered an area where new strategies need to be employed to produce higher S/N ratios of the NOx signal, with emphasis on signal stability over time for robustness and durability. Both continuous mode and pulse mode control techniques are being evaluated. Packaging the electronics requires careful design and circuit partitioning so that only the necessary signal conditioning electronics are coupled directly in the wiring harness, while the remainder is situated within the ECM for durability and cost reasons. This task continues to be on hold due to the limitation that the definition of the interface electronics was unavailable until very late in the project. The sense element is based on the amperometric method utilizing integrated alumina and zirconia ceramics. Precious metal electrodes are used to form the integrated heater, the cell electrodes and leads. Inside the actual sense cell structure, it is first necessary to separate NOx from the remaining oxygen constituents of the exhaust, without reducing the NOx. Once separated, the NOx will be measured using a measurement cell. Development or test coupons have been used to facilitate material selection and refinement, cell, diffusion barrier, and chamber development. The sense element currently requires elaborate interconnections. To facilitate a robust, durable connection, mechanical and metallurgical connections are under investigation. Materials and process refinements continue to play an important role in the development of the sensor.

  18. Comparison of Different Classification Methods for Analyzing Electronic Nose Data to Characterize Sesame Oils and Blends

    PubMed Central

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-01-01

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO and RF, were used to analyze the e-nose data. The classification accuracy and MAUC were employed to evaluate the performance of these methods. The results indicated that sesame oils processed with different methods resulted in different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by the hot-pressed sesame oil. The blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than the blends of pressed sesame oils and refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing sesame oil blends. KNN, LASSO, PLS, SVM (with linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends. Among the prediction models, KNN with k = 1 and 2 yielded the best prediction results. PMID:26506350

  19. Numerical analysis of flow about a total temperature sensor

    NASA Technical Reports Server (NTRS)

    Von Lavante, Ernst; Bruns, Russell L., Jr.; Sanetrik, Mark D.; Lam, Tim

    1989-01-01

    The unsteady flowfield about an airfoil-shaped inlet temperature sensor has been investigated using the thin-layer and full Navier-Stokes equations. A finite-volume formulation of the governing equations was used in conjunction with a Runge-Kutta time stepping scheme to analyze the flow about the sensor. Flow characteristics for this configuration were established at Mach numbers of 0.5 and 0.8 for different Reynolds numbers. The results were obtained for configurations of increasing complexity; important physical phenomena such as shock formation, boundary-layer separation, and unsteady wake formation were noted. Based on the computational results, recommendations for further study and refinement of the inlet temperature sensor were made.

  20. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0, we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
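
    The "problematic cell" test the abstract describes can be illustrated with a toy 1-D detector that combines the three kinds of criteria it names (physics, numerics, computer requirements). The sketch below is a simplified illustration of that idea, not the paper's implementation; the field, the relaxed maximum-principle tolerance, and the positivity check on the field itself are assumptions.

```python
# Hedged sketch: flag troubled cells in a candidate update (1-D, periodic stencil).
import numpy as np

def troubled_cells(candidate, previous, eps=1e-3):
    bad = ~np.isfinite(candidate)              # computer criterion: NaN / inf
    bad |= candidate <= 0.0                    # physics criterion: positivity (e.g., density)
    lo = np.minimum(np.roll(previous, 1), np.minimum(previous, np.roll(previous, -1)))
    hi = np.maximum(np.roll(previous, 1), np.maximum(previous, np.roll(previous, -1)))
    margin = eps * (hi - lo + 1e-12)           # relaxed discrete maximum principle
    bad |= (candidate < lo - margin) | (candidate > hi + margin)
    return bad                                  # flagged cells get the dissipative fallback
```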

  1. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.

  2. Analysis of Mission Effectiveness: Modern System Architecture Tools for Project Developers

    DTIC Science & Technology

    2017-12-01

    operator input and scripted instructions to describe low-level flow. Note that the case study in Chapter IV describes one pass through evaluation ... capability of the sensors. A constraint on the case study is that each sensor type must cover the entire operations area. Cost is a function of ... completed. 5. Assessment: This case study focuses on the first recursive refinement phase completed in a multi-phase effort to demonstrate the effects

  3. Low-Cost, Robust, Threat-Aware Wireless Sensor Network for Assuring the Nation's Energy Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlos H. Rentel

    2007-03-31

    Eaton, in partnership with Oak Ridge National Laboratory and the Electric Power Research Institute (EPRI), has completed a project that applies a combination of wireless sensor network (WSN) technology, anticipatory theory, and a near-term value proposition based on diagnostics and process uptime to ensure the security and reliability of critical electrical power infrastructure. Representatives of several Eaton business units have been engaged to ensure a viable commercialization plan. Tennessee Valley Authority (TVA), American Electric Power (AEP), PEPCO, and Commonwealth Edison were recruited as partners to confirm and refine the requirements definition from the perspective of the utilities that actually operate the facilities to be protected. Those utilities cooperated with on-site field tests as the project proceeded. Accomplishments of this project included: (1) the design, modeling, and simulation of the anticipatory wireless sensor network (A-WSN) that will be used to gather field information for the anticipatory application, (2) the design and implementation of hardware and software prototypes for laboratory and field experimentation, (3) stack and application integration, (4) the development of an installation and test plan, and (5) refinement of the commercialization plan.

  4. Review of the development of laser fluorosensors for oil spill application.

    PubMed

    Brown, Carl E; Fingas, Mervin F

    2003-01-01

    As laser fluorosensors provide their own source of excitation, they are known as active sensors. Being active sensors, laser fluorosensors can be employed around the clock, in daylight or in total darkness. Certain compounds, such as aromatic hydrocarbons, present in petroleum oils absorb ultraviolet laser light and become electronically excited. This excitation is quickly removed by the process of fluorescence emission, primarily in the visible region of the spectrum. By careful choice of the excitation laser wavelength and range-gated detection at selected emission wavelengths, petroleum oils can be detected and classified into three broad categories: light refined, crude or heavy refined. This paper will review the development of laser fluorosensors for oil spill application, with emphasis on system components such as excitation laser source, and detection schemes that allow these unique sensors to be employed for the detection and classification of petroleum oils. There have been a number of laser fluorosensors developed in recent years, many of which are strictly research and development tools. Certain of these fluorosensors have been ship-borne instruments that have been mounted in aircraft for the occasional airborne mission. Other systems are mounted permanently on aircraft for use in either surveillance or spill response roles.

  5. Thirty-fifth anniversary of the optical affinity sensor for glucose: a personal retrospective.

    PubMed

    Schultz, Jerome S

    2015-01-01

    Since 1962, when Clark introduced the enzyme electrode, there has been intense research toward a robust implantable glucose sensor. An alternative "optical affinity sensor" was introduced by Jerome Schultz in 1979. The evolution of this sensor technology into a new methodology is reviewed. The approach integrates a variety of disparate concepts: the selectivity of immunoassays (selectivity for glucose was obtained with concanavalin A), detection sensitivity obtained with fluorescence (FITC-Dextran), and miniaturization achieved by the use of an optical fiber readout system. Refinements of Schultz's optical affinity sensor approach over the past 35 years have led to a number of configurations that show great promise to meet the needs of a successful implantable continuous monitoring device for diabetics, some of which are currently being tested clinically. © 2014 Diabetes Technology Society.

  6. Quality-by-Design (QbD): An integrated process analytical technology (PAT) approach for a dynamic pharmaceutical co-precipitation process characterization and process design space development.

    PubMed

    Wu, Huiquan; White, Maury; Khan, Mansoor A

    2011-02-28

    The aim of this work was to develop an integrated process analytical technology (PAT) approach for dynamic pharmaceutical co-precipitation process characterization and design space development. A dynamic co-precipitation process, driven by gradually introducing water to the ternary system of naproxen-Eudragit L100-alcohol, was monitored in real time in situ via Lasentec FBRM and PVM. A 3D map of count-time-chord length revealed three distinguishable process stages: incubation, transition, and steady state. The effects of high risk process variables (slurry temperature, stirring rate, and water addition rate) on both derived co-precipitation process rates and the final chord length distribution were evaluated systematically using a 3^3 full factorial design. Critical process variables were identified via ANOVA for both transition and steady state. General linear models (GLM) were then used for parameter estimation for each critical variable. Clear trends in the effects of each critical variable during transition and steady state were found by GLM and were interpreted using fundamental process principles and Nyvlt's transfer model. Neural network models were able to link process variables with response variables at transition and steady state with R^2 of 0.88-0.98. PVM images evidenced nucleation and crystal growth. Contour plots illustrated the design space via the critical process variables' ranges. This work demonstrated the utility of the integrated PAT approach for QbD development. Published by Elsevier B.V.
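
    For orientation, enumerating a 3^3 full factorial design and fitting a simple main-effects model takes only a few lines. The sketch below uses coded factor levels and ordinary least squares; the factor coding, the purely linear main-effects model, and the response variable are placeholders, not the study's GLM or neural-network analysis.

```python
# Hedged sketch: 3^3 full factorial design (27 runs) and a main-effects least-squares fit.
import itertools
import numpy as np

LEVELS = (-1, 0, 1)                                            # coded factor levels
design = np.array(list(itertools.product(LEVELS, repeat=3)))   # temperature, stirring, water rate

def fit_main_effects(design, response):
    """Fit y ~ b0 + b1*temperature + b2*stirring + b3*water_rate by least squares."""
    X = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(X, response, rcond=None)
    return coef   # [intercept, temperature, stirring, water addition rate]
```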

  7. REFINE (Reducing Falls in In-patient Elderly)--a randomised controlled trial.

    PubMed

    Vass, Catherine D; Sahota, Opinder; Drummond, Avril; Kendrick, Denise; Gladman, John; Sach, Tracey; Avis, Mark; Grainge, Matthew

    2009-09-10

    Falls in hospitals are common, resulting in injury and anxiety to patients, and large costs to NHS organisations. More than half of all in-patient falls in elderly people in acute care settings occur at the bedside, during transfers or whilst getting up to go to the toilet. In the majority of cases these falls are unwitnessed. There is insufficient evidence underpinning the effectiveness of interventions to guide clinical staff regarding the reduction of falls in the elderly inpatient. New patient monitoring technologies have the potential to offer advances in falls prevention. Bedside sensor equipment can alert staff, not in the immediate vicinity, to a potential problem and avert a fall. However no studies utilizing this assistive technology have demonstrated a significant reduction in falls rates in a randomised controlled trial setting. The research design is an individual patient randomised controlled trial of bedside chair and bed pressure sensors, incorporating a radio-paging alerting mode to alert staff to patients rising from their bed or chair, across five acute elderly care wards in Nottingham University Hospitals NHS Trust. Participants will be randomised to bedside chair and bed sensors or to usual care (without the use of sensors). The primary outcome is the number of bedside in-patient falls. The REFINE study is the first randomised controlled trial of bedside pressure sensors in elderly inpatients in an acute NHS Trust. We will assess whether falls can be successfully and cost effectively reduced using this technology, and report on its acceptability to both patients and staff.

  8. Embedded NMR Sensor to Monitor Compressive Strength Development and Pore Size Distribution in Hydrating Concrete

    PubMed Central

    Díaz-Díaz, Floriberto; de J. Cano-Barrita, Prisciliano F.; Balcom, Bruce J.; Solís-Nájera, Sergio E.; Rodríguez, Alfredo O.

    2013-01-01

    In cement-based materials porosity plays an important role in determining their mechanical and transport properties. This paper describes an improved low-cost embeddable miniature NMR sensor capable of non-destructively measuring evaporable water loss and porosity refinement in low and high water-to-cement ratio cement-based materials. The sensor consists of two NdFeB magnets with their north and south poles facing each other, separated by 7 mm to allow space for a Faraday cage containing a Teflon tube and an ellipsoidal RF coil. To account for magnetic field changes due to temperature variations and/or the presence of steel rebars, or for frequency variation due to sample impedance, an external tuning circuit was employed. The sensor performance was evaluated by analyzing the transverse magnetization decay obtained with a CPMG measurement from different materials, such as a polymer phantom, fresh white and grey cement pastes with different w/c ratios, and concrete with low (0.30) and high (0.6) w/c ratios. The results indicated that the sensor is capable of detecting changes in water content in fresh cement pastes and porosity refinement caused by cement hydration in hardened materials, even if they are prepared with a low w/c ratio (w/c = 0.30). The short lifetime component of the transverse relaxation rate is directly proportional to the compressive strength of concrete determined by destructive testing. The r^2 (0.97) of the observed linear relationship is similar to that obtained using T2 data from a commercial Oxford Instruments 12.9 MHz spectrometer.
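
    The strength correlation reported above rests on extracting the short-lifetime component of the CPMG decay and regressing strength on its relaxation rate. The sketch below shows one plausible way to do that with a bi-exponential fit; the model form, initial guesses, and units are assumptions rather than the authors' processing chain.

```python
# Hedged sketch: short T2 component from a CPMG decay, then a linear strength calibration.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, t2_short, a2, t2_long):
    return a1 * np.exp(-t / t2_short) + a2 * np.exp(-t / t2_long)

def short_relaxation_rate(echo_times_ms, magnetization):
    p0 = (0.7 * magnetization[0], 1.0, 0.3 * magnetization[0], 10.0)  # rough initial guess
    params, _ = curve_fit(biexp, echo_times_ms, magnetization, p0=p0, maxfev=10000)
    return 1.0 / min(params[1], params[3])    # rate (1/ms) of the short-lifetime component

def strength_calibration(rates, strengths):
    slope, intercept = np.polyfit(rates, strengths, 1)  # linear relation, cf. r^2 = 0.97 above
    return slope, intercept
```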

  9. A figure control sensor for the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Bartman, R.; Dubovitsky, S.

    1988-01-01

    A sensing and control system is required to maintain high optical figure quality in a segmented reflector. Upon detecting a deviation of the segmented surface from its ideal form, the system drives segment mounted actuators to realign the individual segments and thereby return the surface to its intended figure. When the reflector is in use, a set of figure sensors will determine positions of a number of points on the back surface of each of the reflector's segments, each sensor being assigned to a single point. By measuring the positional deviations of these points from previously established nominal values, the figure sensors provide the control system with the information required to maintain the reflector's optical figure. The optical lever, multiple wavelength interferometer, and electronic capacitive sensor, the most promising technologies for the development of the figure sensor, are illustrated. It is concluded that, to select a particular implementation of the figure sensors, performance requirements will be refined and relevant technologies investigated further.

  10. Navigation Algorithms for the SeaWiFS Mission

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)

    2002-01-01

    The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy, a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking. Although further improvements are certainly possible, future evolution of the algorithms is expected to be limited to refinements of the methods presented here, and no substantial changes are anticipated.

  11. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepare this method for additional research. The potential for this method in more complicated flows is described.
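
    A refinement sensor based on undivided differences can be stated very compactly; the 1-D sketch below flags cells where the scaled second undivided difference of a solution field exceeds a threshold. The field, the scaling, and the threshold value are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: mark cells for refinement using a second undivided difference (1-D).
import numpy as np

def refine_flags(q, threshold=0.05):
    """Return a boolean mask of cells whose local non-smoothness exceeds `threshold`."""
    undiv2 = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2])                    # undivided difference
    scale = np.abs(q[2:]) + 2.0 * np.abs(q[1:-1]) + np.abs(q[:-2]) + 1e-12
    flags = np.zeros(q.shape, dtype=bool)
    flags[1:-1] = undiv2 / scale > threshold
    return flags
```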

  12. On-board error correction improves IR earth sensor accuracy

    NASA Astrophysics Data System (ADS)

    Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.

    1989-10-01

    Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates have been analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising from noise in the detector and amplifiers, instability of alignment, and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain an eightfold improvement in sensing accuracy, which will be comparable with ground-based post facto attitude refinement.
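
    The on-board correction of a seasonal systematic error by least-squares curve fitting can be illustrated with a first-order harmonic model of day-of-year, as sketched below. The harmonic form, the 365-day period, and the function names are assumptions made for illustration; the paper derives its own relations for each error source.

```python
# Hedged sketch: fit and apply a seasonal correction to an earth-sensor angle error.
import numpy as np

def fit_seasonal_correction(day_of_year, measured_error):
    """Least-squares fit of error ~ a0 + a1*sin(2*pi*d/365) + a2*cos(2*pi*d/365)."""
    w = 2.0 * np.pi * np.asarray(day_of_year, dtype=float) / 365.0
    A = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
    coef, *_ = np.linalg.lstsq(A, measured_error, rcond=None)
    return coef

def apply_correction(day_of_year, raw_angle, coef):
    w = 2.0 * np.pi * day_of_year / 365.0
    return raw_angle - (coef[0] + coef[1] * np.sin(w) + coef[2] * np.cos(w))
```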

  13. Toward real time detection of the basic living activity in home using a wearable sensor and smart home sensors.

    PubMed

    Bang, Sunlee; Kim, Minho; Song, Sa-Kwang; Park, Soo-Jun

    2008-01-01

    As the number of elderly people living alone is increasing rapidly, we need a system that infers activities of daily living (ADL) to support healthy living and recognize emergencies. The system should be built from sensors that are associated with a person's daily living while remaining as unobtrusive as possible. To do this, the proposed system uses a triaxial accelerometer sensor and environmental sensors that indicate the subject's contact with objects in the home. In particular, in order to robustly infer ADLs, we present the component ADL, which is decided from human motion in conjunction with contacted-object identification, not object identification alone. It is an important element in inferring ADL. Notably, the component ADL decision first refines misclassified initial activities, which improves the accuracy of ADL recognition. Preliminary experimental results for the proposed system show an overall recognition rate of over 97% across 8 component ADLs, which can be effectively applied to recognize the final ADLs.

  14. Distributed wireless sensing for methane leak detection technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Levente; van Kessel, Theodore

    Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on oil and gas well pads. The sensor network consists of chemi-resistive and wind sensors and aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well controlled methane release experiments. It is demonstrated that even a 1-hour measurement with 10 sensors localizes leaks within 1 m and determines leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.

  15. Distributed wireless sensing for fugitive methane leak detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Levente J.; van Kessel, Theodore; Nair, Dhruv

    Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on oil and gas well pads. The sensor network consists of chemi-resistive and wind sensors and aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well controlled methane release experiments. It is demonstrated that even a 1-hour measurement with 10 sensors localizes leaks within 1 m and determines leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.

  16. Distributed wireless sensing for fugitive methane leak detection

    DOE PAGES

    Klein, Levente J.; van Kessel, Theodore; Nair, Dhruv; ...

    2017-12-11

    Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on oil and gas well pads. The sensor network consists of chemi-resistive and wind sensors and aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well controlled methane release experiments. It is demonstrated that even a 1-hour measurement with 10 sensors localizes leaks within 1 m and determines leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.
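
    To give a sense of what such an inversion model can look like, the sketch below estimates a leak rate by least squares against a crude Gaussian plume forward model for one assumed source location; scanning candidate locations and keeping the best fit would localize the leak. The plume coefficients, source height, and geometry are illustrative assumptions and not the analytics deployed in this work.

```python
# Hedged sketch: leak-rate estimation from downwind sensors with a toy Gaussian plume model.
import numpy as np

def plume_kernel(x, y, wind_speed, source_height=1.0):
    """Ground-level concentration per unit emission at downwind x and crosswind y (metres)."""
    x = np.maximum(x, 1.0)                          # only downwind points contribute
    sigma_y = 0.22 * x / np.sqrt(1.0 + 0.0001 * x)  # crude dispersion coefficients
    sigma_z = 0.20 * x
    return (np.exp(-0.5 * (y / sigma_y) ** 2) * np.exp(-0.5 * (source_height / sigma_z) ** 2)
            / (np.pi * sigma_y * sigma_z * wind_speed))

def estimate_rate(sensor_xy, readings, wind_speed):
    """Least-squares emission rate for one assumed source location."""
    k = plume_kernel(sensor_xy[:, 0], sensor_xy[:, 1], wind_speed)
    q, *_ = np.linalg.lstsq(k[:, None], np.asarray(readings, dtype=float), rcond=None)
    return float(q[0])
```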

  17. Marine Vehicle Sensor Network Architecture and Protocol Designs for Ocean Observation

    PubMed Central

    Zhang, Shaowei; Yu, Jiancheng; Zhang, Aiqun; Yang, Lei; Shu, Yeqiang

    2012-01-01

    The micro-scale and meso-scale ocean dynamic processes, which are nonlinear and have large variability, have a significant impact on fisheries, natural resources, and marine climatology. A rapid, refined and sophisticated observation system is therefore needed in marine scientific research. The maneuverability and controllability of mobile sensor platforms make them a preferred choice for establishing ocean observing networks, compared to static sensor observing platforms. In this study, marine vehicles are utilized as the nodes of mobile sensor networks for coverage sampling of a regional ocean area and ocean feature tracking. A synoptic analysis of marine vehicle dynamic control, multi-vehicle mission assignment and path planning methods, and ocean feature tracking and observing techniques is given. Combined with the observation plan in the South China Sea, we provide an overview of the mobile sensor networks established with marine vehicles, and the corresponding simulation results. PMID:22368475

  18. Bayer Demosaicking with Polynomial Interpolation.

    PubMed

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process to reconstruct full color digital images from incomplete color samples from an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones and tablets). In this paper, we introduce a new demosaicking algorithm based on polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
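
    As a point of reference, the sketch below implements only the plain bilinear baseline that methods such as PID improve upon, using small convolution kernels on an RGGB mosaic. The polynomial predictors, edge classification, and weighted-sum refinement described in the abstract are not reproduced here.

```python
# Hedged sketch: plain bilinear demosaicking of an RGGB Bayer mosaic (baseline only).
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """raw: (H, W) float mosaic with RGGB layout; returns an (H, W, 3) RGB estimate."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1
    g_mask = np.zeros((H, W)); g_mask[0::2, 1::2] = 1; g_mask[1::2, 0::2] = 1
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1

    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0    # green kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0   # red/blue kernel

    interp = lambda mask, kernel: convolve(raw * mask, kernel, mode="mirror")
    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])
```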

  19. Satellite Ocean Biology: Past, Present, Future

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.

    2012-01-01

    Since 1978 when the first satellite ocean color proof-of-concept sensor, the Nimbus-7 Coastal Zone Color Scanner, was launched, much progress has been made in refining the basic measurement concept and expanding the research applications of global satellite time series of biological and optical properties such as chlorophyll-a concentrations. The seminar will review the fundamentals of satellite ocean color measurements (sensor design considerations, on-orbit calibration, atmospheric corrections, and bio-optical algorithms), scientific results from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and Moderate resolution Imaging Spectroradiometer (MODIS) missions, and the goals of future NASA missions such as PACE, the Aerosol, Cloud, Ecology (ACE), and Geostationary Coastal and Air Pollution Events (GeoCAPE) missions.

  20. Characterization of Arcjet Flows Using Laser-Induced Fluorescence

    NASA Technical Reports Server (NTRS)

    Bamford, Douglas J.; O'Keefe, Anthony; Babikian, Dikran S.; Stewart, David A.; Strawa, Anthony W.

    1995-01-01

    A sensor based on laser-induced fluorescence has been installed at the 20-MW NASA Ames Aerodynamic Heating Facility. The sensor has provided new, quantitative, real-time information about properties of the arcjet flow in the highly dissociated, partially ionized, nonequilibrium regime. Number densities of atomic oxygen, flow velocities, heavy particle translational temperatures, and collisional quenching rates have been measured. These results have been used to test and refine computational models of the arcjet flow. The calculated number densities, translational temperatures, and flow velocities are in moderately good agreement with experiment.

  1. A new range-free localisation in wireless sensor networks using support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Zengfeng; Zhang, Hao; Lu, Tingting; Sun, Yujuan; Liu, Xing

    2018-02-01

    Location information of sensor nodes is of vital importance for most applications in wireless sensor networks (WSNs). This paper proposes a new range-free localisation algorithm using a support vector machine (SVM) and a polar coordinate system (PCS), called LSVM-PCS. In LSVM-PCS, two sets of classes are first constructed based on the sensor nodes' polar coordinates. Using the boundaries of the defined classes, the operation region of the WSN field is partitioned into a finite number of polar grids. Each sensor node can be localised into one of the polar grids by executing two localisation algorithms that are developed on the basis of SVM classification. The centre of the resident polar grid is then estimated as the location of the sensor node. In addition, a two-hop mass-spring optimisation (THMSO) is also proposed to further improve the localisation accuracy of LSVM-PCS. In THMSO, both neighbourhood information and non-neighbourhood information are used to refine the sensor node location. The results obtained verify that the proposed algorithm provides a significant improvement over existing localisation methods.
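
    The grid-classification step lends itself to a compact illustration: train one SVM for the radial ring and one for the angular sector, then report the cell centre as the estimate. The hop-count features, RBF kernel, and uniform grid geometry in the sketch below are assumptions for illustration, not the LSVM-PCS specification.

```python
# Hedged sketch: locate a node at the centre of an SVM-predicted polar grid cell.
import numpy as np
from sklearn.svm import SVC

def train_grid_classifiers(features, ring_labels, sector_labels):
    """features: (n_nodes, n_features), e.g. hop counts to anchor nodes (assumed)."""
    ring_clf = SVC(kernel="rbf").fit(features, ring_labels)
    sector_clf = SVC(kernel="rbf").fit(features, sector_labels)
    return ring_clf, sector_clf

def localise(node_features, ring_clf, sector_clf, ring_width, sector_angle):
    """node_features: (1, n_features); returns an (x, y) estimate at the cell centre."""
    ring = ring_clf.predict(node_features)[0]
    sector = sector_clf.predict(node_features)[0]
    r = (ring + 0.5) * ring_width
    theta = (sector + 0.5) * sector_angle        # radians
    return r * np.cos(theta), r * np.sin(theta)
```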

  2. An Algorithm for Converting Static Earth Sensor Measurements into Earth Observation Vectors

    NASA Technical Reports Server (NTRS)

    Harman, R.; Hashmall, Joseph A.; Sedlak, Joseph

    2004-01-01

    An algorithm has been developed that converts penetration angles reported by Static Earth Sensors (SESs) into Earth observation vectors. This algorithm allows compensation for variation in the horizon height, including that caused by Earth oblateness. It also allows pitch and roll to be computed using any number (greater than 1) of simultaneous sensor penetration angles, simplifying processing during periods of Sun and Moon interference. The algorithm computes body frame unit vectors through each SES cluster. It also computes GCI vectors from the spacecraft to the position on the Earth's limb detected by each cluster. These body frame vectors are used as sensor observation vectors and the GCI vectors are used as reference vectors in an attitude solution. The attitude, with the unobservable yaw discarded, is iteratively refined to provide the Earth observation vector solution.

  3. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum as well as active device simulations that model charge transport and Maxwell's equations will be presented.

  4. Hypoxia, Monitoring, and Mitigation System

    DTIC Science & Technology

    2015-08-01

    Oxygen Saturation Measured via Pulse-Oximeter; SRS: Software Requirements Specification; SW: Software; TI: Texas Instruments; uPROC: Micro-Processor; USAARL ... (Table of Figures; Figure 1: Pulse OX custom module) ... Tasks 3, 4 and 5 have not been exercised. Sensor definition testing continued on the custom pulse-ox design. Additional refinement on the pulse

  5. An Approach to Poiseuille's Law in an Undergraduate Laboratory Experiment

    ERIC Educational Resources Information Center

    Sianoudis, I. A.; Drakaki, E.

    2008-01-01

    The continuous growth of computer and sensor technology allows many researchers to develop simple modifications and/or refinements to standard educational experiments, making them more attractive and comprehensible to students and thus increasing their educational impact. In the framework of this approach, the present study proposes an alternative…

  6. A Tailored Ontology Supporting Sensor Implementation for the Maintenance of Industrial Machines.

    PubMed

    Maleki, Elaheh; Belkadi, Farouk; Ritou, Mathieu; Bernard, Alain

    2017-09-08

    The long-term productivity of an industrial machine is improved by condition-based maintenance strategies. To do this, the integration of sensors and other cyber-physical devices is necessary in order to capture and analyze a machine's condition through its lifespan. Thus, choosing the best sensor is a critical step to ensure the efficiency of the maintenance process. Indeed, considering the variety of sensors and their features and performance, a formal classification of a sensor's domain knowledge is crucial. This classification facilitates the search for and reuse of solutions during the design of a new maintenance service. Following a Knowledge Management methodology, the paper proposes and develops a new sensor ontology that structures the domain knowledge, covering both theoretical and experimental sensor attributes. An industrial case study is conducted to validate the proposed ontology and to demonstrate its utility as a guideline to ease the search for suitable sensors. Based on the ontology, the final solution will be implemented in a shared repository connected to legacy CAD (computer-aided design) systems. The selection of the best sensor is first obtained by matching application requirements with the sensor specifications proposed by this sensor repository. Then, it is refined using the experimental results. The achieved solution is recorded in the sensor repository for future reuse. As a result, the time and cost of the design process for new condition-based maintenance services is reduced.

  7. Optimized ratiometric calcium sensors for functional in vivo imaging of neurons and T lymphocytes.

    PubMed

    Thestrup, Thomas; Litzlbauer, Julia; Bartholomäus, Ingo; Mues, Marsilius; Russo, Luigi; Dana, Hod; Kovalchuk, Yuri; Liang, Yajie; Kalamakis, Georgios; Laukat, Yvonne; Becker, Stefan; Witte, Gregor; Geiger, Anselm; Allen, Taylor; Rome, Lawrence C; Chen, Tsai-Wen; Kim, Douglas S; Garaschuk, Olga; Griesinger, Christian; Griesbeck, Oliver

    2014-02-01

    The quality of genetically encoded calcium indicators (GECIs) has improved dramatically in recent years, but high-performing ratiometric indicators are still rare. Here we describe a series of fluorescence resonance energy transfer (FRET)-based calcium biosensors with a reduced number of calcium binding sites per sensor. These 'Twitch' sensors are based on the C-terminal domain of Opsanus troponin C. Their FRET responses were optimized by a large-scale functional screen in bacterial colonies, refined by a secondary screen in rat hippocampal neuron cultures. We tested the in vivo performance of the most sensitive variants in the brain and lymph nodes of mice. The sensitivity of the Twitch sensors matched that of synthetic calcium dyes and allowed visualization of tonic action potential firing in neurons and high resolution functional tracking of T lymphocytes. Given their ratiometric readout, their brightness, large dynamic range and linear response properties, Twitch sensors represent versatile tools for neuroscience and immunology.

  8. Wavefront sensorless adaptive optics ophthalmoscopy in the human eye

    PubMed Central

    Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason

    2011-01-01

    Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
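
    A minimal sketch of stochastic parallel gradient descent of the kind described above is given below; the quadratic image-quality metric, the actuator count, and the gain and perturbation values are stand-ins, not the AOSLO implementation.

```python
# Minimal sketch of stochastic parallel gradient descent (SPGD) as used for
# wavefront sensorless control: all actuator commands are perturbed in parallel
# and updated in proportion to the change in an image-quality metric.
# The "metric" below is a stand-in for mean retinal image intensity.
import numpy as np

rng = np.random.default_rng(0)
n_act = 97                                   # hypothetical actuator count
true_best = rng.normal(0.0, 0.5, n_act)      # commands that would flatten the wavefront

def metric(u):
    # Stand-in for measured image sharpness; peaks when u equals true_best.
    return np.exp(-np.sum((u - true_best) ** 2) / n_act)

u = np.zeros(n_act)                          # initial mirror commands
gain, delta = 0.8, 0.05
for it in range(2000):
    p = delta * rng.choice([-1.0, 1.0], n_act)   # bipolar parallel perturbation
    dJ = metric(u + p) - metric(u - p)           # two-sided metric difference
    u += gain * dJ * p                           # stochastic gradient estimate update
print("final metric:", metric(u))
```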

  9. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenselet-based light field cameras is the limited resolution. This limitation comes from the structure where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the position resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, these sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and to the case without depth refinement.

  10. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2007-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.

  11. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2008-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems, few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, defensible sensor suite to address system health assessment requirements.

  12. A Markov game theoretic data fusion approach for cyber situational awareness

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Cruz, Jose B., Jr.; Haynes, Leonard; Kruger, Martin; Blasch, Erik

    2007-04-01

    This paper proposes an innovative data-fusion/data-mining game theoretic situation awareness and impact assessment approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level situation/threat assessment (L2/L3) data fusion based on a Markov game model and Hierarchical Entity Aggregation (HEA) is proposed to refine the primitive predictions generated by adaptive feature/pattern recognition and to capture new, unknown features. A Markov (stochastic) game method is used to estimate the belief of each possible cyber attack pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly coupled to determination of the defense-force strategies, and vice versa. Also, Markov game theory deals with the uncertainty and incompleteness of the available information. A software tool is developed to demonstrate the performance of the high-level information fusion for cyber network defense situation awareness, and a simulation example shows the enhanced understanding of cyber-network defense.
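
    As a rough illustration of the belief-estimation step, the sketch below runs a plain Markov/Bayes filter over hypothesized attack patterns; it is a simplified stand-in for the Markov game formulation, and the patterns, transition matrix, and alert likelihoods are invented.

```python
# Minimal sketch: recursive belief update over hypothesized cyber attack patterns.
# This is a plain Markov/Bayes filter stand-in, not the paper's full Markov game;
# the patterns, transition matrix, and alert likelihoods below are made up.
import numpy as np

patterns = ["recon", "privilege_escalation", "exfiltration"]
T = np.array([[0.7, 0.25, 0.05],        # P(next pattern | current pattern)
              [0.1, 0.70, 0.20],
              [0.0, 0.10, 0.90]])
# P(alert type | pattern) for two IDS alert classes: "scan" and "data_transfer"
L = {"scan":          np.array([0.8, 0.3, 0.1]),
     "data_transfer": np.array([0.1, 0.3, 0.8])}

belief = np.array([1/3, 1/3, 1/3])      # uniform prior over attack patterns
for alert in ["scan", "scan", "data_transfer"]:
    belief = T.T @ belief               # predict: propagate through the Markov model
    belief *= L[alert]                  # update: weight by alert likelihood
    belief /= belief.sum()              # normalize
    print(alert, dict(zip(patterns, belief.round(3))))
```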

  13. Numerical Study of Richtmyer-Meshkov Instability with Re-Shock

    NASA Astrophysics Data System (ADS)

    Wong, Man Long; Livescu, Daniel; Lele, Sanjiva

    2017-11-01

    The interaction of a Mach 1.45 shock wave with a perturbed planar interface between two gases with an Atwood number of 0.68 is studied through 2D and 3D shock-capturing adaptive mesh refinement (AMR) simulations with physical diffusive and viscous terms. The simulations have initial conditions similar to those in the actual experiment conducted by Poggi et al. [1998]. The development of the flow and evolution of mixing due to the interactions with the first shock and the re-shock are studied together with the sensitivity of various global parameters to the properties of the initial perturbation. Grid resolutions needed for fully resolved 2D and 3D simulations are also evaluated. Simulations are conducted with an in-house AMR solver HAMeRS built on the SAMRAI library. The code utilizes the high-order localized dissipation weighted compact nonlinear scheme [Wong and Lele, 2017] for shock-capturing and different sensors, including the wavelet sensor [Wong and Lele, 2016], to identify regions for grid refinement. The first and third authors acknowledge the project sponsor, LANL.

  14. Smart Pipes—Instrumented Water Pipes, Can This Be Made a Reality?

    PubMed Central

    Metje, Nicole; Chapman, David N.; Cheneler, David; Ward, Michael; Thomas, Andrew M.

    2011-01-01

    Several million kilometres of pipes and cables are buried beneath our streets in the UK. As they are neither visible nor easily accessible, monitoring their integrity as well as the quality of their contents is a challenge. Any information on these properties aids the utility owners in the planning and management of their maintenance regime. Traditionally, expensive and very localised sensors are used to provide irregular measurements of these properties. In order to have a complete picture of the utility network, cheaper sensors need to be investigated, which would allow large numbers of small sensors to be incorporated into (or near to) the pipe, leading to so-called smart pipes. This paper focuses on a novel trial where a short section of a prototype smart pipe was buried using mainly off-the-shelf sensors and communication elements. The challenges of such a burial are presented together with the limitations of the sensor system. Results from the sensors were obtained during and after burial, indicating that off-the-shelf sensors can be used in a smart pipes system, although further refinements are necessary in order to miniaturise these sensors. The key challenges identified were the powering of these sensors and the communication of the data to the operator using a range of different methods. PMID:22164027

  15. Refinement of regression models to estimate real-time concentrations of contaminants in the Menomonee River drainage basin, southeast Wisconsin, 2008-11

    USGS Publications Warehouse

    Baldwin, Austin K.; Robertson, Dale M.; Saad, David A.; Magruder, Christopher

    2013-01-01

    In 2008, the U.S. Geological Survey and the Milwaukee Metropolitan Sewerage District initiated a study to develop regression models to estimate real-time concentrations and loads of chloride, suspended solids, phosphorus, and bacteria in streams near Milwaukee, Wisconsin. To collect monitoring data for calibration of models, water-quality sensors and automated samplers were installed at six sites in the Menomonee River drainage basin. The sensors continuously measured four potential explanatory variables: water temperature, specific conductance, dissolved oxygen, and turbidity. Discrete water-quality samples were collected and analyzed for five response variables: chloride, total suspended solids, total phosphorus, Escherichia coli bacteria, and fecal coliform bacteria. Using the first year of data, regression models were developed to continuously estimate the response variables on the basis of the continuously measured explanatory variables. Those models were published in a previous report. In this report, those models are refined using 2 years of additional data, and the relative improvement in model predictability is discussed. In addition, a set of regression models is presented for a new site in the Menomonee River Basin, Underwood Creek at Wauwatosa. The refined models use the same explanatory variables as the original models. The chloride models all used specific conductance as the explanatory variable, except for the model for the Little Menomonee River near Freistadt, which used both specific conductance and turbidity. Total suspended solids and total phosphorus models used turbidity as the only explanatory variable, and bacteria models used water temperature and turbidity as explanatory variables. An analysis of covariance (ANCOVA), used to compare the coefficients in the original models to those in the refined models calibrated using all of the data, showed that only 3 of the 25 original models changed significantly. Root-mean-squared errors (RMSEs) calculated for both the original and refined models using the entire dataset showed a median improvement in RMSE of 2.1 percent, with a range of 0.0–13.9 percent. Therefore most of the original models did almost as well at estimating concentrations during the validation period (October 2009–September 2011) as the refined models, which were calibrated using those data. Application of these refined models can produce continuously estimated concentrations of chloride, total suspended solids, total phosphorus, E. coli bacteria, and fecal coliform bacteria that may assist managers in quantifying the effects of land-use changes and improvement projects, establish total maximum daily loads, and enable better informed decision making in the future.
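
    As a minimal illustration of this kind of single-explanatory-variable surrogate regression (using synthetic numbers, not the study's data or its published coefficients), a log-log fit of total suspended solids against turbidity can be computed with ordinary least squares:

```python
# Sketch of a single-explanatory-variable surrogate regression:
# log10(TSS) ~ a + b*log10(turbidity). Data below are synthetic, not from the study.
import numpy as np

turbidity = np.array([3.2, 8.5, 15.0, 42.0, 110.0, 260.0])   # NTU
tss       = np.array([4.1, 11.0, 19.0, 55.0, 140.0, 400.0])  # mg/L

X = np.column_stack([np.ones_like(turbidity), np.log10(turbidity)])
coef, *_ = np.linalg.lstsq(X, np.log10(tss), rcond=None)
a, b = coef

def estimate_tss(turb_ntu):
    """Continuous real-time estimate of TSS (mg/L) from a turbidity reading."""
    return 10 ** (a + b * np.log10(turb_ntu))

pred = estimate_tss(turbidity)
rmse_log = np.sqrt(np.mean((np.log10(pred) - np.log10(tss)) ** 2))
print(f"a={a:.3f}, b={b:.3f}, RMSE (log10 units)={rmse_log:.3f}")
```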

  16. Cross-Calibration of Earth Observing System Terra Satellite Sensors MODIS and ASTER

    NASA Technical Reports Server (NTRS)

    McCorkel, J.

    2014-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Moderate Resolution Imaging Spectroradiometer (MODIS) are two of the five sensors onboard the Earth Observing System's Terra satellite. These sensors share many similar spectral channels while having much different spatial and operational parameters. ASTER is a tasked sensor, sometimes described as a zoom camera for MODIS, which collects a full-Earth image every one to two days. It is important that these sensors have a consistent characterization and calibration for continued development and use of their data products. This work uses a variety of test sites to retrieve and validate intercalibration results. The refined calibration of Collection 6 of the Terra MODIS data set is leveraged to provide the up-to-date reference for trending and validation of ASTER. Special attention is given to spatially matching radiance measurements using prelaunch spatial response characterization of MODIS. Despite differences in spectral band properties and spatial scales, ASTER-MODIS is an ideal case for intercomparison since the sensors have nearly identical views and acquisition times and therefore can be used as a baseline of intercalibration performance for other satellite sensor pairs.

  17. SEASAT study documentation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The proposed spacecraft consists of a bus module, containing all subsystems required for support of the sensors, and a payload module containing all of the sensor equipment. The two modules are bolted together to form the spacecraft, and electrical interfaces are accomplished via mated connectors at the interface plane. This approach permits independent parallel assembly and test operations on each module up until mating for final spacecraft integration and test operations. Proposed program schedules recognize the need to refine sensor/spacecraft interfaces prior to proceeding with procurement, reflect the lead times estimated by suppliers for delivery of equipment, reflect a comprehensive test program, and provide flexibility for unanticipated problems. The spacecraft systems are described in detail along with aerospace ground equipment, ground handling equipment, the launch vehicle, imaging radar incorporation, and systems tests.

  18. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model, which is a surrogate for the more complex Gaussian puff model, and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and to adjust the wind to provide a better match between the hazard prediction and the observations.
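
    The toy sketch below conveys the flavor of iteratively refining a source estimate against sensor observations; a crude steady-state Gaussian plume stands in for the forward AT&D model, a generic least-squares optimizer replaces VIRSA's adjoint-based variational refinement, and the wind speed, sensor layout, and source values are invented.

```python
# Toy illustration of iterative source-term refinement against sensor data.
# A steady 2D Gaussian plume surrogate stands in for the forward AT&D model; the
# wind, sensor layout, and "true" source below are invented, and a generic
# optimizer replaces the adjoint-based variational refinement.
import numpy as np
from scipy.optimize import least_squares

def plume(params, xy, u_wind=3.0, sigma=25.0):
    x0, y0, q = params                       # source location (m) and strength (g/s)
    dx, dy = xy[:, 0] - x0, xy[:, 1] - y0
    down = np.maximum(dx, 1e-3)              # only downwind concentrations
    return q / (u_wind * down) * np.exp(-(dy ** 2) / (2.0 * (sigma * down / 100.0) ** 2))

sensors = np.array([[200.0, 10.0], [400.0, -30.0], [600.0, 40.0], [800.0, 0.0]])
truth = np.array([0.0, 5.0, 50.0])
obs = plume(truth, sensors) * (1.0 + 0.05 * np.random.default_rng(0).normal(size=4))

first_guess = np.array([100.0, -50.0, 20.0])     # e.g., from a back-trajectory method
fit = least_squares(lambda p: plume(p, sensors) - obs, first_guess)
print("refined source estimate (x0, y0, q):", fit.x.round(2))
```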

  19. Calibrating a novel multi-sensor physical activity measurement system.

    PubMed

    John, D; Liu, S; Sasaki, J E; Howe, C A; Staudenmayer, J; Gao, R X; Freedson, P S

    2011-09-01

    Advancing the field of physical activity (PA) monitoring requires the development of innovative multi-sensor measurement systems that are feasible in the free-living environment. The use of novel analytical techniques to combine and process these multiple sensor signals is equally important. This paper describes a novel multi-sensor 'integrated PA measurement system' (IMS), the lab-based methodology used to calibrate the IMS, techniques used to predict multiple variables from the sensor signals, and proposes design changes to improve the feasibility of deploying the IMS in the free-living environment. The IMS consists of hip and wrist acceleration sensors, two piezoelectric respiration sensors on the torso, and an ultraviolet radiation sensor to obtain contextual information (indoors versus outdoors) of PA. During lab-based calibration of the IMS, data were collected on participants performing a PA routine consisting of seven different ambulatory and free-living activities while wearing a portable metabolic unit (criterion measure) and the IMS. Data analyses on the first 50 adult participants are presented. These analyses were used to determine if the IMS can be used to predict the variables of interest. Finally, physical modifications for the IMS that could enhance the feasibility of free-living use are proposed and refinement of the prediction techniques is discussed.

  20. A Tailored Ontology Supporting Sensor Implementation for the Maintenance of Industrial Machines

    PubMed Central

    Belkadi, Farouk; Bernard, Alain

    2017-01-01

    The longtime productivity of an industrial machine is improved by condition-based maintenance strategies. To do this, the integration of sensors and other cyber-physical devices is necessary in order to capture and analyze a machine’s condition through its lifespan. Thus, choosing the best sensor is a critical step to ensure the efficiency of the maintenance process. Indeed, considering the variety of sensors, and their features and performance, a formal classification of a sensor’s domain knowledge is crucial. This classification facilitates the search for and reuse of solutions during the design of a new maintenance service. Following a Knowledge Management methodology, the paper proposes and develops a new sensor ontology that structures the domain knowledge, covering both theoretical and experimental sensor attributes. An industrial case study is conducted to validate the proposed ontology and to demonstrate its utility as a guideline to ease the search of suitable sensors. Based on the ontology, the final solution will be implemented in a shared repository connected to legacy CAD (computer-aided design) systems. The selection of the best sensor is, firstly, obtained by the matching of application requirements and sensor specifications (that are proposed by this sensor repository). Then, it is refined from the experimentation results. The achieved solution is recorded in the sensor repository for future reuse. As a result, the time and cost of the design process of new condition-based maintenance services is reduced. PMID:28885592

  1. Atmospheric electricity/meteorology analysis

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard; Buechler, Dennis

    1993-01-01

    This activity focuses on Lightning Imaging Sensor (LIS)/Lightning Mapper Sensor (LMS) algorithm development and applied research. Specifically we are exploring the relationships between (1) global and regional lightning activity and rainfall, and (2) storm electrical development, physics, and the role of the environment. U.S. composite radar-rainfall maps and ground strike lightning maps are used to understand lightning-rainfall relationships at the regional scale. These observations are then compared to SSM/I brightness temperatures to simulate LIS/TRMM multi-sensor algorithm data sets. These data sets are supplied to the WETNET project archive. WSR88-D (NEXRAD) data are also used as it becomes available. The results of this study allow us to examine the information content from lightning imaging sensors in low-earth and geostationary orbits. Analysis of tropical and U.S. data sets continues. A neural network/sensor fusion algorithm is being refined for objectively associating lightning and rainfall with their parent storm systems. Total lightning data from interferometers are being used in conjunction with data from the national lightning network. A 6-year lightning/rainfall climatology has been assembled for LIS sampling studies.

  2. An initial investigation of the long-term trends in the fluxgate magnetometer (FGM) calibration parameters on the four Cluster spacecraft

    NASA Astrophysics Data System (ADS)

    Alconcel, L. N. S.; Fox, P.; Brown, P.; Oddy, T. M.; Lucek, E. L.; Carr, C. M.

    2014-07-01

    Over the course of more than 10 years in operation, the calibration parameters of the outboard fluxgate magnetometer (FGM) sensors on the four Cluster spacecraft are shown to be remarkably stable. The parameters are refined on the ground during the rigorous FGM calibration process performed for the Cluster Active Archive (CAA). Fluctuations in some parameters show some correlation with trends in the sensor temperature (orbit position). The parameters, particularly the offsets, of the spacecraft 1 (C1) sensor have undergone more long-term drift than those of the other spacecraft (C2, C3 and C4) sensors. Some potentially anomalous calibration parameters have been identified and will require further investigation in future. However, the observed long-term stability demonstrated in this initial study gives confidence in the accuracy of the Cluster magnetic field data. For the most sensitive ranges of the FGM instrument, the offset drift is typically 0.2 nT per year in each sensor on C1 and negligible on C2, C3 and C4.

  3. An initial investigation of the long-term trends in the fluxgate magnetometer (FGM) calibration parameters on the four Cluster spacecraft

    NASA Astrophysics Data System (ADS)

    Alconcel, L. N. S.; Fox, P.; Brown, P.; Oddy, T. M.; Lucek, E. L.; Carr, C. M.

    2014-01-01

    Over the course of more than ten years in operation, the calibration parameters of the outboard fluxgate magnetometer (FGM) sensors on the four Cluster spacecraft are shown to be remarkably stable. The parameters are refined on the ground during the rigorous FGM calibration process performed for the Cluster Active Archive (CAA). Fluctuations in some parameters show some correlation with trends in the sensor temperature (orbit position). The parameters, particularly the offsets, of the spacecraft 1 (C1) sensor have undergone more long-term drift than those of the other spacecraft (C2, C3 and C4) sensors. Some potentially anomalous calibration parameters have been identified and will require further investigation in future. However, the observed long-term stability demonstrated in this initial study gives confidence in the relative accuracy of the Cluster magnetic field data. For the most sensitive ranges of the FGM instrument, the offset drift is typically 0.2 nT yr⁻¹ in each sensor on C1 and negligible on C2, C3 and C4.

  4. Sentient networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapline, G.

    1998-03-01

    The engineering problems of constructing autonomous networks of sensors and data processors that can provide alerts for dangerous situations provide a new context for debating the question whether man-made systems can emulate the cognitive capabilities of the mammalian brain. In this paper we consider the question whether a distributed network of sensors and data processors can form "perceptions" based on sensory data. Because sensory data can have exponentially many explanations, the use of a central data processor to analyze the outputs from a large ensemble of sensors will in general introduce unacceptable latencies for responding to dangerous situations. A better idea is to use a distributed "Helmholtz machine" architecture in which the sensors are connected to a network of simple processors, and the collective state of the network as a whole provides an explanation for the sensory data. In general, communication within such a network will require time division multiplexing, which opens the door to the possibility that with certain refinements to the Helmholtz machine architecture it may be possible to build sensor networks that exhibit a form of artificial consciousness.

  5. Dynamic Leading-Edge Stagnation Point Determination Utilizing an Array of Hot-Film Sensors with Unknown Calibration

    NASA Technical Reports Server (NTRS)

    Ellsworth, Joel C.

    2017-01-01

    During flight-testing of the National Aeronautics and Space Administration (NASA) Gulfstream III (G-III) airplane (Gulfstream Aerospace Corporation, Savannah, Georgia) SubsoniC Research Aircraft Testbed (SCRAT) between March 2013 and April 2015, it became evident that the sensor array used for stagnation point detection was not functioning as expected. The stagnation point detection system is a self-calibrating hot-film array; the calibration was unknown and varied between flights; however, the channel with the lowest power consumption was expected to correspond to the point of least surface shear. While individual channels showed the expected behavior for the hot-film sensors, more often than not the lowest power consumption occurred at a single sensor (despite in-flight maneuvering) in the array located far from the expected stagnation point. An algorithm was developed to process the available system output and determine the stagnation point location. After multiple updates and refinements, the final algorithm was not sensitive to the failure of a single sensor in the array, but adjacent failures beneath the stagnation point crippled the algorithm.
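
    A minimal sketch of the underlying idea (not the flight algorithm) is shown below: neutralize an obviously stuck channel, smooth across adjacent sensors, and take the minimum-power location; the readings and the stuck-channel test are invented for illustration.

```python
# Illustrative sketch: locate the least-shear (stagnation) channel from uncalibrated
# hot-film power readings while tolerating a single failed/stuck sensor.
# The readings and the failure model are invented; this is not the flight algorithm.
import numpy as np

def stagnation_index(power, window=3):
    power = np.asarray(power, dtype=float)
    med = np.median(power)
    bad = power < 0.3 * med                       # crude "stuck-low channel" test
    clean = np.where(bad, med, power)             # neutralize failed channels
    kernel = np.ones(window) / window
    padded = np.pad(clean, window // 2, mode="edge")
    smooth = np.convolve(padded, kernel, mode="valid")   # smooth across adjacent sensors
    return int(np.argmin(smooth)), np.flatnonzero(bad)

# 15-channel array: shear (power) dips near channel 9; channel 2 is stuck low.
power = 5.0 + 0.3 * (np.arange(15) - 9.0) ** 2 / 10.0
power[2] = 0.1
idx, failed = stagnation_index(power)
print("estimated stagnation channel:", idx, "| flagged channels:", failed)
```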

  6. Development of a Non-Contact, Inductive Depth Sensor for Free-Surface, Liquid-Metal Flows

    NASA Astrophysics Data System (ADS)

    Bruhaug, Gerrit; Kolemen, Egemen; Fischer, Adam; Hvasta, Mike

    2017-10-01

    This paper details a non-contact, inductive depth measurement system that can sit behind a layer of steel and measure the depth of the liquid metal flowing over the steel. Free-surface liquid metal depth measurement is usually done with invasive sensors that impact the flow of the liquid metal, or complex external sensors that require lasers and precise alignment. Neither of these methods is suitable for the extreme environment encountered in the divertor region of a nuclear fusion reactor, where liquid metal open channel flows are being investigated for future use. A sensor was developed that used the inductive coupling of a coil to liquid metal to measure the height of the liquid metal present. The sensor was built and tested experimentally, and modeled with finite element modeling software to further understand the physics involved. Future work will attempt to integrate the sensor into the Liquid Metal eXperiment (LMX) at the Princeton Plasma Physics Laboratory for more refined testing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No. DE-AC02-09CH11466.

  7. Integrated control and health management. Orbit transfer rocket engine technology program

    NASA Technical Reports Server (NTRS)

    Holzmann, Wilfried A.; Hayden, Warren R.

    1988-01-01

    To ensure controllability of the baseline design for a 7500 pound thrust, 10:1 throttleable, dual expanded cycle, Hydrogen-Oxygen, orbit transfer rocket engine, an Integrated Controls and Health Monitoring concept was developed. This included: (1) dynamic engine simulations using a TUTSIM-derived computer code; (2) analysis of various control methods; (3) a failure modes analysis to identify critical sensors; (4) a survey of applicable sensor technology; and (5) a study of Health Monitoring philosophies. The engine design was found to be controllable over the full throttling range by using 13 valves, including an oxygen turbine bypass valve to control mixture ratio, and a hydrogen turbine bypass valve, used in conjunction with the oxygen bypass to control thrust. Classic feedback control methods are proposed along with specific requirements for valves, sensors, and the controller. Expanding on the control system, a Health Monitoring system is proposed including suggested computing methods and the following recommended sensors: (1) fiber optic and silicon bearing deflectometers; (2) capacitive shaft displacement sensors; and (3) hot spot thermocouple arrays. Further work is needed to refine and verify the dynamic simulations and control algorithms, to advance sensor capabilities, and to develop the Health Monitoring computational methods.

  8. New-generation diabetes management: glucose sensor-augmented insulin pump therapy

    PubMed Central

    Cengiz, Eda; Sherr, Jennifer L; Weinzimer, Stuart A; Tamborlane, William V

    2011-01-01

    Diabetes is one of the most common chronic disorders with an increasing incidence worldwide. Technologic advances in the field of diabetes have provided new tools for clinicians to manage this challenging disease. For example, the development of continuous subcutaneous insulin infusion systems has allowed for refinement in the delivery of insulin, while continuous glucose monitors provide patients and clinicians with a better understanding of minute-to-minute glucose variability, leading to the titration of insulin delivery based on this variability when applicable. Merging of these devices has resulted in sensor-augmented insulin pump therapy, which became a major building block upon which the artificial pancreas (closed-loop systems) can be developed. This article summarizes the evolution of sensor-augmented insulin pump therapy to the present day and its future applications in new-generation diabetes management. PMID:21728731

  9. New-generation diabetes management: glucose sensor-augmented insulin pump therapy.

    PubMed

    Cengiz, Eda; Sherr, Jennifer L; Weinzimer, Stuart A; Tamborlane, William V

    2011-07-01

    Diabetes is one of the most common chronic disorders with an increasing incidence worldwide. Technologic advances in the field of diabetes have provided new tools for clinicians to manage this challenging disease. For example, the development of continuous subcutaneous insulin infusion systems has allowed for refinement in the delivery of insulin, while continuous glucose monitors provide patients and clinicians with a better understanding of minute-to-minute glucose variability, leading to the titration of insulin delivery based on this variability when applicable. Merging of these devices has resulted in sensor-augmented insulin pump therapy, which became a major building block upon which the artificial pancreas (closed-loop systems) can be developed. This article summarizes the evolution of sensor-augmented insulin pump therapy to the present day and its future applications in new-generation diabetes management.

  10. Nature inspires sensors to do more with less.

    PubMed

    Mulvaney, Shawn P; Sheehan, Paul E

    2014-10-28

    The world is filled with widely varying chemical, physical, and biological stimuli. Over millennia, organisms have refined their senses to cope with these diverse stimuli, becoming virtuosos in differentiating closely related antigens, handling extremes in concentration, resetting the spent sensing mechanisms, and processing the multiple data streams being generated. Nature successfully deals with both repeating and new stimuli, demonstrating great adaptability when confronted with the latter. Interestingly, nature accomplishes these feats using a fairly simple toolbox. The sensors community continues to draw inspiration from nature's example: just look at the antibodies used as biosensor capture agents or the neural networks that process multivariate data streams. Indeed, many successful sensors have been built by simply mimicking natural systems. However, some of the most exciting breakthroughs occur when the community moves beyond mimicking nature and learns to use nature's tools in innovative ways.

  11. Use of array of conducting polymers for differentiation of coconut oil products.

    PubMed

    Rañola, Rey Alfred G; Santiago, Karen S; Sevilla, Fortunato B

    2016-01-01

    An array of chemiresistors based on conducting polymers was assembled for the differentiation of coconut oil products. The chemiresistor sensors were fabricated through the potentiostatic electrodeposition of polyaniline (PANi), polypyrrole (PPy) and poly(3-methylthiophene) (P-3MTp) on the gap separating two planar gold electrodes set on a Teflon substrate. The change in electrical resistance of the sensors was measured after exposing the array to the headspace of the oil samples. The sensor response was found to be rapid, reversible, and reproducible. Different signals were obtained for each coconut oil sample, and pattern recognition techniques were employed for the analysis of the data. The developed system was able to distinguish virgin coconut oil (VCO) from refined, bleached & deodorised coconut oil (RBDCO), flavoured VCO, homemade VCO, and rancid VCO. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Structure-From-Motion in 3D Space Using 2D Lidars

    PubMed Central

    Choi, Dong-Geol; Bok, Yunsu; Kim, Jun-Sik; Shim, Inwook; Kweon, In So

    2017-01-01

    This paper presents a novel structure-from-motion methodology using 2D lidars (Light Detection And Ranging). In 3D space, 2D lidars do not provide sufficient information for pose estimation. For this reason, additional sensors have been used along with the lidar measurement. In this paper, we use a sensor system that consists of only 2D lidars, without any additional sensors. We propose a new method of estimating both the 6D pose of the system and the surrounding 3D structures. We compute the pose of the system using line segments of scan data and their corresponding planes. After discarding the outliers, both the pose and the 3D structures are refined via nonlinear optimization. Experiments with both synthetic and real data show the accuracy and robustness of the proposed method. PMID:28165372
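
    The following toy sketch illustrates only the nonlinear refinement step: a 6-DOF pose is estimated by minimizing point-to-plane residuals with known correspondences; the planes, points, and true pose are synthetic, and the paper's line-segment extraction, correspondence search, and outlier rejection are omitted.

```python
# Toy sketch of the nonlinear refinement step: estimate a 6-DOF pose by minimizing
# point-to-plane residuals for lidar points with known plane correspondences.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
# Three orthogonal planes n.x = d (a floor and two walls), several points on each.
planes = [(np.array([0.0, 0.0, 1.0]), 0.0),
          (np.array([1.0, 0.0, 0.0]), 5.0),
          (np.array([0.0, 1.0, 0.0]), 4.0)]
pts_world, normals, ds = [], [], []
for n, d in planes:
    for _ in range(20):
        p = rng.uniform(-3, 3, 3)
        p += (d - n @ p) * n                # project the random point onto the plane
        pts_world.append(p); normals.append(n); ds.append(d)
pts_world, normals, ds = map(np.array, (pts_world, normals, ds))

true_pose = np.r_[0.05, -0.02, 0.1, 0.3, -0.4, 0.2]    # rotvec (rad), translation (m)
R_true = Rotation.from_rotvec(true_pose[:3]).as_matrix()
pts_sensor = (pts_world - true_pose[3:]) @ R_true      # world -> sensor frame

def residuals(pose):
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    p_world = pts_sensor @ R.T + pose[3:]              # sensor -> world with candidate pose
    return np.einsum("ij,ij->i", normals, p_world) - ds

fit = least_squares(residuals, x0=np.zeros(6))
print("recovered pose (rotvec, translation):", fit.x.round(3))
```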

  13. Evaluation of the Sensor Data Record from the Nadir Instruments of the Ozone Mapping Profiler Suite (OMPS)

    NASA Technical Reports Server (NTRS)

    Wu, Xiangqian; Liu, Quanhua; Zeng, Jian; Grotenhuis, Michael; Qian, Haifeng; Caponi, Maria; Flynn, Larry; Jaross, Glen; Sen, Bhaswar; Buss, Richard H., Jr.; hide

    2014-01-01

    This paper evaluates the first 15 months of the Ozone Mapping and Profiler Suite (OMPS) Sensor Data Record (SDR) acquired by the nadir sensors and processed by the National Oceanic and Atmospheric Administration Interface Data Processing Segment. The evaluation consists of an inter-comparison with a similar satellite instrument, an analysis using a radiative transfer model, and an assessment of product stability. This is in addition to the evaluations of sensor calibration and of the Environmental Data Record product that are also reported in this Special Issue. All of these are part of a synergistic effort to provide a comprehensive assessment at every level of the products to ensure their quality. It is found that the OMPS nadir SDR quality is satisfactory for the current Provisional maturity. Methods used in the evaluation are being further refined, developed, and expanded, in collaboration with the international community through the Global Space-based Inter-Calibration System, to support the upcoming long-term monitoring.

  14. Attitude ground support system for the solar maximum mission spacecraft

    NASA Technical Reports Server (NTRS)

    Nair, G.

    1980-01-01

    The SMM attitude ground support system (AGSS) supports the acquisition of spacecraft roll attitude reference, performs the in-flight calibration of the attitude sensor complement, supports onboard control autonomy via onboard computer data base updates, and monitors onboard computer (OBC) performance. Initial roll attitude acquisition is accomplished by obtaining a coarse 3 axis attitude estimate from magnetometer and Sun sensor data and subsequently refining it by processing data from the fixed head star trackers. In-flight calibration of the attitude sensor complement is achieved by processing data from a series of slew maneuvers designed to maximize the observability and accuracy of the appropriate alignments and biases. To ensure autonomy of spacecraft operation, the AGSS selects guide stars and computes sensor occultation information for uplink to the OBC. The onboard attitude control performance is monitored on the ground through periodic attitude determination and processing of OBC data in downlink telemetry. In general, the control performance has met mission requirements. However, software and hardware problems have resulted in sporadic attitude reference losses.
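
    As an illustration of how a coarse three-axis attitude can be formed from magnetometer and Sun sensor measurements, the sketch below uses the classic TRIAD construction on invented reference and body vectors; the subsequent refinement with fixed head star tracker data is not shown, and this is not the SMM AGSS implementation.

```python
# Sketch of a coarse three-axis attitude estimate from two vector observations
# (Sun direction and magnetic field) via the classic TRIAD construction.
# Reference and body vectors are made up; star-tracker refinement is not shown.
import numpy as np

def triad(v1_body, v2_body, v1_ref, v2_ref):
    """Return the body-to-reference rotation matrix from two vector pairs."""
    def frame(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b); t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(v1_ref, v2_ref) @ frame(v1_body, v2_body).T

sun_ref = np.array([1.0, 0.0, 0.0])              # Sun direction, inertial frame
mag_ref = np.array([0.2, 0.9, 0.4]); mag_ref /= np.linalg.norm(mag_ref)

# Simulated body-frame measurements for a known true attitude (for checking only).
angle = np.deg2rad(30.0); c, s = np.cos(angle), np.sin(angle)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # reference <- body
sun_body = R_true.T @ sun_ref
mag_body = R_true.T @ mag_ref

R_est = triad(sun_body, mag_body, sun_ref, mag_ref)
cos_err = np.clip((np.trace(R_est @ R_true.T) - 1.0) / 2.0, -1.0, 1.0)
print("attitude error (deg):", np.rad2deg(np.arccos(cos_err)))
```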

  15. Feasibility Test of a Liquid Film Thickness Sensor on a Flexible Printed Circuit Board Using a Three-Electrode Conductance Method

    PubMed Central

    Lee, Kyu Byung; Kim, Jong Rok; Park, Goon Cherl; Cho, Hyoung Kyu

    2016-01-01

    Liquid film thickness measurements under temperature-varying conditions in a two-phase flow are of great importance to refining our understanding of two-phase flows. In order to overcome the limitations of the conventional electrical means of measuring the thickness of a liquid film, this study proposes a three-electrode conductance method, with the device fabricated on a flexible printed circuit board (FPCB). The three-electrode conductance method offers the advantage of applicability under conditions with varying temperatures in principle, while the FPCB has the advantage of usability on curved surfaces and in relatively high-temperature conditions in comparison with sensors based on a printed circuit board (PCB). Two types of prototype sensors were fabricated on an FPCB and the feasibility of both was confirmed in a calibration test conducted at different temperatures. With the calibrated sensor, liquid film thickness measurements were conducted via a falling liquid film flow experiment, and the working performance was tested. PMID:28036000

  16. Demonstration of automated proximity and docking technologies

    NASA Astrophysics Data System (ADS)

    Anderson, Robert L.; Tsugawa, Roy K.; Bryan, Thomas C.

    An autodock was demonstrated using straightforward techniques and real sensor hardware. A simulation testbed was established and validated. The sensor design was refined with improved optical performance and image processing noise mitigation techniques, and the sensor is ready for production from off-the-shelf components. The autonomous spacecraft architecture is defined. The areas of sensors, docking hardware, propulsion, and avionics are included in the design. The Guidance Navigation and Control architecture and requirements are developed. Modular structures suitable for automated control are used. The spacecraft system manager functions, including configuration, resource, and redundancy management, are defined. The requirements for an autonomous spacecraft executive are defined. High-level decision making, mission planning, and mission contingency recovery are part of this. The next step is to perform flight demonstrations. After the presentation, the following question was asked: How do you define validation? There are two components to the validation definition: software simulation with formal and rigorous validation, and hardware and facility performance validated with respect to software already validated against an analytical profile.

  17. Mathematical Model of the One-stage Magneto-optical Sensor Based on Faraday Effect

    NASA Astrophysics Data System (ADS)

    Babaev, O. G.; Paranin, V. D.; Sinitsin, L. I.

    2018-01-01

    The aim of this work is to refine a model of magneto-optical sensors based on Faraday's longitudinal magneto-optical effect. The tasks of the study include computer modeling and analysis of the transfer characteristic of a single-stage magneto-optical sensor for various polarizations of the input beam and non-ideal optical components. The proposed mathematical model and software make it possible to take into account the non-ideal characteristics of film polaroids observed in operation in the near-infrared region and at increased temperatures. On the basis of the model analysis, it was found that the dependence of the normalized transmission T(γ₂) is periodic. Choosing the angle (γ₂ − γ₁) makes it possible to shift the initial operating point and change the sensitivity dT/dγ₂. The influence of the input beam polarization increases as the polaroid parameters deviate from ideal, and manifests itself as a reduction of the modulation depth and an angular shift of the sensor conversion response.
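
    For orientation, the idealized single-stage transfer characteristic (assuming perfect polarizer and analyzer, which the refined model above deliberately does not assume) follows from Malus's law with the Faraday rotation added to the analyzer angle:

```latex
% Idealized transfer characteristic of a single-stage Faraday sensor.
% \gamma_1, \gamma_2: polarizer and analyzer angles; \theta_F: Faraday rotation,
% with Verdet constant V, magnetic field B, and optical path length L.
T(\gamma_2) = \cos^2\!\big(\gamma_2 - \gamma_1 - \theta_F\big)
            = \tfrac{1}{2}\Big[1 + \cos\!\big(2(\gamma_2 - \gamma_1) - 2\theta_F\big)\Big],
\qquad \theta_F = V B L .
```

    This reproduces the periodic dependence on γ₂ and shows how the bias angle (γ₂ − γ₁) sets the operating point and the sensitivity dT/dγ₂.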

  18. Interfacial fields in organic field-effect transistors and sensors

    NASA Astrophysics Data System (ADS)

    Dawidczyk, Thomas J.

    Organic electronics are currently being commercialized and present a viable alternative to conventional electronics. These organic materials offer the ability to chemically manipulate the molecule, allowing for more facile mass processing techniques, which in turn reduces the cost. One application where organic semiconductors (OSCs) are being investigated is sensors. This work evaluates an assortment of n- and p-channel semiconductors as organic field-effect transistor (OFET) sensors. The sensor responses to dinitrotoluene (DNT) vapor and solid, along with trinitrotoluene (TNT) solid, were studied. Different semiconductor materials give different magnitudes and directions of electrical current response upon exposure to DNT. Additional OFET parameters---mobility and threshold voltage---further refine the response to the DNT, with each OFET sensor requiring a certain gate voltage for an optimized response to the vapor. The pattern of responses has sufficient diversity to distinguish DNT from other vapors. To effectively use these OFET sensors in a circuit, the threshold voltage needs to be tuned for each transistor to increase the efficiency of the circuit and maximize the sensor response. The threshold voltage can be altered by embedding charges into the dielectric layer of the OFET. To study the quantity and energy of charges needed to alter the threshold voltage, charge carriers were injected into polystyrene (PS) and investigated with scanning Kelvin probe microscopy (SKPM) and thermally stimulated discharge current (TSDC). Lateral heterojunctions of pentacene/PS were scanned using SKPM, effectively observing polarization along a side view of a lateral nonvolatile organic field-effect transistor dielectric interface. TSDC was used to observe charge migration out of PS films and to estimate the trap energy level inside the PS, using the initial rise method. The process was further refined to create lateral heterojunctions that were actual working OFETs, consisting of a PS or poly(3-trifluoro)styrene (F-PS) gate dielectric and a pentacene OSC. The charge storage inside the dielectric was visualized with SKPM, correlated to a threshold voltage shift in the transistor operation, and related to bias stress as well. The SKPM method allows the dielectric/OSC interface of the OFET to be visualized without any alteration of the OFET. Furthermore, this technique allows for the observation of charge distribution between the two dielectric interfaces, PS and F-PS. SKPM is used to visualize the charge from conventional gate biasing and also as a result of embedding charges deliberately into the dielectric to shift the threshold voltage. Conventional gate biasing shows considerable residual charge in the PS dielectric, which results in gate bias stress. Gate bias stress is one of the major hurdles left in the commercialization of OFETs. To counter this, additives of different energy levels were inserted into the dielectric to limit the gate bias stress. Additionally, the dielectrics were pre-charged to try to prevent further bias stress. Neither pre-charging the dielectric nor the addition of additives had been used before for gate bias prevention, but both methods offer improved resistance to gate bias stress and help to further refine the dielectric design.

  19. A lumber grading system for the future: an update evaluation

    Treesearch

    D. Earl Kline; Chris Surak; Philip A. Araman

    2000-01-01

    Virginia Tech and the Southern Research Station of the USDA Forest Service have jointly developed and refined a multiple-sensor lumber-scanning prototype to demonstrate and test applicable scanning technologies (Conners et al. 1997, Kline et al. 1997, Kline et al. 1998). This R&D effort has led to a patented wood color and grain sorting system (Conners and Lu 1998...

  20. Integrated Conceptual Design of Joined-Wing SensorCraft Using Response Surface Models

    DTIC Science & Technology

    2006-11-01

    Raymer Approximate and Group Weights Sizing Methods...Finite Element Model Structural Weight...Empty Weight Fraction Equation...Figure 29 Response of Refined Weight to T/W and W/S Inputs for Model (2) Raymer ASW

  1. Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning.

    PubMed

    Jeong, Han-You; Nguyen, Hoa-Hung; Bhawiyuga, Adhitya

    2018-04-04

    Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning.
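
    A minimal sketch of the greedy pairing step is given below; the spatiotemporal dissimilarity here is a simple weighted sum of position and velocity differences, and the gate and weights are illustrative choices rather than the paper's values.

```python
# Sketch of greedy minimum-weight matching between local sensing tracks and remote
# (V2X-reported) vehicle states, using a simple spatiotemporal dissimilarity.
import numpy as np

def dissimilarity(a, b, w_pos=1.0, w_vel=0.5):
    # a, b: (x, y, vx, vy); position and velocity differences combined.
    return (w_pos * np.hypot(a[0] - b[0], a[1] - b[1])
            + w_vel * np.hypot(a[2] - b[2], a[3] - b[3]))

def greedy_match(local, remote, gate=3.0):
    pairs = sorted(((dissimilarity(l, r), i, j)
                    for i, l in enumerate(local)
                    for j, r in enumerate(remote)), key=lambda t: t[0])
    used_l, used_r, matches = set(), set(), []
    for d, i, j in pairs:                      # pair up by ascending dissimilarity
        if d > gate:
            break
        if i not in used_l and j not in used_r:
            matches.append((i, j, d))
            used_l.add(i); used_r.add(j)
    return matches

local  = [(10.2, 4.9, 13.8, 0.1), (30.5, -3.6, 20.1, 0.0)]   # on-board sensing estimates
remote = [(30.0, -3.5, 19.8, 0.0), (10.0, 5.0, 14.0, 0.0), (55.0, 2.0, 25.0, 0.0)]
print(greedy_match(local, remote))
```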

  2. FLASH X-RAY (FXR) LINEAR INDUCTION ACCELERATOR (LIA) OPTIMIZATION Sensor Delay Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ong, M M; Houck, T L; Kreitzer, B R

    2006-05-01

    The radiographic goal of the FXR Optimization Project is to generate an x-ray pulse with a peak energy of 19 MeV, a spot size of 1.5 mm, a dose of 500 rad, and a duration of 60 ns. The electrical objectives are to generate a 3 kA electron beam and refine our 16 MV accelerator so that the voltage does not vary more than 1%-rms. In a multi-cell linear induction accelerator, like FXR, the timing of the acceleration pulses relative to the beam is critical. The pulses must be timed optimally so that a cell is at full voltage before the beam arrives and does not drop until the beam passes. In order to stay within the energy-variation budget, the synchronization between the cells and beam arrival must be controlled to a couple of nanoseconds. Therefore, temporal measurements must be accurate to a fraction of a nanosecond. The FXR Optimization Project developed a one-giga-sample-per-second (GS/s) data acquisition system to record beam sensor data. Signal processing algorithms were written to determine cell timing with an uncertainty of a fraction of a nanosecond. However, the uncertainty in the sensor delay was still a few nanoseconds. This error had to be reduced if we were to improve the quality of the electron beam. Two types of sensors are used to align the cell voltage pulse against the beam current. The beam current is measured with resistive-wall sensors. The cell voltages are read with capacitive voltage monitors. Sensor delays can be traced to two mechanisms: (1) the sensors are not co-located at the beam and cell interaction points, and (2) the sensors have different-length jumper cables and other components that connect them to the standard-length coaxial cables of the data acquisition system. Using the physical locations and dimensions of the sensor components, and the dielectric constant of the materials, delay times were computed. Relative to the cell voltage, the beam current was theoretically reporting late by 7.7 ns. Two experiments were performed to verify and refine the sensor delay correction. In the first experiment, the beam was allowed to drift through a cell that was not pulsed. The beam induces a potential in the cell that is read by the voltage monitor. Analysis of the data indicated that the beam sensor signal was likely 7.1 ns late. In the second experiment, the beam current is calculated from the injector diode voltage, which is the sum of the cell voltages. A 7 ns correction produced a very good match between the signals from the two types of sensors. For simplicity, we selected a correction factor that advanced the current signals by 7 ns. This should reduce the uncertainty in the temporal measurements to less than 1 ns.

  3. An epidemic model for biological data fusion in ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Chang, K. C.; Kotari, Vikas

    2009-05-01

    Bioterrorism can be a highly refined and catastrophic approach to attacking a nation. This requires the development of a complete architecture designed specifically for this purpose, which includes but is not limited to sensing/detection, tracking and fusion, communication, and others. In this paper we focus on one such architecture and evaluate its performance. Various sensors for this specific purpose have been studied. The emphasis has been on the use of distributed systems such as ad hoc networks and on the application of epidemic data fusion algorithms to better manage the bio-threat data. Attention has also been given to understanding the performance characteristics of these algorithms under diverse real-time scenarios, implemented through extensive Java-based simulations. Through comparative studies on communication and fusion, the performance of the channel filter algorithm for biological sensor data fusion is validated.
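
    As a scalar illustration of channel-filter fusion (the information-form bookkeeping that avoids double-counting shared data in a decentralized network), the sketch below fuses two node estimates while subtracting the common channel information; all numbers are invented, and this is only a stand-in for the algorithm evaluated in the paper.

```python
# Scalar sketch of channel-filter fusion in information form: two nodes fuse their
# estimates of a contaminant level while removing the common information already
# exchanged over the channel (so shared data are not double-counted).
# All numbers are invented for illustration.

def to_info(mean, var):
    return mean / var, 1.0 / var             # information state y, information Y

def from_info(y, Y):
    return y / Y, 1.0 / Y

# Local posteriors at node A and node B, and the common prior they both already share.
y_a, Y_a = to_info(mean=4.8, var=0.5)
y_b, Y_b = to_info(mean=5.4, var=0.8)
y_c, Y_c = to_info(mean=5.0, var=2.0)        # channel (common) information

Y_f = Y_a + Y_b - Y_c                        # fused information
y_f = y_a + y_b - y_c
mean_f, var_f = from_info(y_f, Y_f)
print(f"fused estimate: {mean_f:.3f} +/- {var_f ** 0.5:.3f}")
```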

  4. Basic Geometric Support of Systems for Earth Observation from Geostationary and Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.

    2017-12-01

    A set of standardized models and algorithms for geometric normalization and georeferencing images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in optimal projection. Problems of the high-precision ground calibration of the imaging equipment using reference objects, as well as issues of the flight calibration and refinement of geometric models using the absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies is performed in the calibration of sensors for spacecrafts of the Electro-L series and during the simulation of the Arktika prospective system.

  5. Reduction procedures for accurate analysis of MSX surveillance experiment data

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. Mike; Lane, Mark T.; Abbot, Rick I.

    1994-01-01

    Technical challenges of the Midcourse Space Experiment (MSX) science instruments require careful characterization and calibration of these sensors for analysis of surveillance experiment data. Procedures for reduction of Resident Space Object (RSO) detections will be presented which include refinement and calibration of the metric and radiometric (and photometric) data and calculation of a precise MSX ephemeris. Examples will be given which support the reduction, and these are taken from ground-test data similar in characteristics to the MSX sensors and from the IRAS satellite RSO detections. Examples to demonstrate the calculation of a precise ephemeris will be provided from satellites in similar orbits which are equipped with S-band transponders.

  6. Radiometric Calibration of the Earth Observing System's Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Slater, Philip N. (Principal Investigator)

    1997-01-01

    The work on the grant was mainly directed towards developing new, accurate, redundant methods for the in-flight, absolute radiometric calibration of satellite multispectral imaging systems and refining the accuracy of methods already in use. Initially the work was in preparation for the calibration of MODIS and HIRIS (before the development of that sensor was canceled), with the realization that it would be applicable to most imaging multi- or hyper-spectral sensors provided their spatial or spectral resolutions were not too coarse. The work on the grant involved three different ground-based, in-flight calibration methods: reflectance-based, radiance-based, and the diffuse-to-global irradiance ratio used with the reflectance-based method. This continuing research had the dual advantage of: (1) developing several independent methods to create the redundancy that is essential for the identification and, hopefully, the elimination of systematic errors; and (2) refining the measurement techniques and algorithms that can be used not only for improving calibration accuracy but also for the reverse process of retrieving ground reflectances from calibrated remote-sensing data. The grant also provided the support necessary for us to embark on other projects such as the ratioing radiometer approach to on-board calibration (this has been further developed by SBRS as the 'solar diffuser stability monitor' and is incorporated into the most important on-board calibration system for MODIS). Another example of work that was a spin-off from the grant funding was a study of solar diffuser materials. Journal citations, titles, and abstracts of publications authored by faculty, staff, and students are also attached.

  7. A reference architecture for telemonitoring.

    PubMed

    Clarke, Malcolm

    2004-01-01

    The Telecare Interactive Continuous Monitoring System exploits GPRS to provide an ambulatory device that monitors selected vital signs on a continuous basis. Alarms are sent when parameters fall outside preset limits, and accompanying physiological data may also be transmitted. The always-connected property of GPRS allows continuous interactive control of the device and its sensors, permitting changes to monitoring parameters or even enabling continuous monitoring of a sensor in an emergency. A new personal area network (PAN) has been developed to support short-range wireless connection to sensors worn on the body, including ECG and finger-worn SpO2. Most notable is the use of an ultra-low radio frequency to reduce power consumption to a minimum. The system has been designed to use a hierarchical architecture for sensors and "derived" signals, such as heart rate from ECG, so that each can be independently controlled and managed. Sensors are treated as objects, and functions are defined to control aspects of behaviour. These are refined in order to define a generic set of abstract functions that handle the majority of operations, leaving a minimum of sensor-specific commands (see the sketch below). The intention is to define a reference architecture in order to research the functionality and system architecture of a telemonitoring system. The Telecare project is funded through a grant from the European Commission (IST programme).
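
    A minimal sketch of the sensor-as-object idea described above: a generic abstract interface plus one concrete sensor type. Class and method names are illustrative and are not taken from the Telecare specification.

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Generic abstract sensor interface; most control goes through these functions."""
    @abstractmethod
    def start(self) -> None: ...
    @abstractmethod
    def stop(self) -> None: ...
    @abstractmethod
    def set_sampling_rate(self, hz: float) -> None: ...

class SpO2Sensor(Sensor):
    """Concrete sensor; only device-specific details are added to the generic interface."""
    def __init__(self) -> None:
        self.rate_hz = 1.0
        self.running = False
    def start(self) -> None:
        self.running = True
    def stop(self) -> None:
        self.running = False
    def set_sampling_rate(self, hz: float) -> None:
        self.rate_hz = hz
```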

  8. Estimation of distributed Fermat-point location for wireless sensor networking.

    PubMed

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location from the triangle formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point with the shortest total distance to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to minimize the error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Furthermore, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
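
    The Fermat point of a triangle can be approximated with Weiszfeld's iteration for the geometric median, as in the sketch below; this is a standard construction and not necessarily the exact procedure used inside DFPLE.

```python
import numpy as np

def fermat_point(beacons, iters=100, eps=1e-9):
    """Approximate the Fermat point (geometric median) of three beacon positions.

    beacons : (3, 2) array of (x, y) coordinates of the neighboring beacon nodes
    """
    p = beacons.mean(axis=0)                        # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(beacons - p, axis=1)
        if np.any(d < eps):                         # estimate coincides with a beacon
            break
        w = 1.0 / d                                 # Weiszfeld weights
        p_new = (beacons * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(p_new - p) < eps:
            p = p_new
            break
        p = p_new
    return p
```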

  9. A new intelligent electronic nose system for measuring and analysing livestock and poultry farm odours.

    PubMed

    Pan, Leilei; Yang, Simon X

    2007-12-01

    This paper introduces a new portable intelligent electronic nose system developed especially for measuring and analysing livestock and poultry farm odours. It can be used both in the laboratory and in the field. The sensor array of the proposed electronic nose consists of 14 gas sensors, a humidity sensor, and a temperature sensor. The gas sensors were selected especially for the main compounds found in livestock farm odours. An expert system called "Odour Expert" was developed to support researchers' and farmers' decision making on odour control strategies for livestock and poultry operations. "Odour Expert" utilises several advanced artificial intelligence technologies tailored to livestock and poultry farm odours. It can provide more advanced odour analysis than existing commercially available products. In addition, a ranking of odour generation factors is provided, which refines the focus of odour control research. Field experiments were conducted downwind from the barns on 14 livestock and poultry farms. Experimental results show that the odour strengths predicted by the electronic nose are highly consistent with the odour intensity perceived by a human panel. The "Odour Expert" is a useful tool for assisting farmers' odour management practices.

  10. Inferring Human Activity Recognition with Ambient Sound on Wireless Sensor Nodes.

    PubMed

    Salomons, Etto L; Havinga, Paul J M; van Leeuwen, Henk

    2016-09-27

    A wireless sensor network that consists of nodes with a sound sensor can be used to obtain context awareness in home environments. However, the limited processing power of wireless nodes offers a challenge when extracting features from the signal and, subsequently, classifying the source. Although multiple papers can be found on different methods of sound classification, none of these are aimed at limited hardware or take the efficiency of the algorithms into account. In this paper, we compare and evaluate several classification methods on a real sensor platform using different feature types and classifiers, in order to find an approach that results in a good classifier that can run on limited hardware. To be as realistic as possible, we trained our classifiers using sound waves from many different sources. We conclude that, despite the fact that the classifiers are often of low quality due to the highly restricted hardware resources, sufficient performance can be achieved when (1) the window length for our classifiers is increased, and (2) a two-step approach is applied in which a refined classification follows a global classification.
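
    A minimal sketch of the two-step idea: cheap frame features feed a coarse (global) classifier, and only the matching refined classifier is then consulted. The feature choice and the pre-trained models (`coarse_clf`, `refined_clfs`) are assumptions for illustration, not the classifiers evaluated in the paper.

```python
import numpy as np

def cheap_features(frame):
    """Low-cost features suited to constrained nodes: short-time energy and zero-crossing rate."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))
    return np.array([energy, zcr])

def two_step_classify(frame, coarse_clf, refined_clfs):
    """First assign a broad category, then run only the refined classifier for that category.

    coarse_clf   : pre-trained global classifier (hypothetical), with a predict() method
    refined_clfs : dict mapping coarse labels to pre-trained refined classifiers (hypothetical)
    """
    feats = cheap_features(frame).reshape(1, -1)
    coarse_label = coarse_clf.predict(feats)[0]
    refined_label = refined_clfs[coarse_label].predict(feats)[0]
    return coarse_label, refined_label
```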

  11. Dynamic mechanical measurement of the viscoelasticity of single adherent cells

    NASA Astrophysics Data System (ADS)

    Corbin, Elise A.; Adeniba, Olaoluwa O.; Ewoldt, Randy H.; Bashir, Rashid

    2016-02-01

    Many recent studies on the viscoelasticity of individual cells link mechanics with cellular function and health. Here, we introduce a measurement of the viscoelastic properties of individual human colon cancer cells (HT-29) using silicon pedestal microelectromechanical systems (MEMS) resonant sensors. We demonstrate that the viscoelastic properties of single adherent cells can be extracted by measuring a difference in vibrational amplitude of our resonant sensor platform. The magnitude of vibration of the pedestal sensor is measured using a laser Doppler vibrometer (LDV). A change in amplitude of the sensor, compared with the driving amplitude (the amplitude ratio), is influenced by the mechanical properties of the adhered cells. The amplitude ratio of the fixed cells was greater than that of the live cells, with a p-value <0.0001. By combining the amplitude shift with the measured resonant frequency shift, we determined elastic modulus and viscosity values of 100 Pa and 0.0031 Pa·s, respectively. Our method, based on the change in amplitude of resonant MEMS devices, can enable the determination of a refined solution space and could improve measurements of cell stiffness.

  12. Vision Sensor-Based Road Detection for Field Robot Navigation

    PubMed Central

    Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen

    2015-01-01

    Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514

  13. Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.

    PubMed

    Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin

    2018-04-02

    Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile and real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capturing. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found through maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that complementary information captured by multimodal sensors can be utilized to improve the performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
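
    A sketch of a combined geometric-plus-thermographic cost over matched points, in the spirit of the refinement step described above; the squared-error form and the weighting term are assumptions, not the paper's exact loss function.

```python
import numpy as np

def combined_residual(src_pts, dst_pts, src_temp, dst_temp, R, t, lam=0.1):
    """Combined geometric and thermographic cost for a candidate rigid motion (R, t).

    src_pts, dst_pts   : (N, 3) matched 3-D points from consecutive frames
    src_temp, dst_temp : (N,) thermal intensity (or temperature) at those points
    lam                : assumed weight balancing the thermal term against the geometric term
    """
    transformed = src_pts @ R.T + t                       # apply the candidate motion
    geometric = np.sum((transformed - dst_pts) ** 2, axis=1)
    thermal = (src_temp - dst_temp) ** 2                  # thermal consistency term
    return float(np.sum(geometric + lam * thermal))
```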

  14. An Evolutionary Algorithm for Fast Intensity Based Image Matching Between Optical and SAR Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias

    2018-04-01

    This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions such as hybridization (e.g., local search) are used to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
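
    A toy version of the evolutionary search, restricted to integer translations scored by normalized cross-correlation; the real algorithm uses a richer transform model and hybrid local search, so this should be read only as an outline of the optimization loop.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized image patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def evolve_shift(optical, sar, pop=20, gens=30, search=50, rng=None):
    """Evolutionary search for an integer (dx, dy) offset that best places a SAR patch
    within a larger optical patch; `optical` must be at least
    (search + sar.shape[0], search + sar.shape[1]) in size."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = sar.shape

    def fitness(dx, dy):
        return ncc(optical[dy:dy + h, dx:dx + w], sar)

    cand = rng.integers(0, search, size=(pop, 2))
    for _ in range(gens):
        scores = np.array([fitness(dx, dy) for dx, dy in cand])
        parents = cand[np.argsort(scores)[-pop // 2:]]                 # keep the fitter half
        children = parents + rng.integers(-2, 3, size=parents.shape)   # small mutations
        cand = np.clip(np.vstack([parents, children]), 0, search - 1)
    scores = np.array([fitness(dx, dy) for dx, dy in cand])
    dx, dy = cand[np.argmax(scores)]
    return int(dx), int(dy)
```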

  15. Surface acoustic waves/silicon monolithic sensor processor

    NASA Technical Reports Server (NTRS)

    Kowel, S. T.; Kornreich, P. G.; Fathimulla, M. A.; Mehter, E. A.

    1981-01-01

    Progress is reported in the creation of a two dimensional Fourier transformer for optical images based on the zinc oxide on silicon technology. The sputtering of zinc oxide films using a micro etch system and the possibility of a spray-on technique based on zinc chloride dissolved in alcohol solution are discussed. Refinements to techniques for making platinum silicide Schottky barrier junctions essential for constructing the ultimate convolver structure are described.

  16. Application and calibration of the subsurface mapping capability of SIR-B in desert regions

    NASA Technical Reports Server (NTRS)

    Schaber, G. G.; Mccauley, J. F.; Breed, C. S.; Grolier, M. J.; Issawi, B.; Haynes, C. V.; Mchugh, W.; Walker, A. S.; Blom, R.

    1984-01-01

    The penetration capability of the shuttle imaging radar (SIR-B) sensor in desert regions is investigated. Refined models to explain this penetration capability in terms of radar physics and regional geologic conditions are devised. The sand-buried radar-rivers discovered in the Western Desert in Egypt and Sudan are defined. Results and procedures developed during previous SIR-A investigation of the same area are extrapolated.

  17. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor

    PubMed Central

    Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo

    2017-01-01

    In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675

  18. Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning

    PubMed Central

    Bhawiyuga, Adhitya

    2018-01-01

    Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning. PMID:29617341

  19. Interfacing and Verifying ALHAT Safe Precision Landing Systems with the Morpheus Vehicle

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Hirsh, Robert L.; Roback, Vincent E.; Villalpando, Carlos; Busa, Joseph L.; Pierrottet, Diego F.; Trawny, Nikolas; Martin, Keith E.; Hines, Glenn D.

    2015-01-01

    The NASA Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project developed a suite of prototype sensors to enable autonomous and safe precision landing of robotic or crewed vehicles under any terrain lighting conditions. Development of the ALHAT sensor suite was a cross-NASA effort, culminating in integration and testing on-board a variety of terrestrial vehicles toward infusion into future spaceflight applications. Terrestrial tests were conducted on specialized test gantries, moving trucks, helicopter flights, and a flight test onboard the NASA Morpheus free-flying, rocket-propulsive flight-test vehicle. To accomplish these tests, a tedious integration process was developed and followed, which included both command and telemetry interfacing, as well as sensor alignment and calibration verification to ensure valid test data to analyze ALHAT and Guidance, Navigation and Control (GNC) performance. This was especially true for the flight test campaign of ALHAT onboard Morpheus. For interfacing of ALHAT sensors to the Morpheus flight system, an adaptable command and telemetry architecture was developed to allow for the evolution of per-sensor Interface Control Design/Documents (ICDs). Additionally, individual-sensor and on-vehicle verification testing was developed to ensure functional operation of the ALHAT sensors onboard the vehicle, as well as precision-measurement validity for each ALHAT sensor when integrated within the Morpheus GNC system. This paper provides some insight into the interface development and the integrated-systems verification that were a part of the build-up toward success of the ALHAT and Morpheus flight test campaigns in 2014. These campaigns provided valuable performance data that is refining the path toward spaceflight infusion of the ALHAT sensor suite.

  20. Integration of ALS and TLS for calibration and validation of LAI profiles from large footprint lidar

    NASA Astrophysics Data System (ADS)

    Armston, J.; Tang, H.; Hancock, S.; Hofton, M. A.; Dubayah, R.; Duncanson, L.; Fatoyinbo, T. E.; Blair, J. B.; Disney, M.

    2016-12-01

    The Global Ecosystem Dynamics Investigation (GEDI) is designed to provide measurements of forest vertical structure and above-ground biomass density (AGBD) over tropical and temperate regions. GEDI is a multi-beam waveform lidar that will acquire transects of forest canopy vertical profiles in conditions of up to 99% canopy cover. These are used to produce a number of canopy height and profile metrics to model habitat suitability and AGBD. These metrics include vertical leaf area index (LAI) profiles, which require some pre-launch refinement of large-footprint waveform processing methods for separating canopy and ground returns and estimating their reflectance. Previous research on modelling canopy gap probability to derive canopy and ground reflectance from waveforms has primarily used data from small-footprint instruments; however, development of a generalized spatial model with uncertainty will be useful for interpreting and modelling waveforms from large-footprint instruments such as the NASA Land Vegetation and Ice Sensor (LVIS), with a view to implementation for GEDI. Here we present an analysis of LVIS waveform lidar data acquired in Gabon in February 2016 to support the NASA/ESA AfriSAR campaign. AfriSAR presents a unique opportunity to test refined methods for retrieval of LAI profiles in high above-ground biomass rainforests (up to 600 Mg/ha) with dense canopies (>90% cover), where the greatest uncertainty exists. Airborne and Terrestrial Laser Scanning (TLS) data were also collected, enabling quantification of algorithm performance in plots of dense canopy cover. Refinement of canopy gap probability and LAI profile modelling from large-footprint lidar was based on solving for canopy and ground reflectance parameters spatially by penalized least-squares. The sensitivities of retrieved cover and LAI profiles to variation in canopy and ground reflectance showed improvement compared to assuming a constant ratio. We evaluated the use of spatially proximate simple waveforms to interpret more complex waveforms with poor separation of canopy and ground returns. This work has direct implications for GEDI algorithm refinement.
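
    For context, a common way to turn a waveform split into canopy and ground energy into an LAI profile is the gap-probability formulation sketched below; the reflectance ratio and extinction coefficient here are assumed constants, whereas the work described above solves for the reflectance terms spatially.

```python
import numpy as np

def lai_profile(canopy_energy, ground_energy, refl_ratio=2.0, k=0.5):
    """Vertical LAI profile from a waveform split into canopy and ground returns.

    canopy_energy : (Nz,) canopy return energy per height bin, ordered top -> bottom
    ground_energy : scalar, total energy of the ground return
    refl_ratio    : assumed canopy-to-ground reflectance ratio (rho_v / rho_g)
    k             : assumed extinction/clumping coefficient
    """
    cum_canopy = np.cumsum(canopy_energy)               # canopy energy above each height
    total = cum_canopy[-1] + refl_ratio * ground_energy
    p_gap = 1.0 - cum_canopy / total                     # gap probability profile
    lai_cum = -np.log(np.clip(p_gap, 1e-6, 1.0)) / k     # cumulative LAI from the canopy top
    return np.diff(np.concatenate([[0.0], lai_cum]))     # per-bin LAI
```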

  1. Principles and techniques of polarimetric mapping.

    NASA Technical Reports Server (NTRS)

    Halajian, J.; Hallock, H.

    1973-01-01

    This paper introduces the concept and potential value of polarimetric maps and the techniques for generating these maps in operational remote sensing. The application-oriented polarimetric signature analyses in the literature are compiled, and several optical models are illustrated to bring out requirements of a sensor system for polarimetric mapping. By use of the concepts of Stokes parameters the descriptive specification of one sensor system is refined. The descriptive specification for a multichannel digital photometric-polarimetric mapper is based upon our experience with the present single channel device which includes the generation of polarimetric maps and pictures. High photometric accuracy and stability coupled with fast, accurate digital output has enabled us to overcome the handicap of taking sequential data from the same terrain.

  2. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results that are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  3. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  4. Evaluation and refinement of a field-portable drinking water toxicity sensor utilizing electric cell-substrate impedance sensing and a fluidic biochip.

    PubMed

    Widder, Mark W; Brennan, Linda M; Hanft, Elizabeth A; Schrock, Mary E; James, Ryan R; van der Schalie, William H

    2015-07-01

    The US Army's need for a reliable and field-portable drinking water toxicity sensor was the catalyst for the development and evaluation of an electric cell-substrate impedance sensing (ECIS) device. Water testing technologies currently available to soldiers in the field are analyte-specific and have limited capabilities to detect broad-based water toxicity. The ECIS sensor described here uses rainbow trout gill epithelial cells seeded on fluidic biochips to measure changes in impedance for the detection of possible chemical contamination of drinking water supplies. Chemicals selected for testing were chosen as representatives of a broad spectrum of toxic industrial compounds. Results of a US Environmental Protection Agency (USEPA)-sponsored evaluation of the field portable device were similar to previously published US Army testing results of a laboratory-based version of the same technology. Twelve of the 18 chemicals tested following USEPA Technology Testing and Evaluation Program procedures were detected by the ECIS sensor within 1 h at USEPA-derived human lethal concentrations. To simplify field-testing methods further, elimination of a procedural step that acclimated cells to serum-free media streamlined the test process with only a slight loss of chemical sensitivity. For field use, the ECIS sensor will be used in conjunction with an enzyme-based sensor that is responsive to carbamate and organophosphorus pesticides. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Impact Analysis of Temperature and Humidity Conditions on Electrochemical Sensor Response in Ambient Air Quality Monitoring

    PubMed Central

    Ning, Zhi; Ye, Sheng; Sun, Li; Yang, Fenhuan; Wong, Ka Chun; Westerdahl, Dane; Louie, Peter K. K.

    2018-01-01

    The increasing applications of low-cost air sensors promise more convenient and cost-effective systems for air monitoring in many places and under many conditions. However, the data quality from such systems has not been fully characterized and may not meet user expectations in research and regulatory uses, or for use in citizen science. In our study, electrochemical sensors (Alphasense B4 series) for carbon monoxide (CO), nitric oxide (NO), nitrogen dioxide (NO2), and oxidants (Ox) were evaluated under controlled laboratory conditions to identify the influencing factors and quantify their relation with sensor outputs. Based on the laboratory tests, we developed different correction methods to compensate for the impact of ambient conditions. Further, the sensors were assembled into a monitoring system and tested in ambient conditions in Hong Kong side-by-side with regulatory reference monitors, and data from these tests were used to evaluate the performance of the models, to refine them, and to validate their applicability in variable ambient conditions in the field. The more comprehensive correction models demonstrated enhanced performance when compared with uncorrected data. One over-arching observation of this study is that low-cost sensors may promise excellent sensitivity and performance, but it is essential for users to understand and account for several key factors that may strongly affect the nature of sensor data. In this paper, we also evaluated the effects of multi-month stability, temperature, and humidity, and considered the interaction of the oxidant gases NO2 and ozone on a newly introduced oxidant sensor. PMID:29360749

  6. Impact Analysis of Temperature and Humidity Conditions on Electrochemical Sensor Response in Ambient Air Quality Monitoring.

    PubMed

    Wei, Peng; Ning, Zhi; Ye, Sheng; Sun, Li; Yang, Fenhuan; Wong, Ka Chun; Westerdahl, Dane; Louie, Peter K K

    2018-01-23

    The increasing applications of low-cost air sensors promise more convenient and cost-effective systems for air monitoring in many places and under many conditions. However, the data quality from such systems has not been fully characterized and may not meet user expectations in research and regulatory uses, or for use in citizen science. In our study, electrochemical sensors (Alphasense B4 series) for carbon monoxide (CO), nitric oxide (NO), nitrogen dioxide (NO₂), and oxidants (Ox) were evaluated under controlled laboratory conditions to identify the influencing factors and quantify their relation with sensor outputs. Based on the laboratory tests, we developed different correction methods to compensate for the impact of ambient conditions. Further, the sensors were assembled into a monitoring system and tested in ambient conditions in Hong Kong side-by-side with regulatory reference monitors, and data from these tests were used to evaluate the performance of the models, to refine them, and to validate their applicability in variable ambient conditions in the field. The more comprehensive correction models demonstrated enhanced performance when compared with uncorrected data. One over-arching observation of this study is that low-cost sensors may promise excellent sensitivity and performance, but it is essential for users to understand and account for several key factors that may strongly affect the nature of sensor data. In this paper, we also evaluated the effects of multi-month stability, temperature, and humidity, and considered the interaction of the oxidant gases NO₂ and ozone on a newly introduced oxidant sensor.
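
    As a simple stand-in for the correction models discussed in the two records above, the sketch below fits a linear model of the reference concentration on raw sensor output, temperature, and relative humidity; the linear form and the variable set are assumptions, not the paper's actual models.

```python
import numpy as np

def fit_correction(raw, temp, rh, reference):
    """Fit: reference ≈ b0 + b1*raw + b2*temp + b3*rh  (ordinary least squares).

    raw, temp, rh, reference : 1-D arrays of co-located sensor output, temperature,
                               relative humidity, and reference-monitor concentration
    """
    X = np.column_stack([np.ones_like(raw), raw, temp, rh])
    coeffs, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return coeffs

def apply_correction(coeffs, raw, temp, rh):
    """Apply the fitted correction to new sensor readings."""
    X = np.column_stack([np.ones_like(raw), raw, temp, rh])
    return X @ coeffs
```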

  7. A custom multi-modal sensor suite and data analysis pipeline for aerial field phenotyping

    NASA Astrophysics Data System (ADS)

    Bartlett, Paul W.; Coblenz, Lauren; Sherwin, Gary; Stambler, Adam; van der Meer, Andries

    2017-05-01

    Our group has developed a custom, multi-modal sensor suite and data analysis pipeline to phenotype crops in the field using unpiloted aircraft systems (UAS). This approach to high-throughput field phenotyping is part of a research initiative intending to markedly accelerate the breeding process for refined energy sorghum varieties. To date, single-rotor and multirotor helicopters, roughly 14 kg in total weight, are being employed to provide sensor coverage over multiple hectare-sized fields in tens of minutes. The quick, autonomous operations allow for complete field coverage under consistent plant and lighting conditions, with low operating costs. The sensor suite collects data simultaneously from six sensors and registers it for fusion and analysis. High-resolution color imagery targets color and geometric phenotypes, along with lidar measurements. Long-wave infrared imagery targets temperature phenomena and plant stress. Hyperspectral visible and near-infrared imagery targets phenotypes such as biomass and chlorophyll content, as well as novel, predictive spectral signatures. Onboard spectrometers and careful laboratory and in-field calibration techniques aim to increase the physical validity of the sensor data throughout and across growing seasons. Off-line processing of data creates basic products such as image maps and digital elevation models. Derived data products include phenotype charts, statistics, and trends. The outcome of this work is a set of commercially available phenotyping technologies, including sensor suites, a fully integrated phenotyping UAS, and data analysis software. Effort is also underway to transition these technologies to farm management users by way of streamlined, lower-cost sensor packages and intuitive software interfaces.

  8. Surface phenology and satellite sensor-derived onset of greenness: An initial comparison

    USGS Publications Warehouse

    Schwartz, Mark D.; Reed, Bradley C.

    1999-01-01

    The objective of this work was to document the utility of phenological data derived from satellite sensors by comparing them with modelled phenology. Surface phenological model outputs (first leaf and first bloom dates) were correlated positively with satellite sensor-derived start of season (SOS) dates for 1991-1995 across the eastern United States. The correlation was highest for forest (r = 0.62 for deciduous trees and 0.64 for mixed woodland) and tall grass (r = 0.46) and lowest for short grass (r = 0.37). The average correlation over all land cover types was 0.61. Average SOS dates were consistently earlier than Spring Index dates across all land cover types. This finding and limited native tree phenology data suggest that the SOS technique detects understorey green-up in the forest rather than overstorey species. The biweekly temporal resolution of the satellite sensor data placed an upper limit on prediction accuracy; thus, year-to-year variations at individual sites were typically small. Nevertheless, the correct biweek SOS could be identified from the surface models 61% of the time, and to within one biweek 96% of the time. Further temporal refinement of the satellite sensor measurements is necessary in order to connect them adequately with surface phenology and to develop links among 'green wave' components in selected biomes.

  9. Exploration of the Performance of a Hybrid Closed Loop Insulin Delivery Algorithm That Includes Insulin Delivery Limits Designed to Protect Against Hypoglycemia.

    PubMed

    de Bock, Martin; Dart, Julie; Roy, Anirban; Davey, Raymond; Soon, Wayne; Berthold, Carolyn; Retterath, Adam; Grosman, Benyamin; Kurtz, Natalie; Davis, Elizabeth; Jones, Timothy

    2017-01-01

    Hypoglycemia remains a risk for closed loop insulin delivery particularly following exercise or if the glucose sensor is inaccurate. The aim of this study was to test whether an algorithm that includes a limit to insulin delivery is effective at protecting against hypoglycemia under those circumstances. An observational study on 8 participants with type 1 diabetes was conducted, where a hybrid closed loop system (HCL) (Medtronic™ 670G) was challenged with hypoglycemic stimuli: exercise and an overreading glucose sensor. There was no overnight or exercise-induced hypoglycemia during HCL insulin delivery. All daytime hypoglycemia was attributable to postmeal bolused insulin in those participants with a more aggressive carbohydrate factor. HCL systems rely on accurate carbohydrate ratios and carbohydrate counting to avoid hypoglycemia. The algorithm that was tested against moderate exercise and an overreading glucose sensor performed well in terms of hypoglycemia avoidance. Algorithm refinement continues in preparation for long-term outpatient trials.

  10. Using budget-friendly methods to analyze sport specific movements

    NASA Astrophysics Data System (ADS)

    Jackson, Lindsay; Williams, Sarah; Ferrara, Davon

    2015-03-01

    When breaking down the physics behind sport-specific movements, athletes, usually professionals, are often assessed in multimillion-dollar laboratories and facilities. Budget-friendly methods, such as video analysis using low-cost cameras, iPhone sensors, or inexpensive force sensors, can make this process more accessible to amateur athletes, which in turn can give insight into injury mechanisms. Here we present a comparison of two methods of determining the forces experienced by a cheerleader during co-ed stunting and by soccer goalies while side-diving. For the cheerleader, accelerometer measurements were taken with an iPhone 5 and compared to video analysis. The measurements of the soccer players were taken using FlexiForce force sensors and again compared to video analysis. While these budget-friendly methods could use some refining, they show promise for producing usable measurements that could increase our understanding of injury in amateur players. Furthermore, low-cost physics experiments with sports can foster an active learning environment for students with minimal physics and mathematical background.

  11. Initial design and performance of the near surface unmanned aircraft system sensor suite in support of the GOES-R field campaign

    NASA Astrophysics Data System (ADS)

    Pearlman, Aaron J.; Padula, Francis; Shao, Xi; Cao, Changyong; Goodman, Steven J.

    2016-09-01

    One of the main objectives of the Geostationary Operational Environmental Satellite R-Series (GOES-R) field campaign is to validate the SI traceability of the Advanced Baseline Imager. The campaign plans include a feasibility demonstration study for new near surface unmanned aircraft system (UAS) measurement capability that is being developed to meet the challenges of validating geostationary sensors. We report our progress in developing our initial systems by presenting the design and preliminary characterization results of the sensor suite. The design takes advantage of off-the-shelf technologies and fiber-based optical components to make hemispheric directional measurements from a UAS. The characterization results - including laboratory measurements of temperature effects and polarization sensitivity - are used to refine the radiometric uncertainty budget towards meeting the validation objectives for the campaign. These systems will foster improved validation capabilities for the GOES-R field campaign and other next generation satellite systems.

  12. Bioinspired Methodology for Artificial Olfaction

    PubMed Central

    Raman, Baranidharan; Hertz, Joshua L.; Benkstein, Kurt D.; Semancik, Steve

    2008-01-01

    Artificial olfaction is a potential tool for noninvasive chemical monitoring. Application of “electronic noses” typically involves recognition of “pretrained” chemicals, while long-term operation and generalization of training to allow chemical classification of “unknown” analytes remain challenges. The latter analytical capability is critically important, as it is unfeasible to pre-expose the sensor to every analyte it might encounter. Here, we demonstrate a biologically inspired approach where the recognition and generalization problems are decoupled and resolved in a hierarchical fashion. Analyte composition is refined in a progression from general (e.g., target is a hydrocarbon) to precise (e.g., target is ethane), using highly optimized response features for each step. We validate this approach using a MEMS-based chemiresistive microsensor array. We show that this approach, a unique departure from existing methodologies in artificial olfaction, allows the recognition module to better mitigate sensor-aging effects and to better classify unknowns, enhancing the utility of chemical sensors for real-world applications. PMID:18855409

  13. ShakeNet: a portable wireless sensor network for instrumenting large civil structures

    USGS Publications Warehouse

    Kohler, Monica D.; Hao, Shuai; Mishra, Nilesh; Govindan, Ramesh; Nigbor, Robert

    2015-08-03

    We report our findings from a U.S. Geological Survey (USGS) National Earthquake Hazards Reduction Program-funded project to develop and test a wireless, portable, strong-motion network of up to 40 triaxial accelerometers for structural health monitoring. The overall goal of the project was to record ambient vibrations for several days from USGS-instrumented structures. Structural health monitoring has important applications in fields like civil engineering and the study of earthquakes. The emergence of wireless sensor networks provides a promising means to such applications. However, while most wireless sensor networks are still in the experimentation stage, very few take into consideration the realistic earthquake engineering application requirements. To collect comprehensive data for structural health monitoring for civil engineers, high-resolution vibration sensors and sufficient sampling rates should be adopted, which makes it challenging for current wireless sensor network technology in the following ways: processing capabilities, storage limit, and communication bandwidth. The wireless sensor network has to meet expectations set by wired sensor devices prevalent in the structural health monitoring community. For this project, we built and tested an application-realistic, commercially based, portable, wireless sensor network called ShakeNet for instrumentation of large civil structures, especially for buildings, bridges, or dams after earthquakes. Two to three people can deploy ShakeNet sensors within hours after an earthquake to measure the structural response of the building or bridge during aftershocks. ShakeNet involved the development of a new sensing platform (ShakeBox) running a software suite for networking, data collection, and monitoring. Deployments reported here on a tall building and a large dam were real-world tests of ShakeNet operation, and helped to refine both hardware and software. 

  14. Non-traditional Sensor Tasking for SSA: A Case Study

    NASA Astrophysics Data System (ADS)

    Herz, A.; Herz, E.; Center, K.; Martinez, I.; Favero, N.; Clark, C.; Therien, W.; Jeffries, M.

    Industry has recognized that maintaining SSA of the orbital environment going forward is too challenging for the government alone. Consequently, there are a significant number of commercial activities in various stages of development standing up novel sensors and sensor networks to assist in SSA gathering and dissemination. Use of these systems will allow government and military operators to focus on the most sensitive space control issues while allocating routine or lower-priority data gathering responsibility to the commercial side. The fact that there will be multiple (perhaps many) commercial sensor capabilities available in this new operational model necessitates a common access solution. Absent a central access point to assert data needs, optimized use of all commercial sensor resources is not possible and the opportunity for coordinated collections satisfying overarching SSA-elevating objectives is lost. Orbit Logic is maturing its Heimdall Web system - an architecture facilitating “data requestor” perspectives (allowing government operations centers to assert SSA data gathering objectives) and “sensor operator” perspectives (through which multiple sensors of varying phenomenology and capability are integrated via machine-to-machine interfaces). When requestors submit their needs, Heimdall’s planning engine determines tasking schedules across all sensors, optimizing their use via an SSA-specific figure of merit. ExoAnalytic was a key partner in refining the sensor operator interfaces, working with Orbit Logic through specific details of sensor tasking schedule delivery and the return of observation data. Scant preparation on both sides preceded several integration exercises (walk-then-run style), which culminated in a successful demonstration of the ability to supply optimized schedules for routine public catalog data collection, then adapt sensor tasking schedules in real time upon receipt of urgent data collection requests. This paper provides a narrative of the joint integration process, detailing decision points, compromises, and results obtained on the road toward a set of interoperability standards for commercial sensor accommodation.

  15. Robust Kalman filter design for predictive wind shear detection

    NASA Technical Reports Server (NTRS)

    Stratton, Alexander D.; Stengel, Robert F.

    1991-01-01

    Severe, low-altitude wind shear is a threat to aviation safety. Airborne sensors under development measure the radial component of wind along a line directly in front of an aircraft. In this paper, optimal estimation theory is used to define a detection algorithm to warn of hazardous wind shear from these sensors. To achieve robustness, a wind shear detection algorithm must distinguish threatening wind shear from less hazardous gustiness, despite variations in wind shear structure. This paper presents statistical analysis methods to refine wind shear detection algorithm robustness. Computational methods predict the ability to warn of severe wind shear and avoid false warning. Comparative capability of the detection algorithm as a function of its design parameters is determined, identifying designs that provide robust detection of severe wind shear.
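
    A minimal sketch of a Kalman filter over a two-element state (radial wind and its along-range gradient) driven by noisy line-of-sight measurements; the constant-shear state model and noise levels are assumptions, not the design studied in the paper.

```python
import numpy as np

def kalman_shear(measurements, dr, meas_var=4.0, proc_var=0.01):
    """Estimate radial wind and its along-range gradient (shear) from noisy samples.

    measurements : (N,) radial wind samples along range (m/s)
    dr           : range spacing between samples (m)
    """
    F = np.array([[1.0, dr], [0.0, 1.0]])   # constant-shear propagation in range
    H = np.array([[1.0, 0.0]])              # only the wind component is observed
    Q = proc_var * np.eye(2)
    R = np.array([[meas_var]])
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    shear = []
    for z in measurements[1:]:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update
        P = (np.eye(2) - K @ H) @ P
        shear.append(x[1])
    return np.array(shear)
```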

  16. Acoustic-wave sensor apparatus for analyzing a petroleum-based composition and sensing solidification of constituents therein

    DOEpatents

    Spates, J.J.; Martin, S.J.; Mansure, A.J.

    1997-08-26

    An acoustic-wave sensor apparatus and method are disclosed. The apparatus for analyzing a normally liquid petroleum-based composition includes at least one acoustic-wave device in contact with the petroleum-based composition for sensing or detecting the presence of constituents (e.g. paraffins or petroleum waxes) therein which solidify upon cooling of the petroleum-based composition below a cloud-point temperature. The acoustic-wave device can be a thickness-shear-mode device (also termed a quartz crystal microbalance), a surface-acoustic-wave device, an acoustic-plate-mode device or a flexural plate-wave device. Embodiments of the present invention can be used for measuring a cloud point, a pour point and/or a freeze point of the petroleum-based composition, and for determining a temperature characteristic of each point. Furthermore, measurements with the acoustic-wave sensor apparatus can be made off-line by using a sample having a particular petroleum-based composition; or in-situ with the petroleum-based composition contained within a pipeline or storage tank. The acoustic-wave sensor apparatus has uses in many different petroleum technology areas, including the recovery, transport, storage, refining and use of petroleum and petroleum-based products. 7 figs.

  17. Acoustic-wave sensor apparatus for analyzing a petroleum-based composition and sensing solidification of constituents therein

    DOEpatents

    Spates, James J.; Martin, Stephen J.; Mansure, Arthur J.

    1997-01-01

    An acoustic-wave sensor apparatus and method. The apparatus for analyzing a normally liquid petroleum-based composition includes at least one acoustic-wave device in contact with the petroleum-based composition for sensing or detecting the presence of constituents (e.g. paraffins or petroleum waxes) therein which solidify upon cooling of the petroleum-based composition below a cloud-point temperature. The acoustic-wave device can be a thickness-shear-mode device (also termed a quartz crystal microbalance), a surface-acoustic-wave device, an acoustic-plate-mode device or a flexural plate-wave device. Embodiments of the present invention can be used for measuring a cloud point, a pour point and/or a freeze point of the petroleum-based composition, and for determining a temperature characteristic of each point. Furthermore, measurements with the acoustic-wave sensor apparatus can be made off-line by using a sample having a particular petroleum-based composition; or in-situ with the petroleum-based composition contained within a pipeline or storage tank. The acoustic-wave sensor apparatus has uses in many different petroleum technology areas, including the recovery, transport, storage, refining and use of petroleum and petroleum-based products.

  18. Heuristic approach to image registration

    NASA Astrophysics Data System (ADS)

    Gertner, Izidor; Maslov, Igor V.

    2000-08-01

    Image registration, i.e. the correct mapping of images obtained from different sensor readings onto a common reference frame, is a critical part of multi-sensor ATR/AOR systems based on readings from different types of sensors. In order to fuse two different sensor readings of the same object, the readings have to be put into a common coordinate system. This task can be formulated as an optimization problem in the space of all possible affine transformations of an image. In this paper, a combination of heuristic methods is explored to register gray-scale images. A modification of the Genetic Algorithm is used as the first step in the global search for the optimal transformation. It covers the entire search space with (randomly or heuristically) scattered probe points and helps significantly reduce the search space to a subspace of the most promising transformations. Due to its discrete character, however, the Genetic Algorithm in general cannot converge as it approaches the optimum. Its termination point can be specified either as some predefined number of generations or as the achievement of a certain acceptable convergence level. To refine the search, potential optimal subspaces are searched using the Tabu Search and Simulated Annealing methods, which are better suited to local search.

  19. Development of a Tool to Recreate the Mars Science Laboratory Aerothermal Environment

    NASA Technical Reports Server (NTRS)

    Beerman, A. F.; Lewis, M. J.; Santos, J. A.; White, T. R.

    2010-01-01

    The Mars Science Laboratory will enter the Martian atmosphere in 2012 with multiple char depth sensors and in-depth thermocouples in its heatshield. The aerothermal environment experienced by MSL may be computationally recreated using the data from the sensors and a material response program, such as the Fully Implicit Ablation and Thermal (FIAT) response program, through the matching of the char depth and thermocouple predictions of the material response program to the sensor data. A tool, CHanging Inputs from the Environment of FIAT (CHIEF), was developed to iteratively change different environmental conditions such that FIAT predictions match within certain criteria applied to an external data set. The computational environment is changed by iterating on the enthalpy, pressure, or heat transfer coefficient at certain times in the trajectory. CHIEF was initially compared against arc-jet test data from the development of the MSL heatshield and then against simulated sensor data derived from design trajectories for MSL. CHIEF was able to match char depth and in-depth thermocouple temperatures within the bounds placed upon it for these cases. Further refinement of CHIEF to compare multiple time points and assign convergence criteria may improve accuracy.

  20. Shape memory polymer sensors for tracking cumulative environmental exposure

    NASA Astrophysics Data System (ADS)

    Snyder, Ryan; Rauscher, Michael; Vining, Ben; Havens, Ernie; Havens, Teresa; McFerran, Jace

    2010-04-01

    Cornerstone Research Group Inc. (CRG) has developed environmental exposure tracking (EET) sensors using shape memory polymers (SMP) to monitor the degradation of perishable items, such as munitions, foods and beverages, or medicines, by measuring the cumulative exposure to temperature and moisture. SMPs are polymers whose qualities have been altered to give them dynamic shape "memory" properties. Under thermal or moisture stimuli, the SMP exhibits a radical change from a rigid thermoset to a highly flexible, elastomeric state. The dynamic response of the SMP can be tailored to match the degradation profile of the perishable item. SMP-based EET sensors require no digital memory or internal power supply and provide the capability of inexpensive, long-term life cycle monitoring of thermal and moisture exposure over time. This technology was developed through Phase I and Phase II SBIR efforts with the Navy. The emphasis of current research centers on transitioning SMP materials from the lab bench to a production environment. Here, CRG presents the commercialization progress of thermally-activated EET sensors, focusing on fabrication scale-up, process refinements, and quality control. In addition, progress on the development of vapor pressure-responsive SMP (VPR-SMP) will be discussed.

  1. Quantifying Acoustic Uncertainty Due to Marine Mammals and Fish Near the Shelfbreak Front off Cape Hatteras

    DTIC Science & Technology

    2015-09-30

    an AUV mounted acoustic source, 2) moored multi-element SHRU acoustic receiver arrays, 3) a shipboard acoustic resonator, 4) fish-attraction...devices (FAD’s), 5) a three- AUV fish-field mapping effort (employing sidescan sonar plus optics) and 6) ScanFish, ADCP, and moored sensor oceanographic...The acoustic model has been further refined. To obtain a better estimate of source positions, the navigation data of the source AUV (Snoopy) was

  2. Bombing Target Identification from Limited Transect Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Barry L.; Hathaway, John E.; Pulsipher, Brent A.

    2006-08-07

    A series of sensor data combined with geostatistical techniques were used to determine likely target areas for a historic military aerial bombing range. Primary data consisted of magnetic anomaly information from limited magnetometer transects across the site. Secondary data included airborne LIDAR, orthophotography, and other general site characterization information. Identification of likely target areas relied primarily upon kriging estimates of magnetic anomaly densities across the site. Secondary information, such as impact crater locations, was used to refine the boundary delineations.

  3. Optical and Radio Frequency Refractivity Fluctuations from High Resolution Point Sensors: Sea Breezes and Other Observations

    DTIC Science & Technology

    2007-03-01

    velocity and direction along with vertical velocities are derived from the measured time of flight for the ultrasonic signals (manufacturer’s...data set. To prevent aliasing, a wave must be sampled at least twice per period, so the Nyquist frequency is f_N = f_s / 2. 3. Sampling Requirements...an order of magnitude or more. To refine models or conduct climatological studies for Cn2 requires direct measurements to identify the underlying

  4. Landsat 8 operational land imager on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Storey, James C.; Choate, Michael J.; Lee, Kenton

    2014-01-01

    The Landsat 8 spacecraft was launched on 11 February 2013 carrying the Operational Land Imager (OLI) payload for moderate resolution imaging in the visible, near infrared (NIR), and short-wave infrared (SWIR) spectral bands. During the 90-day commissioning period following launch, several on-orbit geometric calibration activities were performed to refine the prelaunch calibration parameters. The results of these calibration activities were subsequently used to measure geometric performance characteristics in order to verify the OLI geometric requirements. Three types of geometric calibrations were performed including: (1) updating the OLI-to-spacecraft alignment knowledge; (2) refining the alignment of the sub-images from the multiple OLI sensor chips; and (3) refining the alignment of the OLI spectral bands. The aspects of geometric performance that were measured and verified included: (1) geolocation accuracy with terrain correction, but without ground control (L1Gt); (2) Level 1 product accuracy with terrain correction and ground control (L1T); (3) band-to-band registration accuracy; and (4) multi-temporal image-to-image registration accuracy. Using the results of the on-orbit calibration update, all aspects of geometric performance were shown to meet or exceed system requirements.

  5. Smart watch RSSI localization and refinement for behavioral classification using laser-SLAM for mapping and fingerprinting.

    PubMed

    Carlson, Jay D; Mittek, Mateusz; Parkison, Steven A; Sathler, Pedro; Bayne, David; Psota, Eric T; Perez, Lance C; Bonasera, Stephen J

    2014-01-01

    As a first step toward building a smart home behavioral monitoring system capable of classifying a wide variety of human behavior, a wireless sensor network (WSN) system is presented for RSSI localization. The low-cost, non-intrusive system uses a smart watch worn by the user to broadcast data to the WSN, where the strength of the radio signal is evaluated at each WSN node to localize the user. A method is presented that uses simultaneous localization and mapping (SLAM) for system calibration, providing automated fingerprinting associating the radio signal strength patterns to the user's location within the living space. To improve the accuracy of localization, a novel refinement technique is introduced that takes into account typical movement patterns of people within their homes. Experimental results demonstrate that the system is capable of providing accurate localization results in a typical living space.
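
    A minimal fingerprinting sketch, not the paper's method: it maps a vector of RSSI readings from the WSN nodes to a 2-D position with k-nearest-neighbour regression. In the paper the training fingerprints come from the SLAM-based calibration pass; here the node positions, path-loss model, and fingerprints are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(1)
    n_nodes, n_train = 8, 500
    positions = rng.uniform(0, 10, (n_train, 2))    # known calibration positions (m)
    node_xy = rng.uniform(0, 10, (n_nodes, 2))      # WSN node locations (m)

    def rssi(pos):
        """Simple log-distance path-loss model with shadowing noise (dBm)."""
        d = np.linalg.norm(pos[:, None, :] - node_xy[None, :, :], axis=2)
        return -40 - 20 * np.log10(np.maximum(d, 0.1)) + rng.normal(0, 2, d.shape)

    fingerprints = rssi(positions)                  # one RSSI vector per calibration point
    model = KNeighborsRegressor(n_neighbors=5).fit(fingerprints, positions)

    query = rssi(np.array([[4.2, 7.5]]))            # smart-watch broadcast heard by all nodes
    print(model.predict(query))                     # estimated occupant position (m)
    ```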

  6. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
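
    For orientation, the sketch below inverts the standard atmospheric scattering model that this family of defogging methods relies on, I = J*t + A*(1 - t); the transmission map here is a flat placeholder, whereas the paper derives it from the stereo disparity and refines it iteratively.

    ```python
    import numpy as np

    def defog(foggy, transmission, atmospheric_light, t_min=0.1):
        """Recover scene radiance J from I = J*t + A*(1 - t)."""
        t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division blow-up
        return (foggy - atmospheric_light) / t + atmospheric_light

    foggy = np.random.rand(480, 640, 3)          # placeholder foggy frame in [0, 1]
    transmission = np.full((480, 640), 0.6)      # placeholder transmission map
    A = np.array([0.9, 0.9, 0.92])               # estimated atmospheric light (RGB)
    clear = np.clip(defog(foggy, transmission, A), 0.0, 1.0)
    ```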

  7. Four-dimensional world-wide atmospheric models (surface to 25 km altitude)

    NASA Technical Reports Server (NTRS)

    Spiegler, D. B.; Fowler, M. G.

    1972-01-01

    Four-dimensional atmospheric models previously developed for use as input to atmospheric attenuation models are evaluated to determine where refinements are warranted. The models are refined where appropriate. A computerized technique is developed that has the unique capability of extracting mean monthly and daily variance profiles of moisture, temperature, density and pressure at 1 km intervals to the height of 25 km for any location on the globe. This capability could be very useful to planners of remote sensing of earth resources missions in that the profiles may be used as input to the attenuation models that predict the expected degradation of the sensor data. Recommendations are given for procedures to use the four-dimensional models in computer mission simulations and for the approach to combining the information provided by the 4-D models with that given by the global models.

  8. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  9. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  10. Open-Filter Optical SSA Analysis Considerations

    NASA Astrophysics Data System (ADS)

    Lambert, J.

    2016-09-01

    Optical Space Situational Awareness (SSA) sensors used for space object detection and orbit refinement measurements are typically operated in an "open-filter" mode without any spectral filters to maximize sensitivity and signal-to-noise. These same optical brightness measurements are often also employed for size determination (e.g., for orbital debris), object correlation, and object status change. These functions, especially when performed using multiple sensors, are highly dependent on sensor calibration for measurement accuracy. Open-filter SSA sensors are traditionally calibrated against the cataloged visual magnitudes of solar-type stars which have similar spectral distributions as the illuminating source, the Sun. The stellar calibration is performed to a high level of accuracy, a few hundredths of a magnitude, by observing many stars over a range of elevation angles to determine sensor, telescope, and atmospheric effects. However, space objects have individual color properties which alter the reflected solar illumination producing spectral distributions which differ from those of the calibration stars. When the stellar calibrations are applied to the space object measurements, visual magnitude values are obtained which are systematically biased. These magnitudes combined with the unknown Bond albedos of the space objects result in systematically biased size determinations which will differ between sensors. Measurements of satellites of known sizes and surface materials have been analyzed to characterize these effects. The results have been combined into standardized Bond albedos to correct the measured magnitudes into object sizes. However, the actual albedo values will vary between objects and represent a mean correction subject to some uncertainty. The objective of this discussion is to characterize the sensor spectral biases that are present in open-filter optical observations and examine the resulting brightness and albedo uncertainties that should accompany object size, correlation, or status change determinations, especially in the SSA analyses of individual space objects using data from multiple sensors.

  11. An integrative framework for sensor-based measurement of teamwork in healthcare

    PubMed Central

    Rosen, Michael A; Dietz, Aaron S; Yang, Ting; Priebe, Carey E; Pronovost, Peter J

    2015-01-01

    There is a strong link between teamwork and patient safety. Emerging evidence supports the efficacy of teamwork improvement interventions. However, the availability of reliable, valid, and practical measurement tools and strategies is commonly cited as a barrier to long-term sustainment and spread of these teamwork interventions. This article describes the potential value of sensor-based technology as a methodology to measure and evaluate teamwork in healthcare. The article summarizes the teamwork literature within healthcare, including team improvement interventions and measurement. Current applications of sensor-based measurement of teamwork are reviewed to assess the feasibility of employing this approach in healthcare. The article concludes with a discussion highlighting current application needs and gaps and relevant analytical techniques to overcome the challenges to implementation. Compelling studies exist documenting the feasibility of capturing a broad array of team input, process, and output variables with sensor-based methods. Implications of this research are summarized in a framework for development of multi-method team performance measurement systems. Sensor-based measurement within healthcare can unobtrusively capture information related to social networks, conversational patterns, physical activity, and an array of other meaningful information without having to directly observe or periodically survey clinicians. However, trust and privacy concerns present challenges that need to be overcome through engagement of end users in healthcare. Initial evidence exists to support the feasibility of sensor-based measurement to drive feedback and learning across individual, team, unit, and organizational levels. Future research is needed to refine methods, technologies, theory, and analytical strategies. PMID:25053579

  12. Development of an on-line aqueous particle sensor to study the performance of inclusions in a 12 tonne, delta shaped full scale water model tundish

    NASA Astrophysics Data System (ADS)

    Chakraborty, Abhishek

    Detection of particulate matter thinly dispersed in a fluid medium, with the aid of the difference in electrical conductivity between the pure fluid and the particles, has been practiced for at least the last 50 to 60 years. The first such instruments were employed to measure cell counts in samples of biological fluid. Following a detailed study of the physics and principles operating within the device, called the Electric Sensing Zone (ESZ) principle, a new device called the Liquid Metal Cleanliness Analyzer (LiMCA) was invented which could measure and count particles of inclusions in molten metal. It provided a fast and fairly accurate tool for online measurement of the quality of steel during refining and casting operations. Along similar lines of development to the LiMCA, a water analogue of the device, the Aqueous Particle Sensor (APS), was developed for physical modeling experiments of metal refining operations involving water models. The APS can detect and measure simulated particles of inclusions added to the working fluid (water). The present study involves the designing, building and final application of a new and improved APS in water modeling experiments to study inclusion behavior in a tundish operation. The custom-built instrument shows superior performance and applicability in experiments involving physical modeling of metal refining operations, compared to its commercial counterparts. In addition to higher accuracy and a wider range of operating parameters, its capability to take real-time experimental data for extended periods of time helps to reduce the total number of experiments required to reach a result, and makes it suitable for analyzing temporal changes occurring in unsteady systems. With the modern impetus on the quality of the final product of metallurgical operations, the new APS can prove to be an indispensable research tool to study and put forward innovative design and parametric changes in industrially practised metallurgical operations.
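
    As a hedged illustration of the Electric Sensing Zone principle behind LiMCA/APS instruments: in the small-particle limit, a non-conducting particle of diameter d passing through an orifice of diameter D in a fluid of resistivity rho produces a resistance change of approximately 4*rho*d^3 / (pi*D^4), so a voltage pulse measured at constant sensing current can be inverted to size the particle. The numerical values below are placeholders, not instrument specifications.

    ```python
    import numpy as np

    def particle_diameter(pulse_volts, current_amps, resistivity_ohm_m, orifice_diam_m):
        """Invert the small-particle ESZ relation dR ~= 4*rho*d^3/(pi*D^4) for d (m)."""
        delta_r = pulse_volts / current_amps
        return (delta_r * np.pi * orifice_diam_m**4 / (4.0 * resistivity_ohm_m)) ** (1.0 / 3.0)

    # Example: 10 mV pulse, 5 mA sensing current, saline electrolyte (~0.2 ohm.m),
    # 300 um orifice (all values illustrative only) -> roughly a 40 um particle.
    d = particle_diameter(10e-3, 5e-3, 0.2, 300e-6)
    print(f"estimated particle diameter: {d * 1e6:.1f} um")
    ```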

  13. Highly Strong and Elastic Graphene Fibres Prepared from Universal Graphene Oxide Precursors

    PubMed Central

    Huang, Guoji; Hou, Chengyi; Shao, Yuanlong; Wang, Hongzhi; Zhang, Qinghong; Li, Yaogang; Zhu, Meifang

    2014-01-01

    Graphene fibres are continuously prepared from universal graphene oxide precursors by a novel hydrogel-assisted spinning method. With the assistance of a rolling process, meters of ribbon-like GFs, or GRs with improved conductivity, tensile strength, and a long-range ordered compact layer structure are successfully obtained. Furthermore, we refined our spinning process to obtain elastic GRs with a mixing microstructure and exceptional elasticity, which may provide a platform for electronic skins and wearable electronics, sensors, and energy devices. PMID:24576869

  14. Communication assisted Localization and Navigation for Networked Robots

    DTIC Science & Technology

    2005-09-01

    developments such as the Mica Mote [23, 24] and the single chip called "Spec" [1] along the path to the ultimate goal of smart dust. Other technologies...path or a path defining a grid, broadcasting GPS coordinates. The sensors incrementally process all broadcasts they receive to refine their estimated...RAM, 4K EEPROM), a 916 MHz RF transceiver (50Kbits/sec, nominal 30m range), a UART and a 4Mbit serial flash. A Mote runs for approximately one month on

  15. Stock-car racing makes intuitive physicists

    NASA Astrophysics Data System (ADS)

    Gwynne, Peter

    2008-03-01

    Formula One races involve cars festooned with gadgets and complex electronic devices, in which millions of dollars are spent refining a vehicle's aerodynamics and reducing its weight. But in events run by America's National Association of Stock Car Auto Racing (NASCAR), cars hurtle round an oval track at speeds of about 300 km h-1 without the help of the complex sensors that are employed in Formula One cars. To avoid crashing, drivers must make their own adjustments to track conditions, engine problems and the traffic around them.

  16. AOD furnace splash soft-sensor in the smelting process based on improved BP neural network

    NASA Astrophysics Data System (ADS)

    Ma, Haitao; Wang, Shanshan; Wu, Libin; Yu, Ying

    2017-11-01

    For the argon-oxygen refining of low-carbon ferrochrome, with splash during the smelting process as the research object, this paper proposes a soft-sensor approach based on an analysis of the splash mechanism, multi-sensor information fusion, and BP neural network modeling. The vibration signal, the audio signal, and the flame image signal in the furnace are used as the characteristic signals of splash; these signals are fused and modeled, the splash signal is reconstructed, and soft measurement of splash in the smelting process is realized. Simulation results show that the method can accurately forecast the splash type during smelting, providing a new measurement method for splash forecasting and more accurate information for splash control. A minimal sketch of this fusion-plus-network idea is given below.
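
    The sketch below is not the authors' model: it simply fuses vibration, audio, and flame-image feature vectors into one input and trains a back-propagation neural network (an MLP in scikit-learn) to classify splash type. All features, labels, and layer sizes are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n = 2000
    vib = rng.normal(size=(n, 8))     # e.g., band energies of the vibration signal
    aud = rng.normal(size=(n, 8))     # e.g., spectral features of the audio signal
    img = rng.normal(size=(n, 4))     # e.g., flame-area / brightness statistics
    X = np.hstack([vib, aud, img])    # feature-level fusion
    y = rng.integers(0, 3, size=n)    # splash class: none / slight / severe (synthetic)

    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
    )
    model.fit(X, y)
    print(model.predict(X[:5]))       # predicted splash classes for new fused samples
    ```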

  17. Terrestrial Environmental Variables Derived From EOS Platform Sensors

    NASA Technical Reports Server (NTRS)

    Stadler, Stephen J.; Czajkowski, Kevin P.; Goward, Samuel N.; Xue, Yongkang

    2001-01-01

    The three main objectives of the overall project were: 1. adaptation of environmental constraint methods to take advantage of EOS sensors, specifically MODIS, ASTER, and Landsat-7, in addition to the PM AVHRR observations; 2. refinement of environmental constraint methods based on fundamental scientific knowledge; and 3. assessment of spatial scaling patterns in environmental constraint measurements to evaluate the potential biases and errors that occur when estimating regional and global-scale NPP patterns with moderate to coarse satellite observations. These goals were modified because, on one hand, MODIS data did not become available until after the first year of the project and, on the other, because of project staffing issues at the University of Maryland. The OSU portion of the project contained a modest amount of funding and responsibility compared to the University of Maryland and the University of Toledo.

  18. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
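
    As a small illustration of the per-channel treatment described above, the sketch below splits a raw Bayer mosaic into its color planes without any interpolation, so each plane can be processed (e.g., superresolved) separately before demosaicing. An RGGB layout is assumed here; the actual sensor layout may differ.

    ```python
    import numpy as np

    raw = np.random.randint(0, 4096, (2048, 2048), dtype=np.uint16)  # placeholder raw frame

    r  = raw[0::2, 0::2]   # red sites
    g1 = raw[0::2, 1::2]   # green sites on red rows
    g2 = raw[1::2, 0::2]   # green sites on blue rows
    b  = raw[1::2, 1::2]   # blue sites

    # Each quarter-resolution plane keeps its exact sub-pixel phase relative to the
    # microimage grid, which per-channel superresolution can exploit.
    ```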

  19. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm.

    PubMed

    Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua

    2018-01-24

    Indoor occupants' positions are significant for smart home service systems, which usually consist of robot service(s), appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans' position in indoor environments based on passive infrared (PIR) sensors using an accessibility map and an A-star algorithm, aiming at providing intelligent services. First the accessibility map reflecting the visiting habits of the occupants is established through training that integrates the indoor environment and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide the rough location information. For more precise positioning, the A-star algorithm is used to refine the localization, fused with the accessibility map and the PIR sensor data. Experiments were conducted in a mock apartment testbed. The ground truth data was obtained from an Opti-track system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provide a solution for home robot localization.
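
    The following is a minimal A* sketch on an accessibility grid, offered as an illustration of how such a map can bias the refined path, not as the paper's implementation: cells with higher accessibility cost less to traverse, so a path between two coarse PIR fixes is pulled toward frequently visited areas. The grid, cost mapping, and start/goal cells are placeholders.

    ```python
    import heapq
    import itertools
    import numpy as np

    def a_star(cost, start, goal):
        """4-connected A* over a 2-D array of per-cell traversal costs (all >= 1)."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible Manhattan heuristic
        tie = itertools.count()
        open_set = [(h(start), next(tie), start)]
        g_best = {start: 0.0}
        parent = {start: None}
        while open_set:
            _, _, node = heapq.heappop(open_set)
            if node == goal:
                path = [node]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (node[0] + dr, node[1] + dc)
                if 0 <= nb[0] < cost.shape[0] and 0 <= nb[1] < cost.shape[1]:
                    ng = g_best[node] + cost[nb]
                    if ng < g_best.get(nb, float("inf")):
                        g_best[nb] = ng
                        parent[nb] = node
                        heapq.heappush(open_set, (ng + h(nb), next(tie), nb))
        return None

    accessibility = np.random.rand(20, 20)     # placeholder learned accessibility map in [0, 1]
    cost = 1.0 + (1.0 - accessibility)         # more accessible -> cheaper to traverse
    print(a_star(cost, (0, 0), (19, 19)))      # refined path between two coarse PIR fixes
    ```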

  20. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  1. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm

    PubMed Central

    Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua

    2018-01-01

    Indoor occupants’ positions are significant for smart home service systems, which usually consist of robot service(s), appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans’ position in indoor environments based on passive infrared (PIR) sensors using an accessibility map and an A-star algorithm, aiming at providing intelligent services. First the accessibility map reflecting the visiting habits of the occupants is established through training that integrates the indoor environment and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide the rough location information. For more precise positioning, the A-star algorithm is used to refine the localization, fused with the accessibility map and the PIR sensor data. Experiments were conducted in a mock apartment testbed. The ground truth data was obtained from an Opti-track system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provide a solution for home robot localization. PMID:29364188

  2. Toward increased reliability in the electric power industry: direct temperature measurement in transformers using fiber optic sensors

    NASA Astrophysics Data System (ADS)

    McDonald, Greg

    1998-09-01

    Optimal loading, prevention of catastrophic failures and reduced maintenance costs are some of the benefits of accurate determination of hot spot winding temperatures in medium and high power transformers. Temperature estimates obtained using current theoretical models are not always accurate. Traditional technologies (IR, thermocouples, etc.) are unsuitable or inadequate for direct measurement. Nortech fiber-optic temperature sensors offer EMI immunity and chemical resistance and are a proven solution to the problem. The Nortech sensor's measurement principle is based on variations in the spectral absorption of a fiber-mounted semiconductor chip, and probes are interchangeable with no need for recalibration. The total length of probe plus extension can be up to several hundred meters, allowing system electronics to be located in the control room or mounted in the transformer instrumentation cabinet. All of the sensor materials withstand temperatures up to 250 °C and have demonstrated excellent resistance to the harsh transformer environment (hot oil, kerosene). Thorough study of the problem and industry collaboration in testing and installation allow Nortech to identify and meet the need for durable probes, leak-proof feedthroughs, standard computer interfaces and measurement software. Refined probe technology, the method's simplicity and reliable calibration are all assets that should lead to growing acceptance of this type of direct measurement in the electric power industry.

  3. Surface-roughness considerations for atmospheric correction of ocean color sensors. II: Error in the retrieved water-leaving radiance.

    PubMed

    Gordon, H R; Wang, M

    1992-07-20

    In the algorithm for the atmospheric correction of coastal zone color scanner (CZCS) imagery, it is assumed that the sea surface is flat. Simulations are carried out to assess the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct Sun glitter (either a large solar zenith angle or the sensor tilted away from the specular image of the Sun), the following conclusions appear justified: (1) the error induced by ignoring the surface roughness is ≲1 CZCS digital count for wind speeds up to approximately 17 m/s, and therefore can be ignored for this sensor; (2) the roughness-induced error is much more strongly dependent on the wind speed than on the wave shadowing, suggesting that surface effects can be adequately dealt with without precise knowledge of the shadowing; and (3) the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness, suggesting that in refining algorithms for future sensors more effort should be placed on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.

  4. Low cost, multiscale and multi-sensor application for flooded area mapping

    NASA Astrophysics Data System (ADS)

    Giordan, Daniele; Notti, Davide; Villa, Alfredo; Zucca, Francesco; Calò, Fabiana; Pepe, Antonio; Dutto, Furio; Pari, Paolo; Baldo, Marco; Allasia, Paolo

    2018-05-01

    Flood mapping and estimation of the maximum water depth are essential elements for the first damage evaluation, civil protection intervention planning and detection of areas where remediation is needed. In this work, we present and discuss a methodology for mapping and quantifying flood severity over floodplains. The proposed methodology considers a multiscale and multi-sensor approach using free or low-cost data and sensors. We applied this method to the November 2016 Piedmont (northwestern Italy) flood. We first mapped the flooded areas at the basin scale using free satellite data from low- to medium-high-resolution from both the SAR (Sentinel-1, COSMO-Skymed) and multispectral sensors (MODIS, Sentinel-2). Using very- and ultra-high-resolution images from the low-cost aerial platform and remotely piloted aerial system, we refined the flooded zone and detected the most damaged sector. The presented method considers both urbanised and non-urbanised areas. Nadiral images have several limitations, in particular in urbanised areas, where the use of terrestrial images solved this limitation. Very- and ultra-high-resolution images were processed with structure from motion (SfM) for the realisation of 3-D models. These data, combined with an available digital terrain model, allowed us to obtain maps of the flooded area, maximum high water area and damaged infrastructures.

  5. Real-time, in situ, continuous monitoring of CO in a pulverized-coal-fired power plant with a 2.3 μm laser absorption sensor

    NASA Astrophysics Data System (ADS)

    Chao, Xing; Jeffries, Jay B.; Hanson, Ronald K.

    2013-03-01

    A real-time, in situ CO sensor using 2.3 μm DFB diode laser absorption, with calibration-free wavelength-modulation-spectroscopy, was demonstrated for continuous monitoring in the boiler exhaust of a pulverized-coal-fired power plant up to temperatures of 700 K. The sensor was similar to a design demonstrated earlier in laboratory conditions, now refined to accommodate the harsh conditions of utility boilers. Measurements were performed across a 3 m path in the particulate-laden economizer exhaust of the coal-fired boiler. A 0.6 ppm detection limit with 1 s averaging was estimated from the results of a continuous 7-h-long measurement with varied excess air levels. The measured CO concentration exhibited expected inverse trends with the excess O2 concentration, which was varied between 1 and 3 %. Measured CO concentrations ranged between 6 and 200 ppm; evaluation of the data suggested a dynamic range from 6 to 10,000 ppm based on a minimum signal-to-noise ratio of ten and maximum absorbance of one. This field demonstration of a 2.3 μm laser absorption sensor for CO showed great potential for real-time combustion exhaust monitoring and control of practical combustion systems.
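
    For context, the quantitative basis of laser-absorption gas sensing is the Beer-Lambert relation, absorbance = S(T) * phi(nu) * P * x * L, so a measured peak absorbance can be inverted for the mole fraction x. The sketch below shows that direct inversion only; the actual sensor uses a calibration-free scanned wavelength-modulation scheme, and the line-strength and lineshape values here are placeholders, not the paper's spectroscopic parameters.

    ```python
    def mole_fraction(absorbance, linestrength_cm2_atm, lineshape_cm, pressure_atm, path_cm):
        """Invert Beer-Lambert (absorbance = S * phi * P * x * L) for mole fraction x."""
        return absorbance / (linestrength_cm2_atm * lineshape_cm * pressure_atm * path_cm)

    alpha_peak = 6e-5   # measured peak absorbance (placeholder)
    S = 0.02            # CO line strength near 2.3 um, cm^-2 atm^-1 (placeholder)
    phi = 0.5           # peak lineshape value, cm (placeholder)
    x_co = mole_fraction(alpha_peak, S, phi, pressure_atm=1.0, path_cm=300.0)
    print(f"CO mole fraction ~ {x_co:.2e} ({x_co * 1e6:.0f} ppm)")
    ```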

  6. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

    The primary focus of this proposed research is atmospheric correction algorithm evaluation and development and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of sensor-measured signals contributed from the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of the ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I will briefly present and discuss these and some other research activities.

  7. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
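
    A minimal sketch of the height-histogram idea described above (the paper additionally refines the labels with a Gibbs-Markov random field, which is omitted here): histogram the z values of the point data, take the dominant low bin as the ground level, and label points within a tolerance of that level as ground. The point cloud and tolerance are synthetic placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ground = np.column_stack([rng.uniform(0, 50, 5000), rng.uniform(0, 50, 5000),
                              rng.normal(0.0, 0.05, 5000)])
    objects = np.column_stack([rng.uniform(0, 50, 1000), rng.uniform(0, 50, 1000),
                               rng.uniform(0.5, 3.0, 1000)])
    points = np.vstack([ground, objects])            # synthetic x, y, z point cloud

    counts, edges = np.histogram(points[:, 2], bins=100)
    peak = np.argmax(counts)                         # dominant height bin ~ ground level
    ground_z = 0.5 * (edges[peak] + edges[peak + 1])
    is_ground = np.abs(points[:, 2] - ground_z) < 0.2   # tolerance around estimated ground

    print(f"estimated ground height: {ground_z:.2f} m, "
          f"{is_ground.sum()} of {len(points)} points labeled ground")
    ```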

  8. Hybrid optical acoustic seafloor mapping

    NASA Astrophysics Data System (ADS)

    Inglis, Gabrielle

    The oceanographic research and industrial communities have a persistent demand for detailed three dimensional sea floor maps which convey both shape and texture. Such data products are used for archeology, geology, ship inspection, biology, and habitat classification. There are a variety of sensing modalities and processing techniques available to produce these maps and each have their own potential benefits and related challenges. Multibeam sonar and stereo vision are such two sensors with complementary strengths making them ideally suited for data fusion. Data fusion approaches however, have seen only limited application to underwater mapping and there are no established methods for creating hybrid, 3D reconstructions from two underwater sensing modalities. This thesis develops a processing pipeline to synthesize hybrid maps from multi-modal survey data. It is helpful to think of this processing pipeline as having two distinct phases: Navigation Refinement and Map Construction. This thesis extends existing work in underwater navigation refinement by incorporating methods which increase measurement consistency between both multibeam and camera. The result is a self consistent 3D point cloud comprised of camera and multibeam measurements. In map construction phase, a subset of the multi-modal point cloud retaining the best characteristics of each sensor is selected to be part of the final map. To quantify the desired traits of a map several characteristics of a useful map are distilled into specific criteria. The different ways that hybrid maps can address these criteria provides justification for producing them as an alternative to current methodologies. The processing pipeline implements multi-modal data fusion and outlier rejection with emphasis on different aspects of map fidelity. The resulting point cloud is evaluated in terms of how well it addresses the map criteria. The final hybrid maps retain the strengths of both sensors and show significant improvement over the single modality maps and naively assembled multi-modal maps.

  9. Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System

    NASA Astrophysics Data System (ADS)

    Nouira, H.; Deschaud, J. E.; Goulette, F.

    2016-06-01

    LIDAR sensors are widely used in mobile mapping systems. The mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. The LIDAR sensors provide reliable and precise 3D information, which can be used in various applications: mapping of the environment; localization of objects; detection of changes. Also, with the recent developments, multi-beam LIDAR sensors have appeared, and are able to provide a high amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform will have an extrinsic calibration to be done, so the data acquired and registered in the sensor reference frame can be represented in the body reference frame, modeling the mobile system. For a multi-beam LIDAR sensor, we can separate its calibration into two distinct parts: on one hand, we have an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame. On the other hand, there is an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which would bring errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters are proposed, but need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target, and only uses information from the acquired point clouds, which makes it simple and fast to use. Also, a corrected model for the Velodyne sensor is proposed. An energy function which penalizes points far from local planar surfaces is used to optimize the different proposed parameters for the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.
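
    As an illustration of the point-to-plane energy that this style of target-free calibration minimizes, the sketch below fits a local plane to each point's neighbourhood by PCA and sums the squared point-to-plane distances; the optimization of the per-beam correction parameters themselves is omitted. The point cloud and neighbourhood size are placeholders.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def planarity_energy(points, k=20):
        """Sum of squared point-to-local-plane distances over the cloud."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        energy = 0.0
        for i, neighbours in enumerate(idx):
            nbr = points[neighbours]
            centroid = nbr.mean(axis=0)
            # smallest principal direction of the neighbourhood = local plane normal
            _, _, vt = np.linalg.svd(nbr - centroid, full_matrices=False)
            normal = vt[-1]
            energy += float(np.dot(points[i] - centroid, normal) ** 2)
        return energy

    cloud = np.random.rand(2000, 3)          # placeholder point cloud
    print(planarity_energy(cloud))           # lower values indicate more planar local structure
    ```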

  10. Design of a WSN for the Sampling of Environmental Variability in Complex Terrain

    PubMed Central

    Martín-Tardío, Miguel A.; Felicísimo, Ángel M.

    2014-01-01

    In-situ environmental parameter measurements using sensor systems connected to a wireless network have become widespread, but the problem of monitoring large and mountainous areas by means of a wireless sensor network (WSN) is not well resolved. The main reasons for this are: (1) the environmental variability distribution is unknown in the field; (2) without this knowledge, a huge number of sensors would be necessary to ensure the complete coverage of the environmental variability and (3) WSN design requirements, for example, effective connectivity (intervisibility), limiting distances and controlled redundancy, are usually solved by trial and error. Using temperature as the target environmental variable, we propose: (1) a method to determine the homogeneous environmental classes to be sampled using the digital elevation model (DEM) and geometric simulations and (2) a procedure to determine an effective WSN design in complex terrain in terms of the number of sensors, redundancy, cost and spatial distribution. The proposed methodology, based on geographic information systems and binary integer programming can be easily adapted to a wide range of applications that need exhaustive and continuous environmental monitoring with high spatial resolution. The results show that the WSN design is perfectly suited to the topography and the technical specifications of the sensors, and provides a complete coverage of the environmental variability in terms of Sun exposure. However, these results still need to be validated in the field and the proposed procedure must be refined. PMID:25412218

  11. Tunable Diode Laser Sensors to Monitor Temperature and Gas Composition in High-Temperature Coal Gasifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Ronald; Whitty, Kevin

    2014-12-01

    The integrated gasification combined cycle (IGCC) when combined with carbon capture and storage can be one of the cleanest methods of extracting energy from coal. Control of coal and biomass gasification processes to accommodate the changing character of input-fuel streams is required for practical implementation of integrated gasification combined-cycle (IGCC) technologies. Therefore a fast time-response sensor is needed for real-time monitoring of the composition and ideally the heating value of the synthesis gas (here called syngas) as it exits the gasifier. The goal of this project was the design, construction, and demonstration of an in situ laser-absorption sensor to monitor multiple species in the syngas output from practical-scale coal gasifiers. This project investigated the hypothesis of using laser absorption sensing in particulate-laden syngas. Absorption transitions were selected with design rules to optimize signal strength while minimizing interference from other species. Successful in situ measurements in the dusty, high-pressure syngas flow were enabled by Stanford's normalized and scanned wavelength modulation strategy. A prototype sensor for CO, CH4, CO2, and H2O was refined through experiments conducted in the laboratory at Stanford University, at a pilot-scale gasifier at the University of Utah, and at an engineering-scale gasifier at DoE's National Center for Carbon Capture, culminating in the demonstration of a prototype sensor at technical readiness level 6 in the 2014 measurement campaign.

  12. Development of Ultrasonic and Fabry-Perot Interferometer for Non-Destruction Inspection of Aging Aircraft

    NASA Technical Reports Server (NTRS)

    Smith, Alphonso C.

    1998-01-01

    Development of the Fabry-Perot Interferometer (FPI) sensor detection system was continued, and refined modifications were made to the data acquisition and evaluation process during the last year. The ultrasonic and FPI detection system was improved from one to multiple sensor detectors. Physical models were developed to understand the physical phenomena involved in this work. Multilayered flawed samples were fabricated for inspection by a prototype ultrasonic and FPI detection system. Experimental data were verified against simulated results. Undergraduate students associated with this research gained valuable knowledge from the experience. This was a learning process that helped students understand the importance of research and its application to solving important technological problems. As a result of the students' exposure to this research, two are planning to continue this type of research work in graduate school. A prototype instrument package was laboratory tested on an actual airframe structure for documentation purposes.

  13. Orion Exploration Flight Test-1 (EFT-1) Absolute Navigation Design

    NASA Technical Reports Server (NTRS)

    Sud, Jastesh; Gay, Robert; Holt, Greg; Zanetti, Renato

    2014-01-01

    Scheduled to launch in September 2014 atop a Delta IV Heavy from the Kennedy Space Center, the Orion Multi-Purpose Crew Vehicle's (MPCV) maiden flight, dubbed "Exploration Flight Test-1" (EFT-1), intends to stress the system by placing the uncrewed vehicle on a high-energy parabolic trajectory replicating conditions similar to those that would be experienced when returning from an asteroid or a lunar mission. Unique challenges associated with designing the navigation system for EFT-1 are presented in the narrative with an emphasis on how redundancy and robustness influenced the architecture. Two Inertial Measurement Units (IMUs), one GPS receiver and three barometric altimeters (BALTs) comprise the navigation sensor suite. The sensor data is multiplexed using conventional integration techniques and the state estimate is refined by the GPS pseudorange and deltarange measurements in an Extended Kalman Filter (EKF) that employs the UDU^T decomposition approach. The design is substantiated by simulation results to show the expected performance.
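
    For orientation only, the sketch below shows a generic scalar EKF pseudorange measurement update in its textbook form; it is not the flight software, which uses a UDU-factorized implementation and a much larger state. The state here is assumed to be [position(3), velocity(3), clock bias], and all numerical values are placeholders.

    ```python
    import numpy as np

    def pseudorange_update(x, P, rho_meas, sat_pos, sigma_rho=5.0):
        """Single scalar GPS pseudorange update for state x (7,) and covariance P (7,7)."""
        los = x[:3] - sat_pos
        rng = np.linalg.norm(los)
        rho_pred = rng + x[6]                       # predicted geometric range + clock bias
        H = np.zeros((1, 7))
        H[0, :3] = los / rng                        # unit line-of-sight Jacobian
        H[0, 6] = 1.0
        S = H @ P @ H.T + sigma_rho**2              # innovation variance
        K = (P @ H.T) / S                           # Kalman gain (7x1)
        x_new = x + (K * (rho_meas - rho_pred)).ravel()
        P_new = (np.eye(7) - K @ H) @ P
        return x_new, P_new

    x = np.array([6.78e6, 0.0, 0.0, 0.0, 7.6e3, 0.0, 0.0])   # placeholder ECEF state
    P = np.diag([100.0**2] * 3 + [1.0**2] * 3 + [30.0**2])
    sat = np.array([2.0e7, 1.0e7, 1.5e7])                    # placeholder satellite ECEF position
    rho = np.linalg.norm(x[:3] - sat) + 12.0                 # placeholder measured pseudorange
    x, P = pseudorange_update(x, P, rho, sat)
    ```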

  14. A neural approach for improving the measurement capability of an electronic nose

    NASA Astrophysics Data System (ADS)

    Chimenti, M.; DeRossi, D.; Di Francesco, F.; Domenici, C.; Pieri, G.; Pioggia, G.; Salvetti, O.

    2003-06-01

    Electronic noses, instruments for automatic recognition of odours, are typically composed of an array of partially selective sensors, a sampling system, a data acquisition device and a data processing system. For the purpose of evaluating the quality of olive oil, an electronic nose based on an array of conducting polymer sensors capable of discriminating olive oil aromas was developed. The selection of suitable pattern recognition techniques for a particular application can enhance the performance of electronic noses. Therefore, an advanced neural recognition algorithm for improving the measurement capability of the device was designed and implemented. This method combines multivariate statistical analysis and a hierarchical neural-network architecture based on self-organizing maps and error back-propagation. The complete system was tested using samples composed of characteristic olive oil aromatic components in refined olive oil. The results obtained have shown that this approach is effective in grouping aromas into different categories representative of their chemical structure.

  15. Absolute Navigation Performance of the Orion Exploration Fight Test 1

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; Holt, Greg; Gay, Robert; D'Souza, Christopher; Sud, Jastesh

    2016-01-01

    Launched in December 2014 atop a Delta IV Heavy from the Kennedy Space Center, the Orion vehicle's Exploration Flight Test-1 (EFT-1) successfully completed the objective to stress the system by placing the un-crewed vehicle on a high-energy parabolic trajectory replicating conditions similar to those that would be experienced when returning from an asteroid or a lunar mission. Unique challenges associated with designing the navigation system for EFT-1 are presented with an emphasis on how redundancy and robustness influenced the architecture. Two Inertial Measurement Units (IMUs), one GPS receiver and three barometric altimeters (BALTs) comprise the navigation sensor suite. The sensor data is multiplexed using conventional integration techniques and the state estimate is refined by the GPS pseudorange and deltarange measurements in an Extended Kalman Filter (EKF) that employs UDU factorization. The performance of the navigation system during flight is presented to substantiate the design.

  16. Autogeneous Friction Stir Weld Lack-of-Penetration Defect Detection and Sizing Using Directional Conductivity Measurements with MWM Eddy Current Sensor

    NASA Technical Reports Server (NTRS)

    Goldfine, Neil; Zilberstein, Vladimir; Lawson, Ablode; Kinchen, David; Arbegast, William

    2000-01-01

    Al 2195-T8 plate specimens containing Friction Stir Welds (FSW), provided by Lockheed Martin, were inspected using directional conductivity measurements with the MWM sensor. Sensitivity to lack-of-penetration (LOP) defect size has been demonstrated. The feature used to determine defect size was the normalized longitudinal component of the MWM conductivity measurements. This directional conductivity component was insensitive to the presence of a discrete crack. This permitted correlation of MWM conductivity measurements with the LOP defect size as changes in conductivity were apparently associated with metallurgical features within the first 0.020 in. of the LOP defect zone. Transverse directional conductivity measurements also provided an indication of the presence of discrete cracks. Continued efforts are focussed on inspection of a larger set of welded panels and further refinement of LOP characterization tools.

  17. Parametric Human Body Reconstruction Based on Sparse Key Points.

    PubMed

    Cheng, Ke-Li; Tong, Ruo-Feng; Tang, Min; Qian, Jing-Ye; Sarkis, Michel

    2016-11-01

    We propose an automatic parametric human body reconstruction algorithm which can efficiently construct a model using a single Kinect sensor. A user needs to stand still in front of the sensor for a couple of seconds to measure the range data. The user's body shape and pose will then be automatically constructed in several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our proposed scheme relies on sparse key points for the reconstruction. It employs regression to find the corresponding key points between the scanned range data and some annotated training data. We design two kinds of feature descriptors as well as corresponding regression stages to make the regression robust and accurate. Our scheme follows with dense refinement where a pre-factorization method is applied to improve the computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.

  18. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.

  19. F-8C adaptive flight control laws

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Harvey, C. A.; Stein, G.; Carlson, D. N.; Hendrick, R. C.

    1977-01-01

    Three candidate digital adaptive control laws were designed for NASA's F-8C digital fly-by-wire aircraft. Each design used the same control laws but adjusted the gains with a different adaptive algorithm. The three adaptive concepts were: high-gain limit cycle, Liapunov-stable model tracking, and maximum likelihood estimation. Sensors were restricted to conventional inertial instruments (rate gyros and accelerometers) without use of air-data measurements. Performance, growth potential, and computer requirements were used as criteria for selecting the most promising of these candidates for further refinement. The maximum likelihood concept was selected primarily because it offers the greatest potential for identifying several aircraft parameters and hence for improved control performance in future aircraft application. In terms of identification and gain adjustment accuracy, the MLE design is slightly superior to the other two, but this has no significant effects on the control performance achievable with the F-8C aircraft. The maximum likelihood design is recommended for flight test, and several refinements to that design are proposed.

  20. Comparison of Calibration Techniques for Low-Cost Air Quality Monitoring

    NASA Astrophysics Data System (ADS)

    Malings, C.; Ramachandran, S.; Tanzer, R.; Kumar, S. P. N.; Hauryliuk, A.; Zimmerman, N.; Presto, A. A.

    2017-12-01

    Assessing the intra-city spatial distribution and temporal variability of air quality can be facilitated by a dense network of monitoring stations. However, the cost of implementing such a network can be prohibitive if high-quality but high-cost monitoring systems are used. To this end, the Real-time Affordable Multi-Pollutant (RAMP) sensor package has been developed at the Center for Atmospheric Particle Studies of Carnegie Mellon University, in collaboration with SenSevere LLC. This self-contained unit can measure up to five gases out of CO, SO2, NO, NO2, O3, VOCs, and CO2, along with temperature and relative humidity. Responses of individual gas sensors can vary greatly even when exposed to the same ambient conditions. Those of VOC sensors in particular were observed to vary by a factor of 8, which suggests that each sensor requires its own calibration model. In this work, we apply and compare two different calibration methods to data collected by RAMP sensors collocated with a reference monitor station. The first method, random forest (RF) modeling, is a rule-based method which maps sensor responses to pollutant concentrations by implementing a trained sequence of decision rules. RF modeling has previously been used for other RAMP gas sensors by the group, and has produced precise calibrated measurements. However, RF models can only predict pollutant concentrations within the range observed in the training data collected during the collocation period. The second method, Gaussian process (GP) modeling, is a probabilistic Bayesian technique whereby broad prior estimates of pollutant concentrations are updated using sensor responses to generate more refined posterior predictions, as well as allowing predictions beyond the range of the training data. The accuracy and precision of these techniques are assessed and compared on VOC data collected during the summer of 2017 in Pittsburgh, PA. By combining pollutant data gathered by each RAMP sensor and applying appropriate calibration techniques, the potentially noisy or biased responses of individual sensors can be mapped to pollutant concentration values which are comparable to those of reference instruments.
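
    The sketch below contrasts the two calibration strategies on synthetic collocation data using scikit-learn; the actual RAMP calibration features, kernels, and hyperparameters differ. Both models map raw sensor responses (plus temperature and relative humidity) to reference concentrations; the GP additionally provides predictive uncertainty and can extrapolate beyond the training range.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))                    # raw sensor signal, T, RH, cross-sensitivity
    y = 10 + 3 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, 1000)   # synthetic reference values

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X_tr, y_tr)

    print("RF R^2:", rf.score(X_te, y_te))
    print("GP R^2:", gp.score(X_te, y_te))
    # gp.predict(X_te, return_std=True) also returns the predictive uncertainty.
    ```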

  1. Modeling of Sensor Placement Strategy for Shape Sensing and Structural Health Monitoring of a Wing-Shaped Sandwich Panel Using Inverse Finite Element Method.

    PubMed

    Kefal, Adnan; Yildiz, Mehmet

    2017-11-30

    This paper investigated the effect of sensor density and alignment for three-dimensional shape sensing of an airplane-wing-shaped thick panel subjected to three different loading conditions, i.e., bending, torsion, and membrane loads. For shape sensing analysis of the panel, the Inverse Finite Element Method (iFEM) was used together with the Refined Zigzag Theory (RZT), in order to enable accurate predictions for transverse deflection and through-the-thickness variation of interfacial displacements. In this study, the iFEM-RZT algorithm is implemented by utilizing a novel three-node C0-continuous inverse-shell element, known as i3-RZT. The discrete strain data is generated numerically through performing a high-fidelity finite element analysis on the wing-shaped panel. This numerical strain data represents experimental strain readings obtained from surface patched strain gauges or embedded fiber Bragg grating (FBG) sensors. Three different sensor placement configurations with varying density and alignment of strain data were examined and their corresponding displacement contours were compared with those of reference solutions. The results indicate that a sparse distribution of FBG sensors (uniaxial strain measurements), aligned in only the longitudinal direction, is sufficient for predicting accurate full-field membrane and bending responses (deformed shapes) of the panel, including a true zigzag representation of interfacial displacements. On the other hand, a sparse deployment of strain rosettes (triaxial strain measurements) is essentially sufficient to produce torsion shapes that are as accurate as those predicted by a dense sensor placement configuration. Hence, the potential applicability and practical aspects of the i3-RZT/iFEM methodology are demonstrated for three-dimensional shape-sensing of future aerospace structures.

  2. Temperature Compensation in Determining of Remazol Black B Concentrations Using Plastic Optical Fiber Based Sensor

    PubMed Central

    Chong, Su Sin; Aziz, A.R. Abdul; Harun, Sulaiman W.; Arof, Hamzah

    2014-01-01

    In this study, the construction and test of tapered plastic optical fiber (POF) sensors, based on an intensity modulation approach are described. Tapered fiber sensors with different diameters of 0.65 mm, 0.45 mm, and 0.35 mm, were used to measure various concentrations of Remazol black B (RBB) dye aqueous solutions at room temperature. The concentrations of the RBB solutions were varied from 0 ppm to 70 ppm. In addition, the effect of varying the temperature of the RBB solution was also investigated. In this case, the output of the sensor was measured at four different temperatures of 27 °C, 30 °C, 35 °C, and 40 °C, while its concentration was fixed at 50 ppm and 100 ppm. The experimental results show that the tapered POF with d = 0.45 mm achieves the best performance with a reasonably good sensitivity of 61 × 10−4 and a linearity of more than 99%. It also maintains a sufficient and stable signal when heat was applied to the solution with a linearity of more than 97%. Since the transmitted intensity is dependent on both the concentration and temperature of the analyte, multiple linear regression analysis was performed to combine the two independent variables into a single equation. The resulting equation was then validated experimentally and the best agreement between the calculated and experimental results was achieved by the sensor with d = 0.45 mm, where the minimum discrepancy is less than 5%. The authors conclude that POF-based sensors are suitable for RBB dye concentration sensing and, with refinement in fabrication, better results could be achieved. Their low fabrication cost, simple configuration, accuracy, and high sensitivity would attract many potential applications in chemical and biological sensing. PMID:25166498
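
    A minimal sketch of the multiple-linear-regression compensation step described above (coefficients and data are synthetic, not the paper's): the sensor output V is modeled as V = b0 + b1*C + b2*T, and the fitted plane is then inverted for concentration C given a measured output and a measured temperature.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    C = rng.uniform(0, 70, 200)          # RBB concentration, ppm (synthetic)
    T = rng.uniform(27, 40, 200)         # solution temperature, deg C (synthetic)
    V = 2.0 - 0.0061 * C + 0.004 * T + rng.normal(0, 0.005, 200)   # synthetic sensor output

    A = np.column_stack([np.ones_like(C), C, T])
    b0, b1, b2 = np.linalg.lstsq(A, V, rcond=None)[0]

    def concentration(v_meas, t_meas):
        """Invert the fitted plane for concentration at a known temperature."""
        return (v_meas - b0 - b2 * t_meas) / b1

    print(concentration(v_meas=1.75, t_meas=35.0))   # estimated concentration, ppm
    ```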

  3. Temperature compensation in determining of Remazol black B concentrations using plastic optical fiber based sensor.

    PubMed

    Chong, Su Sin; Aziz, A R Abdul; Harun, Sulaiman W; Arof, Hamzah

    2014-08-27

    In this study, the construction and test of tapered plastic optical fiber (POF) sensors, based on an intensity modulation approach are described. Tapered fiber sensors with different diameters of 0.65 mm, 0.45 mm, and 0.35 mm, were used to measure various concentrations of Remazol black B (RBB) dye aqueous solutions at room temperature. The concentrations of the RBB solutions were varied from 0 ppm to 70 ppm. In addition, the effect of varying the temperature of the RBB solution was also investigated. In this case, the output of the sensor was measured at four different temperatures of 27 °C, 30 °C, 35 °C, and 40 °C, while its concentration was fixed at 50 ppm and 100 ppm. The experimental results show that the tapered POF with d = 0.45 mm achieves the best performance with a reasonably good sensitivity of 61 × 10(-4) and a linearity of more than 99%. It also maintains a sufficient and stable signal when heat was applied to the solution with a linearity of more than 97%. Since the transmitted intensity is dependent on both the concentration and temperature of the analyte, multiple linear regression analysis was performed to combine the two independent variables into a single equation. The resulting equation was then validated experimentally and the best agreement between the calculated and experimental results was achieved by the sensor with d = 0.45 mm, where the minimum discrepancy is less than 5%. The authors conclude that POF-based sensors are suitable for RBB dye concentration sensing and, with refinement in fabrication, better results could be achieved. Their low fabrication cost, simple configuration, accuracy, and high sensitivity would attract many potential applications in chemical and biological sensing.

  4. An integrative framework for sensor-based measurement of teamwork in healthcare.

    PubMed

    Rosen, Michael A; Dietz, Aaron S; Yang, Ting; Priebe, Carey E; Pronovost, Peter J

    2015-01-01

    There is a strong link between teamwork and patient safety. Emerging evidence supports the efficacy of teamwork improvement interventions. However, the availability of reliable, valid, and practical measurement tools and strategies is commonly cited as a barrier to long-term sustainment and spread of these teamwork interventions. This article describes the potential value of sensor-based technology as a methodology to measure and evaluate teamwork in healthcare. The article summarizes the teamwork literature within healthcare, including team improvement interventions and measurement. Current applications of sensor-based measurement of teamwork are reviewed to assess the feasibility of employing this approach in healthcare. The article concludes with a discussion highlighting current application needs and gaps and relevant analytical techniques to overcome the challenges to implementation. Compelling studies exist documenting the feasibility of capturing a broad array of team input, process, and output variables with sensor-based methods. Implications of this research are summarized in a framework for development of multi-method team performance measurement systems. Sensor-based measurement within healthcare can unobtrusively capture information related to social networks, conversational patterns, physical activity, and an array of other meaningful information without having to directly observe or periodically survey clinicians. However, trust and privacy concerns present challenges that need to be overcome through engagement of end users in healthcare. Initial evidence exists to support the feasibility of sensor-based measurement to drive feedback and learning across individual, team, unit, and organizational levels. Future research is needed to refine methods, technologies, theory, and analytical strategies. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  5. Comprehensive and Highly Accurate Measurements of Crane Runways, Profiles and Fastenings

    PubMed Central

    Dennig, Dirk; Bureick, Johannes; Link, Johannes; Diener, Dmitri; Hesse, Christian; Neumann, Ingo

    2017-01-01

    The process of surveying crane runways has been continually refined due to the competitive situation, modern surveying instruments, additional sensors, accessories and evaluation procedures. Guidelines, such as the International Organization for Standardization (ISO) 12488-1, define target values that must be determined by survey. For a crane runway these are, for example, the span and the position and height of the rails. The process has to be objective and reproducible. However, common processes of surveying crane runways do not meet these requirements sufficiently. The evaluation of the protocols, ideally by an expert, requires many years of experience. Additionally, the recording of crucial parameters, e.g., the wear of the rail or the condition of the rail fastening and rail joints, is not regulated and is therefore often not considered during the measurement. To address this deficit the Advanced Rail Track Inspection System (ARTIS) was developed. ARTIS is used to measure the 3D position of crane rails, the cross-section of the crane rails, joints and, for the first time, the (crane-rail) fastenings. The system consists of a monitoring vehicle and an external tracking sensor. It makes kinematic observations with the tracking sensor from outside the rail run, e.g., from the floor of an overhead crane runway, possible. In this paper we present stages of the development process of ARTIS, new target values, calibration of sensors and results of a test measurement. PMID:28505076

  6. New MHD feedback control schemes using the MARTe framework in RFX-mod

    NASA Astrophysics Data System (ADS)

    Piron, Chiara; Manduchi, Gabriele; Marrelli, Lionello; Piovesan, Paolo; Zanca, Paolo

    2013-10-01

    Real-time feedback control of MHD instabilities is a topic of major interest in magnetic thermonuclear fusion, since it allows device performance to be optimized even beyond its stability bounds. The stability properties of different magnetic configurations are important test benches for real-time control systems. RFX-mod, a Reversed Field Pinch experiment that can also operate as a tokamak, is a well-suited device to investigate this topic. It is equipped with a sophisticated magnetic feedback system that controls MHD instabilities and error fields by means of 192 active coils and a corresponding grid of sensors. In addition, the RFX-mod control system has recently gained new potentialities thanks to the introduction of the MARTe framework and of a new CPU architecture. These capabilities make it possible to study new feedback algorithms relevant to both RFP and tokamak operation and to contribute to the debate on the optimal feedback strategy. This work focuses on the design of new feedback schemes. For this purpose new magnetic sensors have been explored, together with new algorithms that refine the de-aliasing computation of the radial sideband harmonics. The comparison of different sensor and feedback strategy performance is described in both RFP and tokamak experiments.

  7. An adaptive procedure for defect identification problems in elasticity

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Sergio; Mura, J.

    2010-07-01

    In the context of inverse problems in mechanics, it is well known that the most typical situation is that neither the interior nor all the boundary is available to obtain data to detect the presence of inclusions or defects. We propose here an adaptive method that uses loads and measures of displacements only on part of the surface of the body, to detect defects in the interior of an elastic body. The method is based on Small Amplitude Homogenization, that is, we work under the assumption that the contrast on the values of the Lamé elastic coefficients between the defect and the matrix is not very large. The idea is that given the data for one loading state and one location of the displacement sensors, we use an optimization method to obtain a guess for the location of the inclusion and then, using this guess, we adapt the position of the sensors and the loading zone, hoping to refine the current guess. Numerical results show that the method is quite efficient in some cases, using in those cases no more than three loading positions and three different positions of the sensors.

  8. Autonomous Aeromagnetic Surveys Using a Fluxgate Magnetometer

    PubMed Central

    Macharet, Douglas G.; Perez-Imaz, Héctor I. A.; Rezeck, Paulo A. F.; Potje, Guilherme A.; Benyosef, Luiz C. C.; Wiermann, André; Freitas, Gustavo M.; Garcia, Luis G. U.; Campos, Mario F. M.

    2016-01-01

    Recent advances in the research of autonomous vehicles have shown a vast range of applications, such as exploration, surveillance and environmental monitoring. Considering the mining industry, it is possible to use such vehicles in the prospection of minerals of commercial interest beneath the ground. However, tasks such as geophysical surveys are highly dependent on specific sensors, most of which are not designed to be used in this new range of autonomous vehicles. In this work, we propose a novel magnetic survey pipeline that aims to increase versatility, speed and robustness by using autonomous rotary-wing Unmanned Aerial Vehicles (UAVs). We also discuss the development of a state-of-the-art three-axis fluxgate, where our goal in this work was to refine and adjust the sensor topology and coupled electronics specifically for this type of vehicle and application. The sensor was built with two ring-cores using a specially developed stress-annealed CoFeSiB amorphous ribbon, in order to get sufficient resolution to detect concentrations of small ferrous minerals. Finally, we report on the results of experiments performed with a real UAV in an outdoor environment, showing the efficacy of the methodology in detecting an artificial ferrous anomaly. PMID:27999307

  9. Autonomous Aeromagnetic Surveys Using a Fluxgate Magnetometer.

    PubMed

    Macharet, Douglas G; Perez-Imaz, Héctor I A; Rezeck, Paulo A F; Potje, Guilherme A; Benyosef, Luiz C C; Wiermann, André; Freitas, Gustavo M; Garcia, Luis G U; Campos, Mario F M

    2016-12-17

    Recent advances in the research of autonomous vehicles have shown a vast range of applications, such as exploration, surveillance and environmental monitoring. Considering the mining industry, it is possible to use such vehicles in the prospection of minerals of commercial interest beneath the ground. However, tasks such as geophysical surveys are highly dependent on specific sensors, most of which are not designed to be used in this new range of autonomous vehicles. In this work, we propose a novel magnetic survey pipeline that aims to increase versatility, speed and robustness by using autonomous rotary-wing Unmanned Aerial Vehicles (UAVs). We also discuss the development of a state-of-the-art three-axis fluxgate, where our goal in this work was to refine and adjust the sensor topology and coupled electronics specifically for this type of vehicle and application. The sensor was built with two ring-cores using a specially developed stress-annealed CoFeSiB amorphous ribbon, in order to get sufficient resolution to detect concentrations of small ferrous minerals. Finally, we report on the results of experiments performed with a real UAV in an outdoor environment, showing the efficacy of the methodology in detecting an artificial ferrous anomaly.

  10. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
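
    The two nested optimization loops described above can be sketched in a few lines of Python. The fragment below is only an illustration of the loop structure, not ENFAD itself: the candidate error states, the parameter grid, and the Monte Carlo scoring function are hypothetical placeholders.

```python
import itertools
import random

def monte_carlo_score(error_states, params, n_runs=50):
    """Placeholder for a Monte Carlo filter evaluation: lower is better.
    A real tool would simulate the navigation filter with the given
    error-state selection and tuning parameters against sensor models."""
    random.seed(hash((tuple(sorted(error_states)),
                      tuple(sorted(params.items())))) & 0xFFFF)
    return sum(random.random() for _ in range(n_runs)) / n_runs

candidate_states = ["gyro_bias", "accel_bias", "clock_drift", "scale_factor"]
param_grid = {"process_noise": [1e-4, 1e-3, 1e-2], "meas_noise": [0.1, 1.0]}

best = None
# Outer loop: vary the selection of error states.
for r in range(1, len(candidate_states) + 1):
    for states in itertools.combinations(candidate_states, r):
        # Inner loop: vary the filter parameters for the fixed state selection.
        for values in itertools.product(*param_grid.values()):
            params = dict(zip(param_grid.keys(), values))
            score = monte_carlo_score(states, params)
            if best is None or score < best[0]:
                best = (score, states, params)

print("best score %.3f with states %s and params %s" % best)
```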

  11. Hierarchical adaptation scheme for multiagent data fusion and resource management in situation analysis

    NASA Astrophysics Data System (ADS)

    Benaskeur, Abder R.; Roy, Jean

    2001-08-01

    Sensor Management (SM) has to do with how to best manage, coordinate and organize the use of sensing resources in a manner that synergistically improves the process of data fusion. Based on the contextual information, SM develops options for collecting further information, allocates and directs the sensors towards the achievement of the mission goals and/or tunes the parameters for the realtime improvement of the effectiveness of the sensing process. Conscious of the important role that SM has to play in modern data fusion systems, we are currently studying advanced SM Concepts that would help increase the survivability of the current Halifax and Iroquois Class ships, as well as their possible future upgrades. For this purpose, a hierarchical scheme has been proposed for data fusion and resource management adaptation, based on the control theory and within the process refinement paradigm of the JDL data fusion model, and taking into account the multi-agent model put forward by the SASS Group for the situation analysis process. The novelty of this work lies in the unified framework that has been defined for tackling the adaptation of both the fusion process and the sensor/weapon management.

  12. Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles

    PubMed Central

    Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin

    2014-01-01

    In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted, and plentiful research related to data processing and high-precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper discusses bundle block adjustment models based on systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models. PMID:24811075

  13. Bundle block adjustment of airborne three-line array imagery based on rotation angles.

    PubMed

    Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin

    2014-05-07

    In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted, and plentiful research related to data processing and high-precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper discusses bundle block adjustment models based on systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models.
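
    To make the angle-based parameterization concrete, the sketch below builds an exterior-orientation rotation matrix directly from three rotation angles (the omega-phi-kappa convention common in photogrammetry). It is a generic illustration with arbitrary angle values, not the paper's adjustment model or its error-compensation terms.

```python
import numpy as np

def rotation_from_angles(omega, phi, kappa):
    """Rotation matrix R = Rz(kappa) @ Ry(phi) @ Rx(omega), angles in radians.
    Parameterizing the exterior orientation with three angles directly, rather
    than with a quaternion, makes angle-dependent systematic errors easy to
    model and estimate."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R = rotation_from_angles(np.radians(0.5), np.radians(-1.2), np.radians(30.0))
print(np.allclose(R @ R.T, np.eye(3)))  # True: R is orthonormal
```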

  14. Landsat 8 thermal infrared sensor geometric characterization and calibration

    USGS Publications Warehouse

    Storey, James C.; Choate, Michael J.; Moe, Donald

    2014-01-01

    The Landsat 8 spacecraft was launched on 11 February 2013 carrying two imaging payloads: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The TIRS instrument employs a refractive telescope design that is opaque to visible wavelengths making prelaunch geometric characterization challenging. TIRS geometric calibration thus relied heavily on on-orbit measurements. Since the two Landsat 8 payloads are complementary and generate combined Level 1 data products, the TIRS geometric performance requirements emphasize the co-alignment of the OLI and TIRS instrument fields of view and the registration of the OLI reflective bands to the TIRS long-wave infrared emissive bands. The TIRS on-orbit calibration procedures include measuring the TIRS-to-OLI alignment, refining the alignment of the three TIRS sensor chips, and ensuring the alignment of the two TIRS spectral bands. The two key TIRS performance metrics are the OLI reflective to TIRS emissive band registration accuracy, and the registration accuracy between the TIRS thermal bands. The on-orbit calibration campaign conducted during the commissioning period provided an accurate TIRS geometric model that enabled TIRS Level 1 data to meet all geometric accuracy requirements. Seasonal variations in TIRS-to-OLI alignment have led to several small calibration parameter adjustments since commissioning.

  15. Continuous Calibration Improvement in Solar Reflective Bands: Landsat 5 Through Landsat 8

    NASA Technical Reports Server (NTRS)

    Mishra, Nischal; Helder, Dennis; Barsi, Julia; Markham, Brian

    2016-01-01

    Launched in February 2013, the Operational Land Imager (OLI) on-board Landsat 8 continues to perform exceedingly well and provides high science quality data globally. Several design enhancements have been made in the OLI instrument relative to prior Landsat instruments: pushbroom imaging, which provides substantially improved Signal-to-Noise Ratio (SNR); refinement of the spectral bandpasses to avoid atmospheric absorption features; 12-bit data resolution to provide a larger dynamic range that limits the saturation level; and a set of well-designed onboard calibrators to monitor the stability of the sensor. Some of these changes, such as the refined spectral bandpasses compared to earlier Landsats and the well-designed on-board calibrators, have a direct impact on the improved radiometric calibration performance of the instrument, from both the stability of the response and the ability to track the changes. The on-board calibrator lamps and diffusers indicate that the instrument drift is generally less than 0.1% per year across the bands. The refined bandpasses of the OLI indicate that temporal uncertainty of better than 0.5% is possible when the instrument is trended over vicarious targets such as Pseudo Invariant Calibration Sites (PICS), a level of precision that was never achieved with the earlier Landsat instruments. The stability measurements indicated by on-board calibrators and PICS agree much better compared to the earlier Landsats, which is very encouraging and bodes well for the future Landsat missions too.

  16. CONTINUOUS CALIBRATION IMPROVEMENT: LANDSAT 5 THROUGH LANDSAT 8

    PubMed Central

    Mishra, Nischal; Helder, Dennis; Barsi, Julia; Markham, Brian

    2018-01-01

    Launched in February 2013, the Operational Land Imager (OLI) on-board Landsat 8 continues to perform exceedingly well and provides high science quality data globally. Several design enhancements have been made in the OLI instrument relative to prior Landsat instruments: pushbroom imaging, which provides substantially improved Signal-to-Noise Ratio (SNR); refinement of the spectral bandpasses to avoid atmospheric absorption features; 12-bit data resolution to provide a larger dynamic range that limits the saturation level; and a set of well-designed onboard calibrators to monitor the stability of the sensor. Some of these changes, such as the refined spectral bandpasses compared to earlier Landsats and the well-designed on-board calibrators, have a direct impact on the improved radiometric calibration performance of the instrument, from both the stability of the response and the ability to track the changes. The on-board calibrator lamps and diffusers indicate that the instrument drift is generally less than 0.1% per year across the bands. The refined bandpasses of the OLI indicate that temporal uncertainty of better than 0.5% is possible when the instrument is trended over vicarious targets such as Pseudo Invariant Calibration Sites (PICS), a level of precision that was never achieved with the earlier Landsat instruments. The stability measurements indicated by on-board calibrators and PICS agree much better compared to the earlier Landsats, which is very encouraging and bodes well for the future Landsat missions too. PMID:29449747
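
    A drift figure such as "less than 0.1% per year" comes down to fitting a linear trend to a normalized calibration time series. The sketch below illustrates that arithmetic on synthetic data only; it is not the actual OLI trending procedure or its data.

```python
import numpy as np

# Hypothetical normalized detector responses over 4 years, sampled monthly.
t_years = np.arange(0, 4, 1 / 12.0)
response = 1.0 - 0.0008 * t_years + np.random.normal(0, 5e-4, t_years.size)

# Linear trend: slope relative to the fitted intercept gives % drift per year.
slope, intercept = np.polyfit(t_years, response, 1)
drift_percent_per_year = 100.0 * slope / intercept
print("estimated drift: %.3f %% per year" % drift_percent_per_year)
```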

  17. Distributed sensor coordination for advanced energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumer, Kagan

    Motivation: The ability to collect key system level information is critical to the safe, efficient and reliable operation of advanced power systems. Recent advances in sensor technology have enabled some level of decision making directly at the sensor level. However, coordinating large numbers of sensors, particularly heterogeneous sensors, to achieve system level objectives such as predicting plant efficiency, reducing downtime or predicting outages requires sophisticated coordination algorithms. Indeed, a critical issue in such systems is how to ensure that the interactions of a large number of heterogeneous system components do not interfere with one another and lead to undesirable behavior. Objectives and Contributions: The long-term objective of this work is to provide sensor deployment, coordination and networking algorithms for large numbers of sensors to ensure the safe, reliable, and robust operation of advanced energy systems. Our two specific objectives are to: 1. Derive sensor performance metrics for heterogeneous sensor networks. 2. Demonstrate effectiveness, scalability and reconfigurability of heterogeneous sensor networks in advanced power systems. The key technical contribution of this work is to push the coordination step to the design of the objective functions of the sensors, allowing networks of heterogeneous sensors to be controlled. By ensuring that the control and coordination is not specific to particular sensor hardware, this approach enables the design and operation of large heterogeneous sensor networks. In addition to the coordination mechanism, this approach allows the system to be reconfigured in response to changing needs (e.g., sudden external events requiring new responses) or changing sensor network characteristics (e.g., sudden changes to plant condition). Impact: The impact of this work extends to a large class of problems relevant to the National Energy Technology Laboratory including sensor placement, heterogeneous sensor coordination, and sensor network control in advanced power systems. Each application has specific needs, but they all share one crucial underlying problem: how to ensure that the interactions of a large number of heterogeneous agents lead to coordinated system behavior. This proposal describes a new paradigm that addresses that very issue in a systematic way. Key Results and Findings: All milestones have been completed. Our results demonstrate that by properly shaping agent objective functions, we can develop large (up to 10,000 devices) heterogeneous sensor networks with key desirable properties. The first milestone shows that properly choosing agent-specific objective functions increases system performance by up to 99.9% compared to global evaluations. The second milestone shows evolutionary algorithms learn excellent sensor network coordination policies prior to network deployment, and these policies can be refined online once the network is deployed. The third milestone shows the resulting sensor networks are extremely robust to sensor noise, where networks with up to 25% sensor noise are capable of providing measurements with errors on the order of 10⁻³. The fourth milestone shows the resulting sensor networks are extremely robust to sensor failure, with 25% of the sensors in the system failing and no significant performance losses after system reconfiguration.
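
    A common way to "push the coordination step to the design of the objective functions" is the difference evaluation D_i = G(z) - G(z_-i), which rewards each sensor for its marginal contribution to the global objective. The toy sketch below illustrates that shaping idea on a made-up coverage objective; the global function and sensor encoding are hypothetical and not taken from the project report.

```python
def global_objective(readings):
    """Toy global objective: number of distinct grid cells covered by the team."""
    return len({r for r in readings if r is not None})

def difference_objective(readings, i):
    """D_i = G(z) - G(z without sensor i): sensor i's marginal contribution."""
    without_i = list(readings)
    without_i[i] = None
    return global_objective(readings) - global_objective(without_i)

team = ["cellA", "cellB", "cellA", "cellC"]   # cells observed by four sensors
for i in range(len(team)):
    print(i, difference_objective(team, i))
# Sensors 0 and 2 observe the same cell, so each has zero marginal value,
# signalling that one of them should move -- exactly the coordination
# pressure a shaped objective is meant to create.
```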

  18. Potentiometric Sensor for Real-Time Remote Surveillance of Actinides in Molten Salts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Natalie J. Gese; Jan-Fong Jue; Brenda E. Serrano

    2012-07-01

    A potentiometric sensor is being developed at the Idaho National Laboratory for real-time remote surveillance of actinides during electrorefining of spent nuclear fuel. During electrorefining, fuel in metallic form is oxidized at the anode while refined uranium metal is reduced at the cathode in a high temperature electrochemical cell containing LiCl-KCl-UCl3 electrolyte. Actinides present in the fuel chemically react with UCl3 and form stable metal chlorides that accumulate in the electrolyte. This sensor will be used for process control and safeguarding of activities in the electrorefiner by monitoring the concentrations of actinides in the electrolyte. The work presented focuses on developing a solid-state cation conducting ceramic sensor for detecting varying concentrations of trivalent actinide metal cations in eutectic LiCl-KCl molten salt. To understand the basic mechanisms for actinide sensor applications in molten salts, gadolinium was used as a surrogate for actinides. The β″-Al2O3 was selected as the solid-state electrolyte for sensor fabrication based on cationic conductivity and other factors. In the present work Gd3+-β″-Al2O3 was prepared by ion exchange reactions between trivalent Gd3+ from GdCl3 and K+-, Na+-, and Sr2+-β″-Al2O3 precursors. Scanning electron microscopy (SEM) was used for characterization of Gd3+-β″-Al2O3 samples. Microfocus X-ray Diffraction (µ-XRD) was used in conjunction with SEM energy dispersive X-ray spectroscopy (EDS) to identify phase content and elemental composition. The Gd3+-β″-Al2O3 materials were tested for mechanical and chemical stability by exposing them to molten LiCl-KCl based salts. The effect of annealing on the exchanged material was studied to determine improvements in material integrity post ion exchange. The stability of the β″-Al2O3 phase after annealing was verified by µ-XRD. Preliminary sensor tests with different assembly designs will also be presented.

  19. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling

    PubMed Central

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-01-01

    RGB-D sensors (sensors with RGB camera and Depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of RGB images, a refined false feature match rejection method is introduced by combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera pose, decreasing the inconsistencies between the depth frames in advance. In order to eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028

  20. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.

    PubMed

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-09-27

    RGB-D sensors (sensors with RGB camera and Depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of RGB images, a refined false feature match rejection method is introduced by combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera pose, decreasing the inconsistencies between the depth frames in advance. In order to eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.
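
    The rigid-transformation recovery step, registering the RGB-derived scene to the depth scene, is commonly solved in closed form with an SVD-based (Kabsch/Umeyama-style) fit of rotation and translation between corresponding 3D points. The sketch below shows that generic closed-form step on synthetic correspondences; it is not the authors' robust variant.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
theta = np.radians(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -0.2, 1.0]))
```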

  1. Image Alignment and Correlation System.

    DTIC Science & Technology

    1980-07-01

    4 degrees in 0.1 degree steps would require 40 evaluations of SUM. Other search techniques such as Fibonacci and golden section were investigated...in software, and particularly its refinement to suit the characteristics of the sensor, also represent a significant achievement in the application...

  2. The Control of Human Arm Movement: Models and Mechanical Constraints

    DTIC Science & Technology

    1990-06-01

    joints, linear joint angle sensors. These assumptions may be refined as needed (e.g., muscle geometry may be included), but such additional complexity... The model of Eqs. (2.4)-(2.5) is linear in the unknowns, with a coefficient matrix C built from cosines and sines of the joint angles; the least-squares solution is w = (C^T C)^(-1) C^T (-y), and a unique solution is guaranteed provided that the columns of C are independent.

  3. Optimizing Input/Output Using Adaptive File System Policies

    NASA Technical Reports Server (NTRS)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
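
    The heart of such an adaptive layer is a classifier that maps an observed access pattern to a caching/pre-fetching policy, plus a feedback path that tunes the policy parameters from performance sensors. The fragment below is a toy stride-based classifier with a fabricated tuning rule; the classification framework in the paper is considerably richer.

```python
def classify_pattern(offsets):
    """Classify a sequence of block offsets as sequential, strided, or random."""
    strides = [b - a for a, b in zip(offsets, offsets[1:])]
    if strides and all(s == strides[0] for s in strides):
        return "sequential" if strides[0] == 1 else "strided"
    return "random"

# Hypothetical policy table: pattern -> (policy name, initial prefetch depth).
POLICY = {"sequential": ("readahead", 64),
          "strided":    ("stride-prefetch", 8),
          "random":     ("lru-cache-only", 0)}

def tune(policy, depth, hit_rate):
    """Crude feedback rule: grow the prefetch depth while it keeps paying off."""
    if policy != "lru-cache-only":
        depth = depth * 2 if hit_rate > 0.9 else max(1, depth // 2)
    return depth

pattern = classify_pattern([0, 1, 2, 3, 4, 5])
policy, depth = POLICY[pattern]
print(pattern, policy, tune(policy, depth, hit_rate=0.95))
```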

  4. Cameras Reveal Elements in the Short Wave Infrared

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Goodrich ISR Systems Inc. (formerly Sensors Unlimited Inc.), based out of Princeton, New Jersey, received Small Business Innovation Research (SBIR) contracts from the Jet Propulsion Laboratory, Marshall Space Flight Center, Kennedy Space Center, Goddard Space Flight Center, Ames Research Center, Stennis Space Center, and Langley Research Center to assist in advancing and refining indium gallium arsenide imaging technology. Used on the Lunar Crater Observation and Sensing Satellite (LCROSS) mission in 2009 for imaging the short wave infrared wavelengths, the technology has dozens of applications in military, security and surveillance, machine vision, medical, spectroscopy, semiconductor inspection, instrumentation, thermography, and telecommunications.

  5. Acceptance Equipment System Data Acquisition and Processing Utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fakhro, Rowan

    2015-02-01

    My internship at Sandia National Laboratories took place in the Department of Sensors and Embedded Systems, which is tasked with, among other things, the non-destructive testing of thermal batteries. The Acceptance Equipment System (AES) is a flexible rack system designed to electrically test thermal batteries individually for internal defects before they are stored in the battery stockpile. Aside from individual testing, data acquired by the AES is used for many purposes, including trending and catching outliers within the tolerance levels of a particular battery type, allowing for the development of more refined acceptance requirements and testing procedures.

  6. A Generic Approach for Inversion of Surface Reflectance over Land: Overview, Application and Validation Using MODIS and LANDSAT8 Data

    NASA Technical Reports Server (NTRS)

    Vermote, E.; Roger, J. C.; Justice, C. O.; Franch, B.; Claverie, M.

    2016-01-01

    This paper presents a generic approach developed to derive surface reflectance over land from a variety of sensors. This technique builds on the extensive dataset acquired by the Terra platform by combining MODIS and MISR to derive an explicit and dynamic map of band ratios between the blue and red channels, and is a refinement of the operational approach used for MODIS and LANDSAT over the past 15 years. We will present the generic approach and the application to MODIS and LANDSAT data and its validation using the AERONET data.

  7. Optical Payload for the STARE Mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simms, L; Riot, V; De Vries, W

    2011-03-13

    Space-based Telescopes for Actionable Refinement of Ephemeris (STARE) is a nano-sat based mission designed to better determine the trajectory of satellites and space debris in orbit around earth. In this paper, we give a brief overview of the mission and its place in the larger context of Space Situational Awareness (SSA). We then describe the details of the central optical payload, touching on the optical design and characterization of the on-board image sensor used in our Cubesat based prototype. Finally, we discuss the on-board star and satellite track detection algorithm central to the success of the mission.

  8. Design and application of a small size SAFT imaging system for concrete structure

    NASA Astrophysics Data System (ADS)

    Shao, Zhixue; Shi, Lihua; Shao, Zhe; Cai, Jian

    2011-07-01

    A method of ultrasonic imaging detection is presented for quick non-destructive testing (NDT) of concrete structures using synthetic aperture focusing technology (SAFT). A low-cost ultrasonic sensor array consisting of 12 commercially available low-frequency ultrasonic transducers is designed and manufactured. A channel compensation method is proposed to improve the consistency of the different transducers. The controlling devices for the array scan as well as the virtual instrument for SAFT imaging are designed. In the coarse scan mode with a scan step of 50 mm, the system can quickly give an image display of a cross section of 600 mm (L) × 300 mm (D) in one measurement. In the refined scan mode, the system can reduce the scan step and give an image display of the same cross section by moving the sensor array several times. Experiments on staircase specimen, concrete slab with embedded target, and building floor with underground pipe line all verify the efficiency of the proposed method.
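
    The SAFT reconstruction itself reduces to a delay-and-sum over the recorded A-scans: for every image pixel, the round-trip delay to each transducer position is computed and the corresponding samples are summed. Below is a minimal monostatic delay-and-sum kernel on synthetic data; the sampling rate, wave speed, and array geometry are placeholder values rather than those of the 12-transducer system described above.

```python
import numpy as np

def saft_image(ascans, positions, fs, c, x_grid, z_grid):
    """ascans[k, n]: signal from transducer k at sample n (monostatic setup).
    positions[k]: lateral transducer coordinate in meters."""
    img = np.zeros((len(z_grid), len(x_grid)))
    for ix, x in enumerate(x_grid):
        for iz, z in enumerate(z_grid):
            for k, xk in enumerate(positions):
                r = np.hypot(x - xk, z)              # one-way distance to the pixel
                n = int(round(2 * r / c * fs))       # round-trip sample index
                if n < ascans.shape[1]:
                    img[iz, ix] += ascans[k, n]
    return img

# Synthetic example: one point reflector at (0.3 m, 0.15 m), c ~ 4000 m/s in concrete.
fs, c = 1.0e6, 4000.0
positions = np.linspace(0.0, 0.6, 12)
t = np.arange(2048) / fs
ascans = np.zeros((12, t.size))
for k, xk in enumerate(positions):
    delay = 2 * np.hypot(0.3 - xk, 0.15) / c
    ascans[k] = np.exp(-((t - delay) * 2e5) ** 2)    # Gaussian echo at the delay

x_grid, z_grid = np.linspace(0, 0.6, 61), np.linspace(0.05, 0.3, 26)
img = saft_image(ascans, positions, fs, c, x_grid, z_grid)
iz, ix = np.unravel_index(img.argmax(), img.shape)
print("peak near x=%.2f m, z=%.2f m" % (x_grid[ix], z_grid[iz]))
```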

  9. Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.

    PubMed

    Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok

    2018-05-08

    This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce performance degradation caused by insufficient measurements. Second, a target location is estimated by a fusion of multiple TDOA measurements using a Gaussian Mixture Model (GMM). In addition, the target trajectory is refined by conducting a stack-based data association method based on multiple-frames measurements in order to more accurately estimate target trajectory. The effectiveness of the proposed method is verified through simulations.
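
    A mixture-based fusion step of the kind mentioned above can be prototyped with scikit-learn: candidate positions derived from individual TDOA pairings are pooled, a Gaussian mixture is fitted, and the dominant component's mean is taken as the fused location. The candidate points below are synthetic, and the sketch omits the track splitting and stack-based association that the paper also relies on.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical candidate positions (m) derived from individual TDOA pairings:
# most cluster around the true target, a few are outliers from noisy pairings.
rng = np.random.default_rng(1)
inliers = rng.normal(loc=[120.0, -40.0], scale=5.0, size=(25, 2))
outliers = rng.uniform(low=[-200, -200], high=[200, 200], size=(6, 2))
candidates = np.vstack([inliers, outliers])

gmm = GaussianMixture(n_components=2, random_state=0).fit(candidates)
fused = gmm.means_[np.argmax(gmm.weights_)]   # mean of the dominant component
print("fused target estimate: x=%.1f m, y=%.1f m" % tuple(fused))
```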

  10. A Model-Based Approach for Bridging Virtual and Physical Sensor Nodes in a Hybrid Simulation Framework

    PubMed Central

    Mozumdar, Mohammad; Song, Zhen Yu; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto L.

    2014-01-01

    The Model Based Design (MBD) approach is a popular trend to speed up application development of embedded systems, which uses high-level abstractions to capture functional requirements in an executable manner, and which automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging very promising application area for embedded systems. However, there is a lack of tools in this area, which would allow an application developer to model a WSN application by using high level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation. PMID:24960083

  11. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Jihui; Zakhor, Avideh

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Individually detected stems which are split due to occlusions are merged and then registered with previously found stems in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.

  12. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1995-01-01

    During the second phase project year we have made progress in the development and refinement of surface temperature retrieval algorithms and in product generation. More specifically, we have accomplished the following: (1) acquired a new advanced very high resolution radiometer (AVHRR) data set for the Beaufort Sea area spanning an entire year; (2) acquired additional along-track scanning radiometer(ATSR) data for the Arctic and Antarctic now totalling over eight months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) developed cloud masking procedures for both AVHRR and ATSR; (6) generated a two-week bi-polar global area coverage (GAC) set of composite images from which IST is being estimated; (7) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; and (8) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and special sensor microwave imager (SSM/I).

  13. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE PAGES

    Jin, Jihui; Zakhor, Avideh

    2017-01-29

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Individually detected stems which are split due to occlusions are merged and then registered with previously found stems in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.
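
    A vesselness filter of the kind used to highlight stem-like structures is available off the shelf in scikit-image. The fragment below applies the Frangi filter to a synthetic depth-derived intensity image and thresholds the response; it only illustrates the filtering step, with a made-up image and threshold, not the full alignment, segmentation, tracking, and width-estimation pipeline.

```python
import numpy as np
from skimage.filters import frangi

# Synthetic "depth projection": two bright vertical ridges on a noisy background,
# standing in for stems seen by the depth sensor.
img = np.random.normal(0.1, 0.02, (200, 200))
img[:, 60:64] += 1.0
img[:, 140:143] += 1.0

# black_ridges=False: respond to bright tube-like structures.
vesselness = frangi(img, sigmas=range(1, 6), black_ridges=False)
stem_mask = vesselness > 0.5 * vesselness.max()
cols = np.where(stem_mask.any(axis=0))[0]
print("columns flagged as stem-like:", cols.tolist())
```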

  14. Adaptive and iterative methods for simulations of nanopores with the PNP-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mitscha-Baude, Gregor; Buttinger-Kreuzhuber, Andreas; Tulzer, Gerhard; Heitzinger, Clemens

    2017-06-01

    We present a 3D finite element solver for the nonlinear Poisson-Nernst-Planck (PNP) equations for electrodiffusion, coupled to the Stokes system of fluid dynamics. The model serves as a building block for the simulation of macromolecule dynamics inside nanopore sensors. The source code is released online at http://github.com/mitschabaude/nanopores. We add to existing numerical approaches by deploying goal-oriented adaptive mesh refinement. To reduce the computation overhead of mesh adaptivity, our error estimator uses the much cheaper Poisson-Boltzmann equation as a simplified model, which is justified on heuristic grounds but shown to work well in practice. To address the nonlinearity in the full PNP-Stokes system, three different linearization schemes are proposed and investigated, with two segregated iterative approaches both outperforming a naive application of Newton's method. Numerical experiments are reported on a real-world nanopore sensor geometry. We also investigate two different models for the interaction of target molecules with the nanopore sensor through the PNP-Stokes equations. In one model, the molecule is of finite size and is explicitly built into the geometry; while in the other, the molecule is located at a single point and only modeled implicitly - after solution of the system - which is computationally favorable. We compare the resulting force profiles of the electric and velocity fields acting on the molecule, and conclude that the point-size model fails to capture important physical effects such as the dependence of charge selectivity of the sensor on the molecule radius.

  15. Inferring the most probable maps of underground utilities using Bayesian mapping model

    NASA Astrophysics Data System (ADS)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. An approach for automated creation of revised maps was developed in this paper as a Bayesian Mapping model by integrating the knowledge extracted from raw sensor data and available statutory records. The combination of statutory records with the hypotheses from sensors was used for an initial estimation of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypotheses extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.

  16. Target tracking and pointing for arrays of phase-locked lasers

    NASA Astrophysics Data System (ADS)

    Macasaet, Van P.; Hughes, Gary B.; Lubin, Philip; Madajian, Jonathan; Zhang, Qicheng; Griswold, Janelle; Kulkarni, Neeraj; Cohen, Alexander; Brashears, Travis

    2016-09-01

    Arrays of phase-locked lasers are envisioned for planetary defense and exploration systems. High-energy beams focused on a threatening asteroid evaporate surface material, creating a reactionary thrust that alters the asteroid's orbit. The same system could be used to probe an asteroid's composition, to search for unknown asteroids, and to propel interplanetary and interstellar spacecraft. Phased-array designs are capable of producing high beam intensity, and allow beam steering and beam profile manipulation. Modular designs allow ongoing addition of emitter elements to a growing array. This paper discusses pointing control for extensible laser arrays. Rough pointing is determined by spacecraft attitude control. Lateral movement of the laser emitter tips behind the optical elements provides intermediate pointing adjustment for individual array elements and beam steering. Precision beam steering and beam formation is accomplished by coordinated phase modulation across the array. Added cells are incorporated into the phase control scheme by precise alignment to local mechanical datums using fast, optical relative position sensors. Infrared target sensors are also positioned within the datum scheme, and provide information about the target vector relative to datum coordinates at each emitter. Multiple target sensors allow refined determination of the target normal plane, providing information to the phase controller for each emitter. As emitters and sensors are added, local position data allows accurate prediction of the relative global position of emitters across the array, providing additional constraints to the phase controllers. Mechanical design and associated phase control that is scalable for target distance and number of emitters is presented.

  17. PDR with a Foot-Mounted IMU and Ramp Detection

    PubMed Central

    Jiménez, Antonio R.; Seco, Fernando; Zampella, Francisco; Prieto, José C.; Guevara, Jorge

    2011-01-01

    The localization of persons in indoor environments is nowadays an open problem. There are partial solutions based on the deployment of a network of sensors (Local Positioning Systems or LPS). Other solutions only require the installation of an inertial sensor on the person’s body (Pedestrian Dead-Reckoning or PDR). PDR solutions integrate the signals coming from an Inertial Measurement Unit (IMU), which usually contains 3 accelerometers and 3 gyroscopes. The main problem of PDR is the accumulation of positioning errors due to the drift caused by the noise in the sensors. This paper presents a PDR solution that incorporates a drift correction method based on detecting the access ramps usually found in buildings. The ramp correction method is implemented over a PDR framework that uses an Inertial Navigation algorithm (INS) and an IMU attached to the person’s foot. Unlike other approaches that use external sensors to correct the drift error, we only use one IMU on the foot. To detect a ramp, the slope of the terrain on which the user is walking, and the change in height sensed when moving forward, are estimated from the IMU. After detection, the ramp is checked for association with one of those existing in a database. For each associated ramp, a position correction is fed into the Kalman Filter in order to refine the INS-PDR solution. Drift-free localization is achieved with positioning errors below 2 meters for 1,000-meter-long routes in a building with a few ramps. PMID:22163701
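
    The ramp test can be reduced to two conditions on INS-derived quantities: a sustained walking slope and an accumulated height change over recent steps, followed by association against known ramps. The fragment below is a schematic of that gating logic with made-up thresholds and a made-up ramp database; the association and Kalman correction in the paper are more involved.

```python
# Hypothetical ramp database: (start_height_m, slope_deg) of known building ramps.
RAMP_DB = [(0.0, 4.8), (3.2, 5.1)]

def detect_ramp(slope_deg, dheight_m, slope_tol=1.5, min_dheight=0.10):
    """Flag a ramp when the walking slope is sustained and height is changing."""
    if abs(dheight_m) < min_dheight:
        return None
    for start_h, ramp_slope in RAMP_DB:
        if abs(abs(slope_deg) - ramp_slope) < slope_tol:
            return (start_h, ramp_slope)   # associated ramp -> feed a position fix
    return None

# Example: the PDR reports a 5.0 degree slope and 0.25 m climbed over recent steps.
print(detect_ramp(5.0, 0.25))   # -> (0.0, 4.8): associated with the first ramp
```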

  18. A review of ocean color remote sensing methods and statistical techniques for the detection, mapping and analysis of phytoplankton blooms in coastal and open oceans

    NASA Astrophysics Data System (ADS)

    Blondeau-Patissier, David; Gower, James F. R.; Dekker, Arnold G.; Phinn, Stuart R.; Brando, Vittorio E.

    2014-04-01

    The need for more effective environmental monitoring of the open and coastal ocean has recently led to notable advances in satellite ocean color technology and algorithm research. Satellite ocean color sensors' data are widely used for the detection, mapping and monitoring of phytoplankton blooms because earth observation provides a synoptic view of the ocean, both spatially and temporally. Algal blooms are indicators of marine ecosystem health; thus, their monitoring is a key component of effective management of coastal and oceanic resources. Since the late 1970s, a wide variety of operational ocean color satellite sensors and algorithms have been developed. The comprehensive review presented in this article captures the details of the progress and discusses the advantages and limitations of the algorithms used with the multi-spectral ocean color sensors CZCS, SeaWiFS, MODIS and MERIS. Present challenges include overcoming the severe limitation of these algorithms in coastal waters and refining detection limits in various oceanic and coastal environments. To understand the spatio-temporal patterns of algal blooms and their triggering factors, it is essential to consider the possible effects of environmental parameters, such as water temperature, turbidity, solar radiation and bathymetry. Hence, this review will also discuss the use of statistical techniques and additional datasets derived from ecosystem models or other satellite sensors to characterize further the factors triggering or limiting the development of algal blooms in coastal and open ocean waters.

  19. Structural refinement of the hERG1 pore and voltage-sensing domains with ROSETTA-membrane and molecular dynamics simulations.

    PubMed

    Subbotina, Julia; Yarov-Yarovoy, Vladimir; Lees-Miller, James; Durdagi, Serdar; Guo, Jiqing; Duff, Henry J; Noskov, Sergei Yu

    2010-11-01

    The hERG1 gene (Kv11.1) encodes a voltage-gated potassium channel. Mutations in this gene lead to one form of the Long QT Syndrome (LQTS) in humans. Promiscuous binding of drugs to hERG1 is known to alter the structure/function of the channel, leading to an acquired form of the LQTS. Expectedly, the creation and validation of a reliable 3D model of the channel has been a key target in molecular cardiology and pharmacology for the last decade. Although many models were built, they were all limited to the pore domain. In this work, a full model of the hERG1 channel, which includes all transmembrane segments, is developed. We tested a template-driven de-novo design with ROSETTA-membrane modeling using side-chain placements optimized by subsequent molecular dynamics (MD) simulations. Although backbone templates for the homology modeled parts of the pore and voltage sensors were based on the available structures of KvAP, Kv1.2 and Kv1.2-Kv2.1 chimera channels, the missing parts are modeled de-novo. The impact of several alignments on the structure of the S4 helix in the voltage-sensing domain was also tested. Herein, final models are evaluated for consistency with the reported structural elements discovered mainly on the basis of mutagenesis and electrophysiology. These structural elements include salt bridges and close contacts in the voltage-sensor domain, and the topology of the extracellular S5-pore linker compared with that established by toxin foot-printing and nuclear magnetic resonance studies. Implications of the refined hERG1 model for the binding of blockers and channel activators (potent new ligands for channel activation) are discussed. © 2010 Wiley-Liss, Inc.

  20. Developing a Mixed Neural Network Approach to Forecast the Residential Electricity Consumption Based on Sensor Recorded Data

    PubMed Central

    Bâra, Adela; Stănică, Justina-Lavinia; Coculescu, Cristina

    2018-01-01

    In this paper, we report a study whose main goal is to obtain a method that can provide an accurate forecast of the residential electricity consumption, refining it down to the appliance level, using sensor recorded data, for residential smart home complexes that use renewable energy sources as part of their consumed electricity. The method overcomes the limitations of not having historical meteorological data available and the contractor's unwillingness to periodically acquire accurate short-term forecasts from a specialized institute in the future, due to the implied costs. For this purpose, we have developed a mixed artificial neural network (ANN) approach using both non-linear autoregressive with exogenous input (NARX) ANNs and function fitting neural networks (FITNETs). We have used a large dataset containing detailed electricity consumption data recorded by sensors monitoring a series of individual appliances, while in the NARX case we have also used timestamp datasets as exogenous variables. After having developed and validated the forecasting method, we have compiled it in view of incorporating it into a cloud solution, to be delivered to the contractor, which can provide it as a service for a monthly fee to both the operators and residential consumers. PMID:29734761

  1. Developing a Mixed Neural Network Approach to Forecast the Residential Electricity Consumption Based on Sensor Recorded Data.

    PubMed

    Oprea, Simona-Vasilica; Pîrjan, Alexandru; Căruțașu, George; Petroșanu, Dana-Mihaela; Bâra, Adela; Stănică, Justina-Lavinia; Coculescu, Cristina

    2018-05-05

    In this paper, we report a study whose main goal is to obtain a method that can provide an accurate forecast of residential electricity consumption, refined down to the appliance level, using sensor-recorded data for residential smart-home complexes that use renewable energy sources for part of their consumed electricity, overcoming the limitations of not having historical meteorological data available and the contractor's unwillingness, due to the implied costs, to periodically acquire such data in the future in the form of accurate short-term forecasts from a specialized institute. For this purpose, we developed a mixed artificial neural network (ANN) approach using both non-linear autoregressive with exogenous input (NARX) ANNs and function fitting neural networks (FITNETs). We used a large dataset containing detailed electricity consumption data recorded by sensors monitoring a series of individual appliances, and in the NARX case we also used timestamp datasets as exogenous variables. After developing and validating the forecasting method, we compiled it for incorporation into a cloud solution delivered to the contractor, which can provide it as a service for a monthly fee to both operators and residential consumers.
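
    As an informal illustration of the forecasting setup described above, the sketch below trains a NARX-style regressor in Python: lagged consumption values form the autoregressive inputs and timestamp-derived features (hour of day, day of week) act as exogenous variables. The original study used MATLAB NARX and FITNET networks; the scikit-learn MLPRegressor, the synthetic consumption series and the lag count here are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal NARX-style sketch: an MLP fed with lagged consumption values
    # (autoregressive part) plus timestamp-derived exogenous features.
    # Data, lag count and network size are illustrative assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    hours = np.arange(24 * 365)                      # one year of hourly samples
    consumption = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24) + 0.1 * rng.standard_normal(hours.size)

    n_lags = 24                                      # use the previous 24 hours
    X, y = [], []
    for t in range(n_lags, hours.size):
        lagged = consumption[t - n_lags:t]           # autoregressive inputs
        hour_of_day = hours[t] % 24                  # exogenous timestamp features
        day_of_week = (hours[t] // 24) % 7
        X.append(np.concatenate([lagged, [hour_of_day, day_of_week]]))
        y.append(consumption[t])
    X, y = np.asarray(X), np.asarray(y)

    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    model.fit(X[:split], y[:split])
    print("test MAE:", np.mean(np.abs(model.predict(X[split:]) - y[split:])))
    ```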

  2. Efficient integration of spectral features for vehicle tracking utilizing an adaptive sensor

    NASA Astrophysics Data System (ADS)

    Uzkent, Burak; Hoffman, Matthew J.; Vodacek, Anthony

    2015-03-01

    Object tracking in urban environments is an important and challenging problem that is traditionally tackled using visible and near-infrared wavelengths. By adding extended data, such as spectral features of the objects, one can improve the reliability of the identification process. However, the huge increase in data created by hyperspectral imaging is usually prohibitive. To overcome this complexity problem, we propose a persistent air-to-ground target tracking system inspired by a state-of-the-art, adaptive, multi-modal sensor. The adaptive sensor is capable of providing panchromatic images as well as the spectra of desired pixels, which addresses the data challenge of hyperspectral tracking by recording spectral data only as needed. Spectral likelihoods are integrated into a data association algorithm in a Bayesian fashion to minimize the likelihood of misidentification. A framework for controlling spectral data collection is developed by incorporating motion segmentation information and prior information from Gaussian Sum filter (GSF) movement predictions produced by a multi-model forecasting set. An intersection mask of the surveillance area is extracted from the OpenStreetMap source and incorporated into the tracking algorithm to perform online refinement of the multiple-model set. The proposed system is tested using challenging and realistic scenarios generated in an adverse environment.
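
    The Bayesian fusion of kinematic and spectral evidence in the data association step can be illustrated with a small sketch. The Gaussian likelihood forms, the residuals, the spectral-angle measure and all numerical values below are assumptions chosen for illustration, not the exact likelihood models of the paper.

    ```python
    # Illustrative fusion of kinematic and spectral likelihoods into
    # association probabilities for one track and several candidate detections.
    import numpy as np

    def gaussian_likelihood(residual, cov):
        r = np.atleast_1d(residual)
        cov = np.atleast_2d(cov)
        k = r.size
        norm = np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
        return float(np.exp(-0.5 * r @ np.linalg.solve(cov, r)) / norm)

    # Kinematic residuals (predicted minus measured position, metres) per candidate
    kinematic_residuals = [np.array([1.0, 0.5]), np.array([6.0, 4.0]), np.array([2.0, -1.0])]
    # Spectral angle (radians) between each candidate's spectrum and the target signature
    spectral_angles = [0.05, 0.30, 0.40]

    pos_cov = np.diag([4.0, 4.0])
    spec_sigma = 0.1

    weights = np.array([
        gaussian_likelihood(rk, pos_cov) * gaussian_likelihood(a, spec_sigma ** 2)
        for rk, a in zip(kinematic_residuals, spectral_angles)
    ])
    weights /= weights.sum()        # Bayesian normalization over the candidates
    print("association probabilities:", weights)
    ```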

  3. Acoustic Sensor Network for Relative Positioning of Nodes

    PubMed Central

    De Marziani, Carlos; Ureña, Jesus; Hernandez, Álvaro; Mazo, Manuel; García, Juan Jesús; Jimenez, Ana; Rubio, María del Carmen Pérez; Álvarez, Fernando; Villadangos, José Manuel

    2009-01-01

    In this work, an acoustic sensor network for a relative localization system is analyzed by reporting the accuracy achieved in the position estimation. The proposed system has been designed for applications in which objects are not restricted to a particular environment and thus cannot depend on any external infrastructure to compute their positions. The objects are capable of computing spatial relations among themselves using only acoustic emissions as a ranging mechanism. The object positions are computed by a multidimensional scaling (MDS) technique and, afterwards, a least-squares algorithm based on the Levenberg-Marquardt algorithm (LMA) is applied to refine the results. Regarding the position estimation, all the parameters involved in the computation of the temporal relations with the proposed ranging mechanism have been considered. The obtained results show that a fine-grained localization can be achieved when a Gaussian distribution error is assumed for the proposed ranging mechanism. Furthermore, since acoustic sensors require a line-of-sight to work properly, the system has also been tested by modeling the loss of this line-of-sight as a non-Gaussian error. A suitable position estimation has been achieved even when a bias affecting up to 25% of the line-of-sight measurements among a set of nodes is considered. PMID:22291520
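
    The two-stage position estimation described above can be sketched in a few lines: classical MDS gives an initial relative geometry from the pairwise acoustic ranges, and a least-squares (Levenberg-Marquardt style) refinement then adjusts the coordinates. The node positions, number of nodes and noise level below are invented for illustration.

    ```python
    # MDS initialization followed by Levenberg-Marquardt refinement of node positions
    # from a (noisy) matrix of pairwise acoustic ranges.
    import numpy as np
    from sklearn.manifold import MDS
    from scipy.optimize import least_squares
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(1)
    true_xy = rng.uniform(0, 10, size=(6, 2))                 # six nodes in a 10 m square
    ranges = squareform(pdist(true_xy)) + rng.normal(0, 0.05, (6, 6))
    ranges = (ranges + ranges.T) / 2                          # symmetrize the measured ranges
    np.fill_diagonal(ranges, 0.0)

    # Stage 1: coarse relative map from the measured range matrix
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    xy0 = mds.fit_transform(ranges)

    # Stage 2: refine coordinates so inter-node distances match the measured ranges
    def residuals(flat_xy):
        xy = flat_xy.reshape(-1, 2)
        return pdist(xy) - squareform(ranges)

    sol = least_squares(residuals, xy0.ravel(), method="lm")
    xy_refined = sol.x.reshape(-1, 2)
    print("residual RMS after refinement:", np.sqrt(np.mean(sol.fun ** 2)))
    ```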

  4. Mesh-type acoustic vector sensor

    NASA Astrophysics Data System (ADS)

    Zalalutdinov, M. K.; Photiadis, D. M.; Szymczak, W. G.; McMahon, J. W.; Bucaro, J. A.; Houston, B. H.

    2017-07-01

    Motivated by the predictions of a theoretical model developed to describe the acoustic flow force exerted on closely spaced nano-fibers in a viscous medium, we have demonstrated a novel concept for a particle velocity-based directional acoustic sensor. The central element of the concept exploits the acoustically induced normal displacement of a fine mesh as a measure of the collinear projection of the particle velocity in the sound wave. The key observations are (i) the acoustically induced flow force on an individual fiber within the mesh is nearly independent of the fiber diameter and (ii) the mesh-flow interaction can be well-described theoretically by a nearest neighbor coupling approximation. Scaling arguments based on these two observations indicate that the refinement of the mesh down to the nanoscale leads to significant improvements in performance. The combination of the two dimensional nature of the mesh together with the nanoscale dimensions provides a dramatic gain in the total length of fiber exposed to the flow, leading to a sensitivity enhancement by orders of magnitude. We describe the fabrication of a prototype mesh sensor equipped with optical readout. Preliminary measurements carried out over a considerable bandwidth together with the results of numerical simulations are in good agreement with the theory, thus providing a proof of concept.

  5. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed the artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize the tuning of parameters and the problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived by utilizing appropriate non-orthogonal wavelet basis functions, and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid-adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these sensors are scheme-independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.
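
    To make the idea of a wavelet-based smoothness sensor concrete, the toy sketch below estimates a local regularity exponent from the decay of redundant second-difference "detail" coefficients across two dyadic scales and flags points where the exponent is small, i.e. near a discontinuity. It is only a schematic analogue; the basis functions, threshold and test profile are assumptions and do not reproduce the Yee et al. sensors.

    ```python
    # Schematic smoothness sensor: estimate a local regularity exponent from the
    # decay of redundant detail coefficients across two dyadic scales and flag
    # points where the solution is non-smooth (small exponent).
    import numpy as np

    def details(u, step):
        """Second-difference 'detail' at the given dyadic step (redundant, no subsampling)."""
        d = np.zeros_like(u)
        d[step:-step] = u[2 * step:] - 2.0 * u[step:-step] + u[:-2 * step]
        return np.abs(d) + 1e-12

    x = np.linspace(-1.0, 1.0, 401)
    u = np.where(x < 0.2, np.sin(4 * np.pi * x), 0.3)        # smooth profile with a jump at x = 0.2

    d1, d2 = details(u, 1), details(u, 2)
    alpha = np.log2(d2 / d1)                                 # coefficient decay rate across scales
    flag = alpha < 0.5                                       # small exponent -> add dissipation here
    print("flagged points near the discontinuity:", x[flag][:5])
    ```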

  6. Cyber security and data collection approaches for smartphone sensor systems

    NASA Astrophysics Data System (ADS)

    Turner, Hamilton; White, Jules

    2012-06-01

    In recent years the ubiquity and resources provided by smartphone devices have encouraged scientists to explore using these devices as remote sensing nodes. In addition, the United States Department of Defense has stated a mission of increasing persistent intelligence, surveillance, and reconnaissance capabilities of U.S. units. This paper presents a method of enabling large-scale, long-term smartphone-powered data collection. Key solutions discussed include the ability to directly allow domain experts to define and refine smartphone applications for data collection, technical advancements that allow rapid dissemination of a smartphone data collection application, and an algorithm for preserving the locational privacy of participating users.

  7. Adaptive optics two-photon excited fluorescence lifetime imaging ophthalmoscopy of exogenous fluorophores in mice.

    PubMed

    Feeks, James A; Hunter, Jennifer J

    2017-05-01

    In vivo cellular scale fluorescence lifetime imaging of the mouse retina has the potential to be a sensitive marker of retinal cell health. In this study, we demonstrate fluorescence lifetime imaging of extrinsic fluorophores using adaptive optics fluorescence lifetime imaging ophthalmoscopy (AOFLIO). We recorded AOFLIO images of inner retinal cells labeled with enhanced green fluorescent protein (EGFP) and capillaries labeled with fluorescein. We demonstrate that AOFLIO can be used to differentiate spectrally overlapping fluorophores in the retina. With further refinements, AOFLIO could be used to assess retinal health in early stages of degeneration by utilizing lifetime-based sensors or even fluorophores native to the retina.

  8. Precipitation Model Validation in 3rd Generation Aeroturbine Disc Alloys

    NASA Technical Reports Server (NTRS)

    Olson, G. B.; Jou, H.-J.; Jung, J.; Sebastian, J. T.; Misra, A.; Locci, I.; Hull, D.

    2008-01-01

    In support of application of the DARPA-AIM methodology to the accelerated hybrid thermal process optimization of 3rd generation aeroturbine disc alloys with quantified uncertainty, equilibrium and diffusion couple experiments have identified available fundamental thermodynamic and mobility databases of sufficient accuracy. Using coherent interfacial energies quantified by Single-Sensor DTA nucleation undercooling measurements, PrecipiCalc(TM) simulations of nonisothermal precipitation in both supersolvus and subsolvus treated samples show good agreement with measured gamma particle sizes and compositions. Observed longterm isothermal coarsening behavior defines requirements for further refinement of elastic misfit energy and treatment of the parallel evolution of incoherent precipitation at grain boundaries.

  9. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangulation scanning method and the structure of a 3D-face measurement system are introduced. In the presented system, a line laser source was selected as the optical probe signal so that one line is scanned at a time. A CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine-vision imaging method, and the triangulation structure parameters were calibrated with fine wires arranged in parallel. The CCD imaging part and the line laser indicator were mounted on a linear motor carriage, which scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, a single CCD image sensor cannot obtain the complete image of the laser line; in this system, two CCD image sensors were therefore placed symmetrically on the two sides of the laser indicator, so the structure in fact comprises two laser triangulation measuring units. Another novel design feature is that three laser indicators were arranged in order to reduce the scanning time, since it is difficult for a person to keep still for a long time. The 3D data were calculated after scanning, and further data processing includes 3D coordinate refinement, mesh generation and surface rendering. Experiments show that the system has a simple structure, high scanning speed and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.

  10. Adaptive recovery of motion blur point spread function from differently exposed images

    NASA Astrophysics Data System (ADS)

    Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian

    2010-01-01

    Motion due to digital camera movement during the image capture process is a major factor that degrades the quality of images, and many methods for camera motion removal have been developed. Central to all techniques is the correct recovery of what is known as the Point Spread Function (PSF). A very popular technique for estimating the PSF relies on a pair of gyroscopic sensors to measure the hand motion. However, the errors caused either by the loss of the translational component of the movement or by the lack of precision of the gyro-sensor measurements impede the achievement of a good-quality restored image. To compensate for this, we propose a method that begins with an estimate of the PSF obtained from two gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it. The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimate of the PSF is generated from the output signal of the two gyro sensors. The PSF coefficients are then updated using 2D Least Mean Square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. The refined PSF is used to process the blurred image with known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation, and the quality of the restored image is improved compared with the gyro-only approach or with blind image deconvolution results.
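
    A stripped-down version of the LMS refinement step might look like the sketch below: starting from a coarse kernel, the PSF coefficients are nudged so that the sharp (luminance-equalized, short-exposure) image convolved with the PSF reproduces the blurred image. The synthetic images, kernel size, sampling grid and step size are assumptions for illustration; the paper's 2D-LMS with a coarse-to-fine grid is more elaborate.

    ```python
    # Minimal LMS refinement of a blur kernel (PSF) from a sharp/blurred image pair,
    # starting from a coarse initial estimate (e.g. gyro-derived).
    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(2)
    sharp = rng.uniform(0, 1, (64, 64))                       # stand-in for the equalized short-exposure frame

    k = 5
    true_psf = np.zeros((k, k)); true_psf[2, :] = 1.0 / k     # horizontal motion blur
    blurred = convolve2d(sharp, true_psf, mode="same")

    psf = np.zeros((k, k)); psf[2, 1:4] = 1.0 / 3             # coarse initial estimate
    mu = 0.05                                                  # LMS step size

    for _ in range(20):                                        # a few passes over sampled pixels
        for i in range(k, 60, 3):
            for j in range(k, 60, 3):
                patch = sharp[i - 2:i + 3, j - 2:j + 3]        # window aligned with the PSF support
                err = blurred[i, j] - np.sum(psf * patch)
                psf += mu * err * patch                        # LMS coefficient update
        psf = np.clip(psf, 0, None); psf /= psf.sum()          # keep the PSF non-negative, unit sum

    print("kernel error:", np.abs(psf - true_psf).max())
    ```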

  11. MEMS based shock pulse detection sensor for improved rotary Stirling cooler end of life prediction

    NASA Astrophysics Data System (ADS)

    Hübner, M.; Münzberg, M.

    2018-05-01

    The widespread use of rotary Stirling coolers in high-performance thermal imagers used for critical 24/7 surveillance tasks justifies any effort to significantly enhance the reliability and predictable uptime of those coolers. Typically the lifetime of the whole imaging device is limited by continuous wear and, finally, failure of the rotary compressor of the Stirling cooler, especially failure of the bearings it contains. MTTF-based lifetime predictions, even those based on refined MTTF models that take operational-scenario-dependent scaling factors into account, still lack the precision to accurately forecast the end of life (EOL) of individual coolers. Consequently, preventive maintenance of individual coolers to avoid failures of the main sensor in critical operational scenarios is very costly or even useless. We have developed an integrated test method based on 'Micro Electromechanical Systems' (MEMS) sensors which significantly improves the cooler EOL prediction. Recently available commercial MEMS acceleration sensors have mechanical resonance frequencies up to 50 kHz. They are able to detect solid-borne shock pulses in the cooler structure, originating from, e.g., metal-on-metal impacts driven by the periodic forces acting on moving inner parts of the rotary compressor within wear-dependent slack and play. The impact-driven transient shock-pulse analysis uses only the high-frequency part of the signal (above 10 kHz) and therefore differs from the commonly used broadband low-frequency vibrational analysis of reciprocating machines. It offers a direct indicator of the individual state of wear. The predictive cooler lifetime model based on the shock pulse analysis is presented and results are discussed.
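
    The signal path of such a shock-pulse analysis can be sketched in a few lines: high-pass filter the accelerometer record, take its envelope, and count pulses above a threshold. The sampling rate, cutoff, threshold and the synthetic impact signal below are assumptions for illustration, not parameters from the paper.

    ```python
    # Schematic shock-pulse extraction from a MEMS accelerometer record.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert, find_peaks

    fs = 100_000                                             # 100 kHz sampling of the MEMS sensor
    t = np.arange(0, 1.0, 1 / fs)
    background = 0.5 * np.sin(2 * np.pi * 50 * t)            # low-frequency machine vibration
    shocks = np.zeros_like(t)
    for t0 in (0.2, 0.5, 0.8):                               # three metal-on-metal impacts
        idx = (t >= t0) & (t < t0 + 0.002)
        shocks[idx] = np.exp(-(t[idx] - t0) * 4000) * np.sin(2 * np.pi * 20_000 * (t[idx] - t0))
    signal = background + shocks + 0.01 * np.random.default_rng(3).standard_normal(t.size)

    sos = butter(4, 10_000, btype="highpass", fs=fs, output="sos")
    hp = sosfiltfilt(sos, signal)                            # keep only the high-frequency content
    envelope = np.abs(hilbert(hp))                           # shock-pulse envelope
    peaks, _ = find_peaks(envelope, height=0.3, distance=int(0.01 * fs))
    print("detected shock pulses at t =", t[peaks])
    ```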

  12. Issues and challenges in resource management and its interaction with levels 2/3 fusion with applications to real-world problems: an annotated perspective

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Kadar, Ivan; Hintz, Kenneth; Biermann, Joachim; Chong, Chee-Yee; Salerno, John; Das, Subrata

    2007-04-01

    Resource management (or process refinement) is critical for information fusion operations in that users, sensors, and platforms need to be informed, based on mission needs, of how to collect, process, and exploit data. To address these growing concerns, a panel session was conducted at the International Society of Information Fusion Conference in 2006 to discuss the various issues surrounding the interaction of resource management with Level 2/3 situation and threat assessment. This paper briefly consolidates the discussion of the invited panelists. The common themes include: (1) addressing the user in system management, sensor control, and knowledge-based information collection; (2) determining a standard set of fusion metrics for optimization and evaluation based on the application; (3) allowing dynamic and adaptive updating to deliver timely information needs and information rates; (4) optimizing the joint objective functions at all information fusion levels based on decision-theoretic analysis; (5) providing constraints from distributed resource mission planning and scheduling; and (6) defining L2/3 situation entity definitions for knowledge discovery, modeling, and information projection.

  13. Surface roughness considerations for atmospheric correction of ocean color sensors. I - The Rayleigh-scattering component. II - Error in the retrieved water-leaving radiance

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Wang, Menghua

    1992-01-01

    The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, Lr, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, Lr is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.

  14. 50th Anniversary of Radiation Budget Measurements from Satellites

    NASA Astrophysics Data System (ADS)

    Raschke, Ehrhard, ,, Dr.; Kinne, Stefan, ,, Dr.

    2010-05-01

    The "space race" between the USA and the Soviet Union supported rapid developments of instruments to measure properties of the atmosphere from satellite platforms. The satellite Explorer 7 (launch on 13 October 1959) was the first to carry sensors which were sensitive to the fluxes of solar (shortwave) and terrestrial (longwave) radiation leaving the Earth to space. Improved versions of those sensors and more complicated radiometers were flown on various operational and experimental satellites of the Nimbus, ESSA, TIROS, COSMOS, and NOAA series. There results, although often inherent to strong sampling insufficiencies, provided already a general picture on the spatial distribution and seasonal variability of radiation budget components at the Top of the Atmosphere, which finally could be refined with the more recent and more accurate and complete data sets of the experiments ERBE, CERES and ScRaB. Numerical analyses of climate data complemented such measurements to obtain a complete picture on the radiation budget at various levels within the atmosphere and at ground. These data is now used to validate the performance of climate models.

  15. The astro-geodetic use of CCD for gravity field refinement

    NASA Astrophysics Data System (ADS)

    Gerstbach, G.

    1996-07-01

    The paper starts with a review of geoid projects in which vertical deflections are more effective than gravimetry. In alpine regions the economy of astrogeoids is at least 10 times higher, but many countries do not make use of this fact - presumably because the measurements are not yet fully automated. Based upon experience with the astrometry of high satellites and his own tests, the author analyses the use of CCDs for astro-geodetic measurements. Automation and speeding up will be possible in a few years, the latter depending on the observation scheme. Sensor characteristics, cooling and readout of the devices should be harmonized. Using line sensors in small prism astrolabes, the CCD accuracy will reach the visual one (±0.2″) within 5-10 years. Astrogeoids can be combined ideally with geological data, because the vertical variation of rock densities does not cause systematic effects (contrary to gravimetry). So a geoid of ±5 cm accuracy (achieved in Austria and other alpine countries with 5-10 points per 1000 km²) can be improved to ±2 cm without additional observations and border effects.

  16. Passive field reflectance measurements

    NASA Astrophysics Data System (ADS)

    Weber, Christian; Schinca, Daniel C.; Tocho, Jorge O.; Videla, Fabian

    2008-10-01

    The results of reflectance measurements performed with a three-band passive radiometer with independent channels for solar irradiance reference are presented. A comparison is made between the traditional method, which uses downward-looking field measurements and a reference white panel, and the new approach involving duplicated downward- and upward-looking spectral channels (each of the latter with its own diffuser). The results indicate that the latter method agrees very well with the standard method and is more suitable for passive sensors under rapidly changing atmospheric conditions (such as clouds, dust, mist, smog and other scatterers), since a more reliable synchronous recording of reference and incident light is achieved. Moreover, having separate channels for the reference and the signal allows a better balancing of amplifier gains for each spectral channel. We show the results obtained in the determination of the normalized difference vegetation index (NDVI) for the 2004-2007 field experiments concerning weed detection in soybean stubbles and fertilizer level assessment in wheat. The method may be used to refine sensor-based nitrogen fertilizer rate recommendations and to determine suitable zones for herbicide applications.
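
    As a concrete illustration of the reflectance and NDVI computation with paired channels, the short sketch below divides each downward-looking (target) reading by the synchronous upward-looking (solar reference) reading and forms the usual NDVI ratio. The channel readings are invented numbers, not data from the experiments.

    ```python
    # Per-band reflectance from synchronous downward/upward channel pairs, then NDVI.
    down_red, up_red = 0.082, 0.510       # target reading / solar reference, red band
    down_nir, up_nir = 0.295, 0.480       # same pair for the near-infrared band

    refl_red = down_red / up_red          # per-band reflectance factor
    refl_nir = down_nir / up_nir

    ndvi = (refl_nir - refl_red) / (refl_nir + refl_red)
    print(f"red reflectance {refl_red:.3f}, NIR reflectance {refl_nir:.3f}, NDVI {ndvi:.3f}")
    ```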

  17. A study on rational function model generation for TerraSAR-X imagery.

    PubMed

    Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi

    2013-09-09

    The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate Rational Polynomial Coefficients (RPCs) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy better than 10^-3 pixel. Because independent RPCs exhibit absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update the SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining the RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction.
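
    The second (affine-refinement) method can be sketched as a small least-squares fit: an affine transform in image space is estimated from the discrepancies between RPC-projected and measured GCP image coordinates and then applied to subsequent projections. The corner coordinates below are invented for illustration and do not come from the paper.

    ```python
    # Fit an affine correction (row, col) -> (row', col') from four corner GCPs.
    import numpy as np

    # RPC-projected image coordinates (row, col) of four corner GCPs ...
    projected = np.array([[10.2, 12.1], [10.6, 5010.8], [5011.3, 11.7], [5010.9, 5011.2]])
    # ... and their measured (true) image coordinates
    measured = np.array([[11.0, 11.0], [11.0, 5011.0], [5011.0, 11.0], [5011.0, 5011.0]])

    # row' = a0 + a1*row + a2*col and col' = b0 + b1*row + b2*col
    A = np.hstack([np.ones((4, 1)), projected])
    coef_row, *_ = np.linalg.lstsq(A, measured[:, 0], rcond=None)
    coef_col, *_ = np.linalg.lstsq(A, measured[:, 1], rcond=None)

    def refine(rc):
        """Apply the affine correction to an RPC-projected (row, col) pair."""
        v = np.array([1.0, rc[0], rc[1]])
        return np.array([v @ coef_row, v @ coef_col])

    print("corrected corner:", refine(projected[0]), "target:", measured[0])
    ```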

  18. Mechanical monolithic horizontal sensor for low frequency seismic noise measurement

    NASA Astrophysics Data System (ADS)

    Acernese, Fausto; Giordano, Gerardo; Romano, Rocco; De Rosa, Rosario; Barone, Fabrizio

    2008-07-01

    This paper describes a mechanical monolithic horizontal sensor for geophysical applications developed at the University of Salerno. The instrument is basically a monolithic tunable folded pendulum, shaped with precision machining and electric discharge machining, that can be used both as a seismometer and, in a force-feedback configuration, as an accelerometer. The monolithic mechanical design and the introduction of laser interferometric techniques for the readout make it a very compact instrument, very sensitive in the low-frequency seismic noise band and with very good immunity to environmental noise. Many changes have been made since the last version (2007), mainly aimed at improving the mechanics and the optical readout of the instrument. In particular, we have developed and tested a prototype with elliptical hinges and mechanical tuning of the resonance frequency, together with a laser optical lever and a new laser interferometer readout system. The theoretical sensitivity curves for both the laser optical lever and the laser interferometric readout, evaluated on the basis of suitable theoretical models, show very good agreement with the experimental measurements. A very interesting scientific result is the measured natural resonance frequency of the instrument, 70 mHz with Q = 140, in air without thermal stabilization. This result demonstrates the feasibility of a monolithic folded pendulum sensor with a natural resonance frequency of the order of millihertz with a more refined mechanical tuning.

  19. Mechanical monolithic sensor for low frequency seismic noise measurement

    NASA Astrophysics Data System (ADS)

    Acernese, Fausto; De Rosa, Rosario; Giordano, Gerardo; Romano, Rocco; Barone, Fabrizio

    2007-10-01

    This paper describes a mechanical monolithic sensor for geophysical applications developed at the University of Salerno. The instrument is basically a monolithic tunable folded pendulum, shaped with precision machining and electric discharge machining, that can be used both as a seismometer and, in a force-feedback configuration, as an accelerometer. The monolithic mechanical design and the introduction of laser interferometric techniques for the readout make it a very compact instrument, very sensitive in the low-frequency seismic noise band and with very good immunity to environmental noise. Many changes have been made since the last version (2006), mainly aimed at improving the mechanics and the optical readout of the instrument. In particular, we have developed and tested a prototype with elliptical hinges and mechanical tuning of the resonance frequency, together with a new laser optical lever and laser interferometer readout system. The theoretical sensitivity curves for both the laser optical lever and the laser interferometric readout, calculated on the basis of suitable theoretical models, show very good agreement with the experimental measurements. A very interesting scientific result is that the measured natural resonance frequency of the instrument is ~70 mHz with Q ~ 140 in air without thermal stabilization, demonstrating the feasibility of a monolithic folded-pendulum sensor with a natural resonance frequency of the order of 5 mHz with a more refined mechanical tuning.

  20. Mechanical monolithic horizontal sensor for low frequency seismic noise measurement.

    PubMed

    Acernese, Fausto; Giordano, Gerardo; Romano, Rocco; De Rosa, Rosario; Barone, Fabrizio

    2008-07-01

    This paper describes a mechanical monolithic horizontal sensor for geophysical applications developed at the University of Salerno. The instrument is basically a monolithic tunable folded pendulum, shaped with precision machining and electric discharge machining, that can be used both as a seismometer and, in a force-feedback configuration, as an accelerometer. The monolithic mechanical design and the introduction of laser interferometric techniques for the readout make it a very compact instrument, very sensitive in the low-frequency seismic noise band and with very good immunity to environmental noise. Many changes have been made since the last version (2007), mainly aimed at improving the mechanics and the optical readout of the instrument. In particular, we have developed and tested a prototype with elliptical hinges and mechanical tuning of the resonance frequency, together with a laser optical lever and a new laser interferometer readout system. The theoretical sensitivity curves for both the laser optical lever and the laser interferometric readout, evaluated on the basis of suitable theoretical models, show very good agreement with the experimental measurements. A very interesting scientific result is the measured natural resonance frequency of the instrument, 70 mHz with Q = 140, in air without thermal stabilization. This result demonstrates the feasibility of a monolithic folded pendulum sensor with a natural resonance frequency of the order of millihertz with a more refined mechanical tuning.

  1. A Study on Rational Function Model Generation for TerraSAR-X Imagery

    PubMed Central

    Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi

    2013-01-01

    The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate Rational Polynomial Coefficients (RPCs) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy better than 10^-3 pixel. Because independent RPCs exhibit absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update the SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining the RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction. PMID:24021971

  2. Approach for Improving the Integrated Sensor Orientation

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Ercolin Filho, L.; Graça, N.; Centeno, J.

    2016-06-01

    The direct determination of the exterior orientation parameters (EOP) of aerial images via integration of an Inertial Measurement Unit (IMU) and GPS is commonly used in photogrammetric mapping nowadays. The accuracy of the EOP depends on accurate knowledge of the sensor mounting parameters at the time the job is performed (the offsets of the IMU relative to the projection centre and the boresight misalignment angles between the IMU and the photogrammetric coordinate system). In principle, when the EOP values do not achieve the accuracies required for the photogrammetric application, the approach known as Integrated Sensor Orientation (ISO) is used to refine the direct EOP. The ISO approach requires accurate Interior Orientation Parameters (IOP) and the standard deviations of the EOP under flight conditions. This paper investigates the feasibility of using in situ camera calibration to obtain these requirements. The camera calibration uses a small sub-block of images extracted from the entire block. A digital Vexcel UltraCam XP camera connected to an APPLANIX POS AVTM system was used to acquire two small blocks of images that were used in this study. The blocks have different flight heights and opposite flight directions. The proposed methodology significantly improved the vertical and horizontal accuracies of the 3D point intersection. Using a minimum set of control points, the horizontal and vertical accuracies reached nearly one image pixel of resolution on the ground (GSD). The experimental results are shown and discussed.
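
    To make the role of the mounting parameters concrete, the sketch below composes an IMU-derived attitude with a boresight misalignment and applies a lever-arm offset to obtain the camera's exterior orientation. The rotation conventions, angles, offsets and coordinates are invented for illustration and are not values from the paper.

    ```python
    # Forming direct EOP from GPS/IMU data: camera attitude = IMU attitude composed
    # with the boresight misalignment; camera position = GNSS/IMU position plus the
    # lever-arm offset rotated into the mapping frame.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    imu_attitude = R.from_euler("zyx", [45.0, 2.0, -1.5], degrees=True)   # IMU body -> mapping frame
    boresight = R.from_euler("zyx", [0.20, -0.05, 0.10], degrees=True)    # camera -> IMU body (misalignment)
    lever_arm_body = np.array([0.15, -0.05, 0.30])                        # camera offset in the IMU body frame (m)
    gnss_position = np.array([512_345.2, 4_301_987.6, 1_250.4])           # GNSS/IMU position in the map frame (m)

    camera_attitude = imu_attitude * boresight                            # exterior orientation rotation
    camera_position = gnss_position + imu_attitude.apply(lever_arm_body)  # lever-arm corrected projection centre

    print("camera attitude (yaw/pitch/roll, deg):", camera_attitude.as_euler("zyx", degrees=True))
    print("camera projection centre:", camera_position)
    ```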

  3. A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI

    PubMed Central

    Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao

    2016-01-01

    Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning. PMID:27007379
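
    The coarse KWNN step described above can be illustrated with a few lines of Python: the observed RSSI vector is compared with a fingerprint database, and the positions of the K closest fingerprints are averaged with inverse-distance weights. The fingerprint database, access-point count and RSSI values are invented for illustration.

    ```python
    # K Weighted Nearest Neighbor (KWNN) coarse position estimate from Wi-Fi RSSI.
    import numpy as np

    # Fingerprint database: RSSI (dBm) from 3 access points at known (x, y) positions
    fingerprints = np.array([[-40, -70, -80],
                             [-55, -60, -75],
                             [-70, -50, -65],
                             [-80, -45, -55],
                             [-65, -72, -48]], dtype=float)
    locations = np.array([[0, 0], [5, 0], [10, 0], [10, 5], [5, 8]], dtype=float)

    observed = np.array([-58.0, -62.0, -72.0])
    K = 3

    dist = np.linalg.norm(fingerprints - observed, axis=1)    # distance in signal space
    nearest = np.argsort(dist)[:K]
    weights = 1.0 / (dist[nearest] + 1e-6)
    weights /= weights.sum()

    estimate = weights @ locations[nearest]                   # weighted centroid of the K matches
    print("coarse position estimate:", estimate)
    ```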

  4. A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI.

    PubMed

    Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao

    2016-03-19

    Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning.

  5. Small Business Innovations

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Under a Small Business Innovation Research (SBIR) contract to Kennedy Space Center, EIC Laboratories invented a Raman spectrograph with fiber optic sampling for space applications such as sensing hazardous fuel vapors and making on-board rapid analyses of chemicals and minerals. Raman spectroscopy is a laser-based measurement technique that provides a molecular 'fingerprint' through a unique vibrational spectrum and can function in aqueous environments. EIC combined optical fiber technology with Raman methods to develop sensors that can be operated at a distance from the spectrographic analysis instruments and the laser excitation source. EIC refined and commercialized the technology to create the Fiber Optic Raman Spectrograph and the RamanProbe. Commercial applications range from process control to monitoring hazardous materials.

  6. Adaptive optics two-photon excited fluorescence lifetime imaging ophthalmoscopy of exogenous fluorophores in mice

    PubMed Central

    Feeks, James A.; Hunter, Jennifer J.

    2017-01-01

    In vivo cellular scale fluorescence lifetime imaging of the mouse retina has the potential to be a sensitive marker of retinal cell health. In this study, we demonstrate fluorescence lifetime imaging of extrinsic fluorophores using adaptive optics fluorescence lifetime imaging ophthalmoscopy (AOFLIO). We recorded AOFLIO images of inner retinal cells labeled with enhanced green fluorescent protein (EGFP) and capillaries labeled with fluorescein. We demonstrate that AOFLIO can be used to differentiate spectrally overlapping fluorophores in the retina. With further refinements, AOFLIO could be used to assess retinal health in early stages of degeneration by utilizing lifetime-based sensors or even fluorophores native to the retina. PMID:28663886

  7. NAS-Wide Fast-Time Simulation Study for Evaluating Performance of UAS Detect-and-Avoid Alerting and Guidance Systems

    NASA Technical Reports Server (NTRS)

    Lee, Seung Man; Park, Chunki; Cone, Andrew Clayton; Thipphavong, David P.; Santiago, Confesor

    2016-01-01

    This presentation contains the analysis results of NAS-wide fast-time simulations with UAS and VFR traffic for a single day, used to evaluate the performance of Detect-and-Avoid (DAA) alerting and guidance systems. The purpose of this study was to help refine and validate MOPS alerting and guidance requirements. In this study, we generated plots of all performance metrics specified by the RTCA SC-228 Minimum Operational Performance Standards (MOPS): 1) to evaluate the sensitivity of alerting parameters on the performance metrics of each DAA alert type (Preventive, Corrective, and Warning alerts) and 2) to evaluate the effect of sensor uncertainty on DAA alerting and guidance performance.

  8. Long-Term Observations of Ocean Biogeochemistry with Nitrate and Oxygen Sensors in Apex Profiling Floats

    NASA Astrophysics Data System (ADS)

    Johnson, K. S.; Coletti, L.; Jannasch, H.; Martz, T.; Swift, D.; Riser, S.

    2008-12-01

    Long-term, autonomous observations of ocean biogeochemical cycles are now feasible with chemical sensors in profiling floats. These sensors will enable decadal-scale observations of trends in global ocean biogeochemical cycles. Here, we focus on measurements of nitrate and dissolved oxygen. The ISUS (In Situ Ultraviolet Spectrophotometer) optical nitrate sensor has been adapted to operate in a Webb Research Apex profiling float. The Apex float is of the type used in the Argo array and is designed for multi-year, expendable deployments in the ocean. Floats park at 1000 m depth and make 60 nitrate and oxygen measurements at depth intervals ranging from 50 m below 400 m to 5 m in the upper 100 m as they profile to the surface. All data are transmitted to shore using the Iridium telemetry system and are available on the Internet in near-real time. Floats equipped with ISUS and an Aanderaa oxygen sensor are capable of making 280 vertical profiles from 1000 m. At a 5 day cycle time, the floats should have nearly a four year endurance. Three floats have now been deployed at the Hawaii Ocean Time-series station (HOT), Ocean Station Papa (OSP) in the Gulf of Alaska and at 50 South, 30 East in the Southern Ocean. Two additional floats are designated for deployment at the Bermuda Atlantic Time Series station (BATS) and in the Drake Passage. The HOT float has made 56 profiles over 260 days and should continue operating for 3 more years. Nitrate concentrations are in excellent agreement with the long-term mean observed at HOT. No significant long-term drift in sensor response has occurred. A variety of features have been observed in the HOT nitrate data that are linked to contemporaneous changes in oxygen production and mesoscale dynamics. The impacts of these features will be briefly described. The Southern Ocean float has operated for 200 days and is now observing reinjection of nitrate into surface waters as winter mixing occurs (surface nitrate > 24 micromolar). We expect that the OSP and Southern Ocean floats will provide a quantitative measurement of the timing and magnitude of the spring bloom via the drawdown of surface nitrate. We are funded through NSF and NOPP to continue float deployments at HOT, BATS, OSP and the Southern Ocean for the next 3 years and to refine the sensor so it can be offered as a commercial option for all float users. New sensors in development for float deployments include a stable ISFET pH sensor.

  9. Spinoff 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Image-Capture Devices Extend Medicine's Reach; Medical Devices Assess, Treat Balance Disorders; NASA Bioreactors Advance Disease Treatments; Robotics Algorithms Provide Nutritional Guidelines; "Anti-Gravity" Treadmills Speed Rehabilitation; Crew Management Processes Revitalize Patient Care; Hubble Systems Optimize Hospital Schedules; Web-based Programs Assess Cognitive Fitness; Electrolyte Concentrates Treat Dehydration; Tools Lighten Designs, Maintain Structural Integrity; Insulating Foams Save Money, Increase Safety; Polyimide Resins Resist Extreme Temperatures; Sensors Locate Radio Interference; Surface Operations Systems Improve Airport Efficiency; Nontoxic Resins Advance Aerospace Manufacturing; Sensors Provide Early Warning of Biological Threats; Robot Saves Soldier's Lives Overseas (MarcBot); Apollo-Era Life Raft Saves Hundreds of Sailors; Circuits Enhance Scientific Instruments and Safety Devices; Tough Textiles Protect Payloads and Public Safety Officers; Forecasting Tools Point to Fishing Hotspots; Air Purifiers Eliminate Pathogens, Preserve Food; Fabrics Protect Sensitive Skin from UV Rays; Phase Change Fabrics Control Temperature; Tiny Devices Project Sharp, Colorful Images; Star-Mapping Tools Enable Tracking of Endangered Animals; Nanofiber Filters Eliminate Contaminants; Modeling Innovations Advance Wind Energy Industry; Thermal Insulation Strips Conserve Energy; Satellite Respondent Buoys Identify Ocean Debris; Mobile Instruments Measure Atmospheric Pollutants; Cloud Imagers Offer New Details on Earth's Health; Antennas Lower Cost of Satellite Access; Feature Detection Systems Enhance Satellite Imagery; Chlorophyll Meters Aid Plant Nutrient Management; Telemetry Boards Interpret Rocket, Airplane Engine Data; Programs Automate Complex Operations Monitoring; Software Tools Streamline Project Management; Modeling Languages Refine Vehicle Design; Radio Relays Improve Wireless Products; Advanced Sensors Boost Optical Communication, Imaging; Tensile Fabrics Enhance Architecture Around the World; Robust Light Filters Support Powerful Imaging Devices; Thermoelectric Devices Cool, Power Electronics; Innovative Tools Advance Revolutionary Weld Technique; Methods Reduce Cost, Enhance Quality of Nanotubes; Gauging Systems Monitor Cryogenic Liquids; Voltage Sensors Monitor Harmful Static; and Compact Instruments Measure Heat Potential.

  10. Validation of conducting wall models using magnetic measurements

    DOE PAGES

    Hanson, Jeremy M.; Bialek, James M.; Turco, Francesca; ...

    2016-08-16

    The impact of conducting wall eddy currents on perturbed magnetic field measurements is a key issue for understanding the measurement and control of long-wavelength MHD stability in tokamak devices. As plasma response models have grown in sophistication, the need to understand and resolve small changes in these measurements has become more important, motivating increased fidelity in simulations of externally applied fields and the wall eddy current response. In this manuscript, we describe thorough validation studies of the wall models in the MARS-F and VALEN stability codes, using coil-sensor vacuum coupling measurements from the DIII-D tokamak. The VALEN formulation treats conducting structures with arbitrary three-dimensional geometries, while MARS-F uses an axisymmetric wall model and a spectral decomposition of the problem geometry with a fixed toroidal harmonic n. The vacuum coupling measurements have a strong sensitivity to wall eddy currents induced by time-changing coil currents, owing to the close proximities of both the sensors and coils to the wall. Measurements from individual coil and sensor channels are directly compared with VALEN predictions. It is found that straightforward improvements to the VALEN model, such as refining the wall mesh and simulating the vertical extent of the DIII-D poloidal field sensors, lead to good agreement with the experimental measurements. In addition, couplings to multi-coil, n = 1 toroidal mode perturbations are calculated from the measurements and compared with predictions from both codes. Lastly, the toroidal mode comparisons favor the fully three-dimensional simulation approach, likely because this approach naturally treats n > 1 sidebands generated by the coils and wall eddy currents, as well as the n = 1 fundamental.

  11. Validation of conducting wall models using magnetic measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Jeremy M.; Bialek, James M.; Turco, Francesca

    The impact of conducting wall eddy currents on perturbed magnetic field measurements is a key issue for understanding the measurement and control of long-wavelength MHD stability in tokamak devices. As plasma response models have grown in sophistication, the need to understand and resolve small changes in these measurements has become more important, motivating increased fidelity in simulations of externally applied fields and the wall eddy current response. In this manuscript, we describe thorough validation studies of the wall models in the MARS-F and VALEN stability codes, using coil-sensor vacuum coupling measurements from the DIII-D tokamak. The VALEN formulation treats conducting structures with arbitrary three-dimensional geometries, while MARS-F uses an axisymmetric wall model and a spectral decomposition of the problem geometry with a fixed toroidal harmonic n. The vacuum coupling measurements have a strong sensitivity to wall eddy currents induced by time-changing coil currents, owing to the close proximities of both the sensors and coils to the wall. Measurements from individual coil and sensor channels are directly compared with VALEN predictions. It is found that straightforward improvements to the VALEN model, such as refining the wall mesh and simulating the vertical extent of the DIII-D poloidal field sensors, lead to good agreement with the experimental measurements. In addition, couplings to multi-coil, n = 1 toroidal mode perturbations are calculated from the measurements and compared with predictions from both codes. Lastly, the toroidal mode comparisons favor the fully three-dimensional simulation approach, likely because this approach naturally treats n > 1 sidebands generated by the coils and wall eddy currents, as well as the n = 1 fundamental.

  12. Assessment of the suitability of Durafet-based sensors for pH measurement in dynamic estuarine environments

    NASA Astrophysics Data System (ADS)

    Gonski, Stephen F.; Cai, Wei-Jun; Ullman, William J.; Joesoef, Andrew; Main, Christopher R.; Pettay, D. Tye; Martz, Todd R.

    2018-01-01

    The suitability of the Honeywell Durafet for the measurement of pH in productive, high-fouling, and highly-turbid estuarine environments was investigated at the confluence of the Murderkill Estuary and Delaware Bay (Delaware, USA). Three different flow configurations of the SeapHOx sensor equipped with a Honeywell Durafet and its integrated internal (Ag/AgCl reference electrode containing a 4.5 M KCl gel liquid junction) and external (solid-state chloride ion selective electrode, Cl-ISE) reference electrodes were deployed for four periods between April 2015 and September 2016. In this environment, the Honeywell Durafet proved capable of making high-resolution and high-frequency pH measurements on the total scale between pH 6.8 and 8.4. Natural pH fluctuations of >1 pH unit were routinely captured over a range of timescales. The sensor pH collected between May and August 2016 using the most refined SeapHOx configuration exhibited good agreement with multiple sets of independently measured reference pH values. When deployed in conjunction with rigorous discrete sampling and calibration schemes, the sensor pH had a root-mean-square error ranging between 0.011 and 0.036 pH units across a wide range of salinity, relative to both pHT calculated from measured dissolved inorganic carbon and total alkalinity and pHNBS measured with a glass electrode corrected to pHT at in situ conditions. The present work demonstrates the viability of the Honeywell Durafet for the measurement of pH to within the weather-level precision defined by the Global Ocean Acidification Observing Network (GOA-ON, ≤ 0.02 pH units) as a part of future estuarine CO2 chemistry studies undertaken in dynamic environments.

  13. A normalisation framework for (hyper-)spectral imagery

    NASA Astrophysics Data System (ADS)

    Grumpe, Arne; Zirin, Vladimir; Wöhler, Christian

    2015-06-01

    It is well known that the topography has an influence on the observed reflectance spectra. This influence is not compensated by spectral ratios, i.e. the effect is wavelength dependent. In this work, we present a complete normalisation framework. The surface temperature is estimated based on the measured surface reflectance. To normalise the spectral reflectance with respect to a standard illumination geometry, spatially varying reflectance parameters are estimated based on a non-linear reflectance model. The reflectance parameter estimation has one free parameter, a low-pass function, which sets the scale of the spatial variation, i.e. the lateral resolution of the reflectance parameter maps. Since the local surface topography has a major influence on the measured reflectance, the often-neglected shading information is extracted from the spectral imagery and an existing topography model is refined to image resolution. All methods are demonstrated on the Moon Mineralogy Mapper dataset. Additionally, two empirical methods are introduced that deal with observed systematic reflectance changes in co-registered images acquired at different phase angles. These effects, however, may also be caused by the sensor temperature, due to its correlation with the phase angle. Surface temperatures above 300 K are detected and are very similar to those of a reference method. The proposed method, however, seems more robust in the case of absorptions visible in the reflectance spectrum near 2000 nm. By introducing a low-pass filter into the computation of the reflectance parameters, the reflectance behaviour of the surfaces may be derived at different scales. This allows for an iterative refinement of the local surface topography using shape from shading and the computation of the reflectance parameters. The inferred parameters are derived from all available co-registered images and do not show significant influence of the local surface topography. The results of the empirical correction show that both proposed methods greatly reduce the influence of different phase angles or sensor temperatures.

  14. Estimating the beam attenuation coefficient in coastal waters from AVHRR imagery

    NASA Astrophysics Data System (ADS)

    Gould, Richard W.; Arnone, Robert A.

    1997-09-01

    This paper presents an algorithm to estimate particle beam attenuation at 660 nm (cp660) in coastal areas using the red and near-infrared channels of the NOAA AVHRR satellite sensor. In situ reflectance spectra and cp660 measurements were collected at 23 stations in Case I and II waters during an April 1993 cruise in the northern Gulf of Mexico. The reflectance spectra were weighted by the spectral response of the AVHRR sensor and integrated over the channel 1 waveband to estimate the atmospherically corrected signal recorded by the satellite. An empirical relationship between integrated reflectance and cp660 values was derived, with a linear correlation coefficient of 0.88. Because the AVHRR sensor requires a strong channel 1 signal, the algorithm is applicable in highly turbid areas (cp660 > 1.5 m^-1) where scattering from suspended sediment strongly controls the shape and magnitude of the red (550-650 nm) reflectance spectrum. The algorithm was tested on a data set collected 2 years later in different coastal waters in the northern Gulf of Mexico, and satellite estimates of cp660 averaged within 37% of measured values. Application of the algorithm provides daily images of nearshore regions at 1 km resolution for evaluating processes affecting ocean color distribution patterns (tides, winds, currents, river discharge). Further validation and refinement of the algorithm are in progress to permit quantitative application in other coastal areas.
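
    The empirical-algorithm construction (band-weighted integration of in situ reflectance followed by a linear regression against cp660) can be sketched as below. The spectral response curve, reflectance spectra and cp660 values are synthetic placeholders; only the procedure mirrors the description above.

    ```python
    # Weight reflectance spectra by a channel-1-like response, integrate over the band,
    # and regress the band-integrated reflectance against measured cp(660).
    import numpy as np

    wavelengths = np.arange(550, 751, 5.0)                          # nm
    response = np.exp(-0.5 * ((wavelengths - 630.0) / 40.0) ** 2)   # stand-in channel-1 response

    rng = np.random.default_rng(4)
    n_stations = 23
    cp660 = rng.uniform(0.5, 6.0, n_stations)                       # measured beam attenuation (1/m)
    # Toy reflectance spectra whose red-band magnitude scales with turbidity
    spectra = np.outer(0.002 + 0.004 * cp660, np.exp(-((wavelengths - 600.0) / 120.0) ** 2))
    spectra += 0.0003 * rng.standard_normal(spectra.shape)

    band_integrated = (spectra * response).sum(axis=1) / response.sum()

    slope, intercept = np.polyfit(band_integrated, cp660, 1)        # empirical linear relationship
    r = np.corrcoef(band_integrated, cp660)[0, 1]
    print(f"cp660 ~ {slope:.1f} * R_ch1 + {intercept:.2f}, r = {r:.2f}")
    ```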

  15. Tunable mechanical monolithic sensor with interferometric readout for low frequency seismic noise measurement

    NASA Astrophysics Data System (ADS)

    Acernese, F.; De Rosa, R.; Giordano, G.; Romano, R.; Barone, F.

    2008-03-01

    This paper describes a mechanical monolithic sensor for geophysical applications developed at the University of Salerno. The instrument is basically a monolithic tunable folded pendulum, shaped with precision machining and electric discharge machining, that can be used both as a seismometer and, in a force-feedback configuration, as an accelerometer. The monolithic mechanical design and the introduction of laser interferometric techniques for the readout make it a very compact instrument, very sensitive in the low-frequency seismic noise band and with very good immunity to environmental noise. Many changes have been made since the last version (2007), mainly aimed at improving the mechanics and the optical readout of the instrument. In particular, we have developed and tested a prototype with elliptical hinges and mechanical tuning of the resonance frequency, together with a laser optical lever and a new laser interferometer readout system. The theoretical sensitivity curves for both the laser optical lever and the laser interferometric readout, evaluated on the basis of suitable theoretical models, show very good agreement with the experimental measurements. A very interesting scientific result, for example, is that the measured natural resonance frequency of the instrument is 70 mHz with Q = 140 in air without thermal stabilization, demonstrating the feasibility of a monolithic folded-pendulum sensor with a natural resonance frequency of the order of millihertz with a more refined mechanical tuning. Results on the readout system, based on a polarimetric homodyne Michelson interferometer, are also discussed.

  16. Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision

    NASA Astrophysics Data System (ADS)

    Rojer, Alan S.; Schwartz, Eric L.

    1991-02-01

    Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems, as opposed to the general nature of biological vision architectures. In this paper we review current knowledge of primate spatial vision design parameters and present recent experimental and modeling work from our laboratory, which demonstrates that a numerical conformal mapping, a refinement of our previous complex logarithmic model, provides the best current summary of this feature of the primate visual system. In particular, we review experimental and modeling studies which indicate that: (1) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping whose simplest analytic approximation is the complex logarithm function; and (2) the columnar sub-structure of primate visual cortex can be well summarized by a model based on band-pass filtered white noise. We also refer to ongoing work in our laboratory which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the "proto-column" algorithm. This work provides a reference point for current engineering approaches to novel architectures for
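
    As a rough illustration of the mapping named in the abstract, the sketch below evaluates the complex-logarithm approximation to the retino-cortical map, w = log(z + a); the foveal parameter a and the sample retinal positions are assumptions chosen for demonstration, not values from the paper.

        import numpy as np

        def complex_log_map(z, a=0.5):
            # Simplest analytic approximation to the primate retino-cortical map:
            # retinal position z (complex, in degrees of visual angle) maps to
            # cortical position w = log(z + a); the parameter a removes the
            # singularity at the fovea (0.5 deg is an assumed value).
            return np.log(z + a)

        ecc = np.array([0.5, 2.0, 8.0])                  # eccentricities (deg)
        ang = np.deg2rad(np.array([-45.0, 0.0, 45.0]))   # polar angles
        z = ecc[:, None] * np.exp(1j * ang[None, :])     # grid of retinal positions
        print(np.round(complex_log_map(z), 3))           # corresponding cortical coordinates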

  17. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at an 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid-state charge-coupled-device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
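
    The quoted pixel count can be checked with a short calculation; the sketch below assumes the standard 9 x 9 inch aerial film frame, which the abstract does not state explicitly.

        # Worked check of the quoted 432-million-pixel figure (9 x 9 inch frame assumed).
        frame_mm = 9 * 25.4                      # 228.6 mm on a side
        spot_um = 11.0                           # scanning spot size
        pixels_per_side = frame_mm * 1000.0 / spot_um
        print(f"{pixels_per_side**2 / 1e6:.0f} million pixels")     # ~432 million

        # 39 lp/mm corresponds to a resolvable element of 1/(2*39) mm ~ 12.8 um,
        # so an 11 um scanning spot slightly oversamples the film resolution.
        print(f"resolvable element: {1e3 / (2 * 39):.1f} um")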

  18. Synthetic environments

    NASA Astrophysics Data System (ADS)

    Lukes, George E.; Cain, Joel M.

    1996-02-01

    The Advanced Distributed Simulation (ADS) Synthetic Environments Program seeks to create robust virtual worlds from operational terrain and environmental data sources of sufficient fidelity and currency to interact with the real world. While some applications can be met by direct exploitation of standard digital terrain data, more demanding applications -- particularly those supporting operations 'close to the ground' -- are well served by emerging capabilities for 'value-adding' by the user working with controlled imagery. For users to rigorously refine and exploit controlled imagery within functionally different workstations, they must have a shared framework that allows interoperability within and between these environments in terms of passing image and object coordinates and other information using a variety of validated sensor models. The Synthetic Environments Program is now being expanded to address rapid construction of virtual worlds, with research initiatives in digital mapping, softcopy workstations, and cartographic image understanding. The Synthetic Environments Program is also participating in a joint initiative for a sensor model applications programmer's interface (API) to ensure that a common controlled-imagery exploitation framework is available to all researchers, developers, and users. This presentation provides an introduction to ADS and the associated requirements for synthetic environments to support synthetic theaters of war. It provides a technical rationale for exploring applications of image understanding technology to automated cartography in support of ADS and related programs benefiting from automated analysis of mapping, earth resources, and reconnaissance imagery. And it provides an overview and status of the joint initiative for a sensor model API.

  19. Discovery Channel Telescope active optics system early integration and test

    NASA Astrophysics Data System (ADS)

    Venetiou, Alexander J.; Bida, Thomas A.

    2012-09-01

    The Discovery Channel Telescope (DCT) is a 4.3-meter telescope with a thin meniscus primary mirror (M1) and a honeycomb secondary mirror (M2). The optical design is an f/6.1 Ritchey-Chrétien (RC) with an unvignetted 0.5° field of view (FoV) at the Cassegrain focus. We describe the design, implementation, and performance of the DCT active optics system (AOS). The DCT AOS maintains collimation and controls the figure of the mirror to provide seeing-limited images across the focal plane. To minimize observing overhead, rapid settling times are achieved using a combination of feed-forward and low-bandwidth feedback control based on a wavefront sensing system. In 2011, we mounted a Shack-Hartmann wavefront sensor at the prime focus of M1 on the Prime Focus Test Assembly (PFTA) to test the AOS with the wavefront sensor and the feedback loop. The incoming wavefront is decomposed using Zernike polynomials, and the mirror figure is corrected with a set of bending modes. Components of the system that we tested and tuned included the Zernike-to-bending-mode transformations, and we also began determining the open-loop feed-forward coefficients. In early 2012, the PFTA was replaced by M2, and the wavefront sensor was moved to its normal location on the Cassegrain instrument assembly. We present early open-loop wavefront test results with the full optical system and instrument cube, along with refinements to the overall control loop operating at the RC Cassegrain focus.
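
    The abstract does not give the form of the Zernike-to-bending-mode transformation, so the following sketch simply illustrates one common approach: a least-squares solve through an influence matrix whose columns hold the Zernike response to a unit command of each bending mode (all numbers are hypothetical).

        import numpy as np

        # Hypothetical influence matrix A: column j holds the Zernike coefficients
        # produced by a unit command of bending mode j (illustrative values only).
        A = np.array([[0.80, 0.10, 0.00],
                      [0.10, 0.70, 0.20],
                      [0.00, 0.20, 0.90],
                      [0.05, 0.00, 0.10]])

        # Zernike coefficients estimated from the Shack-Hartmann wavefront (illustrative)
        z_measured = np.array([0.12, -0.05, 0.08, 0.01])

        # Least-squares bending-mode commands; apply with opposite sign to flatten the wavefront
        cmd, *_ = np.linalg.lstsq(A, z_measured, rcond=None)
        print(np.round(-cmd, 4))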

  20. Use of satellite ocean color observations to refine understanding of global geochemical cycles

    NASA Technical Reports Server (NTRS)

    Walsh, J. J.; Dieterle, D. A.

    1985-01-01

    In October 1978, the first satellite-borne color sensor, the Coastal Zone Color Scanner (CZCS), was launched aboard Nimbus-7 with four visible and two infrared bands, permitting a sensitivity about 60 times that of the Landsat-1 multispectral scanner. The CZCS radiance data can be utilized to estimate ocean chlorophyll concentrations by detecting shifts in sea color, particularly in oceanic waters. These data can be used in studies of overfishing and in investigations of the consequences of man's accelerated extraction of nitrogen from the atmosphere and addition of carbon to the atmosphere. The satellite database is considered along with a simulation analysis and ship-based ground-truth chlorophyll measurements in the ocean.

  1. BALANCE: Towards a Usable Pervasive Wellness Application with Accurate Activity Inference

    PubMed Central

    Denning, Tamara; Andrew, Adrienne; Chaudhri, Rohit; Hartung, Carl; Lester, Jonathan; Borriello, Gaetano; Duncan, Glen

    2010-01-01

    Technology offers the potential to objectively monitor people’s eating and activity behaviors and encourage healthier lifestyles. BALANCE is a mobile phone-based system for long term wellness management. The BALANCE system automatically detects the user’s caloric expenditure via sensor data from a Mobile Sensing Platform unit worn on the hip. Users manually enter information on foods eaten via an interface on an N95 mobile phone. Initial validation experiments measuring oxygen consumption during treadmill walking and jogging show that the system’s estimate of caloric output is within 87% of the actual value. Future work will refine and continue to evaluate the system’s efficacy and develop more robust data input and activity inference methods. PMID:20445819

  2. Millimeter wave scattering characteristics and radar cross section measurements of common roadway objects

    NASA Astrophysics Data System (ADS)

    Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack

    1995-12-01

    Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a very critical millimeter-wave radar sensing issue for automotive radar, namely the scattering characteristics of common roadway objects such as vehicles, road signs, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers within a single target can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle can be established for common roadway objects. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, total RCS and RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g., dynamic range) within which any radar sensor hardware must be designed.

  3. Simulations of viscous and compressible gas-gas flows using high-order finite difference schemes

    NASA Astrophysics Data System (ADS)

    Capuano, M.; Bogey, C.; Spelt, P. D. M.

    2018-05-01

    A computational method for the simulation of viscous and compressible gas-gas flows is presented. It consists of solving the Navier-Stokes equations together with a convection equation governing the motion of the interface between two gases, using high-order finite-difference schemes. A discontinuity-capturing methodology based on sensors and a spatial filter enables capturing shock waves and deformable interfaces. One-dimensional test cases are performed as validation and to justify choices in the numerical method. The results compare well with analytical solutions. Shock waves and interfaces are accurately propagated and remain sharp. Subsequently, two-dimensional flows are considered, including viscosity and thermal conductivity. For the Richtmyer-Meshkov instability generated on an air-SF6 interface, the influence of the mesh refinement on the instability shape is studied, and the temporal variations of the instability amplitude are compared with experimental data. Finally, for a plane shock wave propagating in air and impacting a cylindrical bubble filled with helium or R22, numerical Schlieren pictures obtained using different grid refinements are found to compare well with experimental shadow-photographs. Mass conservation is verified from the temporal variations of the mass of the bubble. The mean velocities of the pressure waves and bubble interface are similar to those obtained experimentally.

  4. Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors

    PubMed Central

    Boukhayma, Assim; Peizerat, Arnaud; Enz, Christian

    2016-01-01

    This paper presents an overview of the read noise in CMOS image sensors (CISs) based on four-transistor (4T) pixels, column-level amplification, and correlated multiple sampling. Starting from the input-referred noise analytical formula, process-level optimizations, device choices, and circuit techniques at the pixel and column level of the readout chain are derived and discussed. The noise reduction techniques that can be implemented at the column and pixel level are verified by transient noise simulations, measurements, and results from recently published low-noise CISs. We show how a recently reported process refinement, leading to the reduction of the sense-node capacitance, can be combined with an optimal in-pixel source follower design to reach a sub-0.3 e− rms read noise at room temperature. This paper also discusses the impact of technology scaling on the CIS read noise. It shows how designers can take advantage of scaling and how the metal-oxide-semiconductor (MOS) transistor gate leakage tunneling current appears as a challenging limitation. For this purpose, both simulation results of the gate leakage current and 1/f noise data reported from different foundries and technology nodes are used.

  5. Crop Field Reflectance Measurements

    NASA Astrophysics Data System (ADS)

    Weber, Christian; Schinca, Daniel C.; Tocho, Jorge O.; Videla, Fabian

    2008-04-01

    We present in this paper the results of reflectance measurements performed with a three-band passive radiometer with independent channels for solar irradiance reference. The comparative operation of the traditional method, which alternately uses measurements of the field (downward-looking) and of a white reference panel (downward-looking), and the new approach, which involves duplicated spectral channels, each with its own diffuser pointing upward toward the zenith (upward-looking), is analyzed. The results indicate that the latter method is more suitable for use with passive sensors under rapidly changing atmospheric conditions (such as clouds, dust, mist, smog, and other scatterers), since a more reliable synchronous record of reference and incident light is achieved. In addition, having separate channels for the reference and the signal allows better balancing of the amplifier gains for each spectral channel. We show the results obtained in the determination of the Normalized Difference Vegetation Index (NDVI) from 2006 and 2007 field experiments on weed detection and assessment of fertilizer levels in wheat, aimed at refining sensor-based fertilizer nitrogen rate recommendations. The variation of the radiometric normalization measurements taken at noon (nadir solar position) over the whole crop cycle for two seasons (winter and spring) is also shown.
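
    Since the abstract reports NDVI values derived from per-channel signal/reference ratios, a minimal sketch of that computation is given below; the readings are invented for illustration and the ratio-based reflectance is an assumption about how the duplicated channels would be combined.

        import numpy as np

        def reflectance(signal, reference):
            # Band reflectance from simultaneously recorded downward-looking (signal)
            # and upward-looking (reference) channels.
            return signal / reference

        def ndvi(red_refl, nir_refl):
            # Normalized Difference Vegetation Index.
            return (nir_refl - red_refl) / (nir_refl + red_refl)

        # Hypothetical radiometer readings (arbitrary units) for the red and NIR bands
        red = reflectance(np.array([0.31, 0.28]), np.array([1.00, 0.95]))
        nir = reflectance(np.array([0.62, 0.70]), np.array([1.00, 0.95]))
        print(np.round(ndvi(red, nir), 3))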

  6. Air-Induced Drag Reduction at High Reynolds Numbers: Velocity and Void Fraction Profiles

    NASA Astrophysics Data System (ADS)

    Elbing, Brian; Mäkiharju, Simo; Wiggins, Andrew; Dowling, David; Perlin, Marc; Ceccio, Steven

    2010-11-01

    The injection of air into a turbulent boundary layer forming over a flat plate can reduce the skin friction. With sufficient volumetric fluxes an air layer can separate the solid surface from the flowing liquid, which can produce drag reduction in excess of 80%. Several large scale experiments have been conducted at the US Navy's Large Cavitation Channel on a 12.9 m long flat plate model investigating bubble drag reduction (BDR), air layer drag reduction (ALDR) and the transition between BDR and ALDR. The most recent experiment acquired phase velocities and void fraction profiles at three downstream locations (3.6, 5.9 and 10.6 m downstream from the model leading edge) for a single flow speed (˜6.4 m/s). The profiles were acquired with a combination of electrode point probes, time-of-flight sensors, Pitot tubes and an LDV system. Additional diagnostics included skin-friction sensors and flow-field image visualization. During this experiment the inlet flow was perturbed with vortex generators immediately upstream of the injection location to assess the robustness of the air layer. From these, and prior measurements, computational models can be refined to help assess the viability of ALDR for full-scale ship applications.

  7. Numerical Modeling of the Transient Chilldown Process of a Cryogenic Propellant Transfer Line

    NASA Technical Reports Server (NTRS)

    Hartwig, Jason; Vera, Jerry

    2015-01-01

    Before cryogenic fuel depots can be fully realized, efficient methods with which to chill down the spacecraft transfer line and receiver tank are required. This paper presents numerical modeling of the chilldown of a liquid hydrogen tank-to-tank propellant transfer line using the Generalized Fluid System Simulation Program (GFSSP). To compare with data from recently concluded turbulent LH2 chilldown experiments, seven different cases were run across a range of inlet liquid temperatures and mass flow rates. Both trickle and pulse chilldown methods were simulated. The GFSSP model qualitatively matches external skin-mounted temperature readings, but large differences are shown between measured and predicted internal stream temperatures. Discrepancies are attributed to the simplified model correlation used to compute two-phase flow boiling heat transfer. Flow visualization from testing shows that the initial bottoming out of skin-mounted sensors corresponds to annular flow, but that considerable time is required for the stream sensor to achieve steady state as the system moves through annular, churn, and bubbly flow. The GFSSP model adequately tracks trends in the data, but further work is needed to refine the two-phase flow modeling to better match the observed test data.

  8. Epidermal devices for noninvasive, precise, and continuous mapping of macrovascular and microvascular blood flow

    PubMed Central

    Webb, R. Chad; Ma, Yinji; Krishnan, Siddharth; Li, Yuhang; Yoon, Stephen; Guo, Xiaogang; Feng, Xue; Shi, Yan; Seidel, Miles; Cho, Nam Heon; Kurniawan, Jonas; Ahad, James; Sheth, Niral; Kim, Joseph; Taylor VI, James G.; Darlington, Tom; Chang, Ken; Huang, Weizhong; Ayers, Joshua; Gruebele, Alexander; Pielak, Rafal M.; Slepian, Marvin J.; Huang, Yonggang; Gorbach, Alexander M.; Rogers, John A.

    2015-01-01

    Continuous monitoring of variations in blood flow is vital in assessing the status of microvascular and macrovascular beds for a wide range of clinical and research scenarios. Although a variety of techniques exist, most require complete immobilization of the subject, thereby limiting their utility to hospital or clinical settings. Those that can be rendered in wearable formats suffer from limited accuracy, motion artifacts, and other shortcomings that follow from an inability to achieve intimate, noninvasive mechanical linkage of sensors with the surface of the skin. We introduce an ultrathin, soft, skin-conforming sensor technology that offers advanced capabilities in continuous and precise blood flow mapping. Systematic work establishes a set of experimental procedures and theoretical models for quantitative measurements and guidelines in design and operation. Experimental studies on human subjects, including validation with measurements performed using state-of-the-art clinical techniques, demonstrate sensitive and accurate assessment of both macrovascular and microvascular flow under a range of physiological conditions. Refined operational modes eliminate long-term drifts and reduce power consumption, thereby providing steps toward the use of this technology for continuous monitoring during daily activities. PMID:26601309

  9. Adaptation of reference volumes for correlation-based digital holographic particle tracking

    NASA Astrophysics Data System (ADS)

    Hesseling, Christina; Peinke, Joachim; Gülker, Gerd

    2018-04-01

    Numerically reconstructed reference volumes tailored to particle images are used for particle position detection by means of three-dimensional correlation. After a first tracking of these positions, the experimentally recorded particle images are retrieved as a posteriori knowledge about the particle images in the system. This knowledge is used for a further refinement of the detected positions. A transparent description of the individual algorithm steps, including results obtained with experimental data, completes the paper. The work employs extraordinarily small particles, smaller than the pixel pitch of the camera sensor. It is the first approach known to the authors that combines numerical knowledge about particle images with particle images retrieved from the experimental system in an iterative particle tracking approach for digital holographic particle tracking velocimetry.

  10. Spectral reflectance and emissivity features of broad leaf plants: Prospects for remote sensing in the thermal infrared (8.0-14.0 μm)

    USGS Publications Warehouse

    Ribeiro da Luz, Beatriz; Crowley, James K.

    2007-01-01

    In contrast to visible and short-wave infrared data, thermal infrared spectra of broad leaf plants show considerable spectral diversity, suggesting that such data eventually could be utilized to map vegetation composition. However, remotely measuring the subtle emissivity features of leaves still presents major challenges. To be successful, sensors operating in the 8–14 μm atmospheric window must have high signal-to-noise and a small enough instantaneous field of view to allow measurements of only a few leaf surfaces. Methods for atmospheric compensation, temperature–emissivity separation, and spectral feature analysis also will need to be refined to allow the recognition, and perhaps, exploitation of leaf thermal infrared spectral properties.

  11. Collaborative Localization and Location Verification in WSNs

    PubMed Central

    Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang

    2015-01-01

    Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948
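
    The paper's virtual force model is not specified in the abstract; the sketch below shows one plausible reading of incremental refinement by virtual forces, in which each anchor pushes or pulls the position estimate until it is consistent with the measured ranges (anchor positions, ranges, and gains are hypothetical).

        import numpy as np

        def virtual_force_localize(anchors, ranges, x0, steps=200, gain=0.1):
            # Incrementally refine a node position estimate: each anchor exerts a
            # "virtual force" along the anchor-to-node direction, proportional to the
            # mismatch between the estimated and measured distances.
            x = np.array(x0, dtype=float)
            for _ in range(steps):
                force = np.zeros(2)
                for a, r in zip(anchors, ranges):
                    d = x - a
                    dist = np.linalg.norm(d) + 1e-9
                    force += (r - dist) * d / dist    # push outward if too close, pull inward if too far
                x += gain * force
            return x

        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # hypothetical anchor positions
        true_pos = np.array([4.0, 3.0])
        ranges = np.linalg.norm(anchors - true_pos, axis=1)          # noise-free ranges for illustration
        print(np.round(virtual_force_localize(anchors, ranges, x0=[5.0, 5.0]), 2))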

  12. Characterizing the reliability of a bioMEMS-based cantilever sensor

    NASA Astrophysics Data System (ADS)

    Bhalerao, Kaustubh D.

    2004-12-01

    The cantilever-based BioMEMS sensor represents one instance among many competing ideas for biosensor technology based on Micro-Electro-Mechanical Systems. The advancement of BioMEMS from laboratory-scale experiments to applications in the field will require standardization of their components and manufacturing procedures as well as frameworks to evaluate their performance. Reliability, the likelihood with which a system performs its intended task, is a compact mathematical description of that performance. The mathematical and statistical foundation of systems reliability has been applied to the cantilever-based BioMEMS sensor. The sensor is designed to detect one aspect of human ovarian cancer, namely the over-expression of the folate receptor surface protein (FR-alpha). Although the chosen application is clinically motivated, the objective of this study was to demonstrate the underlying systems-based methodology used to design, develop, and evaluate the sensor. The framework can be readily extended to other BioMEMS-based devices for disease detection and will have an impact on the rapidly growing $30 bn industry. The Unified Modeling Language (UML) is a systems-based framework for the design and development of object-oriented information systems with potential application to systems designed to interact with biological environments. The UML has been used to abstract and describe the application of the biosensor, to identify its key components, and to identify the technology needed to link them together in a coherent manner. The use of the framework is also demonstrated in the computation of system reliability from first principles as a function of the structure and materials of the biosensor. The outcomes of applying the systems-based framework are the following: (1) characterizing the cantilever-based MEMS device for disease (cell) detection; (2) development of a novel chemical interface between the analyte and the sensor that provides a degree of selectivity towards the disease; (3) demonstrating the performance and measuring the reliability of the biosensor prototype; and (4) identification of opportunities for technological development to further refine the proposed biosensor. Application of the methodology to design, develop, and evaluate the reliability of BioMEMS devices will be beneficial in streamlining the growth of the BioMEMS industry, while providing a decision-support tool for comparing and adopting suitable technologies from available competing options.

  13. Application of activity sensors for estimating behavioral patterns

    USGS Publications Warehouse

    Roberts, Caleb P.; Cain, James W.; Cox, Robert D.

    2016-01-01

    The increasing use of Global Positioning System (GPS) collars in habitat selection studies provides large numbers of precise location data points with reduced field effort. However, inclusion of activity sensors in many GPS collars also grants the potential to remotely estimate behavioral state. Thus, only using GPS collars to collect location data belies their full capabilities. Coupling behavioral state with location data would allow researchers and managers to refine habitat selection models by using diel behavioral state changes to partition fine-scale temporal shifts in habitat selection. We tested the capability of relatively unsophisticated GPS-collar activity sensors to estimate behavior throughout diel periods using free-ranging female elk (Cervus canadensis) in the Jemez Mountains of north-central New Mexico, USA, 2013–2014. Collars recorded cumulative number of movements (hits) per 15-min recording period immediately preceding GPS fixes at 0000, 0600, 1200, and 1800 hr. We measured diel behavioral patterns of focal elk, categorizing active (i.e., foraging, traveling, vigilant, grooming) and inactive (i.e., resting) states. Active behaviors (foraging, traveling) produced more average hits (0.87 ± 0.69 hits/min, 4.0 ± 2.2 hits/min, respectively; 95% CI) and inactive (resting) behavior fewer hits (−1.1 ± 0.61 95% CI). We differentiated active and inactive behavioral states with a bootstrapped threshold of 5.9 ± 3.9 hits/15-min recording period. Mean cumulative activity-sensor hits corresponded with observed diel behavioral patterns: hits increased during crepuscular (0600, 1800 hr) observations when elk were most active (0000–0600 hr: d = 0.19; 1200–1800 hr: d = 0.64) and decreased during midday and night (0000 hr, 1200 hr) when elk were least active (1800–0000 hr: d = −0.39; 0600–1200 hr: d = −0.43). Even using relatively unsophisticated GPS-collar activity sensors, managers can remotely estimate behavioral states, approximate diel behavioral patterns, and potentially complement location data in developing habitat selection models.
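
    A minimal sketch of how the reported threshold could be applied in practice is shown below; the hit counts are invented, and only the 5.9 hits per 15-minute recording period comes from the abstract.

        import numpy as np

        THRESHOLD_HITS = 5.9   # bootstrapped threshold reported per 15-min recording period

        def classify_activity(hits_per_period):
            # Label each GPS fix as active or inactive from the cumulative
            # activity-sensor hits in the 15 min preceding the fix.
            hits = np.asarray(hits_per_period, dtype=float)
            return np.where(hits > THRESHOLD_HITS, "active", "inactive")

        # Hypothetical hits preceding fixes at 0000, 0600, 1200 and 1800 hr
        hits = [1, 42, 3, 58]
        print(list(zip(["0000", "0600", "1200", "1800"], classify_activity(hits))))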

  14. An overview of the U.S. Army Research Laboratory's Sensor Information Testbed for Collaborative Research Environment (SITCORE) and Automated Online Data Repository (AODR) capabilities

    NASA Astrophysics Data System (ADS)

    Ward, Dennis W.; Bennett, Kelly W.

    2017-05-01

    The Sensor Information Testbed COllaborative Research Environment (SITCORE) and the Automated Online Data Repository (AODR) are significant enablers of the U.S. Army Research Laboratory (ARL)'s Open Campus Initiative and together create a highly collaborative research laboratory and testbed environment focused on sensor data and information fusion. SITCORE creates a virtual research and development environment allowing collaboration from other locations, including DoD, industry, academia, and coalition facilities. SITCORE combined with AODR provides end-to-end algorithm development, experimentation, demonstration, and validation. The AODR enterprise allows ARL, as well as other government organizations, industry, and academia, to store and disseminate multiple-intelligence (Multi-INT) datasets collected at field exercises and demonstrations, and to facilitate research and development (R&D) and the advancement of analytical tools and algorithms supporting the Intelligence, Surveillance, and Reconnaissance (ISR) community. The AODR provides a potential central repository for standards-compliant datasets to serve as the "go-to" location for lessons learned and reference products. Many of the AODR datasets have associated ground truth and other metadata, which provides a rich and robust data suite for researchers to develop, test, and refine their algorithms. Researchers download the test data to their own environments using a sophisticated web interface. The AODR allows researchers to request copies of stored datasets and the government to process the requests and approvals in an automated fashion. Access to the AODR requires two-factor authentication in the form of a Common Access Card (CAC) or External Certificate Authority (ECA)

  15. Human-machine analytics for closed-loop sense-making in time-dominant cyber defense problems

    NASA Astrophysics Data System (ADS)

    Henry, Matthew H.

    2017-05-01

    Many defense problems are time-dominant: attacks progress at speeds that outpace human-centric systems designed for monitoring and response. Despite this shortcoming, these well-honed and ostensibly reliable systems pervade most domains, including cyberspace. The argument that often prevails when considering the automation of defense is that while technological systems are suitable for simple, well-defined tasks, only humans possess sufficiently nuanced understanding of problems to act appropriately under complicated circumstances. While this perspective is founded in verifiable truths, it does not account for a middle ground in which human-managed technological capabilities extend well into the territory of complex reasoning, thereby automating more nuanced sense-making and dramatically increasing the speed at which it can be applied. Snort and platforms like it enable humans to build, refine, and deploy sense-making tools for network defense. Shortcomings of these platforms include a reliance on rule-based logic, which confounds analyst knowledge of how bad actors behave with the means by which bad behaviors can be detected, and a lack of feedback-informed automation of sensor deployment. We propose an approach in which human-specified computational models hypothesize bad behaviors independent of indicators and then allocate sensors to estimate and forecast the state of an intrusion. State estimates and forecasts inform the proactive deployment of additional sensors and detection logic, thereby closing the sense-making loop. All the while, humans are on the loop, rather than in it, permitting nuanced management of fast-acting automated measurement, detection, and inference engines. This paper motivates and conceptualizes analytics to facilitate this human-machine partnership.

  16. The Landscape Evolution Observatory: a large-scale controllable infrastructure to study coupled Earth-surface processes

    USGS Publications Warehouse

    Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferré, T.P.A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin

    2015-01-01

    Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m2 surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed are possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.

  17. Calibration of GOES-derived solar radiation data using a distributed network of surface measurements in Florida, USA

    USGS Publications Warehouse

    Sumner, David M.; Pathak, Chandra S.; Mecikalski, John R.; Paech, Simon J.; Wu, Qinglong; Sangoyomi, Taiye; Babcock, Roger W.; Walton, Raymond

    2008-01-01

    Solar radiation data are critically important for the estimation of evapotranspiration. Analysis of visible-channel data derived from Geostationary Operational Environmental Satellites (GOES) using radiative transfer modeling has been used to produce spatially and temporally distributed datasets of solar radiation. An extensive network of (pyranometer) surface measurements of solar radiation in the State of Florida has allowed refined calibration of a GOES-derived daily integrated radiation data product. This refinement of radiation data allowed for corrections of satellite sensor drift, satellite generational change, and consideration of the highly variable cloudy conditions that are typical of Florida. To aid in calibration of a GOES-derived radiation product, solar radiation data for the period 1995–2004 from 58 field stations located throughout the State were compiled. The GOES radiation product was calibrated by way of a three-step process: 1) comparison with ground-based pyranometer measurements on clear reference days, 2) correction for a bias related to cloud cover, and 3) derivation of month-by-month bias correction factors. Pre-calibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m⁻² day⁻¹ (13 percent). Calibration reduced errors to 1.7 MJ m⁻² day⁻¹ (10 percent) and also removed time- and cloudiness-related biases. The final dataset has been used to produce Statewide evapotranspiration estimates.
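
    The third calibration step, month-by-month bias correction, can be pictured with the short sketch below; the daily values are invented, and the ratio-based form of the correction is an assumption since the abstract does not state how the factors were derived.

        import numpy as np

        def monthly_bias_factors(months, satellite, pyranometer):
            # One plausible form of a monthly multiplicative bias correction:
            # the ratio of mean ground-measured to mean satellite-derived radiation.
            months = np.asarray(months)
            return {int(m): pyranometer[months == m].mean() / satellite[months == m].mean()
                    for m in np.unique(months)}

        # Hypothetical daily integrated radiation (MJ per m^2 per day) for two months
        months = np.array([6, 6, 6, 7, 7, 7])
        goes_rad = np.array([22.0, 18.5, 20.1, 21.3, 19.8, 23.0])
        station_rad = np.array([20.5, 17.9, 19.0, 20.6, 19.1, 22.1])

        factors = monthly_bias_factors(months, goes_rad, station_rad)
        corrected = goes_rad * np.array([factors[int(m)] for m in months])
        print(factors)
        print(np.round(corrected, 1))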

  18. Implementation of Fiber Optic Sensing System on Sandwich Composite Cylinder Buckling Test

    NASA Technical Reports Server (NTRS)

    Pena, Francisco; Richards, W. Lance; Parker, Allen R.; Piazza, Anthony; Schultz, Marc R.; Rudd, Michelle T.; Gardner, Nathaniel W.; Hilburger, Mark W.

    2018-01-01

    The National Aeronautics and Space Administration (NASA) Engineering and Safety Center Shell Buckling Knockdown Factor Project is a multicenter project tasked with developing new analysis-based shell buckling design guidelines and design factors (i.e., knockdown factors) through high-fidelity buckling simulations and advanced test technologies. To validate these new buckling knockdown factors for future launch vehicles, the Shell Buckling Knockdown Factor Project is carrying out structural testing on a series of large-scale metallic and composite cylindrical shells at the NASA Marshall Space Flight Center (Marshall Space Flight Center, Alabama). A fiber optic sensor system was used to measure strain on a large-scale sandwich composite cylinder that was tested under multiple axial compressive loads up to more than 850,000 lb and equivalent bending loads of over 22 million in-lb. During the structural testing of the composite cylinder, strain data were collected from optical cables containing distributed fiber Bragg gratings using a custom fiber optic sensor system interrogator developed at the NASA Armstrong Flight Research Center. A total of 16 fiber-optic strands, each containing nearly 1,000 strain-measuring fiber Bragg gratings, were installed on the inner and outer cylinder surfaces to monitor the test article's global structural response through high-density real-time and post-test strain measurements. The distributed sensing system provided evidence of local epoxy failure at the attachment-ring-to-barrel interface that would not have been detected with conventional instrumentation. Results from the fiber optic sensor system were used to further refine and validate structural models for buckling of large-scale composite structures. This paper discusses the techniques employed for real-time structural monitoring of the composite cylinder during structural load introduction and for distributed bending-strain measurements over a large section of the cylinder, utilizing the unique sensing capabilities of fiber optic sensors.

  19. Search and detection modeling of military imaging systems

    NASA Astrophysics Data System (ADS)

    Maurer, Tana; Wilson, David L.; Driggers, Ronald G.

    2013-04-01

    For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying the science behind the human processes of searching and detecting, and using that knowledge to develop and refine its models for military imaging systems. Modeling how human observers perform military tasks while using imaging systems in the field and linking that model with the physics of the systems has resulted in the comprehensive sensor models we have today. These models are used by the government, military, industry, and academia for sensor development, sensor system acquisition, military tactics development, and war-gaming. From the original hypothesis put forth by John Johnson in 1958, to modeling time-limited search, to modeling the impact of motion on target detection, to modeling target acquisition performance in different spectral bands, the concept of search has a wide-ranging history. Our purpose is to present a snapshot of that history; as such, it will begin with a description of the search-modeling task, followed by a summary of highlights from the early years, and concluding with a discussion of search and detection modeling today and the changing battlefield. Some of the topics to be discussed will be classic search, clutter, computational vision models and the ACQUIRE model with its variants. We do not claim to present a complete history here, but rather a look at some of the work that has been done, and this is meant to be an introduction to an extensive amount of work on a complex topic. That said, it is hoped that this overview of the history of search and detection modeling of military imaging systems pursued by NVESD directly, or in association with other government agencies or contractors, will provide both the novice and experienced search modeler with a useful historical summary and an introduction to current issues and future challenges.

  20. Remote biomonitoring of temperatures in mothers and newborns: design, development and testing of a wearable sensor device in a tertiary-care hospital in southern India.

    PubMed

    Mony, Prem K; Thankachan, Prashanth; Bhat, Swarnarekha; Rao, Suman; Washington, Maryann; Antony, Sumi; Thomas, Annamma; Nagarajarao, Sheela C; Rao, Hiteshwar; Amrutur, Bharadwaj

    2018-04-01

    Newer technologies such as wearables, sensors, mobile telephony and computing offer opportunities to monitor vital physiological parameters and tackle healthcare problems, thereby improving access and quality of care. We describe the design, development and testing of a wearable sensor device for remote biomonitoring of body temperatures in mothers and newborns in southern India. Based on client needs and technological requirements, a wearable sensor device was designed and developed using principles of 'social innovation' design. The device underwent multiple iterations in product design and engineering based on user feedback, and then following preclinical testing, a techno-feasibility study and clinical trial were undertaken in a tertiary-care teaching hospital in Bangalore, India. Clinical trial phases I and IIa for evaluation of safety and efficacy were undertaken in the following sequence: 7 healthy adult volunteers; 18 healthy mothers; 3 healthy babies; 10 stable babies in the neonatal care intensive unit and 1 baby with morbidities. Time-stamped skin temperature readings obtained at 5 min intervals over a 1-hour period from the device secured on upper arms of mothers and abdomen of neonates were compared against readings from thermometers used routinely in clinical practice. Devices were comfortably secured on to adults and neonates, and data were efficiently transmitted via the gateway device for secure storage and retrieval for analysis. The mean skin temperatures in mothers were lower than the axillary temperatures by 2°C; and in newborns, there was a precision of -0.5°C relative to axillary measurements. While occasional minimal adverse events were noted in healthy volunteers, no adverse events were noted in mothers or neonates. This proof-of-concept study shows that this device is promising in terms of feasibility, safety and accuracy (with appropriate calibration) with potential for further refinements in device accuracy and pursuit of further phases of clinical research for improved maternal and neonatal health.
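
    Taking the reported mean differences at face value, a trivial offset calibration could look like the sketch below; the site-specific offsets are read from the abstract, and a deployed device would of course use proper per-device calibration rather than these fixed values.

        def calibrated_axillary_estimate(skin_temp_c, site):
            # Apply a fixed site-specific offset to a skin-temperature reading to
            # approximate the axillary temperature (offsets taken from the reported
            # mean differences; illustrative only).
            offsets = {"maternal_upper_arm": 2.0,   # skin ran ~2 C below axillary
                       "neonatal_abdomen": 0.5}     # precision of -0.5 C vs axillary
            return skin_temp_c + offsets[site]

        print(calibrated_axillary_estimate(34.9, "maternal_upper_arm"))   # ~36.9 C
        print(calibrated_axillary_estimate(36.3, "neonatal_abdomen"))     # ~36.8 C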

  1. Refining the effects of aircraft motion on an airborne beam-type gravimeter

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Weil, C.

    2016-12-01

    A challenge of modern airborne gravimetry is identifying an aircraft/autopilot combination that will allow for high quality data collection. The natural motion of the aircraft, coupled with the autopilot's reaction to changing winds and turbulence, can result in a successful data collection effort when the motion is benign or in total failure when the motion is at its worst. Aircraft motion plays an important role in airborne gravimetry for several reasons; most importantly for this study, it affects the behavior of the gravimeter's gyro-stabilized platform. The gyro-stabilized platform keeps the sensor aligned with a time-averaged local vertical to produce a scalar measurement along the plumb direction. However, turbulence can cause the sensor to align temporarily with aircraft horizontal accelerations, which can both decrease the measured gravity (because the sensor is no longer aligned with the gravity field) and increase the measured gravity (because horizontal accelerations couple into the measurement). NOAA's Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project has collected airborne gravity data using a Micro-g LaCoste TAGS (Turnkey Airborne Gravity System) beam-type meter on a variety of mostly turboprop aircraft, with a wide range of outcomes, some different from what one would predict. Some aircraft that seem the smoothest to the operator in flight do not produce as high quality a measurement as one would expect, while some aircraft with significant motion produce very high quality data. Due to the extensive nature of the GRAV-D survey, significant quantities of data exist for our various successful aircraft, along with a smaller number of flights that were not successful for a variety of reasons. In this study, we use spectral analysis to evaluate the aircraft motion for our various successful aircraft and compare it with the problem flights, in an effort to identify the signature motions indicative of aircraft that could be successful or unsuccessful for airborne gravity collection with a beam-type sensor.

  2. REFINE (REducing Falls in In-patieNt Elderly) using bed and bedside chair pressure sensors linked to radio-pagers in acute hospital care: a randomised controlled trial

    PubMed Central

    Sahota, Opinder; Drummond, Avril; Kendrick, Denise; Grainge, Matthew J.; Vass, Catherine; Sach, Tracey; Gladman, John; Avis, Mark

    2014-01-01

    Background: falls in hospitals are a major problem and contribute to substantial healthcare burden. Advances in sensor technology afford innovative approaches to reducing falls in acute hospital care. However, whether these are clinically effective and cost effective in the UK setting has not been evaluated. Methods: pragmatic, parallel-arm, individual randomised controlled trial of bed and bedside chair pressure sensors using radio-pagers (intervention group) compared with standard care (control group) in elderly patients admitted to acute, general medical wards in a large UK teaching hospital. The primary outcome measure was the number of in-patient bedside falls per 1,000 bed days. Results: 1,839 participants were randomised (918 to the intervention group and 921 to the control group). There were 85 bedside falls (65 fallers) in the intervention group, a falls rate of 8.71 per 1,000 bed days, compared with 83 bedside falls (64 fallers) in the control group, a falls rate of 9.84 per 1,000 bed days (adjusted incidence rate ratio, 0.90; 95% confidence interval [CI], 0.66–1.22; P = 0.51). There was no significant difference between the two groups with respect to time to first bedside fall (adjusted hazard ratio (HR), 0.95; 95% CI: 0.67–1.34; P = 0.12). The mean cost per patient in the intervention group was £7199 compared with £6400 in the control group; the mean difference in QALYs per patient was 0.0001 (95% CI: −0.0006 to 0.0004, P = 0.67). Conclusions: bed and bedside chair pressure sensors as a single intervention strategy do not reduce in-patient bedside falls or time to first bedside fall, and are not cost-effective in elderly patients in acute, general medical wards in the UK. Trial registration: isrctn.org identifier: ISRCTN44972300. PMID:24141253

  3. Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high-speed turbulence, combustion, and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high-resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinement and small time steps. An integrated approach for the control of numerical dissipation in high order schemes was initiated through incremental studies. Here we further refine the analysis and improve the understanding of the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are applied after each time step to suppress spurious oscillations near discontinuities and minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would be activated at each stage of the Runge-Kutta time stepping. We could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as the sensor. Here, the method is briefly described with selected numerical experiments.
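
    The abstract does not detail the wavelet sensor, so the sketch below is only an illustrative stand-in: it flags grid points where first-level Haar-type detail coefficients of the solution are large, which is where a nonlinear filter would then be applied (the threshold and the test function are assumptions).

        import numpy as np

        def wavelet_sensor(u, threshold=0.1):
            # Flag grid points near discontinuities using normalized first-difference
            # (Haar-type) detail coefficients of the solution u; only flagged points
            # would receive the nonlinear filter.
            detail = np.abs(np.diff(u)) / (np.max(np.abs(u)) + 1e-12)
            flags = np.zeros(u.shape, dtype=bool)
            flags[:-1] |= detail > threshold
            flags[1:] |= detail > threshold
            return flags

        # Smooth wave plus a step discontinuity at x = 0.6
        x = np.linspace(0.0, 1.0, 101)
        u = np.sin(2 * np.pi * x) + np.where(x > 0.6, 1.0, 0.0)
        print("filter applied near x =", np.round(x[wavelet_sensor(u)], 2))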

  4. Overview of NASA Langley's Piezoelectric Ceramic Packaging Technology and Applications

    NASA Technical Reports Server (NTRS)

    Bryant, Robert G.

    2007-01-01

    Over the past decade, NASA Langley Research Center (LaRC) has developed several actuator packaging concepts designed to enhance the performance of commercial electroactive ceramics. NASA LaRC focused on properly designed actuator and sensor packaging for the following reasons: increased durability, protection of the working material from the environment, proper mechanical and electrical contact, "ready to use" mechanisms that are scalable, and a fabrication methodology applicable to any active material of the same physical class. It is more cost effective to enhance or tailor the performance of existing systems, through innovative packaging, than to develop, test, and manufacture new materials. This approach led to the development of several solid-state actuators, including THUNDER, the Macrofiber Composite (MFC), and the Radial Field Diaphragm (RFD). All these actuators are fabricated using standard materials and processes derived from earlier concepts. NASA's fabrication and packaging technology has yielded piezoelectric actuators and sensors that are easy to implement, reliable, consistent in properties, and of lower cost to manufacture in quantity than their predecessors (as evidenced by their continued commercial availability). These piezoelectric actuators have helped foster new research and development in areas involving computational modeling, actuator-specific refinements, and engineering system redesign, which has led to new applications for piezo-based devices that replace traditional systems currently in use.

  5. 20 Meter Solar Sail Analysis and Correlation

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.; Lively, P. S.; Banik, J.; Murphy, D. M.; Trautt, T. A.

    2005-01-01

    This paper describes finite element analyses and correlation studies to predict deformations and vibration modes/frequencies of a 20-meter solar sail system developed by ATK Space Systems. Under the programmatic leadership of NASA Marshall Space Flight Center's In-Space Propulsion activity, the 20-meter solar sail program objectives were to verify the design, to assess structural responses of the sail system, to implement lessons learned from a previous 10-meter quadrant system analysis and test program, and to mature solar sail technology to a technology readiness level (TRL) of 5. For this 20 meter sail system, static and ground vibration tests were conducted in NASA Glenn Research Center's 100 meter diameter vacuum chamber at Plum Brook station. Prior to testing, a preliminary analysis was performed to evaluate test conditions and to determine sensor and actuator locations. After testing was completed, an analysis of each test configuration was performed. Post-test model refinements included updated properties to account for the mass of sensors, wiring, and other components used for testing. This paper describes the development of finite element models (FEM) for sail membranes and masts in each of four quadrants at both the component and system levels, as well as an optimization procedure for the static test/analyses correlation.

  6. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks.

    PubMed

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-05-12

    Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed "µ-partially balanced incomplete block design (µ-PBIBD)", which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach uses a simple construction that provides strong key connectivity but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, a construction of µ-PBIBD extended from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while providing better key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while achieving a trade-off between network resilience and network connectivity.

  7. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    PubMed

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domains, considering the spatio-spectral-temporal correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed, considering both the intra-channel and inter-channel correlation, to overcome the spatial resolution degradation caused by the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove noise in the temporal domain. A motion adaptive detection value then controls the ratio of the spatial filter to the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. The experiments confirmed that the proposed framework outperformed the other techniques in terms of objective criteria and subjective visual perception on CFA sequences.

  8. A Key Pre-Distribution Scheme Based on µ-PBIBD for Enhancing Resilience in Wireless Sensor Networks

    PubMed Central

    Yuan, Qi; Ma, Chunguang; Yu, Haitao; Bian, Xuefen

    2018-01-01

    Many key pre-distribution (KPD) schemes based on combinatorial design have been proposed for secure communication in wireless sensor networks (WSNs). Due to the complexity of constructing the combinatorial design, it is infeasible to generate key rings using the corresponding combinatorial design in large-scale deployments of WSNs. In this paper, we present a definition of a new combinatorial design, termed “µ-partially balanced incomplete block design (µ-PBIBD)”, which is a refinement of the partially balanced incomplete block design (PBIBD), and then describe a 2-D construction of µ-PBIBD which is mapped to KPD in WSNs. Our approach has a simple construction and provides strong key connectivity, but poor network resilience. To improve the network resilience of KPD based on 2-D µ-PBIBD, we propose a KPD scheme based on 3-D Ex-µ-PBIBD, which extends the µ-PBIBD construction from 2-D space to 3-D space. The Ex-µ-PBIBD KPD scheme improves network scalability and resilience while retaining good key connectivity. Theoretical analysis and comparison with related schemes show that the key pre-distribution scheme based on Ex-µ-PBIBD provides high network resilience and better key scalability, while achieving a trade-off between network resilience and network connectivity. PMID:29757244

  9. Gas sensing behaviour of Cr{sub 2}O{sub 3} and W{sup 6+}: Cr{sub 2}O{sub 3} nanoparticles towards acetone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohli, Nipin, E-mail: nipinkohli82@yahoo.com; Hastir, Anita; Singh, Ravi Chand

    2016-05-23

    This paper reports the acetone gas sensing properties of Cr{sub 2}O{sub 3} and 2% W{sup 6+}-doped Cr{sub 2}O{sub 3} nanoparticles. A simple, cost-effective hydrolysis-assisted co-precipitation method was adopted for synthesis. Synthesized samples were characterized by X-ray diffraction (XRD) and field emission scanning electron microscopy (FESEM) techniques. XRD revealed that the synthesized nanoparticles have a corundum structure. The lattice parameters were calculated by Rietveld refinement, and strain and crystallite size were calculated using Williamson-Hall plots. For acetone gas sensing measurements, the nanoparticles were applied as thick films onto alumina substrates and tested at different operating temperatures. The results showed that the optimum operating temperature of both gas sensors is 250°C. At the optimum operating temperature, the responses of the Cr{sub 2}O{sub 3} and 2% W{sup 6+}-doped Cr{sub 2}O{sub 3} gas sensors towards 100 ppm acetone were found to be 25.5 and 35.6, respectively. The investigations revealed that the addition of W{sup 6+} as a dopant appreciably enhanced the sensing response of Cr{sub 2}O{sub 3} nanoparticles.
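
    For reference, a standard form of the Williamson-Hall relation used to separate crystallite-size and strain broadening is (the exact constants and peak set used in the paper are not given in the abstract):

        \beta_{hkl}\cos\theta = \frac{K\lambda}{D} + 4\,\varepsilon\,\sin\theta

    where β_hkl is the peak width (FWHM, in radians), θ the Bragg angle, K ≈ 0.9 the shape factor, λ the X-ray wavelength, D the crystallite size and ε the microstrain; plotting β_hkl cosθ against 4 sinθ yields D from the intercept and ε from the slope.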

  10. High-bandwidth detection of short DNA in nanopipettes.

    PubMed

    Fraccari, Raquel L; Carminati, Marco; Piantanida, Giacomo; Leontidou, Tina; Ferrari, Giorgio; Albrecht, Tim

    2016-12-12

    Glass or quartz nanopipettes have found increasing use as tools for studying the biophysical properties of DNA and proteins, and as sensor devices. The ease of fabrication, favourable wetting properties and low capacitance are some of the inherent advantages, for example compared to more conventional, silicon-based nanopore chips. Recently, we have demonstrated high-bandwidth detection of double-stranded (ds) DNA with microsecond time resolution in nanopipettes, using custom-designed electronics. The electronics design has now been refined to include more sophisticated control features, such as integrated bias reversal. Here, we exploit these capabilities and probe the translocation of short dsDNA in the 100 bp range in different electrolytes. Single-stranded (ss) DNA of similar length is in use as capture probes, so label-free detection of the ds counterparts could be of relevance in disease diagnostics.

  11. Mars Tumbleweed: FY2003 Conceptual Design Assessment

    NASA Technical Reports Server (NTRS)

    Antol, Jeffrey; Calhoun, Philip C.; Flick, John J.; Hajos, Gregory A.; Keys, Jennifer P.; Stillwagen, Frederic H.; Krizan, Shawn A.; Strickland, Christopher V.; Owens, Rachel; Wisniewski, Michael

    2005-01-01

    NASA LaRC is studying concepts for a new type of Mars exploration vehicle that would be propelled by the wind. Known as the Mars Tumbleweed, it would derive mobility through use of the Martian surface winds. Tumbleweeds could conceivably travel greater distances, cover larger areas of the surface, and provide access to areas inaccessible by conventional vehicles. They would be lightweight and relatively inexpensive, allowing a multiple vehicle network to be deployed on a single mission. Tumbleweeds would be equipped with sensors for conducting science and serve as scouts searching broad areas to identify specific locations for follow-on investigation by other explorers. An extensive assessment of LaRC Tumbleweed concepts was conducted in FY03, including refinement of science mission scenarios, definition of supporting subsystems (structures, power, communications), testing in wind tunnels, and development of a dynamic simulation capability.

  12. Advances in Surface Plasmon Resonance Imaging allowing for quantitative measurement of laterally heterogeneous samples

    NASA Astrophysics Data System (ADS)

    Raegen, Adam; Reiter, Kyle; Clarke, Anthony; Lipkowski, Jacek; Dutcher, John

    2012-02-01

    The Surface Plasmon Resonance (SPR) phenomenon is routinely exploited to qualitatively probe changes to materials on metallic surfaces for use in probes and sensors. Unfortunately, extracting truly quantitative information is usually limited to a select few cases -- uniform absorption/desorption of small biomolecules and films, in which a continuous ``slab'' model is a good approximation. We present advancements in the SPR technique that expand the number of cases for which the technique can provide meaningful results. Use of a custom, angle-scanning SPR imaging system, together with a refined data analysis method, allow for quantitative kinetic measurements of laterally heterogeneous systems. The degradation of cellulose microfibrils and bundles of microfibrils due to the action of cellulolytic enzymes will be presented as an excellent example of the capabilities of the SPR imaging system.

  13. Crew Office Evaluation of a Precision Lunar Landing System

    NASA Technical Reports Server (NTRS)

    Major, Laura M.; Duda, Kevin R.; Hirsh, Robert L.

    2011-01-01

    A representative Human System Interface for a precision lunar landing system, ALHAT, has been developed as a platform for prototype visualization and interaction concepts. This facilitates analysis of crew interaction with advanced sensors and AGNC systems. Human-in-the-loop evaluations with representatives from the Crew Office (i.e. astronauts) and Mission Operations Directorate (MOD) were performed to refine the crew role and information requirements during the final phases of landing. The results include a number of lessons learned from Shuttle that are applicable to the design of a human supervisory landing system and cockpit. Overall, the results provide a first order analysis of the tasks the crew will perform during lunar landing, an architecture for the Human System Interface based on these tasks, as well as details on the information needs to land safely.

  14. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-small refiner that acquires one or more of a small refiner's refineries? 80.1344 Section 80.1344... available to a non-small refiner that acquires one or more of a small refiner's refineries? (a) In the case of a refiner that is not an approved small refiner under § 80.1340 and that acquires a refinery from...

  15. 40 CFR 80.555 - What provisions are available to a large refiner that acquires a small refiner or one or more of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... large refiner that acquires a small refiner or one or more of its refineries? 80.555 Section 80.555... that acquires a small refiner or one or more of its refineries? (a) In the case of a refiner without approved small refiner status who acquires a refinery from a refiner with approved status as a motor...

  16. Space-borne polarimetric SAR sensors or the golden age of radar polarimetry

    NASA Astrophysics Data System (ADS)

    Pottier, E.

    2010-06-01

    SAR polarimetry represents an active area of research in active Earth remote sensing. This interest is clearly supported by the fact that a non-negligible number of polarimetric SAR spaceborne sensors have already been launched or will be launched in the near future. The ENVISAT satellite, developed by ESA, was launched in March 2002 and was the first spaceborne sensor offering an innovative dual-polarization Advanced Synthetic Aperture Radar (ASAR) system operating at C-band. The second polarimetric spaceborne sensor is ALOS, a Japanese Earth-observation satellite developed by JAXA and launched in January 2006. This mission includes an active L-band polarimetric radar sensor (PALSAR) whose high-resolution data may be used for environmental and hazard monitoring. The third polarimetric spaceborne sensor is TerraSAR-X, a German radar satellite developed by DLR, EADS-Astrium and Infoterra GmbH and launched in June 2007. This sensor carries a dual-polarimetric, high-frequency X-band SAR that can be operated in different modes and offers features that were not available from space before. Finally, RADARSAT-2, a polarimetric spaceborne sensor developed by CSA and MDA, was launched in December 2007. The Radarsat program was born out of the need for effective monitoring of Canada's icy waters, and some Radarsat-2 capabilities that benefit sea- and river-ice applications are the multi-polarization options that improve ice-edge detection, ice-type discrimination and structure information. The many advances in these polarimetric spaceborne platforms were developed to respond to specific needs for radar data in environmental monitoring applications around the world, such as sea- and river-ice monitoring, marine surveillance, disaster management, oil spill detection, snow monitoring, hydrology, mapping, geology, agriculture, soil characterisation, forestry applications (biomass, allometry, height) and urban mapping. In order to promote the exploitation of polarimetric spaceborne data, which is starting to proliferate with the launch of these polarimetric SAR sensors, the PolSARpro software, developed under contract to ESA as a toolbox for the scientific exploitation of polarimetric SAR and polarimetric-interferometric data and as a tool for high-level education in radar polarimetry, has been expanded and refined to include all elements necessary for the demonstration of a number of key applications. The PolSARpro software, which already supported an important range of airborne and spaceborne polarimetric data sources, now also supports the following data sources: ALOS-PALSAR (dual-pol fine mode and quad-pol mode), TerraSAR-X (dual-pol mode) and Radarsat-2 (dual-pol fine mode and quad-pol fine and standard modes), by offering a dedicated platform interface for Earth observation scientific investigators. A number of illustrations of key applications have been developed for the demonstration and promotion of the polarimetric spaceborne missions, consistent with the activities incorporated in the GMES Services Element (GSE). The aim of this communication is to present the current state of the art in SAR polarimetry, ranging from theory to applications, with special emphasis on the analysis of data provided by the new polarimetric spaceborne SAR sensors; samples of real polarimetric data will be presented for use in real-life examples of key applications.

  17. Enhanced magnetostrictive properties of nanocrystalline Dy3+ substituted Fe-rich Co0.8Fe2.2O4 for sensor applications

    NASA Astrophysics Data System (ADS)

    Kharat, Shahaji P.; Swadipta, Roy; Kambale, R. C.; Kolekar, Y. D.; Ramana, C. V.

    2017-10-01

    We report on the enhanced magnetostrictive properties of nanocrystalline dysprosium (Dy3+)-substituted iron-rich cobalt ferrites (Co0.8Fe(2.2-x)DyxO4, referred to as CFDO). The CFDO samples with variable Dy concentration (x = 0.000-0.075) were synthesized by the sol-gel auto-combustion method. The phase purity and crystal structure were confirmed from X-ray diffraction analyses coupled with Rietveld refinement. Surface morphology analysis using scanning electron microscopy imaging indicates agglomerated magnetic particles with a non-uniform particle size distribution, which is desirable for transferring strain. The magnetostriction coefficient (λ11) measurements indicate that the CFDO sample with Dy concentration x = 0.025 exhibits the highest strain sensitivity, (dλ/dH) ˜1.432 nm/A (for H ≤ 1000 Oe). On the other hand, the magnetostriction coefficient (λ12) measurements indicate that the sample with Dy concentration x = 0.075 exhibits the largest (dλ/dH) ˜0.615 nm/A (for H ≤ 1000 Oe). The maximum λ11 value of 166 ppm (at H = 3300 Oe) was observed for the compound with Dy concentration x = 0.050. Magnetization measurements indicate that the saturation magnetization and coercivity of the CFDO samples depend on the Dy3+ content; the highest squareness ratio of 0.424 was observed for x = 0.050. The interplay between strain sensitivity (dλ/dH) and instantaneous susceptibility (dM/dH), as derived from the magnetostriction and magnetization results, demonstrates that these CFDO materials may be useful for developing torque/stress sensors and as a constituent magnetostrictive phase for magnetoelectric composite materials, and are thus suitable for magnetoelectric sensor applications.
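
    The strain sensitivity quoted above is simply the slope of the magnetostriction curve; a minimal numerical sketch (the curve shape and numbers are invented, not the CFDO data) is:

        import numpy as np

        H = np.linspace(0.0, 3300.0, 200)          # applied field (Oe)
        lam = 166.0 * np.tanh(H / 1200.0)          # illustrative saturating lambda(H) in ppm

        dlam_dH = np.gradient(lam, H)              # strain sensitivity d(lambda)/dH in ppm/Oe
        low_field = H <= 1000.0
        # approx. conversion: 1 ppm/Oe ~ 12.57 nm/A (1 Oe = 79.58 A/m, 1 ppm = 1e-6 strain)
        print("peak d(lambda)/dH below 1000 Oe: %.3f ppm/Oe (%.2f nm/A)"
              % (dlam_dH[low_field].max(), dlam_dH[low_field].max() * 12.57))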

  18. Small rocket tornado probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colgate, S.A.

    1982-01-01

    A small (less than 1 lb) paper rocket tornado probe was developed and deployed in an attempt to measure the pressure, temperature, ionization, and electric field variations along a trajectory penetrating a tornado funnel. The requirements on weight and materials were set by federal regulations, and a one-meter resolution at a penetration velocity of close to Mach 1 was desired. These requirements were achieved by telemetering a strain gage transducer for pressure, a micro-sized thermistor, and electric field and ionization sensors via pulse-time telemetry to a receiver on board an aircraft that digitizes the signal and presents it to a Z80 microcomputer for recording on mini-floppy disk. The recording rate was 2 ms for 8 channels of information, which also include telemetry rf field strength, magnetic field for orientation of the rocket, and a zero reference voltage for the sensor op amps, in addition to the previously mentioned items. The absolute pressure was recorded. Tactically, over 120 h were flown in a Cessna 210 in April and May 1981, and one tornado was encountered. Four rockets were fired at this tornado, missed, and there were many equipment problems. The equipment needs to be hardened and engineered to a significant degree, but it is believed that the feasibility of the probe, tactics, and launch platform for future tornado work has been proven. The logistics of thunderstorm chasing from a remote base in New Mexico is a major difficulty, and reliability of the equipment another. Over 50 dummy rockets have been fired to prove trajectories, stability, and photographic capability. Over 25 electronically equipped rockets have been fired to prove sensor transmission, breakaway connections, etc. The pressure recovery factor was calibrated in the Air Force Academy blow-down tunnel. There is a need for more refined engineering and more logistic support.

  19. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change, while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that takes into account the specific maturities of each system's (sensor and algorithm) technology to provide for a program that contains continuous improvement while retaining its manageability.

  20. Abdominal Twin Pressure Sensors for the assessment of abdominal injuries in Q dummies: in-dummy evaluation and performance in accident reconstructions.

    PubMed

    Beillas, Philippe; Alonzo, François; Chevalier, Marie-Christine; Lesire, Philippe; Leopold, Franck; Trosseille, Xavier; Johannsen, Heiko

    2012-10-01

    The Abdominal Pressure Twin Sensors (APTS) for Q3 and Q6 dummies are composed of soft polyurethane bladders filled with fluid and equipped with pressure sensors. Implanted within the abdominal insert of child dummies, they can be used to detect abdominal loading due to the belt during frontal collisions. In the present study - which is part of the EC funded CASPER project - two versions of APTS (V1 and V2) were evaluated in abdominal belt compression tests, torso flexion test (V1 only) and two series of sled tests with degraded restraint conditions. The results suggest that the two versions have similar responses, and that the pressure sensitivity to torso flexion is limited. The APTS ability to detect abdominal loading in sled tests was also confirmed, with peak pressures typically below 1 bar when the belt loaded only the pelvis and the thorax (appropriate restraint) and values above that level when the abdomen was loaded directly (inappropriate restraint). Then, accident reconstructions performed as part of CASPER and previous EC funded projects were reanalyzed. Selected data from 19 dummies (12 Q6 and 7 Q3) were used to plot injury risk curves. Maximum pressure, maximum pressure rate and their product were all found to be injury predictors. Maximum pressure levels for a 50% risk of AIS3+ were consistent with the levels separating appropriate and inappropriate restraint in the sled tests (e.g. 50% risk of AIS3+ at 1.09 bar for pressure filtered CFC180). Further work is needed to refine the scaling techniques between ages and confirm the risk curves.

  1. Rationale and design of a home-based trial using wearable sensors to detect asymptomatic atrial fibrillation in a targeted population: The mHealth Screening To Prevent Strokes (mSToPS) trial.

    PubMed

    Steinhubl, Steven R; Mehta, Rajesh R; Ebner, Gail S; Ballesteros, Marissa M; Waalen, Jill; Steinberg, Gregory; Van Crocker, Percy; Felicione, Elise; Carter, Chureen T; Edmonds, Shawn; Honcz, Joseph P; Miralles, Gines Diego; Talantov, Dimitri; Sarich, Troy C; Topol, Eric J

    2016-05-01

    Efficient methods for screening populations for undiagnosed atrial fibrillation (AF) are needed to reduce its associated mortality, morbidity, and costs. The use of digital technologies, including wearable sensors and large health record data sets allowing for targeted outreach toward individuals at increased risk for AF, might allow for unprecedented opportunities for effective, economical screening. The trial's primary objective is to determine, in a real-world setting, whether using wearable sensors in a risk-targeted screening population can diagnose asymptomatic AF more effectively than routine care. Additional key objectives include (1) exploring 2 rhythm-monitoring strategies-electrocardiogram-based and exploratory pulse wave-based-for detection of new AF, and (2) comparing long-term clinical and resource outcomes among groups. In all, 2,100 Aetna members will be randomized 1:1 to either immediate or delayed monitoring, in which a wearable patch will capture a single-lead electrocardiogram during the first and last 2 weeks of a 4-month period beginning immediately or 4 months after enrollment, respectively. An observational, risk factor-matched control group (n = 4,000) will be developed from members who did not receive an invitation to participate. The primary end point is the incidence of new AF in the immediate- vs delayed-monitoring arms at the end of the 4-month monitoring period. Additional efficacy and safety end points will be captured at 1 and 3 years. The results of this digital medicine trial might benefit a substantial proportion of the population by helping identify and refine screening methods for undiagnosed AF. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Unmanned aircraft system sense and avoid integrity and continuity

    NASA Astrophysics Data System (ADS)

    Jamoom, Michael B.

    This thesis describes new methods to guarantee safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is a derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of '"well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation where the estimation model is refined to address realistic intruder relative linear accelerations, which goes beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation is a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement to the wrong intruder, causing large errors in the estimated intruder trajectories. The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.
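
    As a much-simplified illustration of turning a hazard-state estimate and its error covariance into the two risks (this one-dimensional sketch is not the thesis's formulation; the thresholds, units and standard deviation are invented):

        from scipy.stats import norm

        d_wc, d_alert, sigma = 1.2, 1.5, 0.2   # well-clear threshold, alert threshold, estimate std

        def integrity_risk(d_hat):
            """P(true closest-approach distance < d_wc) when no alert is issued."""
            return norm.cdf(d_wc, loc=d_hat, scale=sigma) if d_hat >= d_alert else 0.0

        def continuity_risk(d_hat):
            """P(false alert): an alert is issued although the true distance >= d_wc."""
            return norm.sf(d_wc, loc=d_hat, scale=sigma) if d_hat < d_alert else 0.0

        print(integrity_risk(1.5))    # missed-detection risk right at the alert threshold
        print(continuity_risk(1.4))   # false-alert risk just inside the alert threshold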

  3. MACS-Mar: a real-time remote sensing system for maritime security applications

    NASA Astrophysics Data System (ADS)

    Brauchle, Jörg; Bayer, Steven; Hein, Daniel; Berger, Ralf; Pless, Sebastian

    2018-04-01

    The modular aerial camera system (MACS) is a development platform for optical remote sensing concepts, algorithms and special environments. For real-time services for maritime security (EMSec joint project), a new multi-sensor configuration, MACS-Mar, was realized. It consists of four co-aligned sensor heads in the visible RGB, near infrared (NIR, 700-950 nm), hyperspectral (HS, 450-900 nm) and thermal infrared (TIR, 7.5-14 µm) spectral ranges, a mid-cost navigation system, a processing unit and two data links. On-board image projection, cropping of redundant data and compression enable the instant generation of direct-georeferenced high-resolution image mosaics, automatic object detection, vectorization and annotation of floating objects on the water surface. The results were transmitted over a distance of up to 50 km in real time via narrow- and broadband data links and were visualized in a maritime situation awareness system. For the automatic onboard detection of floating objects, a segmentation and classification workflow based on RGB, IR and TIR information was developed and tested. The completeness of the object detection in the experiment was 95% and the correctness 53%. Mostly, bright backwash from ships led to an overestimation of the number of objects; further refinement using water homogeneity in the TIR, as implemented in the workflow, could not be carried out due to problems with the TIR sensor, otherwise distinctly better results could have been expected. The absolute positional accuracy of the projected real-time imagery was 2 m without postprocessing of images or navigation data; the relative measurement accuracy of distances is in the range of the image resolution, which is about 12 cm for RGB imagery in the EMSec experiment.
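
    The completeness and correctness figures follow the usual recall/precision definitions; for instance (the counts below are hypothetical, chosen only to reproduce numbers of the same order):

        def detection_scores(tp, fp, fn):
            """Completeness (recall) and correctness (precision) of an object detector."""
            return tp / (tp + fn), tp / (tp + fp)

        completeness, correctness = detection_scores(tp=19, fp=17, fn=1)
        print(f"completeness = {completeness:.0%}, correctness = {correctness:.0%}")   # 95%, 53%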

  4. 40 CFR 80.553 - Under what conditions may the small refiner gasoline sulfur standards be extended for a small...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? 80.553... small refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? (a) A refiner that has been approved by EPA for small refiner gasoline sulfur standards under § 80.240...

  5. 40 CFR 80.553 - Under what conditions may the small refiner gasoline sulfur standards be extended for a small...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? 80.553... small refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? (a) A refiner that has been approved by EPA for small refiner gasoline sulfur standards under § 80.240...

  6. Very High Resolution Bathymetric Mapping at the Ridge 2000 Integrated Study Sites: Acquisition and Processing Protocols Developed During Recent Alvin Field Programs to the East Pacific Rise and Juan de Fuca Ridge

    NASA Astrophysics Data System (ADS)

    Ferrini, V.; Fornari, D. J.; Shank, T.; Tivey, M.; Kelley, D. S.; Glickson, D.; Carbotte, S. M.; Howland, J.; Whitcomb, L. L.; Yoerger, D.

    2004-12-01

    Recent field programs at the East Pacific Rise and Juan de Fuca Ridge have resulted in the refinement of data processing protocols that enable the rapid creation of high-resolution (meter-scale) bathymetric maps from pencil-beam altimetric sonar data that are routinely collected during DSV Alvin dives. With the development of the appropriate processing tools, the Imagenex sonar, a permanent sensor on Alvin, can be used by a broad range of scientists permitting the analysis of various data sets within the context of high-quality bathymetric maps. The data processing protocol integrates depth data recorded with Alvin's Paroscientific pressure sensor with bathymetric soundings collected with an Imagenex 675 kHz articulating (scanning) sonar system, and high-resolution navigational data acquired with DVLNAV, which includes bottom lock Doppler sonar and long baseline (LBL) navigation. Together these data allow us, for the first time, to visualize portions of Ridge 2000 Integrated Study Sites (ISS) at 1-m vertical and horizontal resolution. These maps resolve morphological details of structures within the summit trough at scales that are relevant to biological communities (e.g. hydrothermal vents, lava pillars, trough walls), thus providing the important geologic context necessary to better understand spatial patterns associated with integrated biological-hydrothermal-geological processes. The Imagenex sonar is also a permanent sensor on the Jason2 ROV, which is also equipped with an SM2000 (200 kHz) near-bottom multibeam sonar. In the future, it is envisioned that near-bottom multibeam sonars will be standard sensors on all National Deep Submergence Facility (NDSF) vehicles. Streamlining data processing protocols makes these datasets more accessible to NDSF users and ensures broad compatibility between data formats among NDSF vehicle systems and allied vehicles (e.g. ABE). Establishing data processing protocols and software suites, routinely calibrating sensors (e.g. Paroscientific depth sensors), and ensuring good navigational benchmarks between various cruises to the Ridge 2000 ISS improves the capability and quality of rapidly produced high-resolution bathymetric maps enabling users to optimize their diving programs. This is especially important within the context of augmenting high-resolution bathymetric data collection in ISS areas (several cruises to the same area over multiple years) and investigating possible changes in seafloor topography, hydrothermal vent features and/or biological communities that are related to tectonic or volcanic events.

  7. Advanced Systems for Monitoring Underwater Sounds

    NASA Technical Reports Server (NTRS)

    Lane, Michael; Van Meter, Steven; Gilmore, Richard Grant; Sommer, Keith

    2007-01-01

    The term "Passive Acoustic Monitoring System" (PAMS) describes a developmental sensing-and-data-acquisition system for recording underwater sounds. The sounds (more precisely, digitized and preprocessed versions from acoustic transducers) are subsequently analyzed by a combination of data processing and interpretation to identify and/or, in some cases, to locate the sources of those sounds. PAMS was originally designed to locate the sources such as fish of species that one knows or seeks to identify. The PAMS unit could also be used to locate other sources, for example, marine life, human divers, and/or vessels. The underlying principles of passive acoustic sensing and analyzing acoustic-signal data in conjunction with temperature and salinity data are not new and not unique to PAMS. Part of the uniqueness of the PAMS design is that it is the first deep-sea instrumentation design to provide a capability for studying soniferous marine animals (especially fish) over the wide depth range described below. The uniqueness of PAMS also lies partly in a synergistic combination of advanced sensing, packaging, and data-processing design features with features adapted from proven marine instrumentation systems. This combination affords a versatility that enables adaptation to a variety of undersea missions using a variety of sensors. The interpretation of acoustic data can include visual inspection of power-spectrum plots for identification of spectral signatures of known biological species or artificial sources. Alternatively or in addition, data analysis could include determination of relative times of arrival of signals at different acoustic sensors arrayed at known locations. From these times of arrival, locations of acoustic sources (and errors in those locations) can be estimated. Estimates of relative locations of sources and sensors can be refined through analysis of the attenuation of sound in the intervening water in combination with water-temperature and salinity data acquired by instrumentation systems other than PAMS. A PAMS is packaged as a battery-powered unit, mated with external sensors, that can operate in the ocean at any depth from 2 m to 1 km. A PAMS includes a pressure housing, a deep-sea battery, a hydrophone (which is one of the mating external sensors), and an external monitor and keyboard box. In addition to acoustic transducers, external sensors can include temperature probes and, potentially, underwater cameras. The pressure housing contains a computer that includes a hard drive, DC-to- DC power converters, a post-amplifier board, a sound card, and a universal serial bus (USB) 4-port hub.

  8. Refinements to HIRS CO2 Slicing Algorithm with Results Compared to CALIOP and MODIS

    NASA Astrophysics Data System (ADS)

    Frey, R.; Menzel, P.

    2012-12-01

    This poster reports on the refinement of a cloud top property algorithm using High-resolution Infrared Radiation Sounder (HIRS) measurements. The HIRS sensor has been flown on fifteen satellites from TIROS-N through NOAA-19 and MetOp-A forming a continuous 30 year cloud data record. Cloud Top Pressure and effective emissivity (cloud fraction multiplied by cloud emissivity) are derived using the 15 μm spectral bands in the CO2 absorption band, implementing the CO2 slicing technique which is strong for high semi-transparent clouds but weak for low clouds with little thermal contrast from clear skies. We report on algorithm adjustments suggested from MODIS cloud record validations and the inclusion of collocated AVHRR cloud fraction data from the PATMOS-x algorithm. Reprocessing results for 2008 are shown using NOAA-18 HIRS and collocated CALIOP data for validation, as well as comparisons to MODIS monthly mean values. Adjustments to the cloud algorithm include (a) using CO2 slicing for all ice and mixed phase clouds and infrared window determinations for all water clouds, (b) determining the cloud top pressure from the most opaque CO2 spectral band pair seeing the cloud, (c) reducing the cloud detection threshold for the CO2 slicing algorithm to include conditions of smaller radiance differences that are often due to thin ice clouds, and (d) identifying stratospheric clouds when an opaque band is warmer than a less opaque band.
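
    For reference, the CO2 slicing technique compares cloud radiance signals in pairs of 15 µm bands; a standard form of the relation (the abstract does not give the exact formulation or band pairs used here) is

        \frac{R(\nu_1) - R_{clr}(\nu_1)}{R(\nu_2) - R_{clr}(\nu_2)}
          = \frac{\varepsilon_1 \int_{P_s}^{P_c} \tau(\nu_1, p)\,\frac{\partial B[\nu_1, T(p)]}{\partial p}\,dp}
                 {\varepsilon_2 \int_{P_s}^{P_c} \tau(\nu_2, p)\,\frac{\partial B[\nu_2, T(p)]}{\partial p}\,dp}

    where R is the measured radiance, R_clr the clear-sky radiance, τ the transmittance, B the Planck radiance, P_s the surface pressure and P_c the cloud-top pressure; for adjacent CO2 bands the cloud emissivity ratio ε1/ε2 is taken as unity, and P_c is found as the pressure at which the computed ratio matches the measured one.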

  9. Optical model for the water characterization of the highly turbid water of the Winam Gulf (Victoria Lake)

    NASA Astrophysics Data System (ADS)

    Santini, F.; Cavalli, R. M.; Palombo, A.; Pignatti, S.

    2007-10-01

    The study, proposed within the framework of the cooperation with Kenyan authorities, has been carried out on the Kenyan part of Lake Victoria. This lake is one of the largest freshwater bodies of the world where, over the last few years, environmental challenges and human impact have perturbed the ecological balance. Pollution and sediment loads from the tributary rivers and anthropogenic sources have caused a worrying increase in the turbidity level of the lake water. The Secchi transparency index has declined from 5 meters in the 1930s to less than one meter in the 1990s. With the aim of providing an inexpensive way to gather information linked to water clarity and quality, a method for remotely sensed data interpretation, devoted to producing chl (chlorophyll), CDOM (coloured dissolved organic matter) and TSS (total suspended solids) maps, has been assessed. For this purpose, a bio-optical model based on radiative transfer theory in water bodies has been refined. The method has been applied to an image acquired in January 2004 by the ENVISAT/MERIS sensor just a week after an in situ campaign took place. During the in situ campaign, a data set for model refinement and product validation was collected. These data comprise surface radiometric quantities and samples for laboratory analyses. The comparison between the obtained maps and the data provided by the laboratory analysis showed a good correspondence, demonstrating the potential of remote observation in supporting the management of water resources.

  10. CE-SAM: a conversational interface for ISR mission support

    NASA Astrophysics Data System (ADS)

    Pizzocaro, Diego; Parizas, Christos; Preece, Alun; Braines, Dave; Mott, David; Bakdash, Jonathan Z.

    2013-05-01

    There is considerable interest in natural language conversational interfaces. These allow for complex user interactions with systems, such as fulfilling information requirements in dynamic environments, without requiring extensive training or a technical background (e.g. in formal query languages or schemas). To leverage the advantages of conversational interactions we propose CE-SAM (Controlled English Sensor Assignment to Missions), a system that guides users through refining and satisfying their information needs in the context of Intelligence, Surveillance, and Reconnaissance (ISR) operations. The rapidly-increasing availability of sensing assets and other information sources poses substantial challenges to effective ISR resource management. In a coalition context, the problem is even more complex, because assets may be "owned" by different partners. We show how CE-SAM allows a user to refine and relate their ISR information needs to pre-existing concepts in an ISR knowledge base, via conversational interaction implemented on a tablet device. The knowledge base is represented using Controlled English (CE) - a form of controlled natural language that is both human-readable and machine processable (i.e. can be used to implement automated reasoning). Users interact with the CE-SAM conversational interface using natural language, which the system converts to CE for feeding-back to the user for confirmation (e.g. to reduce misunderstanding). We show that this process not only allows users to access the assets that can support their mission needs, but also assists them in extending the CE knowledge base with new concepts.

  11. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
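
    As a rough sketch of the height-histogram step only (the bin size, margin, point cloud and function names are invented; the paper's Gibbs-Markov random field refinement and 3D boundary estimation are not shown):

        import numpy as np

        def estimate_ground_range(z, bin_size=0.1, margin=0.2):
            """Take the dominant low-height histogram bin as the ground level."""
            edges = np.arange(z.min(), z.max() + bin_size, bin_size)
            hist, edges = np.histogram(z, bins=edges)
            peak = np.argmax(hist)
            ground_z = 0.5 * (edges[peak] + edges[peak + 1])
            return ground_z - margin, ground_z + margin

        def segment_ground(points, z_range):
            """Label each 3D point as ground (True) or non-ground (False)."""
            return (points[:, 2] >= z_range[0]) & (points[:, 2] <= z_range[1])

        rng = np.random.default_rng(0)
        ground = np.column_stack([rng.uniform(0, 10, (500, 2)), rng.normal(0.0, 0.03, 500)])
        objects = np.column_stack([rng.uniform(2, 4, (100, 2)), rng.uniform(0.5, 2.0, 100)])
        points = np.vstack([ground, objects])
        labels = segment_ground(points, estimate_ground_range(points[:, 2]))
        print("ground points:", labels.sum(), "non-ground points:", (~labels).sum())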

  12. a Sensor Aided H.264/AVC Video Encoder for Aerial Video Sequences with in the Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way, the computational load, and hence the power consumption, is moved on ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to the short battery life. Images can be stored on board with either still-image or video data compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms, which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, the UAV's attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system and thereby maximize the encoder performance. Experiments are performed on both simulated and real-world video sequences.
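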

  13. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.
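
    The uniform 1-to-8 step that 2-refinement builds on can be pictured by splitting a hex into its eight parametric octants (this shows only the uniform subdivision; the paper's four transition templates and parallel marking scheme are not reproduced, and the corner ordering used here is just one common convention):

        import numpy as np

        def trilinear(c, u, v, w):
            """Trilinear map of a hex whose corners c[0..7] sit at
            (0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)."""
            return ((1-u)*(1-v)*(1-w)*c[0] + u*(1-v)*(1-w)*c[1] + u*v*(1-w)*c[2] + (1-u)*v*(1-w)*c[3]
                    + (1-u)*(1-v)*w*c[4] + u*(1-v)*w*c[5] + u*v*w*c[6] + (1-u)*v*w*c[7])

        def refine_hex(corners):
            """Subdivide one hexahedron into eight child hexes (one per octant)."""
            order = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
            children = []
            for oi in (0, 1):
                for oj in (0, 1):
                    for ok in (0, 1):
                        children.append(np.array([trilinear(corners, (oi+du)/2, (oj+dv)/2, (ok+dw)/2)
                                                  for du, dv, dw in order]))
            return children

        cube = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],[0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
        kids = refine_hex(cube)
        print(len(kids), "child hexes; first child spans", kids[0].min(axis=0), "to", kids[0].max(axis=0))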

  14. A template-based approach for parallel hexahedral two-refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.

  15. In-situ Chemical Exploration and Mapping using an Autonomous Underwater Vehicle

    NASA Astrophysics Data System (ADS)

    Camilli, R.; Bingham, B. S.; Jakuba, M.; Whelan, J.; Singh, H.; Whiticar, M.

    2004-12-01

    Recent advances in in-situ chemical sensing have emphasized several issues associated with making reliable chemical measurements in the ocean. Such measurements are often aliased temporally and or spatially, and may suffer from instrumentation artifacts, such as slow response time, limited dynamic range, hysteresis, and environmental sensitivities (eg., temperature and pressure). We focus on the in-situ measurement of light hydrocarbons. Specifically we examine data collected using a number of methods including: a vertical profiler, autonomous underwater vehicles (AUV) surveys, and adaptive spatio-temporal survey techniques. We present data collected using a commercial METS sensor on a vertical profiler to identify and map structures associated with ocean bottom methane sources in the Saanich inlet off Vancouver, Canada. This sensor was deployed in parallel with a submersible mass spectrometer and a shipboard equilibrator-gas chromatograph. Our results illustrate that spatial offsets as small as centimeters can produce significant differences in measured concentration. In addition, differences in response times between instruments can also alias the measurements. The results of this preliminary experiment underscore the challenges of quantifying ocean chemical processes with small-scale spatial variability and temporal variability that is often faster than the response times of many available instruments. We explore the capabilities and current limitations of autonomous underwater vehicles for extending the spatial coverage of new in-situ sensor technologies. We present data collected from deployments of Seabed, a passively stable, hover capable AUV, at large-scale gas blowout features located along the U.S. Atlantic margin. Although these deployments successfully revealed previously unobservable oceanographic processes, temporal aliasing caused by sensor response as well as tidal variability manifests itself, illustrating the possibilities for misinterpretation of localized periodic anomalies. Finally we present results of recent experimental chemical plume mapping surveys that were conducted off the coast of Massachusetts using adaptive behaviors that allow the AUV to optimize its mission plan to autonomously search for chemical anomalies. This adaptive operation is based on coupling the chemical sensor payload within a closed-loop architecture with the vehicle's navigation control system for real-time autonomous data assimilation and decision making processes. This allows the vehicle to autonomously refine the search strategy, thereby improving feature localization capabilities and enabling surveys at an appropriate temporal and spatial resolution.

  16. Comparison of lab, pilot, and industrial scale low consistency mechanical refining for improvements in enzymatic digestibility of pretreated hardwood.

    PubMed

    Jones, Brandon W; Venditti, Richard; Park, Sunkyu; Jameel, Hasan

    2014-09-01

    Mechanical refining has been shown to improve biomass enzymatic digestibility. In this study industrial high-yield sodium carbonate hardwood pulp was subjected to lab, pilot and industrial refining to determine if the mechanical refining improves the enzymatic hydrolysis sugar conversion efficiency differently at different refining scales. Lab, pilot and industrial refining increased the biomass digestibility for lignocellulosic biomass relative to the unrefined material. The sugar conversion was increased from 36% to 65% at 5 FPU/g of biomass with industrial refining at 67.0 kWh/t, which was more energy efficient than lab and pilot scale refining. There is a maximum in the sugar conversion with respect to the amount of refining energy. Water retention value is a good predictor of improvements in sugar conversion for a given fiber source and composition. Improvements in biomass digestibility with refining due to lab, pilot plant and industrial refining were similar with respect to water retention value. Published by Elsevier Ltd.

  17. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Provisions § 80.1340 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner status must be submitted to EPA by December 31, 2007. (b) For U.S. Postal delivery, applications... small refiner status application must contain the following information for the company seeking small...

  18. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N^2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
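
    A drastically simplified version of matching pre-aligned range images with a robust (partial) directed Hausdorff measure might look as follows (the window size, quantile and orthographic back-projection are illustrative choices, not the paper's exact formulation):

        import numpy as np

        def partial_hausdorff_range(probe, templ, mask, window=3, quantile=0.9):
            """For each valid probe pixel, find the nearest template 3D point inside a
            small window, then take a robust quantile of those per-pixel minima."""
            h, w = probe.shape
            mins = []
            for y, x in zip(*np.nonzero(mask)):
                y0, y1 = max(0, y - window), min(h, y + window + 1)
                x0, x1 = max(0, x - window), min(w, x + window + 1)
                ys, xs = np.nonzero(mask[y0:y1, x0:x1])
                pts = np.column_stack([xs + x0, ys + y0, templ[y0:y1, x0:x1][ys, xs]])
                d = np.linalg.norm(pts - np.array([x, y, probe[y, x]]), axis=1)
                mins.append(d.min())
            return np.quantile(mins, quantile)       # partial Hausdorff: tolerant of outliers

        rng = np.random.default_rng(1)
        surface = rng.random((40, 40)) * 5.0         # stand-in for an aligned face surface
        probe = surface + rng.normal(0, 0.05, surface.shape)
        templ = surface + rng.normal(0, 0.05, surface.shape)
        print(partial_hausdorff_range(probe, templ, np.ones(surface.shape, bool)))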

  19. RFID Technology for Continuous Monitoring of Physiological Signals in Small Animals.

    PubMed

    Volk, Tobias; Gorbey, Stefan; Bhattacharyya, Mayukh; Gruenwald, Waldemar; Lemmer, Björn; Reindl, Leonhard M; Stieglitz, Thomas; Jansen, Dirk

    2015-02-01

    Telemetry systems enable researchers to continuously monitor physiological signals in unrestrained, freely moving small rodents. Drawbacks of common systems are limited operation time, the need to house the animals separately, and the necessity of a stable communication link. Furthermore, the costs of the typically proprietary telemetry systems reduce their acceptance. The aim of this paper is to introduce a low-cost telemetry system based on common radio frequency identification technology, optimized for battery-independent operational time, good reusability, and flexibility. The presented implant is equipped with sensors to measure electrocardiogram, arterial blood pressure, and body temperature. The biological signals are transmitted as digital data streams. The device is capable of monitoring several freely moving animals housed in groups with a single reader station. The modular concept of the system significantly reduces the cost of monitoring multiple physiological functions and of refining procedures in preclinical research.

  20. Temperature - Emissivity Separation Assessment in a Sub-Urban Scenario

    NASA Astrophysics Data System (ADS)

    Moscadelli, M.; Diani, M.; Corsini, G.

    2017-10-01

    In this paper, a methodology that aims at evaluating the effectiveness of different TES strategies is presented. The methodology takes into account the specific materials of interest in the monitored scenario, the sensor characteristics, and errors in the atmospheric compensation step. The methodology is proposed in order to predict and analyse algorithm performance during the planning of a remote sensing mission aimed at discovering specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To address the TES problem, three state-of-the-art algorithms and a recently proposed one are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear Piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by the proposed methodology are compared and discussed.

  1. Evaluating a Control System Architecture Based on a Formally Derived AOCS Model

    NASA Astrophysics Data System (ADS)

    Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas

    2010-08-01

    Attitude & Orbit Control System (AOCS) refers to a wider class of control systems which are used to determine and control the attitude of the spacecraft while in orbit, based on the information obtained from various sensors. In this paper, we propose an approach to evaluate a typical (yet somewhat simplified) AOCS architecture using formal development - based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of its original source code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and code generation from the derived models.

  2. Sentinel-1 Precise Orbit Calibration and Validation

    NASA Astrophysics Data System (ADS)

    Monti Guarnieri, Andrea; Mancon, Simone; Tebaldini, Stefano

    2015-05-01

    In this paper, we propose a model-based procedure to calibrate and validate Sentinel-1 orbit products using the Multi-Squint (MS) phase. The technique allows calibration of an interferometric pair geometry by refining the slave orbit with reference to the orbit of a master image. Accordingly, we state the geometric model of the InSAR phase as a function of the positioning errors of targets and the slave track, and the MS phase model as the derivative of the InSAR phase geometric model with respect to the squint angle. In this paper we focus on the TOPSAR acquisition modes of Sentinel-1 (IW and EW), assuming at most a linear error in the known slave trajectory. In particular, we describe a dedicated methodology to prevent InSAR phase artifacts on data acquired in the TOPSAR acquisition mode. Experimental results obtained from interferometric pairs acquired by the Sentinel-1 sensor are presented.

  3. Laminated Object Manufacturing of 3D-Printed Laser-Induced Graphene Foams.

    PubMed

    Luong, Duy Xuan; Subramanian, Ajay K; Silva, Gladys A Lopez; Yoon, Jongwon; Cofer, Savannah; Yang, Kaichun; Owuor, Peter Samora; Wang, Tuo; Wang, Zhe; Lou, Jun; Ajayan, Pulickel M; Tour, James M

    2018-05-29

    Laser-induced graphene (LIG), a graphene structure synthesized by a one-step process through laser treatment of commercial polyimide (PI) film in an ambient atmosphere, has been shown to be a versatile material in applications ranging from energy storage to water treatment. However, the process as developed produces only a 2D product on the PI substrate. Here, a 3D LIG foam printing process is developed on the basis of laminated object manufacturing, a widely used additive-manufacturing technique. A subtractive laser-milling process to yield further refinements to the 3D structures is also developed and shown here. By combining both techniques, various 3D graphene objects are printed. The LIG foams show good electrical conductivity and mechanical strength, as well as viability in various energy storage and flexible electronic sensor applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. An Instrumented Glove to Assess Manual Dexterity in Simulation-Based Neurosurgical Education

    PubMed Central

    Lemos, Juan Diego; Hernandez, Alher Mauricio; Soto-Romero, Georges

    2017-01-01

    The traditional neurosurgical apprenticeship scheme includes the assessment of trainees' manual skills carried out by experienced surgeons. However, the introduction of surgical simulation technology presents a new paradigm where residents can refine surgical techniques on a simulator before putting them into practice in real patients. Unfortunately, in this new scheme, an experienced surgeon will not always be available to evaluate the trainee's performance. For this reason, it is necessary to develop automatic mechanisms to estimate metrics for assessing manual dexterity in a quantitative way. Authors have proposed some hardware-software approaches to evaluate manual dexterity on surgical simulators. This paper presents IGlove, a wearable device that uses inertial sensors embedded in an elastic glove to capture hand movements. Metrics to assess manual dexterity are estimated from the sensor signals using data processing and information analysis algorithms. It has been designed to be used with a neurosurgical simulator called Daubara NS Trainer, but can be easily adapted to other benchtop- and manikin-based medical simulators. The system was tested with a sample of 14 volunteers who performed a test that was designed to simultaneously evaluate their fine motor skills and the IGlove's functionalities. Metrics obtained by each of the participants are presented as results in this work; it is also shown how these metrics are used to automatically evaluate the level of manual dexterity of each volunteer. PMID:28468268

  5. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial-time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on a Minimum Spanning Tree (MST), a Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, after which linear programming is applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed, and its performance is validated through simulation experiments.
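    The candidate-generation step can be pictured with a small sketch: a Delaunay triangulation of the terminal positions yields one candidate relay per triangle (taken here as its centroid), after which an optimizer selects the relays actually deployed. Using centroids is an illustrative simplification of the paper's Delaunay-plus-non-uniform-partitioning scheme, and the linear-programming selection step is omitted.

```python
# Hedged sketch: candidate relay generation from a Delaunay triangulation of the
# terminal positions (one candidate per triangle, taken here as its centroid).
# The centroid choice is an illustrative simplification; relay selection by
# linear programming is omitted.
import numpy as np
from scipy.spatial import Delaunay

def candidate_relays(terminals):
    """terminals: (N, 2) array of terminal coordinates -> (M, 2) candidate relay points."""
    tri = Delaunay(terminals)
    return terminals[tri.simplices].mean(axis=1)   # centroid of each Delaunay triangle

terminals = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0], [12.0, 9.0], [2.0, 11.0]])
print(candidate_relays(terminals))
```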

  6. General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1997-04-01

    To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
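    The acceptance rule behind the generalized jump moves is the standard Metropolis-Hastings ratio; a minimal sketch over an abstract pose-parameter vector is shown below. The log_posterior and propose callables are placeholders, not the paper's FLIR likelihood or CAD-template renderer.

```python
# Hedged sketch: a generic Metropolis-Hastings accept/reject step over a pose
# parameter vector. log_posterior and propose are placeholders for the scene
# posterior and proposal mechanism, which are problem-specific.
import numpy as np

def mh_step(theta, log_posterior, propose, rng):
    """propose(theta, rng) -> (theta_new, log_q_forward, log_q_reverse)."""
    theta_new, log_q_fwd, log_q_rev = propose(theta, rng)
    log_alpha = (log_posterior(theta_new) - log_posterior(theta)
                 + log_q_rev - log_q_fwd)             # MH acceptance ratio (log scale)
    if np.log(rng.uniform()) < log_alpha:
        return theta_new, True                        # move accepted
    return theta, False                               # move rejected
```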

  7. Respiratory chain complex II as general sensor for apoptosis.

    PubMed

    Grimm, Stefan

    2013-05-01

    I review here the evidence that complex II of the respiratory chain (RC) constitutes a general sensor for apoptosis induction. This concept emerged from work on neurodegenerative diseases and from recent data on metabolic alterations in cancer cells affecting the RC, and in particular from mutations of complex II subunits. It is also supported by experiments with many anticancer compounds that compared the apoptosis sensitivities of complex II-deficient versus WT cells. These results are explained by the mechanistic understanding of how complex II mediates the diverse range of apoptosis signals. This protein aggregate is specifically activated for apoptosis by a pH change as a common and early feature of dying cells. This leads to the dissociation of its SDHA and SDHB subunits from the remaining membrane-anchored subunits and the consequent block of its enzymatic SQR activity, while its SDH activity, which is contained in the SDHA/SDHB subcomplex, remains intact. The uncontrolled SDH activity then generates excessive amounts of reactive oxygen species for the demise of the cell. Future studies on these mitochondrial processes will help refine this model, unravel the contribution of mutations in complex II subunits as the cause of degenerative neurological diseases and tumorigenesis, and aid in discovering novel interference options. This article is part of a Special Issue entitled: Respiratory complex II: Role in cellular physiology and disease. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Paradigms for restoration of somatosensory feedback via stimulation of the peripheral nervous system.

    PubMed

    Pasluosta, Cristian; Kiele, Patrick; Stieglitz, Thomas

    2018-04-01

    The somatosensory system contributes substantially to the integration of multiple sensor modalities into perception. Tactile sensations, proprioception and even temperature perception are integrated to perceive embodiment of our limbs. Damage to somatosensory networks can severely affect the execution of daily life activities. Peripheral injuries are optimally corrected via direct interfacing of the peripheral nerves. Recent advances in implantable devices, stimulation paradigms, and biomimetic sensors have enabled the restoration of natural sensations after amputation of the limb. The refinement of stimulation patterns to deliver natural feedback that can be interpreted intuitively, without the need for long learning sessions, is crucial to function restoration. For this review, we collected state-of-the-art knowledge on the evolution of stimulation paradigms from single fiber stimulation to the eliciting of multisensory sensations. Data from the literature are structured into six sections: (a) physiology of the somatosensory system; (b) stimulation of single fibers; (c) restoration of multisensory percepts; (d) closure of the control loop in hand prostheses; (e) sensory restoration and the sense of embodiment; and (f) methodologies to assess stimulation outcomes. Full functional recovery demands further research on multisensory integration and brain plasticity, which will bring new paradigms for intuitive sensory feedback in the next generation of limb prostheses. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  9. Distributed Power Allocation for Wireless Sensor Network Localization: A Potential Game Approach.

    PubMed

    Ke, Mingxing; Li, Ding; Tian, Shiwei; Zhang, Yuli; Tong, Kaixiang; Xu, Yuhua

    2018-05-08

    The problem of distributed power allocation in wireless sensor network (WSN) localization systems is investigated in this paper, using a game-theoretic approach. Existing research focuses on the minimization of the localization errors of individual agent nodes over all anchor nodes subject to power budgets. When the service area and the distribution of target nodes are considered, finding the optimal trade-off between localization accuracy and power consumption is a new critical task. To cope with this issue, we propose a power allocation game where each anchor node minimizes the square position error bound (SPEB) of the service area penalized by its individual power. Meanwhile, it is proven that the power allocation game is an exact potential game which has at least one pure Nash equilibrium (NE). In addition, we also prove the existence of an ϵ-equilibrium point, which is a refinement of the NE, and show that a better-response dynamics approach can reach this solution. Analytical and simulation results demonstrate that: (i) when prior distribution information is available, the proposed strategies have better localization accuracy than the uniform strategies; (ii) when prior distribution information is unknown, the proposed strategies outperform power management strategies based on the second-order cone program (SOCP) for particular agent nodes after obtaining the estimated distribution of agent nodes. In addition, the proposed strategies provide an instructive trade-off between power consumption and localization accuracy.
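    A minimal sketch of the better-response dynamics mentioned above is given below: each anchor in turn lowers its own penalized cost until no unilateral improvement remains, which in an exact potential game terminates at an (approximate) equilibrium. The cost callable, the discrete grid of power levels and the convergence tolerance are illustrative placeholders for the SPEB-plus-power objective in the paper.

```python
# Hedged sketch: better-response dynamics for a power-allocation potential game.
# Each anchor in turn lowers its own penalized cost over a discrete grid of power
# levels; cost(p, i) is a placeholder for anchor i's SPEB-plus-power objective.
import numpy as np

def better_response_dynamics(cost, n_anchors, power_levels, max_rounds=100):
    p = np.full(n_anchors, power_levels[0], dtype=float)     # initial allocation
    for _ in range(max_rounds):
        changed = False
        for i in range(n_anchors):
            current = cost(p, i)
            for level in power_levels:                        # scan anchor i's strategies
                trial = p.copy()
                trial[i] = level
                if cost(trial, i) < current - 1e-9:           # strictly better response
                    p, current, changed = trial, cost(trial, i), True
        if not changed:                                       # no unilateral improvement left
            break
    return p
```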

  10. Space Weather Tools of the Trade - A Changing Mix

    NASA Astrophysics Data System (ADS)

    Kunches, J.; Crowley, G.; Pilinski, M.; Winkler, C.; Fish, C. S.; Hunton, D.; Reynolds, A.; Azeem, I.

    2014-12-01

    Historically, operational space weather tools have focused on the large-scale. The Sun, solar wind, magnetosphere, and ionosphere were the domains that, rightly so, needed the attention of experimentalists and scientists to fashion the best sensors and physics-based models available. These initiatives resulted in significant improvements for operational forecasters. For example, geomagnetic storm predictions now do not have to rely on proxies for CMEs, such as type II sweep, but rather make use of available actual observations of CMEs from which the true velocity vector may be determined. The users of space weather services profited from the better large-scale observations, but now have expressed their desire for even better spatially and time-resolved granularity of products and services. This natural evolution towards refining products has ushered in the era of the smaller mission, the more efficient sensor. CubeSats and compact ionospheric monitors are examples of the instrumental suite now emerging to bring in this new era. This presentation will show examples of the new mix of smaller systems that enable finer, more well-resolved products and services for the operational world. A number of technologies are now in the marketplace demonstrating the value of more observations at a decreasing cost. In addition, new models are looming to take advantage of these better observations. Examples of models poised to take advantage of new observations will be given.

  11. Secular Climate Change on Mars: An Update

    NASA Astrophysics Data System (ADS)

    Batterson, C. M.; Kahre, M. A.; Haberle, R. M.; Wilson, R. J.; Kahanpää, H.

    2017-12-01

    The stability of the South Polar Residual Cap (SPRC) has been in question since Leighton & Murray (1966) theorized that the cap is predominantly CO2 ice in solid-vapor equilibrium with the atmosphere. In 2001, Malin et al. reported a net loss of cap mass that Blackburn et al. (2010) calculated would sublime the SPRC by the end of the decade. Also in 2010, Haberle & Kahre analyzed Phoenix and VL2 pressure data to quantify the net loss of CO2 from the SPRC since the time of the Viking Missions. Though their loss estimates were consistent with Malin et al. (2001), the unclear accuracy of the pressure sensor and limited data available from Phoenix rendered their study inconclusive. This study modifies the process Haberle & Kahre (2010) use to quantify the change in atmospheric mass since VL2 and is improved by the known accuracy and stability of the MSL pressure sensor and its longer data set (two complete Mars Years). Modifications include excluding warm-up errors in the MSL data, correcting for rover elevation changes, binning the data into hourly bins for daily averages, and excluding periods when both MSL and VL2 data are not simultaneously present when calculating annual means. An ensemble of Ames GCM simulations is used to define the offset that accounts for dynamical and physical differences between MSL and VL2. From these calculations we find an estimated net loss of about 5 Pa of atmospheric CO2 per Mars decade, which is comparable to the year-2 MSL sensor accuracy (about 4 Pa). Given this, and the uncertain accuracy of the VL2 sensor, we see no compelling evidence for secular climate change. This result is consistent with Thomas et al.'s (2016) recent refinements in actual cap loss rates. However, since the Ames GCM runs at fairly coarse horizontal resolution, we plan to use the higher-resolution GFDL FMS/FV3 GCM capable of resolving Gale Crater and its circulation to re-calculate the offset and obtain a more accurate loss/gain rate in the near future.
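    The hourly-binning step described above can be sketched as follows; the column names and sol/hour fields are illustrative assumptions about the data layout, not the study's actual pipeline.

```python
# Hedged sketch: hourly binning of a lander pressure record before forming
# per-sol means. Column names are illustrative assumptions about the data layout.
import pandas as pd

def daily_mean_pressure(df):
    """df: DataFrame with columns 'sol', 'local_hour', 'pressure_pa'."""
    df = df.assign(hour_bin=df['local_hour'].astype(int))
    hourly = df.groupby(['sol', 'hour_bin'])['pressure_pa'].mean()   # one value per hourly bin
    # averaging the hourly bins keeps uneven sampling from biasing the sol mean
    return hourly.groupby(level='sol').mean()
```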

  12. Characterizing the physics of plant root gravitropism: A systems modeling approach

    NASA Astrophysics Data System (ADS)

    Yoder, Thomas Lynn

    Root gravitropism is divided into three mechanisms: the gravity sensor, transduction, and differential growth. The gravitropic response has been imitated with various mathematical constructs, but a coherent model based on systems engineering concepts does not exist. The goal of this research is to create models of the gravitropic sensor and differential growth response that are consistent with actual physical characteristics of these mechanisms. The study initially establishes that the amyloplasts within the central columella cells of maize are feasible gravity sensors, i.e., statoliths. Video-microscopy studies of live root cap sections are used to quantify the dynamics of the statoliths. Extensive MATLAB analysis of amyloplast sedimentation indicates that an actin network interferes with the free sedimentation of the statoliths. This interference is most significant in the central region of the cell and less significant near the periphery. This obstruction by actin creates a channeling behavior in amyloplasts sedimenting through the cell's central region. The amyloplasts also appear to exhibit cross-correlated motions. Cytochalasin D mediates both the channeling and correlated behaviors, confirming that the obstructive influence is actin-based. The video analysis produced a refined value for maize cytoplasmic viscosity. Efforts to model the differential growth mechanism examined historical growth data from numerous researchers. RELEL (relative elemental elongation) growth data are applied to a model set analogous to bi-metallic bending. Testing and analysis of the model highlight an extremely high sensitivity of curvature to all RELEL parameters. This sensitivity appears to be the reason for the significant differences between gravitropic responses within like species. Newly observed gravitropic responses, along with historical data, are used to explore the gravitropic time response specifications as opposed to averaging individual time-curvature data into single responses. This approach highlights the significant disadvantages of time-averaging, low sampling rates, and a lack of frequency components being incorporated into the response. A single feedback "black box" model is created so that, along with the sensor and differential growth models, some inferences could be made about the elusive transduction mechanism. Numerous pieces of circumstantial evidence are found that indicate that the gravitropic mechanism is not a single-pathway system.

  13. Examining the strength of the newly-launched Sentinel 2 MSI sensor in detecting and discriminating subtle differences between C3 and C4 grass species

    NASA Astrophysics Data System (ADS)

    Shoko, C.; Mutanga, O.

    2017-07-01

    C3 and C4 grass species discrimination has increasingly become relevant in understanding their response to environmental changes and in monitoring their integrity in providing goods and services. While remotely-sensed data provide robust, cost-effective and repeatable monitoring tools for C3 and C4 grasses, this has been largely limited by the scarcity of sensors with better earth imaging characteristics. The recent launch of the advanced Sentinel 2 MultiSpectral Instrument (MSI) presents a new prospect for discriminating C3 and C4 grasses. The present study tested the potential of Sentinel 2, characterized by refined spatial resolution and additional unique spectral bands, in discriminating between Festuca (C3) and Themeda (C4) grasses. To evaluate the performance of Sentinel 2 MSI, spectral bands, vegetation indices and spectral bands plus indices were used. Findings from Sentinel 2 were compared with those derived from the widely-used Worldview 2 commercial sensor and the Landsat 8 Operational Land Imager (OLI). Overall classification accuracies show that Sentinel 2 spectral bands perform better (90.36%) than indices (85.54%) or combined variables (88.61%). The results were comparable to the Worldview 2 sensor, which produced slightly higher accuracies using spectral bands (95.69%), indices (86.02%) and combined variables (87.09%), and better than Landsat 8 OLI spectral bands (75.26%), indices (82.79%) and combined variables (86.02%). Sentinel 2 bands produced errors of commission and omission (between 4.76 and 14.63%) comparable to Worldview 2 (between 1.96 and 7.14%) and lower than Landsat 8 (between 18.18 and 30.61%) when classifying the two species. The classification accuracy from Sentinel 2 also did not differ significantly (z = 1.34) from Worldview 2 using standard bands; it was significantly (z > 1.96) different using indices and combined variables, whereas when compared to Landsat 8, Sentinel 2 accuracies were significantly different (z > 1.96) using all variables. These results demonstrate that key vegetation species discrimination could be improved by the use of the freely available and improved Sentinel 2 MSI data.
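    The significance statements above rest on comparing two overall accuracies with a z statistic; a simple two-proportion form is sketched below. The pooled-variance formula and the sample sizes in the example are illustrative assumptions, since the abstract does not state which test variant was used.

```python
# Hedged sketch: a two-proportion z-test comparing two overall classification
# accuracies. The pooled-variance form and the sample sizes below are illustrative
# assumptions, not necessarily the test variant used in the study.
import math

def accuracy_z_test(acc1, n1, acc2, n2):
    p = (acc1 * n1 + acc2 * n2) / (n1 + n2)          # pooled accuracy
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (acc1 - acc2) / se                        # |z| > 1.96 => significant at the 5% level

print(round(accuracy_z_test(0.9569, 186, 0.9036, 186), 2))   # illustrative sample sizes
```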

  14. Remote biomonitoring of temperatures in mothers and newborns: design, development and testing of a wearable sensor device in a tertiary-care hospital in southern India

    PubMed Central

    Mony, Prem K; Thankachan, Prashanth; Bhat, Swarnarekha; Rao, Suman; Washington, Maryann; Antony, Sumi; Thomas, Annamma; Nagarajarao, Sheela C; Rao, Hiteshwar; Amrutur, Bharadwaj

    2018-01-01

    Objective Newer technologies such as wearables, sensors, mobile telephony and computing offer opportunities to monitor vital physiological parameters and tackle healthcare problems, thereby improving access and quality of care. We describe the design, development and testing of a wearable sensor device for remote biomonitoring of body temperatures in mothers and newborns in southern India. Methods Based on client needs and technological requirements, a wearable sensor device was designed and developed using principles of ‘social innovation’ design. The device underwent multiple iterations in product design and engineering based on user feedback, and then following preclinical testing, a techno-feasibility study and clinical trial were undertaken in a tertiary-care teaching hospital in Bangalore, India. Clinical trial phases I and IIa for evaluation of safety and efficacy were undertaken in the following sequence: 7 healthy adult volunteers; 18 healthy mothers; 3 healthy babies; 10 stable babies in the neonatal care intensive unit and 1 baby with morbidities. Time-stamped skin temperature readings obtained at 5 min intervals over a 1-hour period from the device secured on upper arms of mothers and abdomen of neonates were compared against readings from thermometers used routinely in clinical practice. Results Devices were comfortably secured on to adults and neonates, and data were efficiently transmitted via the gateway device for secure storage and retrieval for analysis. The mean skin temperatures in mothers were lower than the axillary temperatures by 2°C; and in newborns, there was a precision of –0.5°C relative to axillary measurements. While occasional minimal adverse events were noted in healthy volunteers, no adverse events were noted in mothers or neonates. Conclusions This proof-of-concept study shows that this device is promising in terms of feasibility, safety and accuracy (with appropriate calibration) with potential for further refinements in device accuracy and pursuit of further phases of clinical research for improved maternal and neonatal health. PMID:29670758

  15. Characterization of Sodium Thermal Hydraulics with Optical Fiber Temperature Sensors

    NASA Astrophysics Data System (ADS)

    Weathered, Matthew Thomas

    The thermal hydraulic properties of liquid sodium make it an attractive coolant for use in Generation IV reactors. The liquid metal's high thermal conductivity and low Prandtl number increases efficiency in heat transfer at fuel rods and heat exchangers, but can also cause features such as high magnitude temperature oscillations and gradients in the coolant. Currently, there exists a knowledge gap in the mechanisms which may create these features and their effect on mechanical structures in a sodium fast reactor. Two of these mechanisms include thermal striping and thermal stratification. Thermal striping is the oscillating temperature field created by the turbulent mixing of non-isothermal flows. Usually this occurs at the reactor core outlet or in piping junctions and can cause thermal fatigue in mechanical structures. Meanwhile, thermal stratification results from large volumes of non-isothermal sodium in a pool type reactor, usually caused by a loss of coolant flow accident. This stratification creates buoyancy driven flow transients and high temperature gradients which can also lead to thermal fatigue in reactor structures. In order to study these phenomena in sodium, a novel method for the deployment of optical fiber temperature sensors was developed. This method promotes rapid thermal response time and high spatial temperature resolution in the fluid. The thermal striping and stratification behavior in sodium may be experimentally analyzed with these sensors with greater fidelity than ever before. Thermal striping behavior at a junction of non-isothermal sodium was fully characterized with optical fibers. An experimental vessel was hydrodynamically scaled to model thermal stratification in a prototypical sodium reactor pool. Novel auxiliary applications of the optical fiber temperature sensors were developed throughout the course of this work. One such application includes local convection coefficient determination in a vessel with the corollary application of level sensing. Other applications were cross correlation velocimetry to determine bulk sodium flow rate and the characterization of coherent vortical structures in sodium with temperature frequency data. The data harvested, instrumentation developed and techniques refined in this work will help in the design of more robust reactors as well as validate computational models for licensing sodium fast reactors.
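    One of the auxiliary applications mentioned, cross-correlation velocimetry, can be sketched as follows: the bulk flow speed is the spacing between two measurement points on the fiber divided by the time lag that maximizes the cross-correlation of their temperature signals. Signal names, sampling rate and the simple peak-picking are illustrative assumptions.

```python
# Hedged sketch: cross-correlation velocimetry from two temperature measurement
# points a known distance apart along the flow; the bulk velocity is the spacing
# divided by the lag that maximizes the cross-correlation. Signal names and the
# simple peak-picking are illustrative.
import numpy as np

def cross_correlation_velocity(temp_upstream, temp_downstream, fs, spacing_m):
    a = temp_upstream - np.mean(temp_upstream)
    b = temp_downstream - np.mean(temp_downstream)
    corr = np.correlate(b, a, mode='full')             # correlation over all lags
    lag_samples = int(np.argmax(corr)) - (len(a) - 1)  # lag of downstream w.r.t. upstream
    lag_s = lag_samples / fs
    return spacing_m / lag_s if lag_s > 0 else float('nan')
```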

  16. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1344 What provisions are... a small refiner approved under § 80.1340, the small refiner provisions of the gasoline benzene...

  17. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1344 What provisions are... a small refiner approved under § 80.1340, the small refiner provisions of the gasoline benzene...

  18. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1344 What provisions are... a small refiner approved under § 80.1340, the small refiner provisions of the gasoline benzene...

  19. Work on Planetary Atmospheres and Planetary Atmosphere Probes

    NASA Technical Reports Server (NTRS)

    Lester, Peter

    1999-01-01

    A summary final report of work accomplished is presented. Work was performed in the following areas: (1) Galileo Probe science analysis, (2) Galileo probe Atmosphere Structure Instrument, (3) Mars Pathfinder Atmosphere Structure/Meteorology instrument, (4) Mars Pathfinder data analysis, (5) Science Definition for future Mars missions, (6) Viking Lander data analysis, (7) winds in the Mars atmosphere and Venus atmospheric dynamics, (8) Pioneer Venus Probe data analysis, (9) Pioneer Venus anomaly analysis, (10) Discovery Venus Probe and Titan probe instrument design, and (11) laboratory studies of Titan probe impact phenomena. The work has resulted in more than 10 articles published in archive journals, 2 encyclopedia articles, and many working papers. This final report is organized around the four planets on which there was activity, Jupiter, Mars, Venus, and Titan, with a closing section on Miscellaneous Activities. A major objective was to complete the fabrication, test, and evaluation of the atmosphere structure experiment on the Galileo probe, and to receive, analyze and interpret data received from the spacecraft. The instrument was launched on April 14, 1989. Calibration data were taken for all experiment sensors. The data were analyzed, fitted with algorithms, and summarized in a calibration report for use in analyzing and interpreting data returned from Jupiter's atmosphere. The sensors included were the primary science pressure, temperature and acceleration sensors, and the supporting engineering temperature sensors. Computer programs were written to decode the Experiment Data Record and convert the digital numbers to physical quantities, i.e., temperatures, pressures, and accelerations. The project office agreed to obtain telemetry of checkout data from the probe. Work to extend programs written for use on the Pioneer Venus project included: (1) massive heat shield ablation leading to important mass loss during entry; and (2) rapid planet rotation, which introduced terms of motion not needed on Venus. When the Galileo Probe encountered Jupiter, analysis and interpretation of data commenced. The early contributions of the experiment were to define (1) the basic structure of the deep atmosphere, (2) the stability of the atmosphere, and (3) the upper atmospheric profiles of density, pressure, and temperature. The next major task in the Galileo Probe project was to refine, verify and extend the analysis of the data. It was the verified and corrected data which indicated a dry adiabatic atmosphere within measurement accuracy. Temperature in the thermosphere was measured at 900 K. Participation in the Mars atmospheric research included: (1) work as a team member of the Mars Atmosphere Working Group, (2) contribution to the Mars Exobiology Instrument workshop, (3) assistance in planning the Mars global network, and (4) assistance in planning the Soviet-French Mars mission in 1994. This included a return to the Viking Lander parachute data to refine and improve the definition of winds between 1.5 and 4 kilometers altitude at the two entry sites. The variability of the structure of the Mars atmosphere, which is known to vary with season, latitude, hemisphere and dust loading, was also addressed. This led to work on the Pathfinder project. The probe had a deployable meteorology mast that carried three temperature sensors and a wind sensor at the tip of the mast. Work on the Titan atmospheric probe was also accomplished. This included developing an experiment proposal to the European Space Agency (ESA), which was not selected. However, as an advisor in the design and preparation of the selected experiment, the researcher interacted with scientists on the Huygens Probe Atmosphere Structure Experiment. The researcher also participated in the planning for the Venus Chemical Probe. The science objectives of the probe were to resolve unanswered questions concerning the minor species chemistry of Venus' atmosphere that control cloud formation, greenhouse effectiveness, and the thermal structure. The researcher also reviewed problems with the Pioneer Venus Probes that caused anomalies at and below the 12.5 km level of Venus' atmosphere. He convened and participated in a workshop that concluded the most likely hardware cause was insulation failure in the electrical harness outside the Probes' pressure vessels. It was discovered that the shrink tubing material failed at 600 K; this failure could explain the anomalies experienced by the probes. The descent data of the Pioneer probes and the Soviet Vega Lander were analyzed to evaluate the presence of small-scale gravity waves in and below the Venus cloud layer.

  20. 40 CFR 80.1622 - Approval for small refiner and small volume refinery status.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... appropriate data to correct the record when the company submits its application. (ii) Foreign small refiners... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Approval for small refiner and small... Approval for small refiner and small volume refinery status. (a) Applications for small refiner or small...

  1. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the sixth quarter of this research project the research team developed a method and the experimental procedures for acquiring the data needed for ultrasonic tomography of rock core samples under triaxial stress conditions as outlined in Task 10. Traditional triaxial compression experiments, where compressional and shear wave velocities are measured, provide little or no information about the internal spatial distribution of mechanical damage within the sample. The velocities measured platen-to-platen or sensor-to-sensor reflect an averaging of all the velocities occurring along that particular raypath across the boundaries of the rock. The research team is attempting to develop and refine a laboratory equivalent of seismic tomography for use on rock samples deformed under triaxial stress conditions. Seismic tomography, utilized for example in crosswell tomography, allows an imaging of the velocities within a discrete zone within the rock. Ultrasonic or acoustic tomography is essentially the extension of that field technology applied to rock samples deforming in the laboratory at high pressures. This report outlines the technical steps and procedures for developing this technology for use on weak, soft chalk samples. Laboratory tests indicate that the chalk samples exhibit major changes in compressional and shear wave velocities during compaction. Since chalk is the rock type responsible for the severe subsidence and compaction in the North Sea, it was selected for the first efforts at tomographic imaging of soft rocks. Field evidence from the North Sea suggests that compaction, which has resulted in over 30 feet of subsidence to date, is heterogeneously distributed within the reservoir. The research team will attempt to image this very process in chalk samples. The initial tomographic studies (Scott et al., 1994a,b; 1998) were accomplished on well-cemented, competent rocks such as Berea sandstone. The extension of the technology to weaker samples is more difficult but potentially much more rewarding. The chalk, since it is a weak material, also attenuates wave propagation more than other rock types. Three different types of sensors were considered (and tested) for the tomographic imaging project: 600 kHz PZT, 1 MHz PZT, and PVDF film sensors. The 600 kHz PZT crystals were selected because they generated a sufficiently high amplitude pulse to propagate across the damaged chalk. A number of different configurations were considered for placement of the acoustic arrays. It was decided after preliminary testing that the optimum arrangement of the acoustic sensors was to place three arrays of sensors, with each array containing twenty sensors, around the sample. There would be two horizontal arrays to tomographically image two circular cross-sectional planes through the rock core sample. A third array would be vertically oriented to provide a vertical cross-sectional view of the sample. A total of 260 acoustic raypaths would be shot and acquired in the horizontal acoustic array to create each horizontal tomographic image. The sensors can be used as both acoustic sources and acoustic receivers, with raypaths acquired from each of the 10 pulsers to the 10 receivers.

  2. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...

  3. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...

  4. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...

  5. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...

  6. Computer simulation of refining process of a high consistency disc refiner based on CFD

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Jianwei; Wang, Jiahui

    2017-08-01

    In order to reduce refining energy consumption, ANSYS CFX was used to simulate the refining process of a high-consistency disc refiner. The pulp in the disc refiner was first assumed to be a uniform Newtonian fluid in a turbulent state, modeled with the k-ɛ flow model; the 3-D model of the disc refiner was then meshed and its boundary conditions were set; the flow was then simulated and analyzed; finally, the viscosity of the pulp was measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, so as to calculate the refining power, and that streamlines and velocity vectors can also be observed. CFD simulation can optimize the parameters of the bar and groove, which is of great significance for reducing experimental cost and cycle time.
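    The refining power referred to above follows from the torque on the disc plate and the rotational speed, P = 2πnT; a trivial sketch with illustrative numbers (not values from the study) is given below.

```python
# Hedged sketch: refining power from the disc torque and rotational speed,
# P = 2*pi*n*T. The numbers are illustrative, not values from the study.
import math

def refining_power_kw(torque_nm, rev_per_min):
    omega = 2.0 * math.pi * rev_per_min / 60.0   # angular speed, rad/s
    return torque_nm * omega / 1000.0            # power in kW

print(refining_power_kw(torque_nm=350.0, rev_per_min=1500.0))   # ~55 kW
```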

  7. The genesis of neurosurgery and the evolution of the neurosurgical operative environment: part I-prehistory to 2003.

    PubMed

    Liu, Charles Y; Apuzzo, Michael L J

    2003-01-01

    Despite its singular importance, little attention has been given to the neurosurgical operative environment in the scientific and medical literature. This article focuses attention on the development of neurosurgery and the parallel emergence of its operative setting. The operative environment has, to a large extent, defined the "state of the art and science" of neurosurgery, which is now undergoing rapid reinvention. During the course of its initial invention, major milestones in the development of neurosurgery have included the definition of anatomy, consolidation of a scientific basis, and incorporation of the practicalities of anesthesia and antisepsis and later operative technical adjuvants for further refinement of action and minimalism. The progress, previously long and laborious in emergence, is currently undergoing rapid evolution. Throughout its evolution, the discipline has assimilated the most effective tools of modernity into the operative environment, leading eventually to the entity known as the operating room. In the decades leading to the present, progressive minimalization of manipulation and the emergence of more refined operative definition with increasing precision are evident, with concurrent miniaturization of attendant computerized support systems, sensors, robotic interfaces, and imaging devices. These developments over time have led to the invention of neurosurgery and the establishment of the current state-of-the-art neurosurgical operating room as we understand it, and indeed, to a broader definition of the entity itself. To remain current, each neurosurgeon should periodically reconsider his or her personal operative environment and its functional design with reference to modernity of practice as currently defined.

  8. Refinement of the deletion in 8q22.2-q22.3: the minimum deletion size at 8q22.3 related to intellectual disability and epilepsy.

    PubMed

    Kuroda, Yukiko; Ohashi, Ikuko; Saito, Toshiyuki; Nagai, Jun-ichi; Ida, Kazumi; Naruto, Takuya; Iai, Mizue; Kurosawa, Kenji

    2014-08-01

    Kuechler et al. [2011] reported five patients with interstitial deletions in 8q22.2-q22.3 who had intellectual disability, epilepsy, and dysmorphic features. We report on a new patient with the smallest overlapping de novo deletion in 8q22.3 and refined the phenotype. The proposita was an 8-year-old girl, who developed seizures at 10 months, and her epileptic seizure became severe and difficult to control with antiepileptic drugs. She also exhibited developmental delay and walked alone at 24 months. She was referred to us for evaluation for developmental delay and epilepsy at the age of 8 years. She had intellectual disability (IQ 37 at 7 years) and autistic behavior, and spoke two word sentences at 8 years. She had mild dysmorphic features, including telecanthus and thick vermilion of the lips. Array comparative genomic hybridization detected a 1.36 Mb deletion in 8q22.3 that encompassed RRM2B and NCALD, which encode the small subunit of p53-inducible ribonucleotide reductase and neurocalcin delta in the neuronal calcium sensor family of calcium-binding proteins, respectively. The minimum overlapping region between the present and previously reported patients is considered to be a critical region for the phenotype of the deletion in 8q22.3. We suggest that the deletion in 8q22.3 may represent a clinically recognizable condition, which is characterized by intellectual disability and epilepsy. © 2014 Wiley Periodicals, Inc.

  9. Laser system refinements to reduce variability in infarct size in the rat photothrombotic stroke model

    PubMed Central

    Alaverdashvili, Mariam; Paterson, Phyllis G.; Bradley, Michael P.

    2015-01-01

    Background The rat photothrombotic stroke model can induce brain infarcts with reasonable biological variability. Nevertheless, we observed unexplained high inter-individual variability despite using a rigorous protocol. Of the three major determinants of infarct volume, photosensitive dye concentration and illumination period were strictly controlled, whereas undetected fluctuation in laser power output was suspected to account for the variability. New method The frequently utilized Diode Pumped Solid State (DPSS) lasers emitting 532 nm (green) light can exhibit fluctuations in output power due to temperature and input power alterations. The polarization properties of the Nd:YAG and Nd:YVO4 crystals commonly used in these lasers are another potential source of fluctuation, since one means of controlling output power uses a polarizer with a variable transmission axis. Thus, the properties of DPSS lasers and the relationship between power output and infarct size were explored. Results DPSS laser beam intensity showed considerable variation. Either a polarizer or a variable neutral density filter allowed adjustment of a polarized laser beam to the desired intensity. When the beam was unpolarized, the experimenter was restricted to using a variable neutral density filter. Comparison with existing method(s) Our refined approach includes continuous monitoring of DPSS laser intensity via beam sampling using a pellicle beamsplitter and photodiode sensor. This guarantees the desired beam intensity at the targeted brain area during stroke induction, with the intensity controlled either through a polarizer or variable neutral density filter. Conclusions Continuous monitoring and control of laser beam intensity is critical for ensuring consistent infarct size. PMID:25840363

  10. The effect of changing movement and posture using motion-sensor biofeedback, versus guidelines-based care, on the clinical outcomes of people with sub-acute or chronic low back pain-a multicentre, cluster-randomised, placebo-controlled, pilot trial.

    PubMed

    Kent, Peter; Laird, Robert; Haines, Terry

    2015-05-29

    The aims of this pilot trial were to (i) test the hypothesis that modifying patterns of painful lumbo-pelvic movement using motion-sensor biofeedback in people with low back pain would lead to reduced pain and activity limitation compared with guidelines-based care, and (ii) facilitate sample size calculations for a fully powered trial. A multicentre (8 clinics), cluster-randomised, placebo-controlled pilot trial compared two groups of patients seeking medical or physiotherapy primary care for sub-acute and chronic back pain. It was powered for longitudinal analysis, but not for adjusted single-time point comparisons. The intervention group (n = 58) received modification of movement patterns augmented by motion-sensor movement biofeedback (ViMove, dorsaVi.com) plus guidelines-based medical or physiotherapy care. The control group (n = 54) received a placebo (wearing the motion-sensors without biofeedback) plus guidelines-based medical or physiotherapy care. Primary outcomes were self-reported pain intensity (VAS) and activity limitation (Roland Morris Disability Questionnaire (RMDQ), Patient Specific Functional Scale (PSFS)), all on 0-100 scales. Both groups received 6-8 treatment sessions. Outcomes were measured seven times during 10-weeks of treatment and at 12, 26 and 52 week follow-up, with 17.0 % dropout. Patients were not informed of group allocation or the study hypothesis. Across one-year, there were significant between-group differences favouring the intervention group [generalized linear model coefficient (95 % CI): group effect RMDQ -7.1 (95 % CI-12.6;-1.6), PSFS -10.3 (-16.6; -3.9), QVAS -7.7 (-13.0; -2.4); and group by time effect differences (per 100 days) RMDQ -3.5 (-5.2; -2.2), PSFS -4.7 (-7.0; -2.5), QVAS -4.8 (-6.1; -3.5)], all p < 0.001. Risk ratios between groups of probability of improving by >30 % at 12-months = RMDQ 2.4 (95 % CI 1.5; 4.1), PSFS 2.5 (1.5; 4.0), QVAS 3.3 (1.8; 5.9). The only device-related side-effects involved transient skin irritation from tape used to mount motion sensors. Individualised movement retraining using motion-sensor biofeedback resulted in significant and sustained improvements in pain and activity limitation that persisted after treatment finished. This pilot trial also refined the procedures and sample size requirements for a fully powered RCT. This trial (Australian New Zealand Clinical Trials Registry NCT01572779) was equally funded by dorsaVi P/L and the Victorian State Government.

  11. A smart ROV solution for ship hull and harbor inspection

    NASA Astrophysics Data System (ADS)

    Reed, Scott; Wood, Jon; Vazquez, Jose; Mignotte, Pierre-Yves; Privat, Benjamin

    2010-04-01

    Hull and harbor infrastructure inspections are frequently performed manually and involve considerable risk as well as significant human and monetary resources. In any threat- and resource-constrained environment, this involves unacceptable levels of risk and cost. Modern Remotely Operated Vehicles are highly refined machines that provide features and capabilities previously unavailable. Operations once carried out by divers can now be carried out more quickly, efficiently and safely by smart-enabled ROVs. ROVs are rapidly deployable and capable of continuous, reliable operations in adverse conditions. They also provide a stable platform on which multiple sensors may be mounted and utilized to meet the harbor inspection problem. Automated Control software provides ROVs and their pilots with the capability to inspect complex, constrained environments such as those found in a harbor region. This application and the user interface allow the ROV to automatically conduct complex maneuvers relative to the area being inspected and relieve the training requirements and workload for the pilot, allowing him or her to focus on the primary task of survey, inspection and looking for possible threats (such as IEDs, limpet mines, signs of sabotage, etc.). Real-time sensor processing tools can be integrated into the smart ROV solution to assist the operator. Automatic Target Recognition (ATR) algorithms are used to search through the sensor data collected by the ROV in real time. These algorithms provide immediate feedback on possible threats and notify the operator of regions that may require manual verification. Sensor data (sonar or video) are also mosaicked, providing the operator with real-time situational awareness and a coverage map of the hull or seafloor. Detected objects may also be placed in the context of the large-scale characteristics of the hull (or bottom or pilings) and localized. Within complex areas such as the harbor pier pilings and the running gear of the ship, real-time 3D reconstruction techniques may be used to process profiling sonar data for similar applications. An observation-class ROV equipped with sensors and running an operator-in-the-loop Automated Surface-Computer (ASC) system can inspect an entire harbor region. These systems can autonomously provide coverage information, identify possible threats and provide the level of control required to operate in confined environments. The system may be controlled autonomously or by the operator. Previous inspection results may also be used for change detection applications. This paper presents the SeeByte Smart ROV and sensor processing technology relevant to the harbor inspection problem. These technologies have been tested extensively in real-world applications and trials and are demonstrated using real data and examples.

  12. Application of Al-2La-1B Grain Refiner to Al-10Si-0.3Mg Casting Alloy

    NASA Astrophysics Data System (ADS)

    Jing, Lijun; Pan, Ye; Lu, Tao; Li, Chenlin; Pi, Jinhong; Sheng, Ningyue

    2018-05-01

    This paper reports the application and microstructure-refining effect of an Al-2La-1B grain refiner in an Al-10Si-0.3Mg casting alloy. Compared with the traditional Al-5Ti-1B refiner, the Al-2La-1B refiner shows better performance in the grain refinement of the Al-10Si-0.3Mg alloy. Transmission electron microscopy analysis suggests that the crystallite structure features of LaB6 are beneficial to the heterogeneous nucleation of α-Al grains. Regarding mechanical performance, the tensile properties of the Al-10Si-0.3Mg casting alloy are markedly improved owing to the refined microstructures.

  13. NMRe: a web server for NMR protein structure refinement with high-quality structure validation scores.

    PubMed

    Ryu, Hyojung; Lim, GyuTae; Sung, Bong Hyun; Lee, Jinhyuk

    2016-02-15

    Protein structure refinement is a necessary step for the study of protein function. In particular, some nuclear magnetic resonance (NMR) structures are of lower quality than X-ray crystallographic structures. Here, we present NMRe, a web-based server for NMR structure refinement. The previously developed knowledge-based energy function STAP (Statistical Torsion Angle Potential) was used for NMRe refinement. With STAP, NMRe provides two refinement protocols using two types of distance restraints. If a user provides NOE (Nuclear Overhauser Effect) data, the refinement is performed with the NOE distance restraints as a conventional NMR structure refinement. Additionally, NMRe generates NOE-like distance restraints based on the inter-hydrogen distances derived from the input structure. The efficiency of NMRe refinement was validated on 20 NMR structures. Most of the quality assessment scores of the refined NMR structures were better than those of the original structures. The refinement results are provided as a three-dimensional structure view, a secondary structure scheme, and numerical and graphical structure validation scores. NMRe is available at http://psb.kobic.re.kr/nmre/. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
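    The NOE-like restraints mentioned above can be pictured as upper/lower distance bounds generated from the short inter-hydrogen distances of the input structure; a sketch is given below. The 5 Å cutoff and ±0.5 Å tolerance are illustrative parameters, not NMRe's actual settings.

```python
# Hedged sketch: generating NOE-like distance restraints from the inter-hydrogen
# distances of an input structure. The 5 A cutoff and +/-0.5 A tolerance are
# illustrative parameters, not NMRe's actual settings.
import numpy as np

def noe_like_restraints(h_coords, h_ids, cutoff=5.0, tol=0.5):
    """h_coords: (N, 3) hydrogen coordinates in angstroms; h_ids: atom identifiers."""
    restraints = []
    for i in range(len(h_coords)):
        for j in range(i + 1, len(h_coords)):
            d = float(np.linalg.norm(h_coords[i] - h_coords[j]))
            if d <= cutoff:                                   # keep only short H-H contacts
                restraints.append((h_ids[i], h_ids[j], max(d - tol, 0.0), d + tol))
    return restraints                                         # (atom_i, atom_j, lower, upper)
```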

  14. Refinement Of Hexahedral Cells In Euler Flow Computations

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.

    1996-01-01

    Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.
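    The basic refinement operation, splitting a hexahedral cell into eight children, can be sketched for the simple case of an axis-aligned cell; this bounding-box representation is an illustrative simplification of TIGER's actual unstructured-grid data structures.

```python
# Hedged sketch: isotropic refinement of an axis-aligned hexahedral cell into
# eight children by splitting at the midpoints. The bounding-box representation
# is an illustrative simplification of TIGER's unstructured-grid data structures.
def refine_hex(cell):
    """cell: ((xmin, ymin, zmin), (xmax, ymax, zmax)) -> list of 8 child cells."""
    (x0, y0, z0), (x1, y1, z1) = cell
    xs = (x0, (x0 + x1) / 2.0, x1)
    ys = (y0, (y0 + y1) / 2.0, y1)
    zs = (z0, (z0 + z1) / 2.0, z1)
    return [((xs[i], ys[j], zs[k]), (xs[i + 1], ys[j + 1], zs[k + 1]))
            for i in range(2) for j in range(2) for k in range(2)]

print(len(refine_hex(((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))))   # 8 children
```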

  15. Real-space refinement in PHENIX for cryo-EM and crystallography

    DOE PAGES

    Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.; ...

    2018-06-01

    This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.

  16. Real-space refinement in PHENIX for cryo-EM and crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.

    This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.

  17. 40 CFR 80.550 - What is the definition of a motor vehicle diesel fuel small refiner or a NRLM diesel fuel small...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... vehicle diesel fuel small refiner or a NRLM diesel fuel small refiner under this subpart? 80.550 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel... vehicle diesel fuel small refiner or a NRLM diesel fuel small refiner under this subpart? (a) A motor...

  18. 40 CFR 80.550 - What is the definition of a motor vehicle diesel fuel small refiner or a NRLM diesel fuel small...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... vehicle diesel fuel small refiner or a NRLM diesel fuel small refiner under this subpart? 80.550 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel... vehicle diesel fuel small refiner or a NRLM diesel fuel small refiner under this subpart? (a) A motor...

  19. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Applications for motor vehicle diesel fuel small refiner status must be submitted to EPA by December 31, 2001. (ii) Applications for NRLM diesel fuel small refiner status must be submitted to EPA by December 31, 2004. (2)(i) In the case of a refiner who acquires or reactivates a refinery that was shutdown or non...

  20. Apparent consumption of refined sugar in Australia (1938-2011).

    PubMed

    McNeill, T J; Shrapnel, W S

    2015-11-01

    In Australia, the Australian Bureau of Statistics discontinued collection of apparent consumption data for refined sugars in 1998/1999. The objectives of this study were to update this data series to determine whether it is a reliable data series that reflects consumption of refined sugars, defined as sucrose in the forms of refined or raw sugar or liquified sugars manufactured for human consumption. The study used the same methodology as that used by the Australian Bureau of Statistics to derive a refined sugars consumption estimate each year until the collection was discontinued. Sales by Australian refiners, refined sugars imports and the net balance of refined sugars contained in foods imported into, and exported from, Australia were used to calculate total refined sugars use for each year up to 2011. Per capita consumption figures were then derived. During the period 1938-2011, apparent consumption of refined sugars in Australia fell 13.1% from 48.3 to 42.0 kg per head (R² = 0.74). Between the 1950s and the 1970s, apparent consumption was relatively stable at about 50 kg per person. In the shorter period 1970-2011, refined sugars consumption fell 16.5% from 50.3 to 42.0 kg per head, though greater variability was evident (R² = 0.53). An alternative data set showed greater volatility with no trend up or down. The limited variability of the extended apparent consumption series and its consistency with recent national dietary survey data and sugar-sweetened beverage sales data indicate that it is a reliable data set that reflects declining intake of refined sugars in Australia.
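    The apparent-consumption bookkeeping described above reduces to a simple balance: refiners' sales plus refined-sugar imports plus the net sugar content of traded foods, divided by population. The sketch below uses illustrative figures, not values from the study.

```python
# Hedged sketch: the apparent-consumption balance -- refiners' sales plus imports
# plus the net refined-sugar content of traded foods, per head of population.
# All figures below are illustrative, not values from the study.
def per_capita_refined_sugars(refiner_sales_t, imports_t,
                              sugar_in_food_imports_t, sugar_in_food_exports_t,
                              population):
    total_tonnes = (refiner_sales_t + imports_t
                    + sugar_in_food_imports_t - sugar_in_food_exports_t)
    return total_tonnes * 1000.0 / population     # kg per head per year

print(round(per_capita_refined_sugars(900_000, 50_000, 120_000, 100_000, 23_000_000), 1))
```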

  1. MicroRadarNet: A network of weather micro radars for the identification of local high resolution precipitation patterns

    NASA Astrophysics Data System (ADS)

    Turso, S.; Paolella, S.; Gabella, M.; Perona, G.

    2013-01-01

    In this paper, MicroRadarNet, a novel micro radar network for continuous, unattended meteorological monitoring, is presented. Key aspects and constraints are introduced. Specific design strategies are highlighted, leading to the technological implementations of this wireless, low-cost, low-power-consumption sensor network. Raw spatial and temporal datasets are processed on-board in real time, featuring a consistent evaluation of the signals from the sensors and optimizing the data loads to be transmitted. Network servers perform the final post-processing steps on the data streams coming from each unit. Final network products are meteorological mappings of weather events, monitored with high spatial and temporal resolution and finally served to the end user through any Web browser. This networked approach is shown to yield an appreciable reduction of the overall operational costs, including management and maintenance, compared to the traditional long-range monitoring strategy. Adoption of the TITAN storm identification and nowcasting engine is also evaluated here for in-loop integration within the MicroRadarNet data processing chain. A brief description of the engine workflow is provided, together with preliminary feasibility results and performance estimates. The outcomes were not readily predictable, given the substantial operational differences between a Western Alps micro radar scenario and the long-range radar context in the Denver region of Colorado. Finally, positive results from a set of case studies are discussed, motivating further refinements and integration activities.

  2. The S2 UAS, a Modular Platform for Atmospheric Science

    NASA Astrophysics Data System (ADS)

    Elston, J. S.; Stachura, M.; Bland, G.

    2017-12-01

    Black Swift Technologies, LLC (BST) developed and refined the S2 in partnership with NASA. The S2 is a novel small Unmanned Aircraft System (sUAS) specifically designed to meet the needs of atmospheric and earth observing scientific field campaigns. This tightly integrated system consists of an airframe, avionics, and sensors designed to measure atmospheric parameters (e.g., temperature, pressure, humidity, and 3D winds) as well as carry up to 2.3 kg (5 lbs) of additional payload. At the core of the sensing suite is a custom-designed multi-hole probe being developed to provide accurate measurements of u, v and w while remaining simple to integrate and low cost. The S2 relies on the commercially available SwiftCore Flight Management System (FMS), which has been proven in the field to provide a cost-effective, powerful, and easy-to-operate solution to meet the demanding requirements of nomadic scientific field campaigns. The airframe capabilities are currently being expanded to achieve high-altitude flights through strong winds and damaging airborne particulates. Additionally, the well-documented power and data interfaces of the S2 will be employed to integrate the sensors required for the measurement of soil moisture content, atmospheric volcanic phenomena, and fire weather, as well as to provide satellite calibration via multispectral cameras. Extensive flight testing has been planned to validate the S2 system's ability to operate in difficult terrain, including mountainside takeoff and recovery and flights up to 6000 m above sea level.

  3. Security Data Warehouse Application

    NASA Technical Reports Server (NTRS)

    Vernon, Lynn R.; Hennan, Robert; Ortiz, Chris; Gonzalez, Steve; Roane, John

    2012-01-01

    The Security Data Warehouse (SDW) is used to aggregate and correlate all JSC IT security data. This includes IT asset inventory such as operating systems and patch levels, users, user logins, remote access dial-in and VPN, and vulnerability tracking and reporting. The correlation of this data allows for an integrated understanding of current security issues and systems by providing this data in a format that associates it to an individual host. The cornerstone of the SDW is its unique host-mapping algorithm that has undergone extensive field tests, and provides a high degree of accuracy. The algorithm comprises two parts. The first part employs fuzzy logic to derive a best-guess host assignment using incomplete sensor data. The second part is logic to identify and correct errors in the database, based on subsequent, more complete data. Host records are automatically split or merged, as appropriate. The process had to be refined and thoroughly tested before the SDW deployment was feasible. Complexity was increased by adding the dimension of time. The SDW correlates all data with its relationship to time. This lends support to forensic investigations, audits, and overall situational awareness. Another important feature of the SDW architecture is that all of the underlying complexities of the data model and host-mapping algorithm are encapsulated in an easy-to-use and understandable Perl language Application Programming Interface (API). This allows the SDW to be quickly augmented with additional sensors using minimal coding and testing. It also supports rapid generation of ad hoc reports and integration with other information systems.
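
    The host-mapping algorithm described above lends itself to a compact illustration. The sketch below shows the two-part idea in Python rather than the SDW's Perl API: a fuzzy, weighted best-guess assignment from incomplete sensor data, followed by a later merge pass once more complete data arrives. All field names, weights and thresholds are invented for illustration; this is not the SDW implementation.

```python
# Hypothetical sketch of a two-part host-mapping step in the spirit of the SDW
# algorithm: part 1 makes a fuzzy best-guess assignment from incomplete sensor
# data; part 2 later merges entries that more complete data shows are one host.
# Field names, weights and the threshold are invented for illustration.

WEIGHTS = {"mac": 0.5, "ip": 0.3, "hostname": 0.2}

def match_score(record, host):
    """Fuzzy score: sum the weights of identifiers that are present and agree."""
    return sum(w for f, w in WEIGHTS.items()
               if record.get(f) and record[f] == host.get(f))

def assign_host(record, hosts, threshold=0.4):
    """Part 1: attach a sensor record to the best-matching host, or create one."""
    best = max(hosts, key=lambda h: match_score(record, h), default=None)
    if best is not None and match_score(record, best) >= threshold:
        best["records"].append(record)
        for f in WEIGHTS:                      # fill in newly learned identifiers
            if record.get(f) and not best.get(f):
                best[f] = record[f]
        return best
    host = {f: record.get(f) for f in WEIGHTS}
    host["records"] = [record]
    hosts.append(host)
    return host

def merge_duplicates(hosts):
    """Part 2: merge host entries that subsequent, more complete data ties together."""
    merged = []
    for h in hosts:
        dup = next((m for m in merged if match_score(h, m) > 0), None)
        if dup is None:
            merged.append(h)
            continue
        dup["records"].extend(h["records"])
        for f in WEIGHTS:
            if h.get(f) and not dup.get(f):
                dup[f] = h[f]
    return merged

hosts = []
assign_host({"ip": "10.0.0.5"}, hosts)                          # partial scan record
assign_host({"mac": "aa:bb:cc", "hostname": "ws-42"}, hosts)    # partial login record
assign_host({"mac": "aa:bb:cc", "ip": "10.0.0.5"}, hosts)       # more complete data
print(len(merge_duplicates(hosts)))                             # 1 host after merging
```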

  4. Data fusion in cyber security: first order entity extraction from common cyber data

    NASA Astrophysics Data System (ADS)

    Giacobe, Nicklaus A.

    2012-06-01

    The Joint Directors of Labs Data Fusion Process Model (JDL Model) provides a framework for how to handle sensor data to develop higher levels of inference in a complex environment. Beginning from a call to leverage data fusion techniques in intrusion detection, there have been a number of advances in the use of data fusion algorithms in this subdomain of cyber security. While it is tempting to jump directly to situation-level or threat-level refinement (levels 2 and 3) for more exciting inferences, a proper fusion process starts with lower levels of fusion in order to provide a basis for the higher fusion levels. The process begins with first order entity extraction, or the identification of important entities represented in the sensor data stream. Current cyber security operational tools and their associated data are explored for potential exploitation, identifying the first order entities that exist in the data and the properties of these entities that are described by the data. Cyber events that are represented in the data stream are added to the first order entities as their properties. This work explores typical cyber security data and the inferences that can be made at the lower fusion levels (0 and 1) with simple metrics. Depending on the types of events that are expected by the analyst, these relatively simple metrics can provide insight on their own, or could be used in fusion algorithms as a basis for higher levels of inference.
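
    A minimal sketch of what level 0/1 first order entity extraction can look like on a common cyber data source is given below. The log format, field names and the deny-count metric are hypothetical stand-ins for whatever operational tool feeds the fusion process.

```python
# Minimal sketch of first order entity extraction from a hypothetical firewall-style
# log: source hosts become entities, individual events become their properties, and
# a simple per-entity deny count serves as a level 0/1 metric.
from collections import defaultdict

log_lines = [
    "2012-06-01T10:00:01 DENY src=192.0.2.7 dst=198.51.100.3 port=22",
    "2012-06-01T10:00:02 DENY src=192.0.2.7 dst=198.51.100.3 port=23",
    "2012-06-01T10:00:05 ALLOW src=192.0.2.9 dst=198.51.100.3 port=443",
]

entities = defaultdict(lambda: {"events": [], "deny_count": 0})
for line in log_lines:
    fields = dict(tok.split("=", 1) for tok in line.split() if "=" in tok)
    action = line.split()[1]
    host = entities[fields["src"]]              # first order entity: the source host
    host["events"].append({"action": action,    # cyber events become properties
                           "dst": fields["dst"],
                           "port": fields["port"]})
    if action == "DENY":
        host["deny_count"] += 1                 # simple metric for the analyst

for src, props in entities.items():
    print(src, props["deny_count"])
```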

  5. Characterization and Evaluation of Re-Refined Engine Lubricating Oil.

    DTIC Science & Technology

    1981-12-01

    performance of re-refined and virgin oils and to investigate the potential substantial equivalence of re-refined and virgin lubricating oils. ... engine deposits derived from virgin and re-refined engine oils, (2) the effects of virgin and re-refined oils on engine blowby composition and engine deposit generation were determined using a spark ignition engine, and (3) virgin and re-refined basestock production

  6. An optimized inverse modelling method for determining the location and strength of a point source releasing airborne material in urban environment

    NASA Astrophysics Data System (ADS)

    Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos

    2017-12-01

    An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. In the first step, only the source coordinates are analysed, using a correlation function of measured and calculated concentrations. In the second step, the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method also depends on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a satisfactory determination of the source. The probability of obtaining a satisfactory solution - according to specified criteria - by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
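
    A schematic sketch of this two-step idea is given below: step 1 ranks candidate source locations by the correlation between measured and modelled concentrations, and step 2 fits the emission rate by minimizing a quadratic misfit. The unit-rate concentration fields are simple placeholders; this is not the ADREA-HF implementation.

```python
# Schematic sketch of the two-step estimation described above (not ADREA-HF code).
# c_unit[i][k] is the modelled concentration at sensor k for candidate source
# location i emitting at unit rate; c_meas[k] is the measured concentration.
import numpy as np

def locate_and_rate(c_unit, c_meas):
    c_unit = np.asarray(c_unit, dtype=float)
    c_meas = np.asarray(c_meas, dtype=float)
    # Step 1: rank candidate locations by correlation of modelled vs. measured pattern.
    corr = [np.corrcoef(row, c_meas)[0, 1] for row in c_unit]
    best = int(np.argmax(corr))
    # Step 2: emission rate q minimizing the quadratic cost sum_k (q*c_unit[k] - c_meas[k])^2.
    row = c_unit[best]
    q = float(row @ c_meas) / float(row @ row)
    return best, q

c_unit = [[1.0, 0.5, 0.1],      # candidate source location 0
          [0.2, 0.9, 0.6]]      # candidate source location 1
c_meas = [0.5, 2.1, 1.4]
print(locate_and_rate(c_unit, c_meas))   # expect location 1 with a rate of about 2.3
```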

  7. A Decade of Remote Sensing River Bathymetry with the Experimental Advanced Airborne Research LiDAR

    NASA Astrophysics Data System (ADS)

    Kinzel, P. J.; Legleiter, C. J.; Nelson, J. M.; Skinner, K.

    2012-12-01

    Since 2002, the first generation of the Experimental Advanced Airborne Research LiDAR (EAARL-A) sensor has been deployed for mapping rivers and streams. We present and summarize the results of comparisons between ground truth surveys and bathymetry collected by the EAARL-A sensor in a suite of rivers across the United States. These comparisons include reaches on the Platte River (NE), Boise and Deadwood Rivers (ID), Blue and Colorado Rivers (CO), Klamath and Trinity Rivers (CA), and the Shenandoah River (VA). In addition to diverse channel morphologies (braided, single thread, and meandering), these rivers possess a variety of substrates (sand, gravel, and bedrock) and a wide range of optical characteristics which influence the attenuation and scattering of laser energy through the water column. Root mean square errors between ground truth elevations and those measured by the EAARL-A ranged from 0.15 m in rivers with relatively low turbidity and highly reflective sandy bottoms to over 0.5 m in turbid rivers with less reflective substrates. Mapping accuracy with the EAARL-A has proved challenging in pools where bottom returns are either absent in waveforms or are of such low intensity that they are treated as noise by waveform processing algorithms. Resolving bathymetry in shallow depths where near surface and bottom returns are typically convolved also presents difficulties for waveform processing routines. The results of these evaluations provide an empirical framework to discuss the capabilities and limitations of the EAARL-A sensor as well as previous generations of post-processing software for extracting bathymetry from complex waveforms. These experiences and field studies not only provide benchmarks for the evaluation of the next generation of bathymetric LiDARs for use in river mapping, but also highlight the importance of developing and standardizing more rigorous methods to characterize substrate reflectance and in-situ optical properties at study sites. They also point out the continued necessity of ground truth data for algorithm refinement and survey verification.

  8. Large-N Over the Source Physics Experiment (SPE) Phase I and Phase II Test Beds

    NASA Astrophysics Data System (ADS)

    Snelson, C. M.; Carmichael, J. D.; Mellors, R. J.; Abbott, R. E.

    2014-12-01

    One of the current challenges in the field of monitoring and verification is source discrimination of low-yield nuclear explosions from background seismicity, both natural and anthropogenic. Work is underway at the Nevada National Security Site to conduct a series of chemical explosion experiments using a multi-institutional, multi-disciplinary approach. The goal of this series of experiments, called the Source Physics Experiments (SPE), is to refine the understanding of the effect of earth structures on source phenomenology and energy partitioning in the source region, the transition of seismic energy from the near field to the far field, and the development of S waves observed in the far field. To fully explore these problems, the SPE series includes tests in both hard and soft rock geologic environments. The project comprises a number of activities, which range from characterizing the shallow subsurface to acquiring new explosion data from both the near field (<100 m) and the far field (>100 m). SPE includes a series of planned explosions (with different yields and depths of burial), which are conducted in the same hole and monitored by a diverse set of sensors recording characteristics of the explosions, ground shock, and seismo-acoustic energy propagation. This presentation focuses on imaging the full 3D wavefield over hard rock and soft rock test beds using a large number of seismic sensors. This overview presents statistical analyses of optimal sensor layout required to estimate wavefield discriminants and the planned deployment for the upcoming experiments. This work was conducted under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. Impact of different eddy covariance sensors and set-up on the annual balance of CO2 and fluxes of CH4 and latent heat in the Arctic

    NASA Astrophysics Data System (ADS)

    Goodrich, J. P.; Zona, D.; Gioli, B.; Murphy, P.; Burba, G. G.; Oechel, W. C.

    2015-12-01

    Expanding eddy covariance measurements of CO2 and CH4 fluxes in the Arctic is critical for refining the global C budget. Continuous measurements are particularly challenging because of the remote locations, low power availability, and extreme weather conditions. The necessity for tailoring instrumentation at different sites further complicates the interpretation of results and may add uncertainty to estimates of annual CO2 budgets. We investigated the influence of different sensor combinations on FCO2, latent heat (LE), and FCH4, and assessed the differences in annual FCO2 estimated with different instrumentation at the same sites. Using data from four sites across the North Slope of Alaska, we resolved FCO2 and FCH4 to within 5% using different combinations of open- and closed-path gas analyzers and to within 10% using heated and non-heated anemometers. A continuously heated anemometer increased data coverage relative to non-heated anemometers while resulting in comparable annual FCO2, despite over-estimating sensible heat fluxes by 15%. We also implemented an intermittent heating strategy whereby heating was activated only when ice or snow blockage of the transducers was detected. This resulted in data coverage (~60%) comparable to that of the continuously heated anemometer, while avoiding potential over-estimation of sensible heat and gas fluxes. We found good agreement in FCO2 and FCH4 from two closed-path and one open-path gas analyzer, despite the need for large spectral corrections of closed-path fluxes and density and temperature corrections of open-path fluxes. However, data coverage was generally greater when using closed-path analyzers, especially during cold seasons (36-40% vs 10-14% for the open path), when fluxes from Arctic regions are particularly uncertain and potentially critical to annual C budgets. Measurement of Arctic LE remains a challenge due to strong attenuation along sample tubes, even when heated, which could not be accounted for with spectral corrections.
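
    The intermittent heating strategy mentioned above can be illustrated with a toy control rule: heat only when sonic diagnostics suggest transducer icing and the air is near or below freezing. The diagnostic names and threshold values below are hypothetical, not the configuration used at the Alaska sites.

```python
# Toy sketch of the intermittent heating strategy described above: heating is enabled
# only when sonic-anemometer diagnostics suggest ice or snow on the transducers and
# the air is near or below freezing. Diagnostic names and thresholds are hypothetical.

def heater_command(diag_flag_rate, speed_of_sound_err_ms, air_temp_c,
                   flag_limit=0.05, sos_limit=2.0):
    """Return True to enable transducer heating for the next averaging interval."""
    likely_iced = (diag_flag_rate > flag_limit                 # fraction of flagged samples
                   or abs(speed_of_sound_err_ms) > sos_limit)  # deviation from expected c
    return air_temp_c < 2.0 and likely_iced

print(heater_command(diag_flag_rate=0.12, speed_of_sound_err_ms=0.5, air_temp_c=-8.0))  # True
print(heater_command(diag_flag_rate=0.01, speed_of_sound_err_ms=0.3, air_temp_c=-8.0))  # False
```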

  10. PADF RF localization experiments with multi-agent caged-MAV platforms

    NASA Astrophysics Data System (ADS)

    Barber, Christopher; Gates, Miguel; Selmic, Rastko; Al-Issa, Huthaifa; Ordonez, Raul; Mitra, Atindra

    2011-06-01

    This paper summarizes preliminary RF direction finding results generated within an AFOSR-funded testbed facility recently developed at Louisiana Tech University. This facility, the Louisiana Tech University Micro-Aerial Vehicle/Wireless Sensor Network (MAVSeN) Laboratory, has recently acquired a number of state-of-the-art MAV platforms that enable us to analyze, design, and test some of our recent results in the area of multi-platform position-adaptive direction finding (PADF) [1] [2] for localization of RF emitters in challenging embedded multipath environments. The paper includes a description of the MAVSeN Laboratory and preliminary results from the implementation of the PADF algorithm on mobile platforms. This approach to multi-platform RF direction finding is based on iterative path-loss-based (i.e., path-loss exponent) metric estimates measured across multiple platforms, which are used to develop a control law that robotically/intelligently adapts (i.e., self-adjusts) the location of each distributed/cooperative platform. The body of the paper summarizes our recent results on PADF and discusses state-of-the-art sensor mote technologies as applied to the development of a sensor-integrated caged-MAV platform for PADF applications. Also included is a discussion of recent experimental results that incorporate sample approaches to real-time single-platform data pruning, as part of a discussion of potential approaches to refining the basic PADF technique so that it integrates and performs distributed self-sensitivity and self-consistency analysis with distributed robotic/intelligent features. These techniques are extracted in analytical form from a parallel study denoted "PADF RF Localization Criteria for Multi-Model Scattering Environments". The focus here is on developing and reporting specific approaches to self-sensitivity and self-consistency within this experimental PADF framework via the exploitation of specific single-agent caged-MAV trajectories that are unique to this experiment set.
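
    The path-loss exponent at the heart of the PADF metric can be estimated from received-signal-strength samples with the standard log-distance model PL(d) = PL(d0) + 10·n·log10(d/d0). The sketch below fits n by least squares; the distances and RSS values are hypothetical, not measurements from the MAVSeN testbed.

```python
# Minimal sketch of estimating the path-loss exponent n from received-signal-strength
# samples with the log-distance model PL(d) = PL(d0) + 10*n*log10(d/d0).
# The distances and RSS values are hypothetical, not data from the MAVSeN testbed.
import numpy as np

def path_loss_exponent(distances_m, rss_dbm, d0=1.0, rss_d0_dbm=-40.0):
    d = np.asarray(distances_m, dtype=float)
    path_loss_db = rss_d0_dbm - np.asarray(rss_dbm, dtype=float)  # loss relative to d0
    x = 10.0 * np.log10(d / d0)
    n, _intercept = np.polyfit(x, path_loss_db, 1)                # slope of the fit is n
    return n

print(round(path_loss_exponent([2, 5, 10, 20], [-49, -61, -70, -79]), 2))  # about 3.0
```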

  11. Shall We Climb on the Shoulders of the Giants to Extend the Reality Horizon of Physics?

    NASA Astrophysics Data System (ADS)

    Roychoudhuri, Chandrasekhar

    2007-12-01

    After a very successful flurry of activity over a few decades to maximize the benefits of the formalism of Quantum Mechanics in connecting the micro and macro universe, the applied physics community has engineered sustained technological innovations for human social advancement. However, a significant segment of the theoretical physics community has directed its efforts essentially toward inventing realities that are esthetically pleasing to our human logics (epistemology) rather than staying focused on discovering the actual physical realities in nature driven by cosmic logics (ontology). The purpose of this paper is to formulate a Reality Epistemology that can leverage our enormous successes in science to re-focus our attention on discovering nature's realities by understanding the physical processes behind all natural interactions that collectively drive cosmic evolution forward. We underscore this deviation from seeking reality to justify the key premise of the paper. We can ``see'' (measure) the universe only through the ``eyes'' of various sensors (detectors). None of these sensors is completely known to us as yet. All sensors also have inherently limited capabilities to respond to input signals and can ``report'' only a part of what they experience. We are thus forced to develop our mathematical theories by mixing our human logics with incomplete information; hence they are all provisional and incomplete, since they predict only the correctly measured but limited reports of the detectors. Thus, we should be careful not to jump to the conclusion that we have captured all the necessary cosmic logics behind the interactions involved. We dissect the measurement process in a generic way, along with well-defined steps to apply Reality Epistemology, which will jointly allow us to develop a scientific methodology of iteratively refining our ``successful'' human logics so that they can evolve towards our goal of capturing the cosmic logics. The core content of this paper was first presented at the 2007 QTRF-4 conference at Vaxjo University [1].

  12. Coordinated traffic incident management using the I-Net embedded sensor architecture

    NASA Astrophysics Data System (ADS)

    Dudziak, Martin J.

    1999-01-01

    The I-Net intelligent embedded sensor architecture enables the reconfigurable construction of wide-area remote sensing and data collection networks employing diverse processing and data acquisition modules communicating over thin-server/thin-client protocols. Adapted initially for operation on mobile remotely piloted vehicle (RPV) platforms such as the Hornet and Ascend-I small helicopter robots, the I-Net architecture lends itself to a critical problem in the management of both spontaneous and planned traffic congestion and rerouting over major interstate thoroughfares such as the I-95 Corridor. Pre-programmed flight plans and ad hoc operator-assisted navigation of the lightweight helicopter, using an autopilot and gyroscopic stabilization augmentation units, allow daytime or nighttime over-the-horizon flights of the unit to collect and transmit real-time video imagery that may be stored or transmitted to other locations. With on-board GPS and ground-based pattern recognition capabilities to augment the standard video collection process, this approach enables traffic management and emergency response teams to plan and assist in real time in the adjustment of traffic flows in high-density or congested areas or during dangerous road conditions such as ice, snow, and hurricane storms. The I-Net architecture allows for the integration of land-based and roadside sensors within a comprehensive automated traffic management system, with communications to and from an airborne or other platform to devices in the network other than human-operated desktop computers, thereby allowing more rapid assimilation of and response to critical data. Experiments have been conducted using several modified platforms and standard video and still photographic equipment. Current research and development is focused upon modification of the modular instrumentation units to accommodate faster loading and reloading of equipment onto the RPV, extension of the I-Net architecture to enable RPV-to-RPV signaling and control, and refinement of safety and emergency mechanisms to handle RPV mechanical failure during flight.

  13. Evaluation of Passive Multilayer Cloud Detection Using Preliminary CloudSat and CALIPSO Cloud Profiles

    NASA Astrophysics Data System (ADS)

    Minnis, P.; Sun-Mack, S.; Chang, F.; Huang, J.; Nguyen, L.; Ayers, J. K.; Spangenberg, D. A.; Yi, Y.; Trepte, C. R.

    2006-12-01

    During the last few years, several algorithms have been developed to detect and retrieve multilayered clouds using passive satellite data. Assessing these techniques has been difficult due to the need for active sensors such as cloud radars and lidars that can "see" through different layers of clouds. Such sensors have been available only at a few surface sites and on aircraft during field programs. With the launch of the CALIPSO and CloudSat satellites on April 28, 2006, it is now possible to observe multilayered systems all over the globe using collocated cloud radar and lidar data. As part of the A-Train, these new active sensors are also matched in time and space with passive measurements from the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer - EOS (AMSR-E). The Clouds and the Earth's Radiant Energy System (CERES) has been developing and testing algorithms to detect ice-over-water overlapping cloud systems and to retrieve the cloud liquid water path (LWP) and ice water path (IWP) for those systems. One technique uses a combination of the CERES cloud retrieval algorithm applied to MODIS data and a microwave retrieval method applied to AMSR-E data. The combination of a CO2-slicing cloud retrieval technique with the CERES algorithms applied to MODIS data (Chang et al., 2005) is used to detect and analyze such overlapped systems that contain thin ice clouds. A third technique uses brightness temperature differences and the CERES algorithms to detect similar overlapped systems. This paper uses preliminary CloudSat and CALIPSO data to begin a global-scale assessment of these different methods. The long-term goals are to assess and refine the algorithms to aid the development of an optimal combination of the techniques to better monitor ice and liquid water clouds in overlapped conditions.

  14. Evaluation of Passive Multilayer Cloud Detection Using Preliminary CloudSat and CALIPSO Cloud Profiles

    NASA Astrophysics Data System (ADS)

    Minnis, P.; Sun-Mack, S.; Chang, F.; Huang, J.; Nguyen, L.; Ayers, J. K.; Spangenberg, D. A.; Yi, Y.; Trepte, C. R.

    2005-05-01

    During the last few years, several algorithms have been developed to detect and retrieve multilayered clouds using passive satellite data. Assessing these techniques has been difficult due to the need for active sensors such as cloud radars and lidars that can "see" through different layers of clouds. Such sensors have been available only at a few surface sites and on aircraft during field programs. With the launch of the CALIPSO and CloudSat satellites on April 28, 2006, it is now possible to observe multilayered systems all over the globe using collocated cloud radar and lidar data. As part of the A-Train, these new active sensors are also matched in time and space with passive measurements from the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer - EOS (AMSR-E). The Clouds and the Earth's Radiant Energy System (CERES) has been developing and testing algorithms to detect ice-over-water overlapping cloud systems and to retrieve the cloud liquid water path (LWP) and ice water path (IWP) for those systems. One technique uses a combination of the CERES cloud retrieval algorithm applied to MODIS data and a microwave retrieval method applied to AMSR-E data. The combination of a CO2-slicing cloud retrieval technique with the CERES algorithms applied to MODIS data (Chang et al., 2005) is used to detect and analyze such overlapped systems that contain thin ice clouds. A third technique uses brightness temperature differences and the CERES algorithms to detect similar overlapped systems. This paper uses preliminary CloudSat and CALIPSO data to begin a global-scale assessment of these different methods. The long-term goals are to assess and refine the algorithms to aid the development of an optimal combination of the techniques to better monitor ice and liquid water clouds in overlapped conditions.

  15. Dsm Extraction and Evaluation from GEOEYE-1 Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Saldaña, M. M.; Aguilar, M. A.; Aguilar, F. J.; Fernández, I.

    2012-07-01

    The newest very high resolution (VHR) commercial satellites, such as GeoEye-1 or WorldView-2, open new possibilities for cartographic applications, orthoimage generation and extraction of Digital Surface Models (DSMs). These DSMs are generated by image matching strategies from VHR satellite stereopair imagery, reconstructing the 3D surface corresponding to the first surface view of the earth, containing both microrelief (buildings, trees and so on) and bare terrain. The main aim of this work is to carry out an accuracy assessment of the DSMs extracted from a GeoEye-1 stereopair captured in August 2011. A LiDAR-derived DSM taken in the same month as the satellite imagery was used as ground truth. The influence of factors such as the number of Ground Control Points (GCPs), the sensor model tested and the geoid employed to transform ellipsoidal to orthometric heights was evaluated. Different sets of GCPs ranging from 7 to 45, two sensor models and two geoids (EGM96 and EGM08, the latter adapted to the Spanish vertical network by Spain's National Geographic Institute) were tested in this work. The photogrammetric software package used was OrthoEngine from PCI Geomatica v. 10.3.2. OrthoEngine implements both sensor models tested: (i) the physical model developed by Toutin (CCRS) and (ii) the rational function model using the rational polynomial coefficients supplied by the vendor, later refined by means of zero-order linear functions (RPC0). When highly accurate and well-distributed GCPs were used, the planimetric and vertical accuracies of DSMs generated from the GeoEye-1 Geo stereopair were always better than 0.5 m. Using only 7 GCPs and RPC0, a vertical accuracy of around 0.43 m, measured as standard deviation, was attained. The geoid used by OrthoEngine (EGM96) produced results similar to those obtained with the EGM08 geoid adapted to the Spanish vertical network.
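
    The zero-order RPC refinement (RPC0) mentioned above amounts to estimating a constant image-space shift from GCP residuals and applying it on top of the vendor RPCs. The sketch below illustrates the idea; `project_with_rpc` is a placeholder for a real RPC projection routine, and none of this is PCI Geomatica / OrthoEngine code.

```python
# Simplified sketch of a zero-order (constant shift) RPC refinement: the vendor RPCs
# are kept, and a constant row/column offset estimated from GCP residuals is applied
# in image space. `project_with_rpc` stands in for a real RPC projection routine;
# this is not PCI Geomatica / OrthoEngine code, and all numbers are synthetic.
import numpy as np

def rpc0_shift(gcps_img, gcps_ground, project_with_rpc):
    """gcps_img: (N, 2) measured (row, col); gcps_ground: (N, 3) lon, lat, height."""
    predicted = np.array([project_with_rpc(*g) for g in gcps_ground])
    residuals = np.asarray(gcps_img, dtype=float) - predicted
    return residuals.mean(axis=0)              # (d_row, d_col) zero-order correction

def apply_rpc0(point_ground, shift, project_with_rpc):
    return np.asarray(project_with_rpc(*point_ground)) + shift

# Tiny synthetic check: a fake projection biased by (1.5, -2.0) pixels.
true_proj = lambda lon, lat, h: np.array([100.0 * lat, 100.0 * lon])
biased_proj = lambda lon, lat, h: true_proj(lon, lat, h) + np.array([1.5, -2.0])
ground = [(2.0, 36.8, 50.0), (2.1, 36.9, 60.0)]
img = [true_proj(*g) for g in ground]              # "measured" image coordinates
shift = rpc0_shift(img, ground, biased_proj)
print(shift)                                       # about [-1.5, 2.0], cancelling the bias
print(apply_rpc0(ground[0], shift, biased_proj))   # back to the true position
```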

  16. 21 CFR 146.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... dextrose. (b) The term dextrose means the hydrated or anhydrous, refined monosaccharide obtained from... means an aqueous solution of inverted or partly inverted, refined or partly refined sucrose, the solids... flavorless, except for sweetness. (f) The term sugar means refined sucrose. (g) Compliance means the...

  17. 21 CFR 145.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... term dextrose means the hydrated or anhydrous, refined monosaccharide obtained from hydrolyzed starch... of inverted or partly inverted, refined or partly refined sucrose, the solids of which contain not... sweetness. (f) The term sugar means refined sucrose. (g) The terms edible organic acid and edible organic...

  18. 21 CFR 146.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... dextrose. (b) The term dextrose means the hydrated or anhydrous, refined monosaccharide obtained from... means an aqueous solution of inverted or partly inverted, refined or partly refined sucrose, the solids... flavorless, except for sweetness. (f) The term sugar means refined sucrose. (g) Compliance means the...

  19. 21 CFR 145.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... term dextrose means the hydrated or anhydrous, refined monosaccharide obtained from hydrolyzed starch... of inverted or partly inverted, refined or partly refined sucrose, the solids of which contain not... sweetness. (f) The term sugar means refined sucrose. (g) The terms edible organic acid and edible organic...

  20. 21 CFR 146.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... dextrose. (b) The term dextrose means the hydrated or anhydrous, refined monosaccharide obtained from... means an aqueous solution of inverted or partly inverted, refined or partly refined sucrose, the solids... flavorless, except for sweetness. (f) The term sugar means refined sucrose. (g) Compliance means the...

  1. 21 CFR 146.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... dextrose. (b) The term dextrose means the hydrated or anhydrous, refined monosaccharide obtained from... means an aqueous solution of inverted or partly inverted, refined or partly refined sucrose, the solids... flavorless, except for sweetness. (f) The term sugar means refined sucrose. (g) Compliance means the...

  2. Petroleum: An energy profile. [CONTAINS GLOSSARY]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-08-01

    This publication is intended as a general reference about petroleum: its origins, production, refining, marketing, and use. This report presents an overview of refined petroleum products and their use, crude oil reserves and production, refining technology and US refining capacity, the development and operation of petroleum markets, and foreign trade. A statistical supplement, an appendix describing refining operations, a glossary, and bibliographic references for additional sources of information are also included. 36 figs., 4 tabs.

  3. The blind leading the blind: Mutual refinement of approximate theories

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa

    1991-01-01

    The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.

  4. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP. [71 FR 39005, July 11, 2006] ...

  5. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP. [71 FR 39005, July 11, 2006] ...

  6. 40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply to...

  7. 40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply to...

  8. Optimization of Refining Craft for Vegetable Insulating Oil

    NASA Astrophysics Data System (ADS)

    Zhou, Zhu-Jun; Hu, Ting; Cheng, Lin; Tian, Kai; Wang, Xuan; Yang, Jun; Kong, Hai-Yang; Fang, Fu-Xin; Qian, Hang; Fu, Guang-Pan

    2016-05-01

    Vegetable insulating oil, because of its environmental friendliness, is considered an ideal substitute for mineral oil for the insulation and cooling of transformers. The main steps of the traditional refining process are alkali refining, bleaching and distillation. This refining process gives satisfactory results for small batches of insulating oil but cannot be applied to a large-capacity reaction kettle. In this paper, rapeseed oil is used as the crude oil, and the refining process is optimized for a large-capacity reaction kettle. The optimized refining process adds an acid degumming step. The alkali compound includes a sodium silicate component in the alkali refining step, and the ratio of each component is optimized. Activated clay and activated carbon are added in a 10:1 ratio in the de-colorization step, which effectively reduces the acid value and dielectric loss of the oil. Using vacuum degassing instead of distillation further reduces the acid value. When selected performance parameters of the refined oil are compared with those of mineral insulating oil, the dielectric loss of the vegetable insulating oil is still high, and further optimization measures are needed in the future.

  9. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  10. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  11. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  12. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  13. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  14. 7 CFR 1435.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... sugarcane processors. Cane sugar refiner means any person in the U.S. Customs Territory that refines raw... further refined or improved in quality and that is to be distributed for human consumption, either directly or in molasses-containing products. Edible syrups means syrups that are not to be further refined...

  15. 7 CFR 1435.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... sugarcane processors. Cane sugar refiner means any person in the U.S. Customs Territory that refines raw... further refined or improved in quality and that is to be distributed for human consumption, either directly or in molasses-containing products. Edible syrups means syrups that are not to be further refined...

  16. 7 CFR 1435.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... sugarcane processors. Cane sugar refiner means any person in the U.S. Customs Territory that refines raw... further refined or improved in quality and that is to be distributed for human consumption, either directly or in molasses-containing products. Edible syrups means syrups that are not to be further refined...

  17. 7 CFR 1435.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... sugarcane processors. Cane sugar refiner means any person in the U.S. Customs Territory that refines raw... further refined or improved in quality and that is to be distributed for human consumption, either directly or in molasses-containing products. Edible syrups means syrups that are not to be further refined...

  18. Influence of Mg on Grain Refinement of Near Eutectic Al-Si Alloys

    NASA Astrophysics Data System (ADS)

    Ravi, K. R.; Manivannan, S.; Phanikumar, G.; Murty, B. S.; Sundarraj, Suresh

    2011-07-01

    Although the grain-refinement practice is well established for wrought Al alloys, in the case of foundry alloys such as near eutectic Al-Si alloys, the underlying mechanisms and the use of grain refiners need better understanding. Conventional grain refiners such as Al-5Ti-1B are not effective in grain refining Al-Si alloys due to the poisoning effect of Si. In this work, we report the results of a newly developed grain refiner, which can effectively grain refine as well as modify eutectic and primary Si in near eutectic Al-Si alloys. Among the material choices, the grain refining response of the Al-1Ti-3B master alloy is found to be superior to that of the conventional Al-5Ti-1B master alloy. It was also found that magnesium additions of 0.2 wt pct along with the Al-1Ti-3B master alloy further enhance the grain refining efficiency in the near eutectic Al-Si alloy, thus leading to improved bulk mechanical properties. We have found that magnesium essentially scavenges the oxygen present on the surface of nucleant particles, improves wettability, and reduces the agglomeration tendency of boride particles, thereby enhancing grain refining efficiency. It allows the nucleant particles to act as potent and active nucleation sites even at magnesium additions as low as 0.2 pct with the Al-1Ti-3B master alloy.

  19. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

    In this paper, we present a dynamic refinement algorithm for the Smoothed Particle Hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
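
    A minimal sketch of the square-pattern splitting described above follows: one particle is replaced by four daughters at the corners of a square centered on the parent, with the mass split evenly and the smoothing length reduced. The separation factor and smoothing-length ratio used here are illustrative placeholders, not the optimized values derived in the paper.

```python
# Minimal sketch of the square-pattern dynamic refinement described above: one SPH
# particle is replaced by four daughters at the corners of a square centred on the
# parent. The separation factor `eps` and smoothing-length ratio `alpha` are
# illustrative placeholders, not the optimized values derived in the paper.
from dataclasses import dataclass

@dataclass
class Particle:
    x: float
    y: float
    mass: float
    h: float            # smoothing length

def refine(parent, eps=0.5, alpha=0.6):
    """Replace `parent` with 4 daughters on a square of half-width eps * h."""
    d = eps * parent.h
    offsets = [(-d, -d), (-d, d), (d, -d), (d, d)]
    return [Particle(parent.x + dx, parent.y + dy,
                     parent.mass / 4.0,          # conserve total mass
                     alpha * parent.h)           # reduced smoothing distance
            for dx, dy in offsets]

daughters = refine(Particle(x=0.0, y=0.0, mass=1.0, h=0.1))
print(sum(p.mass for p in daughters))            # 1.0 -> total mass is conserved
print(daughters[0].h)                            # about 0.06 -> smaller smoothing length
```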

  20. Firing of pulverized solvent refined coal

    DOEpatents

    Derbidge, T. Craig; Mulholland, James A.; Foster, Edward P.

    1986-01-01

    An air-purged burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired without the coking thereof on the burner components. The air-purged burner is designed for the firing of pulverized solvent refined coal in a tangentially fired boiler.

  1. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EPA with appropriate data to correct the record when the company submits its application for small... a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner...

  2. Technology transfer and evaluation for Space Station telerobotics

    NASA Technical Reports Server (NTRS)

    Price, Charles R.; Stokes, Lebarian; Diftler, Myron A.

    1994-01-01

    The international space station (SS) must take advantage of advanced telerobotics in order to maximize productivity and safety and to reduce maintenance costs. The Automation and Robotics Division at the NASA Lyndon B. Johnson Space Center (JSC) has designed, developed, and constructed the Automated Robotics Maintenance of Space Station (ARMSS) facility for the purpose of transferring and evaluating robotic technology that will reduce SS operation costs. Additionally, JSC has developed a process for expediting the transfer of technology from NASA research centers and evaluating these technologies in SS applications. Software and hardware systems developed at the research centers and NASA-sponsored universities are currently being transferred to JSC and integrated into the ARMSS facility for flight crew personnel testing. These technologies will be assessed relative to the SS baseline, and, after refinements, those technologies that provide significant performance improvements will be recommended as upgrades to the SS. Proximity sensors, vision algorithms, and manipulator controllers are among the systems scheduled for evaluation.

  3. Miss-distance indicator for tank main guns

    NASA Astrophysics Data System (ADS)

    Bornstein, Jonathan A.; Hillis, David B.

    1996-06-01

    Tank main gun systems must possess extremely high levels of accuracy to perform successfully in battle. Under some circumstances, the first round fired in an engagement may miss the intended target, and it becomes necessary to rapidly correct fire. A breadboard automatic miss-distance indicator system was previously developed to assist in this process. The system, which would be mounted on a 'wingman' tank, consists of a charged-coupled device (CCD) camera and computer-based image-processing system, coupled with a separate infrared sensor to detect muzzle flash. For the system to be successfully employed with current generation tanks, it must be reliable, be relatively low cost, and respond rapidly maintaining current firing rates. Recently, the original indicator system was developed further in an effort to assist in achieving these goals. Efforts have focused primarily upon enhanced image-processing algorithms, both to improve system reliability and to reduce processing requirements. Intelligent application of newly refined trajectory models has permitted examination of reduced areas of interest and enhanced rejection of false alarms, significantly improving system performance.

  4. U.S. Army PEM fuel cell programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, A.S.; Jacobs, R.

    The United States Army has identified the need for lightweight power sources to provide the individual soldier with continuous power for extended periods without resupply. Due to the high cost of primary batteries and the high weight of rechargeable batteries, fuel cell technology is being developed to provide a power source for the individual soldier, sensors, communications equipment and other various applications in the Army. Current programs are in the tech base area and will demonstrate Proton Exchange Membrane (PEM) Fuel Cell Power Sources with low weight and high energy densities. Fuel Cell Power Sources underwent user evaluations in 1996 that showed a power source weight reduction of 75%. The quiet operation along with the ability to refuel much like an engine was well accepted by the user and numerous applications were investigated. These programs are now aimed at further weight reduction for applications that are weight critical; system integration that will demonstrate a viable military power source; refining the user requirements; and planning for a transition to engineering development.

  5. At-sea detection of marine debris: overview of technologies, processes, issues, and options.

    PubMed

    Mace, Thomas H

    2012-01-01

    At-sea detection of marine debris presents a difficult problem, as the debris items are often relatively small and partially submerged. However, they may accumulate at water parcel boundaries or eddy lines. The application of models, satellite radar and multispectral data, and airborne remote sensing (particularly radar) to focus the search on eddies and convergence zones in the open ocean appears to be a productive avenue of investigation. A multistage modeling and remote sensing approach is proposed for the identification of areas of the open ocean where debris items are more likely to congregate. A path forward may best be achieved through the refinement of the Ghost Net procedures with the addition of a final search stage using airborne radar from a UAS simulator aircraft to detect zones of potential accumulation for direct search. Sampling strategies, direct versus indirect measurements, remote sensing resolution, sensor/platform considerations, and future state are addressed.

  6. KSC-2009-5952

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - A bow shock forms around the Constellation Program's 327-foot-tall Ares I-X test rocket traveling at supersonic speed. The rocket produces 2.96 million pounds of thrust at liftoff and goes supersonic in 39 seconds. Liftoff of the 6-minute flight test from Launch Pad 39B at NASA's Kennedy Space Center in Florida was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo courtesy of Scott Andrews

  7. KSC-2009-6029

    NASA Image and Video Library

    2009-10-30

    CAPE CANAVERAL, Fla. – At Hangar AF on Cape Canaveral Air Force Station in Florida, the spent first stage of NASA's Ares I-X rocket is secured in a slip. The solid rocket booster recovery ship Freedom Star recovered the booster after it splashed down in the Atlantic Ocean following its flight test. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  8. KSC-2009-6024

    NASA Image and Video Library

    2009-10-30

    CAPE CANAVERAL, Fla. – The solid rocket booster recovery ship Freedom Star, towing the spent first stage of NASA's Ares I-X rocket, passes through Port Canaveral in Florida. Following the launch of the Ares I-X flight test, the booster splashed down in the Atlantic Ocean and was recovered. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  9. KSC-2009-6032

    NASA Image and Video Library

    2009-10-31

    CAPE CANAVERAL, Fla. – At Hangar AF on Cape Canaveral Air Force Station in Florida, the spent first stage of NASA's Ares I-X rocket, secured in a slip, awaits inspection. The booster was recovered by the solid rocket booster recovery ship Freedom Star after it splashed down in the Atlantic Ocean following its flight test. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  10. KSC-2009-6027

    NASA Image and Video Library

    2009-10-30

    CAPE CANAVERAL, Fla. – The solid rocket booster recovery ship Freedom Star delivers the spent first stage of NASA's Ares I-X rocket to Hangar AF at Cape Canaveral Air Force Station in Florida. Following the launch of the Ares I-X flight test, the booster splashed down in the Atlantic Ocean and was recovered. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  11. KSC-2009-6028

    NASA Image and Video Library

    2009-10-30

    CAPE CANAVERAL, Fla. – At Hangar AF on Cape Canaveral Air Force Station in Florida, workers guide the spent first stage of NASA's Ares I-X rocket into a slip. The solid rocket booster recovery ship Freedom Star, in the background, recovered the booster after it splashed down in the Atlantic Ocean following its flight test. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  12. KSC-2009-6030

    NASA Image and Video Library

    2009-10-31

    CAPE CANAVERAL, Fla. – At Hangar AF on Cape Canaveral Air Force Station in Florida, the spent first stage of NASA's Ares I-X rocket is secured in a slip. The solid rocket booster recovery ship Freedom Star recovered the booster after it splashed down in the Atlantic Ocean following its flight test. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  13. KSC-2009-5947

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - Almost twice as tall as Disney's Cinderella Castle, the Constellation Program's 327-foot-tall Ares I-X test rocket races off Launch Complex 39B at NASA's Kennedy Space Center in Florida. The rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo courtesy of Scott Andrews

  14. KSC-2009-6023

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - Almost twice as tall as Disney's Cinderella Castle, NASA's 327-foot-tall Ares I-X test rocket lifts off from Launch Complex 39B at NASA's Kennedy Space Center in Florida. The rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Carl Winebarger

  15. KSC-2009-5937

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - Almost twice as tall as Disney's Cinderella Castle, the Constellation Program's 327-foot-tall Ares I-X test rocket lifts off from Launch Complex 39B at NASA's Kennedy Space Center in Florida. The rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Jack Pfaller

  16. KSC-2009-5941

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - Almost twice as tall as Disney's Cinderella Castle, the Constellation Program's 327-foot-tall Ares I-X test rocket races off Launch Complex 39B at NASA's Kennedy Space Center in Florida. The rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  17. Vision-Based Georeferencing of GPR in Urban Areas

    PubMed Central

    Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; Pagliari, Diana; Pinto, Livio

    2016-01-01

    Ground Penetrating Radar (GPR) surveying is widely used to gather accurate knowledge about the geometry and position of underground utilities. The sensor arrays need to be coupled to an accurate positioning system, like a geodetic-grade Global Navigation Satellite System (GNSS) device. However, in urban areas this approach is not always feasible because GNSS accuracy can be substantially degraded due to the presence of buildings, trees, tunnels, etc. In this work, a photogrammetric (vision-based) method for GPR georeferencing is presented. The method can be summarized in three main steps: tie point extraction from the images acquired during the survey, computation of approximate camera extrinsic parameters and finally a refinement of the parameter estimation using a rigorous implementation of the collinearity equations. A test under operational conditions is described, where accuracy of a few centimeters has been achieved. The results demonstrate that the solution was robust enough for recovering vehicle trajectories even in critical situations, such as poorly textured framed surfaces, short baselines, and low intersection angles. PMID:26805842
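
    The refinement step described above, a least-squares adjustment of the camera exterior orientation through the collinearity equations, can be illustrated with a short sketch. The pinhole model, synthetic tie points, and parameter names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: refine camera position and attitude by minimizing
# collinearity-equation residuals over a set of tie points.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_xyz, cam_pos, angles_rpy, focal):
    """Collinearity projection: world points -> image coordinates (pinhole, no distortion)."""
    R = Rotation.from_euler("xyz", angles_rpy).as_matrix()
    cam = (points_xyz - cam_pos) @ R.T          # rotate into the camera frame
    return -focal * cam[:, :2] / cam[:, 2:3]    # x = -f X/Z, y = -f Y/Z


def residuals(params, points_xyz, observed_xy, focal):
    cam_pos, angles = params[:3], params[3:]
    return (project(points_xyz, cam_pos, angles, focal) - observed_xy).ravel()


# Synthetic example: known tie points, simulated observations, perturbed initial guess.
rng = np.random.default_rng(0)
tie_points = rng.uniform([-5, -5, 8], [5, 5, 12], size=(20, 3))
true_pose = np.array([0.5, -0.3, 0.0, 0.02, -0.01, 0.05])    # X0, Y0, Z0, roll, pitch, yaw
focal = 0.05                                                  # 50 mm focal length, in metres
obs = project(tie_points, true_pose[:3], true_pose[3:], focal)
obs += rng.normal(scale=1e-5, size=obs.shape)                 # pixel-level measurement noise

initial = true_pose + rng.normal(scale=0.05, size=6)          # approximate exterior orientation
fit = least_squares(residuals, initial, args=(tie_points, obs, focal))
print("refined pose:", np.round(fit.x, 4))
```

    In practice such an adjustment would also carry interior-orientation and lens-distortion parameters and would be run over many frames along the vehicle trajectory rather than a single camera station.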

  18. Convergence in full motion video processing, exploitation, and dissemination and activity based intelligence

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Lewis, Gina

    2012-06-01

    Over the last decade, intelligence capabilities within the Department of Defense/Intelligence Community (DoD/IC) have evolved from ad hoc, single source, just-in-time, analog processing; to multi source, digitally integrated, real-time analytics; to multi-INT, predictive Processing, Exploitation and Dissemination (PED). Full Motion Video (FMV) technology and motion imagery tradecraft advancements have greatly contributed to Intelligence, Surveillance and Reconnaissance (ISR) capabilities during this timeframe. Imagery analysts have exploited events, missions and high value targets, generating and disseminating critical intelligence reports within seconds of occurrence across operationally significant PED cells. Now, we go beyond FMV, enabling All-Source Analysts to effectively deliver ISR information in a multi-INT sensor rich environment. In this paper, we explore the operational benefits and technical challenges of an Activity Based Intelligence (ABI) approach to FMV PED. Existing and emerging ABI features within FMV PED frameworks are discussed, to include refined motion imagery tools, additional intelligence sources, activity relevant content management techniques and automated analytics.

  19. An mHealth Monitoring System for Traditional Birth Attendant-led Antenatal Risk Assessment in Rural Guatemala

    PubMed Central

    Stroux, Lisa; Martinez, Boris; Ixen, Enma Coyote; King, Nora; Hall-Clifford, Rachel; Rohloff, Peter; Clifford, Gari D.

    2016-01-01

    Limited funding for medical technology, low levels of education and poor infrastructure for delivering and maintaining technology severely limit medical decision support in low- and middle-income countries. Perinatal and maternal mortality is of particular concern with millions dying every year from potentially treatable conditions. Guatemala has one of the worst maternal mortality ratios, the highest incidence of intrauterine growth restriction (IUGR), and one of the lowest gross national incomes per capita within Latin America. To address the lack of decision support in rural Guatemala, a smartphone-based system is proposed including peripheral sensors, such as a handheld Doppler for the identification of fetal compromise. Designed for use by illiterate birth attendants, the system uses pictograms, audio guidance, local and cloud processing, SMS alerts and voice calling. The initial prototype was evaluated on 22 women in highland Guatemala. Results were fed back into the refinement of the system, currently undergoing RCT evaluation. PMID:27696915

  20. Monitoring early hydration of reinforced concrete structures using structural parameters identified by piezo sensors via electromechanical impedance technique

    NASA Astrophysics Data System (ADS)

    Talakokula, Visalakshi; Bhalla, Suresh; Gupta, Ashok

    2018-01-01

    Concrete is the most widely used material in civil engineering construction. Its life begins when the hydration process is activated after mixing the cement granulates with water. In this paper, a non-dimensional hydration parameter, obtained from piezoelectric ceramic (PZT) patches bonded to rebars embedded inside concrete, is employed to monitor the early age hydration of concrete. The non-dimensional hydration parameter is derived from the equivalent stiffness determined from the piezo-impedance transducers using the electro-mechanical impedance (EMI) technique. The focus of the study is to monitor the hydration process of cementitious materials commencing in the first hours and continuing until 28 days, using a single non-dimensional parameter. The experimental results show that the proposed piezo-based non-dimensional hydration parameter is very effective in monitoring the early age hydration, as it has been derived from the refined structural impedance parameters, obtained by eliminating the PZT contribution, and using both the real and imaginary components of the admittance signature.

  1. An mHealth monitoring system for traditional birth attendant-led antenatal risk assessment in rural Guatemala.

    PubMed

    Stroux, Lisa; Martinez, Boris; Coyote Ixen, Enma; King, Nora; Hall-Clifford, Rachel; Rohloff, Peter; Clifford, Gari D

    Limited funding for medical technology, low levels of education and poor infrastructure for delivering and maintaining technology severely limit medical decision support in low- and middle-income countries. Perinatal and maternal mortality is of particular concern with millions dying every year from potentially treatable conditions. Guatemala has one of the worst maternal mortality ratios, the highest incidence of intra-uterine growth restriction (IUGR), and one of the lowest gross national incomes per capita within Latin America. To address the lack of decision support in rural Guatemala, a smartphone-based system is proposed including peripheral sensors, such as a handheld Doppler for the identification of foetal compromise. Designed for use by illiterate birth attendants, the system uses pictograms, audio guidance, local and cloud processing, SMS alerts and voice calling. The initial prototype was evaluated on 22 women in highland Guatemala. Results were fed back into the refinement of the system, currently undergoing RCT evaluation.

  2. KSC-2009-5950

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - Nearly twice as tall as the space shuttle, the Constellation Program's 327-foot-tall Ares I-X test rocket races off Launch Complex 39B at NASA's Kennedy Space Center in Florida. The rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo courtesy of Scott Andrews

  3. KSC-2009-5959

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. – A fiery blaze trails the Ares I-X test rocket as it takes off from Launch Pad 39B at NASA's Kennedy Space Center in Florida at 11:30 a.m. EDT Oct. 28. Constellation Program's 327-foot-tall rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/ Kenny Allen

  4. KSC-2009-5962

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. – Two of the lightning towers frame the Ares I-X test rocket as it takes off from Launch Pad 39B at NASA's Kennedy Space Center in Florida at 11:30 a.m. EDT Oct. 28. NASA’s Constellation Program's 327-foot-tall rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/ Sandra Joseph and Kevin O'Connell

  5. KSC-2009-5968

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. – NASA's Ares I-X test rocket ignites its first stage at Launch Pad 39B at NASA's Kennedy Space Center in Florida at 11:30 a.m. EDT on Oct. 28. The Constellation Program's 327-foot-tall rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/ George Roberts and Tony Gray

  6. KSC-2009-5913

    NASA Image and Video Library

    2009-10-27

    CAPE CANAVERAL, Fla. – At Launch Pad 39B at NASA's Kennedy Space Center in Florida, the rotating service structure has been rolled back from the Constellation Program's 327-foot-tall Ares I-X rocket, sitting atop its mobile launcher platform, during preparations for launch. The transfer of the pad from the Space Shuttle Program to the Constellation Program took place May 31. Modifications made to the pad include the removal of shuttle unique subsystems, such as the orbiter access arm and a section of the gaseous oxygen vent arm, and the installation of three 600-foot lightning towers, access platforms, environmental control systems and a vehicle stabilization system. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. The Ares I-X flight test is targeted for Oct. 27. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  7. KSC-2009-5915

    NASA Image and Video Library

    2009-10-27

    CAPE CANAVERAL, Fla. – Sunrise at Launch Pad 39B at NASA's Kennedy Space Center in Florida reveals the rotating service structure and the arms of the vehicle stabilization system have been retracted from around the Constellation Program's 327-foot-tall Ares I-X rocket for launch. The transfer of the pad from the Space Shuttle Program to the Constellation Program took place May 31. Modifications made to the pad include the removal of shuttle unique subsystems, such as the orbiter access arm and a section of the gaseous oxygen vent arm, and the installation of three 600-foot lightning towers, access platforms, environmental control systems and a vehicle stabilization system. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. The Ares I-X flight test is targeted for Oct. 27. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  8. KSC-2009-5914

    NASA Image and Video Library

    2009-10-27

    CAPE CANAVERAL, Fla. – At Launch Pad 39B at NASA's Kennedy Space Center in Florida, xenon lights illuminate the Constellation Program's 327-foot-tall Ares I-X rocket after the rotating service structure has been retracted from around it for launch. The transfer of the pad from the Space Shuttle Program to the Constellation Program took place May 31. Modifications made to the pad include the removal of shuttle unique subsystems, such as the orbiter access arm and a section of the gaseous oxygen vent arm, and the installation of three 600-foot lightning towers, access platforms, environmental control systems and a vehicle stabilization system. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. The Ares I-X flight test is targeted for Oct. 27. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  9. KSC-2009-5971

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. – NASA's Ares I-X test rocket climbs into the skies above Launch Pad 39B at NASA's Kennedy Space Center in Florida at 11:30 a.m. EDT on Oct. 28. NASA’s Constellation Program's 327-foot-tall rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/ George Roberts and Tony Gray

  10. KSC-2009-5972

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. – NASA's Ares I-X test rocket flies high above Launch Pad 39B at Kennedy Space Center in Florida at 11:30 a.m. EDT on Oct. 28. NASA’s Constellation Program's 327-foot-tall rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/George Roberts and Tom Farrar

  11. KSC-2009-5917

    NASA Image and Video Library

    2009-10-27

    CAPE CANAVERAL, Fla. – Daybreak at Launch Pad 39B at NASA's Kennedy Space Center in Florida reveals the rotating service structure rolled back from around the Constellation Program's 327-foot-tall Ares I-X rocket for launch. The transfer of the pad from the Space Shuttle Program to the Constellation Program took place May 31. Modifications made to the pad include the removal of shuttle unique subsystems, such as the orbiter access arm and a section of the gaseous oxygen vent arm, and the installation of three 600-foot lightning towers, access platforms, environmental control systems and a vehicle stabilization system. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. The Ares I-X flight test is targeted for Oct. 27. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  12. KSC-2009-5973

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. – The Ares I-X test rocket launches into a bright Florida sky from Launch Pad 39B at NASA's Kennedy Space Center in Florida at 11:30 a.m. EDT on Oct. 28. NASA’s Constellation Program's 327-foot-tall rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/George Roberts and Tom Farrar

  13. Evaluation of spatial, radiometric and spectral thematic mapper performance for coastal studies

    NASA Technical Reports Server (NTRS)

    Klemas, V. (Principal Investigator)

    1983-01-01

    An area along the southeastern shore of the Chesapeake Bay was subsetted from TM imagery. The subsetted image was then enhanced and classified using an ERDAS 400 system. Results obtained were compared with a chart showing the distribution of both Zostera marina and Ruppia maritima in the Vaucluse Shores area, which supports a large community of SAV. Radiative transfer models describing the irradiance reflectance of a water column containing SAV are being refined. Radiative transfer theory was used to model upwelling radiance for an orbiting sensor viewing an estuarine environment. Upwelling radiance was calculated for a clear maritime atmosphere, an optically shallow estuary of either clear or turbid water, and one of three bottom types: vegetation, sand, or mud using TM bands 1, 2, and 3 and MSS bands 4 and 5. A spectral quality index was defined similar to the equation for apparent contrast and used to evaluate the relative effectiveness of TM and MSS bands in detecting submerged vegetation.

  14. Color changing photonic crystals detect blast exposure

    PubMed Central

    Cullen, D. Kacy; Xu, Yongan; Reneer, Dexter V.; Browne, Kevin D.; Geddes, James W.; Yang, Shu; Smith, Douglas H.

    2010-01-01

    Blast-induced traumatic brain injury (bTBI) is the “signature wound” of the current wars in Iraq and Afghanistan. However, with no objective information of relative blast exposure, warfighters with bTBI may not receive appropriate medical care and are at risk of being returned to the battlefield. Accordingly, we have created a colorimetric blast injury dosimeter (BID) that exploits material failure of photonic crystals to detect blast exposure. Appearing like a colored sticker, the BID is fabricated in photosensitive polymers via multi-beam interference lithography. Although very stable in the presence of heat, cold or physical impact, sculpted micro- and nano-structures of the BID are physically altered in a precise manner by blast exposure, resulting in color changes that correspond with blast intensity. This approach offers a lightweight, power-free sensor that can be readily interpreted by the naked eye. Importantly, with future refinement this technology may be deployed to identify soldiers exposed to blast at levels suggested to be supra-threshold for non-impact blast-induced mild TBI. PMID:21040795

  15. KSC-2009-5942

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - With more than 23 times the power output of the Hoover Dam, the Constellation Program's Ares I-X test rocket zooms off Launch Complex 39B at NASA's Kennedy Space Center in Florida. The rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Kim Shiflett

  16. KSC-2009-5938

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - With more than 23 times the power output of the Hoover Dam, the Constellation Program's Ares I-X test rocket zooms off Launch Complex 39B at NASA's Kennedy Space Center in Florida. The rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Jack Pfaller

  17. KSC-2009-6021

    NASA Image and Video Library

    2009-10-28

    CAPE CANAVERAL, Fla. - With more than 23 times the power output of the Hoover Dam, NASA's Ares I-X test rocket zooms off Launch Complex 39B at NASA's Kennedy Space Center in Florida. The rocket produces 2.96 million pounds of thrust at liftoff and reaches a speed of 100 mph in eight seconds. Liftoff of the 6-minute flight test was at 11:30 a.m. EDT Oct. 28. This was the first launch from Kennedy's pads of a vehicle other than the space shuttle since the Apollo Program's Saturn rockets were retired. The parts used to make the Ares I-X booster flew on 30 different shuttle missions ranging from STS-29 in 1989 to STS-106 in 2000. The data returned from more than 700 sensors throughout the rocket will be used to refine the design of future launch vehicles and bring NASA one step closer to reaching its exploration goals. For information on the Ares I-X vehicle and flight test, visit http://www.nasa.gov/aresIX. Photo credit: NASA/Carl Winebarger

  18. On the ligand binding profile and desensitization of plant ionotropic glutamate receptor (iGluR)-like channels functioning in MAMP-triggered Ca²⁺ influx.

    PubMed

    Kwaaitaal, Mark; Maintz, Jens; Cavdar, Meltem; Panstruga, Ralph

    2012-11-01

    The generation of intracellular microbe-associated molecular pattern (MAMP)-triggered Ca²⁺ transients was recently demonstrated to involve ionotropic Glutamate Receptor (iGluR)-like channels in Arabidopsis and tobacco. Here we elaborate on our previous findings and refine our insights in the putative agonist binding profile and potential mode of desensitization of MAMP-activated plant iGluRs. Based on results from pharmacological inhibition and desensitization experiments, we propose that plant iGluR complexes responsible for the MAMP-triggered Ca²⁺ signature have a binding profile that combines the specificities of mammalian NMDA- and non-NMDA types of iGluRs, possibly reflecting the evolutionary history of plant and animal iGluRs. We further hypothesize that, analogous to the mammalian NMDA-NR1 receptor, desensitization of plant iGluR-like channels might involve binding of the ubiquitous Ca²⁺ sensor calmodulin to a cytoplasmic C-terminal domain.

  19. Practical Advances in Petroleum Processing

    NASA Astrophysics Data System (ADS)

    Hsu, Chang S.; Robinson, Paul R.

    "This comprehensive book by Robinson and Hsu will certainly become the standard text book for the oil refining business...[A] must read for all who are associated with oil refining." - Dr. Walter Fritsch, Senior Vice President Refining, OMV "This book covers a very advanced horizon of petroleum processing technology. For all refiners facing regional and global environmental concerns, and for those who seek a more sophisticated understanding of the refining of petroleum resources, this book has been long in coming." - Mr. Naomasa Kondo, Cosmo Oil Company, Ltd.

  20. Aeromagnetic survey by a model helicopter at the ruin of ironwork refinement

    NASA Astrophysics Data System (ADS)

    Funaki, M.; Nishioka, T.

    2007-12-01

    Detecting magnetic anomalies produced by small-scale magnetic sources, such as archaeological or historical ruins, from a full-size helicopter is difficult because low-altitude flights over narrow areas are restricted. Although relatively small unmanned helicopters have been commercialized for agricultural use, they are too expensive for aeromagnetic surveys. We have therefore developed a small autonomous unmanned helicopter for aeromagnetic surveying by modifying a model helicopter. A model helicopter (Hirobo Co., SF40) with a 40 cc gasoline engine, a length of 143 cm from nose to tail, and a dry weight of 15 kg was selected for this study. The magnetic field radiated at the bottom-center of the SF40's skid was total field (R) = 3511 nT, inclination (I) = 12 degrees and declination (D) = 138 degrees; it decreased to about 1 nT at 3 m below the skid during hovering. When the SF40 was covered with a magnetic shield film (Amolic sheet), the distance at which 1 nT was measured shrank to 2 m. Because shielding the whole body with the film is not practical for reliable and safe flights, only the servomotors, which carry the strongest magnetization, were shielded. Autonomous flights based on GPS data were successful, but because the control system was too large and heavy for the SF40, a simpler and smaller navigation system is being developed for this project. The magnetometer system consists of a 3-axis fluxgate magnetometer, data logger, GPS and battery, and records the x, y and z magnetic field components, latitude, longitude, altitude and number of satellites every second for up to 3 hours; its total weight is 400 g. The system was either suspended 2 m below the skid by a rope (bird magnetometer) or mounted 2 m in front of the nose on a carbon-fiber pipe (stinger magnetometer) to keep it away from the magnetic field of the SF40. The bird configuration proved unsuitable because the swinging sensor produced strong noise. A 15th-century archaeological ruin of an ironwork refinement site in western Japan was surveyed with the stinger magnetometer. The survey area was 70 x 20 m on a gentle slope. The helicopter was flown manually at a roughly constant altitude (4-8 m above the surface) and speed (1 m/s). The results showed strong anomalies of 500 nT at the NW corner of the area, consistent with the location of the refinement works. These results indicate that the model helicopter is a useful tool for locating ironwork refinement sites, rather than relying only on the intuition and experience of archaeologists.
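
    As a rough illustration of the post-processing implied above, converting the logged 3-axis fluxgate components and GPS positions into a total-field anomaly and flagging the strongest value, the sketch below uses invented sample records and a simple median regional correction; it is not the survey's actual processing chain.

```python
# Minimal sketch: total-field anomaly from logged 3-axis fluxgate samples.
import numpy as np

# One record per second: x, y position in metres (from GPS) and field components in nT.
records = np.array([
    # x,    y,    Bx,      By,      Bz
    [ 2.0,  3.0, 31000.0, -2500.0, 35000.0],
    [10.0,  5.0, 31020.0, -2480.0, 35010.0],
    [65.0, 18.0, 31250.0, -2300.0, 35400.0],   # near the suspected ironworks
    [40.0, 10.0, 30990.0, -2510.0, 34995.0],
])

xy = records[:, :2]
total_field = np.linalg.norm(records[:, 2:5], axis=1)     # |B| in nT
anomaly = total_field - np.median(total_field)            # crude regional removal

strongest = np.argmax(np.abs(anomaly))
print(f"max anomaly {anomaly[strongest]:.0f} nT at x={xy[strongest, 0]:.0f} m, y={xy[strongest, 1]:.0f} m")
```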

  1. Gary Refining Company emerges from Chapter 11 bankruptcy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-09-01

    On July 24, 1986 Gary Refining Company, Inc. announced that the Reorganization Plan for Gary Refining Company, Inc., Gary Refining Company, and Mesa Refining, Inc. had been approved by the United States Bankruptcy Court (District of Colorado). The companies filed for protection from creditors on March 4, 1985 under Chapter 11 of the United States Bankruptcy Code. Payments to creditors are expected to begin upon start-up of the Gary Refining Company (GRC) refinery in Fruita, Colorado after delivery of shale oil from Union Oil's Parachute Creek plant. In the interim, GRC will continue to explore options for possible startup (on a full scale or partial basis) prior to that time.

  2. Effect of Grain Refining on Defect Formation in DC Cast Al-Zn-Mg-Cu Alloy Billet

    NASA Astrophysics Data System (ADS)

    Nadella, Ravi; Eskin, Dmitry; Katgerman, Laurens

    In direct chill (DC) casting, the effect of grain refining on the prominent defects such as hot cracking and macrosegregation remains poorly understood, especially for multi-component commercial aluminum alloys. In this work, DC casting experiments were conducted on a 7075 alloy with and without grain refining at two casting speeds. The grain refiner was introduced either in the launder or in the furnace. The concentration profiles of Zn, Cu and Mg, measured along the billet diameter, showed that increasing the casting speed raises the segregation levels, but grain refining does not seem to have a noticeable effect. However, the hot cracking tendency is significantly reduced with grain refining, and the crack is observed to terminate when the grain refiner is introduced at the lower casting speed. These experimental results are correlated with microstructural observations such as grain size and morphology, and the occurrence of floating grains.

  3. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
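
    The threshold-selection idea described above, flagging cells whose refinement parameter exceeds a value chosen at the knee of the flagged-count-versus-threshold curve, can be sketched as follows. The synthetic parameter values and the discrete curvature estimate are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: choose a refinement threshold from the curvature of the
# flagged-cell-count curve, then flag cells for refinement.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-cell refinement parameter: mostly smooth flow plus a few vortex/shock cells.
refine_param = np.concatenate([rng.exponential(0.01, 950), rng.uniform(0.2, 1.0, 50)])

thresholds = np.linspace(refine_param.min(), refine_param.max(), 200)
flagged = np.array([(refine_param > t).sum() for t in thresholds])

# Discrete curvature of the flagged-count curve; pick the threshold at the knee.
curvature = np.abs(np.gradient(np.gradient(flagged.astype(float), thresholds), thresholds))
knee = thresholds[np.argmax(curvature)]

cells_to_refine = np.where(refine_param > knee)[0]
print(f"threshold {knee:.3f} flags {cells_to_refine.size} of {refine_param.size} cells")
```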

  4. On macromolecular refinement at subatomic resolution with interatomic scatterers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2007-11-09

    A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for the full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small- and macro-molecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  5. On macromolecular refinement at subatomic resolution with interatomic scatterers

    PubMed Central

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre

    2007-01-01

    A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035

  6. On macromolecular refinement at subatomic resolution with interatomic scatterers.

    PubMed

    Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre

    2007-11-01

    A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 A) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 A, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  7. Deformable complex network for refining low-resolution X-ray structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu

    2015-10-27

    A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it constantly leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.

  8. US refining margin trend: austerity continues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Should crude oil prices hold near current levels in 1988, US refining margins might improve little, if at all. If crude oil prices rise, margins could blush pink or worse. If they drop, US refiners would still probably not see much margin improvement. In fact, if crude prices fall, they could set off another free fall in products markets and threaten refiner survival. Volatility in refined products markets and low product demand growth are the underlying reasons for caution or pessimism as the new year approaches. Recent directional patterns in refining margins are scrutinized in this issue. This issue also contains the following: (1) the ED refining netback data for the US Gulf and West Coasts, Rotterdam, and Singapore for late November, 1987; and (2) the ED fuel price/tax series for countries of the Eastern Hemisphere, November, 1987 edition. 4 figures, 6 tables.

  9. 19 CFR 19.18 - Smelting and refining; allowance for wastage; withdrawal for consumption.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Smelting and refining; allowance for wastage... OF MERCHANDISE THEREIN Smelting and Refining Warehouses § 19.18 Smelting and refining; allowance for... liquidation of the entry for losses on copper, lead, and zinc content of metal-bearing materials, pursuant to...

  10. 19 CFR 19.18 - Smelting and refining; allowance for wastage; withdrawal for consumption.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Smelting and refining; allowance for wastage... OF MERCHANDISE THEREIN Smelting and Refining Warehouses § 19.18 Smelting and refining; allowance for... liquidation of the entry for losses on copper, lead, and zinc content of metal-bearing materials, pursuant to...

  11. 77 FR 52021 - Proposed CERCLA Administrative Settlement Agreement and Order on Consent for the Mercury Refining...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-28

    ... and Order on Consent for the Mercury Refining Superfund Site, Towns of Guilderland and Colonie, Albany... Management of Michigan, Inc. (hereafter ``Settling Parties'') pertaining to the Mercury Refining Superfund... Superfund Mercury Refining Superfund Site Special Account, which combined total $79,028.49. Each Settling...

  12. 76 FR 61472 - Revised Fiscal Year 2011 Tariff-Rate Quota Allocations for Refined Sugar

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-04

    ... Allocations for Refined Sugar AGENCY: Office of the United States Trade Representative. ACTION: Notice... (TRQ) for imported refined sugar for entry through November 30, 2011. DATES: Effective Date: October 4... States maintains a tariff-rate quota for imports of refined sugar. Section 404(d)(3) of the Uruguay Round...

  13. 7 CFR 1530.104 - Application for a license.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... OF AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM... of any co-packer(s); (4) In the case of a refined sugar product, the polarity of the product and the formula proposed by the refiner for calculating the refined sugar in the product; (5) In the case of a...

  14. Principles of minimum cost refining for optimum linerboard strength

    Treesearch

    Thomas J. Urbanik; Jong Myoung Won

    2006-01-01

    The mechanical properties of paper at a single basis weight and a single targeted refining freeness level have traditionally been used to compare papers. Understanding the economics of corrugated fiberboard requires a more global characterization of the variation of mechanical properties and refining energy consumption with freeness. The cost of refining energy to...

  15. 7 CFR 1530.104 - Application for a license.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... OF AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM... of any co-packer(s); (4) In the case of a refined sugar product, the polarity of the product and the formula proposed by the refiner for calculating the refined sugar in the product; (5) In the case of a...

  16. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2 Benzene...

  17. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2 Benzene...

  18. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2 Benzene...

  19. 30 CFR 208.4 - Royalty oil sales to eligible refiners.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Royalty oil sales to eligible refiners. 208.4... MANAGEMENT SALE OF FEDERAL ROYALTY OIL General Provisions § 208.4 Royalty oil sales to eligible refiners. (a... and defense. The Secretary will review these items and will determine whether eligible refiners have...

  20. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking.

    PubMed

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-08-18

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.

  1. Strengthening and Improving Yield Asymmetry of Magnesium Alloys by Second Phase Particle Refinement Under the Guidance of Integrated Computational Materials Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dongsheng; Lavender, Curt

    2015-05-08

    Improving yield strength and asymmetry is critical to expand applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering will improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second phase grain refinement model and a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and asymmetry approaches one with decreasing grain size, achieved by increasing precipitate volume fraction or decreasing precipitate size.

  2. The Influence of Grain Refiners on the Efficiency of Ceramic Foam Filters

    NASA Astrophysics Data System (ADS)

    Towsey, Nicholas; Schneider, Wolfgang; Krug, Hans-Peter; Hardman, Angela; Keegan, Neil J.

    An extensive program of work has been carried out to evaluate the efficiency of ceramic foam filters under carefully controlled conditions. Work reported at previous TMS meetings showed that in the absence of grain refiners, ceramic foam filters have the capacity for high filtration efficiency and consistent, reliable performance. The current phase of the investigation focuses on the impact grain refiner additions have on filter performance. The high filtration efficiencies obtained using 50 or 80 ppi CFFs in the absence of grain refiners diminish when Al-3%Ti-1%B grain refiners are added. This, together with the impact of incoming inclusion loading on filter performance and the level of grain refiner addition, is considered in detail. The new generation Al-3%Ti-0.15%C grain refiner has also been included. At typical addition levels (1 kg/tonne) the effect on filter efficiency is similar to that for TiB2-based grain refiners. The work was again conducted on a production scale using AA1050 alloy. Metal quality was determined using LiMCA and PoDFA. Spent filters were also analysed.

  3. REFMAC5 for the refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murshudov, Garib N., E-mail: garib@ysbl.york.ac.uk; Skubák, Pavol; Lebedev, Andrey A.

    The general principles behind the macromolecular crystal structure refinement program REFMAC5 are described. This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, ‘jelly-body’ restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback–Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography.

  4. Separation of Lead from Crude Antimony by Pyro-Refining Process with NaPO3 Addition

    NASA Astrophysics Data System (ADS)

    Ye, Longgang; Hu, Yuejie; Xia, Zhimei; Chen, Yongming

    2016-06-01

    The main purpose of this study was to separate lead from crude antimony through an oxidation pyro-refining process and by using sodium metaphosphate as a lead elimination reagent. The process parameters that will affect the refining results were optimized experimentally under controlled conditions, such as the sodium metaphosphate charging dosage, the refining temperature and duration, and the air flow rate, to determine their effect on the lead content in refined antimony and the lead removal rate. A minimum lead content of 0.0522 wt.% and a 98.6% lead removal rate were obtained under the following optimal conditions: a NaPO3 dosage of 15 wt.% relative to the antimony (W_NaPO3 = 15% W_Sb, where W denotes weight), a refining temperature of 800°C, a refining time of 30 min, and an air flow rate of 3 L/min. X-ray diffractometry and scanning electron microscopy showed that high-purity antimony was obtained. The smelting operation is free from smoke or ammonia pollution when using monobasic sodium phosphate or ammonium dihydrogen phosphate as the lead elimination reagent. However, this refining process can also remove a certain amount of sulfur, cobalt, and silicon simultaneously, and smelting results also suggest that sodium metaphosphate can be used as a potential lead elimination reagent for bismuth and copper refining.

  5. Data-driven simulations of the Landsat Data Continuity Mission (LDCM) platform

    NASA Astrophysics Data System (ADS)

    Gerace, Aaron; Gartley, Mike; Schott, John; Raqueño, Nina; Raqueño, Rolando

    2011-06-01

    The Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) are two new sensors being developed by the Landsat Data Continuity Mission (LDCM) that will extend over 35 years of archived Landsat data. In a departure from the whiskbroom design used by all previous generations of Landsat, the LDCM system will employ a pushbroom technology. Although the newly adopted modular array, pushbroom architecture has several advantages over the previous whiskbroom design, registration of the multi-spectral data products is a concern. In this paper, the Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to simulate an LDCM collection, which gives the team access to data that would not otherwise be available prior to launch. The DIRSIG model was used to simulate the two-instrument LDCM payload in order to study the geometric and radiometric impacts of the sensor design on the proposed processing chain. The Lake Tahoe area located in eastern California was chosen for this work because of its dramatic change in elevation, which was ideal for studying the geometric effects of the new Landsat sensor design. Multi-modal datasets were used to create the Lake Tahoe site model for use in DIRSIG. National Elevation Dataset (NED) data were used to create the digital elevation map (DEM) required by DIRSIG, QuickBird data were used to identify different material classes in the scene, and ASTER and Hyperion spectral data were used to assign radiometric properties to those classes. In order to model a realistic Landsat orbit in these simulations, orbital parameters were obtained from a Landsat 7 two-line element set and propagated with the SGP4 orbital position model. Line-of-sight vectors defining how the individual detector elements of the OLI and TIRS instruments project through the optics were measured and provided by NASA. Additionally, the relative spectral response functions for the 9 bands of OLI and the 2 bands of TIRS were measured and provided by NASA. The instruments were offset on the virtual satellite and data recorders used to generate ephemeris data for downstream processing. Finally, potential platform jitter spectra were measured and provided by NASA and incorporated into the simulations. Simulated imagery generated by the model was incrementally provided to the rest of the LDCM team in a spiral development cycle to constantly refine the simulations.
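
    The orbit-propagation step mentioned above can be sketched with the open-source python-sgp4 package. The orbital elements below are rough, hand-picked values for a Landsat-like sun-synchronous orbit (about 705 km altitude, 98.2-degree inclination) and stand in for the Landsat 7 two-line element set actually used; they are illustrative assumptions only.

```python
# Minimal sketch: propagate a Landsat-like orbit with the SGP4 model (python-sgp4 package).
import math
from sgp4.api import Satrec, WGS72, jday

sat = Satrec()
sat.sgp4init(
    WGS72,                          # gravity model
    'i',                            # improved operation mode
    7,                              # arbitrary satellite number for this sketch
    22000.5,                        # epoch: days since 1949 December 31 00:00 UT (illustrative)
    1.0e-5,                         # bstar drag term (illustrative)
    0.0,                            # ndot (not used by the propagation itself)
    0.0,                            # nddot
    0.0001,                         # eccentricity (near-circular)
    math.radians(90.0),             # argument of perigee
    math.radians(98.2),             # inclination (sun-synchronous)
    math.radians(0.0),              # mean anomaly
    14.57 * 2.0 * math.pi / 1440,   # mean motion in radians per minute (~14.57 rev/day)
    math.radians(95.0),             # right ascension of ascending node
)

jd, fr = jday(2011, 6, 1, 18, 30, 0)   # sample UTC epoch for the simulated collection
err, r, v = sat.sgp4(jd, fr)           # position (km) and velocity (km/s) in the TEME frame
print(err, r, v)
```

    The returned position and velocity states would, in a full simulation, feed the ephemeris records consumed by the downstream geometric processing described above.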

  6. Low-field NMR logging sensor for measuring hydraulic parameters of model soils

    NASA Astrophysics Data System (ADS)

    Sucre, Oscar; Pohlmeier, Andreas; Minière, Adrien; Blümich, Bernhard

    2011-08-01

    Knowing the exact hydraulic parameters of soils is very important for improving water management in agriculture and for the refinement of climate models. Up to now, however, the investigation of such parameters has required applying two techniques simultaneously, which is time-consuming and invasive. The objective of the current study is therefore to present a single, non-invasive technique for measuring the hydraulic parameters of model soils using low-field nuclear magnetic resonance (NMR). Two model soils, a clay and a sand, were each filled into a 2 m-long acetate column with an integrated PVC tube. After the soils were completely saturated with water, a low-field NMR sensor was moved up and down inside the PVC tube to quantitatively measure the initial water content of each soil sample along the whole column. Both columns were then allowed to drain while the NMR sensor was held at a fixed depth to follow the water content of that soil slice. Once hydraulic equilibrium was reached in each column, a final moisture profile was taken along the whole column. Three curves were thus generated: (1) the initial moisture profile, (2) the evolution of the moisture depletion at the fixed depth, and (3) the final moisture profile. All three curves were then inverse-analyzed with a MATLAB code against numerical data produced with the van Genuchten-Mualem model, yielding a set of hydraulic parameter values (α, n, θr and θs) for the soils under study. Additionally, the complete decaying NMR signal could be analyzed through inverse Laplace transformation and averaged in 1/T2 space. By measuring the decay in pure water, the effect of the sample on the relaxation could be estimated from the obtained spectra. The shift of the sample-related average <1/T2,sample> with decreasing saturation indicates an enhancement of the surface relaxation as the soil dries, in agreement with results found by other authors. In conclusion, this mobile low-field NMR technique has proven to be a fast and non-invasive means of investigating the hydraulic behavior of soils and of exploring microscopic aspects of the water retained in them. In the future, the sensor should allow easy soil moisture measurements in the field.
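
    The inverse analysis described above fits the van Genuchten-Mualem model to measured moisture data. The sketch below shows the retention-curve part of such a fit on a small synthetic data set; the observations and starting values are illustrative assumptions, and the study's actual inversion was performed in MATLAB over full NMR moisture profiles.

```python
# Minimal sketch: fit the van Genuchten retention function to moisture observations.
import numpy as np
from scipy.optimize import curve_fit


def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of suction head h (cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m


# Synthetic observations: suction head (cm) versus measured water content.
head = np.array([1.0, 5.0, 10.0, 30.0, 60.0, 100.0, 300.0, 1000.0])
theta_obs = np.array([0.42, 0.41, 0.39, 0.33, 0.27, 0.22, 0.14, 0.09])

p0 = [0.05, 0.43, 0.05, 1.5]                      # theta_r, theta_s, alpha (1/cm), n
popt, _ = curve_fit(van_genuchten, head, theta_obs, p0=p0)
theta_r, theta_s, alpha, n = popt
print(f"theta_r={theta_r:.3f}, theta_s={theta_s:.3f}, alpha={alpha:.4f} 1/cm, n={n:.2f}")
```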

  7. Annite stability revised. 1. Hydrogen-sensor data for the reaction annite = sanidine + magnetite + H2

    NASA Astrophysics Data System (ADS)

    Dachs, E.

    1994-08-01

    In P-T-log fO2 space, the stability of annite (ideally KFe3(2+)(OH)2AlSi3O10) at high fO2 (low fH2) is limited by the reaction: annite = sanidine + magnetite + H2. Using the hydrogen-sensor technique, the equilibrium fH2 of this reaction was measured between 500 and 800°C at 2.8 kbar in 50°C intervals. Microprobe analyses of the reacted annite+sanidine+magnetite mixtures show that the tetrahedral positions of annite have a lower Si/Al ratio than the ideal value of 3/1. Silicon decreases from ˜2.9 per formula unit at low temperatures to ˜2.76 at high temperatures. As determined by Mössbauer spectroscopy in three experimental runs, the Fe3+ content of annite in the equilibrium assemblage is 11 ± 3%. A least-squares fit to the hydrogen-sensor data gives ΔH°R = 50.269 ± 3.987 kJ and ΔS°R = 83.01 ± 4.35 J/K for equilibrium (1). The hydrogen-sensor data are consistent with temperature half-brackets determined in the classical way along the nickel-nickel oxide (NNO) and quartz-fayalite-magnetite (QFM) buffers with a mixture of annite+sanidine+magnetite for control. Compared to published oxygen-buffer reversals, agreement is only found at high temperature, and possible reasons for the discrepancy are discussed. The resulting slope of equilibrium (1) in log fO2-T coordinates is considerably steeper than previously determined and between 400 and 800°C only intersects the QFM buffer curve. Based on the hydrogen-sensor data and on the thermodynamic dataset of Berman (1988, and TWEEQ data base) for sanidine, magnetite and H2, the deduced standard-state properties of annite are ΔH°f = -5127.376 ± 5.279 kJ and S° = 422.84 ± 5.29 J/(mol K). From the recently published unit-cell refinements of annites and their Fe3+ contents, determined by Mössbauer spectroscopy (Redhammer et al. 1993), the molar volume of pure annite was constrained to 15.568 ± 0.030 J/bar. A revised stability field for annite is presented, calculated between 400 and 800°C.
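
    The least-squares fit mentioned above can be sketched as follows. Assuming unit activities for the solid phases (so the equilibrium constant reduces to fH2) and ideal-gas behaviour, ln fH2 = -ΔH°R/(RT) + ΔS°R/R, and ΔH°R and ΔS°R follow from a linear regression of ln fH2 against 1/T. The fH2 values in the sketch are invented placeholders, not the published sensor data.

```python
# Illustrative sketch only: extract DeltaH_R and DeltaS_R from hydrogen-sensor
# data via ln fH2 = -DeltaH/(R*T) + DeltaS/R, assuming K = fH2 (unit activities
# of annite, sanidine and magnetite) and ideal H2. The fH2 values below are
# made-up placeholders, not the published measurements.
import numpy as np

R = 8.314                                                    # J/(mol K)
T_C = np.array([500.0, 550.0, 600.0, 650.0, 700.0, 750.0, 800.0])
T_K = T_C + 273.15
fH2 = np.array([2.0, 4.5, 9.0, 17.0, 30.0, 50.0, 80.0])     # bar (placeholders)

# Linear regression of ln fH2 against 1/T: slope = -DeltaH/R, intercept = DeltaS/R
slope, intercept = np.polyfit(1.0 / T_K, np.log(fH2), 1)
dH = -slope * R / 1000.0    # kJ/mol
dS = intercept * R          # J/(mol K)
print(f"DeltaH_R ~ {dH:.1f} kJ, DeltaS_R ~ {dS:.1f} J/K")
```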

  8. i3Drefine software for protein 3D structure refinement and its assessment in CASP10.

    PubMed

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the qualities of protein structures during structure modeling processes to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet, protein structure refinement still remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software and systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine is the only fully-automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.

  9. Generation of 2D Land Cover Maps for Urban Areas Using Decision Tree Classification

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2014-09-01

    A 2D land cover map can automatically and efficiently be generated from high-resolution multispectral aerial images. First, a digital surface model is produced and each cell of the elevation model is then supplemented with attributes. A decision tree classification is applied to extract map objects like buildings, roads, grassland, trees, hedges, and walls from such an "intelligent" point cloud. The decision tree is derived from training areas whose borders are digitized on top of a false-colour orthoimage. The produced 2D land cover map with six classes is subsequently refined by using image analysis techniques. The proposed methodology is described step by step. The classification, assessment, and refinement are carried out with the open-source software "R"; the dense and accurate digital surface model is generated with the "Match-T DSM" program of the Trimble Company. A practical example of 2D land cover map generation is carried out. Images of a multispectral medium-format aerial camera covering an urban area in Switzerland are used. The assessment of the produced land cover map is based on class-wise stratified sampling, where reference values of samples are determined by means of stereo-observations of false-colour stereopairs. The stratified statistical assessment of the produced land cover map with six classes and based on 91 points per class reveals a high thematic accuracy for the classes "building" (99 %, 95 % CI: 95 %-100 %) and "road and parking lot" (90 %, 95 % CI: 83 %-95 %). Some other accuracy measures (overall accuracy, kappa value) and their 95 % confidence intervals are derived as well. The proposed methodology has a high potential for automation and fast processing and may be applied to other scenes and sensors.
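
    The classification and assessment in the paper were carried out with the open-source software "R"; the sketch below illustrates the same idea, decision-tree classification of per-cell attributes followed by a per-class accuracy with a 95 % confidence interval, in Python with scikit-learn. The features, labels, and the normal-approximation interval are illustrative assumptions, not the paper's data or exact statistical procedure.

```python
# Sketch of decision-tree land-cover classification and a stratified, per-class
# accuracy assessment. All data here are synthetic stand-ins for the per-cell
# attributes ("intelligent" point cloud) and the six land-cover classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 4))        # e.g. height above ground, NDVI, roughness, brightness
y = rng.integers(0, 6, size=n)     # six land-cover classes (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

for c in range(6):
    mask = y_te == c
    k, m = int((pred[mask] == c).sum()), int(mask.sum())
    p = k / m
    half = 1.96 * np.sqrt(p * (1 - p) / m)   # normal-approximation 95% CI
    print(f"class {c}: accuracy {p:.1%} (95% CI {p - half:.1%} to {p + half:.1%}, n={m})")
```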

  10. Evaluation of health alerts from an early illness warning system in independent living.

    PubMed

    Rantz, Marilyn J; Scott, Susan D; Miller, Steven J; Skubic, Marjorie; Phillips, Lorraine; Alexander, Greg; Koopman, Richelle J; Musterman, Katy; Back, Jessica

    2013-06-01

    Passive sensor networks were deployed in independent living apartments to monitor older adults in their home environments to detect signs of impending illness and alert clinicians so they can intervene and prevent or delay significant changes in health or functional status. A retrospective qualitative deductive content analysis was undertaken to refine health alerts to improve clinical relevance to clinicians as they use alerts in their normal workflow of routine care delivery to older adults. Clinicians completed written free-text boxes to describe actions taken (or not) as a result of each alert; they also rated the clinical significance (relevance) of each health alert on a scale of 1 to 5. Two samples of the clinician's written responses to the health alerts were analyzed after alert algorithms had been adjusted based on results of a pilot study using health alerts to enhance clinical decision-making. In the first sample, a total of 663 comments were generated by seven clinicians in response to 385 unique alerts; there are more comments than alerts because more than one clinician rated the same alert. The second sample had a total of 142 comments produced by three clinicians in response to 88 distinct alerts. The overall clinical relevance of the alerts, as judged by the content of the qualitative comments by clinicians for each alert, improved from 33.3% of the alerts in the first sample classified as clinically relevant to 43.2% in the second. The goal is to produce clinically relevant alerts that clinicians find useful in daily practice. The evaluation methods used are described to assist others as they consider building and iteratively refining health alerts to enhance clinical decision making.

  11. Applications of a thermal-based two-source energy balance model using Priestley-Taylor approach for surface temperature partitioning under advective conditions

    NASA Astrophysics Data System (ADS)

    Song, Lisheng; Kustas, William P.; Liu, Shaomin; Colaizzi, Paul D.; Nieto, Hector; Xu, Ziwei; Ma, Yanfei; Li, Mingsong; Xu, Tongren; Agam, Nurit; Tolk, Judy A.; Evett, Steven R.

    2016-09-01

    In this study, ground-measured soil and vegetation component temperatures and composite temperatures from a high spatial resolution thermal camera and a network of thermal-IR sensors, collected in an irrigated maize field and in an irrigated cotton field, are used to assess and refine the component temperature partitioning approach in the Two-Source Energy Balance (TSEB) model. A refinement to TSEB using a non-iterative approach based on the application of the Priestley-Taylor formulation for surface temperature partitioning and estimating soil evaporation from soil moisture observations under advective conditions (TSEB-A) was developed. This modified TSEB formulation improved the agreement between observed and modeled soil and vegetation temperatures. In addition, the TSEB-A model output of evapotranspiration (ET) and its components, evaporation (E) and transpiration (T), showed good agreement when compared to ground observations using the stable isotopic method and the eddy covariance (EC) technique from the HiWATER experiment and with microlysimeters and a large monolithic weighing lysimeter from the BEAREX08 experiment. Differences between the modeled and measured ET were less than 10% and 20% on a daytime basis for the HiWATER and BEAREX08 data sets, respectively. The TSEB-A model was found to accurately reproduce the temporal dynamics of E, T and ET over a full growing season under the advective conditions existing for these irrigated crops located in arid/semi-arid climates. With satellite data this TSEB-A modeling framework could potentially be used as a tool for improving water use efficiency and conservation practices in water-limited regions. However, TSEB-A requires soil moisture information, which is not currently available routinely from satellite at the field scale.
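
    A greatly simplified sketch of the Priestley-Taylor based temperature partitioning idea is given below: canopy transpiration is first estimated with the Priestley-Taylor formulation, canopy temperature follows from the canopy sensible-heat residual, and soil temperature is then recovered from the composite radiometric temperature. The constants, the resistance value, and the simple fourth-power mixing of component temperatures are illustrative assumptions and do not reproduce the full TSEB-A formulation.

```python
# Simplified illustration of Priestley-Taylor based surface temperature
# partitioning (not the full TSEB-A model). All inputs are illustrative numbers.
import numpy as np

ALPHA_PT = 1.26     # Priestley-Taylor coefficient
GAMMA = 0.066       # psychrometric constant [kPa/K]
RHO_CP = 1200.0     # volumetric heat capacity of air [J m-3 K-1]

def slope_sat_vp(T_c):
    """Slope of the saturation vapour pressure curve [kPa/K] at temperature T_c [deg C]."""
    es = 0.6108 * np.exp(17.27 * T_c / (T_c + 237.3))
    return 4098.0 * es / (T_c + 237.3) ** 2

def partition(T_rad, T_air, f_c, Rn_c, r_a, f_g=1.0):
    """Return (canopy T, soil T) in deg C from a composite radiometric temperature."""
    delta = slope_sat_vp(T_air)
    LE_c = ALPHA_PT * f_g * delta / (delta + GAMMA) * Rn_c   # canopy latent heat [W m-2]
    H_c = Rn_c - LE_c                                        # canopy sensible heat [W m-2]
    T_canopy = T_air + H_c * r_a / RHO_CP
    # Composite radiometric temperature assumed to mix components in the 4th power
    T_soil4 = ((T_rad + 273.15) ** 4 - f_c * (T_canopy + 273.15) ** 4) / (1.0 - f_c)
    return T_canopy, T_soil4 ** 0.25 - 273.15

print(partition(T_rad=32.0, T_air=28.0, f_c=0.6, Rn_c=350.0, r_a=40.0))
```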

  12. 40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... eligible for the hardship provisions for small refiners: (a) A refiner with one or more refineries built... employees or crude capacity is due to operational changes at the refinery or a company sale or... refinery processing units. (e)(1) A small refiner approved under § 80.1340 that subsequently ceases...

  13. Refinement Types ML

    DTIC Science & Technology

    1994-03-16

    105 2.10 Decidability ........ ................................ 116 3 Declaring Refinements of Recursive Data Types 165 3.1...However, when we introduce polymorphic constructors in Chapter 5, tuples will become a polymorphic data type very similar to other polymorphic data types...terminate. 0 Chapter 3 Declaring Refinements of Recursive Data Types 3.1 Introduction The previous chapter defined refinement type inference in terms of

  14. 76 FR 61074 - USDA Increases the Fiscal Year 2011 Tariff-Rate Quota for Refined Sugar

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-03

    ... Quota for Refined Sugar AGENCY: Office of the Secretary, USDA. ACTION: Notice. SUMMARY: The Secretary of Agriculture today announced an increase in the fiscal year (FY) 2011 refined sugar tariff-rate quota (TRQ) of... INFORMATION: A quantity of 22,000 MTRV for sugars, syrups, and molasses (collectively referred to as refined...

  15. 40 CFR 80.554 - What compliance options are available to NRLM diesel fuel small refiners?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to NRLM diesel fuel small refiners? 80.554 Section 80.554 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship Provisions § 80.554 What compliance options are available to NRLM diesel fuel small refiners? (a) Option 1: A...

  16. 40 CFR 80.554 - What compliance options are available to NRLM diesel fuel small refiners?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... to NRLM diesel fuel small refiners? 80.554 Section 80.554 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship Provisions § 80.554 What compliance options are available to NRLM diesel fuel small refiners? (a) Option 1: A...

  17. 76 FR 80333 - Seamless Refined Copper Pipe and Tube From Mexico: Extension of Time Limits for the Preliminary...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-23

    ... DEPARTMENT OF COMMERCE International Trade Administration [A-428-840] Seamless Refined Copper Pipe... antidumping duty order on seamless refined copper pipe and tube from Mexico, covering the period November 22, 2010, to April 30, 2011. See Seamless Refined Copper Pipe and Tube From Mexico: Notice of Initiation of...

  18. Corporate Entrepreneurship Assessment Instrument (CEAI): Refinement and Validation of a Survey Measure

    DTIC Science & Technology

    2007-03-01

    CORPORATE ENTREPRENEURSHIP ASSESSMENT INSTRUMENT (CEAI): REFINEMENT AND VALIDATION OF A SURVEY MEASURE...States Government. AFIT/GIR/ENV/07-M7 CORPORATE ENTREPRENEURSHIP ASSESSMENT INSTRUMENT (CEAI): REFINEMENT AND VALIDATION OF A SURVEY MEASURE...UNLIMITED AFIT/GIR/ENV/07-M7 CORPORATE ENTREPRENEURSHIP ASSESSMENT INSTRUMENT (CEAI): REFINEMENT AND VALIDATION OF A SURVEY MEASURE Michael

  19. A mobile sensor network to map carbon dioxide emissions in urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Joseph K.; Christen, Andreas; Ketler, Rick; Nesic, Zoran

    2017-03-01

    A method for directly measuring carbon dioxide (CO2) emissions using a mobile sensor network in cities at fine spatial resolution was developed and tested. First, a compact, mobile system was built using an infrared gas analyzer combined with open-source hardware to control, georeference, and log measurements of CO2 mixing ratios on vehicles (car, bicycles). Second, two measurement campaigns, one in summer and one in winter (heating season), were carried out. Five mobile sensors were deployed within a 1 × 12.7 km transect across the city of Vancouver, BC, Canada. The sensors were operated for 3.5 h on pre-defined routes to map CO2 mixing ratios at street level, which were then averaged to 100 × 100 m grid cells. The averaged CO2 mixing ratios of all grids in the study area were 417.9 ppm in summer and 442.5 ppm in winter. In both campaigns, mixing ratios were highest in the grid cells of the downtown core and along arterial roads and lowest in parks and well vegetated residential areas. Third, an aerodynamic resistance approach to calculating emissions was used to derive CO2 emissions from the gridded CO2 mixing ratio measurements in conjunction with mixing ratios and fluxes collected from a 28 m tall eddy-covariance tower located within the study area. These measured emissions showed a range of -12 to 226 kg CO2 ha-1 h-1 in summer and of -14 to 163 kg CO2 ha-1 h-1 in winter, with an average of 35.1 kg CO2 ha-1 h-1 (summer) and 25.9 kg CO2 ha-1 h-1 (winter). Fourth, an independent emissions inventory was developed for the study area using building energy simulations from a previous study and routinely available traffic counts. The emissions inventory for the same area averaged 22.06 kg CO2 ha-1 h-1 (summer) and 28.76 kg CO2 ha-1 h-1 (winter) and was used to compare against the measured emissions from the mobile sensor network. The comparison on a grid-by-grid basis showed linearity between CO2 mixing ratios and the emissions inventory (R2 = 0.53 in summer and R2 = 0.47 in winter). Also, 87 % (summer) and 94 % (winter) of measured grid cells show a difference within ±1 order of magnitude, and 49 % (summer) and 69 % (winter) show an error of less than a factor of 2. Although associated with considerable errors at the individual grid cell level, the study demonstrates a promising method of using a network of mobile sensors and an aerodynamic resistance approach to rapidly map greenhouse gases at high spatial resolution across cities. The method could be improved by longer measurements and a refined calculation of the aerodynamic resistance.
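
    The aerodynamic resistance approach can be sketched in its simplest form as follows: the street-level excess in CO2 mixing ratio over a reference value is converted to a mass concentration and divided by the aerodynamic resistance r_a. This is a hedged illustration of the general idea; the paper's actual formulation, which uses mixing ratios and fluxes from the eddy-covariance tower, may differ in detail.

```python
# Illustrative aerodynamic-resistance emission estimate (units only; not the
# paper's exact formulation). A mixing-ratio excess in ppm is converted to a
# mass concentration and divided by r_a, then expressed in kg CO2 ha-1 h-1.
R_GAS = 8.314     # J mol-1 K-1
M_CO2 = 0.04401   # kg mol-1

def emission_kg_per_ha_per_h(chi_street_ppm, chi_ref_ppm, r_a_s_per_m,
                             pressure_pa=101325.0, temp_k=288.15):
    """CO2 emission [kg CO2 ha-1 h-1] from a mixing-ratio excess and resistance r_a."""
    d_chi = (chi_street_ppm - chi_ref_ppm) * 1e-6            # mol CO2 per mol air
    d_rho = d_chi * pressure_pa * M_CO2 / (R_GAS * temp_k)   # kg CO2 m-3
    flux = d_rho / r_a_s_per_m                                # kg CO2 m-2 s-1
    return flux * 1e4 * 3600.0                                # per hectare, per hour

# Example: a 20 ppm street-level excess with r_a = 60 s/m gives roughly 22 kg CO2 ha-1 h-1
print(f"{emission_kg_per_ha_per_h(440.0, 420.0, 60.0):.1f} kg CO2 ha-1 h-1")
```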

  20. Development and Testing of a Friction-Based Post-Installable Sensor for Subsea Fiber-Optic Monitoring System

    NASA Technical Reports Server (NTRS)

    Bentley, Nicole L.; Brower, David V.; Le, Suy Q.; Seaman, Calvin H.; Tang, Henry H.

    2017-01-01

    This paper presents the design and development of a friction-based coupling device for a fiber-optic monitoring system that can be deployed on existing subsea structures. This paper provides a summary of the design concept, prototype development, prototype performance testing, and design refinements of the device. The results of the laboratory testing of the first prototype performed at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are included in this paper. Limitations of the initial design were identified and future design improvements were proposed. These new features will enhance the coupling of the device and improve the monitoring system measurement capabilities. A major challenge of a post-installed instrumentation monitoring system is to ensure adequate coupling between the instruments and the structure of interest for reliable measurements. Friction-based coupling devices have the potential to overcome coupling limitations caused by marine growth and soil contamination on subsea structures, flowlines or risers. The work described in this paper investigates the design of a friction-based coupling device (friction clamp), which is applicable for pipelines and structures that are suspended in the water column and those that are resting on the seabed. The monitoring elements consist of fiber-optic sensors that are bonded to a metal clamshell with a high-friction coating. The friction clamp has a single hinge design to facilitate the operation of the clamp and dual rows of opposing fasteners to distribute the clamping force on the structure. The friction clamp can be installed by divers in shallow depths or by remotely operated vehicles in deep-water applications. NASA-JSC was involved in the selection and testing of the friction coating, and in the design and testing of the prototype clamp device. Four-inch diameter and eight-inch diameter sub-scale friction clamp prototypes were built and tested to evaluate the strain measuring capabilities of the design under different loading scenarios. The testing revealed some limitations of the initial design concept, and subsequent refinements were explored to improve the measurement performance of the system. This study was part of a collaboration between NASA-JSC and Astro Technology, Inc. within a study called Clear Gulf. The primary objective of the Clear Gulf study is to develop advanced instrumentation technologies that will improve operational safety and reduce the risk of hydrocarbon spillage. NASA provided unique insights, expansive test facilities, and technical expertise to advance these technologies that would benefit the environment, the public, and commercial industries.

  1. Development and Testing of a Friction-Based Post-Installable Sensor for Subsea Fiber-Optic Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Bentley, Nicole; Brower, David; Le, Suy Q.; Seaman, Calvin; Tang, Henry

    2017-01-01

    This paper presents the design and development of a friction-based coupling device for a fiber-optic monitoring system that can be deployed on existing subsea structures. This paper provides a summary of the design concept, prototype development, prototype performance testing, and design refinements of the device. The results of the laboratory testing of the first prototype performed at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are included in this paper. Limitations of the initial design were identified and future design improvements were proposed. These new features will enhance the coupling of the device and improve the monitoring system measurement capabilities. A major challenge of a post-installed instrumentation monitoring system is to ensure adequate coupling between the instruments and the structure of interest for reliable measurements. Friction-based coupling devices have the potential to overcome coupling limitations caused by marine growth and soil contamination on subsea structures, flowlines or risers. The work described in this paper investigates the design of a friction-based coupling device (friction clamp), which is applicable for pipelines and structures that are suspended in the water column and those that are resting on the seabed. The monitoring elements consist of fiber-optic sensors that are bonded to a metal clamshell with a high-friction coating. The friction clamp has a single hinge design to facilitate the operation of the clamp and dual rows of opposing fasteners to distribute the clamping force on the structure. The friction clamp can be installed by divers in shallow depths or by remotely operated vehicles in deep-water applications. NASA-JSC was involved in the selection and testing of the friction coating, and in the design and testing of the prototype clamp device. Four-inch diameter and eight-inch diameter sub-scale friction clamp prototypes were built and tested to evaluate the strain measuring capabilities of the design under different loading scenarios. The testing revealed some limitations of the initial design concept, and subsequent refinements were explored to improve the measurement performance of the system. This study was part of a collaboration between NASA-JSC and Astro Technology, Inc. within a study called Clear Gulf. The primary objective of the Clear Gulf study is to develop advanced instrumentation technologies that will improve operational safety and reduce the risk of hydrocarbon spillage. NASA provided unique insights, expansive test facilities, and technical expertise to advance these technologies that would benefit the environment, the public, and commercial industries.

  2. Satellite and in situ measurements for coastal water quality assessment and monitoring: a comparison between MODIS Ocean Color and SST products with Wave Glider observations in the Southern Tyrrhenian Sea (Gulf of Naples, Italy).

    NASA Astrophysics Data System (ADS)

    Sileo, Giancanio; Lacava, Teodosio; Tramutoli, Valerio; Budillon, Giorgio; Aulicino, Giuseppe; Cotroneo, Yuri; Ciancia, Emanuele; De Stefano, Massimo; Fusco, Giannetta; Pergola, Nicola; Satriano, Valeria

    2015-04-01

    A wave-propelled autonomous vehicle (Wave Glider, WG) carrying a variety of oceanographic and meteorological sensors was launched from the Gulf of Naples on 12 September 2012 for a three-week mission in the Southern Tyrrhenian Sea. The main objective of the mission was to evaluate the usefulness of combined satellite and autonomous platform observations in providing reliable and concurrent information about sea water parameters in the surface layer of the Southern Tyrrhenian Sea. The Wave Glider was equipped with sensors to measure temperature, salinity, currents, as well as CDOM, turbidity and refined-fuels fluorescence. Wave Glider oceanographic data were also compared to satellite measurements. In particular, MODIS Ocean Color (OC) products concerning sea water properties collected during the Wave Glider mission were used. The EOS constellation provided about two day-time images per day with information on ocean color products. For SST, both day-time and night-time data were available. The first analysis focused on the SST information coming from both the WG and MODIS. A good correlation coefficient was achieved when considering day-time and night-time acquisitions together, with a discrepancy not higher than 0.7 °C. The correlation increases when considering only day-time values, for which more samples were available than for the night-time ones. The results confirm the capability of MODIS products to reproduce SST variability over large areas with a good level of accuracy. A similar analysis was carried out to compare the WG turbidity data with the MODIS kd-490 product, which provides information on the diffuse attenuation coefficient in water at 490 nm; this coefficient is directly related to the presence of scattering particles, either organic or inorganic, in the water column and is thus an indication of water clarity or water column turbidity. The absence of correlation seems to indicate that, for this specific parameter, the two sensors are not observing similar quantities. A different depth of investigation, or small-scale variability that MODIS is not able to capture, could be among the explanations for these results. It should also be stressed that, by design, the WG is propelled at the surface like a surfboard, and bubbles of all sizes roll along the bottom of the float. Microbubbles are of particular concern since they will not rapidly ascend and are likely to represent a source of noise for the WG turbidity parameter. Finally, the WG refined-fuels data were compared with a statistical indicator of oil spill presence named RST-OIL, and the correlation was quite poor. Such a result is expected since, by construction, values of RETIRAbox within ±2σ, like those found along the WG path, have a probability of occurrence of 97.75% and represent the normal, random fluctuation of the signal.

  3. 76 FR 50285 - Fiscal Year 2012 Tariff-Rate Quota Allocations for Raw Cane Sugar, Refined and Specialty Sugar...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ... for Raw Cane Sugar, Refined and Specialty Sugar and Sugar-Containing Products AGENCY: Office of the... quantity of the tariff-rate quotas for imported raw cane sugar, refined and specialty sugar and sugar...), the United States maintains tariff-rate quotas (TRQs) for imports of raw cane sugar and refined sugar...

  4. Solidification Based Grain Refinement in Steels

    DTIC Science & Technology

    2009-07-24

    pearlite (See Figure 1). No evidence of the as-cast austenite dendrite structure was observed. The gating system for this sample resides at the thermal...possible nucleating compounds. 3) Extend grain refinement theory and solidification knowledge through experimental data. 4) Determine structure ...refine the structure of a casting through heat treatment. The energy required for grain refining via thermomechanical processes or heat treatment

  5. 40 CFR 80.1165 - What are the additional requirements under this subpart for a foreign small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... audits may be by EPA employees or contractors to EPA. (iv) Any documents requested that are related to... interviewing employees. (vii) Any employee of the foreign refiner must be made available for interview by the... service on this agent constitutes service on the foreign refiner or any employee of the foreign refiner...

  6. 40 CFR 80.1165 - What are the additional requirements under this subpart for a foreign small refiner?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... audits may be by EPA employees or contractors to EPA. (iv) Any documents requested that are related to... interviewing employees. (vii) Any employee of the foreign refiner must be made available for interview by the... service on this agent constitutes service on the foreign refiner or any employee of the foreign refiner...

  7. Coloured Petri Net Refinement Specification and Correctness Proof with Coq

    NASA Technical Reports Server (NTRS)

    Choppy, Christine; Mayero, Micaela; Petrucci, Laure

    2009-01-01

    In this work, we address the formalisation in Coq of the refinement of symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models and of their type refinement in Coq. The Coq proof assistant is then used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.

  8. Improved cryoEM-Guided Iterative Molecular Dynamics–Rosetta Protein Structure Refinement Protocol for High Precision Protein Structure Prediction

    PubMed Central

    2016-01-01

    Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538

  9. Structure refinement of membrane proteins via molecular dynamics simulations.

    PubMed

    Dutagaci, Bercem; Heo, Lim; Feig, Michael

    2018-07-01

    A refinement protocol based on physics-based techniques established for water-soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and aqueous solvent, but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.

  10. A Comparison of the Behaviour of AlTiB and AlTiC Grain Refiners

    NASA Astrophysics Data System (ADS)

    Schneider, W.; Kearns, M. A.; McGarry, M. J.; Whitehead, A. J.

    AlTiC master alloys present a new alternative to AlTiB grain refiners which have enjoyed pre-eminence in cast houses for several decades. Recent investigations have shown that, under defined casting conditions, AlTiC is a more efficient grain refiner than AlTiB, is less prone to agglomeration and is more resistant to poisoning by Zr, Cr. Moreover it is observed that there are differences in the mechanism of grain refinement for the different alloys. This paper describes the influence of melt temperature and addition rate on the performance of both types of grain refiner in DC casting tests on different wrought alloys. Furthermore the effects of combined additions of the grain refiners and the recycling behaviour of the treated alloys are presented. Results are compared with laboratory test data. Finally, mechanisms of grain refinement are discussed which are consistent with the observed differences in behaviour with AlTiC and AlTiB.

  11. Investigation on temporal evolution of the grain refinement in copper under high strain rate loading via in-situ synchrotron measurement and predictive modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao

    Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s -1. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.

  12. Investigation on temporal evolution of the grain refinement in copper under high strain rate loading via in-situ synchrotron measurement and predictive modeling

    DOE PAGES

    Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao

    2017-10-03

    Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s -1. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.

  13. Post-treatment mechanical refining as a method to improve overall sugar recovery of steam pretreated hybrid poplar.

    PubMed

    Dou, Chang; Ewanick, Shannon; Bura, Renata; Gustafson, Rick

    2016-05-01

    This study investigates the effect of mechanical refining to improve the sugar yield from biomass processed under a wide range of steam pretreatment conditions. Hybrid poplar chips were steam pretreated using six different conditions with or without SO2. The resulting water insoluble fractions were subjected to mechanical refining. After refining, poplar pretreated at 205 °C for 10 min without SO2 obtained a 32% improvement in enzymatic hydrolysis and achieved similar overall monomeric sugar recovery (539 kg/tonne) to samples pretreated with SO2. Refining did not improve hydrolyzability of samples pretreated at more severe conditions, nor did it improve the overall sugar recovery. By maximizing overall sugar recovery, refining could partially decouple the pretreatment from other unit operations, and enable the use of low temperature, non-sulfur pretreatment conditions. The study demonstrates the possibility of using post-treatment refining to accommodate potential pretreatment process upsets without sacrificing sugar yields. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Refined geometric transition and qq-characters

    NASA Astrophysics Data System (ADS)

    Kimura, Taro; Mori, Hironori; Sugimoto, Yuji

    2018-01-01

    We show the refinement of the prescription for the geometric transition in the refined topological string theory and, as its application, discuss a possibility to describe qq-characters from the string theory point of view. Though the suggested way to operate the refined geometric transition has passed through several checks, it is additionally found in this paper that the presence of the preferred direction brings a nontrivial effect. We provide the modified formula involving this point. We then apply our prescription of the refined geometric transition to proposing the stringy description of doubly quantized Seiberg-Witten curves called qq-characters in certain cases.

  15. Properties of Base Stocks Obtained from Used Engine Oils by Acid/Clay Re-refining (Proprietes des Stocks de Base Obtenus par Regeneration des Huiles a Moteur Usees par le Procede de Traitement a l’Acide et a la Terre),

    DTIC Science & Technology

    1980-09-01

    Research Conseil national Council Canada de recherches Canada LEY EL < PROPERTIES OF BASE STOCKS OBTAINED FROM USED ENGINE OILS BY ACID /CLAY RE-REFINING DTIC...MECHANICAL ENGINEERING REPORT Canad NC MP75 NRC NO. 18719 PROPERTIES OF BASE STOCKS OBTAINED FROM USED ENGINE OILS BY ACID /CLAY RE-REFINING (PROPRIETES...refined Base Stock ..................................... 10 3 Physical Test Data of Acid /Clay Process - Re-refined Base Stock Oils ............ 11 4

  16. Unstructured Euler flow solutions using hexahedral cell refinement

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.

    1991-01-01

    An attempt is made to extend grid refinement into three dimensions by using unstructured hexahedral grids. The flow solver is developed using the TIGER (Topologically Independent Grid, Euler Refinement) code as the starting point. The program uses an unstructured hexahedral mesh and a modified version of the Jameson four-stage, finite-volume Runge-Kutta algorithm for integration of the Euler equations. The unstructured mesh allows for local refinement appropriate for each freestream condition, thereby concentrating mesh cells in the regions of greatest interest. This increases the computational efficiency because the refinement is not required to extend throughout the entire flow field.
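
    The multi-stage Runge-Kutta update referred to above can be sketched as follows. The stage coefficients (1/4, 1/3, 1/2, 1) are a common choice for such schemes and are an assumption here; the residual stands in for the finite-volume discretisation of the Euler equations and is replaced by a toy 1-D operator, so this is an illustration rather than the TIGER implementation.

```python
# Sketch of a four-stage Runge-Kutta pseudo-time update of the kind attributed
# to Jameson in the abstract: u_(k) = u_n + alpha_k * dt * R(u_(k-1)).
# The residual below is a toy central-difference operator for linear advection.
import numpy as np

RK_COEFFS = (0.25, 1.0 / 3.0, 0.5, 1.0)   # assumed stage coefficients

def rk4_stage_update(u, dt, residual):
    """One multi-stage step; `residual` returns du/dt for the current state."""
    u0 = u.copy()
    for alpha in RK_COEFFS:
        u = u0 + alpha * dt * residual(u)
    return u

def residual(u, a=1.0, dx=0.01):
    """Toy residual: du/dt = -a * du/dx on a periodic 1-D grid (central differences)."""
    return -a * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

x = np.linspace(0.0, 1.0, 101)[:-1]
u = np.sin(2.0 * np.pi * x)
u_next = rk4_stage_update(u, dt=0.002, residual=residual)
```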

  17. Gross wood characteristics affecting properties of handsheets made from loblolly pine refiner groundwood

    Treesearch

    Charles W. McMillin

    1968-01-01

    Specific refining energy and gross wood properties accounted for as much as 90% of the total variation in strength of handsheets made from 96 pulps disk-refined from chips of varying characteristics. Burst, tear, and breaking length were increased by applying high specific refining energy and using fast-grown wood of high latewood content but of relatively low density...

  18. 75 FR 50796 - Fiscal Year 2011 Tariff-Rate Quota Allocations for Raw Cane Sugar, Refined and Specialty Sugar...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-17

    ... for Raw Cane Sugar, Refined and Specialty Sugar, and Sugar-Containing Products AGENCY: Office of the... quantity of the tariff-rate quotas for imported raw cane sugar, refined and specialty sugar, and sugar... imports of raw cane sugar and refined sugar. Pursuant to Additional U.S. Note 8 to Chapter 17 of the HTS...

  19. 40 CFR 80.1338 - What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...

  20. 40 CFR 80.1338 - What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...

  1. 40 CFR 80.1338 - What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...

  2. 40 CFR 80.1338 - What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...

  3. 40 CFR 80.1338 - What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... as a small refiner for the gasoline benzene requirements of this subpart? 80.1338 Section 80.1338... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1338 What criteria must be met to qualify as a small refiner for the gasoline benzene requirements of this subpart? (a) A small...

  4. Effects of varying refiner pressure on the mechanical properties of loblolly pine fibres

    Treesearch

    Les Groom; Timothy Rials; Rebecca Snell

    2000-01-01

    Loblolly pine chips, separated into mature and juvenile portions, were refined at three pressures (4, 8, and 12 bar) in a single disc refiner at the BioComposites Centre. Fibres were dried in a flash drier to a moisture content of approximately 12 percent. The mechanical properties of single fibres from each refining pressure were determined using a tensile strength...

  5. Field validation of protocols developed to evaluate in-line mastitis detection systems.

    PubMed

    Kamphuis, C; Dela Rue, B T; Eastwood, C R

    2016-02-01

    This paper reports on a field validation of previously developed protocols for evaluating the performance of in-line mastitis-detection systems. The protocols outlined 2 requirements of these systems: (1) to detect cows with clinical mastitis (CM) promptly and accurately to enable timely and appropriate treatment and (2) to identify cows with high somatic cell count (SCC) to manage bulk milk SCC levels. Gold standard measures, evaluation tests, performance measures, and performance targets were proposed. The current study validated the protocols on commercial dairy farms with automated in-line mastitis-detection systems using both electrical conductivity (EC) and SCC sensor systems that both monitor at whole-udder level. The protocol for requirement 1 was applied on 3 commercial farms. For requirement 2, the protocol was applied on 6 farms; 3 of them had low bulk milk SCC (128 × 10^3 cells/mL) and were the same farms as used for field evaluation of requirement 1. Three farms with high bulk milk SCC (270 × 10^3 cells/mL) were additionally enrolled. The field evaluation methodology and results were presented at a workshop including representation from 7 international suppliers of in-line mastitis-detection systems. Feedback was sought on the acceptance of standardized performance evaluation protocols and recommended refinements to the protocols. Although the methodology for requirement 1 was relatively labor intensive and required organizational skills over an extended period, no major issues were encountered during the field validation of both protocols. The validation, thus, proved the protocols to be practical. Also, no changes to the data collection process were recommended by the technology supplier representatives. However, 4 recommendations were made to refine the protocols: inclusion of an additional analysis that ignores small (low-density) clot observations in the definition of CM, extension of the time window from 4 to 5 milkings for timely alerts for CM, setting a maximum number of 10 milkings for the time window to detect a CM episode, and presentation of sensitivity for a larger range of false alerts per 1,000 milkings replacing minimum performance targets. The recommended refinements are discussed with suggested changes to the original protocols. The information presented is intended to inform further debate toward achieving international agreement on standard protocols to evaluate performance of in-line mastitis-detection systems. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
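
    The performance measures named in the protocols (sensitivity of clinical mastitis detection and false alerts per 1,000 milkings) can be sketched as below. The per-milking labels and the omission of the 4-to-5-milking time-window logic are simplifying assumptions for illustration only.

```python
# Simplified sketch of the protocol's performance measures: sensitivity of
# clinical-mastitis (CM) detection and false alerts per 1,000 milkings.
# Time-window logic for "timely" alerts is deliberately omitted.
def alert_performance(alerts, cm_episodes):
    """alerts, cm_episodes: equal-length sequences of booleans, one entry per milking."""
    if len(alerts) != len(cm_episodes):
        raise ValueError("sequences must have the same length")
    true_pos = sum(a and c for a, c in zip(alerts, cm_episodes))
    false_pos = sum(a and not c for a, c in zip(alerts, cm_episodes))
    total_cm = sum(cm_episodes)
    sensitivity = true_pos / total_cm if total_cm else float("nan")
    false_alert_rate = 1000.0 * false_pos / len(alerts)
    return sensitivity, false_alert_rate

sens, far = alert_performance(
    alerts=[True, False, True, False, False, True],
    cm_episodes=[True, False, False, False, False, True],
)
print(f"sensitivity {sens:.0%}, {far:.0f} false alerts per 1,000 milkings")
```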

  6. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology

    PubMed Central

    Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804

  7. Aeroheating Thermal Model Correlation for Mars Global Surveyor (MGS) Solar Array

    NASA Technical Reports Server (NTRS)

    Amundsen, Ruth M.; Dec, John A.; George, Benjamin E.

    2003-01-01

    The Mars Global Surveyor (MGS) Spacecraft made use of aerobraking to gradually reduce its orbit period from a highly elliptical insertion orbit to its final science orbit. Aerobraking produces a high heat load on the solar arrays, which have a large surface area exposed to the airflow and relatively low mass. To accurately model the complex behavior during aerobraking, the thermal analysis needed to be tightly coupled to the spatially varying, time-dependent aerodynamic heating. Also, the thermal model itself needed to accurately capture the behavior of the solar array and its response to changing heat load conditions. The correlation of the thermal model to flight data allowed a validation of the modeling process, as well as providing information on which processes dominate the thermal behavior. Correlation in this case primarily involved detailing the thermal sensor nodes, using as-built mass to modify material property estimates, refining solar cell assembly properties, and adding detail to radiation and heat flux boundary conditions. This paper describes the methods used to develop finite element thermal models of the MGS solar array and the correlation of the thermal model to flight data from the spacecraft drag passes. Correlation was made to data from four flight thermal sensors over three of the early drag passes. Good correlation of the model was achieved, with a maximum difference between the predicted model maximum and the observed flight maximum temperature of less than 5%. Lessons learned in the correlation of this model assisted in validating a similar model and method used for the Mars Odyssey solar array aeroheating analysis, which were used during on-orbit operations.

  8. Pervasive brain monitoring and data sharing based on multi-tier distributed computing and linked data technology.

    PubMed

    Zao, John K; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping

    2014-01-01

    EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system.

  9. Optical sensor system for time-resolved quantification of methane concentrations: Validation measurements in a rapid compression machine

    NASA Astrophysics Data System (ADS)

    Bauke, Stephan; Golibrzuch, Kai; Wackerbarth, Hainer; Fendt, Peter; Zigan, Lars; Seefeldt, Stefan; Thiele, Olaf; Berg, Thomas

    2018-05-01

    Lowering greenhouse gas emissions is one of the most challenging demands of today's society. Especially, the automotive industry struggles with the development of more efficient internal combustion (IC) engines. As an alternative to conventional fuels, methane has the potential for a significant emission reduction. In methane fuelled engines, the process of mixture formation, which determines the properties of combustion after ignition, differs significantly from gasoline and diesel engines and needs to be understood and controlled in order to develop engines with high efficiency. This work demonstrates the development of a gas sensing system that can serve as a diagnostic tool for measuring crank-angle resolved relative air-fuel ratios in methane-fuelled near-production IC engines. By application of non-dispersive infrared absorption spectroscopy at two distinct spectral regions in the ν3 absorption band of methane around 3.3 μm, the system is able to determine fuel density and temperature simultaneously. A modified spark plug probe allows for straightforward application at engine test stations. Here, the application of the detection system in a rapid compression machine is presented, which enables validation and characterization of the system on well-defined gas mixtures under engine-like dynamic conditions. In extension to a recent proof-of-principle study, a refined data analysis procedure is introduced that allows the correction of artefacts originating from mechanical distortions of the sensor probe. In addition, the measured temperatures are compared to data obtained with a commercially available system based on the spectrally resolved detection of water absorption in the near infrared.
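
    The two-band principle described above, inferring density and temperature simultaneously from absorption in two spectral regions, can be illustrated generically: the ratio of the two absorbances depends only on temperature, so it can be inverted for temperature first and then for number density. The cross-section models below are invented placeholders; the sensor's real calibration functions are not given in the abstract.

```python
# Generic two-band absorption inversion (illustration only; the cross-section
# functions are invented placeholders, not the sensor's calibration).
from scipy.optimize import brentq

def sigma1(T):
    """Effective absorption cross-section of band 1 (arbitrary units)."""
    return 1.0e-20 * (300.0 / T) ** 0.5

def sigma2(T):
    """Band 2, with a stronger assumed temperature dependence."""
    return 0.8e-20 * (300.0 / T) ** 1.5

def invert(A1, A2, path_length):
    """Solve A1/A2 = sigma1(T)/sigma2(T) for T, then N = A1 / (sigma1(T) * L)."""
    f = lambda T: sigma1(T) / sigma2(T) - A1 / A2
    T = brentq(f, 200.0, 1500.0)             # temperature bracket [K]
    N = A1 / (sigma1(T) * path_length)       # number density [m-3]
    return T, N

# Forward-simulate a measurement at 600 K and invert it back
T_true, N_true, L = 600.0, 2.0e25, 0.01
A1, A2 = sigma1(T_true) * N_true * L, sigma2(T_true) * N_true * L
print(invert(A1, A2, L))
```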

  10. MRMS Experimental Testbed for Operational Products (METOP)

    NASA Astrophysics Data System (ADS)

    Zhang, J.

    2016-12-01

    Accurate high-resolution quantitative precipitation estimation (QPE) at the continental scale is of critical importance to the nation's weather, water and climate services. To address this need, a Multi-Radar Multi-Sensor (MRMS) system was developed at the National Severe Storms Laboratory of the National Oceanic and Atmospheric Administration that integrates radar, gauge, model and satellite data and provides a suite of QPE products at 1-km and 2-min resolution. The MRMS system consists of three components: 1) an operational system; 2) a real-time research system; 3) an archive testbed. The operational system currently provides instantaneous precipitation rate, precipitation type, and 1- to 72-hr accumulations for the conterminous United States and southern Canada. The research system has a similar hardware infrastructure and data environment to the operational system, but runs newer and more advanced algorithms. The newer algorithms are tested on the research system for robustness and computational efficiency in a pseudo-operational environment before they are transitioned into operations. The archive testbed, also called the MRMS Experimental Testbed for Operational Products (METOP), consists of a large database that encompasses a wide range of hydroclimatological and geographical regimes. METOP is for the testing and refinement of the most advanced radar QPE techniques, which are often developed on specific data from limited times and locations. The archive data include quality-controlled in-situ observations for the validation of the new radar QPE across all seasons and geographic regions. A number of operational QPE products derived from different sensors/models are also included in METOP for the fusion of multiple sources of complementary precipitation information. This paper is an introduction to the METOP system.

  11. i3Drefine Software for Protein 3D Structure Refinement and Its Assessment in CASP10

    PubMed Central

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the qualities of protein structures during structure modeling processes to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet, protein structure refinement still remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as ‘MULTICOM-CONSTRUCT’) was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software and systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine is the only fully-automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/. PMID:23894517

  12. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I.

    2014-05-01

    Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
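
    In generic notation (not the paper's own), restrained crystallographic refinement minimizes a weighted sum of an X-ray data term and a chemistry term; the work described above can be read as replacing the empirical restraint term with a semiempirical QM energy. A hedged sketch of that target, with w an assumed generic weight:

      \[
        E_{\text{total}} \;=\; E_{\text{X-ray}}(\mathbf{F}_{\text{obs}},\mathbf{F}_{\text{calc}}) \;+\; w\,E_{\text{chem}},
      \]

    where E_chem is supplied by stereochemical restraints in conventional refinement and, in the PHENIX/DivCon approach, by the linear-scaling SE-QM energy.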

  13. Empirical Analysis and Refinement of Expert System Knowledge Bases

    DTIC Science & Technology

    1988-08-31

    refinement. Both a simulated case generation program and a random rule basher were developed to enhance rule refinement experimentation. Substantial...the second fiscal year 88 objective was fully met. Rule Refinement System Simulated Rule Basher Case Generator Stored Cases Expert System Knowledge...generated until the rule is satisfied. Cases may be randomly generated for a given rule or hypothesis. Rule Basher Given that one has a correct

  14. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
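
    The flag-and-refine step at the heart of such a procedure can be sketched in a few lines. The Python fragment below computes a Richardson-type error indicator by comparing a full-step solution with a two-half-step solution and bisects every cell whose indicator exceeds the tolerance; the stand-in solutions, the tolerance and the cell layout are hypothetical, and the actual code in the paper uses a MacCormack update on a tree of moving meshes.

      # Sketch of flag-and-refine on a 1D mesh: cells whose Richardson-type
      # error indicator exceeds a tolerance are bisected.
      import numpy as np

      def richardson_indicator(u_full, u_half, order=2):
          # Richardson extrapolation estimate: difference between a full-step
          # and a two-half-step solution, scaled by 1/(2^p - 1).
          return np.abs(u_half - u_full) / (2 ** order - 1)

      def refine_mesh(edges, indicator, tol):
          """Bisect every cell whose error indicator exceeds tol."""
          new_edges = [edges[0]]
          for i in range(len(edges) - 1):
              if indicator[i] > tol:
                  new_edges.append(0.5 * (edges[i] + edges[i + 1]))
              new_edges.append(edges[i + 1])
          return np.array(new_edges)

      edges = np.linspace(0.0, 1.0, 11)                  # 10 uniform cells
      centers = 0.5 * (edges[:-1] + edges[1:])
      u_full = np.sin(np.pi * centers)                   # stand-in full-step solution
      u_half = u_full + 0.01 * (centers > 0.5)           # stand-in two-half-step solution
      print(refine_mesh(edges, richardson_indicator(u_full, u_half), tol=2e-3))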

  15. Assessing food allergy risks from residual peanut protein in highly refined vegetable oil.

    PubMed

    Blom, W Marty; Kruizinga, Astrid G; Rubingh, Carina M; Remington, Ben C; Crevel, René W R; Houben, Geert F

    2017-08-01

    Refined vegetable oils, including refined peanut oil, are widely used in foods. Due to shared production processes, refined non-peanut vegetable oils can contain residual peanut proteins. We estimated the predicted number of allergic reactions to residual peanut proteins using probabilistic risk assessment applied to several scenarios involving food products made with vegetable oils. The variables considered were: a) the estimated production scale of refined peanut oil, b) the estimated cross-contact between refined vegetable oils during production, c) the proportion of fat in representative food products and d) the peanut protein concentration in refined peanut oil. For all products examined, the predicted risk of objective allergic reactions in peanut-allergic users of the food products was extremely low. The number of predicted reactions ranged, depending on the model, from a high of 3 per 1000 eating occasions (Weibull) to no reactions (LogNormal). Significantly, all reactions were predicted for allergen intakes well below the amounts reported for the most sensitive individual described in the clinical literature. We conclude that the health risk from cross-contact between vegetable oils and refined peanut oil is negligible. None of the food products would warrant precautionary labelling for peanut according to the VITAL® programme of the Allergen Bureau. Copyright © 2017 Elsevier Ltd. All rights reserved.
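
    The probabilistic calculation implied above amounts to comparing a sampled per-occasion allergen intake with a sampled individual reaction threshold over many simulated eating occasions. The Monte Carlo sketch below illustrates that comparison; every distribution and parameter value is a made-up placeholder and none are taken from the study.

      # Illustrative Monte Carlo sketch of a probabilistic allergen risk
      # assessment: sampled peanut-protein intake per eating occasion is
      # compared with a sampled individual reaction threshold.
      import numpy as np

      rng = np.random.default_rng(42)
      n = 1_000_000                                                   # simulated eating occasions

      serving_g        = rng.normal(30.0, 5.0, n).clip(min=1.0)       # food serving (g)
      fat_fraction     = rng.uniform(0.10, 0.40, n)                   # proportion of fat in the food
      cross_contact    = rng.uniform(0.0, 0.02, n)                    # fraction of oil that is peanut oil
      protein_mg_per_g = rng.lognormal(mean=-4.0, sigma=1.0, size=n)  # residual protein in refined oil

      intake_mg = serving_g * fat_fraction * cross_contact * protein_mg_per_g

      # Individual thresholds drawn from a log-normal dose distribution
      # (the study also examined a Weibull form, among others).
      threshold_mg = rng.lognormal(mean=1.0, sigma=1.5, size=n)

      reactions = np.count_nonzero(intake_mg > threshold_mg)
      print(f"predicted reactions per 1000 eating occasions: {1000 * reactions / n:.3f}")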

  16. Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh

    NASA Astrophysics Data System (ADS)

    Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru

    2017-11-01

    We developed a framework for distributed-memory parallel computers that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), in which a computational domain is divided into multi-level cubic domains and each cube holds the same number of grid points, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement, in which the non-dimensional wall distance y+ near the sphere is used as a criterion for mesh refinement. The results showed that the load imbalance caused by the y+-adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, in which explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
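
    The two coupled steps described above, criterion-based cube refinement followed by redistribution of cubes across processes, can be sketched roughly as follows; the Cube record, the y+ threshold and the greedy cost model are illustrative assumptions, not the BCM framework's actual data layout or partitioner.

      # Sketch of (1) splitting any cube whose refinement criterion exceeds a
      # threshold and (2) rebalancing cubes across ranks by greedy assignment.
      from dataclasses import dataclass
      from heapq import heappush, heappop

      @dataclass
      class Cube:
          level: int
          yplus: float          # refinement criterion sampled in this cube
          points: int = 16**3   # every cube holds the same number of grid points

      def refine(cubes, yplus_threshold, max_level):
          out = []
          for c in cubes:
              if c.yplus > yplus_threshold and c.level < max_level:
                  # Split into 8 children (2 per direction in 3D); children keep
                  # the same point count, so resolution doubles locally.
                  out.extend(Cube(c.level + 1, c.yplus) for _ in range(8))
              else:
                  out.append(c)
          return out

      def balance(cubes, n_ranks):
          """Greedy longest-processing-time assignment of cubes to ranks."""
          heap = [(0, r) for r in range(n_ranks)]     # (assigned cost, rank)
          owner = {}
          for i, c in enumerate(sorted(cubes, key=lambda c: c.points, reverse=True)):
              cost, rank = heappop(heap)
              owner[i] = rank                         # i indexes the sorted cube list
              heappush(heap, (cost + c.points, rank))
          return owner

      cubes = refine([Cube(0, yp) for yp in (0.5, 3.0, 12.0, 40.0)],
                     yplus_threshold=5.0, max_level=3)
      print(len(cubes), "cubes after refinement;", balance(cubes, n_ranks=4))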

  17. Reformulated Gasoline Market Affected Refiners Differently, 1995

    EIA Publications

    1996-01-01

    This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.

  18. A Refined Cauchy-Schwarz Inequality

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2007-01-01

    The author presents a refinement of the Cauchy-Schwarz inequality. He shows his computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.
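
    For context (the article's specific refinement is not reproduced here), the classical inequalities being sharpened are, for nonzero x and y, the Cauchy-Schwarz inequality in an inner-product space and the triangle inequality with its reverse in a normed linear space:

      \[
        |\langle x, y\rangle| \;\le\; \|x\|\,\|y\|, \qquad
        \|x + y\| \;\le\; \|x\| + \|y\|, \qquad
        \bigl|\,\|x\| - \|y\|\,\bigr| \;\le\; \|x - y\|.
      \]

    A refinement of this kind typically inserts a nonnegative intermediate quantity between the two sides of one of these inequalities; the particular term obtained by the author is given in the article itself.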

  19. On macromolecular refinement at subatomic resolution with interatomic scatterers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2007-11-01

    Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the amount of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
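
    In generic notation (not the paper's own), the mixed model adds a second set of scatterers to the usual structure-factor sum; a hedged sketch:

      \[
        F(\mathbf{h}) \;=\; \sum_{j\,\in\,\text{atoms}} f_j(\mathbf{h})\,T_j(\mathbf{h})\,
          e^{2\pi i\,\mathbf{h}\cdot\mathbf{r}_j}
        \;+\; \sum_{k\,\in\,\text{IAS}} f_k(\mathbf{h})\,T_k(\mathbf{h})\,
          e^{2\pi i\,\mathbf{h}\cdot\mathbf{r}_k},
      \]

    where the first sum runs over conventional independent spherical atoms and the second over additional interatomic scatterers (IAS) placed between bonded atoms to absorb the deformation density, each with its own simple form factor f_k and displacement factor T_k refined alongside the atomic parameters. The symbols and placement rule are generic assumptions consistent with the abstract, not the exact parameterization used in phenix.refine.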

  20. Improving the accuracy of macromolecular structure refinement at 7 Å resolution.

    PubMed

    Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F

    2012-06-06

    In X-ray crystallography, molecular replacement and subsequent refinement are challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free) values. In contrast, DEN refinement improved even the most distant starting model, as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of the electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.
