Sample records for rigorous sensor model

  1. Mathematical models and photogrammetric exploitation of image sensing

    NASA Astrophysics Data System (ADS)

    Puatanachokchai, Chokchai

    Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from the image sensing geometry, the latter is based on knowledge of the physical/geometric sensor model and uses that model for its implementation. The main thrust of this research is replacement sensor models that have three important characteristics: (1) highly accurate ground-to-image functions; (2) rigorous error propagation that is essentially of the same accuracy as the physical model; and (3) adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered True Replacement Models, or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. Several publications have addressed replacement sensor models, and except for the so-called RSM (Replacement Sensor Model, a product described in the Manual of Photogrammetry), almost all of them pay little or no attention to errors and their propagation. This is suspected to be because the few physical sensor parameters are usually replaced by many more parameters, which makes error estimation potentially difficult. The third characteristic, adjustability, is perhaps the most demanding. It provides flexibility equivalent to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of a hybrid approach that combines the eigen-approach with the added-parameters approach used in the RSM. Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustment can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences from those of rigorous physical sensor models, for geopositioning from both single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.

  2. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operations of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the exterior orientation parameters (EOPs) by correcting the attitude angle bias, and 2) refining the interior orientation model by calibrating the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1, and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, a high-precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) are generated automatically.
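
    The space-intersection step described here reduces, for each pair of conjugate image points, to finding the ground point closest to two observation rays. The following minimal sketch shows that least-squares intersection in isolation; the sensor positions and look directions are invented stand-ins for the orbit and pointing data of a real push-broom model.

```python
# Minimal two-ray space intersection: find the 3D point that is closest, in
# the least-squares sense, to two observation rays, as when intersecting
# conjugate points from stereo images. Geometry and values are illustrative.
import numpy as np

def intersect_rays(p1, d1, p2, d2):
    """Least-squares intersection of rays x = p + t*d."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in ((p1, d1), (p2, d2)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto plane normal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two sensor positions (e.g., successive orbit points) and look directions
p1, d1 = np.array([0.0, 0.0, 100.0]), np.array([0.1, 0.0, -1.0])
p2, d2 = np.array([30.0, 0.0, 100.0]), np.array([-0.2, 0.0, -1.0])
print(intersect_rays(p1, d1, p2, d2))  # estimated ground point
```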

  3. Modeling of profilometry with laser focus sensors

    NASA Astrophysics Data System (ADS)

    Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner

    2011-05-01

    Metrology is of paramount importance in submicron patterning. In particular, line width and overlay have to be measured very accurately. Appropriate metrology techniques include scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate, and enables optical cross sections of layer stacks, but it requires periodic patterns. Scanning laser focus sensors are a viable alternative enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit, which determines the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.

  4. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of temperature on MEMS-based inertial sensor random error. We collect static data at different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to derive the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models developed at different temperature points in the filtering stage using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
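
    As a rough illustration of the Gauss-Markov idea (not the authors' higher-order models), the sketch below fits a first-order GM process, i.e. an AR(1) model, to a synthetic static record and recovers its correlation time; the sampling rate, noise level, and true correlation time are invented.

```python
# Fit a first-order Gauss-Markov (AR(1)) model to a static sensor record and
# recover the correlation time tau from the AR coefficient phi = exp(-dt/tau).
# All numbers are synthetic and illustrative.
import numpy as np

def fit_gm1(x, dt):
    x = x - x.mean()
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])  # lag-1 LS estimate
    tau = -dt / np.log(phi)
    sigma_w = np.std(x[1:] - phi * x[:-1])                # driving-noise level
    return phi, tau, sigma_w

rng = np.random.default_rng(0)
dt, tau_true = 0.01, 50.0                 # 100 Hz sampling, 50 s correlation
phi_true = np.exp(-dt / tau_true)
x = np.zeros(200_000)
for k in range(1, x.size):
    x[k] = phi_true * x[k - 1] + rng.normal(0.0, 1e-3)
print(fit_gm1(x, dt))                     # tau estimate should be near 50 s
```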

  5. A Gaussian Mixture Model-based continuous Boundary Detection for 3D sensor networks.

    PubMed

    Chen, Jiehui; Salim, Mariam B; Matsumoto, Mitsuji

    2010-01-01

    This paper proposes a novel, high-precision Gaussian Mixture Model-based Boundary Detection 3D (BD3D) scheme with reasonable implementation cost for 3D cases, which selects a minimum number of Boundary sensor Nodes (BNs) for continuously moving objects. It shows apparent advantages in that the two classes of boundary and non-boundary sensor nodes can be efficiently classified using model selection techniques for finite mixture models; furthermore, the set of sensor readings within each sensor node's spatial neighbors is formulated using a Gaussian Mixture Model. Differing from DECOMO [1] and COBOM [2], we also format a BN Array with an additional own-sensor reading to benefit the selection of Event BNs (EBNs) and non-EBNs from the observations of BNs. In particular, we propose a Thick Section Model (TSM) to solve the problem of transition between 2D and 3D. It is verified by simulations that the BD3D 2D model outperforms DECOMO and COBOM in terms of average residual energy and the number of BNs selected, while the BD3D 3D model demonstrates sound performance even for sensor networks with low densities, especially when the sensor transmission range (r) is larger than the Section Thickness (d) in the TSM. We have also rigorously proved its correctness for continuous geometric domains and its full robustness for sensor networks over 3D terrains.
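
    The mixture-model step can be pictured as follows: a node whose neighborhood readings are best explained, under BIC model selection, by two Gaussian components straddles the event boundary. This is a simplified sketch of that single test, not the full BD3D scheme; the readings are invented.

```python
# Boundary test via Gaussian mixtures with BIC model selection: if two
# components fit a node's neighborhood readings better than one, the
# neighborhood straddles the event boundary. Data are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def is_boundary_node(readings):
    X = np.asarray(readings, dtype=float).reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in (1, 2)]
    return bics[1] < bics[0]          # two clusters fit better -> boundary

rng = np.random.default_rng(1)
inside = rng.normal(80.0, 2.0, 30)    # neighbor readings inside the event
outside = rng.normal(20.0, 2.0, 30)   # neighbor readings outside the event
print(is_boundary_node(np.concatenate([inside, outside])))  # expected: True
print(is_boundary_node(outside))                            # expected: False
```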

  6. A Global Lake Ecological Observatory Network (GLEON) for synthesising high-frequency sensor data for validation of deterministic ecological models

    USGS Publications Warehouse

    David, Hamilton P; Carey, Cayelan C.; Arvola, Lauri; Arzberger, Peter; Brewer, Carol A.; Cole, Jon J; Gaiser, Evelyn; Hanson, Paul C.; Ibelings, Bas W; Jennings, Eleanor; Kratz, Tim K; Lin, Fang-Pang; McBride, Christopher G.; de Motta Marques, David; Muraoka, Kohji; Nishri, Ami; Qin, Boqiang; Read, Jordan S.; Rose, Kevin C.; Ryder, Elizabeth; Weathers, Kathleen C.; Zhu, Guangwei; Trolle, Dennis; Brookes, Justin D

    2014-01-01

    A Global Lake Ecological Observatory Network (GLEON; www.gleon.org) has formed to provide a coordinated response to the need for scientific understanding of lake processes, utilising technological advances available from autonomous sensors. The organisation embraces a grassroots approach to engage researchers from varying disciplines, sites spanning geographic and ecological gradients, and novel sensor and cyberinfrastructure to synthesise high-frequency lake data at scales ranging from local to global. The high-frequency data provide a platform to rigorously validate process-based ecological models because model simulation time steps are better aligned with sensor measurements than with lower-frequency, manual samples. Two case studies, from Trout Bog, Wisconsin, USA, and Lake Rotoehu, North Island, New Zealand, are presented to demonstrate that in the past, ecological model outputs (e.g., temperature, chlorophyll) have been relatively poorly validated, based on a limited number of directly comparable measurements in both time and space. The case studies demonstrate some of the difficulties of mapping sensor measurements directly to model state variable outputs, as well as the opportunities to use deviations between sensor measurements and model simulations to better inform process understanding. Well-validated ecological models provide a mechanism to extrapolate high-frequency sensor data in space and time, thereby potentially creating a fully 3-dimensional simulation of key variables of interest.

  7. Multisensor Fusion for Change Detection

    NASA Astrophysics Data System (ADS)

    Schenk, T.; Csatho, B.

    2005-12-01

    Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor-invariant features. Such features are caused by the same object space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting and grouping the features into more abstract entities, we discuss how to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable to deal with linear and area features. In contrast to traditional, point-based registration methods, linear and areal features lend themselves to a more robust and more accurate registration. More importantly, the chance of automating the registration process increases significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning about extracted information more versatile; reasoning can be performed in sensor space or in 3-D space, where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed. We demonstrate the feasibility of the proposed multisensor fusion approach by detecting surface elevation changes on the Byrd Glacier, Antarctica, with aerial imagery from the 1980s and ICESat laser altimetry data from 2003-05. Change detection from such disparate data sets is an intricate fusion problem, beginning with sensor alignment and continuing with spatial reasoning about where changes occurred and to what extent.

  8. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    NASA Astrophysics Data System (ADS)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-03-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of sensors to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method to solve for the Rational Polynomial Coefficients (RPCs) is least squares estimation. But when the number of coefficients is large or the distribution of the control points is uneven, the classical least squares method loses its superiority due to the ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect the multicollinearity, but also locate the parameters involved and show the corresponding columns in the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.
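
    Assuming the usual Belsley-style definitions, the diagnostic can be computed from the SVD of the column-equilibrated design matrix: condition indices are ratios of singular values, and variance decomposition proportions apportion each parameter's variance across those indices. The toy matrix below is deliberately collinear.

```python
# Condition indices and variance decomposition proportions (CIVDP) from the
# SVD of a column-equilibrated design matrix; the example matrix is synthetic.
import numpy as np

def civdp(A):
    A = A / np.linalg.norm(A, axis=0)           # scale columns to unit length
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    cond_idx = s.max() / s                      # one index per singular value
    phi = (Vt.T ** 2) / s ** 2                  # phi[j, k] for parameter j
    pi = phi / phi.sum(axis=1, keepdims=True)   # variance decomp. proportions
    return cond_idx, pi

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
# Third column is nearly a linear combination of the first two
A = np.column_stack([X, X[:, 0] + X[:, 1] + 1e-6 * rng.normal(size=100)])
ci, pi = civdp(A)
print(np.round(ci, 1))   # one very large condition index flags collinearity
print(np.round(pi, 2))   # large proportions on that index locate the columns
```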

  9. TAMDAR Sensor Validation in 2003 AIRS II

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.; Murray, John J.; Anderson, Mark V.; Mulally, Daniel J.; Jensen, Kristopher R.; Grainger, Cedric A.; Delene, David J.

    2005-01-01

    This study entails an assessment of TAMDAR in situ temperature, relative humidity, and wind sensor data from seven flights of the UND Citation II. These data are undergoing rigorous assessment to determine their viability to significantly augment the domestic Meteorological Data Communications Reporting System (MDCRS) and the international Aircraft Meteorological Data Reporting (AMDAR) system observational databases to improve the performance of regional and global numerical weather prediction models. NASA Langley Research Center participated in the Second Alliance Icing Research Study from November 17 to December 17, 2003. TAMDAR data taken during this period are compared with validation data from the UND Citation. The data indicate acceptable performance of the TAMDAR sensor when compared to measurements from the UND Citation research instruments.

  10. Multiparameter Estimation in Networked Quantum Sensors

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-01

    We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  11. On Deployment of Multiple Base Stations for Energy-Efficient Communication in Wireless Sensor Networks

    DOE PAGES

    Lin, Yunyue; Wu, Qishi; Cai, Xiaoshan; ...

    2010-01-01

    Data transmission from sensor nodes to a base station or a sink node often incurs significant energy consumption, which critically affects network lifetime. We generalize and solve the problem of deploying multiple base stations to maximize network lifetime in terms of two different metrics under one-hop and multihop communication models. In the one-hop communication model, the sensors far away from base stations always deplete their energy much faster than others. We propose an optimal solution and a heuristic approach based on the minimal enclosing circle algorithm to deploy a base station at the geometric center of each cluster. In the multihop communication model, both base station location and data routing mechanism need to be considered in maximizing network lifetime. We propose an iterative algorithm based on rigorous mathematical derivations and use linear programming to compute the optimal routing paths for data transmission. Simulation results show the distinguished performance of the proposed deployment algorithms in maximizing network lifetime.
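
    In the one-hop case, the placement rule described above amounts to computing the minimal enclosing circle of each cluster and placing the base station at its center. The sketch below is a standard incremental (Welzl-type) implementation with invented sensor coordinates, not the paper's code.

```python
# Minimal enclosing circle of a sensor cluster (incremental Welzl algorithm);
# the circle's center is the base station location. Coordinates are invented.
import random

def _circle2(a, b):
    c = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return c, ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / 2

def _circle3(a, b, c):
    ax, ay, bx, by, cx, cy = *a, *b, *c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), ((ux - ax) ** 2 + (uy - ay) ** 2) ** 0.5

def _inside(p, circle):
    (cx, cy), r = circle
    return r >= 0 and (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= r * r + 1e-9

def min_enclosing_circle(points):
    pts = points[:]
    random.shuffle(pts)
    c = ((0.0, 0.0), -1.0)                       # start with an empty circle
    for i, p in enumerate(pts):
        if not _inside(p, c):
            c = (p, 0.0)
            for j, q in enumerate(pts[:i]):
                if not _inside(q, c):
                    c = _circle2(p, q)
                    for k in pts[:j]:
                        if not _inside(k, c):
                            c = _circle3(p, q, k)
    return c

sensors = [(0, 0), (4, 0), (2, 3), (1, 1)]
print(min_enclosing_circle(sensors))  # center -> base station location
```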

  12. Multiparameter Estimation in Networked Quantum Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  13. Multiparameter Estimation in Networked Quantum Sensors

    DOE PAGES

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-21

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  14. Localization of small arms fire using acoustic measurements of muzzle blast and/or ballistic shock wave arrivals.

    PubMed

    Lo, Kam W; Ferguson, Brian G

    The accurate localization of small arms fire using fixed acoustic sensors is considered. First, the conventional wavefront-curvature passive ranging method, which requires only differential time-of-arrival (DTOA) measurements of the muzzle blast wave to estimate the source position, is modified to account for sensor positions that are not strictly collinear (bowed array). Second, an existing single-sensor-node ballistic model-based localization method, which requires both DTOA and differential angle-of-arrival (DAOA) measurements of the muzzle blast wave and ballistic shock wave, is improved by replacing the basic external ballistics model (which describes the bullet's deceleration along its trajectory) with a more rigorous model and replacing the look-up table ranging procedure with a nonlinear (or polynomial) equation-based ranging procedure. Third, a new multiple-sensor-node ballistic model-based localization method, which requires only DTOA measurements of the ballistic shock wave to localize the point of fire, is formulated. The first method is applicable to situations when only the muzzle blast wave is received, whereas the third method applies when only the ballistic shock wave is received. The effectiveness of each of these methods is verified using an extensive set of real data recorded during a 7 day field experiment.

  15. cStress: Towards a Gold Standard for Continuous Stress Assessment in the Mobile Environment

    PubMed Central

    Hovsepian, Karen; al’Absi, Mustafa; Ertin, Emre; Kamarck, Thomas; Nakajima, Motohiro; Kumar, Santosh

    2015-01-01

    Recent advances in mobile health have produced several new models for inferring stress from wearable sensors. But the lack of a gold standard is a major hurdle in making clinical use of continuous stress measurements derived from wearable sensors. In this paper, we present a stress model (called cStress) that has been carefully developed with attention to every step of computational modeling, including data collection, screening, cleaning, filtering, feature computation, normalization, and model training. More importantly, cStress was trained using data collected from a rigorous lab study with 21 participants and validated on two independently collected data sets: a lab study with 26 participants and a week-long field study with 20 participants. In testing, the model obtains a recall of 89% and a false positive rate of 5% on lab data. On field data, the model is able to predict each instantaneous self-report with an accuracy of 72%. PMID:26543926

  16. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.

  17. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping.

    PubMed

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-12-31

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable.

  18. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping

    PubMed Central

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-01-01

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable. PMID:28042855

  19. Sensors, Volume 4, Thermal Sensors

    NASA Astrophysics Data System (ADS)

    Scholz, Jorg; Ricolfi, Teresio

    1996-12-01

    'Sensors' is the first self-contained series to deal with the whole area of sensors. It describes general aspects, technical and physical fundamentals, construction, function, applications and developments of the various types of sensors. This volume describes the construction and application aspects of thermal sensors while presenting a rigorous treatment of the underlying physical principles. It provides a unique overview of the various categories of sensors as well as of specific groups, e.g. temperature sensors (resistance thermometers, thermocouples, and radiation thermometers), noise and acoustic thermometers, and heat-flow and mass-flow sensors. Specific facets of applications are presented by specialists from different fields including process control, automotive technology and cryogenics. This volume is an indispensable reference work and textbook for both specialists and newcomers, researchers and developers.

  20. Data Convergence - An Australian Perspective

    NASA Astrophysics Data System (ADS)

    Allen, S. S.; Howell, B.

    2012-12-01

    Coupled numerical physical, biogeochemical and sediment models are increasingly being used as integrators to help understand the cumulative or far-field effects of change in the coastal environment. This reliance on modeling has forced observations to be delivered as data streams ingestible by modeling frameworks. This has made it easier to create near real-time or forecasting models than to try to recreate the past, and has led in turn to the conversion of historical data into data streams so that they can be ingested by the same frameworks. The model and observation frameworks under development within Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) are now feeding into the Australian Ocean Data Network's (AODN's) MARine Virtual Laboratory (MARVL). The sensor, or data stream, brokering solution is centred around the "message", and all data flowing through the gateway is wrapped as a message. Messages consist of a topic and a data object, and their routing through the gateway to pre-processors and listeners is determined by the topic. The Sensor Message Gateway (SMG) method allows data from different sensors measuring the same thing but with different temporal resolutions, units or spatial coverage to be ingested or visualized seamlessly. At the same time, exposing model output as a virtual sensor is being explored, again enabled by the SMG. Rigorous adherence to standards is needed only for two-way communication with sensors; by accepting existing data in less-than-ideal formats but exposing them through the SMG, we can move a step closer to the Internet of Things by creating an Internet of Industries, where each vested interest can continue with business as usual, contribute to data convergence, and adopt more open standards when investment seems appropriate to that sector or business.
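
    A toy sketch of the message pattern described here (illustrative only, not CSIRO's implementation): every datum is wrapped as a topic-plus-payload message, and topic prefixes route it to pre-processors and listeners, so model output exposed as a virtual sensor travels the same path as a physical reading.

```python
# Topic-routed message gateway in miniature: wrap every datum as a message
# (topic + data object) and route by topic prefix. Names are hypothetical.
from collections import defaultdict
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Message:
    topic: str   # e.g. "sensor.temperature.site42"
    data: Any    # payload: reading, metadata, or model output

class SensorMessageGateway:
    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, topic_prefix: str, handler: Callable[[Message], None]):
        self._listeners[topic_prefix].append(handler)

    def publish(self, msg: Message):
        for prefix, handlers in self._listeners.items():
            if msg.topic.startswith(prefix):
                for handler in handlers:
                    handler(msg)

gw = SensorMessageGateway()
gw.subscribe("sensor.temperature", lambda m: print("ingest:", m.data))
# A model output exposed as a "virtual sensor" flows through the same path:
gw.publish(Message("sensor.temperature.model.forecast", {"degC": 18.2}))
```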

  21. Modeling Pilot State in Next Generation Aircraft Alert Systems

    NASA Technical Reports Server (NTRS)

    Carlin, Alan S.; Alexander, Amy L.; Schurr, Nathan

    2011-01-01

    The Next Generation Air Transportation System will introduce new, advanced sensor technologies into the cockpit that must convey a large number of potentially complex alerts. Our work focuses on the challenges associated with prioritizing aircraft sensor alerts in a quick and efficient manner, essentially determining when and how to alert the pilot. This "alert decision" becomes very difficult in NextGen due to the following challenges: 1) the increasing number of potential hazards, 2) the uncertainty associated with the state of potential hazards as well as pilot state, and 3) the limited time to make safety-critical decisions. In this paper, we focus on pilot state and present a model for anticipating the duration and quality of pilot behavior, for use in a larger system that issues aircraft alerts. We estimate pilot workload, which we model as being dependent on factors including mental effort, task demands, and task performance. We perform a mathematically rigorous analysis of the model and the resulting alerting plans. We simulate the model in software and present simulated results with respect to manipulation of the pilot measures.

  22. A study on rational function model generation for TerraSAR-X imagery.

    PubMed

    Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi

    2013-09-09

    The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate Rational Polynomial Coefficients (RPCs) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy of better than 10^-3 pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction.
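
    The abstract does not spell out the independent approach, but the core of any direct RPC estimation can be sketched as follows: with the denominator's constant term fixed to 1, the rational model is linear in the remaining coefficients and can be fitted to a grid of anchor points by least squares. A first-order basis is used here for brevity; operational RPCs use third-order polynomials.

```python
# Direct least-squares fit of a rational function model r = P1/P2 with the
# denominator's constant term fixed to 1. First-order basis for brevity;
# anchor points and coefficients are synthetic.
import numpy as np

def poly_terms(X, Y, Z):
    return np.column_stack([np.ones_like(X), X, Y, Z])   # first-order basis

def fit_rpc(img, X, Y, Z):
    """Fit img ~ (t @ a) / (t @ b) with b[0] fixed to 1."""
    t = poly_terms(X, Y, Z)
    A = np.hstack([t, -img[:, None] * t[:, 1:]])         # unknowns: a, b[1:]
    coef, *_ = np.linalg.lstsq(A, img, rcond=None)
    return coef[:4], np.concatenate([[1.0], coef[4:]])

rng = np.random.default_rng(4)
X, Y, Z = (rng.uniform(-1, 1, 500) for _ in range(3))    # normalized grid
a_true = np.array([5.0, 80.0, -3.0, 0.5])
b_true = np.array([1.0, 0.02, 0.01, 0.0])
r = (poly_terms(X, Y, Z) @ a_true) / (poly_terms(X, Y, Z) @ b_true)
a, b = fit_rpc(r, X, Y, Z)
fit = (poly_terms(X, Y, Z) @ a) / (poly_terms(X, Y, Z) @ b)
print(np.max(np.abs(fit - r)))   # residual is at numerical noise level
```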

  23. A Study on Rational Function Model Generation for TerraSAR-X Imagery

    PubMed Central

    Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi

    2013-01-01

    The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate Rational Polynomial Coefficients (RPCs) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy of better than 10^-3 pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction. PMID:24021971

  24. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar (SSR) systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. The distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157

  25. Mathematical Model for Localised and Surface Heat Flux of the Human Body Obtained from Measurements Performed with a Calorimetry Minisensor.

    PubMed

    Socorro, Fabiola; Rodríguez de Rivera, Pedro Jesús; Rodríguez de Rivera, Miriam; Rodríguez de Rivera, Manuel

    2017-11-28

    The accuracy of direct and local measurements of the heat power dissipated by the surface of the human body, using a calorimetry minisensor, is directly related to the calibration rigor of the sensor and the correct interpretation of the experimental results. For this, it is necessary to know the characteristics of the body's local heat dissipation. When the sensor is placed on the surface of the human body, the body reacts until a steady state is reached. We propose a mathematical model that represents the rate of heat flow at a given location on the surface of a human body by a sum of exponentials: W(t) = A₀ + Σᵢ Aᵢ exp(−t/τᵢ). In this way, transient and steady states of heat dissipation can be interpreted. This hypothesis has been tested by simulating the operation of the sensor. At the steady state, the power detected in the measurement area (4 cm²) varies depending on the sensor's thermostat temperature, as well as the physical state of the subject. For instance, for a thermostat temperature of 24 °C, this power can vary between 100-250 mW in a healthy adult. In the transient state, two exponentials are sufficient to represent this dissipation, with 3 s and 70 s being the mean values of the time constants.
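
    Recovering the amplitudes and time constants of W(t) = A₀ + Σᵢ Aᵢ exp(−t/τᵢ) from a measured record is a standard nonlinear fit. The sketch below does this for two exponentials on synthetic data, with values loosely inspired by the figures quoted above.

```python
# Fit W(t) = A0 + A1*exp(-t/tau1) + A2*exp(-t/tau2) to a synthetic heat-flow
# record standing in for minisensor output. All values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def w_model(t, A0, A1, tau1, A2, tau2):
    return A0 + A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

t = np.linspace(0.0, 400.0, 800)              # seconds
true = (150.0, 60.0, 3.0, 40.0, 70.0)         # mW and s
w = w_model(t, *true) + np.random.default_rng(5).normal(0.0, 0.5, t.size)

p0 = (100.0, 50.0, 5.0, 50.0, 50.0)           # rough initial guess
popt, _ = curve_fit(w_model, t, w, p0=p0)
print(np.round(popt, 2))   # time constants recovered near 3 s and 70 s
```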

  26. Multi-parameter brain tissue microsensor and interface systems: calibration, reliability and user experiences of pressure and temperature sensors in the setting of neurointensive care.

    PubMed

    Childs, Charmaine; Wang, Li; Neoh, Boon Kwee; Goh, Hok Liok; Zu, Mya Myint; Aung, Phyo Wai; Yeo, Tseng Tsai

    2014-10-01

    The objective was to investigate sensor measurement uncertainty for intracerebral probes inserted during neurosurgery and remaining in situ during neurocritical care. We describe a prospective observational study of two sensor types, including the performance of the complete sensor-bedside monitoring and readout system. Sensors from 16 patients with severe traumatic brain injury (TBI) were obtained at the time of removal from the brain. When tested, 40% of sensors achieved the manufacturer's temperature specification of 0.1 °C. Pressure sensor calibration differed from the manufacturer's at all test pressures in 8/20 sensors. The largest pressure measurement error was in the intraparenchymal triple sensor. Measurement uncertainty is not influenced by duration in situ. User experiences reveal problems with sensor 'handling', alarms and firmware. Rigorous investigation of the performance of intracerebral sensors in the laboratory and at the bedside has established measurement uncertainty in the 'real world' setting of neurocritical care.

  27. NPP Clouds and the Earth's Radiant Energy System (CERES) Predicted Sensor Performance Calibration and Preliminary Data Product Performance

    NASA Technical Reports Server (NTRS)

    Priestly, Kory; Smith, George L.; Thomas, Susan; Maddock, Suzanne L.

    2009-01-01

    Continuation of the Earth Radiation Budget (ERB) Climate Data Record (CDR) has been identified as critical in the 2007 NRC Decadal Survey, the Global Climate Observing System WCRP report, and in an assessment titled Impacts of NPOESS Nunn-McCurdy Certification on Joint NASA-NOAA Climate Goals. In response, NASA, NOAA and NPOESS agreed in early 2008 to fly the final existing CERES Flight Model (FM-5) on the NPP spacecraft for launch in 2010. Future opportunities for ERB CDR continuity consist of procuring an additional CERES sensor with modest performance upgrades for flight on the NPOESS C1 spacecraft in 2013, followed by a new CERES follow-on sensor for flight in 2018 on the NPOESS C3 spacecraft. While science goals remain unchanged for the long-term ERB Climate Data Record, it is now understood that the task of achieving these goals is more difficult for two reasons. The first is an increased understanding of the dynamics of the Earth/atmosphere system, which demonstrates that rigorous separation of natural variability from anthropogenic change on decadal time scales requires higher accuracy and stability than originally envisioned. The second is that future implementation scenarios involve less redundancy in flight hardware (1 vs. 2 orbits and operational sensors), resulting in a higher risk of loss of continuity and a reduced number of independent observations to characterize the performance of individual sensors. Although EOS CERES CDRs realize a factor of 2 to 4 improvement in accuracy and stability over previous ERBE CDRs, future sensors will require an additional factor of 2 improvement to answer rigorously the science questions moving forward. Modest investments, defined through the CERES Science Team's 30-year operational history of the EOS CERES sensors, in onboard calibration hardware and the pre-flight calibration and test program will ensure meeting these goals while reducing the costs of re-processing scientific datasets. The CERES FM-5 pre-flight radiometric characterization program benefited from the 30-year operational experience of the CERES EOS sensors, as well as a stronger emphasis on radiometric characterization in the Statement of Work with the sensor provider. Improvements to the pre-flight program included increased spectral, spatial, and temporal sampling under vacuum conditions as well as additional tests to characterize the primary and transfer standards in the calibration facility. Future work will include collaboration with NIST to further enhance the understanding of the radiometric performance of this equipment prior to flight. The current effort summarizes these improvements to the CERES FM-5 pre-flight sensor characterization program, as well as modifications to in-flight calibration procedures and operational tasking. In addition, an estimate of the impacts on system-level accuracy and traceability is presented.

  28. Imaging the Gouy phase shift in photonic jets with a wavefront sensor.

    PubMed

    Bon, Pierre; Rolly, Brice; Bonod, Nicolas; Wenger, Jérôme; Stout, Brian; Monneret, Serge; Rigneault, Hervé

    2012-09-01

    A wavefront sensor is used as a direct observation tool to image the Gouy phase shift in photonic nanojets created by micrometer-sized dielectric spheres. The amplitude and phase distributions of light are found in good agreement with a rigorous electromagnetic computation. Interestingly the observed phase shift when travelling through the photonic jet is a combination of the awaited π Gouy shift and a phase shift induced by the bead refraction. Such direct spatial phase shift observation using wavefront sensors would find applications in microscopy, diffractive optics, optical trapping, and point spread function engineering.

  29. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE PAGES

    Liu, Jianfeng; Laird, Carl Damon

    2017-09-22

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve the reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem, and the encouraging numerical results indicate that our solution framework is promising for solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.
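
    The flavor of this formulation, though not the authors' MINLP or multitree method, can be conveyed by a brute-force miniature: choose a fixed number of detector sites minimizing expected detection time over leak scenarios when each detector has its own unavailability, so that slower detectors act as backup detection levels. All numbers below are invented.

```python
# Brute-force miniature of scenario-based detector placement with nonuniform
# unavailabilities: for each scenario, the fastest working placed detector
# fires; failed detectors fall back to slower ones. Values are invented.
from itertools import combinations

INF = 1e6                                  # "never detects" penalty time
# det_time[s][i]: time for candidate detector i to sense leak scenario s
det_time = [[5.0, 12.0, INF],
            [INF, 4.0, 9.0],
            [7.0, INF, 3.0]]
unavail = [0.1, 0.3, 0.05]                 # per-detector unavailability
scen_prob = [1 / 3] * 3
P_BUDGET = 2                               # number of detectors to place

def expected_detection(placed):
    total = 0.0
    for s, ps in enumerate(scen_prob):
        order = sorted(placed, key=lambda i: det_time[s][i])
        p_all_failed, t_exp = 1.0, 0.0
        for i in order:                    # backup levels in time order
            t_exp += p_all_failed * (1 - unavail[i]) * det_time[s][i]
            p_all_failed *= unavail[i]
        t_exp += p_all_failed * INF        # no working detector ever fires
        total += ps * t_exp
    return total

best = min(combinations(range(3), P_BUDGET), key=expected_detection)
print(best, expected_detection(best))
```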

  30. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jianfeng; Laird, Carl Damon

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve the reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem, and the encouraging numerical results indicate that our solution framework is promising for solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  31. Evidence-based Sensor Tasking for Space Domain Awareness

    NASA Astrophysics Data System (ADS)

    Jaunzemis, A.; Holzinger, M.; Jah, M.

    2016-09-01

    Space Domain Awareness (SDA) is the actionable knowledge required to predict, avoid, deter, operate through, recover from, and/or attribute cause to the loss and/or degradation of space capabilities and services. A main purpose of SDA is to provide decision-making processes with a quantifiable and timely body of evidence of behavior(s) attributable to specific space threats and/or hazards. To fulfill the promise of SDA, it is necessary for decision makers and analysts to pose specific hypotheses that may be supported or refuted by evidence, some of which may only be collected using sensor networks. While Bayesian inference may support some of these decision-making needs, it does not adequately capture ambiguity in supporting evidence; i.e., it struggles to rigorously quantify 'known unknowns' for decision makers. Over the past 40 years, evidential reasoning approaches such as Dempster-Shafer theory have been developed to address problems with ambiguous bodies of evidence. This paper applies mathematical theories of evidence using Dempster-Shafer expert systems to address the following critical issues: 1) how decision makers can pose critical decision criteria as rigorous, testable hypotheses, 2) how to interrogate these hypotheses to reduce ambiguity, and 3) how to task a network of sensors to gather evidence for multiple competing hypotheses. This theory is tested using a simulated sensor tasking scenario balancing search versus track responsibilities.
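
    The evidence-combination step at the heart of such expert systems is Dempster's rule. Below is a minimal sketch with an invented two-hypothesis frame (active payload vs. debris); mass assigned to the full frame represents retained ambiguity, the 'known unknowns' discussed above.

```python
# Dempster's rule of combination over subsets of a frame of discernment;
# frozensets represent hypotheses, and mass on the full frame is ambiguity.
def combine(m1, m2):
    fused, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                fused[inter] = fused.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

THETA = frozenset({"active_payload", "debris"})          # full frame
obs1 = {frozenset({"active_payload"}): 0.6, THETA: 0.4}  # sensor pass 1
obs2 = {frozenset({"active_payload"}): 0.5,
        frozenset({"debris"}): 0.2, THETA: 0.3}          # sensor pass 2
print(combine(obs1, obs2))   # mass left on THETA = remaining ambiguity
```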

  32. Multiscale sagebrush rangeland habitat modeling in southwest Wyoming

    USGS Publications Warehouse

    Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Coan, Michael J.; Bowen, Zachary H.

    2009-01-01

    Sagebrush-steppe ecosystems in North America have experienced dramatic elimination and degradation since European settlement. As a result, sagebrush-steppe dependent species have experienced drastic range contractions and population declines. Coordinated ecosystem-wide research, integrated with monitoring and management activities, would improve the ability to maintain existing sagebrush habitats. However, current data only identify resource availability locally, with rigorous spatial tools and models that accurately model and map sagebrush habitats over large areas still unavailable. Here we report on an effort to produce a rigorous large-area sagebrush-habitat classification and inventory with statistically validated products and estimates of precision in the State of Wyoming. This research employs a combination of significant new tools, including (1) modeling sagebrush rangeland as a series of independent continuous field components that can be combined and customized by any user at multiple spatial scales; (2) collecting ground-measured plot data on 2.4-meter imagery in the same season the satellite imagery is acquired; (3) effective modeling of ground-measured data on 2.4-meter imagery to maximize subsequent extrapolation; (4) acquiring multiple seasons (spring, summer, and fall) of an additional two spatial scales of imagery (30 meter and 56 meter) for optimal large-area modeling; (5) using regression tree classification technology that optimizes data mining of multiple image dates, ratios, and bands with ancillary data to extrapolate ground training data to coarser resolution sensors; and (6) employing rigorous accuracy assessment of model predictions to enable users to understand the inherent uncertainties. First-phase results modeled eight rangeland components (four primary targets and four secondary targets) as continuous field predictions. The primary targets included percent bare ground, percent herbaceousness, percent shrub, and percent litter. The four secondary targets included percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata wyomingensis), and sagebrush height (centimeters). Results were validated by an independent accuracy assessment with root mean square error (RMSE) values ranging from 6.38 percent for bare ground to 2.99 percent for sagebrush at the QuickBird scale and RMSE values ranging from 12.07 percent for bare ground to 6.34 percent for sagebrush at the full Landsat scale. Subsequent project phases are now in progress, with plans to deliver products that improve accuracies of existing components, model new components, complete models over larger areas, track changes over time (from 1988 to 2007), and ultimately model wildlife population trends against these changes. We believe these results offer significant improvement in sagebrush rangeland quantification at multiple scales and offer users products that have been rigorously validated.
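
    Stripped of the multi-scale imagery pipeline, the regression-tree extrapolation step looks roughly like the sketch below: train on plot-measured percent cover against a stack of spectral predictors, predict the continuous field for all pixels, and report an RMSE. The data are synthetic, and a rigorous assessment would use independent validation plots, as the authors do.

```python
# Regression-tree prediction of a continuous rangeland component (percent
# cover) from a synthetic multi-date spectral band stack. Illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(6)
n_plots = 300
bands = rng.uniform(0.0, 1.0, size=(n_plots, 6))   # multi-season band stack
pct_sagebrush = np.clip(40 * bands[:, 0] - 20 * bands[:, 3]
                        + 5 * rng.normal(size=n_plots), 0, 100)

tree = DecisionTreeRegressor(max_depth=6, random_state=0)
tree.fit(bands, pct_sagebrush)

pixels = rng.uniform(0.0, 1.0, size=(1000, 6))     # full-scene predictors
continuous_field = tree.predict(pixels)            # per-pixel percent cover
rmse = np.sqrt(np.mean((tree.predict(bands) - pct_sagebrush) ** 2))
print(f"training RMSE: {rmse:.2f} percent cover")  # not an independent test
```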

  33. Accuracy and performance of 3D mask models in optical projection lithography

    NASA Astrophysics Data System (ADS)

    Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar

    2011-04-01

    Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques and the thin mask approach (Kirchhoff approach) to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested for two different formulations for partially coherent imaging: The Hopkins assumption and rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate results of the rigorous EMF method by the thin mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes and illumination conditions is investigated.

  34. Performance evaluation and modeling of a conformal filter (CF) based real-time standoff hazardous material detection sensor

    NASA Astrophysics Data System (ADS)

    Nelson, Matthew P.; Tazik, Shawna K.; Bangalore, Arjun S.; Treado, Patrick J.; Klem, Ethan; Temple, Dorota

    2017-05-01

    Hyperspectral imaging (HSI) systems can provide detection and identification of a variety of targets in the presence of complex backgrounds. However, current generation sensors are typically large, costly to field, do not usually operate in real time and have limited sensitivity and specificity. Despite these shortcomings, HSI-based intelligence has proven to be a valuable tool, thus resulting in increased demand for this type of technology. By moving the next generation of HSI technology into a more adaptive configuration, and a smaller and more cost effective form factor, HSI technologies can help maintain a competitive advantage for the U.S. armed forces as well as local, state and federal law enforcement agencies. Operating near the physical limits of HSI system capability is often necessary and very challenging, but is often enabled by rigorous modeling of detection performance. Specific performance envelopes we consistently strive to improve include: operating under low signal to background conditions; at higher and higher frame rates; and under less than ideal motion control scenarios. An adaptable, low cost, low footprint, standoff sensor architecture we have been maturing includes the use of conformal liquid crystal tunable filters (LCTFs). These Conformal Filters (CFs) are electro-optically tunable, multivariate HSI spectrometers that, when combined with Dual Polarization (DP) optics, produce optimized spectral passbands on demand, which can readily be reconfigured, to discriminate targets from complex backgrounds in real-time. With DARPA support, ChemImage Sensor Systems (CISS™) in collaboration with Research Triangle Institute (RTI) International are developing a novel, real-time, adaptable, compressive sensing short-wave infrared (SWIR) hyperspectral imaging technology called the Reconfigurable Conformal Imaging Sensor (RCIS) based on DP-CF technology. RCIS will address many shortcomings of current generation systems and offer improvements in operational agility and detection performance, while addressing sensor weight, form factor and cost needs. This paper discusses recent test and performance modeling results of a RCIS breadboard apparatus.

  35. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
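
    As a toy illustration of the registration idea (not the authors' estimator), a scalar Kalman filter can track a slowly varying pointing bias from per-frame misregistration measurements, carrying the posterior variance forward as the a posteriori accuracy information handed to geopositioning. All variances below are invented.

```python
# Scalar Kalman filter estimating a slowly varying sensor pointing bias from
# per-frame tie-point misregistration measurements. Values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
q, r = 1e-6, 1e-4        # process / measurement noise variances (rad^2)
x_hat, p = 0.0, 1e-2     # a priori bias estimate and its variance

true_bias = 5e-3         # rad, unknown to the filter
for frame in range(50):
    z = true_bias + rng.normal(0.0, r ** 0.5)  # measured misregistration
    p += q                                     # predict (random-walk bias)
    k = p / (p + r)                            # Kalman gain
    x_hat += k * (z - x_hat)                   # update the bias estimate
    p *= 1 - k                                 # a posteriori variance
print(f"bias estimate {x_hat:.4f} rad, 1-sigma {p ** 0.5:.2e} rad")
```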

  36. Theoretical study of surface plasmon resonance sensors based on 2D bimetallic alloy grating

    NASA Astrophysics Data System (ADS)

    Dhibi, Abdelhak; Khemiri, Mehdi; Oumezzine, Mohamed

    2016-11-01

    A surface plasmon resonance (SPR) sensor based on a 2D alloy grating with high performance is proposed. The grating consists of homogeneous alloys of formula MxAg1-x, where M is gold, copper, platinum or palladium. Compared to SPR sensors based on a pure metal, the sensor using angular interrogation with silver exhibits a sharper reflectivity dip (i.e. a larger depth-to-width ratio), which provides higher detection accuracy, whereas the sensor based on gold exhibits the broadest dips and the highest sensitivity. The detection accuracy of an SPR sensor based on a metal alloy is enhanced by increasing the silver composition. In addition, a silver composition of around 0.8 improves both the sensitivity and the quality of the pure-metal SPR sensor. Numerical simulations based on rigorous coupled wave analysis (RCWA) show that the sensor based on a metal alloy not only has high sensitivity and high detection accuracy, but also exhibits good linearity and good quality.

  17. Wireless Sensor Applications in Extreme Aeronautical Environments

    NASA Technical Reports Server (NTRS)

    Wilson, William C.; Atkinson, Gary M.

    2013-01-01

    NASA aeronautical programs require rigorous ground and flight testing. Many of the testing environments can be extremely harsh. These environments include cryogenic temperatures and high temperatures (greater than 1500 °C). Temperature, pressure, vibration, ionizing radiation, and chemical exposure may all be part of the harsh environment found in testing. This paper presents a survey of research opportunities for universities and industry to develop new wireless sensors that address anticipated structural health monitoring (SHM) and testing needs for aeronautical vehicles. Potential applications of passive wireless sensors for ground testing and high altitude aircraft operations are presented. Some of the challenges and issues of the technology are also presented.

  18. Practical considerations in Bayesian fusion of point sensors

    NASA Astrophysics Data System (ADS)

    Johnson, Kevin; Minor, Christian

    2012-06-01

    Sensor data fusion is and has been a topic of considerable research, but rigorous and quantitative understanding of the benefits of fusing specific types of sensor data remains elusive. Often, sensor fusion is performed on an ad hoc basis with the assumption that overall detection capabilities will improve, only to discover later, after expensive and time-consuming laboratory and/or field testing, that little advantage was gained. The work presented here will discuss these issues with theoretical and practical considerations in the context of fusing chemical sensors with binary outputs. Results are given for the potential performance gains one could expect with such systems, as well as the practical difficulties involved in implementing an optimal Bayesian fusion strategy with realistic scenarios. Finally, a discussion of the biases that inaccurate statistical estimates introduce into the results and their consequences is presented.
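    To make the fusion strategy concrete, here is a minimal sketch of optimal (naive-Bayes) fusion of independent binary detector outputs. The Pd/Pfa values are invented, and assuming them known exactly is precisely the kind of statistical estimate whose inaccuracy the paper warns can bias results:

```python
import numpy as np

# Hedged sketch: optimal naive-Bayes fusion of independent binary detectors.
# Each sensor i reports d_i in {0,1}; Pd_i and Pfa_i are assumed known exactly.

def fuse(decisions, pd, pfa, prior=0.5):
    d = np.asarray(decisions, dtype=float)
    # Likelihood of the observed decision vector under each hypothesis
    l1 = np.prod(pd**d * (1 - pd)**(1 - d))      # target present
    l0 = np.prod(pfa**d * (1 - pfa)**(1 - d))    # target absent
    return prior * l1 / (prior * l1 + (1 - prior) * l0)

pd = np.array([0.9, 0.8, 0.7])     # detection probabilities (illustrative)
pfa = np.array([0.05, 0.1, 0.2])   # false-alarm probabilities (illustrative)
print(fuse([1, 1, 0], pd, pfa))    # posterior probability a target is present
```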

  19. A high figure of merit localized surface plasmon sensor based on a gold nanograting on the top of a gold planar film

    NASA Astrophysics Data System (ADS)

    Zhang, Zu-Yin; Wang, Li-Na; Hu, Hai-Feng; Li, Kang-Wen; Ma, Xun-Peng; Song, Guo-Feng

    2013-10-01

    We investigate the sensitivity and figure of merit (FOM) of a localized surface plasmon (LSP) sensor with a gold nanograting on top of a planar metallic film. The sensitivity of the localized surface plasmon sensor is 317 nm/RIU, and the FOM is predicted to be above 8, which is very high for a localized surface plasmon sensor. By employing the rigorous coupled-wave analysis (RCWA) method, we analyze the distribution of the magnetic field and find that the sensing property of our proposed system is attributed to the interactions between the localized surface plasmon around the gold nanostrips and the surface plasmon polariton on the surface of the gold planar metallic film. These findings are important for developing high-FOM localized surface plasmon sensors.
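    The two headline metrics can be illustrated with a toy calculation: sensitivity is the resonance shift per refractive-index unit, and FOM is sensitivity divided by the dip's full width at half maximum. The Gaussian dip below is synthetic (the paper's values come from RCWA), and the 317 nm/RIU figure is reproduced only by construction:

```python
import numpy as np

# Illustrative sensitivity and FOM calculation for a resonance sensor.
def resonance_and_fwhm(wl, refl):
    i = np.argmin(refl)                  # resonance = reflectance minimum
    half = (refl[i] + 1.0) / 2.0         # halfway between dip floor and unity
    below = np.where(refl < half)[0]
    return wl[i], wl[below[-1]] - wl[below[0]]

wl = np.linspace(600, 800, 2001)                          # wavelength grid, nm
dip = lambda c: 1 - 0.9 * np.exp(-((wl - c) / 20) ** 2)   # synthetic dip shape
lam1, fwhm = resonance_and_fwhm(wl, dip(700.0))
lam2, _ = resonance_and_fwhm(wl, dip(703.17))             # dip after dn = 0.01 RIU

S = (lam2 - lam1) / 0.01                                  # nm per RIU
print(f"sensitivity ~ {S:.0f} nm/RIU, FOM ~ {S / fwhm:.1f}")
```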

  20. Adaptive sensor-fault tolerant control for a class of multivariable uncertain nonlinear systems.

    PubMed

    Khebbache, Hicham; Tadjine, Mohamed; Labiod, Salim; Boulkroune, Abdesselem

    2015-03-01

    This paper deals with the active fault tolerant control (AFTC) problem for a class of multiple-input multiple-output (MIMO) uncertain nonlinear systems subject to sensor faults and external disturbances. The proposed AFTC method can tolerate three additive sensor faults (bias, drift and loss of accuracy) and one multiplicative fault (loss of effectiveness). By employing the backstepping technique, a novel adaptive backstepping-based AFTC scheme is developed using the fact that sensor faults and system uncertainties (including external disturbances and unexpected nonlinear functions caused by sensor faults) can be estimated and compensated on-line via robust adaptive schemes. The stability analysis of the closed-loop system is rigorously proven using a Lyapunov approach. The effectiveness of the proposed controller is illustrated by two simulation examples. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
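    The four tolerated fault classes are easy to state concretely. A sketch with invented parameter values, not the paper's fault model:

```python
import numpy as np

# Sketch of the four sensor-fault types the AFTC scheme tolerates, applied to
# a clean signal x(t). All parameter values are illustrative only.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1001)
x = np.sin(t)                                        # true sensor output

bias = x + 0.5                                       # additive: constant offset
drift = x + 0.05 * t                                 # additive: growing offset
loss_accuracy = x + 0.2 * rng.normal(size=t.size)    # additive: bounded error
loss_effectiveness = 0.6 * x                         # multiplicative: 60% gain
# Residuals against x would feed a fault estimation/compensation scheme.
```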

  1. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
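    The propagation step referred to here is first-order Taylor propagation through the defining functional expression. A minimal sketch with a numerical Jacobian and an illustrative correlated covariance (multiply the result by a coverage factor of about 2 for a 95 percent confidence interval); the example function and values are invented:

```python
import numpy as np

# First-order propagation of measurement uncertainties through f(x).
# Sigma may carry correlated precision errors in its off-diagonal terms.

def propagate(f, x, Sigma, eps=1e-6):
    x = np.asarray(x, dtype=float)
    J = np.empty_like(x)
    for i in range(x.size):                  # forward-difference Jacobian
        dx = np.zeros_like(x)
        dx[i] = eps
        J[i] = (f(x + dx) - f(x)) / eps
    return float(np.sqrt(J @ Sigma @ J))     # 1-sigma uncertainty of f(x)

# Example: dynamic pressure q = 0.5 * rho * v**2 from correlated rho and v.
f = lambda p: 0.5 * p[0] * p[1] ** 2
Sigma = np.array([[1e-4, 2e-4],
                  [2e-4, 4e-2]])             # (kg/m^3)^2, cross term, (m/s)^2
print(propagate(f, [1.225, 50.0], Sigma))    # ~18 Pa at one sigma
```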

  2. Synthetic environments

    NASA Astrophysics Data System (ADS)

    Lukes, George E.; Cain, Joel M.

    1996-02-01

    The Advanced Distributed Simulation (ADS) Synthetic Environments Program seeks to create robust virtual worlds from operational terrain and environmental data sources of sufficient fidelity and currency to interact with the real world. While some applications can be met by direct exploitation of standard digital terrain data, more demanding applications -- particularly those supporting operations 'close to the ground' -- are well-served by emerging capabilities for 'value-adding' by the user working with controlled imagery. For users to rigorously refine and exploit controlled imagery within functionally different workstations, they must have a shared framework to allow interoperability within and between these environments in terms of passing image and object coordinates and other information using a variety of validated sensor models. The Synthetic Environments Program is now being expanded to address rapid construction of virtual worlds with research initiatives in digital mapping, softcopy workstations, and cartographic image understanding. The Synthetic Environments Program is also participating in a joint initiative for a sensor model applications programmer's interface (API) to ensure that a common controlled imagery exploitation framework is available to all researchers, developers and users. This presentation provides an introduction to ADS and the associated requirements for synthetic environments to support synthetic theaters of war. It provides a technical rationale for exploring applications of image understanding technology to automated cartography in support of ADS and related programs benefiting from automated analysis of mapping, earth resources and reconnaissance imagery. And it provides an overview and status of the joint initiative for a sensor model API.

  3. Wearable Networked Sensing for Human Mobility and Activity Analytics: A Systems Study.

    PubMed

    Dong, Bo; Biswas, Subir

    2012-01-01

    This paper presents implementation details, system characterization, and the performance of a wearable sensor network that was designed for human activity analysis. Specific machine learning mechanisms are implemented for recognizing a target set of activities with both out-of-body and on-body processing arrangements. Impacts of energy consumption by the on-body sensors are analyzed in terms of activity detection accuracy for out-of-body processing. Impacts of limited processing abilities in the on-body scenario are also characterized in terms of detection accuracy, by varying the background processing load in the sensor units. Through a rigorous systems study, it is shown that an efficient human activity analytics system can be designed and operated even under energy and processing constraints of tiny on-body wearable sensors.

  4. Laboratory and field testing of commercial rotational seismometers

    USGS Publications Warehouse

    Nigbor, R.L.; Evans, J.R.; Hutt, C.R.

    2009-01-01

    There are a small number of commercially available sensors to measure rotational motion in the frequency and amplitude ranges appropriate for earthquake motions on the ground and in structures. However, the performance of these rotational seismometers has not been rigorously and independently tested and characterized for earthquake monitoring purposes as is done for translational strong- and weak-motion seismometers. Quantities such as sensitivity, frequency response, resolution, and linearity are needed for the understanding of recorded rotational data. To address this need, we, with assistance from colleagues in the United States and Taiwan, have been developing performance test methodologies and equipment for rotational seismometers. In this article the performance testing methodologies are applied to samples of a commonly used commercial rotational seismometer, the eentec model R-1. Several sample units were tested in various test sequences in 2006, 2007, and 2008. Performance testing of these sensors consisted of measuring: (1) sensitivity and frequency response; (2) clip level; (3) self noise and resolution; and (4) cross-axis sensitivity, both rotational and translational. These sensor-specific results will assist in understanding the performance envelope of the R-1 rotational seismometer, and the test methodologies can be applied to other rotational seismometers.

  5. Minimum Interference Planar Geometric Topology in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Nguyen, Trac N.; Huynh, Dung T.

    The approach of using topology control to reduce interference in wireless sensor networks has attracted the attention of several researchers. There are at least two definitions of interference in the literature. In a wireless sensor network the interference at a node may be caused by an edge that is transmitting data [15], or it occurs because the node itself is within the transmission range of another [3], [1], [6]. In this paper we show that the problem of assigning power to nodes in the plane to yield a planar geometric graph whose nodes have bounded interference is NP-complete under both interference definitions. Our results provide a rigorous proof for a theorem in [15] whose proof is unconvincing. They also address one of the open issues raised in [6], where Halldórsson and Tokuyama were concerned with the receiver model of node interference and derived an O(√Δ) upper bound for the maximum node interference of a wireless ad hoc network in the plane (Δ is the maximum interference of the so-called uniform radius network). The question as to whether this problem is NP-complete in the 2-dimensional case was left open.
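    The receiver-centric interference definition of [3], [1], [6] counts, for each node, how many other nodes' transmission ranges cover it. A small illustration with invented positions and assigned ranges:

```python
import numpy as np

# Sketch of the receiver-centric interference definition: the interference of
# node v is the number of other nodes whose transmission range covers v.

def interference(pos, r):
    pos, r = np.asarray(pos, float), np.asarray(r, float)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    covered = d <= r[None, :]           # covered[v, u]: u's range reaches v
    np.fill_diagonal(covered, False)    # a node does not interfere with itself
    return covered.sum(axis=1)

pos = [(0, 0), (1, 0), (2, 0), (5, 0)]  # illustrative node positions
r = [1.5, 1.0, 3.0, 0.5]                # illustrative power/range assignment
print(interference(pos, r))             # node 0 is covered by nodes 1 and 2
```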

  6. Mathematical Model for Localised and Surface Heat Flux of the Human Body Obtained from Measurements Performed with a Calorimetry Minisensor

    PubMed Central

    Socorro, Fabiola; Rodríguez de Rivera, Pedro Jesús; Rodríguez de Rivera, Miriam

    2017-01-01

    The accuracy of the direct and local measurements of the heat power dissipated by the surface of the human body, using a calorimetry minisensor, is directly related to the calibration rigor of the sensor and the correct interpretation of the experimental results. For this, it is necessary to know the characteristics of the body’s local heat dissipation. When the sensor is placed on the surface of the human body, the body reacts until a steady state is reached. We propose a mathematical model that represents the rate of heat flow at a given location on the surface of a human body by the sum of a series of exponentials: W(t) = A0 + ∑ Ai exp(−t/τi). In this way, transient and steady states of heat dissipation can be interpreted. This hypothesis has been tested by simulating the operation of the sensor. At the steady state, the power detected in the measurement area (4 cm2) varies depending on the sensor’s thermostat temperature, as well as the physical state of the subject. For instance, for a thermostat temperature of 24 °C, this power can vary between 100 and 250 mW in a healthy adult. In the transient state, two exponentials are sufficient to represent this dissipation, with 3 and 70 s being the mean values of their time constants. PMID:29182567
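    Fitting the proposed model to a measured transient is a routine nonlinear least-squares problem. A sketch on synthetic data, with time constants placed near the reported mean values of 3 s and 70 s; the amplitudes are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit W(t) = A0 + A1 exp(-t/tau1) + A2 exp(-t/tau2) to a heat-flow transient.
def W(t, A0, A1, tau1, A2, tau2):
    return A0 + A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

rng = np.random.default_rng(0)
t = np.linspace(0, 400, 401)                                  # time, s
w = W(t, 150, 60, 3.0, 40, 70.0) + rng.normal(0, 1.0, t.size) # synthetic mW data

p0 = [100, 50, 5.0, 50, 50.0]                # rough initial guess
popt, pcov = curve_fit(W, t, w, p0=p0)       # pcov gives parameter covariance
print("A0, A1, tau1, A2, tau2 =", np.round(popt, 2))
```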

  7. Acoustic monitoring of first responder's physiology for health and performance surveillance

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.

    2002-08-01

    Acoustic sensors have been used to monitor firefighter and soldier physiology to assess health and performance. The Army Research Laboratory has developed a unique body-contacting acoustic sensor that can monitor the health and performance of firefighters and soldiers while they perform their missions. A gel-coupled sensor has acoustic impedance properties similar to the skin that facilitate the transmission of body sounds into the sensor pad, yet significantly repel ambient airborne noises due to an impedance mismatch. This technology can monitor heartbeats, breaths, blood pressure, motion, voice, and other indicators that can provide vital feedback to medics and unit commanders. Diverse physiological parameters can be continuously monitored with acoustic sensors and transmitted for remote surveillance of personnel status. Body-worn acoustic sensors located at the neck, breathing mask, and wrist do an excellent job of detecting heartbeats and activity. However, they have difficulty extracting physiology during rigorous exercise or movements due to the motion artifacts sensed. Rigorous activity often indicates that the person is healthy by virtue of being active, and injury often causes the subject to become less active or incapacitated, making the detection of physiology easier. One important measure of performance, heart rate variability, is the measure of beat-to-beat timing fluctuations derived from the interval between two adjacent beats. The Lomb periodogram is optimized for non-uniformly sampled data, and can be applied to non-stationary acoustic heart rate features (such as 1st and 2nd heart sounds) to derive heart rate variability and help eliminate errors created by motion artifacts. Simple peak detection above or below a certain threshold, or waveform derivative parameters, can produce the timing and amplitude features necessary for the Lomb periodogram and cross-correlation techniques. High-amplitude motion artifacts may contribute to a different frequency or baseline noise due to the timing differences between the noise artifacts and heartbeat features. Data from a firefighter experiment are presented.
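    A minimal sketch of the Lomb periodogram applied to synthetic beat-to-beat data, which is inherently non-uniformly sampled (SciPy's implementation takes angular frequencies); the 0.1 Hz modulation stands in for a typical low-frequency HRV component:

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic beats: ~60 bpm rhythm with RR intervals modulated at 0.1 Hz.
beats = [0.0]
while beats[-1] < 300.0:                       # five minutes of beats
    rr = 1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * beats[-1])
    beats.append(beats[-1] + rr)
beats = np.array(beats)

t = beats[1:]                                  # time of each interval
rr = np.diff(beats)                            # RR intervals, non-uniform in t

freqs = np.linspace(0.01, 0.5, 500)            # Hz; LF/HF HRV bands live here
pgram = lombscargle(t, rr - rr.mean(), 2 * np.pi * freqs)
print("peak at ~%.2f Hz" % freqs[np.argmax(pgram)])   # expect ~0.10 Hz
```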

  8. Optimized passive sonar placement to allow improved interdiction

    NASA Astrophysics Data System (ADS)

    Johnson, Bruce A.; Matthews, Cameron

    2016-05-01

    The Art Gallery Problem (AGP) is the name given to a constrained optimization problem meant to determine the maximum amount of sensor coverage while utilizing the minimum number of resources. The AGP is significant because a common issue among surveillance and interdiction systems is obtaining an understanding of the optimal position of sensors and weapons in advance of enemy combatant maneuvers. The optimal position for a sensor to observe an event, or for a weapon to engage a target autonomously, is usually very clear after the target has passed, but for autonomous systems the solution must at least be conjectured in advance for deployment purposes. This paper applies the AGP as a means to solve where best to place underwater sensor nodes such that the amount of information acquired about a covered area is maximized while the number of resources used to gain that information is minimized. By phrasing the ISR/interdiction problem this way, the issue is addressed as an instance of the AGP. The AGP is a member of a set of computational problems designated as nondeterministic polynomial-time (NP)-hard. As a member of this set, the AGP shares its members' defining feature: no known deterministic algorithm provides a computationally tractable (polynomial-time) solution. At best, an algorithm meant to solve the AGP can asymptotically approach perfect coverage with minimal resource usage, but providing perfect coverage would either break the minimal-resource constraint or require an exponentially growing amount of time. No perfectly optimal solution to the AGP yet exists; however, approximately optimal solutions can approach complete area or barrier coverage while simultaneously minimizing the number of sensors and weapons utilized. A minimal number of deployed underwater sensor nodes can greatly increase the Mean Time Between Operational Failure (MTBOF) and reduce the logistical footprint. The resulting coverage optimizes the likelihood of encounter given an arbitrary sensor profile and threat, from a free-field statistical model approach. The free-field statistical model is particularly applicable to worst-case scenario modeling in open ocean operational profiles where targets do not follow a particular pattern in any of the modeled dimensions. We present an algorithmic testbed which shows how to achieve approximately optimal solutions to the AGP for a network of underwater sensor nodes, with or without effector systems for engagement, while operating under changing environmental circumstances. The means by which we accomplish this goal are three-fold: 1) Develop a 3D model for the sonar signal propagating through the underwater environment; 2) Add rigorous physics-based modeling of environmental events which can affect sensor information acquisition; 3) Provide innovative solutions to the AGP which account for the environmental circumstances affecting sensor performance.
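    Approximately optimal solutions of this flavor are typically greedy: repeatedly place the sensor that covers the most still-uncovered area, stopping at a coverage target. A toy sketch with invented geometry, not the authors' algorithm:

```python
import numpy as np

# Greedy coverage heuristic in the spirit of approximate AGP solutions:
# at each step, pick the candidate node covering the most uncovered cells.

rng = np.random.default_rng(0)
cells = np.stack(np.meshgrid(np.arange(50), np.arange(50)), -1).reshape(-1, 2)
candidates = rng.uniform(0, 50, size=(200, 2))      # candidate node positions
r = 8.0                                             # sensing radius (invented)

uncovered = np.ones(len(cells), bool)
chosen = []
while uncovered.mean() > 0.05 and len(chosen) < len(candidates):
    gain = [(np.linalg.norm(cells[uncovered] - c, axis=1) <= r).sum()
            for c in candidates]                    # new cells each would cover
    best = int(np.argmax(gain))
    chosen.append(best)
    uncovered &= np.linalg.norm(cells - candidates[best], axis=1) > r

print(len(chosen), "sensors for >=95% area coverage")
```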

  9. In vivo sodium concentration continuously monitored with fluorescent sensors.

    PubMed

    Dubach, J Matthew; Lim, Edward; Zhang, Ning; Francis, Kevin P; Clark, Heather

    2011-02-01

    Sodium balance is vital to maintaining normal physiological function. Imbalances can occur in a variety of diseases, during certain surgical operations or during rigorous exercise. There is currently no method to continuously monitor sodium concentration in patients who may be susceptible to hyponatremia. Our approach was to design sodium specific fluorescent sensors capable of measuring physiological fluctuations in sodium concentration. The sensors are submicron plasticized polymer particles containing sodium recognition components that are coated with biocompatible poly(ethylene) glycol. Here, the sensors were brought up in saline and placed in the subcutaneous area of the skin of mice by simple injection. The fluorescence was monitored in real time using a whole animal imager to track changes in sodium concentrations. This technology could be used to monitor certain disease states or warn against dangerously low levels of sodium during exercise.

  10. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE PAGES

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.
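    For flavor, the widely cited closed-form optimum for the number of cluster heads under a uniform node distribution and a first-order radio model is shown below (the classic Heinzelman-style result; the paper derives its own formula, which may differ):

```python
import numpy as np

# Classic closed-form optimum for the number of cluster heads, assuming N
# nodes uniform over an M x M field and a free-space/multipath radio model.
# Amplifier constants are common textbook values, not this paper's.

def k_opt(N, M, d_bs, eps_fs=10e-12, eps_mp=0.0013e-12):
    """Optimal cluster-head count; d_bs is the distance to the base station."""
    return np.sqrt(N / (2 * np.pi)) * np.sqrt(eps_fs / eps_mp) * M / d_bs**2

print(k_opt(N=100, M=300.0, d_bs=100.0))   # ~10 cluster heads for this geometry
```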

  11. Performance Assessment and Geometric Calibration of RESOURCESAT-2

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Akilan, A.; Jyothi, M. V.; Nagasubramanian, V.

    2016-06-01

    Resourcesat-2 (RS-2) has successfully completed five years of operations in its orbit. This satellite has multi-resolution and multi-spectral capabilities in a single platform. A continuous and autonomous co-registration, geo-location and radiometric calibration of image data from different sensors with widely varying view angles and resolution was one of the challenges of RS-2 data processing. On-orbit geometric performance of RS-2 sensors has been widely assessed and calibrated during the initial phase operations. Since then, as an ongoing activity, various geometric performance data are being generated periodically. This is performed with sites of dense ground control points (GCPs). These parameters are correlated to the direct geo-location accuracy of the RS-2 sensors and are monitored and validated to maintain the performance. This paper brings out the geometric accuracy assessment, calibration and validation done for about 500 datasets of RS-2. The objectives of this study are to ensure the best absolute and relative location accuracy of different cameras, location performance with payload steering and co-registration of multiple bands. This is done using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation. In the model, the forward and reverse transformations between the coordinate systems associated with the focal plane, payload, body, orbit and ground are rigorously and explicitly defined. System level tests using comparisons to ground check points have validated the operational geo-location accuracy performance and the stability of the calibration parameters.
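    The forward transformation chain described here can be sketched as a product of rotation matrices applied to a focal-plane look vector; the frame names and angles below are purely illustrative, not RESOURCESAT-2 calibration values:

```python
import numpy as np

# Minimal sketch of a viewing-geometry forward transformation: rotate a
# focal-plane look vector through payload, body and orbit frames to ground.

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

look_fp = np.array([0.001, 0.0, 1.0])          # pixel look vector, focal plane
look_fp /= np.linalg.norm(look_fp)

R_pf = rot_x(np.deg2rad(0.01))                 # focal plane -> payload (alignment)
R_bp = rot_z(np.deg2rad(5.0))                  # payload -> body (steering)
R_ob = rot_x(np.deg2rad(0.1))                  # body -> orbit (attitude)
R_go = rot_z(np.deg2rad(30.0))                 # orbit -> ground (ephemeris)

look_ground = R_go @ R_ob @ R_bp @ R_pf @ look_fp
print(look_ground)   # intersect with the datum/ellipsoid for geolocation
```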

  12. A New Approach to Design Autonomous Wireless Sensor Node Based on RF Energy Harvesting System.

    PubMed

    Mouapi, Alex; Hakem, Nadir

    2018-01-05

    Energy Harvesting techniques are increasingly seen as the solution for freeing wireless sensor nodes from their battery dependency. However, it remains evident that network performance features, such as network size, packet length, and duty cycle, are influenced by the amount of recovered energy. This paper proposes a new approach to defining the specifications of a stand-alone wireless node based on a Radio-frequency Energy Harvesting System (REHS). To achieve adequate performance regarding the range of the Wireless Sensor Network (WSN), techniques for minimizing the energy consumed by the sensor node are combined with methods for optimizing the performance of the REHS. For more rigor in the design of the autonomous node, a comprehensive energy model of the node in a wireless network is established. For an equitable distribution of network charges between the different nodes that compose it, the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol is used for this purpose. The model considers five energy-consumption sources, most of which are ignored in recently used models. By using the hardware parameters of commercial off-the-shelf components (Mica2 Motes and CC2520 of Texas Instruments), the energy requirement of a sensor node is quantified. A miniature REHS based on a judicious choice of rectifying diodes is then designed and developed to achieve optimal performance in the Industrial Scientific and Medical (ISM) band centered at 2.45 GHz. Due to the mismatch between the REHS and the antenna, a band-pass filter is designed to reduce reflection losses. A gradient search method is used to optimize the output characteristics of the adapted REHS. At 1 mW of input RF power, the REHS provides an output DC power of 0.57 mW, and a comparison with the energy requirement of the node allows the Base Station (BS) to be located 310 m from the wireless nodes when the WSN has 100 nodes evenly spread over an area of 300 × 300 m² and when each round lasts 10 min. The result shows that the range of the autonomous WSN increases when the controlled physical phenomenon varies very slowly. Having taken into account all the dissipation sources coexisting in a sensor node and using actual measurements of an REHS, this work provides guidelines for the design of autonomous nodes based on REHS.
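    Energy bookkeeping in LEACH-based analyses typically starts from the first-order radio model sketched below; the constants are common textbook values, not the measured Mica2/CC2520 parameters used in the paper:

```python
# First-order radio energy model commonly used with LEACH-style analyses.
E_ELEC = 50e-9        # J/bit, transceiver electronics (TX or RX)
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier
D0 = (EPS_FS / EPS_MP) ** 0.5   # crossover distance (~87 m)

def e_tx(k_bits, d):
    """Energy to transmit k bits over distance d."""
    amp = EPS_FS * d**2 if d < D0 else EPS_MP * d**4
    return E_ELEC * k_bits + amp * k_bits

def e_rx(k_bits):
    """Energy to receive k bits."""
    return E_ELEC * k_bits

# A 2000-bit packet sent 310 m costs about 0.024 J (multipath regime):
print(e_tx(2000, 310.0), "J")
```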

  13. Achieving Congestion Mitigation Using Distributed Power Control for Spectrum Sensor Nodes in Sensor Network-Aided Cognitive Radio Ad Hoc Networks

    PubMed Central

    Zhuo, Fan; Duan, Hucai

    2017-01-01

    The data sequence of spectrum sensing results injected from dedicated spectrum sensor nodes (SSNs) and the data traffic from upstream secondary users (SUs) lead to unpredictable data loads in a sensor network-aided cognitive radio ad hoc network (SN-CRN). As a result, network congestion may occur at an SU acting as a fusion center when the offered data load exceeds its available capacity, which degrades network performance. In this paper, we present an effective approach to mitigate congestion of bottlenecked SUs via a proposed distributed power control framework for SSNs over a rectangular-grid-based SN-CRN, aiming to balance resource load and avoid excessive congestion. To achieve this goal, a distributed power control framework for SSNs from the interior tier (IT) and middle tier (MT) is proposed to achieve a tradeoff between channel capacity and energy consumption. In particular, we first devise two pricing factors by considering the stability of local spectrum sensing and the spectrum sensing quality of SSNs. With the aid of these pricing factors, the utility function of the power control problem is formulated by jointly taking into account the revenue of power reduction and the cost of energy consumption for an IT or MT SSN. Taking into account the utility-function maximization and the linear differential equation constraint on energy consumption, we further formulate the power control problem as a differential game model under cooperation or noncooperation scenarios, and rigorously obtain the optimal solutions to this game model by employing dynamic programming. Congestion mitigation for bottlenecked SUs is then derived by alleviating the buffer load over their internal buffers. Simulation results are presented to show the effectiveness of the proposed approach under the rectangular-grid-based SN-CRN scenario. PMID:28914803

  14. Rigorous Characterisation of a Novel, Statistically-Based Ocean Colour Algorithm for the PACE Mission

    NASA Astrophysics Data System (ADS)

    Craig, S. E.; Lee, Z.; Du, K.; Lin, J.

    2016-02-01

    An approach based on empirical orthogonal function (EOF) analysis of ocean colour spectra has been shown to accurately derive inherent optical properties (IOPs) and chlorophyll concentration in scenarios, such as optically complex waters, where standard algorithms often perform poorly. The algorithm has been successfully used in a number of regional applications, and has also shown promise in a global implementation based on the NASA NOMAD data set. Additionally, it has demonstrated the unique ability to derive ocean colour products from top of atmosphere (TOA) signals with either no or minimal atmospheric correction applied. Due to its high potential for use over coastal and inland waters, the EOF approach is currently being rigorously characterised as part of a suite of approaches that will be used to support the new NASA ocean colour mission, PACE (Pre-Aerosol, Clouds and ocean Ecosystem). A major component in this model characterisation is the generation of a synthetic TOA data set using a coupled ocean-atmosphere radiative transfer model, which has been run to mimic PACE spectral resolution, and under a wide range of geographical locations, water constituent concentrations, and sea surface and atmospheric conditions. The resulting multidimensional data set will be analysed, and results presented on the sensitivity of the model to various combinations of parameters, and preliminary conclusions made regarding the optimal implementation strategy of this promising approach (e.g. on a global, optical water type or regional basis). This will provide vital guidance for operational implementation of the model for both existing satellite ocean colour sensors and the upcoming PACE mission.
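    The EOF step itself amounts to a singular value decomposition of mean-centered spectra followed by a regression on the leading-mode amplitudes. A toy sketch on synthetic data, not the authors' trained model:

```python
import numpy as np

# EOF (empirical orthogonal function) decomposition of spectra via SVD, then
# regression of log-chlorophyll on the leading-mode amplitudes. Synthetic data.

rng = np.random.default_rng(1)
n_spectra, n_bands = 500, 50
chl = rng.uniform(-1, 1, n_spectra)                        # log10 chlorophyll
bands = np.linspace(0, 1, n_bands)
spectra = (np.outer(chl, np.sin(2 * np.pi * bands))        # chl-driven mode
           + 0.3 * np.outer(rng.normal(size=n_spectra), bands)  # nuisance mode
           + 0.05 * rng.normal(size=(n_spectra, n_bands)))      # noise

X = spectra - spectra.mean(axis=0)                         # mean-center
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :4] * s[:4]                                  # leading EOF amplitudes

coef, *_ = np.linalg.lstsq(np.c_[np.ones(n_spectra), scores], chl, rcond=None)
pred = np.c_[np.ones(n_spectra), scores] @ coef
print("r^2 =", 1 - np.var(chl - pred) / np.var(chl))
```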

  15. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
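    The sparse-aperture idea can be caricatured in a few lines: keep a small fraction of a scene's spatial-frequency components and invert. Real reconstruction algorithms (e.g. CLEAN from radio astronomy) do far better than this masked inverse FFT:

```python
import numpy as np

# Toy illustration of recovering a scene from a small number of spatial-
# frequency components, the core idea behind synthetic-aperture radiometry.

rng = np.random.default_rng(2)
scene = np.zeros((64, 64))
scene[20:28, 30:38] = 1.0                      # a bright "target" patch

F = np.fft.fft2(scene)                         # full spatial-frequency plane
mask = rng.random(F.shape) < 0.15              # keep only 15% of components
recon = np.fft.ifft2(F * mask).real            # crude direct inversion

err = np.abs(recon - scene).mean()
print(f"mean reconstruction error with 15% coverage: {err:.3f}")
```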

  16. A Computational Framework for Quantifying and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Cioaca, Alexandru

    A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
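    At the heart of the variational machinery is a cost function weighing the background and the observations by their error covariances. A toy 3D-Var-style sketch (no model trajectory or adjoint), with invented matrices:

```python
import numpy as np

# Toy variational cost function J(x) and gradient. B and R are background and
# observation error covariances, H a linear observation operator. Real 4D-Var
# adds the model trajectory and adjoints; everything here is a stand-in.

def cost_and_grad(x, xb, y, H, Binv, Rinv):
    dxb = x - xb
    dy = H @ x - y
    J = 0.5 * dxb @ Binv @ dxb + 0.5 * dy @ Rinv @ dy
    grad = Binv @ dxb + H.T @ Rinv @ dy
    return J, grad

xb = np.array([1.0, 2.0])                 # background (a priori) state
y = np.array([2.5])                       # one observation
H = np.array([[1.0, 0.5]])                # observation operator
Binv = np.linalg.inv(np.diag([0.5, 0.5]))
Rinv = np.linalg.inv(np.array([[0.1]]))

x = xb.copy()
for _ in range(200):                      # steepest descent toward the analysis
    _, g = cost_and_grad(x, xb, y, H, Binv, Rinv)
    x -= 0.05 * g
J, _ = cost_and_grad(x, xb, y, H, Binv, Rinv)
print("analysis state:", x, "final cost:", J)
```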

  17. Considerations for blending data from various sensors

    USGS Publications Warehouse

    Bauer, Brian P.; Barringer, Anthony R.

    1980-01-01

    A project is being proposed at the EROS Data Center to blend the information from sensors aboard various satellites. The problems of, and considerations for, blending data from several satellite-borne sensors are discussed. System descriptions of the sensors aboard the HCMM, TIROS-N, GOES-D, Landsat 3, Landsat D, Seasat, SPOT, Stereosat, and NOSS satellites, and the quantity, quality, image dimensions, and availability of these data are summarized to define the attributes of a multi-sensor satellite data base. Unique configurations of equipment, storage media, and specialized hardware to meet the data system requirements are described, as well as archival media and improved sensors that will be on-line within the next 5 years. Definitions and the rigor required for blending various sensor data are given. Problems of merging data from the same sensor (intrasensor comparison) and from different sensors (intersensor comparison), the characteristics and advantages of cross-calibration of data, and integration of data into a product matrix field are addressed. Data processing considerations as affected by formation, resolution, and problems of merging large data sets, and the organization of data bases for blending data are presented. Examples utilizing GOES and Landsat data are presented to demonstrate techniques of data blending, and recommendations for future implementation of a set of standard scenes and their characteristics necessary for optimal data blending are discussed.

  18. Rigorous Performance Evaluation of Smartphone GNSS/IMU Sensors for ITS Applications

    PubMed Central

    Gikas, Vassilis; Perakis, Harris

    2016-01-01

    With the rapid growth in smartphone technologies and improvement in their navigation sensors, an increasing amount of location information is now available, opening the road to the provision of new Intelligent Transportation System (ITS) services. Current smartphone devices embody miniaturized Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU) and other sensors capable of providing user position, velocity and attitude. However, it is hard to characterize their actual positioning and navigation performance capabilities due to the disparate sensor and software technologies adopted among manufacturers and the high influence of environmental conditions, and therefore, a unified certification process is missing. This paper presents the analysis results obtained from the assessment of two modern smartphones regarding their positioning accuracy (i.e., precision and trueness) capabilities (i.e., potential and limitations) based on a practical but rigorous methodological approach. Our investigation relies on the results of several vehicle tracking (i.e., cruising and maneuvering) tests realized through comparing smartphone-obtained trajectories and kinematic parameters to those derived using a high-end GNSS/IMU system and advanced filtering techniques. Performance testing is undertaken for the HTC One S (Android) and iPhone 5s (iOS). Our findings indicate that the deviation of the smartphone locations from ground truth (trueness) deteriorates by a factor of two in obscured environments compared to those derived in open sky conditions. Moreover, it appears that the iPhone 5s produces relatively smaller and less dispersed error values compared to those computed for the HTC One S. Also, the navigation solution of the HTC One S appears to adapt faster to changes in environmental conditions, suggesting a somewhat different data filtering approach for the iPhone 5s. Testing the accuracy of the accelerometer and gyroscope sensors for a number of maneuvering (speeding, turning, etc.) events reveals high consistency between smartphones, whereas the small deviations from ground truth verify their high potential even for critical ITS safety applications. PMID:27527187

  19. Rigorous Performance Evaluation of Smartphone GNSS/IMU Sensors for ITS Applications.

    PubMed

    Gikas, Vassilis; Perakis, Harris

    2016-08-05

    With the rapid growth in smartphone technologies and improvement in their navigation sensors, an increasing amount of location information is now available, opening the road to the provision of new Intelligent Transportation System (ITS) services. Current smartphone devices embody miniaturized Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU) and other sensors capable of providing user position, velocity and attitude. However, it is hard to characterize their actual positioning and navigation performance capabilities due to the disparate sensor and software technologies adopted among manufacturers and the high influence of environmental conditions, and therefore, a unified certification process is missing. This paper presents the analysis results obtained from the assessment of two modern smartphones regarding their positioning accuracy (i.e., precision and trueness) capabilities (i.e., potential and limitations) based on a practical but rigorous methodological approach. Our investigation relies on the results of several vehicle tracking (i.e., cruising and maneuvering) tests realized through comparing smartphone-obtained trajectories and kinematic parameters to those derived using a high-end GNSS/IMU system and advanced filtering techniques. Performance testing is undertaken for the HTC One S (Android) and iPhone 5s (iOS). Our findings indicate that the deviation of the smartphone locations from ground truth (trueness) deteriorates by a factor of two in obscured environments compared to those derived in open sky conditions. Moreover, it appears that the iPhone 5s produces relatively smaller and less dispersed error values compared to those computed for the HTC One S. Also, the navigation solution of the HTC One S appears to adapt faster to changes in environmental conditions, suggesting a somewhat different data filtering approach for the iPhone 5s. Testing the accuracy of the accelerometer and gyroscope sensors for a number of maneuvering (speeding, turning, etc.) events reveals high consistency between smartphones, whereas the small deviations from ground truth verify their high potential even for critical ITS safety applications.
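    The trueness/precision split used here separates the systematic offset from the reference trajectory from the dispersion about that offset. A sketch on synthetic trajectory errors, not the paper's data:

```python
import numpy as np

# Trueness = magnitude of the mean (systematic) error against ground truth;
# precision = dispersion about that offset. All data below are synthetic.

rng = np.random.default_rng(3)
ref = np.cumsum(rng.normal(size=(500, 2)), axis=0)       # "ground truth" track
phone = ref + np.array([1.2, -0.8]) + rng.normal(0, 2.0, ref.shape)

err = phone - ref
trueness = np.linalg.norm(err.mean(axis=0))              # systematic offset, m
precision = err.std(axis=0)                              # dispersion per axis, m
rmse = np.sqrt((np.linalg.norm(err, axis=1) ** 2).mean())
print(f"trueness {trueness:.2f} m, precision {precision}, RMSE {rmse:.2f} m")
```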

  20. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul

    2016-06-01

    We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ~0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ~2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ~0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.

  1. Chase: Control of Heterogeneous Autonomous Sensors for Situational Awareness

    DTIC Science & Technology

    2016-08-03

    ...remained the discovery and analysis of new foundational methodology for information collection and fusion that exercises rigorous feedback control over ... simultaneously achieve quantified information and physical objectives. ... In the general area of novel stochastic systems analysis it seems appropriate to mention the pioneering work on non-Bayesian distributed learning

  2. Performance Comparison of Wireless Sensor Network Standard Protocols in an Aerospace Environment: ISA100.11a and ZigBee Pro

    NASA Technical Reports Server (NTRS)

    Wagner, Raymond S.; Barton, Richard J.

    2011-01-01

    Standards-based wireless sensor network (WSN) protocols are promising candidates for spacecraft avionics systems, offering unprecedented instrumentation flexibility and expandability. Ensuring reliable data transport is key, however, when migrating from wired to wireless data-gathering systems. In this paper, we conduct a rigorous laboratory analysis of the relative performances of the ZigBee Pro and ISA100.11a protocols in a representative crewed aerospace environment. Since both operate in the 2.4 GHz radio frequency (RF) band shared by systems such as Wi-Fi, they are subject at times to potentially debilitating RF interference. We compare the goodput (application-level throughput) achievable by both under varying levels of 802.11g Wi-Fi traffic. We conclude that while the simpler, more inexpensive ZigBee Pro protocol performs well under moderate levels of interference, the more complex and costly ISA100.11a protocol is needed to ensure reliable data delivery under heavier interference. To our knowledge, this paper represents the first published, rigorous analysis of WSN protocols in an aerospace environment and the first published head-to-head comparison of ZigBee Pro and ISA100.11a.

  3. Near infrared spectroscopy as an on-line method to quantitatively determine glycogen and predict ultimate pH in pre rigor bovine M. longissimus dorsi.

    PubMed

    Lomiwes, D; Reis, M M; Wiklund, E; Young, O A; North, M

    2010-12-01

    The potential of near infrared (NIR) spectroscopy as an on-line method to quantify glycogen and predict ultimate pH (pHu) of pre rigor beef M. longissimus dorsi (LD) was assessed. NIR spectra (538 to 1677 nm) of pre rigor LD from steers, cows and bulls were collected early post mortem, and measurements were made of pre rigor glycogen concentration and pHu. Spectral and measured data were combined to develop models to quantify glycogen and predict the pHu of pre rigor LD. NIR spectra and pre rigor predicted values obtained from quantitative models were shown to be poorly correlated with glycogen and pHu (r² = 0.23 and 0.20, respectively). Qualitative models developed to categorize each muscle according to its pHu were able to correctly categorize 42% of high-pHu samples. Optimum qualitative and quantitative models derived from NIR spectra showed low correlation between predicted values and reference measurements. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.

  4. Intelligent Melting Probes - How to Make the Most out of our Data

    NASA Astrophysics Data System (ADS)

    Kowalski, J.; Clemens, J.; Chen, S.; Schüller, K.

    2016-12-01

    Direct exploration of glaciers, ice sheets, or subglacial environments poses a major challenge. Different technological solutions have been proposed and deployed in recent decades, examples being hot-water drills and different melting probe designs. Most of the recent engineering concepts integrate a variety of on-board sensors, e.g. temperature sensors, pressure sensors, or an inertial measurement unit. Not only do individual sensors provide valuable insight into the current state of the probe, but they often also contain a wealth of additional information when analyzed collectively. This quite naturally raises the question: how can we make the most out of our data? We find that it is necessary to implement intelligent data integration and sensor fusion strategies to retrieve the maximum amount of information from the observations. In this contribution, we are inspired by the engineering design of the IceMole, a minimally invasive, steerable melting probe. We discuss two sensor integration strategies relevant to IceMole melting scenarios. First, we present a multi-sensor fusion approach to accurately retrieve subsurface position and attitude information. It uses an extended Kalman filter to integrate data from an on-board IMU, a differential magnetometer system, the screw feed, as well as the travel time of acoustic signals originating from emitters at the ice surface. Furthermore, an evidential mapping algorithm estimates a map of the environment from data of ultrasound phased arrays in the probe's head. Various results from tests in a swimming pool and in glacier ice will be shown during the presentation. A second block considers the fluid-dynamical state in the melting channel, as well as the ambient cryo-environment; it is devoted to retrieving information from on-board temperature and pressure sensors. Here, we report on preliminary results from re-analysing past field test data. Knowledge from integrated sensor data likewise provides valuable input for the parameter identification and verification of data-based models. Because this approach does not rely on detailed physical laws, it remains usable when the probe design is modified; it is highly transferable and has not been exploited rigorously so far. This could be a potential future direction.
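    The filtering idea can be caricatured with a scalar Kalman filter fusing a rate sensor with an absolute heading sensor. The IceMole filter is a full EKF over position and attitude with more sensor inputs; all numbers below are invented:

```python
import numpy as np

# Greatly simplified fusion sketch: predict heading from gyro rate, correct it
# with noisy magnetometer headings, via a scalar Kalman filter.

def kf_heading(gyro_rate, mag_heading, dt=0.1, q=0.01, r=4.0):
    theta, P = mag_heading[0], r                          # init from first fix
    est = []
    for w, z in zip(gyro_rate, mag_heading):
        theta, P = theta + w * dt, P + q                  # predict (gyro)
        K = P / (P + r)                                   # Kalman gain
        theta, P = theta + K * (z - theta), (1 - K) * P   # update (magnetometer)
        est.append(theta)
    return np.array(est)

rng = np.random.default_rng(4)
t = np.arange(0.0, 60.0, 0.1)
true = 0.5 * t                                   # deg: slow, steady turn
gyro = 0.5 + rng.normal(0, 0.05, t.size)         # deg/s, gyro noise
mag = true + rng.normal(0, 2.0, t.size)          # deg, magnetometer noise

est = kf_heading(gyro, mag)
print(f"final heading error: {est[-1] - true[-1]:.2f} deg")
```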

  5. An initial investigation of the long-term trends in the fluxgate magnetometer (FGM) calibration parameters on the four Cluster spacecraft

    NASA Astrophysics Data System (ADS)

    Alconcel, L. N. S.; Fox, P.; Brown, P.; Oddy, T. M.; Lucek, E. L.; Carr, C. M.

    2014-07-01

    Over the course of more than 10 years in operation, the calibration parameters of the outboard fluxgate magnetometer (FGM) sensors on the four Cluster spacecraft are shown to be remarkably stable. The parameters are refined on the ground during the rigorous FGM calibration process performed for the Cluster Active Archive (CAA). Fluctuations in some parameters show some correlation with trends in the sensor temperature (orbit position). The parameters, particularly the offsets, of the spacecraft 1 (C1) sensor have undergone more long-term drift than those of the other spacecraft (C2, C3 and C4) sensors. Some potentially anomalous calibration parameters have been identified and will require further investigation in future. However, the observed long-term stability demonstrated in this initial study gives confidence in the accuracy of the Cluster magnetic field data. For the most sensitive ranges of the FGM instrument, the offset drift is typically 0.2 nT per year in each sensor on C1 and negligible on C2, C3 and C4.

  6. An initial investigation of the long-term trends in the fluxgate magnetometer (FGM) calibration parameters on the four Cluster spacecraft

    NASA Astrophysics Data System (ADS)

    Alconcel, L. N. S.; Fox, P.; Brown, P.; Oddy, T. M.; Lucek, E. L.; Carr, C. M.

    2014-01-01

    Over the course of more than ten years in operation, the calibration parameters of the outboard fluxgate magnetometer (FGM) sensors on the four Cluster spacecraft are shown to be remarkably stable. The parameters are refined on the ground during the rigorous FGM calibration process performed for the Cluster Active Archive (CAA). Fluctuations in some parameters show some correlation with trends in the sensor temperature (orbit position). The parameters, particularly the offsets, of the spacecraft 1 (C1) sensor have undergone more long-term drift than those of the other spacecraft (C2, C3 and C4) sensors. Some potentially anomalous calibration parameters have been identified and will require further investigation in future. However, the observed long-term stability demonstrated in this initial study gives confidence in the relative accuracy of the Cluster magnetic field data. For the most sensitive ranges of the FGM instrument, the offset drift is typically 0.2 nT per year in each sensor on C1 and negligible on C2, C3 and C4.

  7. Facile fabrication of CNT-based chemical sensor operating at room temperature

    NASA Astrophysics Data System (ADS)

    Sheng, Jiadong; Zeng, Xian; Zhu, Qi; Yang, Zhaohui; Zhang, Xiaohua

    2017-12-01

    This paper describes a simple, low-cost and effective route to fabricate CNT-based chemical sensors that operate at room temperature. First, the incorporation of silk fibroin in vertically aligned CNT arrays (CNTA) obtained through a thermal chemical vapor deposition (CVD) method makes it feasible to remove the CNT arrays directly from their substrates without any rigorous acid or sonication treatment. Through a simple one-step in situ polymerization of anilines, the functionalization of CNT arrays with polyaniline (PANI) significantly improves the sensing performance of CNT-based chemical sensors in detecting ammonia (NH3) and hydrogen chloride (HCl) vapors. Chemically modified CNT arrays also show responses to organic vapors such as menthol, ethyl acetate and acetone. Although the detection limits of chemically modified CNT-based chemical sensors are of the same order of magnitude as those reported in previous studies, these CNT-based chemical sensors offer simplicity, low cost and energy efficiency in the preparation and fabrication of devices. Additionally, a linear relationship between the relative sensitivity and the concentration of the analyte makes possible precise estimation of the concentrations of trace chemical vapors.

  10. The CUAHSI Water Data Center: Empowering scientists to discover, use, store, and share water data

    NASA Astrophysics Data System (ADS)

    Couch, A. L.; Hooper, R. P.; Arrigo, J. S.

    2012-12-01

    The proposed CUAHSI Water Data Center (WDC) will provide production-quality water data resources based upon the successful large-scale data services prototype developed by the CUAHSI Hydrologic Information System (HIS) project. The WDC, using the HIS technology, concentrates on providing time series data collected at fixed points or on moving platforms from sensors primarily (but not exclusively) in the medium of water. The WDC's missions include providing simple and effective data discovery tools useful to researchers in a variety of water-related disciplines, and providing simple and cost-effective data publication mechanisms for projects that do not desire to run their own data servers. The WDC's activities will include: 1. Rigorous curation of the water data catalog already assembled during the CUAHSI HIS project, to ensure accuracy of records and existence of declared sources. 2. Data backup and failover services for "at risk" data sources. 3. Creation and support for ubiquitously accessible data discovery and access, web-based search and smartphone applications. 4. Partnerships with researchers to extend the state of the art in water data use. 5. Partnerships with industry to create plug-and-play data publishing from sensors, and to create domain-specific tools. The WDC will serve as a knowledge resource for researchers of water-related issues, and will interface with other data centers to make their data more accessible to water researchers. The WDC will serve as a vehicle for addressing some of the grand challenges of accessing and using water data, including: a. Cross-domain data discovery: different scientific domains refer to the same kind of water data using different terminologies, making discovery of data difficult for researchers outside the data provider's domain. b. Cross-validation of data sources: much water data comes from sources lacking rigorous quality control procedures; such sources can be compared against others with rigorous quality control. The WDC enables this by making both kinds of sources available in the same search interface. c. Data provenance: the appropriateness of data for use in a specific model or analysis often depends upon the exact details of how data was gathered and processed. The WDC will aid this by curating standards for metadata that are as descriptive as practical of the collection procedures. "Plug and play" sensor interfaces will fill in metadata appropriate to each sensor without human intervention. d. Contextual search: discovering data based upon geological (e.g. aquifer) or geographic (e.g., location in a stream network) features external to metadata. e. Data-driven search: discovering data that exhibit quality factors that are not described by the metadata. The WDC will partner with researchers desiring contextual and data driven search, and make results available to all. Many major data providers (e.g. federal agencies) are not mandated to provide access to data other than those they collect. The HIS project assembled data from over 90 different sources, thus demonstrating the promise of this approach. Meeting the grand challenges listed above will greatly enhance scientists' ability to discover, interpret, access, and analyze water data from across domains and sources to test Earth system hypotheses.

  11. A New Approach to Design Autonomous Wireless Sensor Node Based on RF Energy Harvesting System

    PubMed Central

    Hakem, Nadir

    2018-01-01

    Energy harvesting techniques are increasingly seen as the solution for freeing wireless sensor nodes from their battery dependency. However, it remains evident that network performance features, such as network size, packet length, and duty cycle, are influenced by the amount of recovered energy. This paper proposes a new approach to defining the specifications of a stand-alone wireless node based on a Radio-frequency Energy Harvesting System (REHS). To achieve adequate performance regarding the range of the Wireless Sensor Network (WSN), techniques for minimizing the energy consumed by the sensor node are combined with methods for optimizing the performance of the REHS. For more rigor in the design of the autonomous node, a comprehensive energy model of the node in a wireless network is established. For an equitable distribution of network load between the different nodes, the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol is used. The model considers five energy-consumption sources, most of which are ignored in recently used models. By using the hardware parameters of commercial off-the-shelf components (Mica2 Motes and the CC2520 of Texas Instruments), the energy requirement of a sensor node is quantified. A miniature REHS based on a judicious choice of rectifying diodes is then designed and developed to achieve optimal performance in the Industrial, Scientific and Medical (ISM) band centered at 2.45 GHz. Because of the mismatch between the REHS and the antenna, a band-pass filter is designed to reduce reflection losses. A gradient search method is used to optimize the output characteristics of the matched REHS. At 1 mW of input RF power, the REHS provides an output DC power of 0.57 mW, and a comparison with the energy requirement of the node allows the Base Station (BS) to be located 310 m from the wireless nodes when the WSN has 100 nodes evenly spread over an area of 300 × 300 m² and each round lasts 10 min. The result shows that the range of the autonomous WSN increases when the monitored physical phenomenon varies very slowly. Having taken into account all the dissipation sources coexisting in a sensor node and using actual measurements of an REHS, this work provides guidelines for the design of autonomous nodes based on REHS. PMID:29304002
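    To make the role of the energy model concrete, the sketch below implements the classic first-order radio model that LEACH-style range analyses rest on. The constants are illustrative textbook values, not the paper's measured Mica2/CC2520 parameters, and all but the radio transmit/receive terms of the paper's five consumption sources are ignored, so the range it reports is an overestimate.

        # First-order radio energy model often paired with LEACH-style analyses.
        # Constants are illustrative textbook values, NOT the paper's measured figures.
        E_ELEC = 50e-9      # J/bit, electronics energy (TX or RX)
        EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy (free-space d^2 model)

        def tx_energy(bits: int, distance_m: float) -> float:
            """Energy to transmit `bits` over `distance_m` (free-space d^2 law)."""
            return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

        def rx_energy(bits: int) -> float:
            """Energy to receive `bits`."""
            return E_ELEC * bits

        def max_range(harvested_j_per_round: float, bits_per_round: int) -> float:
            """Largest BS distance sustainable if each round's TX budget equals the
            harvested energy (sensing, CPU, and RX costs ignored for brevity)."""
            budget = harvested_j_per_round - E_ELEC * bits_per_round
            if budget <= 0:
                return 0.0
            return (budget / (EPS_AMP * bits_per_round)) ** 0.5

        # Example: 0.57 mW harvested over a 10-minute round, one 2 kbit packet/round.
        harvested = 0.57e-3 * 600   # joules per round
        print(f"Sustainable range: {max_range(harvested, 2000):.0f} m")
        # The paper's full five-source model yields a much shorter range (310 m),
        # which is exactly why the ignored terms matter.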

  12. Body-Worn Sensors in Parkinson's Disease: Evaluating Their Acceptability to Patients.

    PubMed

    Fisher, James M; Hammerla, Nils Y; Rochester, Lynn; Andras, Peter; Walker, Richard W

    2016-01-01

    Remote monitoring of symptoms in Parkinson's disease (PD) using body-worn sensors would assist treatment decisions and evaluation of new treatments. To date, a rigorous, systematic evaluation of the acceptability of body-worn sensors in PD has not been undertaken. Thirty-four participants wore bilateral wrist-worn sensors for 4 h in a research facility and then for 1 week at home. Participants' experiences of wearing the sensors were evaluated using a Likert-style questionnaire after each phase. Qualitative data were collected through free-text responses. Differences in responses between phases were assessed by using the Wilcoxon rank-sum test. Content analysis of qualitative data was undertaken. "Non-wear time" was estimated via analysis of accelerometer data for periods when sensors were stationary. After prolonged wearing there was a negative shift in participants' views on the comfort of the sensor; problems with the sensor's strap were highlighted. However, accelerometer data demonstrated high patient concordance with wearing of the sensors. There was no evidence that participants were less likely to wear the sensors in public. Most participants preferred wearing the sensors to completing symptom diaries. The finding that participants were not less likely to wear the sensors in public provides reassurance regarding the ecological validity of the data captured. The validity of our findings was strengthened by "triangulation" of data sources, enabling patients to express their agenda and repeated assessment after prolonged wearing. Long-term monitoring with wrist-worn sensors is acceptable to this cohort of PD patients. Evaluation of the wearer's experience is critical to the development of remote monitoring technology.
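    For readers unfamiliar with the analysis choice, a minimal sketch of the between-phase comparison is given below; the Likert ratings are invented for illustration, and scipy's rank-sum implementation stands in for whatever statistics package the authors used.

        # Illustrative comparison of Likert-scale comfort ratings between the 4-h
        # lab phase and the 1-week home phase, mirroring the abstract's analysis
        # choice (Wilcoxon rank-sum test). Data below are made up for the example.
        from scipy.stats import ranksums

        lab_phase  = [5, 4, 5, 4, 3, 5, 4, 4, 5, 3]   # e.g., "the sensor is comfortable"
        home_phase = [3, 3, 4, 2, 3, 4, 3, 2, 4, 3]   # same item after a week of wear

        stat, p = ranksums(lab_phase, home_phase)
        print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")
        # A significant negative shift would match the reported decline in comfort.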

  13. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument with an accuracy of 1 μm will be complemented by an inter-satellite ranging instrument with an accuracy of several nm. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new-generation sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors that were present in standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in assessing error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and their consistent stochastic modeling within the adjustment process.

  14. Robust numerical electromagnetic eigenfunction expansion algorithms

    NASA Astrophysics Data System (ADS)

    Sainath, Kamalesh

    This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2 MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely infinite spatial extent) is considered valid locally, near the sensor region, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability and their ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor- and environment-robust numerical modeling algorithm, are as follows: (1) a simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2); (2) sensor- and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition, and sub-region-dependent integration order (Chapter 3); (3) integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails; (4) robust in-situ (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the ill behavior of the source current spatial spectrum function (Chapter 5); and (5) analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively: (1) avoiding computationally intensive critical-point location and tracking (computation time savings); (2) sensor- and material-robust curbing of the integrand's oscillatory and slowly decaying behavior, as well as prevention of undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits); (3) sensor- and material-robust reduction (or, for GLQ, elimination) of integral truncation error; (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (a 10- to 1000-fold compute-speed acceleration is also realized for distributed-source computations); and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too?
This question is explored in Chapter 7 using a Transformation Optics-based approach, which allows one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. However, the technique introduces spurious wave scattering, and the resulting degradation of computational accuracy requires analysis. The mathematical exposition of this novel tilted-layer modeling formulation, together with an exhaustive simulation-based study and analysis of its limitations, is Chapter 7's main contribution.

  15. Automated Design Space Exploration with Aspen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spafford, Kyle L.; Vetter, Jeffrey S.

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.

  16. Automated Design Space Exploration with Aspen

    DOE PAGES

    Spafford, Kyle L.; Vetter, Jeffrey S.

    2015-01-01

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.
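    The following toy analogue shows how a design-space question of this kind can be cast as a nonlinear program; it is not Aspen itself but a hand-written stand-in using scipy, with an invented 3-D FFT cost model and invented machine parameters.

        # Toy analogue of Aspen-style automated design-space exploration: find the
        # largest 3-D FFT grid that fits a memory budget and a time budget, posed
        # as a nonlinear program. All constants and cost models are invented.
        import numpy as np
        from scipy.optimize import minimize

        FLOP_RATE = 1e12   # sustained flop/s of the abstract machine (assumed)
        BW        = 1e11   # memory bandwidth, bytes/s (assumed)
        MEM_CAP   = 64e9   # memory capacity, bytes (assumed)
        T_BUDGET  = 1.0    # wall-clock budget, seconds (assumed)

        def flops(n):      # rough complex 3-D FFT flop count
            return 5.0 * n**3 * 3.0 * np.log2(np.maximum(n, 2.0))

        def traffic(n):    # bytes moved: complex doubles, three transpose passes
            return 16.0 * n**3 * 3.0

        res = minimize(
            lambda x: -x[0],                 # maximize the grid size n
            x0=[256.0],
            method="SLSQP",
            bounds=[(2.0, None)],
            constraints=[
                {"type": "ineq", "fun": lambda x: MEM_CAP - 16.0 * x[0]**3},
                {"type": "ineq", "fun": lambda x: T_BUDGET - flops(x[0]) / FLOP_RATE},
                {"type": "ineq", "fun": lambda x: T_BUDGET - traffic(x[0]) / BW},
            ],
        )
        print(f"largest feasible 3-D FFT: n = {res.x[0]:.0f} per dimension")
        # Here the bandwidth constraint binds; swapping cost models or budgets
        # reformulates the exploration question without any manual re-analysis.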

  17. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
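    As a rough illustration of the convex-combination mechanism, the sketch below uses a real-valued analogue (two LMS filters of different tap lengths) rather than the paper's quaternion QLMS/WL-QLMS pair; the sigmoid-parameterized gradient update of the mixing parameter is the standard rule for such combinations, and all names and step sizes are invented.

        # Real-valued analogue of a convex combination of two adaptive filters.
        # The steady-state mixing parameter lambda indicates which sub-filter
        # (and hence which signal model) fits the data, mirroring how the paper
        # tracks properness/model order by monitoring the mixing parameter.
        import numpy as np

        def convex_combo_lms(x, d, L1=4, L2=2, mu=0.01, mu_a=0.5):
            """Combine a long (L1-tap) and short (L2-tap) LMS filter.
            Returns the final lambda in (0, 1); values near 1 mean the longer
            model class fits the data better."""
            w1, w2 = np.zeros(L1), np.zeros(L2)
            a = 0.0                                  # mixing parameter, logit form
            for n in range(L1 - 1, len(x)):
                u1 = x[n - L1 + 1:n + 1][::-1]       # regressors, newest sample first
                u2 = x[n - L2 + 1:n + 1][::-1]
                y1, y2 = w1 @ u1, w2 @ u2
                lam = 1.0 / (1.0 + np.exp(-a))
                e = d[n] - (lam * y1 + (1 - lam) * y2)
                w1 += mu * (d[n] - y1) * u1          # independent LMS updates
                w2 += mu * (d[n] - y2) * u2
                a += mu_a * e * (y1 - y2) * lam * (1 - lam)   # mixing update
            return 1.0 / (1.0 + np.exp(-a))

        rng = np.random.default_rng(0)
        x = rng.standard_normal(20000)
        d = np.convolve(x, [0.5, -0.3, 0.2, 0.1])[:len(x)]     # a 4-tap system
        print(f"final lambda = {convex_combo_lms(x, d):.2f}")  # -> close to 1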

  18. SPARTAN: A High-Fidelity Simulation for Automated Rendezvous and Docking Applications

    NASA Technical Reports Server (NTRS)

    Turbe, Michael A.; McDuffie, James H.; DeKock, Brandon K.; Betts, Kevin M.; Carrington, Connie K.

    2007-01-01

    bd Systems (a subsidiary of SAIC) has developed the Simulation Package for Autonomous Rendezvous Test and ANalysis (SPARTAN), a high-fidelity on-orbit simulation featuring multiple six-degree-of-freedom (6DOF) vehicles. SPARTAN has been developed in a modular fashion in Matlab/Simulink to test next-generation automated rendezvous and docking guidance, navigation, and control algorithms for NASA's new Vision for Space Exploration. SPARTAN includes autonomous state-based mission manager algorithms responsible for sequencing the vehicle through various flight phases based on on-board sensor inputs, and closed-loop guidance algorithms, including Lambert transfers, Clohessy-Wiltshire maneuvers, and glideslope approaches. The guidance commands are implemented using an integrated translation and attitude control system to provide 6DOF control of each vehicle in the simulation. SPARTAN also includes high-fidelity representations of a variety of absolute and relative navigation sensors that may be used for NASA missions, including radio frequency, lidar, and video-based rendezvous sensors. Proprietary navigation sensor fusion algorithms have been developed that allow the integration of these sensor measurements through an extended Kalman filter framework to create a single optimal estimate of the relative state of the vehicles. SPARTAN provides capability for Monte Carlo dispersion analysis, allowing for rigorous evaluation of the performance of the complete proposed AR&D system, including software, sensors, and mechanisms. SPARTAN also supports hardware-in-the-loop testing through conversion of the algorithms to C code using Real-Time Workshop so that they can be hosted in a mission computer engineering development unit running an embedded real-time operating system. SPARTAN also contains both a runtime TCP/IP socket interface and post-processing compatibility with bdStudio, a visualization tool developed by bd Systems, allowing for intuitive evaluation of simulation results. A description of the SPARTAN architecture and capabilities is provided, along with details on the models and algorithms utilized and results from representative missions.

  19. Achieving Global Ocean Color Climate Data Records

    NASA Technical Reports Server (NTRS)

    Franz, Bryan

    2010-01-01

    Ocean color, or the spectral distribution of visible light upwelling from beneath the ocean surface, carries information on the composition and concentration of biological constituents within the water column. The CZCS mission in 1978 demonstrated that quantitative ocean color measurements could be made from spaceborne sensors, given sufficient corrections for atmospheric effects and a rigorous calibration and validation program. The launch of SeaWiFS in 1997 represents the beginning of NASA's ongoing efforts to develop a continuous ocean color data record with sufficient coverage and fidelity for global change research. Achievements in establishing and maintaining the consistency of the time-series through multiple missions and varying instrument designs will be highlighted in this talk, including measurements from NASA's MODIS instruments currently flying on the Terra and Aqua platforms, as well as the MERIS sensor flown by ESA and the OCM-2 sensor recently launched by ISRO.

  20. A Noncontact FMCW Radar Sensor for Displacement Measurement in Structural Health Monitoring

    PubMed Central

    Li, Cunlong; Chen, Weimin; Liu, Gang; Yan, Rong; Xu, Hengyi; Qi, Yi

    2015-01-01

    This paper investigates the Frequency Modulation Continuous Wave (FMCW) radar sensor for multi-target displacement measurement in Structural Health Monitoring (SHM). The principle of three-dimensional (3-D) displacement measurement of civil infrastructures is analyzed. The requirements of high-accuracy displacement and multi-target identification for the measuring sensors are discussed. The fundamental measuring principle of FMCW radar is presented with rigorous mathematical formulas, and further the multiple-target displacement measurement is analyzed and simulated. In addition, a FMCW radar prototype is designed and fabricated based on an off-the-shelf radar frontend and data acquisition (DAQ) card, and the displacement error induced by phase asynchronism is analyzed. The conducted outdoor experiments verify the feasibility of this sensing method applied to multi-target displacement measurement, and experimental results show that three targets located at different distances can be distinguished simultaneously with millimeter level accuracy. PMID:25822139
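    A numeric sketch of the FMCW principles the abstract formalizes is given below: each target's range maps to a beat frequency, an FFT separates the targets, and the phase at a target's bin tracks sub-millimeter displacement. All radar parameters are invented and no hardware effects (noise, phase asynchronism) are modeled.

        # FMCW ranging sketch: a target at range R produces a beat frequency
        # f_b = 2*B*R/(c*T); an FFT of the beat signal separates targets, and the
        # phase at a target's bin tracks displacement via dd = lambda*dphi/(4*pi).
        import numpy as np

        c  = 3e8
        B  = 1e9          # sweep bandwidth: range resolution c/(2B) = 15 cm
        T  = 1e-3         # sweep duration
        f0 = 24e9         # carrier (wavelength ~ 12.5 mm)
        fs = 2e6
        t  = np.arange(0, T, 1 / fs)

        def beat_signal(ranges_m):
            """Superposed de-chirped beat tones for point targets."""
            sig = np.zeros_like(t, dtype=complex)
            for R in ranges_m:
                fb  = 2 * B * R / (c * T)         # range-proportional beat frequency
                phi = 4 * np.pi * f0 * R / c      # range-proportional carrier phase
                sig += np.exp(1j * (2 * np.pi * fb * t + phi))
            return sig

        targets = [10.0, 25.0, 40.0]              # three targets, as in the paper
        spec  = np.fft.fft(beat_signal(targets))
        freqs = np.fft.fftfreq(len(t), 1 / fs)
        bins  = [np.argmin(abs(freqs - 2 * B * R / (c * T))) for R in targets]

        # Move target 1 by 0.5 mm and read the displacement back from the phase.
        spec2 = np.fft.fft(beat_signal([10.0 + 0.5e-3, 25.0, 40.0]))
        dphi = np.angle(spec2[bins[0]] / spec[bins[0]])
        print(f"recovered displacement: {dphi * (c / f0) / (4 * np.pi) * 1e3:.2f} mm")
        # -> ~0.5 mm, well below the 15 cm range-bin resolution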

  1. A noncontact FMCW radar sensor for displacement measurement in structural health monitoring.

    PubMed

    Li, Cunlong; Chen, Weimin; Liu, Gang; Yan, Rong; Xu, Hengyi; Qi, Yi

    2015-03-26

    This paper investigates the Frequency Modulation Continuous Wave (FMCW) radar sensor for multi-target displacement measurement in Structural Health Monitoring (SHM). The principle of three-dimensional (3-D) displacement measurement of civil infrastructures is analyzed. The requirements of high-accuracy displacement and multi-target identification for the measuring sensors are discussed. The fundamental measuring principle of FMCW radar is presented with rigorous mathematical formulas, and further the multiple-target displacement measurement is analyzed and simulated. In addition, a FMCW radar prototype is designed and fabricated based on an off-the-shelf radar frontend and data acquisition (DAQ) card, and the displacement error induced by phase asynchronism is analyzed. The conducted outdoor experiments verify the feasibility of this sensing method applied to multi-target displacement measurement, and experimental results show that three targets located at different distances can be distinguished simultaneously with millimeter level accuracy.

  2. Matter Gravitates, but Does Gravity Matter?

    ERIC Educational Resources Information Center

    Groetsch, C. W.

    2011-01-01

    The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…

  3. Configuration-controlled Au nanocluster arrays on inverse micelle nano-patterns: versatile platforms for SERS and SPR sensors

    NASA Astrophysics Data System (ADS)

    Jang, Yoon Hee; Chung, Kyungwha; Quan, Li Na; Špačková, Barbora; Šípová, Hana; Moon, Seyoung; Cho, Won Joon; Shin, Hae-Young; Jang, Yu Jin; Lee, Ji-Eun; Kochuveedu, Saji Thomas; Yoon, Min Ji; Kim, Jihyeon; Yoon, Seokhyun; Kim, Jin Kon; Kim, Donghyun; Homola, Jiří; Kim, Dong Ha

    2013-11-01

    Nanopatterned 2-dimensional Au nanocluster arrays with controlled configuration are fabricated onto reconstructed nanoporous poly(styrene-block-vinylpyridine) inverse micelle monolayer films. Near-field coupling of localized surface plasmons is studied and compared for disordered and ordered core-centered Au NC arrays. Differences in evolution of the absorption band and field enhancement upon Au nanoparticle adsorption are shown. The experimental results are found to be in good agreement with theoretical studies based on the finite-difference time-domain method and rigorous coupled-wave analysis. The realized Au nanopatterns are exploited as substrates for surface-enhanced Raman scattering and integrated into Kretschmann-type SPR sensors, based on which unprecedented SPR-coupling-type sensors are demonstrated.

  4. Academic Rigor in General Education, Introductory Astronomy Courses for Nonscience Majors

    ERIC Educational Resources Information Center

    Brogt, Erik; Draeger, John D.

    2015-01-01

    We discuss a model of academic rigor and apply it to a general education introductory astronomy course. We argue that even without one of the central tenets of professional astronomy, the use of mathematics, the course can still be considered academically rigorous when expectations, goals, assessments, and curriculum are properly aligned.

  5. Tri-critical behavior of the Blume-Emery-Griffiths model on a Kagomé lattice: Effective-field theory and rigorous bounds

    NASA Astrophysics Data System (ADS)

    Santos, Jander P.; Sá Barreto, F. C.

    2016-01-01

    Spin correlation identities for the Blume-Emery-Griffiths model on the Kagomé lattice are derived and, combined with rigorous correlation inequalities, lead to upper bounds on the critical temperature. From the spin correlation identities, the mean-field approximation and effective-field approximation results for the magnetization, the critical frontiers, and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve on the results of those effective-field-type theories.

  6. Multi-Sensor Radiometric Study to Detect Pathologies in Historical Buildings

    NASA Astrophysics Data System (ADS)

    Del Pozo, S.; Herrero-Pascual, J.; Felipe-García, B.; Hernández-López, D.; Rodríguez-Gonzálvez, P.; González-Aguilera, D.

    2015-02-01

    This paper presents a comparative study of different remote sensing technologies for recognizing pathologies in the façades of historical buildings. Building materials deteriorate over the years due to different extrinsic and intrinsic agents, so assessing these pathologies in a non-invasive way is crucial to help preserve the buildings. Most of these buildings are extremely valuable, and some have been declared monuments of cultural interest. Close-range remote sensing techniques make it possible to study material pathologies in a rigorous way and within a short field campaign. For the investigation, two different acquisition systems were applied: active and passive. The terrestrial laser scanner FARO Focus 3D, working at a wavelength of 905 nm, was used as the active sensor. As passive sensors, a Nikon D-5000 and a 6-band Mini-MCA multispectral camera (530-801 nm) were applied, covering the visible and near-infrared spectral range. This analysis allows assessing the suitability of each sensor, or combination of sensors, for pathology detection, addressing the limitations according to spatial and spectral resolution. Moreover, pathology detection by unsupervised classification methods is addressed in order to evaluate the automation capability of this process.

  7. Sensing and data classification for a robotic meteorite search

    NASA Astrophysics Data System (ADS)

    Pedersen, Liam; Apostolopoulos, Dimi; Whittaker, William L.; Benedix, Gretchen; Rousch, Ted

    1999-01-01

    Upcoming missions to Mars and the Moon call for highly autonomous robots with the capability to perform intra-site exploration, reason about their scientific finds, and perform comprehensive on-board analysis of collected data. An ideal case for testing such technologies and robot capabilities is the robotic search for Antarctic meteorites. The successful identification and classification of meteorites depends on sensing modalities and intelligent evaluation of acquired data. Data from color imagery and spectroscopic measurements are used to identify terrestrial rocks and distinguish them from meteorites. However, because of the large number of rocks and the high cost and delay of using some of the sensors, it is necessary to eliminate as many meteorite candidates as possible using cheap long-range sensors, such as color cameras. More resource-consuming sensors are held in reserve for the more promising samples only. Bayes networks are used as the formalism for incrementally combining data from multiple sources in a statistically rigorous manner. Furthermore, they can be used to infer the utility of further sensor readings given currently known data. This information, along with cost estimates, is necessary for the sensing system to rationally schedule further sensor readings and deployments. This paper addresses issues associated with sensor selection and the implementation of an architecture for automatic identification of rocks and meteorites from a mobile robot.
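    One plausible minimal reading of the incremental evidence combination described above is sketched below as a two-class Bayes update, with cheap camera evidence applied first and the costlier spectrometer deployed only while the posterior stays ambiguous; every prior, likelihood, and threshold here is invented.

        # Incremental Bayesian evidence combination: a prior over
        # {meteorite, terrestrial rock} is updated sensor by sensor, cheap
        # sensors first; expensive sensors fire only if the posterior is
        # still ambiguous. All numbers are invented for the sketch.
        PRIOR = {"meteorite": 0.05, "rock": 0.95}

        # P(observation | class) for two sensing modalities
        LIKELIHOOD = {
            "dark_in_color_image": {"meteorite": 0.90, "rock": 0.20},
            "metallic_spectrum":   {"meteorite": 0.80, "rock": 0.05},
        }

        def update(belief, observation):
            """One Bayes update: posterior proportional to likelihood x prior."""
            post = {c: LIKELIHOOD[observation][c] * p for c, p in belief.items()}
            z = sum(post.values())
            return {c: p / z for c, p in post.items()}

        belief = update(PRIOR, "dark_in_color_image")     # cheap camera evidence
        print("after camera:", belief)
        if 0.05 < belief["meteorite"] < 0.95:             # still ambiguous?
            belief = update(belief, "metallic_spectrum")  # deploy costly spectrometer
            print("after spectrometer:", belief)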

  8. Developing a Student Conception of Academic Rigor

    ERIC Educational Resources Information Center

    Draeger, John; del Prado Hill, Pixita; Mahler, Ronnie

    2015-01-01

    In this article we describe models of academic rigor from the student point of view. Drawing on a campus-wide survey, focus groups, and interviews with students, we found that students explained academic rigor in terms of workload, grading standards, level of difficulty, level of interest, and perceived relevance to future goals. These findings…

  9. Pose Measurement Performance of the Argon Relative Navigation Sensor Suite in Simulated Flight Conditions

    NASA Technical Reports Server (NTRS)

    Galante, Joseph M.; Eepoel, John Van; Strube, Matt; Gill, Nat; Gonzalez, Marcelo; Hyslop, Andrew; Patrick, Bryan

    2012-01-01

    Argon is a flight-ready sensor suite with two visual cameras, a flash LIDAR, an on-board flight computer, and associated electronics. Argon was designed to provide sensing capabilities for relative navigation during proximity, rendezvous, and docking operations between spacecraft. A rigorous ground test campaign assessed the performance capability of the Argon navigation suite to measure the relative pose of high-fidelity satellite mock-ups during a variety of simulated rendezvous and proximity maneuvers facilitated by robot manipulators, in a variety of lighting conditions representative of the orbital environment. A brief description of the Argon suite and test setup is given, as well as an analysis of the performance of the system in simulated proximity and rendezvous operations.

  10. In-line height profiling metrology sensor for zero defect production control

    NASA Astrophysics Data System (ADS)

    Snel, Rob; Winters, Jasper; Liebig, Thomas; Jonker, Wouter

    2017-06-01

    Contemporary production systems for mechanical precision parts face challenges such as increased complexity, tolerances shrinking to sub-microns, and yield losses that must be mastered to the extreme. More advanced automation and process control are required to accomplish this task. Often a solution based on feedforward/feedback control is chosen, requiring innovative and more advanced in-line metrology. This article concentrates first on the context of in-line metrology for process control and then on the development of a specific in-line height-profiling sensor. The novel sensor technology is based on full-field time-domain white-light interferometry, which is well known from the quality lab. The novel metrology system is to be mounted close to the production equipment, as required to minimize time delay in the control loop, and is thereby fully exposed to vibrations. The sensor has been developed to perform in line with orders-of-magnitude faster throughput than laboratory instruments; it is robust enough to withstand the rigors of workshops and has a height resolution in the nanometer range.

  11. CLARREO Approach for Reference Intercalibration of Reflected Solar Sensors: On-Orbit Data Matching and Sampling

    NASA Technical Reports Server (NTRS)

    Roithmayr, Carlos; Lukashin, Constantine; Speth, Paul W.; Kopp, Gregg; Thome, Kurt; Wielicki, Bruce A.; Young, David F.

    2014-01-01

    The implementation of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission was recommended by the National Research Council in 2007 to provide an on-orbit intercalibration standard with accuracy of 0.3% (k = 2) for relevant Earth observing sensors. The goal of reference intercalibration, as established in the Decadal Survey, is to enable rigorous high-accuracy observations of critical climate change parameters, including reflected broadband radiation [Clouds and Earth's Radiant Energy System (CERES)], cloud properties [Visible Infrared Imaging Radiometer Suite (VIIRS)], and changes in surface albedo, including snow and ice albedo feedback. In this paper, we describe the CLARREO approach for performing intercalibration on orbit in the reflected solar (RS) wavelength domain. It is based on providing highly accurate spectral reflectance and reflected radiance measurements from the CLARREO Reflected Solar Spectrometer (RSS) to establish an on-orbit reference for existing sensors, namely, CERES and VIIRS on Joint Polar Satellite System satellites, the Advanced Very High Resolution Radiometer and follow-on imagers on MetOp, Landsat imagers, and imagers on geostationary platforms. One of two fundamental CLARREO mission goals is to provide sufficient sampling of high-accuracy observations that are matched in time, space, and viewing angles with measurements made by existing instruments, to a degree that overcomes the random error sources from imperfect data matching and instrument noise. The data matching is achieved through CLARREO RSS pointing operations on orbit that align its line of sight with the intercalibrated sensor. These operations must be planned in advance; therefore, intercalibration events must be predicted by orbital modeling. If two competing opportunities are identified, one target sensor must be given priority over the other. The intercalibration method is to monitor changes in targeted sensor response function parameters: effective offset, gain, nonlinearity, optics spectral response, and sensitivity to polarization. In this paper, we use existing satellite data and orbital simulation methods to determine mission requirements for CLARREO, its instrument pointing ability, methodology, and the intercalibration sampling and data matching needed for accurate intercalibration of RS radiation sensors on orbit.

  12. Design and performance of an integrated ground and space sensor web for monitoring active volcanoes.

    NASA Astrophysics Data System (ADS)

    Lahusen, Richard; Song, Wenzhan; Kedar, Sharon; Shirazi, Behrooz; Chien, Steve; Doubleday, Joshua; Davies, Ashley; Webb, Frank; Dzurisin, Dan; Pallister, John

    2010-05-01

    An interdisciplinary team of computer, earth, and space scientists collaborated to develop a sensor web system for rapid deployment at active volcanoes. The primary goals of this Optimized Autonomous Space In situ Sensorweb (OASIS) are to: 1) integrate complementary space and in situ (ground-based) elements into an interactive, autonomous sensor web; 2) advance sensor web power and communication resource management technology; and 3) enable scalability for seamless addition of sensors and other satellites into the sensor web. This three-year project began with a rigorous multidisciplinary interchange that resulted in the definition of system requirements to guide the design of the OASIS network and achieve the stated project goals. Based on those guidelines, we have developed fully self-contained in situ nodes that integrate GPS, seismic, infrasonic, and lightning (ash) detection sensors. The nodes in the wireless sensor network are linked to the ground control center through a mesh network that is highly optimized for remote geophysical monitoring. OASIS also features autonomous bidirectional interaction between ground nodes and instruments on the EO-1 space platform through continuous analysis and messaging capabilities at the command and control center. Data from both the in situ sensors and satellite-borne hyperspectral imaging sensors stream into a common database for real-time visualization and analysis by earth scientists. We have successfully completed a field deployment of 15 nodes within the crater and on the flanks of Mount St. Helens, Washington, demonstrating that sensor web technology facilitates rapid network deployments and that real-time continuous data acquisition is achievable. We are now optimizing component performance and improving user interaction for additional deployments at erupting volcanoes in 2010.

  13. Development and Implementation of a Comprehensive Radiometric Validation Protocol for the CERES Earth Radiation Budget Climate Record Sensors

    NASA Technical Reports Server (NTRS)

    Priestley, K. J.; Matthews, G.; Thomas, S.

    2006-01-01

    The CERES Flight Models 1 through 4 instruments were launched aboard NASA's Earth Observing System (EOS) Terra and Aqua spacecraft into 705 km sun-synchronous orbits with 10:30 a.m. and 1:30 p.m. equatorial crossing times. These instruments supplement measurements made by the CERES Proto Flight Model (PFM) instrument launched aboard NASA's Tropical Rainfall Measuring Mission (TRMM) into a 350 km, 38-degree mid-inclined orbit. CERES Climate Data Records range from geolocated and calibrated instantaneous filtered and unfiltered radiances to temporally and spatially averaged TOA, surface, and atmospheric fluxes. CERES filtered radiance measurements cover three spectral bands, including shortwave (0.3 to 5 microns), total (0.3 to 100 microns), and an atmospheric window channel (8 to 12 microns). The CERES Earth Radiation Budget measurements represent a new era in radiation climate data, realizing a factor of 2 to 4 improvement in calibration accuracy and stability over the previous ERBE climate records, while striving for the next goal of 0.3-percent-per-decade absolute stability. The current improvement is derived from two sources: the incorporation of lessons learned from the ERBE mission in the design of the CERES instruments, and the development of a rigorous and comprehensive radiometric validation protocol consisting of individual studies covering different spatial, spectral, and temporal scales on data collected both pre- and post-launch. Once this ensemble of individual perspectives is collected and organized, a cohesive and highly rigorous picture of the overall end-to-end performance of the CERES instruments and data-processing algorithms may be clearly established. This approach has resulted in unprecedented levels of accuracy for radiation budget instruments and data products, with calibration stability of better than 0.2-percent and calibration traceability from ground to flight of 0.25-percent. The current work summarizes the development, philosophy, and implementation of the protocol designed to rigorously quantify the quality of the data products as well as the level of agreement between the CERES TRMM, Terra, and Aqua climate data records.

  14. Rigorous mathematical modelling for a Fast Corrector Power Supply in TPS

    NASA Astrophysics Data System (ADS)

    Liu, K.-B.; Liu, C.-Y.; Chien, Y.-C.; Wang, B.-S.; Wong, Y. S.

    2017-04-01

    To enhance the stability of the beam orbit, a Fast Orbit Feedback System (FOFB) eliminating undesired disturbances was installed and tested in the 3rd-generation synchrotron light source of Taiwan Photon Source (TPS) at the National Synchrotron Radiation Research Center (NSRRC). The effectiveness of the FOFB greatly depends on the output performance of the Fast Corrector Power Supply (FCPS); therefore, the design and implementation of an accurate FCPS is essential. A rigorous mathematical model is very useful for shortening the design time and improving the design performance of a FCPS. A rigorous mathematical model, derived by the state-space averaging method, of a FCPS in the FOFB of TPS composed of a full-bridge topology is therefore proposed in this paper. The MATLAB/SIMULINK software is used to construct the proposed mathematical model and to conduct simulations of the FCPS. The effects of different ADC resolutions on the output accuracy of the FCPS are investigated through simulation. A FCPS prototype is realized to demonstrate the effectiveness of the proposed rigorous mathematical model. Simulation and experimental results show that the proposed mathematical model is helpful for selecting the appropriate components to meet the accuracy requirements of a FCPS.
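    A minimal sketch of what a state-space-averaged full-bridge model looks like is given below, in Python rather than the paper's MATLAB/SIMULINK and with invented filter and load values; the state is the output-filter inductor current and capacitor voltage, driven by an effective duty cycle.

        # State-space-averaged model of a full-bridge converter driving an L-C
        # output filter and resistive load. Component values are invented; the
        # paper's actual model and parameters live in MATLAB/SIMULINK.
        import numpy as np

        L, C, R, VDC = 1e-3, 10e-6, 2.0, 100.0   # filter, load, DC link (assumed)

        # x = [inductor current, capacitor voltage];  x' = A x + B d(t)
        A = np.array([[0.0,     -1.0 / L],
                      [1.0 / C, -1.0 / (R * C)]])
        B = np.array([VDC / L, 0.0])

        def simulate(duty_fn, t_end=5e-3, dt=1e-7):
            """Forward-Euler integration of the averaged model."""
            x = np.zeros(2)
            out = []
            for k in range(int(t_end / dt)):
                d = duty_fn(k * dt)          # effective duty in [-1, 1]
                x = x + dt * (A @ x + B * d)
                out.append(x[1])
            return np.array(out)

        # Step the duty cycle and check the settled output voltage.
        v = simulate(lambda t: 0.1 if t > 1e-3 else 0.0)
        print(f"steady output: {v[-1]:.2f} V ~= {0.1 * VDC:.2f} V expected")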

  15. Numerical investigations of the potential for laser focus sensors in micrometrology

    NASA Astrophysics Data System (ADS)

    Bischoff, Jörg; Mastylo, Rostyslav; Manske, Eberhard

    2017-06-01

    Laser focus sensors (LFS) [1] attached to a scanning nano-positioning and measuring machine (NPMM) enable near-diffraction-limit resolution over very large measuring areas of up to 200 x 200 mm² [1]. Further extensions are planned to address wafer sizes of 8 inches and beyond. They are thus well suited for micro-metrology on large wafers. On the other hand, the minimum lateral features in the state-of-the-art semiconductor industry are as small as a few nanometers and therefore far beyond the resolution limits of classical optics. New techniques such as OCD or ODP [3, 4], a.k.a. scatterometry, have helped to overcome these constraints considerably. However, scatterometry relies on regular patterns, and therefore the measurements have to be performed on special reference gratings or boxes rather than in-die. Consequently, there is a gap between the measurement and the actual structure of interest, which becomes more and more of an issue with shrinking feature sizes. Near-field approaches would also allow the resolution limit to be extended greatly [5], but they require very challenging controls to keep the working distance small enough to stay within the near-field zone. Therefore, the feasibility and the limits of an LFS scanner system have been investigated theoretically. Based on simulations of a laser focus sensor scanning across simple topographies, it was found that there is potential to overcome the diffraction limitations to some extent by means of vicinity interference effects caused by the optical interaction of adjacent topography features. We think that it may well be possible to reconstruct the diffracting profile by means of rigorous diffraction simulation, based on a thorough model of the laser focus sensor optics in combination with topography diffraction [6], in a similar way as applied in OCD. The difference lies in the kind of signal that has to be modeled: while standard OCD is based on spectra, LFS utilizes height-scan signals. Simulation results are presented for different types of topographies (dense vs. sparse, regular vs. single) with lateral features near and beyond the classical resolution limit. Moreover, the influence of topography height on detectability is investigated. To this end, several sensor principles and polarization setups are considered, such as a dual-color pinhole sensor and a Foucault knife sensor. It is shown that resolution beyond the Abbe or Rayleigh limit is possible even with "classical" optical setups when combining measurements with sophisticated profile retrieval techniques and some a priori knowledge. Finally, measurement uncertainties are derived from perturbation simulations according to the method presented in [7].

  16. The effect of temperature on the mechanical aspects of rigor mortis in a liquid paraffin model.

    PubMed

    Ozawa, Masayoshi; Iwadate, Kimiharu; Matsumoto, Sari; Asakura, Kumiko; Ochiai, Eriko; Maebashi, Kyoko

    2013-11-01

    Rigor mortis is an important phenomenon for estimating the postmortem interval in forensic medicine, and it is affected by temperature. We measured the stiffness of rat muscles using a liquid paraffin model to monitor the mechanical aspects of rigor mortis at five temperatures (37, 25, 10, 5 and 0°C). At 37, 25 and 10°C, the progression of stiffness was slower in cooler conditions. At 5 and 0°C, muscle stiffness increased immediately after the muscles were soaked in cooled liquid paraffin, and the muscles then gradually became rigid without going through a relaxed state. This phenomenon suggests that it is important to be careful when estimating the postmortem interval in cold seasons.

  17. Self-organization and emergent behaviour: distributed decision making in sensor networks

    NASA Astrophysics Data System (ADS)

    van der Wal, Ariën J.

    2013-01-01

    One of the most challenging phenomena that can be observed in an ensemble of interacting agents is that of self-organization, viz. emergent, collective behaviour, also known as synergy. The concept of synergy is also the key idea behind sensor fusion. The idea, often loosely phrased as '1+1>2', strongly suggests that it is possible to assemble an ensemble of similar agents, assume some kind of interaction, and expect that 'synergy' will automatically evolve in such a system. In a more rigorous approach, the paradigm may be expressed by identifying an ensemble performance measure that yields more than a superposition of the individual performance measures of the constituents. In this study, we demonstrate that distributed decision making in a sensor network can be described by a simple system consisting of phase oscillators. In the thermodynamic limit, this system shows spontaneous organization. Simulations indicate that, for finite populations too, phase synchronization spontaneously emerges if the coupling strength is strong enough.
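    The canonical phase-oscillator system invoked here is the Kuramoto model; the minimal simulation below shows the claimed finite-population behavior, with the order parameter |r| jumping from incoherence to synchrony as the coupling K crosses its critical value. Population size, frequency spread, and the K values are arbitrary choices for the sketch.

        # Minimal Kuramoto simulation: above a critical coupling strength, a
        # finite population of heterogeneous phase oscillators spontaneously
        # synchronizes, as measured by the order parameter |r|.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 100
        omega = rng.normal(0.0, 1.0, N)          # natural frequencies
        theta0 = rng.uniform(0, 2 * np.pi, N)    # initial phases

        def run(K, dt=0.01, steps=5000):
            th = theta0.copy()
            for _ in range(steps):
                r = np.mean(np.exp(1j * th))     # complex mean field
                # mean-field form, equivalent to (K/N) * sum_j sin(th_j - th_i)
                th += dt * (omega + K * abs(r) * np.sin(np.angle(r) - th))
            return abs(np.mean(np.exp(1j * th)))  # |r|: 1 = synchrony, ~0 = none

        for K in (0.5, 1.0, 2.0, 4.0):
            print(f"K = {K:3.1f}  ->  |r| = {run(K):.2f}")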

  18. 3D Spatial and Spectral Fusion of Terrestrial Hyperspectral Imagery and Lidar for Hyperspectral Image Shadow Restoration Applied to a Geologic Outcrop

    NASA Astrophysics Data System (ADS)

    Hartzell, P. J.; Glennie, C. L.; Hauser, D. L.; Okyay, U.; Khan, S.; Finnegan, D. C.

    2016-12-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from an exclusively airborne technique to terrestrial modalities. This enables high resolution 3D spatial and spectral quantification of vertical geologic structures for applications such as virtual 3D rock outcrop models for hydrocarbon reservoir analog analysis and mineral quantification in open pit mining environments. In contrast to airborne observation geometry, the vertical surfaces observed by horizontal-viewing terrestrial HSI sensors are prone to extensive topography-induced solar shadowing, which leads to reduced pixel classification accuracy or outright removal of shadowed pixels from analysis tasks. Using a precisely calibrated and registered offset cylindrical linear array camera model, we demonstrate the use of 3D lidar data for sub-pixel HSI shadow detection and the restoration of the shadowed pixel spectra via empirical methods that utilize illuminated and shadowed pixels of similar material composition. We further introduce a new HSI shadow restoration technique that leverages collocated backscattered lidar intensity, which is resistant to solar conditions, obtained by projecting the 3D lidar points through the HSI camera model into HSI pixel space. Using ratios derived from the overlapping lidar laser and HSI wavelengths, restored shadow pixel spectra are approximated using a simple scale factor. Simulations of multiple lidar wavelengths, i.e., multi-spectral lidar, indicate the potential for robust HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance is quantified through HSI pixel classification consistency between full sun and partial sun exposures of a single geologic outcrop.
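    One plausible minimal reading of the lidar-ratio restoration idea is sketched below: because lidar backscatter is insensitive to solar shadowing, the ratio of lidar returns for a shadowed and a sunlit pixel of the same material approximates the single scale factor needed to restore the shadowed HSI spectrum. All arrays and values are illustrative, not the authors' data.

        # Lidar-ratio shadow restoration sketch: lidar intensity ignores the sun,
        # so it anchors a single scale factor for the shadowed HSI spectrum.
        import numpy as np

        # HSI reflectance spectra for two pixels of the same rock face
        sunlit_spectrum = np.array([0.32, 0.35, 0.40, 0.44, 0.47])
        shadow_spectrum = np.array([0.08, 0.09, 0.10, 0.11, 0.12])  # shade-suppressed

        # Collocated lidar intensities (projected into HSI pixel space through
        # the camera model, per the paper); nearly equal because lidar is active
        lidar_sunlit, lidar_shadow = 0.61, 0.60

        # Scale factor from the HSI band overlapping the laser wavelength (band 3)
        scale = (sunlit_spectrum[3] / lidar_sunlit) / (shadow_spectrum[3] / lidar_shadow)
        restored = shadow_spectrum * scale
        print("restored spectrum:", restored.round(3))  # ~ matches the sunlit pixel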

  19. Peer Assessment with Online Tools to Improve Student Modeling

    ERIC Educational Resources Information Center

    Atkins, Leslie J.

    2012-01-01

    Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to…

  20. Spatial scaling and multi-model inference in landscape genetics: Martes americana in northern Idaho

    Treesearch

    Tzeidle N. Wasserman; Samuel A. Cushman; Michael K. Schwartz; David O. Wallin

    2010-01-01

    Individual-based analyses relating landscape structure to genetic distances across complex landscapes enable rigorous evaluation of multiple alternative hypotheses linking landscape structure to gene flow. We utilize two extensions to increase the rigor of the individual-based causal modeling approach to inferring relationships between landscape patterns and gene flow...

  1. Mixed Criticality Scheduling for Industrial Wireless Sensor Networks

    PubMed Central

    Jin, Xi; Xia, Changqing; Xu, Huiting; Wang, Jintao; Zeng, Peng

    2016-01-01

    Wireless sensor networks (WSNs) have been widely used in industrial systems. Their real-time performance and reliability are fundamental to industrial production. Many works have studied these two aspects, but they focus only on single-criticality WSNs. Mixed-criticality requirements exist in many advanced applications in which different data flows have different levels of importance (or criticality). In this paper, we first propose a scheduling algorithm which guarantees the real-time performance and reliability requirements of data flows with different levels of criticality. The algorithm supports centralized optimization and adaptive adjustment, and is able to improve both scheduling performance and flexibility. We then provide the schedulability test through rigorous theoretical analysis. We conduct extensive simulations, and the results demonstrate that the proposed scheduling algorithm and analysis significantly outperform existing ones. PMID:27589741

  2. A Backpack-Mounted Omnidirectional Camera with Off-the-Shelf Navigation Sensors for Mobile Terrestrial Mapping: Development and Forest Application

    PubMed Central

    Prol, Fabricio dos Santos; El Issaoui, Aimad; Hakala, Teemu

    2018-01-01

    The use of Personal Mobile Terrestrial System (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition with ground perspective in places where the use of wheeled platforms is unfeasible, such as forests and indoor buildings. PMTS has become more popular with emerging technologies, such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors have not been developed for high-accuracy metric purposes and therefore require rigorous methods of data acquisition and data processing to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and its evaluation in a forest environment. Experimental assessments showed that the integrated sensor orientation approach using navigation data as the initial information can increase the trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had accuracy consistent with the Ground Sample Distance (GSD) range of omnidirectional images (3.5–7 cm). These results are consistent with those obtained for other PMTS approaches. PMID:29522467

  3. Near Identifiability of Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.

  4. Fiber Optic Distributed Sensors for High-resolution Temperature Field Mapping.

    PubMed

    Lomperski, Stephen; Gerardi, Craig; Lisowski, Darius

    2016-11-07

    The reliability of computational fluid dynamics (CFD) codes is checked by comparing simulations with experimental data. A typical data set consists chiefly of velocity and temperature readings, both ideally having high spatial and temporal resolution to facilitate rigorous code validation. While high resolution velocity data is readily obtained through optical measurement techniques such as particle image velocimetry, it has proven difficult to obtain temperature data with similar resolution. Traditional sensors such as thermocouples cannot fill this role, but the recent development of distributed sensing based on Rayleigh scattering and swept-wave interferometry offers resolution suitable for CFD code validation work. Thousands of temperature measurements can be generated along a single thin optical fiber at hundreds of Hertz. Sensors function over large temperature ranges and within opaque fluids where optical techniques are unsuitable. But this type of sensor is sensitive to strain and humidity as well as temperature and so accuracy is affected by handling, vibration, and shifts in relative humidity. Such behavior is quite unlike traditional sensors and so unconventional installation and operating procedures are necessary to ensure accurate measurements. This paper demonstrates implementation of a Rayleigh scattering-type distributed temperature sensor in a thermal mixing experiment involving two air jets at 25 and 45 °C. We present criteria to guide selection of optical fiber for the sensor and describe installation setup for a jet mixing experiment. We illustrate sensor baselining, which links readings to an absolute temperature standard, and discuss practical issues such as errors due to flow-induced vibration. This material can aid those interested in temperature measurements having high data density and bandwidth for fluid dynamics experiments and similar applications. We highlight pitfalls specific to these sensors for consideration in experiment design and operation.
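    The baselining arithmetic described above reduces to a one-line conversion once a calibration coefficient is fixed; the sketch below uses an assumed Rayleigh-shift sensitivity of the right order of magnitude for silica fiber and invented shift data shaped like the two-jet experiment.

        # Baselining sketch for a Rayleigh-scatter distributed sensor: the
        # instrument reports spectral shifts relative to a reference scan taken
        # at a known uniform temperature; temperature follows by dividing by a
        # calibration coefficient. Coefficient and data are illustrative only.
        import numpy as np

        K_T = -1.25         # GHz per degC, assumed order of magnitude for silica
        T_BASELINE = 22.0   # degC, fiber temperature during the reference scan

        # Spectral shifts (GHz) at points along a fiber crossing two air jets
        shift_ghz = np.array([0.0, -3.8, -28.8, -28.8, -3.8, 0.0])

        temperature = T_BASELINE + shift_ghz / K_T
        print("T along fiber (degC):", temperature.round(1))
        # -> points in the hot jet read ~45 degC, the cool jet ~25 degC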

  5. Microgravity Investigation of Crew Reactions in 0-G (MICRO-G)

    NASA Technical Reports Server (NTRS)

    Newman, Dava; Coleman, Charles; Metaxas, Dimitri

    2004-01-01

    There is a need for a human factors, technology-based bioastronautics research effort to develop an integrated system that reduces risk and provides scientific knowledge of astronaut-induced loads and motions during long-duration missions on the International Space Station (ISS), which will lead to appropriate countermeasures. The primary objectives of the Microgravity Investigation of Crew Reactions in 0-G (MICRO-G) research effort are to quantify astronaut adaptation and movement as well as to model motor strategies for differing gravity environments. The overall goal of this research program is to improve astronaut performance and efficiency through the use of rigorous quantitative dynamic analysis, simulation, and experimentation. The MICRO-G research effort provides a modular, kinetic and kinematic capability for the ISS. The collection and evaluation of kinematics (whole-body motion) and dynamics (reacting forces and torques) of astronauts within the ISS will allow for quantification of human motion and performance in weightlessness, gathering fundamental human factors information for design, scientific investigation in the field of dynamics and motor control, technological assessment of microgravity disturbances, and the design of miniaturized, real-time space systems. The proposed research effort builds on a strong foundation of successful microgravity experiments, namely the EDLS (Enhanced Dynamic Load Sensors) flown aboard the Russian Mir space station (1996-1998) and the DLS (Dynamic Load Sensors) flown on Space Shuttle Mission STS-62. In addition, previously funded NASA ground-based research into sensor technology development and into algorithms that produce three-dimensional (3-D) kinematics from video images has come to fruition, and these efforts culminate in the proposed collaborative MICRO-G flight experiment. The required technology and hardware capitalize on previous sensor design, fabrication, and testing, and can be flight qualified for a fraction of the cost of an initial spaceflight experiment. Four dynamic load sensors/restraints are envisioned for measurement of astronaut forces and torques. Two standard ISS video cameras record typical astronaut operations and prescribed IVA motions for 3-D kinematics. Forces and kinematics are combined for dynamic analysis of astronaut motion, exploiting the results of the detailed dynamic modeling effort for the quantitative verification of astronaut IVA performance, induced loads, and adaptive control strategies for crewmember whole-body motion in microgravity. This comprehensive effort provides an enhanced human factors approach based on physics-based modeling to identify adaptive performance during long-duration spaceflight, which is critically important for astronaut training as well as providing a spaceflight database to drive countermeasure design.

  6. Integrating Landsat-8, Sentinel-2, and nano-satellite data for deriving atmospherically corrected vegetation indices at enhanced spatio-temporal resolution

    NASA Astrophysics Data System (ADS)

    Houborg, Rasmus; McCabe, Matthew F.; Ershadi, Ali

    2017-04-01

    Flocks of nano-satellites are emerging as an economical resource for overcoming the spatio-temporal constraints of conventional single-sensor satellite missions. Planet Labs operates an expanding constellation of currently more than 40 CubeSats (30 × 10 × 10 cm), which will facilitate daily capture of broadband RGB and near-infrared (NIR) imagery for every location on Earth at a 3-5 m ground sampling distance. However, data acquired by these miniaturized satellites lack rigorous radiometric corrections and radiance conversions and should be used in synergy with the high-quality imagery acquired by conventional full-size satellites such as Landsat-8 (L8) and Sentinel-2 (S2) in order to realize the full potential of this game-changing observational resource. This study integrates L8, S2 and Planet data acquired over sites in Saudi Arabia and the state of California for deriving cross-sensor consistent and atmospherically corrected Vegetation Indices (VI) that may serve as important metrics for vegetation growth, health, and productivity. An automated framework, based on 6S and satellite-retrieved atmospheric state and aerosol inputs, is first applied to L8 and S2 at-sensor radiances for the production of atmospherically corrected VIs. Scale-consistent Planet RGB and NIR imagery is then related to the corrected VI data using a selective, scene-specific, and computationally fast machine learning approach. The developed technique uses the closest pair of Planet and L8/S2 scenes in the training of the predictive VI models and accounts for changes in cover conditions over the acquisition timespan. Application of the models to full-resolution Planet imagery results in cross-sensor consistent VI estimates at the scale and time of the nano-satellite acquisition. The utility of the approach for reproducing spatial features of the L8 and S2 based indices from Planet imagery is evaluated. The technique is generic, computationally efficient, and extendable, and serves well for implementation within a cloud computing framework for processing over larger domains and time intervals.
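
    A toy illustration of the scene-specific regression idea described above, with a generic gradient-boosting regressor standing in for the paper's machine learning approach; all band values and the target index are synthetic.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)

      # Synthetic coincident training pairs: Planet-like RGB+NIR reflectances
      # resampled to the L8/S2 grid (features) and corrected NDVI (target).
      X = rng.uniform(0.0, 0.5, size=(500, 4))             # R, G, B, NIR
      ndvi = (X[:, 3] - X[:, 0]) / (X[:, 3] + X[:, 0] + 1e-6)
      y = ndvi + rng.normal(0.0, 0.01, size=500)           # "corrected" VI

      model = GradientBoostingRegressor().fit(X, y)

      # Apply to full-resolution nano-satellite pixels
      new_pixels = rng.uniform(0.0, 0.5, size=(3, 4))
      print(model.predict(new_pixels))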

  7. Quaternion-valued echo state networks.

    PubMed

    Xia, Yili; Jahanchahi, Cyrus; Mandic, Danilo P

    2015-04-01

    Quaternion-valued echo state networks (QESNs) are introduced to cater for 3-D and 4-D processes, such as those observed in the context of renewable energy (3-D wind modeling) and human-centered computing (3-D inertial body sensors). The introduction of QESNs is made possible by the recent emergence of quaternion nonlinear activation functions with local analytic properties, required by nonlinear gradient descent training algorithms. To make QESNs second-order optimal for the generality of quaternion signals (both circular and noncircular), we employ augmented quaternion statistics to introduce widely linear QESNs. To that end, the standard widely linear model is modified so as to suit the properties of the dynamical reservoir, typically realized by recurrent neural networks. This allows for a full exploitation of second-order information in the data, contained both in the covariance and pseudocovariances, and a rigorous account of second-order noncircularity (improperness) and the corresponding power mismatch and coupling between the data components. Simulations in the prediction setting, on both benchmark circular and noncircular signals and on noncircular real-world 3-D body motion data, support the analysis.
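
    For readers unfamiliar with the augmented quaternion statistics used here, the sketch below implements the Hamilton product and the three involutions, and evaluates a single-tap widely linear combination y = a x + b x^i + c x^j + d x^k; the coefficients are arbitrary illustrations.

      def qmult(p, q):
          """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
          w1, x1, y1, z1 = p
          w2, x2, y2, z2 = q
          return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                  w1*x2 + x1*w2 + y1*z2 - z1*y2,
                  w1*y2 - x1*z2 + y1*w2 + z1*x2,
                  w1*z2 + x1*y2 - y1*x2 + z1*w2)

      def involutions(q):
          """The involutions q^i, q^j, q^k used in widely linear modeling."""
          w, x, y, z = q
          return (w, x, -y, -z), (w, -x, y, -z), (w, -x, -y, z)

      def widely_linear(x, a, b, c, d):
          xi, xj, xk = involutions(x)
          terms = [qmult(a, x), qmult(b, xi), qmult(c, xj), qmult(d, xk)]
          return tuple(sum(t[n] for t in terms) for n in range(4))

      x = (0.5, -1.0, 2.0, 0.1)
      print(widely_linear(x, (1, 0, 0, 0), (0.2, 0, 0, 0),
                          (0.1, 0, 0, 0), (0.05, 0, 0, 0)))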

  8. Forward modelling of global gravity fields with 3D density structures and an application to the high-resolution (~2 km) gravity fields of the Moon

    NASA Astrophysics Data System (ADS)

    Šprlák, M.; Han, S.-C.; Featherstone, W. E.

    2017-12-01

    Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. Firstly, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Secondly, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases, with additional postulates on the volumetric density and geometry. Thirdly, we implement our algorithm in the form of computer programs and test their correctness with respect to the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourthly, we exploit the rigorous forward modelling for the determination of the spherical gravitational potential spectra inferred by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. Also, we analyse these four different crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate the applicability of the rigorous forward modelling, using currently available computational resources, up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.
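
    As a toy counterpart to the rigorous forward modelling described above, the sketch below evaluates the exterior gravitation of a body with radially variable density by direct Newtonian summation over volume elements; this is not the authors' spherical-harmonic algorithm, and all numbers are illustrative.

      import numpy as np

      G = 6.674e-11                      # gravitational constant (SI)
      R = 1.7374e6                       # mean lunar radius, m

      # Crude voxel grid filling a sphere with a radially variable density
      n = 24
      ax = np.linspace(-R, R, n)
      X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
      r = np.sqrt(X**2 + Y**2 + Z**2)
      rho = np.where(r < R, 3300.0 + 500.0 * (1 - r / R), 0.0)  # kg/m^3
      dV = (ax[1] - ax[0])**3

      # Radial gravitation at an exterior point on the z-axis
      obs = np.array([0.0, 0.0, 1.1 * R])
      dx, dy, dz = obs[0] - X, obs[1] - Y, obs[2] - Z
      d3 = (dx**2 + dy**2 + dz**2)**1.5
      gz = G * np.sum(rho * dV * dz / d3)

      # For this radially symmetric density, the sum should approach the
      # point-mass value GM/r^2, up to discretization error.
      M = np.sum(rho * dV)
      print(f"direct sum: {gz:.3f}  point-mass check: "
            f"{G * M / np.linalg.norm(obs)**2:.3f}  m/s^2")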

  9. Telemonitoring of patients with Parkinson's disease using inertia sensors.

    PubMed

    Piro, N E; Baumann, L; Tengler, M; Piro, L; Blechschmidt-Trapp, R

    2014-01-01

    Medical treatment in patients suffering from Parkinson's disease is very difficult as dose-finding is mainly based on selective and subjective impressions by the physician. To allow for the objective evaluation of patients' symptoms required for optimal dose-finding, a telemonitoring system tracks the motion of patients in their surroundings. The system focuses on providing interoperability and usability in order to ensure high acceptance. Patients wear inertia sensors and perform standardized motor tasks. Data are recorded, processed and then presented to the physician in a 3D animated form. In addition, the same data are rated based on the UPDRS score. Interoperability is realized by developing the system in compliance with the recommendations of the Continua Health Alliance. Detailed requirements analysis and continuous collaboration with respective user groups help to achieve high usability. A sensor platform was developed that is capable of measuring acceleration and angular rate of motions as well as the absolute orientation of the device itself through an included compass sensor. The system architecture was designed, and the required infrastructure and essential parts of the communication between the system components were implemented following Continua guidelines. Moreover, preliminary data analysis based on three-dimensional acceleration and angular rate data could be established. A prototype system for the telemonitoring of Parkinson's disease patients was successfully developed. The developed sensor platform fully satisfies the needs of monitoring patients with Parkinson's disease and is comparable to other sensor platforms, although these sensor platforms have yet to be tested rigorously against each other. Suitable approaches to provide interoperability and usability were identified and realized and remain to be tested in the field.
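
    One common way to turn the recorded acceleration and angular-rate streams into an orientation estimate is a complementary filter; a minimal pitch-only sketch follows, with the sampling interval and blending constant as illustrative assumptions.

      import numpy as np

      def complementary_pitch(acc, gyro_y, dt=0.01, alpha=0.98):
          """Fuse accelerometer (m/s^2, Nx3) and pitch-rate gyro (rad/s, N)
          samples into a pitch-angle estimate (rad)."""
          pitch, out = 0.0, []
          for a, w in zip(acc, gyro_y):
              acc_pitch = np.arctan2(-a[0], np.sqrt(a[1]**2 + a[2]**2))
              pitch = alpha * (pitch + w * dt) + (1 - alpha) * acc_pitch
              out.append(pitch)
          return np.array(out)

      # Example: stationary sensor tilted ~5.7 degrees, noisy gyro
      N = 200
      acc = np.tile([-0.98, 0.0, 9.75], (N, 1))
      gyro = np.random.default_rng(1).normal(0.0, 0.01, N)
      print(np.degrees(complementary_pitch(acc, gyro)[-1]))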

  10. Design analysis of doped-silicon surface plasmon resonance immunosensors in mid-infrared range.

    PubMed

    DiPippo, William; Lee, Bong Jae; Park, Keunhan

    2010-08-30

    This paper reports the design analysis of a microfabricatable mid-infrared (mid-IR) surface plasmon resonance (SPR) sensor platform. The proposed platform has periodic heavily doped profiles implanted into intrinsic silicon and a thin gold layer deposited on top, making a physically flat grating SPR coupler. A rigorous coupled-wave analysis was conducted to prove the design feasibility, characterize the sensor's performance, and determine geometric parameters of the heavily doped profiles. Finite element analysis (FEA) was also employed to compute the electromagnetic field distributions at the plasmon resonance. Obtained results reveal that the proposed structure can excite the SPR on the normal incidence of mid-IR light, resulting in a large probing depth that will facilitate the study of larger analytes. Furthermore, the whole structure can be microfabricated with well-established batch protocols, providing tunability in the SPR excitation wavelength for specific biosensing needs with a low manufacturing cost. When the SPR sensor is to be used in a Fourier-transform infrared (FTIR) spectroscopy platform, its detection sensitivity and limit of detection are estimated to be 3022 nm/RIU and ~70 pg/mm², respectively, at a sample layer thickness of 100 nm. The design analysis performed in the present study will allow the fabrication of a tunable, disposable mid-IR SPR sensor that combines advantages of conventional prism and metallic grating SPR sensors.
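
    The grating-coupling principle behind the design can be sketched numerically: at normal incidence, a grating of period Λ couples light into the surface plasmon where Re{k_spp} equals the grating vector 2π/Λ. The Drude parameters for heavily doped silicon below are rough illustrative stand-ins, not the paper's values.

      import numpy as np

      c = 3.0e8                                      # speed of light, m/s
      # Assumed Drude permittivity for heavily doped Si in the mid-IR
      eps_inf, wp, gamma = 11.7, 1.0e15, 1.0e13      # wp, gamma in rad/s
      eps_d = 1.77                                   # aqueous sample layer

      lam = np.linspace(7e-6, 14e-6, 2000)           # wavelength scan, m
      w = 2 * np.pi * c / lam
      eps_m = eps_inf - wp**2 / (w**2 + 1j * gamma * w)

      # SPP dispersion; the grating couples at normal incidence where
      # Re{k_spp} matches the grating vector 2*pi/period
      k_spp = (w / c) * np.sqrt(eps_m * eps_d / (eps_m + eps_d))
      period = 8e-6                                  # assumed grating period
      ok = eps_m.real < -eps_d                       # bound-SPP branch only
      mismatch = np.where(ok, np.abs(k_spp.real - 2 * np.pi / period), np.inf)
      print(f"resonance near {lam[np.argmin(mismatch)] * 1e6:.2f} um")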

  11. Rotation and anisotropy of galaxies revisited

    NASA Astrophysics Data System (ADS)

    Binney, James

    2005-11-01

    The use of the tensor virial theorem (TVT) as a diagnostic of anisotropic velocity distributions in galaxies is revisited. The TVT provides a rigorous global link between velocity anisotropy, rotation and shape, but the quantities appearing in it are not easily estimated observationally. Traditionally, use has been made of a centrally averaged velocity dispersion and the peak rotation velocity. Although this procedure cannot be rigorously justified, tests on model galaxies show that it works surprisingly well. With the advent of integral-field spectroscopy it is now possible to establish a rigorous connection between the TVT and observations. The TVT is reformulated in terms of sky-averages, and the new formulation is tested on model galaxies.

  12. Rigor of cell fate decision by variable p53 pulses and roles of cooperative gene expression by p53

    PubMed Central

    Murakami, Yohei; Takada, Shoji

    2012-01-01

    Upon DNA damage, the cell fate decision between survival and apoptosis is largely regulated by p53-related networks. Recent experiments found a series of discrete p53 pulses in individual cells, which led to the hypothesis that the cell fate decision upon DNA damage is controlled by counting the number of p53 pulses. Under this hypothesis, Sun et al. (2009) modeled the Bax activation switch in the apoptosis signal transduction pathway that can rigorously "count" the number of uniform p53 pulses. Based on experimental evidence, here we use variable p53 pulses with Sun et al.'s model to investigate how the variability in p53 pulses affects the rigor of the cell fate decision by the pulse number. Our calculations showed that the experimentally anticipated variability in the pulse sizes reduces the rigor of the cell fate decision. In addition, we tested the roles of cooperativity in PUMA expression by p53, finding that lower cooperativity is plausible for a more rigorous cell fate decision. This is because variability in the p53 pulse height is amplified more strongly in PUMA expression in the more cooperative cases. PMID:27857606

  13. Ensemble machine learning and forecasting can achieve 99% uptime for rural handpumps

    PubMed Central

    Thomas, Evan A.

    2017-01-01

    Broken water pumps continue to impede efforts to deliver clean and economically-viable water to the global poor. The literature has demonstrated that customers’ health benefits and willingness to pay for clean water are best realized when clean water infrastructure performs extremely well (>99% uptime). In this paper, we used sensor data from 42 Afridev-brand handpumps observed for 14 months in western Kenya to demonstrate how sensors and supervised ensemble machine learning could be used to increase total fleet uptime from a best-practices baseline of about 70% to >99%. We accomplish this increase in uptime by forecasting pump failures and identifying existing failures very quickly. Comparing the costs of operating the pump per functional year over a lifetime of 10 years, we estimate that implementing this algorithm would save 7% on the levelized cost of water relative to a sensor-less scheduled maintenance program. Combined with a rigorous system for dispatching maintenance personnel, implementing this algorithm in a real-world program could significantly improve health outcomes and customers’ willingness to pay for water services. PMID:29182673
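
    A hedged sketch of the supervised-ensemble idea (here a random forest classifying pump-days as failing or functional); the features and labels are synthetic and only illustrate the workflow, not the study's data.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Synthetic daily features per pump: stroke count, mean handle
      # frequency, and vibration variance; label 1 = failure within 7 days.
      n = 2000
      X = rng.normal(size=(n, 3))
      y = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 0.5, n) < -1.2).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      clf.fit(X_tr, y_tr)
      print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")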

  14. Extreme health sensing: the challenges, technologies, and strategies for active health sustainment of military personnel during training and combat missions

    NASA Astrophysics Data System (ADS)

    Buller, Mark; Welles, Alexander; Chadwicke Jenkins, Odest; Hoyt, Reed

    2010-04-01

    Military personnel are often asked to accomplish rigorous missions in extremes of climate, terrain, and terrestrial altitude. Personal protective clothing and individual equipment, such as body armor or chemical-biological suits, and excessive equipment loads exacerbate the physiological strain. Health, over even short mission durations, can easily be compromised. Measuring and acting upon health information can provide a means to dynamically manage both health and mission goals. However, the measurement of health state in austere military environments is challenging: (1) body-worn sensors must be of minimal weight and size, consume little power, and be comfortable and unobtrusive enough for prolonged wear; (2) health states are not directly measurable and must be estimated; (3) sensor measurements are prone to noise, artifact, and failure. Given these constraints we examine current successful ambulatory physiological status monitoring technologies, review maturing sensors that may provide key health state insights in the future, and discuss unconventional analytical techniques that optimize health, mission goals, and doctrine from the perspective of thermal work strain assessment and management.

  15. Satellite Re-entry Modeling and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Horsley, M.

    2012-09-01

    LEO trajectory modeling is a fundamental aerospace capability and has applications in many areas of aerospace, such as maneuver planning, sensor scheduling, re-entry prediction, collision avoidance, risk analysis, and formation flying. Somewhat surprisingly, modeling the trajectory of an object in low Earth orbit is still a challenging task. This is primarily due to the large uncertainty in the upper atmospheric density, about 15-20% (1-sigma) for most thermosphere models. Other contributions come from our inability to precisely model future solar and geomagnetic activity, the potentially unknown shape, material construction and attitude history of the satellite, and intermittent, noisy tracking data. Current methods to predict a satellite's re-entry trajectory typically involve making a single prediction, with the uncertainty dealt with in an ad-hoc manner, usually based on past experience. However, due to the extreme speed of a LEO satellite, even small uncertainties in the re-entry time translate into a very large uncertainty in the location of the re-entry event. Currently, most methods simply update the re-entry estimate on a regular basis. This results in a wide range of estimates that are literally spread over the entire globe. With no understanding of the underlying distribution of potential impact points, the sequence of impact points predicted by the current methodology is largely useless until just a few hours before re-entry. This paper will discuss the development of a set of High Performance Computing (HPC)-based capabilities to support near real-time quantification of the uncertainty inherent in uncontrolled satellite re-entries. An appropriate management of the uncertainties is essential for a rigorous treatment of the re-entry/LEO trajectory problem. The development of HPC-based tools for re-entry analysis is important as it will allow a rigorous and robust approach to risk assessment by decision makers in an operational setting. Uncertainty quantification results from the recent uncontrolled re-entry of the Phobos-Grunt satellite will be presented and discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
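
    A back-of-the-envelope Monte Carlo illustration of why a modest density uncertainty spreads predicted impact points around the globe; the nominal remaining lifetime and the simple inverse scaling of lifetime with density are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(0)
      v_orbital = 7.6e3          # m/s, typical LEO ground-track speed
      t_nominal = 2 * 86400.0    # s, assumed remaining orbital lifetime

      # Drag (hence lifetime) scales roughly inversely with density; sample
      # a 20% (1-sigma) density uncertainty and propagate to re-entry time.
      rho_factor = np.clip(rng.normal(1.0, 0.20, 100_000), 0.3, None)
      t_reentry = t_nominal / rho_factor

      dt = np.std(t_reentry)
      print(f"1-sigma re-entry time spread: {dt / 3600:.1f} h")
      # Earth's circumference is ~40,000 km, so this along-track spread
      # wraps the ground track around the globe several times.
      print(f"along-track spread: {v_orbital * dt / 1e6:.0f} thousand km")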

  16. Towards a Credibility Assessment of Models and Simulations

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Green, Lawrence L.; Luckring, James M.; Morrison, Joseph H.; Tripathi, Ram K.; Zang, Thomas A.

    2008-01-01

    A scale is presented to evaluate the rigor of modeling and simulation (M&S) practices for the purpose of supporting a credibility assessment of the M&S results. The scale distinguishes required and achieved levels of rigor for a set of M&S elements that contribute to credibility including both technical and process measures. The work has its origins in an interest within NASA to include a Credibility Assessment Scale in development of a NASA standard for models and simulations.

  17. Performance Evaluation of Bluetooth Low Energy: A Systematic Review.

    PubMed

    Tosi, Jacopo; Taffoni, Fabrizio; Santacatterina, Marco; Sannino, Roberto; Formica, Domenico

    2017-12-13

    Small, compact and embedded sensors are a pervasive technology in everyday life for a wide number of applications (e.g., wearable devices, domotics, e-health systems, etc.). In this context, wireless transmission plays a key role, and among available solutions, Bluetooth Low Energy (BLE) is gaining more and more popularity. BLE merges together good performance, low-energy consumption and widespread diffusion. The aim of this work is to review the main methodologies adopted to investigate BLE performance. The first part of this review is an in-depth description of the protocol, highlighting the main characteristics and implementation details. The second part reviews the state of the art on BLE characteristics and performance. In particular, we analyze throughput, maximum number of connectable sensors, power consumption, latency and maximum reachable range, with the aim of identifying the current limits of BLE technology. The main results can be summarized as follows: throughput may theoretically reach the limit of ~230 kbps, but actual applications analyzed in this review show throughputs limited to ~100 kbps; the maximum reachable range is strictly dependent on the radio power, and it goes up to a few tens of meters; the maximum number of nodes in the network depends on connection parameters, on the network architecture and specific device characteristics, but it is usually lower than 10; power consumption and latency are largely modeled and analyzed and are strictly dependent on a huge number of parameters. Most of these characteristics are based on analytical models, but there is a need for rigorous experimental evaluations to understand the actual limits.
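
    For intuition on the throughput figures, application-level throughput can be estimated from the connection interval, the packets served per connection event, and the 20-byte payload of a BLE 4.x data channel PDU; the packets-per-event value below is stack-dependent and assumed.

      def ble_throughput_kbps(conn_interval_ms, packets_per_event,
                              payload_bytes=20):
          """Application-level throughput for BLE 4.x-style notifications."""
          bits_per_event = packets_per_event * payload_bytes * 8
          return bits_per_event / conn_interval_ms   # bits/ms == kbit/s

      # e.g., 7.5 ms interval, 6 packets/event (stack-dependent assumption)
      print(f"{ble_throughput_kbps(7.5, 6):.0f} kbps")   # -> 128 kbps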

  18. Performance Evaluation of Bluetooth Low Energy: A Systematic Review

    PubMed Central

    Taffoni, Fabrizio; Santacatterina, Marco; Sannino, Roberto

    2017-01-01

    Small, compact and embedded sensors are a pervasive technology in everyday life for a wide number of applications (e.g., wearable devices, domotics, e-health systems, etc.). In this context, wireless transmission plays a key role, and among available solutions, Bluetooth Low Energy (BLE) is gaining more and more popularity. BLE merges together good performance, low-energy consumption and widespread diffusion. The aim of this work is to review the main methodologies adopted to investigate BLE performance. The first part of this review is an in-depth description of the protocol, highlighting the main characteristics and implementation details. The second part reviews the state of the art on BLE characteristics and performance. In particular, we analyze throughput, maximum number of connectable sensors, power consumption, latency and maximum reachable range, with the aim of identifying the current limits of BLE technology. The main results can be summarized as follows: throughput may theoretically reach the limit of ~230 kbps, but actual applications analyzed in this review show throughputs limited to ~100 kbps; the maximum reachable range is strictly dependent on the radio power, and it goes up to a few tens of meters; the maximum number of nodes in the network depends on connection parameters, on the network architecture and specific device characteristics, but it is usually lower than 10; power consumption and latency are largely modeled and analyzed and are strictly dependent on a huge number of parameters. Most of these characteristics are based on analytical models, but there is a need for rigorous experimental evaluations to understand the actual limits. PMID:29236085

  19. Long-Term Stability Assessment of Sonoran Desert for Vicarious Calibration of GOES-R

    NASA Astrophysics Data System (ADS)

    Kim, W.; Liang, S.; Cao, C.

    2012-12-01

    Vicarious calibration refers to calibration techniques that do not depend on onboard calibration devices. Although sensors and onboard calibration devices undergo rigorous validation processes before launch, the performance of sensors often degrades after launch due to exposure to the harsh space environment and the aging of devices. Such in-flight changes can be identified and adjusted through vicarious calibration activities, where the sensor degradation is measured in reference to exterior calibration sources such as the Sun, the Moon, and the Earth surface. The Sonoran Desert is one of the best calibration sites in North America available for vicarious calibration of the GOES-R satellite. To accurately calibrate sensors onboard the GOES-R satellite (e.g., the Advanced Baseline Imager (ABI)), the temporal stability of the Sonoran Desert needs to be assessed precisely. However, short-/mid-term variations in top-of-atmosphere (TOA) reflectance caused by meteorological variables such as water vapor amount and aerosol loading are often difficult to retrieve, complicating the use of TOA reflectance time series for the stability assessment of the site. In this paper, we address this issue of normalization of TOA reflectance time series using a time series analysis algorithm, the seasonal-trend decomposition procedure based on LOESS (STL) (Cleveland et al., 1990). The algorithm is basically a collection of smoothing filters which leads to decomposition of a time series into three additive components: seasonal, trend, and remainder. Since this non-linear technique is capable of extracting seasonal patterns in the presence of trend changes, the seasonal variation can be effectively identified in time series of remote sensing data subject to various environmental changes. Experimental results obtained with Landsat 5 TM data show that the decomposition results acquired for the Sonoran Desert area produce normalized series that have much less uncertainty than those of traditional BRDF models, which leads to more accurate stability assessment.
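
    A minimal sketch of the decomposition step using the statsmodels implementation of STL on a synthetic monthly reflectance-like series; the series is fabricated for illustration.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.seasonal import STL

      rng = np.random.default_rng(0)
      t = np.arange(120)                                  # 10 years, monthly
      series = (0.35 + 0.0002 * t                         # slow trend
                + 0.03 * np.sin(2 * np.pi * t / 12)       # seasonal cycle
                + rng.normal(0, 0.005, t.size))           # remainder
      y = pd.Series(series,
                    index=pd.date_range("2000-01", periods=120, freq="MS"))

      res = STL(y, period=12).fit()
      # trend + seasonal + resid reconstruct the series additively
      print(res.trend.iloc[-1], res.seasonal.iloc[-1], res.resid.iloc[-1])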

  20. Peer Review of EPA's Draft BMDS Document: Exponential ...

    EPA Pesticide Factsheets

    BMDS is one of the Agency's premier tools for risk assessment; therefore, the validity and reliability of its statistical models are of paramount importance. This page provides links to peer reviews of the BMDS applications and its models as they were developed and eventually released, documenting the rigorous review process taken to provide the best science tools available for statistical modeling.

  1. A Regional Seismic Travel Time Model for North America

    DTIC Science & Technology

    2010-09-01

    velocity at the Moho, the mantle velocity gradient, and the average crustal velocity. After tomography across Eurasia, rigorous tests find that Pn travel time residuals are reduced... and S-wave velocity in the crustal layers and in the upper mantle. A good prior model is essential because the RSTT tomography inversion is invariably

  2. Engineering research, development and technology FY99

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langland, R T

    The growth of computer power and connectivity, together with advances in wireless sensing and communication technologies, is transforming the field of complex distributed systems. The ability to deploy large numbers of sensors with a rapid, broadband communication system will enable high-fidelity, near real-time monitoring of complex systems. These technological developments will provide unprecedented insight into the actual performance of engineered and natural environment systems, enable the evolution of many new types of engineered systems for monitoring and detection, and enhance our ability to perform improved and validated large-scale simulations of complex systems. One of the challenges facing engineering is to develop methodologies to exploit the emerging information technologies. Particularly important will be the ability to assimilate measured data into the simulation process in a way which is much more sophisticated than current, primarily ad hoc procedures. The reports contained in this section on the Center for Complex Distributed Systems describe activities related to the integrated engineering of large complex systems. The first three papers describe recent developments for each link of the integrated engineering process for large structural systems. These include (1) the development of model-based signal processing algorithms which will formalize the process of coupling measurements and simulation and provide a rigorous methodology for validation and update of computational models; (2) collaborative efforts with faculty at the University of California at Berkeley on the development of massive simulation models for the earth and large bridge structures; and (3) the development of wireless data acquisition systems which provide a practical means of monitoring large systems like the National Ignition Facility (NIF) optical support structures. These successful developments are coming to a confluence in the next year with applications to NIF structural characterizations and analysis of large bridge structures for the State of California. Initial feasibility investigations into the development of monitoring and detection systems are described in the papers on imaging of underground structures with ground-penetrating radar, and the use of live insects as sensor platforms. These efforts are establishing the basic performance characteristics essential to the decision process for future development of sensor arrays for information gathering related to national security.

  3. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
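
    For any differentiable geopositioning function, the rigorous error propagation mentioned above reduces, to first order, to mapping the a priori metadata covariance through the Jacobian; the sketch below uses a toy single-coordinate imaging function, not the FMV-GTB code.

      import numpy as np

      def propagate(jacobian, cov_in):
          """First-order error propagation: Sigma_out = J Sigma J^T."""
          return jacobian @ cov_in @ jacobian.T

      # Toy example: ground coordinate X = f(sensor position, look angle);
      # planar camera at easting px, height pz, tilt angle a.
      def f(x):
          px, pz, a = x
          return np.array([px + pz * np.tan(a)])

      x0 = np.array([100.0, 500.0, 0.1])
      eps = 1e-6
      J = np.array([[(f(x0 + eps * e) - f(x0))[0] / eps   # numerical Jacobian
                     for e in np.eye(3)]])
      cov = np.diag([2.0**2, 2.0**2, (0.5e-3)**2])  # 2 m GPS, 0.5 mrad attitude
      print("predicted ground variance (m^2):", propagate(J, cov)[0, 0])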

  4. Sensor Integration in a Low Cost Land Mobile Mapping System

    PubMed Central

    Madeira, Sergio; Gonçalves, José A.; Bastos, Luísa

    2012-01-01

    Mobile mapping is a multidisciplinary technique which requires several items of dedicated equipment, calibration procedures that must be as rigorous as possible, time synchronization of all acquired data, and software for data processing and extraction of additional information. To decrease the cost and complexity of Mobile Mapping Systems (MMS), the use of less expensive sensors and the simplification of procedures for calibration and data acquisition are mandatory features. This article refers to the use of MMS technology, focusing on the main aspects that need to be addressed to guarantee proper data acquisition and describing the way those aspects were handled in a terrestrial MMS developed at the University of Porto. In this case the main aim was to implement a low-cost system while maintaining good quality standards of the acquired georeferenced information. The results discussed here show that this goal has been achieved. PMID:22736985

  5. Rigorous simulations of a helical core fiber by the use of transformation optics formalism.

    PubMed

    Napiorkowski, Maciej; Urbanczyk, Waclaw

    2014-09-22

    We report for the first time on rigorous numerical simulations of a helical-core fiber using a full vectorial method based on the transformation optics formalism. We modeled the dependence of the circular birefringence of the fundamental mode on the helix pitch and analyzed the effect of a birefringence increase caused by the mode displacement induced by a core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first-order modes, including polarization and intensity distribution. Finally, we show that the use of the rigorous vectorial method allows one to better predict the confinement loss of the guided modes compared to approximate methods based on equivalent in-plane bending models.

  6. Characterizing Intra-Urban Air Quality Gradients with a Spatially-Distributed Network

    NASA Astrophysics Data System (ADS)

    Zimmerman, N.; Ellis, A.; Schurman, M. I.; Gu, P.; Li, H.; Snell, L.; Gu, J.; Subramanian, R.; Robinson, A. L.; Apte, J.; Presto, A. A.

    2016-12-01

    City-wide air pollution measurements have typically relied on regulatory or research monitoring sites with low spatial density to assess population-scale exposure. However, air pollutant concentrations exhibit significant spatial variability depending on local sources and features of the built environment, which may not be well captured by the existing monitoring regime. To better understand urban spatial and temporal pollution gradients at 1 km resolution, a network of 12 real-time air quality monitoring stations was deployed beginning July 2016 in Pittsburgh, PA. The stations were deployed at sites along an urban-rural transect and in urban locations with a range of traffic, restaurant, and tall building densities to examine the impact of various modifiable factors. Measurements from the stationary monitoring stations were further supported by mobile monitoring, which provided higher spatial resolution pollutant measurements on nearby roadways and enabled routine calibration checks. The stationary monitoring measurements comprise ultrafine particle number (Aerosol Dynamics "MAGIC" CPC), PM2.5 (Met One Neighborhood PM Monitor), black carbon (Met One BC 1050), and a new low-cost air quality monitor, the Real-time Affordable Multi-Pollutant (RAMP) sensor package for measuring CO, NO2, SO2, O3, CO2, temperature and relative humidity. High time-resolution (sub-minute) measurements across the distributed monitoring network enable insight into dynamic pollutant behavior. Our preliminary findings show that our instruments are sensitive to PM2.5 gradients exceeding 2 micrograms per cubic meter and ultrafine particle gradients exceeding 1000 particles per cubic centimeter. Additionally, we have developed rigorous calibration protocols to characterize the RAMP sensor response and drift, as well as multiple linear regression models to convert sensor response into pollutant concentrations that are comparable to reference instrumentation.
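
    A hedged sketch of the regression-based calibration mentioned above, mapping a raw low-cost sensor signal plus temperature and relative humidity onto a reference concentration; all collocation data are synthetic.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)

      # Collocation data: raw sensor output, temperature (C), RH (%)
      n = 1000
      raw = rng.uniform(0.2, 1.0, n)
      temp = rng.uniform(0, 35, n)
      rh = rng.uniform(20, 90, n)
      # Synthetic "truth" from a reference monitor, with T/RH interference
      ref = 50 * raw - 0.3 * temp + 0.1 * rh + rng.normal(0, 1.0, n)

      X = np.column_stack([raw, temp, rh])
      cal = LinearRegression().fit(X, ref)
      print("coefficients:", cal.coef_, "R^2:", round(cal.score(X, ref), 3))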

  7. Hyperspectral target detection analysis of a cluttered scene from a virtual airborne sensor platform using MuSES

    NASA Astrophysics Data System (ADS)

    Packard, Corey D.; Viola, Timothy S.; Klein, Mark D.

    2017-10-01

    The ability to predict spectral electro-optical (EO) signatures for various targets against realistic, cluttered backgrounds is paramount for rigorous signature evaluation. Knowledge of background and target signatures, including plumes, is essential for a variety of scientific and defense-related applications including contrast analysis, camouflage development, automatic target recognition (ATR) algorithm development and scene material classification. The capability to simulate any desired mission scenario with forecast or historical weather is a tremendous asset for defense agencies, serving as a complement to (or substitute for) target and background signature measurement campaigns. In this paper, a systematic process for the physical temperature and visible-through-infrared radiance prediction of several diverse targets in a cluttered natural environment scene is presented. The ability of a virtual airborne sensor platform to detect and differentiate targets from a cluttered background, from a variety of sensor perspectives and across numerous wavelengths in differing atmospheric conditions, is considered. The process described utilizes the thermal and radiance simulation software MuSES and provides a repeatable, accurate approach for analyzing wavelength-dependent background and target (including plume) signatures in multiple band-integrated wavebands (multispectral) or hyperspectrally. The engineering workflow required to combine 3D geometric descriptions, thermal material properties, natural weather boundary conditions, all modes of heat transfer and spectral surface properties is summarized. This procedure includes geometric scene creation, material and optical property attribution, and transient physical temperature prediction. Radiance renderings, based on ray-tracing and the Sandford-Robertson BRDF model, are coupled with MODTRAN for the inclusion of atmospheric effects. This virtual hyperspectral/multispectral radiance prediction methodology has been extensively validated and provides a flexible process for signature evaluation and algorithm development.

  8. Refractive index-based detection of gradient elution liquid chromatography using chip-integrated microring resonator arrays.

    PubMed

    Wade, James H; Bailey, Ryan C

    2014-01-07

    Refractive index-based sensors offer attractive characteristics as nondestructive and universal detectors for liquid chromatographic separations, but a small dynamic range and sensitivity to minor thermal perturbations limit the utility of commercial RI detectors for many potential applications, especially those requiring the use of gradient elutions. As such, RI detectors find use almost exclusively in sample-abundant, isocratic separations when interfaced with high-performance liquid chromatography. Silicon photonic microring resonators are refractive index-sensitive optical devices that feature good sensitivity and tremendous dynamic range. The large dynamic range of microring resonators allows the sensors to function across a wide spectrum of refractive indices, such as that encountered when moving from an aqueous to an organic mobile phase during a gradient elution, a key analytical advantage not supported by commercial RI detectors. Microrings are easily configured into sensor arrays, and chip-integrated control microrings enable real-time correction of thermal drift. These thermal controls allow for analyses at any temperature and, in the absence of rigorous temperature control, obviate extended detector equilibration wait times. Herein, proof-of-concept isocratic and gradient elution separations were performed using well-characterized model analytes (e.g., caffeine, ibuprofen) in both neat buffer and more complex sample matrices. These experiments demonstrate the ability of microring arrays to perform isocratic and gradient elutions under ambient conditions, avoiding two major limitations of commercial RI-based detectors while maintaining comparable bulk RI sensitivity. Further benefit may be realized in the future through selective surface functionalization to impart degrees of postcolumn (bio)molecular specificity at the detection phase of a separation. The chip-based and microscale nature of microring resonators also makes them an attractive potential detection technology that could be integrated within lab-on-a-chip and microfluidic separation devices.

  9. A VLF-based technique in applications to digital control of nonlinear hybrid multirate systems

    NASA Astrophysics Data System (ADS)

    Vassilyev, Stanislav; Ulyanov, Sergey; Maksimkin, Nikolay

    2017-01-01

    In this paper, a technique for the rigorous analysis and design of nonlinear multirate digital control systems on the basis of the reduction method and sublinear vector Lyapunov functions is proposed. The control system model under consideration incorporates continuous-time dynamics of the plant and discrete-time dynamics of the controller, and takes into account uncertainties of the plant, bounded disturbances, and nonlinear characteristics of sensors and actuators. We consider a class of multirate systems where the control update rate is slower than the measurement sampling rates and periodic non-uniform sampling is admitted. The proposed technique does not use a preliminary discretization of the system and, hence, allows one to eliminate the errors associated with the discretization and improve the accuracy of analysis. The technique is applied to the synthesis of a digital controller for a flexible spacecraft in the fine-stabilization mode and of a decentralized controller for a formation of autonomous underwater vehicles. Simulation results are provided to validate the good performance of the designed controllers.

  10. Coverage-guaranteed sensor node deployment strategies for wireless sensor networks.

    PubMed

    Fan, Gaojuan; Wang, Ruchuan; Huang, Haiping; Sun, Lijuan; Sha, Chao

    2010-01-01

    Deployment quality and cost are two conflicting aspects in wireless sensor networks. Random deployment, where the monitored field is covered by randomly and uniformly deployed sensor nodes, is an appropriate approach for large-scale network applications. However, its successful application depends considerably on the deployment quality, i.e., on using the minimum number of sensors to achieve the desired coverage. Currently, the number of sensors required to meet the desired coverage is based on asymptotic analysis, which cannot guarantee the deployment quality due to coverage overestimation in real applications. In this paper, we first investigate the coverage overestimation and address the challenge of designing coverage-guaranteed deployment strategies. To overcome this problem, we propose two deployment strategies, namely, the Expected-area Coverage Deployment (ECD) and BOundary Assistant Deployment (BOAD). The deployment quality of the two strategies is analyzed mathematically. Under this analysis, a lower bound on the number of deployed sensor nodes is given to satisfy the desired deployment quality. We justify the correctness of our analysis through rigorous proof, and validate the effectiveness of the two strategies through extensive simulation experiments. The simulation results show that both strategies alleviate the coverage overestimation significantly. In addition, we also evaluate the two proposed strategies in the context of a target detection application. The comparison results demonstrate that if the target appears at the boundary of the monitored region in a given random deployment, the average intrusion distance of BOAD is considerably shorter than that of ECD with the same desired deployment quality. In contrast, ECD has better performance in terms of the average intrusion distance when the intruder invades from the inside of the monitored region.
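
    The asymptotic starting point that the authors improve upon can be written down directly: n sensors of sensing radius r placed uniformly at random in an area A cover a given point with probability 1 - (1 - πr²/A)^n, so the count needed for a target coverage p follows as below (boundary effects, the paper's focus, are ignored in this sketch).

      import math

      def sensors_needed(p_target, r, area):
          """Sensors for expected coverage p_target, ignoring boundary effects."""
          q = math.pi * r**2 / area          # coverage fraction of one sensor
          return math.ceil(math.log(1 - p_target) / math.log(1 - q))

      # e.g., 95% coverage of a 500 m x 500 m field with 30 m sensing radius
      print(sensors_needed(0.95, 30.0, 500.0 * 500.0))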

  11. Laboratory generated M -6 earthquakes

    USGS Publications Warehouse

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.

  12. Approximate direct georeferencing in national coordinates

    NASA Astrophysics Data System (ADS)

    Legat, Klaus

    Direct georeferencing has gained an increasing importance in photogrammetry and remote sensing. Thereby, the parameters of exterior orientation (EO) of an image sensor are determined by GPS/INS, yielding results in a global geocentric reference frame. Photogrammetric products like digital terrain models or orthoimages, however, are often required in national geodetic datums and mapped by national map projections, i.e., in "national coordinates". As the fundamental mathematics of photogrammetry is based on Cartesian coordinates, the scene restitution is often performed in a Cartesian frame located at some central position of the image block. The subsequent transformation to national coordinates is a standard problem in geodesy and can be done in a rigorous manner, at least if the formulas of the map projection are rigorous. Drawbacks of this procedure include practical deficiencies related to the photogrammetric processing as well as the computational cost of transforming the whole scene. To avoid these problems, the paper pursues an alternative processing strategy where the EO parameters are transformed prior to the restitution. If only this transition were done, however, the scene would be systematically distorted. The reason is that the national coordinates are not Cartesian due to the earth curvature and the unavoidable length distortion of map projections. To compensate for these distortions, several corrections need to be applied. These are treated in detail for both passive and active imaging. Since all these corrections are approximations only, the resulting technique is termed "approximate direct georeferencing". Still, the residual distortions are usually very low, as is demonstrated by simulations, rendering the technique an attractive approach to direct georeferencing.

  13. Sensor Web in Antarctica: Developing an Intelligent, Autonomous Platform for Locating Biological Flourishes in Cryogenic Environments

    NASA Technical Reports Server (NTRS)

    Delin, K. A.; Harvey, R. P.; Chabot, N. A.; Jackson, S. P.; Adams, Mike; Johnson, D. W.; Britton, J. T.

    2003-01-01

    The most rigorous tests of the ability to detect extant life will occur where biotic activity is limited by severe environmental conditions. Cryogenic environments are among the most severe: the energy and nutrients needed for biological activity are in short supply, while the climate itself is actively destructive to biological mechanisms. In such settings biological activity is often limited to brief flourishes, occurring only when and where conditions are at their most favorable. The closer that typical regional conditions approach conditions that are actively hostile, the more widely distributed biological blooms will be in both time and space. On a spatial dimension of a few meters or a time dimension of a few days, biological activity becomes much more difficult to detect. One way to overcome this difficulty is to establish a Sensor Web that can monitor microclimates over appropriate scales of time and distance, allowing a continuous virtual presence for instant recognition of favorable conditions. A more sophisticated Sensor Web, incorporating metabolic sensors, can effectively meet the challenge of being in "the right place at the right time". This is particularly of value in planetary surface missions, where limited mobility and mission timelines require extremely efficient sample and data acquisition. Sensor Webs can be an effective way to fill the gap between broad-scale orbital data collection and fine-scale surface lander science. We are in the process of developing an intelligent, distributed and autonomous Sensor Web that will allow us to monitor microclimate under severe cryogenic conditions, approaching those extant on the surface of Mars. Ultimately this Sensor Web will include the ability to detect and/or establish limits on extant microbiological activity through incorporation of novel metabolic gas sensors. Here we report the results of our first deployment of a Sensor Web prototype in a previously unexplored high-altitude East Antarctic Plateau "micro-oasis" at the MacAlpine Hills, Law Glacier, Antarctica.

  14. Observations to information

    NASA Astrophysics Data System (ADS)

    Cox, S. J.

    2013-12-01

    Observations provide the fundamental constraint on natural science interpretations. Earth science observations originate in many contexts, including in-situ field observations and monitoring, various modes of remote sensing and geophysics, sampling for ex-situ (laboratory) analysis, as well as numerical modelling and simulation which also provide estimates of parameter values. Most investigations require a combination of these, often sourced from multiple initiatives and archives, so data discovery and re-organization can be a significant project burden. The Observations and Measurements (O&M) information model was developed to provide a common vocabulary that can be applied to all these cases, and thus provide a basis for cross-initiative and cross-domain interoperability. O&M was designed in the context of the standards for geographic information from OGC and ISO. It provides a complementary viewpoint to the well-known feature (object oriented) and coverage (property field) views, but prioritizes the property determination process. Nevertheless, use of O&M implies the existence of well defined feature types. In disciplines such as geology and ecosystem sciences the primary complexity is in their model of the world, for which the description of each item requires access to diverse observation sets. On the other hand, geophysics and earth observations work with simpler underlying information items, but in larger quantities over multiple spatio-temporal dimensions, acquired using complex sensor systems. Multiple transformations between the three viewpoints are involved in the data flows in most investigations, from collection through analysis to information and story. The O&M model classifies observations: - from a provider viewpoint: in terms of the sensor or procedure involved; - from a consumer viewpoint: in terms of the property being reported, and the feature with which it is associated. These concerns carry different weights in different applications. Communities generating data using ships, satellites and aircraft habitually classify observations by the source platform and mission, as this implies a rich set of metadata to the cognoscenti. However, integrators are more likely to focus on the phenomenon being observed, together with the location of the features carrying it. In this context sensor information informs quality evaluation, as a secondary consideration following after data discovery. The observation model is specialized by constraining facets, such as observed property, sensor or procedure, to be taken from a specific set or vocabulary. Such vocabularies are typically developed on a project or community basis, but data fusion depends on them being widely accessible, and comparable with related vocabularies. Better still if they are transparently governed, trusted and stable enough to encourage re-use. Semantic web technologies support distribution of rigorously constructed vocabularies through standard interfaces, with standard mechanisms for asserting or inferring of proximity and other relationships. Interoperability of observation data in future is likely to depend on the development of a viable ecosystem of these secondary resources.
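
    A hedged sketch of the core O&M pattern, an observation binding a feature of interest and observed property (the consumer view) to a procedure (the provider view) and a result; the field names paraphrase the model and are not a normative encoding.

      from dataclasses import dataclass
      from datetime import datetime

      @dataclass
      class Observation:
          """Minimal O&M-style observation record (illustrative, not normative)."""
          feature_of_interest: str      # consumer view: what carries the property
          observed_property: str        # consumer view: what was estimated
          procedure: str                # provider view: sensor or method used
          phenomenon_time: datetime     # when the property applied
          result: float
          unit: str

      obs = Observation("borehole GA-123", "groundwater temperature",
                        "PT100 probe, serial 42", datetime(2013, 7, 1, 9, 30),
                        14.2, "degC")
      print(obs.observed_property, "=", obs.result, obs.unit)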

  15. Development of an environmental chamber for evaluating the performance of low-cost air quality sensors under controlled conditions

    NASA Astrophysics Data System (ADS)

    Papapostolou, Vasileios; Zhang, Hang; Feenstra, Brandon J.; Polidori, Andrea

    2017-12-01

    A state-of-the-art integrated chamber system has been developed for evaluating the performance of low-cost air quality sensors. The system contains two professional grade chamber enclosures. A 1.3 m3 stainless-steel outer chamber and a 0.11 m3 Teflon-coated stainless-steel inner chamber are used to create controlled aerosol and gaseous atmospheres, respectively. Both chambers are temperature and relative humidity controlled with capability to generate a wide range of environmental conditions. The system is equipped with an integrated zero-air system, an ozone and two aerosol generation systems, a dynamic dilution calibrator, certified gas cylinders, an array of Federal Reference Method (FRM), Federal Equivalent Method (FEM), and Best Available Technology (BAT) reference instruments and an automated control and sequencing software. Our experiments have demonstrated that the chamber system is capable of generating stable and reproducible aerosol and gas concentrations at low, medium, and high levels. This paper discusses the development of the chamber system along with the methods used to quantitatively evaluate sensor performance. Considering that a significant number of academic and research institutions, government agencies, public and private institutions, and individuals are becoming interested in developing and using low-cost air quality sensors, it is important to standardize the procedures used to evaluate their performance. The information discussed herein provides a roadmap for entities who are interested in characterizing air quality sensors in a rigorous, systematic and reproducible manner.

  16. On the Response of the Special Sensor Microwave/Imager to the Marine Environment: Implications for Atmospheric Parameter Retrievals. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1990-01-01

    A reasonably rigorous basis for understanding and extracting the physical information content of Special Sensor Microwave/Imager (SSM/I) satellite images of the marine environment is provided. To this end, a comprehensive algebraic parameterization is developed for the response of the SSM/I to a set of nine atmospheric and ocean surface parameters. The brightness temperature model includes a closed-form approximation to microwave radiative transfer in a non-scattering atmosphere and fitted models for surface emission and scattering based on geometric optics calculations for the roughened sea surface. The combined model is empirically tuned using suitable sets of SSM/I data and coincident surface observations. The brightness temperature model is then used to examine the sensitivity of the SSM/I to realistic variations in the scene being observed and to evaluate the theoretical maximum precision of global SSM/I retrievals of integrated water vapor, integrated cloud liquid water, and surface wind speed. A general minimum-variance method for optimally retrieving geophysical parameters from multichannel brightness temperature measurements is outlined, and several global statistical constraints of the type required by this method are computed. Finally, a unified set of efficient statistical and semi-physical algorithms is presented for obtaining fields of surface wind speed, integrated water vapor, cloud liquid water, and precipitation from SSM/I brightness temperature data. Features include: a semi-physical method for retrieving integrated cloud liquid water at 15 km resolution and with rms errors as small as approximately 0.02 kg/sq m; a 3-channel statistical algorithm for integrated water vapor which was constructed so as to have improved linear response to water vapor and reduced sensitivity to precipitation; and two complementary indices of precipitation activity (based on 37 GHz attenuation and 85 GHz scattering, respectively), each of which are relatively insensitive to variations in other environmental parameters.

  17. Efficient Strategies for Predictive Cell-Level Control of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Xavier, Marcelo A.

    This dissertation introduces a set of state-space based model predictive control (MPC) algorithms tailored to a non-zero feedthrough term to account for the ohmic resistance that is inherent to the battery dynamics. MPC is herein applied to the problem of regulating cell-level measures of performance for lithium-ion batteries; the control methodologies are used first to compute a fast-charging profile that respects input, output, and state constraints, i.e., input current, terminal voltage, and state of charge for an equivalent circuit model (ECM) of the battery cell, and extended later to a linearized physics-based reduced-order model (PBROM). The novelty of this work can be summarized as follows: (1) the MPC variants are applied to a physics-based reduced-order model in order to make use of the available set of internal electrochemical variables and mitigate internal mechanisms of cell degradation (e.g., lithium plating); (2) we developed a dual-mode MPC closed-loop paradigm that suits the battery control problem, with the objective of reducing computational effort by solving simpler optimization routines and guaranteeing stability; and finally (3) we developed a completely new approach to the use of a predictive control strategy where MPC is employed as a "smart sensor" for power estimation. Results are presented that show the comparative performance of the MPC algorithms for both the ECM and the PBROM. These results highlight that dual-mode MPC can deliver optimal input current profiles by using a shorter horizon while still guaranteeing stability. Additionally, rigorous mathematical developments are presented for the MPC algorithms. The use of MPC as a "smart sensor" presents itself as an appealing method for power estimation, since MPC permits a fully dynamic input profile that is able to achieve performance right at the proper constraint boundaries. Therefore, MPC is expected to produce accurate power limits for each computed sample time when compared to the bisection method [1], which assumes constant input values over the prediction interval.
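
    A minimal convex sketch of the constrained fast-charge problem described above: equivalent-circuit dynamics with the ohmic feedthrough term, a charge-rate limit, and a terminal-voltage limit; the parameter values and the linearized open-circuit voltage are illustrative assumptions.

      import cvxpy as cp

      N, dt = 60, 10.0                 # horizon steps, step length (s)
      Q = 2.5 * 3600                   # cell capacity, As
      R0 = 0.05                        # ohmic resistance (feedthrough), ohm
      a, b = 3.2, 0.9                  # linearized OCV: a + b*soc, V
      i_max, v_max = 10.0, 4.2         # current and voltage limits

      soc = cp.Variable(N + 1)
      i = cp.Variable(N)

      cons = [soc[0] == 0.2]
      for k in range(N):
          cons += [soc[k + 1] == soc[k] + (dt / Q) * i[k],   # coulomb counting
                   a + b * soc[k] + R0 * i[k] <= v_max,      # terminal voltage
                   0 <= i[k], i[k] <= i_max,
                   soc[k + 1] <= 1.0]

      prob = cp.Problem(cp.Maximize(soc[N]), cons)
      prob.solve()
      # Current rides the rate limit, then tapers as voltage binds (CC-CV-like)
      print(f"SOC after {N * dt / 60:.0f} min of fast charge: {soc.value[-1]:.3f}")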

  18. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.

  19. Review of rigorous coupled-wave analysis and of homogeneous effective medium approximations for high spatial-frequency surface-relief gratings

    NASA Technical Reports Server (NTRS)

    Glytsis, Elias N.; Brundrett, David L.; Gaylord, Thomas K.

    1993-01-01

    A review of the rigorous coupled-wave analysis as applied to the diffraction of electromagnetic waves by gratings is presented. The analysis is valid for any polarization, angle of incidence, and conical diffraction. Cascaded and/or multiplexed gratings as well as material anisotropy can be incorporated under the same formalism. Small-period rectangular-groove gratings can also be modeled using approximately equivalent uniaxial homogeneous layers (effective media). The ordinary and extraordinary refractive indices of these layers depend on the grating's fill factor, the refractive indices of the substrate and superstrate, and the ratio of the free-space wavelength to the grating period. Comparisons of the homogeneous effective medium approximations with the rigorous coupled-wave analysis are presented. Antireflection designs (single-layer or multilayer) using the effective medium models are presented and compared. These ultra-short-period antireflection gratings can also be used to produce soft x-rays. Comparisons of the rigorous coupled-wave analysis with experimental results on soft x-ray generation by gratings are also included.
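
    For reference, the zeroth-order effective-medium expressions commonly used for such equivalent uniaxial layers (a standard textbook result stated here for context, with n1 and n2 the indices of the two grating materials and f the fill factor; these are not formulas quoted from the paper) are

    $$ n_O^2 = f\,n_1^2 + (1-f)\,n_2^2, \qquad \frac{1}{n_E^2} = \frac{f}{n_1^2} + \frac{1-f}{n_2^2}, $$

    where the ordinary index n_O applies to electric fields parallel to the grating grooves and the extraordinary index n_E to fields perpendicular to them; higher-order corrections add terms in the squared ratio of grating period to wavelength.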

  20. Verification of Compartmental Epidemiological Models using Metamorphic Testing, Model Checking and Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L

    Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and to understand how one can best control a disease should an outbreak of a widespread epidemic occur. However, a significant challenge within the community is the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios, including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool, and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.
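
    To make the metamorphic-testing idea concrete, here is a minimal sketch (not the paper's workflow): with frequency-dependent transmission, scaling an SIR model's whole population by a factor k should scale every trajectory by k, so a violated relation flags an implementation error. The parameter values are illustrative.

    ```python
    # Metamorphic test of a simple Euler-integrated SIR implementation.
    import numpy as np

    def sir(S0, I0, R0, beta=0.3, gamma=0.1, dt=0.1, steps=1000):
        S, I, R = float(S0), float(I0), float(R0)
        N = S + I + R
        traj = []
        for _ in range(steps):
            new_inf = beta * S * I / N * dt   # frequency-dependent transmission
            new_rec = gamma * I * dt
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
            traj.append((S, I, R))
        return np.array(traj)

    base = sir(990, 10, 0)
    scaled = sir(9900, 100, 0)   # whole population scaled by k = 10
    assert np.allclose(scaled, 10 * base, rtol=1e-8), "metamorphic relation violated"
    print("population-scaling relation holds for this SIR implementation")
    ```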

  1. Fast Bayesian approach for modal identification using free vibration data, Part I - Most probable value

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai

    2016-03-01

    The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing, as they are based on different modeling assumptions in the identification model and have different implications on costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to the finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.

  2. greenland_summer_campaign

    NASA Image and Video Library

    2015-08-28

    Laurence Smith, chair of geography at the University of California, Los Angeles, deploys an autonomous drift boat equipped with several sensors in a meltwater river on the surface of the Greenland ice sheet on July 19, 2015. “Surface melting in Greenland has increased recently, and we lacked a rigorous estimate of the water volumes being produced and their transport,” said Tom Wagner, the cryosphere program scientist at NASA Headquarters in Washington. “NASA funds fieldwork like Smith’s because it helps us to interpret satellite data, and to extrapolate measurements from the local field sites to the larger ice sheet.” Credit: NASA/Goddard/Jefferson Beck

  3. An automated, open-source (NASA Ames Stereo Pipeline) workflow for mass production of high-resolution DEMs from commercial stereo satellite imagery: Application to mountain glaciers in the contiguous US

    NASA Astrophysics Data System (ADS)

    Shean, D. E.; Arendt, A. A.; Whorton, E.; Riedel, J. L.; O'Neel, S.; Fountain, A. G.; Joughin, I. R.

    2016-12-01

    We adapted the open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline an automated processing workflow for 0.5 m GSD DigitalGlobe WorldView-1/2/3 and GeoEye-1 along-track and cross-track stereo image data. Output DEM products are posted at 2, 8, and 32 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of 0.1-0.5 m for overlapping, co-registered DEMs (n=14,17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We have leveraged these resources to produce dense time series and regional mosaics for the Earth's ice sheets. We are now processing and analyzing all available 2008-2016 commercial stereo DEMs over glaciers and perennial snowfields in the contiguous US. We are using these records to study long-term, interannual, and seasonal volume change and glacier mass balance. This analysis will provide a new assessment of regional climate change, and will offer basin-scale analyses of snowpack evolution and snow/ice melt runoff for water resource applications.

  4. An Authentication Protocol for Future Sensor Networks.

    PubMed

    Bilal, Muhammad; Kang, Shin-Gak

    2017-04-28

    Authentication is one of the essential security services in Wireless Sensor Networks (WSNs) for ensuring secure data sessions. Sensor node authentication ensures the confidentiality and validity of data collected by the sensor node, whereas user authentication guarantees that only legitimate users can access the sensor data. In a mobile WSN, sensor and user nodes move across the network and exchange data with multiple nodes, thus experiencing the authentication process multiple times. The integration of WSNs with the Internet of Things (IoT) brings forth a new kind of WSN architecture along with stricter security requirements; for instance, a sensor node or a user node may need to establish multiple concurrent secure data sessions. With concurrent data sessions, the frequency of the re-authentication process increases in proportion to the number of concurrent connections. Moreover, to establish multiple data sessions, it is essential that a protocol participant have the capability of running multiple instances of the protocol, which makes the security issue even more challenging. The currently available authentication protocols were designed for autonomous WSNs and do not account for the above requirements. Hence, a lightweight and efficient authentication protocol has become more crucial. In this paper, we present a novel, lightweight and efficient key exchange and authentication protocol suite called the Secure Mobile Sensor Network (SMSN) Authentication Protocol. In the SMSN, a mobile node goes through an initial authentication procedure and receives a re-authentication ticket from the base station. Later, a mobile node can use this re-authentication ticket when establishing multiple data exchange sessions and/or when moving across the network. This scheme reduces the communication and computational complexity of the authentication process. We proved the strength of our protocol with rigorous security analysis (including formal analysis using BAN logic) and simulated the SMSN and previously proposed schemes in an automated protocol verifier tool. Finally, we compared the computational complexity and communication cost against well-known authentication protocols.
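
    The ticket mechanic can be illustrated with a minimal sketch. This is not the SMSN protocol itself; the field names, expiry policy, and key handling below are illustrative assumptions, showing only how an HMAC-protected re-authentication ticket lets a node skip the full procedure on later sessions.

    ```python
    # Toy HMAC-protected re-authentication ticket (illustrative, not SMSN).
    import hashlib, hmac, json, os, time

    BS_KEY = os.urandom(32)  # base-station master key (assumed pre-provisioned)

    def issue_ticket(node_id: str, lifetime_s: int = 3600) -> dict:
        body = {"node": node_id, "exp": int(time.time()) + lifetime_s,
                "nonce": os.urandom(8).hex()}
        mac = hmac.new(BS_KEY, json.dumps(body, sort_keys=True).encode(),
                       hashlib.sha256).hexdigest()
        return {"body": body, "mac": mac}

    def verify_ticket(ticket: dict) -> bool:
        expected = hmac.new(BS_KEY, json.dumps(ticket["body"], sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        fresh = ticket["body"]["exp"] > time.time()
        return hmac.compare_digest(expected, ticket["mac"]) and fresh

    t = issue_ticket("sensor-42")
    print("re-authentication accepted:", verify_ticket(t))
    ```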

  5. An Authentication Protocol for Future Sensor Networks

    PubMed Central

    Bilal, Muhammad; Kang, Shin-Gak

    2017-01-01

    Authentication is one of the essential security services in Wireless Sensor Networks (WSNs) for ensuring secure data sessions. Sensor node authentication ensures the confidentiality and validity of data collected by the sensor node, whereas user authentication guarantees that only legitimate users can access the sensor data. In a mobile WSN, sensor and user nodes move across the network and exchange data with multiple nodes, thus experiencing the authentication process multiple times. The integration of WSNs with the Internet of Things (IoT) brings forth a new kind of WSN architecture along with stricter security requirements; for instance, a sensor node or a user node may need to establish multiple concurrent secure data sessions. With concurrent data sessions, the frequency of the re-authentication process increases in proportion to the number of concurrent connections. Moreover, to establish multiple data sessions, it is essential that a protocol participant have the capability of running multiple instances of the protocol, which makes the security issue even more challenging. The currently available authentication protocols were designed for autonomous WSNs and do not account for the above requirements. Hence, a lightweight and efficient authentication protocol has become more crucial. In this paper, we present a novel, lightweight and efficient key exchange and authentication protocol suite called the Secure Mobile Sensor Network (SMSN) Authentication Protocol. In the SMSN, a mobile node goes through an initial authentication procedure and receives a re-authentication ticket from the base station. Later, a mobile node can use this re-authentication ticket when establishing multiple data exchange sessions and/or when moving across the network. This scheme reduces the communication and computational complexity of the authentication process. We proved the strength of our protocol with rigorous security analysis (including formal analysis using BAN logic) and simulated the SMSN and previously proposed schemes in an automated protocol verifier tool. Finally, we compared the computational complexity and communication cost against well-known authentication protocols. PMID:28452937

  6. Characterization of photocathode dark current vs. temperature in image intensifier tube modules and intensified televisions

    NASA Astrophysics Data System (ADS)

    Bender, Edward J.; Wood, Michael V.; Hart, Steve; Heim, Gerald B.; Torgerson, John A.

    2004-10-01

    Image intensifiers (I2) have gained wide acceptance throughout the Army as the premier nighttime mobility sensor for the individual soldier, with over 200,000 fielded systems. There is increasing need, however, for such a sensor with a video output, so that it can be utilized in remote vehicle platforms, and/or can be electronically fused with other sensors. The image-intensified television (I2TV), typically consisting of an image intensifier tube coupled via fiber optic to a solid-state imaging array, has been the primary solution to this need. I2TV platforms in vehicles, however, can generate high internal heat loads and must operate in high-temperature environments. Intensifier tube dark current, called "Equivalent Background Input" or "EBI", is not a significant factor at room temperature, but can seriously degrade image contrast and intra-scene dynamic range at such high temperatures. Cooling of the intensifier's photocathode is the only practical solution to this problem. The US Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate (NVESD) and Ball Aerospace have collaborated in the reported effort to more rigorously characterize intensifier EBI versus temperature. NVESD performed non-imaging EBI measurements of Generation 2 and 3 tube modules over a large range of ambient temperature, while Ball performed an imaging evaluation of Generation 3 I2TVs over a similar temperature range. The findings and conclusions of this effort are presented.

  7. Reinventing the High School Government Course: Rigor, Simulations, and Learning from Text

    ERIC Educational Resources Information Center

    Parker, Walter C.; Lo, Jane C.

    2016-01-01

    The high school government course is arguably the main site of formal civic education in the country today. This article presents the curriculum that resulted from a multiyear study aimed at improving the course. The pedagogic model, called "Knowledge in Action," centers on a rigorous form of project-based learning where the projects are…

  8. All Rigor and No Play Is No Way to Improve Learning

    ERIC Educational Resources Information Center

    Wohlwend, Karen; Peppler, Kylie

    2015-01-01

    The authors propose and discuss their Playshop curricular model, which they developed with teachers. Their studies suggest a playful approach supports even more rigor than the Common Core State Standards require for preschool and early grade children. Children keep their attention longer when learning comes in the form of something they can play…

  9. Scientific rigor through videogames.

    PubMed

    Treuille, Adrien; Das, Rhiju

    2014-11-01

    Hypothesis-driven experimentation - the scientific method - can be subverted by fraud, irreproducibility, and lack of rigorous predictive tests. A robust solution to these problems may be the 'massive open laboratory' model, recently embodied in the internet-scale videogame EteRNA. Deploying similar platforms throughout biology could enforce the scientific method more broadly. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Point- and line-based transformation models for high resolution satellite image rectification

    NASA Astrophysics Data System (ADS)

    Abd Elrahman, Ahmed Mohamed Shaker

    Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government-funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point-based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point-based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)

  11. Recent Advances in Hyporheic Zone Science

    NASA Astrophysics Data System (ADS)

    Hester, E. T.

    2017-12-01

    The hyporheic zone exists beneath and adjacent to streams and rivers where surface water and groundwater interact. It provides unique habitat for aquatic organisms, can buffer surface water temperatures, and can be highly reactive, processing nutrients and improving water quality. The hyporheic zone is the subject of considerable research and the past year in WRR witnessed important conceptual advances. A key focus was rigorous evaluation of mixing between surface water and groundwater that occurs within hyporheic sediments. Field observations indicate that greater mixing occurs in the hyporheic zone than in deeper groundwater, and this distinction has been explored by recent numerical modeling studies, but more research is needed to fully understand the causes. A commentary this year in WRR proposed that hyporheic mixing is enhanced by a combination of fluctuating boundary conditions and multiscale physical and chemical spatial heterogeneity, but confirmation is left to future research. This year also witnessed the boundaries of knowledge pushed back in a number of other key areas. Field quantification of hyporheic exchange and reactions benefited from advances including the use and interpretation of high-frequency nutrient sensors, actively heated fiber-optic sensors, isotope tracers, and geophysical methods such as electrical resistivity imaging. Conceptual advances were made in understanding the effects of unsteady environmental conditions (e.g., tides and storms) and preferential flow on hyporheic processes. Finally, hyporheic science is being brought increasingly to bear on applied issues such as informing nutrient removal crediting for stream restoration practices, for example in the Chesapeake Bay watershed.

  12. a Redundant Gnss-Ins Low-Cost Uav Navigation Solution for Professional Applications

    NASA Astrophysics Data System (ADS)

    Navarro, J.; Parés, M. E.; Colomina, I.; Bianchi, G.; Pluchino, S.; Baddour, R.; Consoli, A.; Ayadi, J.; Gameiro, A.; Sekkas, O.; Tsetsos, V.; Gatsos, T.; Navoni, R.

    2015-08-01

    This paper presents the current results for the FP7 GINSEC project. Its goal is to build a pre-commercial prototype of a low-cost, accurate and reliable system for the professional UAV market. Low-cost, in this context, stands for the use of sensors in the most affordable segment of the market, especially MEMS IMUs and GNSS receivers. Reliability applies to the ability of the autopilot to cope with situations where unfavourable GNSS reception conditions or strong electromagnetic fields make the computation of the position and / or attitude of the UAV difficult. Professional and accurate mean that, at least using post-processing techniques such as PPP, it will be possible to reach cm-level precisions that open the door to a range of applications demanding high levels of quality in positioning, such as precision agriculture or mapping. To achieve such a goal, a rigorous sensor error modelling approach, the use of redundant IMUs and a dual-GNSS receiver setup, together with close-coupling techniques and an extended Kalman filter with self-analysis capabilities, have been used. Although the project is not yet complete, the results obtained up to now prove the feasibility of the aforementioned goal, especially in those aspects related to position determination. Research work is still ongoing to estimate the heading using a dual-GNSS receiver setup; preliminary results prove the validity of this approach for relatively long baselines, and positive results are expected when these are shorter than 1 m, which is a necessary requirement for small-sized UAVs.
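
    For readers unfamiliar with the filtering step, the following sketch fuses high-rate IMU propagation with low-rate GNSS position fixes in a linear Kalman filter. It is a deliberately simplified one-axis illustration under assumed noise values; the GINSEC filter itself is far richer (redundant MEMS IMUs, dual GNSS, close coupling, self-analysis).

    ```python
    # One-axis Kalman filter: 100 Hz IMU prediction, 1 Hz GNSS update.
    import numpy as np

    dt = 0.01                                 # 100 Hz IMU epoch
    F = np.array([[1.0, dt], [0.0, 1.0]])     # position/velocity transition
    B = np.array([[0.5 * dt**2], [dt]])       # accelerometer input mapping
    H = np.array([[1.0, 0.0]])                # GNSS observes position only
    Q = np.diag([1e-6, 1e-4])                 # process noise (assumed)
    R = np.array([[4.0]])                     # GNSS noise, (2 m)^2 (assumed)

    x, P = np.zeros((2, 1)), np.eye(2)
    rng = np.random.default_rng(0)
    for k in range(1000):
        accel = 0.1                           # synthetic specific-force sample
        x = F @ x + B * accel                 # predict with the IMU
        P = F @ P @ F.T + Q
        if k % 100 == 0:                      # a 1 Hz GNSS fix arrives
            truth = 0.5 * 0.1 * (k * dt) ** 2 # position under constant accel
            z = np.array([[truth + 2.0 * rng.standard_normal()]])
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            x = x + K @ (z - H @ x)           # measurement update
            P = (np.eye(2) - K @ H) @ P
    print("estimated [position, velocity]:", x.ravel())
    ```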

  13. Digital morphogenesis via Schelling segregation

    NASA Astrophysics Data System (ADS)

    Barmpalias, George; Elwes, Richard; Lewis-Pye, Andrew

    2018-04-01

    Schelling’s model of segregation seeks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type. As one of the earliest agent based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks. While the model has been extensively studied, it has largely resisted rigorous analysis, prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. Brandt et al (2012, Proc. 44th Annual ACM Symp. on Theory of Computing) provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model’s behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an increased level of intolerance for neighbouring agents of opposite type leads almost certainly to decreased segregation.
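
    To fix intuition, here is a minimal sketch of Schelling-style swap dynamics on a ring. The neighbourhood width, tolerance, and swap rule are illustrative and do not reproduce the exact process analysed in the paper; the point is only to show homogeneous blocks emerging from random initial conditions.

    ```python
    # Toy one-dimensional Schelling dynamics: unhappy opposite-type pairs swap.
    import random

    n, w, tau = 200, 4, 0.5            # agents, half-neighbourhood width, tolerance
    state = [random.choice([0, 1]) for _ in range(n)]

    def unhappy(i):
        same = sum(state[(i + d) % n] == state[i] for d in range(-w, w + 1) if d)
        return same < tau * 2 * w      # fewer like neighbours than tolerated

    for _ in range(100_000):
        i, j = random.randrange(n), random.randrange(n)
        if state[i] != state[j] and unhappy(i) and unhappy(j):
            state[i], state[j] = state[j], state[i]

    blocks = sum(state[k] != state[k - 1] for k in range(n))  # boundaries on the ring
    print(f"{blocks} homogeneous blocks remain out of {n} sites")
    ```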

  14. Assessing the Rigor of HS Curriculum in Admissions Decisions: A Functional Method, Plus Practical Advising for Prospective Students and High School Counselors

    ERIC Educational Resources Information Center

    Micceri, Theodore; Brigman, Leellen; Spatig, Robert

    2009-01-01

    An extensive, internally cross-validated analytical study using nested (within academic disciplines) Multilevel Modeling (MLM) on 4,560 students identified functional criteria for defining high school curriculum rigor and further determined which measures could best be used to help guide decision making for marginal applicants. The key outcome…

  15. A rigorous test of the accuracy of USGS digital elevation models in forested areas of Oregon and Washington.

    Treesearch

    Ward W. Carson; Stephen E. Reutebuch

    1997-01-01

    A procedure for performing a rigorous test of elevational accuracy of DEMs using independent ground coordinate data digitized photogrammetrically from aerial photography is presented. The accuracy of a sample set of 23 DEMs covering National Forests in Oregon and Washington was evaluated. Accuracy varied considerably between eastern and western parts of Oregon and...

  16. Accelerating Biomedical Discoveries through Rigor and Transparency.

    PubMed

    Hewitt, Judith A; Brown, Liliana L; Murphy, Stephanie J; Grieder, Franziska; Silberberg, Shai D

    2017-07-01

    Difficulties in reproducing published research findings have garnered a lot of press in recent years. As a funder of biomedical research, the National Institutes of Health (NIH) has taken measures to address underlying causes of low reproducibility. Extensive deliberations resulted in a policy, released in 2015, to enhance reproducibility through rigor and transparency. We briefly explain what led to the policy, describe its elements, provide examples and resources for the biomedical research community, and discuss the potential impact of the policy on translatability with a focus on research using animal models. Importantly, while increased attention to rigor and transparency may lead to an increase in the number of laboratory animals used in the near term, it will lead to more efficient and productive use of such resources in the long run. The translational value of animal studies will be improved through more rigorous assessment of experimental variables and data, leading to better assessments of the translational potential of animal models, for the benefit of the research community and society. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  17. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.

  18. Image synthesis for SAR system, calibration and processor design

    NASA Technical Reports Server (NTRS)

    Holtzman, J. C.; Abbott, J. L.; Kaupp, V. H.; Frost, V. S.

    1978-01-01

    The Point Scattering Method of simulating radar imagery rigorously models all aspects of the imaging radar phenomena. Its computational algorithms operate on a symbolic representation of the terrain test site to calculate such parameters as range, angle of incidence, resolution cell size, etc. Empirical backscatter data and elevation data are utilized to model the terrain. Additionally, the important geometrical/propagation effects such as shadow, foreshortening, layover, and local angle of incidence are rigorously treated. Applications of radar image simulation to a proposed calibrated SAR system are highlighted: soil moisture detection and vegetation discrimination.

  19. Mathematical Rigor vs. Conceptual Change: Some Early Results

    NASA Astrophysics Data System (ADS)

    Alexander, W. R.

    2003-05-01

    Results from two different pedagogical approaches to teaching introductory astronomy at the college level will be presented. The first of these approaches is a descriptive, conceptually based approach that emphasizes conceptual change. This descriptive class is typically an elective for non-science majors. The other approach is a mathematically rigorous treatment that emphasizes problem solving and is designed to prepare students for further study in astronomy. The mathematically rigorous class is typically taken by science majors. It also fulfills an elective science requirement for these science majors. The Astronomy Diagnostic Test version 2 (ADT 2.0) was used as an assessment instrument since its validity and reliability have been investigated by previous researchers. The ADT 2.0 was administered as both a pre-test and post-test to both groups. Initial results show no significant difference between the two groups on the post-test. However, there is a slightly greater improvement for the descriptive class between the pre- and post-testing compared to the mathematically rigorous course. Great care was taken to account for variables, including selection of text, class format, and instructor differences. Results indicate that the mathematically rigorous model does not improve conceptual understanding any more than the conceptual change model. Additional results indicate that there is a similar gender bias in favor of males that has been measured by previous investigators. This research has been funded by the College of Science and Mathematics at James Madison University.

  20. Intercomparison of cosmic-ray neutron sensors and water balance monitoring in an urban environment

    NASA Astrophysics Data System (ADS)

    Schrön, Martin; Zacharias, Steffen; Womack, Gary; Köhli, Markus; Desilets, Darin; Oswald, Sascha E.; Bumberger, Jan; Mollenhauer, Hannes; Kögler, Simon; Remmler, Paul; Kasner, Mandy; Denk, Astrid; Dietrich, Peter

    2018-03-01

    Sensor-to-sensor variability is a source of error common to all geoscientific instruments that needs to be assessed before comparative and applied research can be performed with multiple sensors. Consistency among sensor systems is especially critical when subtle features of the surrounding terrain are to be identified. Cosmic-ray neutron sensors (CRNSs) are a recent technology used to monitor hectometre-scale environmental water storages, for which a rigorous comparison study of numerous co-located sensors has not yet been performed. In this work, nine stationary CRNS probes of type CRS1000 were installed in relative proximity on a grass patch surrounded by trees, buildings, and sealed areas. While the dynamics of the neutron count rates were found to be similar, offsets of a few percent from the absolute average neutron count rates were found. Technical adjustments of the individual detection parameters brought all instruments into good agreement. Furthermore, we found a critical integration time of 6 h above which all sensors showed consistent dynamics in the data and their RMSE fell below 1 % of gravimetric water content. The residual differences between the nine signals indicated local effects of the complex urban terrain on the scale of several metres. Mobile CRNS measurements and spatial simulations with the URANOS neutron transport code in the surrounding area (25 ha) have revealed substantial sub-footprint heterogeneity to which CRNS detectors are sensitive despite their large averaging volume. The sealed and constantly dry structures in the footprint furthermore damped the dynamics of the CRNS-derived soil moisture. We developed strategies to correct for the sealed-area effect based on theoretical insights about the spatial sensitivity of the sensor. This procedure not only led to reliable soil moisture estimation during dry-out periods, it further revealed a strong signal of intercepted water that emerged over the sealed surfaces during rain events. The presented arrangement offered a unique opportunity to demonstrate the CRNS performance in complex terrain, and the results indicated great potential for further applications in urban climate research.

  1. Fixture For Mounting A Pressure Sensor

    NASA Technical Reports Server (NTRS)

    Cagle, Christopher M.

    1995-01-01

    Fixture for mounting pressure sensor in aerodynamic model simplifies task of removal and replacement of sensor in event sensor becomes damaged. Makes it unnecessary to dismantle model. Also minimizes any change in aerodynamic characteristics of model in event of replacement. Removable pressure sensor installed in fixture in wall of model. Wires from sensor pass through channel under surface.

  2. Integrating teaching and authentic research in the field and laboratory settings

    NASA Astrophysics Data System (ADS)

    Daryanto, S.; Wang, L.; Kaseke, K. F.; Ravi, S.

    2016-12-01

    Typically, authentic research activities are separated from rigorous classroom teaching. Here we assessed the potential of integrating teaching and research activities in both field and laboratory settings. We worked with students from both the US and abroad, without strong science backgrounds, to utilize advanced environmental sensors and statistical tools to conduct innovative projects. The students included one from Namibia and two local high school students in Indianapolis (through Project SEED, Summer Experience for the Economically Disadvantaged). They conducted leaf potential measurements, isotope measurements, and meta-analysis. The experience showed us the great potential of integrating teaching and research in both field and laboratory settings.

  3. Lightweight biometric detection system for human classification using pyroelectric infrared detectors.

    PubMed

    Burchett, John; Shankar, Mohan; Hamza, A Ben; Guenther, Bob D; Pitsianis, Nikos; Brady, David J

    2006-05-01

    We use pyroelectric detectors that are differential in nature to detect motion in humans by their heat emissions. Coded Fresnel lens arrays create boundaries that help to localize humans in space as well as to classify the nature of their motion. We design and implement a low-cost biometric tracking system by using off-the-shelf components. We demonstrate two classification methods by using data gathered from sensor clusters of dual-element pyroelectric detectors with coded Fresnel lens arrays. We propose two algorithms for person identification, a more generalized spectral clustering method and a more rigorous example that uses principal component regression to perform a blind classification.

  4. Investigation of Time Series Representations and Similarity Measures for Structural Damage Pattern Recognition

    PubMed Central

    Swartz, R. Andrew

    2013-01-01

    This paper investigates the time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both time series representation methods and similarity measures have significant impact on the pattern recognition success rate. PMID:24191136

  5. Conflict: Operational Realism versus Analytical Rigor in Defense Modeling and Simulation

    DTIC Science & Technology

    2012-06-14

    Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Boston: Houghton Mifflin Company, 2002. [7] R. T. Johnson, G...experimentation? In order for an experiment to be considered rigorous, and the results valid, the experiment should be designed using established...addition to the interview, the pilots were administered a written survey, designed to capture their reactions regarding the level of realism present

  6. Climate Change Accuracy: Requirements and Economic Value

    NASA Astrophysics Data System (ADS)

    Wielicki, B. A.; Cooke, R.; Mlynczak, M. G.; Lukashin, C.; Thome, K. J.; Baize, R. R.

    2014-12-01

    Higher than normal accuracy is required to rigorously observe decadal climate change. But what level is needed? How can this be quantified? This presentation will summarize a new, more rigorous and quantitative approach to determining the required accuracy for climate change observations (Wielicki et al., 2013, BAMS). Most current global satellite observations cannot meet this accuracy level. A proposed new satellite mission to resolve this challenge is CLARREO (Climate Absolute Radiance and Refractivity Observatory). CLARREO is designed to achieve advances of a factor of 10 for reflected solar spectra and a factor of 3 to 5 for thermal infrared spectra (Wielicki et al., Oct. 2013 BAMS). The CLARREO spectrometers are designed to serve as SI traceable benchmarks for the Global Satellite Intercalibration System (GSICS) and to greatly improve the utility of a wide range of LEO and GEO infrared and reflected solar passive satellite sensors for climate change observations (e.g. CERES, MODIS, VIIRS, CrIS, IASI, Landsat, SPOT, etc.). Providing more accurate decadal change trends can in turn lead to more rapid narrowing of key climate science uncertainties such as cloud feedback and climate sensitivity. A study has been carried out to quantify the economic benefits of such an advance as part of a rigorous and complete climate observing system. The study concludes that the economic value is US$12 trillion in net present value for a nominal discount rate of 3% (Cooke et al. 2013, J. Env. Sys. Dec.). A brief summary of these two studies and their implications for the future of climate science will be presented.

  7. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C⁰ estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincaré maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C⁰ errors of size 10⁻¹⁰ to 10⁻¹⁴, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
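
    The verified interval arithmetic underlying Taylor Model remainder bounds can be illustrated with a toy sketch: every operation returns an interval guaranteed to contain the exact result. Outward rounding, which full rigor requires, is omitted here for brevity, so this version is illustrative rather than verified.

    ```python
    # Toy interval arithmetic: enclosures propagate through + and *.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            products = [self.lo * other.lo, self.lo * other.hi,
                        self.hi * other.lo, self.hi * other.hi]
            return Interval(min(products), max(products))

    x = Interval(1.0, 2.0)
    y = Interval(-0.5, 0.5)
    print(x + y)   # Interval(lo=0.5, hi=2.5): encloses x + y for all members
    print(x * y)   # Interval(lo=-1.0, hi=1.0)
    ```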

  8. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    2017-11-01

    Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
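
    As a rough illustration of the measurement-variable idea (not the paper's estimator), the sketch below integrates replicator dynamics for Rock-Paper-Scissors and accumulates an angular-momentum-like quantity around the simplex centre; a persistent nonzero sign indicates the cyclic motion such patterns are meant to capture. The payoff matrix is the standard zero-sum RPS one; the accumulation rule is an illustrative proxy.

    ```python
    # Replicator dynamics for RPS with an angular-momentum proxy.
    import numpy as np

    A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # standard RPS payoffs
    x = np.array([0.5, 0.3, 0.2])
    dt, L = 0.01, 0.0
    for _ in range(5000):
        f = A @ x
        dx = x * (f - x @ f)                 # replicator equation
        c = x - 1 / 3                        # offset from the simplex centre
        L += c[0] * dx[1] - c[1] * dx[0]     # accumulated rotation proxy
        x = x + dt * dx                      # Euler step
    print("net rotation sign (cyclic motion if nonzero):", np.sign(L))
    ```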

  9. New tools for Content Innovation and data sharing: Enhancing reproducibility and rigor in biomechanics research.

    PubMed

    Guilak, Farshid

    2017-03-21

    We are currently in one of the most exciting times for science and engineering as we witness unprecedented growth in our computational and experimental capabilities to generate new data and models. To facilitate data and model sharing, and to enhance reproducibility and rigor in biomechanics research, the Journal of Biomechanics has introduced a number of tools for Content Innovation to allow presentation, sharing, and archiving of methods, models, and data in our articles. The tools include an Interactive Plot Viewer, 3D Geometric Shape and Model Viewer, Virtual Microscope, Interactive MATLAB Figure Viewer, and Audioslides. Authors are highly encouraged to make use of these in upcoming journal submissions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Kinetics versus thermodynamics in materials modeling: The case of the di-vacancy in iron

    NASA Astrophysics Data System (ADS)

    Djurabekova, F.; Malerba, L.; Pasianot, R. C.; Olsson, P.; Nordlund, K.

    2010-07-01

    Monte Carlo models are widely used for the study of microstructural and microchemical evolution of materials under irradiation. However, they often link explicitly the relevant activation energies to the energy difference between local equilibrium states. We provide a simple example (di-vacancy migration in iron) in which a rigorous activation energy calculation, by means of both empirical interatomic potentials and density functional theory methods, clearly shows that such a link is not warranted, revealing a migration mechanism that a thermodynamics-linked activation energy model cannot predict. Such a mechanism is, however, fully consistent with thermodynamics. This example emphasizes the importance of basing Monte Carlo methods on models where the activation energies are rigorously calculated, rather than deduced from widespread heuristic equations.
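
    The distinction can be stated generically (symbols follow common kinetic Monte Carlo usage rather than the paper's notation): a rigorous rate uses the actual saddle-point energy along the migration path,

    $$ \Gamma = \nu_0 \exp\!\left(-\frac{E_{\text{saddle}} - E_{\text{initial}}}{k_B T}\right), $$

    whereas heuristic schemes approximate the activation energy from the end-state energy difference alone, e.g. $E_a = E_0 + \Delta E / 2$. Whenever the true saddle does not track $\Delta E$, as for the iron di-vacancy discussed here, the two prescriptions can select different migration mechanisms.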

  11. A Rigorous Sharp Interface Limit of a Diffuse Interface Model Related to Tumor Growth

    NASA Astrophysics Data System (ADS)

    Rocca, Elisabetta; Scala, Riccardo

    2017-06-01

    In this paper, we study the rigorous sharp interface limit of a diffuse interface model related to the dynamics of tumor growth, when a parameter ɛ, representing the interface thickness between the tumorous and non-tumorous cells, tends to zero. In particular, we analyze here a gradient-flow-type model arising from a modification of the recently introduced model for tumor growth dynamics in Hawkins-Daruud et al. (Int J Numer Math Biomed Eng 28:3-24, 2011) (cf. also Hilhorst et al. Math Models Methods Appl Sci 25:1011-1043, 2015). Exploiting the techniques related to both gradient flows and gamma convergence, we recover a condition on the interface Γ relating the chemical and double-well potentials, the mean curvature, and the normal velocity.

  12. A Mathematical Evaluation of the Core Conductor Model

    PubMed Central

    Clark, John; Plonsey, Robert

    1966-01-01

    This paper is a mathematical evaluation of the core conductor model where its three dimensionality is taken into account. The problem considered is that of a single, active, unmyelinated nerve fiber situated in an extensive, homogeneous, conducting medium. Expressions for the various core conductor parameters have been derived in a mathematically rigorous manner according to the principles of electromagnetic theory. The purpose of employing mathematical rigor in this study is to bring to light the inherent assumptions of the one dimensional core conductor model, providing a method of evaluating the accuracy of this linear model. Based on the use of synthetic squid axon data, the conclusion of this study is that the linear core conductor model is a good approximation for internal but not external parameters. PMID:5903155

  13. Performance analysis of photoresistor and phototransistor for automotive’s halogen and xenon bulbs light output

    NASA Astrophysics Data System (ADS)

    Rammohan, A.; Kumar, C. Ramesh

    2017-11-01

    Illumination is measured using various kinds of calibrated equipment available on the market, such as goniometers, spectral radiometers, photometers, lux meters, and camera-based systems, which directly display the illumination of automotive headlight light distributions in units of lux, foot-candles, lumens/sq. ft., lambert, etc. In this research, we evaluate whether a photoresistor (Light Dependent Resistor, LDR) and a phototransistor are useful for sensing the light patterns of automotive halogen and xenon bulbs. The experiments were conducted during night hours in a completely dark space. We used the headlamp assembly of the TATA SUMO VICTA vehicle sold in the Indian market and conducted the experiments separately for halogen and xenon bulbs under low- and high-beam operation at various angles and test points within a distance of ten meters. We also compared the light intensities of the halogen and xenon bulbs to establish which is higher. After rigorous testing with these two sensors, we conclude that both are suitable for sensing the beam pattern of automotive bulbs, and that an array of sensors, or a mixed combination of sensors, could be used for illumination measurement under proper calibration.
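
    For context, converting an LDR reading into an approximate illuminance typically follows the voltage-divider equation plus the datasheet power law. The sketch below uses assumed component values (supply voltage, series resistor, R-at-10-lux, and gamma), not the paper's calibration.

    ```python
    # Approximate lux from an ADC reading of an LDR in a voltage divider
    # (LDR to ground, fixed resistor to VCC); all constants are assumed.
    VCC, R_FIXED = 5.0, 10_000.0   # supply voltage (V) and series resistor (ohm)
    R10, GAMMA = 8_000.0, 0.7      # LDR resistance at 10 lux and slope (datasheet-style)

    def lux_from_adc(adc_counts: int, adc_max: int = 1023) -> float:
        v_out = VCC * adc_counts / adc_max            # voltage across the LDR
        r_ldr = R_FIXED * v_out / (VCC - v_out)       # divider equation
        return 10.0 * (R10 / r_ldr) ** (1.0 / GAMMA)  # power-law R(lux) inverted

    print(f"{lux_from_adc(512):.0f} lux (approx.)")
    ```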

  14. Use of x-ray fluorescence for in-situ detection of metals

    NASA Astrophysics Data System (ADS)

    Elam, W. T. E.; Whitlock, Robert R.; Gilfrich, John V.

    1995-01-01

    X-ray fluorescence (XRF) is a well-established, non-destructive method of determining elemental concentrations at ppm levels in complex samples. It can operate in atmosphere with no sample preparation, and provides accuracies of 1% or better under optimum conditions. This report addresses two sets of issues concerning the use of x-ray fluorescence as a sensor technology for the cone penetrometer, for shipboard waste disposal, or for other in-situ, real-time environmental applications. The first issue concerns the applicability of XRF to these applications, and includes investigation of detection limits and matrix effects. We have evaluated the detection limits and quantitative accuracy of a sensor mock-up for metals in soils under conditions expected in the field. In addition, several novel ways of improving the lower limits of detection to reach the drinking water regulatory limits have been explored. The second issue is the engineering involved with constructing a spectrometer within the 1.75 inch diameter of the penetrometer pipe, which is the most rigorous physical constraint. Only small improvements over current state-of-the-art are required. Additional advantages of XRF are that no radioactive sources or hazardous materials are used in the sensor design, and no reagents or any possible sources of ignition are involved.

  15. Observations and Numerical Modeling of the 2012 Haida Gwaii Tsunami off the Coast of British Columbia

    NASA Astrophysics Data System (ADS)

    Fine, Isaac V.; Cherniawsky, Josef Y.; Thomson, Richard E.; Rabinovich, Alexander B.; Krassovski, Maxim V.

    2015-03-01

    A major (Mw 7.7) earthquake occurred on October 28, 2012 along the Queen Charlotte Fault Zone off the west coast of Haida Gwaii (formerly the Queen Charlotte Islands). The earthquake was the second strongest instrumentally recorded earthquake in Canadian history and generated the largest local tsunami ever recorded on the coast of British Columbia. A field survey on the Pacific side of Haida Gwaii revealed maximum runup heights of up to 7.6 m at sites sheltered from storm waves and 13 m in a small inlet that is less sheltered from storms (Leonard and Bednarski 2014). The tsunami was recorded by tide gauges along the coast of British Columbia, by open-ocean bottom pressure sensors of the NEPTUNE facility at Ocean Networks Canada's cabled observatory located seaward of southwestern Vancouver Island, and by several DART stations located in the northeast Pacific. The tsunami observations, in combination with rigorous numerical modeling, enabled us to determine the physical properties of this event and to correct the location of the tsunami source with respect to the initial geophysical estimates. The initial model results were used to specify sites of particular interest for post-tsunami field surveys on the coast of Moresby Island (Haida Gwaii), while field survey observations (Leonard and Bednarski 2014) were used, in turn, to verify the numerical simulations based on the corrected source region.

  16. ZY3-02 Laser Altimeter Footprint Geolocation Prediction

    PubMed Central

    Xie, Junfeng; Tang, Xinming; Mo, Fan; Li, Guoyuan; Zhu, Guangbin; Wang, Zhenming; Fu, Xingke; Gao, Xiaoming; Dou, Xianhui

    2017-01-01

    Successfully launched on 30 May 2016, ZY3-02 is the first Chinese surveying and mapping satellite equipped with a lightweight laser altimeter. Calibration is necessary before the laser altimeter becomes operational. Laser footprint location prediction is the first step in calibration that is based on ground infrared detectors, and it is difficult because the sample frequency of the ZY3-02 laser altimeter is 2 Hz, and the distance between two adjacent laser footprints is about 3.5 km. In this paper, we build an on-orbit rigorous geometric prediction model referenced to the rigorous geometric model of optical remote sensing satellites. The model includes three kinds of data that must be predicted: pointing angle, orbit parameters, and attitude angles. The proposed method is verified by a ZY3-02 laser altimeter on-orbit geometric calibration test. Five laser footprint prediction experiments are conducted based on the model, and the laser footprint prediction accuracy is better than 150 m on the ground. The effectiveness and accuracy of the on-orbit rigorous geometric prediction model are confirmed by the test results. The geolocation is predicted precisely by the proposed method, and this provides a reference for the geolocation prediction of future land laser detectors in other laser altimeter calibration tests. PMID:28934160
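
    Generically (the symbols below are the usual ones for rigorous altimeter geometry, not notation taken from the paper), footprint prediction composes the three predicted quantities the model identifies, pointing, orbit, and attitude, as

    $$ \mathbf{X}_{\text{ground}} = \mathbf{X}_{\text{sat}}(t) + \rho\, R_{\text{orbit}}^{\text{ECEF}}(t)\, R_{\text{body}}^{\text{orbit}}(t)\, \mathbf{u}(\psi_x, \psi_y), $$

    where u is the laser pointing unit vector in the body frame, ρ is the predicted range, and the rotation matrices are built from predicted ephemeris and attitude. An error in any factor displaces the predicted footprint on the ground, which is why all three must be predicted before detectors are laid out.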

  17. ZY3-02 Laser Altimeter Footprint Geolocation Prediction.

    PubMed

    Xie, Junfeng; Tang, Xinming; Mo, Fan; Li, Guoyuan; Zhu, Guangbin; Wang, Zhenming; Fu, Xingke; Gao, Xiaoming; Dou, Xianhui

    2017-09-21

    Successfully launched on 30 May 2016, ZY3-02 is the first Chinese surveying and mapping satellite equipped with a lightweight laser altimeter. Calibration is necessary before the laser altimeter becomes operational. Laser footprint location prediction is the first step in calibration that is based on ground infrared detectors, and it is difficult because the sample frequency of the ZY3-02 laser altimeter is 2 Hz, and the distance between two adjacent laser footprints is about 3.5 km. In this paper, we build an on-orbit rigorous geometric prediction model referenced to the rigorous geometric model of optical remote sensing satellites. The model includes three kinds of data that must be predicted: pointing angle, orbit parameters, and attitude angles. The proposed method is verified by a ZY3-02 laser altimeter on-orbit geometric calibration test. Five laser footprint prediction experiments are conducted based on the model, and the laser footprint prediction accuracy is better than 150 m on the ground. The effectiveness and accuracy of the on-orbit rigorous geometric prediction model are confirmed by the test results. The geolocation is predicted precisely by the proposed method, and this provides a reference for the geolocation prediction of future land laser detectors in other laser altimeter calibration tests.

  18. Skill Assessment for Coupled Biological/Physical Models of Marine Systems.

    PubMed

    Stow, Craig A; Jolliff, Jason; McGillicuddy, Dennis J; Doney, Scott C; Allen, J Icarus; Friedrichs, Marjorie A M; Rose, Kenneth A; Wallhead, Philip

    2009-02-20

    Coupled biological/physical models of marine systems serve many purposes including the synthesis of information, hypothesis generation, and as a tool for numerical experimentation. However, marine system models are increasingly used for prediction to support high-stakes decision-making. In such applications it is imperative that a rigorous model skill assessment is conducted so that the model's capabilities are tested and understood. Herein, we review several metrics and approaches useful to evaluate model skill. The definition of skill and the determination of the skill level necessary for a given application is context specific and no single metric is likely to reveal all aspects of model skill. Thus, we recommend the use of several metrics, in concert, to provide a more thorough appraisal. The routine application and presentation of rigorous skill assessment metrics will also serve the broader interests of the modeling community, ultimately resulting in improved forecasting abilities as well as helping us recognize our limitations.
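
    A minimal sketch of a few widely used skill metrics of the kind such reviews discuss follows; the metric names and formulas are common usage, not necessarily the paper's selection.

    ```python
    # Common model-skill metrics for paired observation/model series.
    import numpy as np

    def skill_metrics(obs, mod):
        obs, mod = np.asarray(obs, float), np.asarray(mod, float)
        err = mod - obs
        return {
            "bias": err.mean(),
            "rmse": np.sqrt((err ** 2).mean()),
            "correlation": np.corrcoef(obs, mod)[0, 1],
            # Nash-Sutcliffe style efficiency: 1 is perfect, <0 is worse
            # than simply predicting the observed mean.
            "model_efficiency": 1 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum(),
        }

    print(skill_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.7]))
    ```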

  19. System Design, Calibration and Performance Analysis of a Novel 360° Stereo Panoramic Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Blaser, S.; Nebiker, S.; Cavegn, S.

    2017-05-01

    Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras tilted forward and backward in combination with conventional forward and backward looking stereo camera systems, we achieve full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on our proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which allows calibrating all stereo systems with superior accuracy compared to previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of our study area by employing all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.

  20. Circular instead of hierarchical: methodological principles for the evaluation of complex interventions

    PubMed Central

    Walach, Harald; Falkenberg, Torkel; Fønnebø, Vinjar; Lewith, George; Jonas, Wayne B

    2006-01-01

    Background The reasoning behind evaluating medical interventions is that a hierarchy of methods exists which successively produce improved and therefore more rigorous evidence-based medicine upon which to make clinical decisions. At the foundation of this hierarchy are case studies, retrospective and prospective case series, followed by cohort studies with historical and concomitant non-randomized controls. Open-label randomized controlled studies (RCTs), and finally blinded, placebo-controlled RCTs, which offer the most internal validity, are considered the most reliable evidence. Rigorous RCTs remove bias. Evidence from RCTs forms the basis of meta-analyses and systematic reviews. This hierarchy, founded on a pharmacological model of therapy, is generalized to other interventions which may be complex and non-pharmacological (healing, acupuncture and surgery). Discussion The hierarchical model is valid for limited questions of efficacy, for instance for regulatory purposes and newly devised products and pharmacological preparations. It is inadequate for the evaluation of complex interventions such as physiotherapy, surgery and complementary and alternative medicine (CAM). This has to do with the essential tension between internal validity (rigor and the removal of bias) and external validity (generalizability). Summary Instead of an Evidence Hierarchy, we propose a Circular Model. This would imply a multiplicity of methods, using different designs, counterbalancing their individual strengths and weaknesses to arrive at pragmatic but equally rigorous evidence which would provide significant assistance in clinical and health systems innovation. Such evidence would better inform national health care technology assessment agencies and promote evidence-based health reform. PMID:16796762

  1. The KP Approximation Under a Weak Coriolis Forcing

    NASA Astrophysics Data System (ADS)

    Melinand, Benjamin

    2018-02-01

    In this paper, we study the asymptotic behavior of weakly transverse water-waves under a weak Coriolis forcing in the long wave regime. We derive the Boussinesq-Coriolis equations in this setting and we provide a rigorous justification of this model. Then, from these equations, we derive two other asymptotic models. When the Coriolis forcing is weak, we fully justify the rotation-modified Kadomtsev-Petviashvili equation (also called Grimshaw-Melville equation). When the Coriolis forcing is very weak, we rigorously justify the Kadomtsev-Petviashvili equation. This work provides the first mathematical justification of the KP approximation under a Coriolis forcing.

  2. Imaging 2D optical diffuse reflectance in skeletal muscle

    NASA Astrophysics Data System (ADS)

    Ranasinghesagara, Janaka; Yao, Gang

    2007-04-01

    We discovered a unique pattern of optical reflectance from fresh pre-rigor skeletal muscles which cannot be described using existing theories. A numerical fitting function was developed to quantify the equi-intensity contours of acquired reflectance images. Using this model, we studied the changes of the reflectance profile during stretching and the rigor process. We found that the prominent anisotropic features diminished after rigor completion. These results suggest that muscle sarcomere structures play important roles in modulating light propagation in whole muscle. When sarcomere diffraction was incorporated into a Monte Carlo model, the resulting reflectance profiles quantitatively resembled the experimental observations.

  3. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    PubMed

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  4. Reliability issues in human brain temperature measurement

    PubMed Central

    2009-01-01

    Introduction The influence of brain temperature on clinical outcome after severe brain trauma is currently poorly understood. When brain temperature is measured directly, different values between the inside and outside of the head can occur. It is not yet clear if these differences are 'real' or due to measurement error. Methods The aim of this study was to assess the performance and measurement uncertainty of body and brain temperature sensors currently in use in neurocritical care. Two organic fixed-point, ultra-stable temperature sources were used as the temperature references. Two different types of brain sensor (brain type 1 and brain type 2) and one body type sensor were tested under rigorous laboratory conditions and at the bedside. Measurement uncertainty was calculated using internationally recognised methods. Results Average differences between the 26°C reference temperature source and the clinical temperature sensors were +0.11°C (brain type 1), +0.24°C (brain type 2) and -0.15°C (body type), respectively. For the 36°C temperature reference source, average differences between the reference source and clinical thermometers were -0.02°C, +0.09°C and -0.03°C for brain type 1, brain type 2 and body type sensor, respectively. Repeat calibrations the following day confirmed that these results were within the calculated uncertainties. The results of the immersion tests revealed that the reading of the body type sensor was sensitive to position, with differences in temperature of -0.5°C to -1.4°C observed on withdrawing the thermometer from the base of the isothermal environment by 4 cm and 8 cm, respectively. Taking into account all the factors tested during the calibration experiments, the measurement uncertainties of the clinical sensors against the (nominal) 26°C and 36°C temperature reference sources for the brain type 1, brain type 2 and body type sensors were ± 0.18°C, ± 0.10°C and ± 0.12°C respectively. Conclusions The results show that brain temperature sensors are fundamentally accurate and the measurements are precise to within 0.1 to 0.2°C. Subtle dissociation between brain and body temperature in excess of 0.1 to 0.2°C is likely to be real. Body temperature sensors need to be secured in position to ensure that measurements are reliable. PMID:19573241
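    The uncertainty combination used in such budgets is standard: independent standard-uncertainty components add in quadrature, and an expanded uncertainty applies a coverage factor. A hedged sketch (Python, with invented component values rather than the study's actual budget):

    import math

    components = {            # standard uncertainties (degrees C), assumed values
        "reference_source": 0.02,
        "calibration_fit": 0.05,
        "repeatability": 0.04,
        "readout_resolution": 0.03,
    }
    # Root-sum-of-squares combination of independent components (GUM-style)
    u_c = math.sqrt(sum(u ** 2 for u in components.values()))
    U = 2.0 * u_c             # expanded uncertainty, coverage factor k = 2
    print(f"combined u = {u_c:.3f} C, expanded U (k=2) = {U:.3f} C")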

  5. Modeling of a Surface Acoustic Wave Strain Sensor

    NASA Technical Reports Server (NTRS)

    Wilson, W. C.; Atkinson, Gary M.

    2010-01-01

    NASA Langley Research Center is investigating Surface Acoustic Wave (SAW) sensor technology for harsh environments aimed at aerospace applications. To aid in the development of these sensors, a model of a SAW strain sensor has been developed. The new model extends the modified matrix method to include the response of Orthogonal Frequency Coded (OFC) reflectors and the response of SAW devices to strain. The results show that the model accurately captures the strain response of a SAW sensor on a Langasite substrate. The results of the model of a SAW strain sensor on Langasite are presented.

  6. New methods for state estimation and adaptive observation of environmental flow systems leveraging coordinated swarms of sensor vehicles

    NASA Astrophysics Data System (ADS)

    Bewley, Thomas

    2015-11-01

    Accurate long-term forecasts of the path and intensity of hurricanes are imperative to protect property and save lives. Accurate estimations and forecasts of the spread of large-scale contaminant plumes, such as those from Deepwater Horizon, Fukushima, and recent volcanic eruptions in Iceland, are essential for assessing environmental impact, coordinating remediation efforts, and in certain cases moving people out of harm's way. The challenges in estimating and forecasting such systems include: (a) environmental flow modeling, (b) high-performance real-time computing, (c) assimilating measured data into numerical simulations, and (d) acquiring in-situ data, beyond what can be measured from satellites, that is maximally relevant for reducing forecast uncertainty. This talk will focus on new techniques for addressing (c) and (d), namely, data assimilation and adaptive observation, in both hurricanes and large-scale environmental plumes. In particular, we will present a new technique for the energy-efficient coordination of swarms of sensor-laden balloons for persistent, in-situ, distributed, real-time measurement of developing hurricanes, leveraging buoyancy control only (coupled with the predictable and strongly stratified flowfield within the hurricane). Animations of these results are available at http://flowcontrol.ucsd.edu/3dhurricane.mp4 and http://flowcontrol.ucsd.edu/katrina.mp4. We will also survey our unique hybridization of the venerable Ensemble Kalman and Variational approaches to large-scale data assimilation in environmental flow systems, and how essentially the dual of this hybrid approach may be used to solve the adaptive observation problem in a uniquely effective and rigorous fashion.
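    For readers unfamiliar with the ensemble side of such hybrids, the following minimal sketch (Python) shows a textbook stochastic ensemble Kalman filter analysis step, the building block that ensemble/variational hybrids extend. It is not the speaker's hybrid method, and all sizes and values are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def enkf_update(X, y, H, R):
        """X: (n_state, n_members) forecast ensemble; y: observations;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs error cov."""
        n = X.shape[1]
        Xm = X.mean(axis=1, keepdims=True)
        A = X - Xm                                   # ensemble anomalies
        Pf_Ht = A @ (H @ A).T / (n - 1)              # Pf H^T from the ensemble
        S = H @ Pf_Ht + R                            # innovation covariance
        K = Pf_Ht @ np.linalg.inv(S)                 # Kalman gain
        # Perturb observations so the analysis ensemble keeps the right spread
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
        return X + K @ (Y - H @ X)

    X = rng.normal(0.0, 1.0, size=(3, 50))           # toy 3-variable, 50-member ensemble
    H = np.array([[1.0, 0.0, 0.0]])                  # observe the first variable only
    Xa = enkf_update(X, y=np.array([0.5]), H=H, R=np.array([[0.1]]))
    print(Xa.mean(axis=1))                           # analysis mean pulled toward y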

  7. Optical simulations of organic light-emitting diodes through a combination of rigorous electromagnetic solvers and Monte Carlo ray-tracing methods

    NASA Astrophysics Data System (ADS)

    Bahl, Mayank; Zhou, Gui-Rong; Heller, Evan; Cassarly, William; Jiang, Mingming; Scarmozzino, Rob; Gregory, G. Groot

    2014-09-01

    Over the last two decades there has been extensive research done to improve the design of Organic Light Emitting Diodes (OLEDs) so as to enhance light extraction efficiency, improve beam shaping, and allow color tuning through techniques such as the use of patterned substrates, photonic crystal (PC) gratings, back reflectors, surface texture, and phosphor down-conversion. Computational simulation has been an important tool for examining these increasingly complex designs. It has provided insights for improving OLED performance as a result of its ability to explore limitations, predict solutions, and demonstrate theoretical results. Depending upon the focus of the design and scale of the problem, simulations are carried out using rigorous electromagnetic (EM) wave optics based techniques, such as finite-difference time-domain (FDTD) and rigorous coupled wave analysis (RCWA), or through ray optics based techniques such as Monte Carlo ray-tracing. The former are typically used for modeling nanostructures on the OLED die, and the latter for modeling encapsulating structures, die placement, back-reflection, and phosphor down-conversion. This paper presents a mixed-level simulation approach that unifies EM wave-level and ray-level tools. This approach uses rigorous EM wave based tools to characterize the nanostructured die and generate both a Bidirectional Scattering Distribution Function (BSDF) and a far-field angular intensity distribution. These characteristics are then incorporated into the ray-tracing simulator to obtain the overall performance. Such a mixed-level approach allows for comprehensive modeling of the optical characteristics of OLEDs and can potentially lead to more accurate performance predictions than those from individual modeling tools alone.

  8. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well-established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the high precision interval data type are developed and described in detail. The application of these operations in the implementation of high precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near-standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, high precision intervals and high precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.

  9. Continuous real-time water-quality monitoring and regression analysis to compute constituent concentrations and loads in the North Fork Ninnescah River upstream from Cheney Reservoir, south-central Kansas, 1999–2012

    USGS Publications Warehouse

    Stone, Mandy L.; Graham, Jennifer L.; Gatotho, Jackline W.

    2013-01-01

    Cheney Reservoir, located in south-central Kansas, is the primary water supply for the city of Wichita. The U.S. Geological Survey has operated a continuous real-time water-quality monitoring station since 1998 on the North Fork Ninnescah River, the main source of inflow to Cheney Reservoir. Continuously measured water-quality physical properties include streamflow, specific conductance, pH, water temperature, dissolved oxygen, and turbidity. Discrete water-quality samples were collected during 1999 through 2009 and analyzed for sediment, nutrients, bacteria, and other water-quality constituents. Regression models were developed to establish relations between discretely sampled constituent concentrations and continuously measured physical properties to compute concentrations of those constituents of interest that are not easily measured in real time because of limitations in sensor technology and fiscal constraints. Regression models were published in 2006 that were based on data collected during 1997 through 2003. This report updates those models using discrete and continuous data collected during January 1999 through December 2009. Models also were developed for four new constituents, including additional nutrient species and indicator bacteria. In addition, a conversion factor of 0.68 was established to convert the Yellow Springs Instruments (YSI) model 6026 turbidity sensor measurements to the newer YSI model 6136 sensor at the North Fork Ninnescah River upstream from Cheney Reservoir site. Newly developed models and 14 years of hourly continuously measured data were used to calculate selected constituent concentrations and loads during January 1999 through December 2012. The water-quality information in this report is important to the city of Wichita because it allows the concentrations of many potential pollutants of interest to Cheney Reservoir, including nutrients and sediment, to be estimated in real time and characterized over conditions and time scales that would not be possible otherwise. In general, model forms and the amount of variance explained by the models were similar between the original and updated models. The amount of variance explained by the updated models changed by 10 percent or less relative to the original models. Total nitrogen, nitrate, organic nitrogen, E. coli bacteria, and total organic carbon models were newly developed for this report. Additional data collection over a wider range of hydrological conditions facilitated the development of these models. The nitrate model is particularly important because it allows for comparison to Cheney Reservoir Task Force goals. Mean hourly computed total suspended solids concentration during 1999 through 2012 was 54 milligrams per liter (mg/L). The total suspended solids load during 1999 through 2012 was 174,031 tons. On an average annual basis, the Cheney Reservoir Task Force runoff (550 mg/L) and long-term (100 mg/L) total suspended solids goals were never exceeded, but the base flow goal was exceeded every year during 1999 through 2012. Mean hourly computed nitrate concentration was 1.08 mg/L during 1999 through 2012. The total nitrate load during 1999 through 2012 was 1,361 tons. On an annual average basis, the Cheney Reservoir Task Force runoff (6.60 mg/L) nitrate goal was never exceeded, the long-term goal (1.20 mg/L) was exceeded only in 2012, and the base flow goal of 0.25 mg/L was exceeded every year.
Mean nitrate concentrations were higher during base flow than during runoff conditions, suggesting that groundwater sources are the main contributors of nitrate to the North Fork Ninnescah River above Cheney Reservoir. Mean hourly computed phosphorus concentration was 0.14 mg/L during 1999 through 2012. The total phosphorus load during 1999 through 2012 was 328 tons. On an average annual basis, the Cheney Reservoir Task Force runoff goal of 0.40 mg/L for total phosphorus was exceeded in 2002, the year with the largest yearly mean turbidity, and the long-term goal (0.10 mg/L) was exceeded in every year except 2011 and 2012, the years with the smallest mean streamflows. The total phosphorus base flow goal of 0.05 mg/L was exceeded every year. Given that base flow goals for total suspended solids, nitrate, and total phosphorus were exceeded every year regardless of hydrologic conditions, the established base flow goals are either unattainable or substantially more best management practices will need to be implemented to attain them. On an annual average basis, no discernible patterns were evident in total suspended sediment, nitrate, and total phosphorus concentrations or loads over time, in large part because of hydrologic variability. However, more rigorous statistical analyses are required to evaluate temporal trends. A more rigorous analysis of temporal trends would allow evaluation of watershed investments in best management practices.
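    The surrogate-regression idea underlying these models can be sketched compactly: regress log-transformed sample concentrations on a log-transformed continuous property, then retransform with a bias correction before computing real-time concentrations and loads. The Python sketch below uses invented data and a Duan-type smearing factor; the report's actual model forms and coefficients differ.

    import numpy as np

    turb = np.array([5.0, 12.0, 30.0, 80.0, 200.0, 450.0])   # sensor turbidity (FNU)
    tss  = np.array([8.0, 20.0, 45.0, 110.0, 260.0, 600.0])  # sampled TSS (mg/L)

    # Fit log10(TSS) = intercept + slope * log10(turbidity)
    slope, intercept = np.polyfit(np.log10(turb), np.log10(tss), 1)
    resid = np.log10(tss) - (intercept + slope * np.log10(turb))
    bcf = np.mean(10.0 ** resid)      # smearing bias-correction factor

    def tss_from_turbidity(t):
        """Compute TSS (mg/L) from a real-time turbidity reading."""
        return bcf * 10.0 ** (intercept + slope * np.log10(t))

    print(tss_from_turbidity(100.0))  # continuous record -> computed concentration
    # Loads then follow as concentration * streamflow, integrated over time.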

  10. Modeling Analysis of a Six-axis Force/Tactile Sensor for Robot Fingers and a Method for Decreasing Error

    NASA Astrophysics Data System (ADS)

    Luo, Minghua; Shimizu, Etsuro; Zhang, Feifei; Ito, Masanori

    This paper describes a six-axis force/tactile sensor for robot fingers and proposes a mathematical model of the sensor. With this model, the grasping force and its moments, as well as the touching position of a robot finger holding an object, can be calculated. A new sensor was fabricated based on this model, with the elastic sensing unit of the sensor made of a brazen plate, and a new compensation method for decreasing error is proposed. Furthermore, the performance of this sensor is examined. The test results show an approximate agreement between the theoretical input and the output of the sensor, and the performance of the new sensor is clearly better than that of the sensor without compensation.
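    A hedged sketch (Python) of the usual linear model behind multi-axis force/tactile sensors, in which the six-component wrench is a calibration matrix times the gauge signals and the matrix is estimated by least squares from known loads. This illustrates the general approach only; the paper's specific model and compensation method are not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)
    C_true = rng.normal(size=(6, 8))            # 6-axis wrench from 8 gauge channels

    # Calibration: apply known wrenches and record gauge outputs (with noise)
    W_known = rng.normal(size=(6, 40))          # 40 calibration loads
    V = np.linalg.pinv(C_true) @ W_known + 0.01 * rng.normal(size=(8, 40))

    # Least squares solves V^T C^T = W^T for the calibration matrix C
    C_est, *_ = np.linalg.lstsq(V.T, W_known.T, rcond=None)
    C_est = C_est.T

    # Recover a force/moment vector from a new set of gauge readings
    v_meas = np.linalg.pinv(C_true) @ np.array([1.0, 0, 0, 0, 0.2, 0])
    print(C_est @ v_meas)                       # approximately [1, 0, 0, 0, 0.2, 0]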

  11. Comparison of the Effectiveness of a Traditional Intermediate Algebra Course With That of a Less Rigorous Intermediate Algebra Course in Preparing Students for Success in a Subsequent Mathematics Course

    ERIC Educational Resources Information Center

    Sworder, Steven C.

    2007-01-01

    An experimental two-track intermediate algebra course was offered at Saddleback College, Mission Viejo, CA, between the Fall 2002 and Fall 2005 semesters. One track was modeled after the existing traditional California community college intermediate algebra course and the other track was a less rigorous intermediate algebra course in which the…

  12. On the Character and Mitigation of Atmospheric Noise in InSAR Time Series Analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Barnhart, W. D.; Fielding, E. J.; Fishbein, E.

    2013-12-01

    Time series analysis of interferometric synthetic aperture radar (InSAR) data, with its broad spatial coverage and ability to image regions that are sometimes very difficult to access, is a powerful tool for characterizing continental surface deformation and its temporal variations. With the impending launch of dedicated SAR missions such as Sentinel-1, ALOS-2, and the planned NASA L-band SAR mission, large volume data sets will allow researchers to further probe ground displacement processes with increased fidelity. Unfortunately, the precision of measurements in individual interferograms is impacted by several sources of noise, notably spatially correlated signals caused by path delays through the stratified and turbulent atmosphere and ionosphere. Spatial and temporal variations in atmospheric water vapor often introduce several to tens of centimeters of apparent deformation in the radar line-of-sight, correlated over short spatial scales (<10 km). Signals resulting from atmospheric path delays are particularly problematic because, like the subsidence and uplift signals associated with tectonic deformation, they are often spatially correlated with topography. In this talk, we provide an overview of the effects of spatially correlated tropospheric noise in individual interferograms and InSAR time series analysis, and we highlight where common assumptions about the temporal and spatial characteristics of tropospheric noise fail. Next, we discuss two classes of methods for mitigating the effects of tropospheric water vapor noise in InSAR time series analysis and single interferograms: noise estimation and characterization with independent observations from multispectral sensors such as MODIS and MERIS; and noise estimation and removal with weather models, multispectral sensor observations, and GPS. Each of these techniques can provide independent assessments of the contribution of water vapor in interferograms, but each technique also suffers from several pitfalls that we outline. The multispectral near-infrared (NIR) sensors provide high spatial resolution (~1 km) estimates of total column tropospheric water vapor by measuring the absorption of reflected solar illumination and may provide excellent estimates of wet delay. The Online Services for Correcting Atmosphere in Radar (OSCAR) project currently provides water vapor products through web services (http://oscar.jpl.nasa.gov). Unfortunately, such sensors require daytime and cloudless observations. Global and regional numerical weather models can provide an additional estimate of both the dry and wet atmospheric delays with spatial resolutions of 3-100 km and time scales of 1-3 hours, though these models are of lower accuracy than imaging observations and benefit from independent observations of atmospheric water vapor. Despite these issues, the integration of these techniques for InSAR correction and uncertainty estimation may contribute substantially to the reduction and rigorous characterization of uncertainty in InSAR time series analysis - helping to expand the range of tectonic displacements imaged with InSAR, to robustly constrain geophysical models, and to generate a priori assessments of satellite acquisition goals.
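    One common empirical mitigation can be shown in a few lines: estimate and remove a linear phase-versus-elevation trend from an unwrapped interferogram, which absorbs part of the stratified tropospheric delay. The Python sketch below applies this generic technique to synthetic data; it is not the OSCAR products described above.

    import numpy as np

    def remove_stratified_delay(phase, dem, mask=None):
        """phase: unwrapped interferometric phase (2D); dem: elevation (2D, m)."""
        if mask is None:
            mask = np.isfinite(phase) & np.isfinite(dem)
        # Fit phase = a * elevation + b over (ideally non-deforming) pixels
        a, b = np.polyfit(dem[mask].ravel(), phase[mask].ravel(), 1)
        return phase - (a * dem + b), a

    rng = np.random.default_rng(2)
    dem = rng.uniform(0.0, 2000.0, size=(100, 100))        # synthetic topography (m)
    phase = 0.002 * dem + rng.normal(0.0, 0.3, dem.shape)  # stratified delay + noise
    corrected, rate = remove_stratified_delay(phase, dem)
    print(rate)            # recovered ~0.002 rad/m phase-elevation gradient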

  13. Application of zonal model on indoor air sensor network design

    NASA Astrophysics Data System (ADS)

    Chen, Y. Lisa; Wen, Jin

    2007-04-01

    Growing concerns over the safety of the indoor environment have made the use of sensors ubiquitous. Sensors that detect chemical and biological warfare agents can offer early warning of dangerous contaminants. However, current sensor system design is informed more by intuition and experience than by systematic design. To develop a sensor system design methodology, a proper indoor airflow modeling approach is needed. Various indoor airflow modeling techniques, from complicated computational fluid dynamics approaches to simplified multi-zone approaches, exist in the literature. In this study, the effects of two airflow modeling techniques, the multi-zone modeling technique and the zonal modeling technique, on indoor air protection sensor system design are discussed. Common building attack scenarios, using a typical CBW agent, are simulated. Both multi-zone and zonal models are used to predict airflows and contaminant dispersion. A genetic algorithm is then applied to optimize the sensor location and quantity. Differences in the sensor system design resulting from the two airflow models are discussed for a typical office environment and a large hall environment.

  14. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
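    The fusion logic can be illustrated with a one-dimensional toy filter (Python): integrate the fast inertial rate between slow vision fixes and apply a Kalman correction whenever a vision measurement arrives. This is a deliberately minimal stand-in for the paper's extended Kalman filter and Allan-variance noise models; all rates and noise values are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, steps = 0.01, 1000                  # 100 Hz inertial, 10 Hz vision
    angle_true = 0.0
    angle_est, P = 0.0, 1.0                 # state estimate and its variance
    Q, R = 1e-4, 1e-2                       # process (gyro) and vision noise variances

    for k in range(steps):
        rate_true = np.sin(0.01 * k)                    # arbitrary true motion
        angle_true += rate_true * dt
        gyro = rate_true + 0.02 + rng.normal(0.0, 0.05) # noisy gyro with small bias
        angle_est += gyro * dt                          # high-rate prediction
        P += Q
        if k % 10 == 0:                                 # vision fix every 10 steps
            z = angle_true + rng.normal(0.0, 0.1)
            K = P / (P + R)                             # Kalman gain
            angle_est += K * (z - angle_est)            # correct the drift
            P *= (1.0 - K)

    print(abs(angle_est - angle_true))                  # small residual error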

  15. Sandhoff Disease

    MedlinePlus

    ... virus-delivered gene therapy seen in an animal model of Tay-Sachs and Sandhoff diseases for use ...

  16. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud

    PubMed Central

    Dinh, Thanh; Kim, Younghan

    2016-01-01

    This paper proposes an efficient interactive model that enables the sensor-cloud to provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required of constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud. PMID:27367689

  17. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud.

    PubMed

    Dinh, Thanh; Kim, Younghan

    2016-06-28

    This paper proposes an efficient interactive model that enables the sensor-cloud to provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required of constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud.

  18. Resilient Sensor Networks with Spatiotemporal Interpolation of Missing Sensors: An Example of Space Weather Forecasting by Multiple Satellites

    PubMed Central

    Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru

    2016-01-01

    This paper attempts to construct a resilient sensor network model with an example of space weather forecasting. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation because an operations team needs to make decisions about providing its satellite service. The proposed model is resilient to failures of sensors or missing data during satellite operation. In the proposed model, the missing data of a sensor are interpolated from other associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues a diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor failures caused by hardware faults or suspensions for technical reasons. PMID:27092508

  19. Resilient Sensor Networks with Spatiotemporal Interpolation of Missing Sensors: An Example of Space Weather Forecasting by Multiple Satellites.

    PubMed

    Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru

    2016-04-15

    This paper attempts to construct a resilient sensor network model with an example of space weather forecasting. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation because an operations team needs to make decisions about providing its satellite service. The proposed model is resilient to failures of sensors or missing data during satellite operation. In the proposed model, the missing data of a sensor are interpolated from other associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues a diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor failures caused by hardware faults or suspensions for technical reasons.
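    The neighbor-based interpolation idea can be sketched with simple inverse-distance weighting (Python below), which is one plausible spatial interpolator for a failed sensor's reading; the paper's dynamic relational network is more general than this illustration.

    import numpy as np

    def interpolate_missing(positions, values, missing_pos, power=2.0):
        """Estimate a failed sensor's value from working neighbors."""
        positions = np.asarray(positions, float)
        values = np.asarray(values, float)
        d = np.linalg.norm(positions - np.asarray(missing_pos, float), axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power      # closer neighbors weigh more
        return float(w @ values / w.sum())

    # Three working sensors and one failed sensor between them
    working_pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    working_val = [10.0, 14.0, 12.0]
    print(interpolate_missing(working_pos, working_val, (0.4, 0.4)))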

  20. Development of esMOCA Biomechanic, Motion Capture Instrumentation for Biomechanics Analysis

    NASA Astrophysics Data System (ADS)

    Arendra, A.; Akhmad, S.

    2018-01-01

    This study aims to build a motion capture instrument using inertial measurement unit sensors to assist in the analysis of biomechanics. The sensors used are accelerometers and gyroscopes. Orientation estimation is performed by digital motion processing in each sensor node. Nine sensor nodes are attached to the upper limbs and connected to a PC via a wireless sensor network. Kinematic and inverse dynamic models of the upper limb were developed in Simulink SimMechanics. The kinematic model receives streaming data from the sensor nodes mounted on the limbs and outputs the pose of each limb, which is visualized on a display. The inverse dynamic model outputs the reaction force and reaction moment of each joint based on the limb motion input. Validation of the Simulink model against a mathematical model from mechanical analysis showed results that did not differ significantly.
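    The on-node orientation estimation that such digital motion processing typically performs can be illustrated with a complementary filter fusing gyro integration with the accelerometer's gravity reference. The Python sketch below uses assumed axis conventions and gains; the node firmware itself is not described in the abstract.

    import math

    def complementary_filter(gyro_rates, accels, dt=0.01, alpha=0.98):
        """Estimate pitch (rad) from gyro pitch rate and a 3-axis accelerometer."""
        pitch = 0.0
        history = []
        for g, (ax, ay, az) in zip(gyro_rates, accels):
            pitch_acc = math.atan2(-ax, math.hypot(ay, az))  # gravity direction
            # Blend fast gyro integration with the slow, drift-free gravity cue
            pitch = alpha * (pitch + g * dt) + (1 - alpha) * pitch_acc
            history.append(pitch)
        return history

    # Static node tilted ~30 degrees: gravity reads ax = -4.905, az = 8.496 m/s^2
    n = 500
    est = complementary_filter([0.0] * n, [(-4.905, 0.0, 8.496)] * n)
    print(math.degrees(est[-1]))   # converges toward ~30 degrees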

  1. An assessment of clinical chemical sensing technology for potential use in space station health maintenance facility

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A Health Maintenance Facility is currently under development for space station application which will provide capabilities equivalent to those found on Earth. This final report addresses the study of alternate means of diagnosis and evaluation of impaired tissue perfusion in a microgravity environment. Chemical data variables related to the dysfunction and the sensors required to measure these variables are reviewed. A technology survey outlines the ability of existing systems to meet these requirements. The rigorous testing to which the candidate sensing system was subjected to determine its suitability is then described. Recommendations for follow-on activities are included that would make the commercial system more appropriate for space station applications.

  2. Color regeneration from reflective color sensor using an artificial intelligent technique.

    PubMed

    Saracoglu, Ömer Galip; Altural, Hayriye

    2010-01-01

    A low-cost optical sensor based on reflective color sensing is presented. Artificial neural network models are used to improve the color regeneration from the sensor signals. Analog voltages of the sensor are successfully converted to RGB colors. The artificial intelligence models presented in this work enable color regeneration from the analog outputs of the color sensor. In addition, inverse modeling supported by an intelligent technique enables the sensor probe to be used as a colorimetric sensor that relates color changes to analog voltages.

  3. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows one to determine when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.
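    One standard normative benchmark of this kind is an independent-detection ("probability summation") prediction computed from single-sensor performance, shown in the hedged Python sketch below; the paper's set of integration models is broader than this single example, and the tolerance value is an invented parameter.

    def probability_summation(p_sensor_a, p_sensor_b):
        """Predicted detection rate if cues from the two sensors are used
        independently: P = 1 - (1 - Pa) * (1 - Pb)."""
        return 1.0 - (1.0 - p_sensor_a) * (1.0 - p_sensor_b)

    def classify_fusion(p_fused, p_a, p_b, tol=0.02):
        """Compare observed fused-display performance against the benchmark."""
        best_single = max(p_a, p_b)
        predicted = probability_summation(p_a, p_b)
        if p_fused < best_single - tol:
            return "interference (worse than best single sensor)"
        if p_fused < predicted - tol:
            return "sub-optimal integration"
        if p_fused <= predicted + tol:
            return "optimal (matches model prediction)"
        return "super-optimal (emergent features in the fused image)"

    print(probability_summation(0.6, 0.7))   # 0.88 predicted detection rate
    print(classify_fusion(0.91, 0.6, 0.7))   # super-optimal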

  4. Wernicke-Korsakoff Syndrome

    MedlinePlus

    ... modulation of certain nerve cells in a rodent model of amnesia produced by thiamine deficiency. The ...

  5. Interval-parameter chance-constraint programming model for end-of-life vehicles management under rigorous environmental regulations.

    PubMed

    Simic, Vladimir

    2016-06-01

    As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on the management of ELVs is necessary in order to tackle this important environmental challenge more successfully. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicle management under rigorous environmental regulations. The proposed model can incorporate various kinds of uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. In particular, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potential and applicability of the proposed model. Various constraint-violation probability levels are examined in detail. Influences of parameter uncertainty on model solutions are thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating the system constraints. The formulated model is able to tackle a hard ELV management problem under uncertainty. The presented model has advantages in providing a basis for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans.

  6. Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere

    NASA Technical Reports Server (NTRS)

    Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.

    1975-01-01

    The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.

  7. Fast synthesis of topographic mask effects based on rigorous solutions

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; Deng, Zhijie; Shiely, James

    2007-10-01

    Topographic mask effects can no longer be ignored at technology nodes of 45 nm, 32 nm and beyond. As feature sizes become comparable to the mask topographic dimensions and the exposure wavelength, the popular thin mask model breaks down, because the mask transmission no longer follows the layout. A reliable mask transmission function has to be derived from the Maxwell equations. Unfortunately, rigorous solutions of the Maxwell equations are only manageable for limited field sizes, and impractical for full-chip optical proximity correction (OPC) due to the prohibitive runtime. Approximation algorithms are in demand to achieve a balance between acceptable computation time and tolerable errors. In this paper, a fast algorithm is proposed and demonstrated to model topographic mask effects for OPC applications. The ProGen Topographic Mask (POTOMAC) model synthesizes the mask transmission functions out of small-sized Maxwell solutions from a finite-difference time-domain (FDTD) engine, an industry-leading rigorous simulator of topographic mask effects from SOLID-E. The integral framework presents a seamless solution to the end user. Preliminary results indicate the overhead introduced by POTOMAC is contained within the same order of magnitude in comparison to the thin mask approach.

  8. A METHODOLOGY FOR ESTIMATING UNCERTAINTY OF A DISTRIBUTED HYDROLOGIC MODEL: APPLICATION TO POCONO CREEK WATERSHED

    EPA Science Inventory

    Use of distributed hydrologic and water quality models for watershed management and sustainability studies should be accompanied by rigorous model uncertainty analysis. However, the use of complex watershed models primarily follows the traditional {calibrate/validate/predict}...

  9. Mechanical properties of frog skeletal muscles in iodoacetic acid rigor.

    PubMed Central

    Mulvany, M J

    1975-01-01

    1. Methods have been developed for describing the length:tension characteristics of frog skeletal muscles which go into rigor at 4 degrees C following iodoacetic acid poisoning either in the presence of Ca2+ (Ca-rigor) or its absence (Ca-free-rigor). 2. Such rigor muscles showed less resistance to slow stretch (slow rigor resistance) than to fast stretch (fast rigor resistance). The slow and fast rigor resistances of Ca-free-rigor muscles were much lower than those of Ca-rigor muscles. 3. The slow rigor resistance of Ca-rigor muscles was proportional to the amount of overlap between the contractile filaments present when the muscles were put into rigor. 4. Withdrawing Ca2+ from Ca-rigor muscles (induced-Ca-free rigor) reduced their slow and fast rigor resistances. Readdition of Ca2+ (but not Mg2+, Mn2+ or Sr2+) reversed the effect. 5. The slow and fast rigor resistances of Ca-rigor muscles (but not of Ca-free-rigor muscles) decreased with time. 6. The sarcomere structure of Ca-rigor and induced-Ca-free rigor muscles stretched by 0.2 l0 was destroyed in proportion to the amount of stretch, but the lengths of the remaining intact sarcomeres were essentially unchanged. This suggests that there had been a successive yielding of the weakest sarcomeres. 7. The difference between the slow and fast rigor resistances and the effect of calcium on these resistances are discussed in relation to possible variations in the strength of crossbridges between the thick and thin filaments. PMID:1082023

  10. Sensor Location Problem Optimization for Traffic Network with Different Spatial Distributions of Traffic Information.

    PubMed

    Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian

    2016-10-27

    To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with the different sensor information credibility functions. Next, extended models and algorithms are developed to obtain analytic results. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of model parameters that represent the physical conditions of sensors and roads.

  11. Sensor Location Problem Optimization for Traffic Network with Different Spatial Distributions of Traffic Information

    PubMed Central

    Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian

    2016-01-01

    To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with the different sensor information credibility functions. Next, extended models and algorithms are developed to obtain analytic results. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of model parameters that represent the physical conditions of sensors and roads. PMID:27801794
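    The flavor of such a benefit formulation can be illustrated as follows (Python): assume credibility decays exponentially with distance from the nearest sensor, integrate it along the segment, subtract a per-sensor cost, and pick the sensor count that maximizes net benefit. The decay constant and cost are invented parameters, not the paper's calibrated values.

    import numpy as np

    def net_benefit(n, length_km=20.0, decay_per_km=0.5, cost_per_sensor=0.8):
        x = np.linspace(0.0, length_km, 2001)
        sensors = (np.arange(n) + 0.5) * length_km / n        # even spacing
        d = np.abs(x[:, None] - sensors[None, :]).min(axis=1) # nearest sensor
        credibility = np.exp(-decay_per_km * d)               # info credibility
        return np.trapz(credibility, x) - cost_per_sensor * n

    n_best = max(range(1, 41), key=net_benefit)
    print(n_best, net_benefit(n_best))   # benefit rises, saturates, then cost wins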

  12. A Rigorous Investigation on the Ground State of the Penson-Kolb Model

    NASA Astrophysics Data System (ADS)

    Yang, Kai-Hua; Tian, Guang-Shan; Han, Ru-Qi

    2003-05-01

    By using either numerical calculations or analytical methods, such as the bosonization technique, the ground state of the Penson-Kolb model has been previously studied by several groups. Some physicists argued that, as far as the existence of superconductivity in this model is concerned, it is canonically equivalent to the negative-U Hubbard model. However, others did not agree. In the present paper, we investigate this model by an independent and rigorous approach. We show that the ground state of the Penson-Kolb model is nondegenerate and has a nonvanishing overlap with the ground state of the negative-U Hubbard model. Furthermore, we also show that the ground states of both models have the same good quantum numbers and may have superconducting long-range order at the same momentum q = 0. Our results support the equivalence between these models.

  13. Widely tunable Fabry-Perot filter based MWIR and LWIR microspectrometers

    NASA Astrophysics Data System (ADS)

    Ebermann, Martin; Neumann, Norbert; Hiller, Karla; Gittler, Elvira; Meinig, Marco; Kurth, Steffen

    2012-06-01

    As is generally known, miniature infrared spectrometers have great potential, e.g. for process and environmental analytics or in medical applications. Many efforts are being made to shrink conventional spectrometers, such as FTIR or grating based devices. A more rigorous approach to miniaturization is the use of MEMS technologies. Based on an established design for the MWIR, new MEMS Fabry-Perot filters and sensors with expanded spectral ranges in the LWIR have been developed. The range 5.5 - 8 μm is particularly suited for the analysis of liquids. A dual-band sensor, which can be simultaneously tuned from 4 - 5 μm and 8 - 11 μm for the measurement of anesthetics and carbon dioxide, has also been developed. A new material system is used to reduce internal stress in the reflector layer stack. Good results in terms of finesse (up to 60) and transmittance (up to 80%) could be demonstrated. The hybrid integration of the filter in a pyroelectric detector results in very compact, robust and cost-effective microspectrometers. FP filters with two moveable reflectors instead of only one significantly reduce the acceleration sensitivity and actuation voltage.
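    The behavior of such filters follows the textbook Fabry-Perot (Airy) transmission, which links mirror reflectance to finesse and gap tuning to the passband position. A small illustrative sketch (Python, with assumed reflectance and gap values, not the device's actual parameters):

    import numpy as np

    R = 0.95                                    # mirror reflectance (assumed)
    finesse = np.pi * np.sqrt(R) / (1.0 - R)    # reflectance-limited finesse, ~61

    def airy_transmission(wavelength_um, gap_um, R=R, A=0.0):
        """Ideal FP transmission; A = mirror absorptance/scatter loss."""
        delta = 4.0 * np.pi * gap_um / wavelength_um        # round-trip phase
        return (1.0 - R - A) ** 2 / ((1.0 - R) ** 2
                                     + 4.0 * R * np.sin(delta / 2.0) ** 2)

    wl = np.linspace(5.5, 8.0, 500)             # LWIR analysis band (micrometers)
    for gap in (3.0, 3.5):                      # electrostatically tuned gaps
        peak = wl[np.argmax(airy_transmission(wl, gap))]
        print(f"gap {gap} um -> passband peak near {peak:.2f} um")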

  14. Overview of the 2015 Algodones Sand Dunes field campaign to support sensor intercalibration

    NASA Astrophysics Data System (ADS)

    McCorkel, Joel; Bachmann, Charles M.; Coburn, Craig; Gerace, Aaron; Leigh, Larry; Czapla-Myers, Jeff; Helder, Dennis; Cook, Bruce

    2018-01-01

    Several sites from around the world are being used operationally and are suitable for vicarious calibration of space-borne imaging platforms. However, due to the remoteness of these sites (e.g., Libya 4), a rigorous characterization of the landscape is not feasible, limiting their utility for sensor intercalibration efforts. Due to its accessibility and similarities to Libya 4, the Algodones Sand Dunes System in California, USA, was identified as a potentially attractive intercalibration site for space-borne reflective instruments such as Landsat. In March 2015, a 4-day field campaign was conducted to develop an initial characterization of Algodones with a primary goal of assessing its intercalibration potential. Five organizations from the US and Canada collaborated to collect both active and passive airborne image data, spatial and temporal measurements of the spectral bidirectional reflectance distribution function, and in-situ sand samples from several locations across the Algodones system. The collection activities conducted to support the campaign goal are summarized, including a summary of all instrumentation used, the data collected, and the experiments performed in an effort to characterize the Algodones site.

  15. A model-based reasoning approach to sensor placement for monitorability

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Doyle, Richard; Homemdemello, Luiz

    1992-01-01

    An approach is presented to evaluating sensor placements to maximize monitorability of the target system while minimizing the number of sensors. The approach uses a model of the monitored system to score potential sensor placements on the basis of four monitorability criteria. The scores can then be analyzed to produce a recommended sensor set. An example from our NASA application domain is used to illustrate our model-based approach to sensor placement.

  16. Chaparral Model 60 Infrasound Sensor Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slad, George William; Merchant, Bion J.

    2016-03-01

    Sandia National Laboratories has tested and evaluated an infrasound sensor, the Model 60 manufactured by Chaparral Physics, a division of the Geophysical Institute of the University of Alaska, Fairbanks. The purpose of the infrasound sensor evaluation was to determine the sensor's measured sensitivity, transfer function, power, self-noise, dynamic range, and seismic sensitivity. The Model 60 infrasound sensor is a new sensor developed by Chaparral Physics, intended to be a small, rugged sensor usable under more flexible application conditions.

  17. Estimating Snow Water Equivalent over the American River in the Sierra Nevada Basin Using Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Welch, S. C.; Kerkez, B.; Glaser, S. D.; Bales, R. C.; Rice, R.

    2011-12-01

    We have designed a basin-scale (>2000 km2) instrument cluster, made up of 20 local-scale (1-km footprint) wireless sensor networks (WSNs), to measure patterns of snow depth and snow water equivalent (SWE) across the main snowmelt producing area within the American River basin. Each of the 20 WSNs has on the order of 25 wireless nodes, with over 10 nodes actively sensing snow depth, and thus snow accumulation and melt. When combined with existing snow density measurements and full-basin satellite snowcover data, these measurements are designed to provide dense ground-truth snow properties for research and real-time SWE for water management. The design of this large-scale network is based on rigorous testing of previous, smaller-scale studies, permitting the development of methods to significantly and efficiently scale up network operations. Recent advances in WSN technology have resulted in a modularized strategy that permits rapid future network deployment. To select network and sensor locations, various sensor placement approaches were compared, including random placement, placement of WSNs in locations that have captured the historical basin mean, as well as a placement algorithm leveraging the covariance structure of the SWE distribution. We show that the optimal network locations do not follow a uniform grid, but rather follow strategic patterns based on physiographic terrain parameters. Uncertainty estimates are also provided to assess the confidence in the placement approach. To ensure near-optimal coverage of the full basin, we validated each placement approach with a multi-year record of SWE derived from reconstruction of historical satellite measurements.

  18. (U) An Analytic Study of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-16

    We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. Errors exceeding 10-15% would require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.
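
    The qualitative claim, exact inference for instantaneous creation and a small overestimate otherwise, can be illustrated with a toy Monte-Carlo calculation. The construction below (standoff distance, speed range, exponential production-time distribution) is my own illustration, not the paper's derivation.

        # A particle of mass m and speed v launched at time t0 arrives at
        # t_a = t0 + L/v; the standard analysis assigns it the apparent
        # speed L/t_a, so its inferred mass is m*(1 + v*t0/L) >= m.
        import numpy as np

        rng = np.random.default_rng(1)
        L = 0.02                       # surface-to-sensor standoff [m] (assumed)
        n = 100_000
        v = rng.uniform(1e3, 3e3, n)   # assumed ejecta speeds [m/s]

        for tau in [0.0, 10e-9, 100e-9, 1e-6]:  # production timescale [s]
            t0 = rng.exponential(tau, n) if tau else np.zeros(n)
            ratio = np.mean(1.0 + v * t0 / L)   # inferred mass / true mass
            print(f"tau = {tau:8.1e} s  ->  inferred/true = {ratio:.4f}")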

  19. Modeling Carbon-Black/Polymer Composite Sensors

    PubMed Central

    Lei, Hua; Pitt, William G.; McGrath, Lucas K.; Ho, Clifford K.

    2012-01-01

    Conductive polymer composite sensors have shown great potential for identifying gaseous analytes. To more thoroughly understand the physical and chemical mechanisms of this type of sensor, a mathematical model was developed by combining two sub-models, a conductivity model and a thermodynamic model, which together give a relationship between the vapor concentration of the analyte(s) and the change in the sensor signals. In this work, 64 chemiresistors representing eight different carbon concentrations (8–60 vol% carbon) were constructed by depositing thin films of a carbon-black/polyisobutylene composite onto concentric spiral platinum electrodes on a silicon chip. The responses of the sensors were measured in dry air and at various vapor pressures of toluene and trichloroethylene. Three parameters in the conductivity model were determined by fitting the experimental data. It was shown that by applying this model, the sensor responses can be adequately predicted for given vapor pressures; furthermore, the analyte vapor concentrations can be estimated from the sensor responses. This model will guide improvements in the design and fabrication of conductive polymer composite sensors for detecting and identifying mixtures of organic vapors. PMID:22518071
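
    As a rough illustration of how a conductivity sub-model and a thermodynamic sub-model combine, the sketch below pairs a generic percolation conductivity law with a linear sorption/swelling stand-in. All parameter values and the linear sorption assumption are invented; the paper's fitted model is not reproduced here.

        # Generic percolation + linear-sorption sketch (a stand-in for the
        # paper's two sub-models, not a reproduction of them).
        import numpy as np

        phi_c, t_exp = 0.15, 2.0   # percolation threshold, exponent (assumed)
        phi_0 = 0.25               # dry carbon volume fraction (assumed)
        K = 4.0                    # volumetric swelling per unit p/p_sat (assumed)

        def relative_resistance(p_rel):
            # sensor response dR/R0 at relative vapor pressure p_rel
            swelling = K * p_rel                  # thermodynamic stand-in
            phi = phi_0 / (1.0 + swelling)        # swelling dilutes carbon
            sigma = (phi - phi_c) ** t_exp        # percolation conductivity
            sigma0 = (phi_0 - phi_c) ** t_exp
            return sigma0 / sigma - 1.0           # dR/R0 = R/R0 - 1

        for p in [0.0, 0.02, 0.05, 0.1]:
            print(f"p/p_sat = {p:.2f}  ->  dR/R0 = {relative_resistance(p):.3f}")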

  20. Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction

    PubMed Central

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates of web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
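
    Allan variance analysis of the kind used here to identify inertial-sensor noise terms can be computed in a few lines. The sketch below uses synthetic gyro data with white noise plus a rate random walk; the sampling rate, noise levels, and the non-overlapped estimator are assumptions.

        # Minimal non-overlapped Allan variance (not the paper's code).
        import numpy as np

        def allan_variance(x, fs, taus_samples):
            # Allan variance of x (sampled at fs Hz) for averaging
            # windows given as numbers of samples
            out = []
            for m in taus_samples:
                n = len(x) // m
                means = x[: n * m].reshape(n, m).mean(axis=1)  # cluster means
                out.append((m / fs, 0.5 * np.mean(np.diff(means) ** 2)))
            return out

        fs = 100.0
        rng = np.random.default_rng(2)
        gyro = 0.02 * rng.standard_normal(200_000)              # white noise
        gyro += np.cumsum(5e-6 * rng.standard_normal(200_000))  # bias drift
        for tau, avar in allan_variance(gyro, fs, [10, 100, 1000, 10_000]):
            print(f"tau = {tau:7.2f} s   Allan deviation = {np.sqrt(avar):.3e}")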

  1. Radargrammetric DSM generation in mountainous areas through adaptive-window least squares matching constrained by enhanced epipolar geometry

    NASA Astrophysics Data System (ADS)

    Dong, Yuting; Zhang, Lu; Balz, Timo; Luo, Heng; Liao, Mingsheng

    2018-03-01

    Radargrammetry is a powerful tool to construct digital surface models (DSMs), especially in heavily vegetated and mountainous areas where SAR interferometry (InSAR) technology suffers from decorrelation problems. In radargrammetry, the most challenging step is to produce an accurate disparity map through massive image matching, from which terrain height information can be derived using a rigorous sensor orientation model. However, precise stereoscopic SAR (StereoSAR) image matching is a very difficult task in mountainous areas due to the presence of speckle noise and dissimilar geometric/radiometric distortions. In this article, an adaptive-window least squares matching (AW-LSM) approach with an enhanced epipolar geometric constraint is proposed to robustly identify homologous points after compensation for radiometric discrepancies and geometric distortions. The matching procedure consists of two stages. In the first stage, the right image is re-projected into the left image space to generate epipolar images using rigorous imaging geometries enhanced with elevation information extracted from prior DEM data (e.g., SRTM) instead of the mean height of the mapped area. Consequently, the dissimilarities in geometric distortions between the left and right images are largely reduced, and the residual disparity corresponds to the height difference between the true ground surface and the prior DEM. In the second stage, massive per-pixel matching between StereoSAR epipolar images identifies the residual disparity. To ensure the reliability and accuracy of the matching results, we develop an iterative matching scheme in which classic cross-correlation matching is used to obtain initial results, followed by least squares matching (LSM) to refine them. An adaptive search-window resizing strategy is adopted during the dense matching step to help find correct matches. The feasibility and effectiveness of the proposed approach are demonstrated using Stripmap and Spotlight mode TerraSAR-X stereo data pairs covering Mount Song in central China. Experimental results show that the proposed method can provide a robust and effective matching tool for radargrammetry in mountainous areas.
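
    A bare-bones version of the two-stage matching idea might look like the following sketch: integer disparity from cross-correlation along the epipolar line, which least squares matching would then refine to sub-pixel precision. The synthetic shifted image, window sizes, and search range are invented; the authors' AW-LSM implementation is far more elaborate.

        # Integer-disparity search by normalized cross-correlation along an
        # epipolar row (illustrative; LSM refinement would follow).
        import numpy as np

        def ncc(a, b):
            a, b = a - a.mean(), b - b.mean()
            return float((a * b).sum() /
                         (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def match_along_epipolar(left, right, row, col, half=7, search=20):
            # epipolar resampling puts conjugate points on the same row
            tpl = left[row - half: row + half + 1, col - half: col + half + 1]
            best_c, best_s = None, -2.0
            for c in range(col - search, col + search + 1):
                win = right[row - half: row + half + 1, c - half: c + half + 1]
                if win.shape != tpl.shape:
                    continue
                s = ncc(tpl, win)
                if s > best_s:
                    best_c, best_s = c, s
            return best_c, best_s   # integer disparity = best_c - col

        rng = np.random.default_rng(3)
        left = rng.random((64, 128))
        right = np.roll(left, 6, axis=1) + 0.05 * rng.random((64, 128))
        c, s = match_along_epipolar(left, right, 32, 50)
        print("disparity:", c - 50, " ncc:", round(s, 3))   # expect 6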

  2. Sensor trustworthiness in uncertain time varying stochastic environments

    NASA Astrophysics Data System (ADS)

    Verma, Ajay; Fernandes, Ronald; Vadakkeveedu, Kalyan

    2011-06-01

    Persistent surveillance applications require unattended sensors deployed in remote regions to track and monitor some physical stimulus of interest that can be modeled as the output of a time-varying stochastic process. However, the accuracy or trustworthiness of the information received through a remote, unattended sensor and sensor network cannot be readily assumed, since sensors may get disabled, corrupted, or even compromised, resulting in unreliable information. The aim of this paper is to develop an information-theoretic metric to determine sensor trustworthiness from the sensor data in an uncertain and time-varying stochastic environment. We show an information-theoretic determination of sensor data trustworthiness using an adaptive stochastic reference sensor model that tracks the sensor performance for the time-varying physical feature and provides a baseline against which the observed sensor output is compared and analyzed. We present an approach in which relative entropy is used for reference model adaptation and for determining the divergence of the sensor signal from the estimated reference baseline. We show that KL-divergence is a useful metric for detecting sensor failures or sensor malice of various types.
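
    A minimal version of the divergence test might histogram recent sensor output and compare it against the reference model's output distribution, flagging the sensor when the KL divergence exceeds a threshold. The distributions, bin edges, and failure mode below are invented for illustration.

        # KL divergence between a sensor's recent output histogram and an
        # (assumed) reference-model output distribution.
        import numpy as np

        def kl_divergence(p, q, eps=1e-12):
            p = np.asarray(p, float) + eps
            q = np.asarray(q, float) + eps
            p, q = p / p.sum(), q / q.sum()
            return float(np.sum(p * np.log(p / q)))

        def histogram(x, edges):
            h, _ = np.histogram(x, bins=edges)
            return h

        rng = np.random.default_rng(4)
        edges = np.linspace(-5, 5, 41)
        reference = histogram(rng.normal(0, 1, 5000), edges)  # baseline model
        healthy = histogram(rng.normal(0, 1, 500), edges)     # tracking sensor
        stuck = histogram(np.full(500, 2.3), edges)           # stuck-at fault
        print("healthy D_KL:", round(kl_divergence(healthy, reference), 3))
        print("stuck   D_KL:", round(kl_divergence(stuck, reference), 3))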

  3. Virtual sensors for robust on-line monitoring (OLM) and Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, Ramakrishna; Lerchen, Megan E.; Ramuhalli, Pradeep

    Unscheduled shutdown of nuclear power facilities for recalibration and replacement of faulty sensors can be expensive and disruptive to grid management. In this work, we present virtual (software) sensors that can replace a faulty physical sensor for a short duration, thus allowing recalibration to be safely deferred to a later time. The virtual sensor model uses a Gaussian process model to process input data from redundant and other nearby sensors. Predicted data include uncertainty bounds covering spatial association uncertainty and measurement noise and error. Using data from an instrumented cooling water flow loop testbed, the virtual sensor model has predicted correct sensor measurements and the associated error corresponding to a faulty sensor.
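
    A minimal sketch of the idea, assuming scikit-learn's Gaussian process regressor rather than the authors' implementation: regress the failed sensor's historical readings on redundant and nearby channels, then predict with uncertainty bounds. All data here are synthetic.

        # Virtual sensor as GP regression on nearby channels (illustrative).
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(5)
        nearby = rng.uniform(20, 80, (200, 3))                # healthy channels
        truth = 0.5 * nearby[:, 0] + 0.3 * nearby[:, 1] + 5.0
        readings = truth + rng.normal(0, 0.5, 200)            # pre-fault record

        gp = GaussianProcessRegressor(
            kernel=RBF(length_scale=30.0) + WhiteKernel(0.25),
            normalize_y=True).fit(nearby, readings)

        query = rng.uniform(20, 80, (5, 3))    # current nearby-sensor values
        mean, std = gp.predict(query, return_std=True)
        for m, s in zip(mean, std):
            print(f"virtual sensor: {m:6.2f} +/- {2 * s:.2f}")  # ~95% band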

  4. A Low-Signal-to-Noise-Ratio Sensor Framework Incorporating Improved Nighttime Capabilities in DIRSIG

    NASA Astrophysics Data System (ADS)

    Rizzuto, Anthony P.

    When designing new remote sensing systems, it is difficult to make apples-to-apples comparisons between designs because of the number of sensor parameters that can affect the final image. Using synthetic imagery and a computer sensor model allows for comparisons to be made between widely different sensor designs or between competing design parameters. Little work has been done in fully modeling low-SNR systems end-to-end for these types of comparisons. Currently DIRSIG has limited capability to accurately model nighttime scenes under new moon conditions or near large cities. An improved DIRSIG scene modeling capability is presented that incorporates all significant sources of nighttime radiance, including new models for urban glow and airglow, both taken from the astronomy community. A low-SNR sensor modeling tool is also presented that accounts for sensor components and noise sources to generate synthetic imagery from a DIRSIG scene. The various sensor parameters that affect SNR are discussed, and example imagery is shown with the new sensor modeling tool. New low-SNR detectors have recently been designed and marketed for remote sensing applications. A comparison of system parameters for a state-of-the-art low-SNR sensor is discussed, and a sample design trade study is presented for a hypothetical scene and sensor.
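
    A simple radiometric SNR budget of the kind such a trade study rests on (signal electrons against shot noise, dark current, and read noise) is sketched below. The flux, quantum efficiency, and noise figures are invented placeholders, not DIRSIG parameters.

        # Standard detector SNR budget with assumed values.
        import numpy as np

        def snr(photon_flux, qe, t_int, dark_rate, read_noise):
            # photon_flux [ph/s/pixel], qe [e-/ph], t_int [s],
            # dark_rate [e-/s/pixel], read_noise [e- rms]
            signal = photon_flux * qe * t_int
            noise = np.sqrt(signal + dark_rate * t_int + read_noise ** 2)
            return signal / noise

        # new-moon scene: very low flux; trade integration time against SNR
        for t in [0.01, 0.1, 1.0]:
            print(f"t_int = {t:5.2f} s  ->  SNR = {snr(2e4, 0.6, t, 50.0, 3.0):.1f}")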

  5. Agricultural model intercomparison and improvement project: Overview of model intercomparisons

    USDA-ARS?s Scientific Manuscript database

    Improvement of crop simulation models to better estimate growth and yield is one of the objectives of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The overall goal of AgMIP is to provide an assessment of crop models through rigorous intercomparisons and evaluate future clim...

  6. The Importance of Post-Launch, On-Orbit Absolute Radiometric Calibration for Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Kuester, M. A.

    2015-12-01

    Remote sensing is a powerful tool for monitoring changes on the surface of the Earth at a local or global scale. The use of data sets from different sensors across many platforms, or even a single sensor over time, can bring a wealth of information when exploring anthropogenic changes to the environment. For example, variations in crop yield and health for a specific region can be detected by observing changes in the spectral signature of the particular species under study. However, changes in the atmosphere, sun illumination and viewing geometries during image capture can result in inconsistent image data, hindering automated information extraction. Additionally, an incorrect spectral radiometric calibration will lead to false or misleading results. It is therefore critical that the data being used are normalized and calibrated on a regular basis to ensure that physically derived variables are as close to truth as is possible. Although most earth observing sensors are well-calibrated in a laboratory prior to launch, a change in the radiometric response of the system is inevitable due to thermal, mechanical or electrical effects caused during the rigors of launch or by the space environment itself. Outgassing and exposure to ultra-violet radiation will also have an effect on the sensor's filter responses. Pre-launch lamps and other laboratory calibration systems can also fall short in representing the actual output of the Sun. Differences in science variables derived using pre- versus post-launch calibration will be presented for example cases (e.g., geology, agriculture) using DigitalGlobe's WorldView-3 super-spectral sensor, with bands in the visible and near infrared as well as in the shortwave infrared. Important defects caused by an incomplete (i.e., pre-launch only) calibration will be discussed using validation data where available. In addition, the benefits of using a well-validated surface reflectance product will be presented. DigitalGlobe is committed to providing ongoing assessment of the radiometric performance of our sensors, which allows customers to get the most out of our extensive multi-sensor constellation.

  7. Data-driven Modeling of Metal-oxide Sensors with Dynamic Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Gosangi, Rakesh; Gutierrez-Osuna, Ricardo

    2011-09-01

    We present a data-driven probabilistic framework to model the transient response of MOX sensors modulated with a sequence of voltage steps. Analytical models of MOX sensors are usually built based on the physico-chemical properties of the sensing materials. Although building these models provides an insight into the sensor behavior, they also require a thorough understanding of the underlying operating principles. Here we propose a data-driven approach to characterize the dynamical relationship between sensor inputs and outputs. Namely, we use dynamic Bayesian networks (DBNs), probabilistic models that represent temporal relations between a set of random variables. We identify a set of control variables that influence the sensor responses, create a graphical representation that captures the causal relations between these variables, and finally train the model with experimental data. We validated the approach on experimental data in terms of predictive accuracy and classification performance. Our results show that DBNs can accurately predict the dynamic response of MOX sensors, as well as capture the discriminatory information present in the sensor transients.
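
    As a drastically simplified stand-in for a DBN, the sketch below fits a linear-Gaussian dynamic model y[t] = a*y[t-1] + b*u[t] + c by least squares, with the voltage-step sequence u as the control variable, and checks one-step predictive accuracy. This only illustrates the data-driven flavor of the approach; the paper's DBN is richer.

        # Linear-Gaussian stand-in for a data-driven sensor transient model.
        import numpy as np

        rng = np.random.default_rng(6)
        T = 500
        u = np.repeat(rng.uniform(2, 6, T // 50), 50)   # step-modulated voltage
        y = np.zeros(T)
        for t in range(1, T):                           # synthetic transient
            y[t] = 0.9 * y[t - 1] + 0.4 * u[t] + rng.normal(0, 0.05)

        X = np.column_stack([y[:-1], u[1:], np.ones(T - 1)])
        a, b, c = np.linalg.lstsq(X, y[1:], rcond=None)[0]
        rmse = np.sqrt(np.mean((X @ np.array([a, b, c]) - y[1:]) ** 2))
        print(f"a={a:.3f} b={b:.3f} c={c:.3f}  one-step RMSE={rmse:.3f}")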

  8. Surface Soil Moisture Retrieval Using SSM/I and Its Comparison with ESTAR: A Case Study Over a Grassland Region

    NASA Technical Reports Server (NTRS)

    Jackson, T.; Hsu, A. Y.; ONeill, P. E.

    1999-01-01

    This study extends a previous investigation on estimating surface soil moisture using the Special Sensor Microwave/Imager (SSM/I) over a grassland region. Although SSM/I is not optimal for soil moisture retrieval, it can under some conditions provide information. Rigorous analyses over land have been difficult due to the lack of good validation data sets. A scientific objective of the Southern Great Plains 1997 (SGP97) Hydrology Experiment was to investigate whether the retrieval algorithms for surface soil moisture developed at higher spatial resolution using truck- and aircraft-based passive microwave sensors can be extended to the coarser resolutions expected from satellite platforms. With the data collected for SGP97, the objective of this study is to compare the surface soil moisture estimated from the SSM/I data with that retrieved from the L-band Electronically Scanned Thinned Array Radiometer (ESTAR) data, the core sensor for the experiment, using the same retrieval algorithm. The results indicated that an error of estimate of 7.81% could be achieved with SSM/I data, as contrasted to 2.82% with ESTAR data, over three intensive sampling areas of different vegetation regimes. This confirms the results of a previous study that SSM/I data can be used to retrieve surface soil moisture information at a regional scale under certain conditions.

  9. Advancing Porous Silicon Biosensor Technology for Use in Clinical Diagnostics

    NASA Astrophysics Data System (ADS)

    Bonanno, Lisa Marie

    Inexpensive and robust analytical techniques for detecting molecular recognition events are in great demand in healthcare, food safety, and environmental monitoring. Despite vast research in this area, challenges remain in developing practical biomolecular platforms that meet the rigorous demands of real-world applications. This includes maintaining low-cost devices that are sensitive and specific in complex test specimens, are stable after storage, have short assay times, and possess minimal instrumentation complexity for readout. Nanostructured porous silicon (PSi) has been identified as an ideal candidate for achieving these goals, and the past decade has seen diverse proof-of-principle studies developing optical-based sensing techniques. In Part 1 of this thesis, the impact of surface chemistry and PSi morphology on detection sensitivity of target molecules is investigated. Initial proof-of-concept that PSi devices facilitate detection of protein in whole blood is demonstrated. This work highlights the importance of material stability and blocking chemistry for sensor use in real-world biological samples. In addition, the intrinsic filtering capability of the 3-D PSi morphology is shown to be an advantage in complex solutions such as whole blood. Ultimately, this initial work identified a need to improve the detection sensitivity of the PSi biosensor technique to enable clinical diagnostic use over relevant target concentration ranges. The second part of this thesis builds upon the sensitivity challenges highlighted in the first part: development of a surface-bound competitive inhibition immunoassay improved detection sensitivity for small-molecular-weight targets (opiates) over a clinically relevant concentration range. In addition, optimization of the assay protocol addressed the issue of maintaining sensor stability after storage. Performance of the developed assay (specificity and sensitivity) was then validated in a blind clinical study that screened real patient urine samples (n=70) for opiates in collaboration with the Strong Memorial Hospital Clinical Toxicology Laboratory. PSi sensor results showed improved clinical specificity over current commercial opiate immunoassay techniques, identifying the potential for a reduction in false-negative and false-positive screening results. Here, we demonstrate for the first time successful clinical capability of a PSi sensor to detect opiates as a model target in real-world patient samples. The final part of this thesis explores novel sensor designs that leverage the tunable optical properties of PSi photonic devices to facilitate colorimetric readout of molecular recognition events by the unaided eye. Such a design is ideal for uncomplicated diagnostic screening at point-of-care, as no instrumentation is needed for result readout. The photonic PSi transducers were integrated with target-analyte-responsive hydrogels (TRAP-gels) that, upon exposure to a target solution, swell and dissolve, inducing material property changes that are optically detected by the incorporated PSi transducer. This strategy extends target detection throughout the 3-D internal volume of the PSi, improving upon current techniques that limit detection to the 2-D surface area of the PSi. Work to achieve this approach involved design of TRAP-gel networks, polymer synthesis and characterization techniques, and optical characterization of the hybrid hydrogel-PSi material sensor. Successful implementation of a hybrid sensor design was exhibited for a model chemical target (reducing agent), in which a visual colorimetric change from red to green was observed for above-threshold exposure to the chemical target. In addition, initial proof-of-concept of an opiate-responsive TRAP-gel is also demonstrated, where cross-links are formed by antibody-antigen interactions and exposure to opiates induces bulk gel dissolution.

  10. Enhanced modeling and simulation of EO/IR sensor systems

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Miller, Brian; May, Christopher

    2015-05-01

    The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be designed and modeled in NV-IPM, then seamlessly input into wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, modeled in NV-IPM, and input into wargames for further evaluation. The cycle from measurement to high-fidelity modeling and simulation can then be repeated throughout the entire life cycle of an EO/IR sensor as needed, including LRIP, full-rate production, and even after depot-level maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.

  11. Sensor Management for Applied Research Technologies (SMART)-On Demand Modeling (ODM) Project

    NASA Technical Reports Server (NTRS)

    Goodman, M.; Blakeslee, R.; Hood, R.; Jedlovec, G.; Botts, M.; Li, X.

    2006-01-01

    NASA requires timely on-demand data and analysis capabilities to enable practical benefits of Earth science observations. However, a significant challenge exists in accessing and integrating data from multiple sensors or platforms to address Earth science problems because of the large data volumes, varying sensor scan characteristics, unique orbital coverage, and the steep learning curve associated with each sensor and data type. The development of sensor web capabilities to autonomously process these data streams (whether real-time or archived) provides an opportunity to overcome these obstacles and facilitate the integration and synthesis of Earth science data and weather model output. A three year project, entitled Sensor Management for Applied Research Technologies (SMART) - On Demand Modeling (ODM), will develop and demonstrate the readiness of Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) capabilities that integrate both Earth observations and forecast model output into new data acquisition and assimilation strategies. The advancement of SWE-enabled systems (i.e., use of SensorML, sensor planning services - SPS, sensor observation services - SOS, sensor alert services - SAS and common observation model protocols) will have practical and efficient uses in the Earth science community for enhanced data set generation, real-time data assimilation with operational applications, and for autonomous sensor tasking for unique data collection.

  12. Dynamic model inversion techniques for breath-by-breath measurement of carbon dioxide from low bandwidth sensors.

    PubMed

    Sivaramakrishnan, Shyam; Rajamani, Rajesh; Johnson, Bruce D

    2009-01-01

    Respiratory CO(2) measurement (capnography) is an important diagnostic tool that lacks inexpensive and wearable sensors. This paper develops techniques to enable the use of inexpensive but slow CO(2) sensors for breath-by-breath tracking of CO(2) concentration. This is achieved by mathematically modeling the dynamic response and using model-inversion techniques to predict the input CO(2) concentration from the slowly varying output. Experiments are designed to identify the model dynamics and extract relevant model parameters for a solid-state room-monitoring CO(2) sensor. A second-order model that accounts for flow through the sensor's filter and casing is found to be accurate in describing the sensor's slow response. The resulting estimate is compared with a standard-of-care respiratory CO(2) analyzer and shown to effectively track variation in breath-by-breath CO(2) concentration. This methodology is potentially useful for measuring fast-varying inputs to any slow sensor.
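
    The inversion idea can be sketched by simulating a second-order sensor, y'' + 2*z*w*y' + w^2*y = w^2*x, and recovering the fast input x from derivatives of the measured output. The parameter values, the noiseless setting, and the finite-difference inversion are illustrative assumptions, not the paper's identified model.

        # Simulate a slow second-order sensor, then invert its model:
        # x = y + (2*z/w)*y' + y''/w^2.
        import numpy as np

        fs, w, z = 50.0, 1.5, 1.1   # sample rate [Hz], nat. freq, damping (assumed)
        dt = 1.0 / fs
        t = np.arange(0, 30, dt)
        x_true = 4.0 * (np.sin(2 * np.pi * 0.25 * t) > 0)  # square "breaths"

        y = np.zeros_like(t)    # sensor output
        v = np.zeros_like(t)    # its derivative
        for k in range(len(t) - 1):
            acc = w**2 * (x_true[k] - y[k]) - 2 * z * w * v[k]
            v[k + 1] = v[k] + acc * dt
            y[k + 1] = y[k] + v[k] * dt

        yp = np.gradient(y, dt)
        ypp = np.gradient(yp, dt)
        x_est = y + (2 * z / w) * yp + ypp / w**2
        print("peak input recovered:", round(x_est.max(), 2),
              "vs true", x_true.max())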

  13. Impact of topographic mask models on scanner matching solutions

    NASA Astrophysics Data System (ADS)

    Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.

    2014-03-01

    Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for, by numerically solving Maxwell's Equations. The simulators used to predict image formation in hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. Additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly non-linear response to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood. To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used sets of large numbers of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin-mask models vs. the topographic OPEM solutions. We present various examples representative of scanner image matching for patterns representative of the current generation of IC designs.

  14. Probability bounds analysis for nonlinear population ecology models.

    PubMed

    Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A

    2015-09-01

    Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
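
    A toy version of probability bounds analysis, far simpler than the paper's method: represent an uncertain growth rate as a p-box (bounds on its quantile function) and push it through a monotone model output, which maps quantile bounds to quantile bounds directly. The distributions and the exponential-growth model are invented for illustration.

        # P-box propagation through a monotone model (illustrative).
        import numpy as np
        from scipy.stats import norm

        p = np.linspace(0.01, 0.99, 99)          # probability levels
        # p-box for growth rate r: quantiles bounded by two normal CDFs
        r_lo = norm.ppf(p, loc=0.8, scale=0.1)   # lower quantile bound
        r_hi = norm.ppf(p, loc=1.2, scale=0.1)   # upper quantile bound

        def population_at_t(r, t=5.0, n0=10.0):
            return n0 * np.exp(r * t)            # monotone increasing in r

        # a monotone model maps quantile bounds to quantile bounds
        n_lo, n_hi = population_at_t(r_lo), population_at_t(r_hi)
        i = np.argmin(np.abs(p - 0.95))
        print(f"95th percentile of N(5) lies in [{n_lo[i]:.0f}, {n_hi[i]:.0f}]")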

  15. Combining Community Engagement and Scientific Approaches in Next-Generation Monitor Siting: The Case of the Imperial County Community Air Network.

    PubMed

    Wong, Michelle; Bejarano, Esther; Carvlin, Graeme; Fellows, Katie; King, Galatea; Lugo, Humberto; Jerrett, Michael; Meltzer, Dan; Northcross, Amanda; Olmedo, Luis; Seto, Edmund; Wilkie, Alexa; English, Paul

    2018-03-15

    Air pollution continues to be a global public health threat, and the expanding availability of small, low-cost air sensors has led to increased interest in both personal and crowd-sourced air monitoring. However, to date, few low-cost air monitoring networks have been developed with the scientific rigor or continuity needed to conduct public health surveillance and inform policy. In Imperial County, California, near the U.S./Mexico border, we used a collaborative, community-engaged process to develop a community air monitoring network that attains the scientific rigor required for research, while also achieving community priorities. By engaging community residents in the project design, monitor siting processes, data dissemination, and other key activities, the resulting air monitoring network data are relevant, trusted, understandable, and used by community residents. Integration of spatial analysis and air monitoring best practices into the network development process ensures that the data are reliable and appropriate for use in research activities. This combined approach results in a community air monitoring network that is better able to inform community residents, support research activities, guide public policy, and improve public health. Here we detail the monitor siting process and outline the advantages and challenges of this approach.

  16. Combining Community Engagement and Scientific Approaches in Next-Generation Monitor Siting: The Case of the Imperial County Community Air Network

    PubMed Central

    Wong, Michelle; Bejarano, Esther; Carvlin, Graeme; King, Galatea; Lugo, Humberto; Jerrett, Michael; Northcross, Amanda; Olmedo, Luis; Seto, Edmund; Wilkie, Alexa; English, Paul

    2018-01-01

    Air pollution continues to be a global public health threat, and the expanding availability of small, low-cost air sensors has led to increased interest in both personal and crowd-sourced air monitoring. However, to date, few low-cost air monitoring networks have been developed with the scientific rigor or continuity needed to conduct public health surveillance and inform policy. In Imperial County, California, near the U.S./Mexico border, we used a collaborative, community-engaged process to develop a community air monitoring network that attains the scientific rigor required for research, while also achieving community priorities. By engaging community residents in the project design, monitor siting processes, data dissemination, and other key activities, the resulting air monitoring network data are relevant, trusted, understandable, and used by community residents. Integration of spatial analysis and air monitoring best practices into the network development process ensures that the data are reliable and appropriate for use in research activities. This combined approach results in a community air monitoring network that is better able to inform community residents, support research activities, guide public policy, and improve public health. Here we detail the monitor siting process and outline the advantages and challenges of this approach. PMID:29543726

  17. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  18. A Greenhouse-Gas Information System: Monitoring and Validating Emissions Reporting and Mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonietz, Karl K.; Dimotakis, Paul E.; Rotman, Douglas A.

    2011-09-26

    This study and report focus on attributes of a greenhouse-gas information system (GHGIS) needed to support MRV&V needs. These needs set the function of such a system apart from scientific/research monitoring of GHGs and carbon-cycle systems, and include (not exclusively): the need for a GHGIS that is operational, as required for decision-support; the need for a system that meets specifications derived from imposed requirements; the need for rigorous calibration, verification, and validation (CV&V) standards, processes, and records for all measurement and modeling/data-inversion data; the need to develop and adopt an uncertainty-quantification (UQ) regimen for all measurement and modeling data; and the requirement that GHGIS products can be subjected to third-party questioning and scientific scrutiny. This report examines and assesses presently available capabilities that could contribute to a future GHGIS. These capabilities include sensors and measurement technologies; data analysis and data uncertainty quantification (UQ) practices and methods; and model-based data-inversion practices, methods, and their associated UQ. The report further examines the need for traceable calibration, verification, and validation processes and attached metadata; differences between present science-/research-oriented needs and those that would be required for an operational GHGIS; the development, operation, and maintenance of a GHGIS missions-operations center (GMOC); and the complex systems engineering and integration that would be required to develop, operate, and evolve a future GHGIS.

  19. A Novel Petri Nets-Based Modeling Method for the Interaction between the Sensor and the Geographic Environment in Emerging Sensor Networks

    PubMed Central

    Zhang, Feng; Xu, Yuetong; Chou, Jarong

    2016-01-01

    The service of a sensor device in Emerging Sensor Networks (ESNs) is an extension of traditional Web services. Through the sensor network, the service of a sensor device can communicate directly with entities in the geographic environment, and even impact those entities directly. The interaction between sensor devices in ESNs and the geographic environment is very complex, and modeling this interaction is a challenging problem. This paper proposed a novel Petri Nets-based method for modeling the interaction between the sensor device and the geographic environment. The service of a sensor device in ESNs is more easily affected by the geographic environment than a traditional Web service, so response time, fault tolerance and resource consumption become important factors in the performance of the whole sensor application system. Thus, this paper classified IoT services as sensing services and controlling services according to the interaction between the IoT service and the geographic entity, and classified GIS services as data services and processing services. Then, this paper designed and analyzed a service algebra and a Colored Petri Nets model to represent geo-features, IoT services, GIS services, and the interaction process between the sensor and the geographic environment. Finally, the modeling process is illustrated with examples. PMID:27681730
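
    To show the mechanics, here is a minimal ordinary place/transition Petri-net interpreter (the paper uses Colored Petri Nets, which additionally carry token data) stepping through a hypothetical sensing-then-GIS-processing interaction; the place and transition names are invented.

        # Tiny place/transition Petri net: fire enabled transitions.
        places = {"entity_event": 1, "sensor_ready": 1,
                  "observation": 0, "gis_processed": 0}
        transitions = {
            "sense":   ({"entity_event": 1, "sensor_ready": 1},
                        {"observation": 1, "sensor_ready": 1}),
            "process": ({"observation": 1}, {"gis_processed": 1}),
        }

        def enabled(t):
            pre, _ = transitions[t]
            return all(places[p] >= n for p, n in pre.items())

        def fire(t):
            pre, post = transitions[t]
            for p, n in pre.items():
                places[p] -= n
            for p, n in post.items():
                places[p] += n

        for t in ["sense", "process"]:
            if enabled(t):
                fire(t)
        # final marking: the event has been sensed and GIS-processed
        print(places)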

  20. A model for ionic polymer metal composites as sensors

    NASA Astrophysics Data System (ADS)

    Bonomo, C.; Fortuna, L.; Giannone, P.; Graziani, S.; Strazzeri, S.

    2006-06-01

    This paper introduces a comprehensive model of sensors based on ionic polymer metal composites (IPMCs) working in air. Significant quantities ruling the sensing properties of IPMC-based sensors are taken into account and the dynamics of the sensors are modelled. A large amount of experimental evidence is given for the excellent agreement between estimations obtained using the proposed model and the observed signals. Furthermore, the effect of sensor scaling is investigated, giving interesting support to the activities involved in the design of sensing devices based on these novel materials. We observed that the need for a wet environment is not a key issue for IPMC-based sensors to work well. This fact allows us to put IPMC-based sensors in a totally different light to the corresponding actuators, showing that sensors do not suffer from the same drawbacks.

  1. Single toxin dose-response models revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demidenko, Eugene, E-mail: eugened@dartmouth.edu

    The goal of this paper is to offer a rigorous analysis of the sigmoid-shape single-toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models (Hill, logit, probit, and Weibull) are provided. Two general model extensions are introduced: (1) a multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO4 toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
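
    The advocated GLM estimation can be sketched in a few lines, assuming statsmodels and invented bioassay counts: mortality counts are binomial, with a logit link in log-concentration, and the LC50 follows from the fitted coefficients.

        # Binomial GLM dose-response fit (illustrative data).
        import numpy as np
        import statsmodels.api as sm

        conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])  # concentrations
        n = np.full(6, 20)                                # Daphnia per beaker
        dead = np.array([1, 3, 7, 13, 18, 20])            # mortality counts

        X = sm.add_constant(np.log(conc))
        fit = sm.GLM(np.column_stack([dead, n - dead]), X,
                     family=sm.families.Binomial()).fit()
        b0, b1 = fit.params
        # logit(p) = b0 + b1*log(c) = 0 at the LC50
        print(f"LC50 = {np.exp(-b0 / b1):.2f}   slope = {b1:.2f}")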

  2. Aboveground biomass mapping of African forest mosaics using canopy texture analysis: toward a regional approach.

    PubMed

    Bastin, Jean-François; Barbier, Nicolas; Couteron, Pierre; Adams, Benoît; Shapiro, Aurélie; Bogaert, Jan; De Cannière, Charles

    In the context of the reduction of greenhouse gas emissions caused by deforestation and forest degradation (the REDD+ program), optical very high resolution (VHR) satellite images provide an opportunity to characterize forest canopy structure and to quantify aboveground biomass (AGB) at less expense than methods based on airborne remote sensing data. Among the methods for processing these VHR images, Fourier textural ordination (FOTO) presents a good method to detect forest canopy structural heterogeneity and therefore to predict AGB variations. Notably, the method does not saturate at intermediate AGB values as do pixelwise processing of available space borne optical and radar signals. However, a regional-scale application requires overcoming two difficulties: (1) instrumental effects due to variations in sun–scene–sensor geometry or sensor-specific responses that preclude the use of wide arrays of images acquired under heterogeneous conditions and (2) forest structural diversity including monodominant or open canopy forests, which are of particular importance in Central Africa. In this study, we demonstrate the feasibility of a rigorous regional study of canopy texture by harmonizing FOTO indices of images acquired from two different sensors (Geoeye-1 and QuickBird-2) and different sun–scene–sensor geometries and by calibrating a piecewise biomass inversion model using 26 inventory plots (1 ha) sampled across very heterogeneous forest types. A good agreement was found between observed and predicted AGB (residual standard error [RSE] = 15%; R2 = 0.85; P < 0.001) across a wide range of AGB levels from 26 Mg/ha to 460 Mg/ha, and was confirmed by cross validation. A high-resolution biomass map (100-m pixels) was produced for a 400-km2 area, and predictions obtained from both imagery sources were consistent with each other (r = 0.86; slope = 1.03; intercept = 12.01 Mg/ha). These results highlight the horizontal structure of forest canopy as a powerful descriptor of the entire forest stand structure and heterogeneity. In particular, we show that quantitative metrics resulting from such textural analysis offer new opportunities to characterize the spatial and temporal variation of the structure of dense forests and may complement the toolbox used by tropical forest ecologists, managers or REDD+ national monitoring, reporting and verification bodies.

  3. Peer Assessment with Online Tools to Improve Student Modeling

    NASA Astrophysics Data System (ADS)

    Atkins, Leslie J.

    2012-11-01

    Introductory physics courses often require students to develop precise models of phenomena and represent these with diagrams, including free-body diagrams, light-ray diagrams, and maps of field lines. Instructors expect that students will adopt a certain rigor and precision when constructing these diagrams, but we want that rigor and precision to be an aid to sense-making rather than meeting seemingly arbitrary requirements set by the instructor. By giving students the authority to develop their own models and establish requirements for their diagrams, the sense that these are arbitrary requirements diminishes and students are more likely to see modeling as a sense-making activity. The practice of peer assessment can help students take ownership; however, it can be difficult for instructors to manage. Furthermore, it is not without risk: students can be reluctant to critique their peers, they may view this as the job of the instructor, and there is no guarantee that students will employ greater rigor and precision as a result of peer assessment. In this article, we describe one approach for peer assessment that can establish norms for diagrams in a way that is student driven, where students retain agency and authority in assessing and improving their work. We show that such an approach does indeed improve students' diagrams and abilities to assess their own work, without sacrificing students' authority and agency.

  4. Separating intrinsic from extrinsic fluctuations in dynamic biological systems

    PubMed Central

    Paulsson, Johan

    2011-01-01

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems. PMID:21730172

  5. Separating intrinsic from extrinsic fluctuations in dynamic biological systems.

    PubMed

    Hilfinger, Andreas; Paulsson, Johan

    2011-07-19

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems.

  6. Finite element modelling of fibre Bragg grating strain sensors and experimental validation

    NASA Astrophysics Data System (ADS)

    Malik, Shoaib A.; Mahendran, Ramani S.; Harris, Dee; Paget, Mark; Pandita, Surya D.; Machavaram, Venkata R.; Collins, David; Burns, Jonathan M.; Wang, Liwei; Fernando, Gerard F.

    2009-03-01

    Fibre Bragg grating (FBG) sensors continue to be used extensively for monitoring strain and temperature in and on engineering materials and structures. Previous researchers have also developed analytical models to predict the load-transfer characteristics of FBG sensors as a function of applied strain. The general properties of the coating or adhesive used to surface-bond the FBG sensor to the substrate have also been modelled using finite element analysis. In this current paper, a technique was developed to surface-mount FBG sensors with a known volume and thickness of adhesive. The substrates used were aluminium dog-bone tensile test specimens. The FBG sensors were tensile tested in a series of ramp-hold sequences until failure. The reflected FBG spectra were recorded using a commercial instrument. Finite element analysis was performed to model the response of the surface-mounted FBG sensors. In the first instance, the effect of the mechanical properties of the adhesive and substrate was modelled. This was followed by modelling the volume of adhesive used to bond the FBG sensor to the substrate. Finally, the predicted values obtained via finite element modelling were correlated with the experimental results. In addition to the FBG sensors, the tensile test specimens were instrumented with surface-mounted electrical resistance strain gauges.
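
    For orientation, a back-of-envelope FBG response uses the standard Bragg-shift relation, delta_lambda = lambda_B*(1 - p_e)*strain, scaled by a strain-transfer factor for the adhesive bond. The transfer-factor value below is an assumption; this is the textbook relation, not the paper's finite-element model.

        # Standard Bragg shift under strain, with an assumed transfer factor.
        lambda_b = 1550.0e-9   # Bragg wavelength [m]
        p_e = 0.22             # effective photo-elastic coefficient of silica
        transfer = 0.95        # substrate strain reaching the grating (assumed)

        def bragg_shift(substrate_strain):
            return lambda_b * (1.0 - p_e) * transfer * substrate_strain

        for eps in [100e-6, 500e-6, 1000e-6]:   # microstrain levels
            print(f"{eps * 1e6:6.0f} ustrain -> "
                  f"d_lambda = {bragg_shift(eps) * 1e12:.1f} pm")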

  7. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    PubMed Central

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on dual redundancy, which cannot resolve a fault when the two channels disagree, leaving no basis for judgment; adding further hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method were carried out on a turbo-shaft engine, with two types of faults under different channel combinations. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645

  8. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    PubMed

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on dual redundancy, which cannot resolve a fault when the two channels disagree, leaving no basis for judgment; adding further hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method were carried out on a turbo-shaft engine, with two types of faults under different channel combinations. The experimental results show that the proposed method for sensor fault diagnostics is efficient.
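
    The triplex logic described in both records can be sketched as a simple voting scheme in which the on-board model supplies the tie-breaking third channel; the tolerance and recovery choices below are invented for illustration.

        # Toy triplex voting: isolate the channel that disagrees with the
        # other two, using the model as the analytic third channel.
        def diagnose(ch_a, ch_b, model, tol=2.0):
            d_ab = abs(ch_a - ch_b)
            if d_ab <= tol:
                return "healthy", (ch_a + ch_b) / 2.0
            # channels disagree: the analytic channel breaks the tie
            if abs(ch_a - model) <= tol:
                return "channel B faulty", ch_a   # recover with channel A
            if abs(ch_b - model) <= tol:
                return "channel A faulty", ch_b   # recover with channel B
            return "multiple/model fault", model

        print(diagnose(100.1, 100.3, 100.2))  # ('healthy', 100.2)
        print(diagnose(100.1, 93.0, 100.2))   # step fault on channel B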

  9. Comparing an annual and daily time-step model for predicting field-scale P loss

    USDA-ARS?s Scientific Manuscript database

    Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...

  10. Effective System for Automatic Bundle Block Adjustment and Ortho Image Generation from Multi Sensor Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Akilan, A.; Nagasubramanian, V.; Chaudhry, A.; Reddy, D. Rajesh; Sudheer Reddy, D.; Usha Devi, R.; Tirupati, T.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Block adjustment is a technique for large-area mapping using images obtained from different remote sensing satellites. The challenge in this process is to handle, at the system level, huge numbers of satellite images from different sources with different resolutions and accuracies. This paper explains a system with various tools and techniques to effectively handle the end-to-end chain in large-area mapping and production with a good level of automation, and with provisions for intuitive analysis of final results in 3D and 2D environments. In addition, the interface for using open-source ortho and DEM references (viz., ETM, SRTM) and for displaying ESRI shapes for the image footprints is explained. Rigorous theory, mathematical modelling, workflow automation and sophisticated software engineering tools are included to ensure high photogrammetric accuracy and productivity. Major building blocks of the block adjustment solution, such as the georeferencing, geo-capturing and geo-modelling tools, are explained in this paper. To provide an optimal bundle block adjustment solution with high-precision results, the system has been optimized in many stages to fully utilize hardware resources. The robustness of the system is ensured by handling failures in the automatic procedure and saving the process state at every stage for subsequent restoration from the point of interruption. The results obtained from various stages of the system are presented in the paper.

  11. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    PubMed

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra, a technique from computer science, allows us to describe a system in terms of the stochastic behaviour of individuals. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing-scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.

  12. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 1: Dynamic models and computer simulations for the ERBE nonscanner, scanner and solar monitor sensors

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Choi, Sang H.; Chrisman, Dan A., Jr.; Samms, Richard W.

    1987-01-01

    Dynamic models and computer simulations were developed for the radiometric sensors utilized in the Earth Radiation Budget Experiment (ERBE). The models were developed to understand performance, improve measurement accuracy by updating model parameters, and provide the constants needed for the count conversion algorithms. Model simulations were compared with the sensors' actual responses observed in ground and in-flight calibrations. The models consider thermal and radiative exchange effects, surface specularity, spectral dependence of a filter, radiative interactions among an enclosure's nodes, partially specular and diffuse enclosure surface characteristics, and steady-state and transient sensor responses. Relatively few sensor nodes were chosen for the models, since there is an accuracy tradeoff between increasing the number of nodes and approximating parameters such as the sensor's size, material properties, geometry, and enclosure surface characteristics. Provided the temperature gradients within a node and between nodes are small enough, approximating with only a few nodes does not jeopardize the accuracy required to perform the parameter estimates and error analyses.

  13. New activity-based funding model for Australian private sector overnight rehabilitation cases: the rehabilitation Australian National Sub-Acute and Non-Acute Patient (AN-SNAP) model.

    PubMed

    Hanning, Brian; Predl, Nicolle

    2015-09-01

    Traditional overnight rehabilitation payment models in the private sector are not based on a rigorous classification system and vary greatly between contracts with no consideration of patient complexity. The payment rates are not based on relative cost and the length-of-stay (LOS) point at which a reduced rate applies (step downs) varies markedly. The rehabilitation Australian National Sub-Acute and Non-Acute Patient (AN-SNAP) model (RAM), which has been in place for over 2 years in some private hospitals, bases payment on a rigorous classification system, relative cost and industry LOS. RAM is in the process of being rolled out more widely. This paper compares and contrasts RAM with traditional overnight rehabilitation payment models. It considers the advantages of RAM for hospitals and Australian Health Service Alliance. It also considers payment model changes in the context of maintaining industry consistency with Electronic Claims Lodgement and Information Processing System Environment (ECLIPSE) and health reform generally.

  14. Random Matrix Theory and the Anderson Model

    NASA Astrophysics Data System (ADS)

    Bellissard, Jean

    2004-08-01

    This paper is devoted to a discussion of possible strategies to prove rigorously the existence of a metal-insulator Anderson transition for the Anderson model in dimension d ≥ 3. The possible criteria used to define such a transition are presented. It is argued that at low disorder the lowest order in perturbation theory is described by a random matrix model. Various simplified versions for which rigorous results have been obtained in the past are discussed, including a free probability approach, the Wegner n-orbital model, and a class of models proposed by Disertori, Pinson, and Spencer, Comm. Math. Phys. 232:83-124 (2002). Finally, a recent work by Magnen, Rivasseau, and the author, Markov Process and Related Fields 9:261-278 (2003), is summarized: it gives a toy model describing the lowest-order approximation of the Anderson model, and it is proved that, for d = 2, its density of states is given by the semicircle distribution. A short discussion of its extension to d ≥ 3 follows.
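
    For reference, the semicircle distribution mentioned here has the standard Wigner density (written with band radius R; the paper's normalisation may differ):

```latex
\rho(E) = \frac{2}{\pi R^{2}} \sqrt{R^{2} - E^{2}}, \qquad |E| \le R .
```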

  15. Finite state projection based bounds to compare chemical master equation models using single-cell data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232

    2016-08-21

    Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. The bounds allow one to discriminate rigorously between models with a minimum level of computational effort. In practice, they can be incorporated into stochastic model identification and parameter inference routines, improving the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as experimental measurements of a time-varying stochastic transcriptional response in yeast.
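
    A minimal sketch of the finite state projection (FSP) mechanism behind such bounds, shown for a birth-death chemical master equation; the rates, truncation size and time horizon are illustrative assumptions:

```python
# FSP for a birth-death process (production rate k, degradation rate g).
# The probability mass that leaks out of the truncated state space bounds
# the truncation error, which is what sandwiches the exact likelihood.
import numpy as np
from scipy.linalg import expm

k, g, N, t = 10.0, 1.0, 60, 2.0      # production, degradation, states kept, time

A = np.zeros((N, N))                 # truncated CME generator on {0, ..., N-1}
for n in range(N):
    if n + 1 < N:
        A[n + 1, n] += k             # birth: n -> n+1 (kept in the projection)
    A[n, n] -= k                     # full outflow, so mass leaks out at n = N-1
    if n > 0:
        A[n - 1, n] += g * n         # death: n -> n-1
        A[n, n] -= g * n

p0 = np.zeros(N); p0[0] = 1.0        # start with zero molecules
p = expm(A * t) @ p0                 # truncated distribution at time t

eps = 1.0 - p.sum()                  # leaked mass = truncation error bound
# For each state n, p[n] <= true probability <= p[n] + eps, and this
# sandwich carries over to the data likelihood; enlarging N shrinks eps
# monotonically, which is the convergence property quoted above.
print(f"error bound for N={N}: {eps:.2e}")
```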

  16. Parent Management Training-Oregon Model: Adapting Intervention with Rigorous Research.

    PubMed

    Forgatch, Marion S; Kjøbli, John

    2016-09-01

    Parent Management Training-Oregon Model (PMTO®) is a set of theory-based parenting programs with status as evidence-based treatments. PMTO has been rigorously tested in efficacy and effectiveness trials in different contexts, cultures, and formats. Parents, the presumed agents of change, learn core parenting practices, specifically skill encouragement, limit setting, monitoring/supervision, interpersonal problem solving, and positive involvement. The intervention effectively prevents and ameliorates children's behavior problems by replacing coercive interactions with positive parenting practices. Delivery format includes sessions with individual families in agencies or families' homes, parent groups, and web-based and telehealth communication. Mediational models have tested parenting practices as mechanisms of change for children's behavior and found support for the theory underlying PMTO programs. Moderating effects include children's age, maternal depression, and social disadvantage. The Norwegian PMTO implementation is presented as an example of how PMTO has been tailored to reach diverse populations as delivered by multiple systems of care throughout the nation. An implementation and research center in Oslo provides infrastructure and promotes collaboration between practitioners and researchers to conduct rigorous intervention research. Although evidence-based and tested within a wide array of contexts and populations, PMTO must continue to adapt to an ever-changing world. © 2016 Family Process Institute.

  17. Multi-sensor Cloud Retrieval Simulator and Remote Sensing from Model Parameters . Pt. 1; Synthetic Sensor Radiance Formulation; [Synthetic Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, G.; DaSilva, A. M.; Norris, P. M.; Platnick, S.

    2013-01-01

    In this paper we describe a general procedure for calculating synthetic sensor radiances from variable output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The simulated sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds because they are very important to model development and improvement.

  18. The response function of modulated grid Faraday cup plasma instruments

    NASA Technical Reports Server (NTRS)

    Barnett, A.; Olbert, S.

    1986-01-01

    Modulated grid Faraday cup plasma analyzers are a very useful tool for making in situ measurements of space plasmas. One of their great attributes is that their simplicity permits their angular response function to be calculated theoretically. An expression is derived for this response function by computing the trajectories of the charged particles inside the cup. The Voyager Plasma Science (PLS) experiment is used as a specific example. Two approximations to the rigorous response function useful for data analysis are discussed. The theoretical formulas were tested by multi-sensor analysis of solar wind data. The tests indicate that the formulas represent the true cup response function for all angles of incidence with a maximum error of only a few percent.

  19. Performance of the Enhanced Vegetation Index to Detect Inner-annual Dry Season and Drought Impacts on Amazon Forest Canopies

    NASA Astrophysics Data System (ADS)

    Brede, B.; Verbesselt, J.; Dutrieux, L.; Herold, M.

    2015-04-01

    The Amazon rainforests represent the largest connected forested area in the tropics and play an integral role in the global carbon cycle. In recent years, the discussion about their phenology and response to drought has intensified. A recent study argued that seasonality in greenness, expressed as the Enhanced Vegetation Index (EVI), is an artifact of variations in sun-sensor geometry throughout the year. We aimed to reproduce these results with the Moderate-Resolution Imaging Spectroradiometer (MODIS) MCD43 product suite, which allows modeling the Bidirectional Reflectance Distribution Function (BRDF) and keeping the sun-sensor geometry constant. The derived BRDF-adjusted EVI was spatially aggregated over large areas of central Amazon forest. The resulting EVI time series spanning the 2000-2013 period contained distinct seasonal patterns with peak values at the onset of the dry season, but also followed the same pattern as the sun geometry, expressed as the Solar Zenith Angle (SZA). Additionally, we assessed the EVI's sensitivity to precipitation anomalies by comparing BRDF-adjusted EVI dry-season anomalies to two drought indices (Maximum Cumulative Water Deficit, Standardized Precipitation Index). This analysis covered the whole of Amazonia and data from 2000 to 2013. The results showed no meaningful connection between EVI anomalies and drought, in contrast to other studies that investigate the drought impact on EVI and forest photosynthetic capacity. The results from both sub-analyses question the predictive power of the EVI for large-scale assessments of forest ecosystem functioning in Amazonia. Based on the presented results, we recommend a careful evaluation of the EVI for applications in tropical forests, including rigorous validation supported by ground plots.
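
    For context, the EVI discussed throughout is the standard MODIS formulation; the coefficients below are the usual MODIS constants, stated here for orientation:

```latex
\mathrm{EVI} = G \, \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}
{\rho_{\mathrm{NIR}} + C_{1}\,\rho_{\mathrm{Red}} - C_{2}\,\rho_{\mathrm{Blue}} + L},
\qquad G = 2.5,\; C_{1} = 6,\; C_{2} = 7.5,\; L = 1 .
```

    Because the band reflectances entering this ratio depend on viewing and illumination angles, any seasonal cycle in sun-sensor geometry can imprint itself on the EVI unless, as in the BRDF-adjusted product used here, the geometry is held constant.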

  20. Determinants of the Rigor of State Protection Policies for Persons With Dementia in Assisted Living.

    PubMed

    Nattinger, Matthew C; Kaskie, Brian

    2017-01-01

    Continued growth in the number of individuals with dementia residing in assisted living (AL) facilities raises concerns about their safety and protection. However, unlike federally regulated nursing facilities, AL facilities are state-regulated and there is a high degree of variation among policies designed to protect persons with dementia. Despite the important role these protection policies have in shaping the quality of life of persons with dementia residing in AL facilities, little is known about their formation. In this research, we examined the adoption of AL protection policies pertaining to staffing, the physical environment, and the use of chemical restraints. For each protection policy type, we modeled policy rigor using an innovative point-in-time approach, incorporating variables associated with state contextual, institutional, political, and external factors. We found that the rate of state AL protection policy adoptions remained steady over the study period, with staffing policies becoming less rigorous over time. Variables reflecting institutional policy making, including legislative professionalism and bureaucratic oversight, were associated with the rigor of state AL dementia protection policies. As we continue to evaluate the mechanisms contributing to the rigor of AL protection policies, it seems that organized advocacy efforts might expand their role in educating state policy makers about the importance of protecting persons with dementia residing in AL facilities and moving to advance appropriate policies.

  1. Heterogeneous nucleation on convex spherical substrate surfaces: A rigorous thermodynamic formulation of Fletcher's classical model and the new perspectives derived.

    PubMed

    Qian, Ma; Ma, Jie

    2009-06-07

    Fletcher's spherical substrate model [J. Chem. Phys. 29, 572 (1958)] is a basic model for understanding heterogeneous nucleation phenomena in nature. However, a rigorous thermodynamic formulation of the model has been missing due to the significant complexities involved. This has not only left the classical model deficient but also likely obscured its other important features, which would otherwise have helped to better understand and control heterogeneous nucleation on spherical substrates. This work presents a rigorous thermodynamic formulation of Fletcher's model using a novel analytical approach and discusses the new perspectives derived. In particular, it is shown that the use of an intermediate variable, a selected geometrical angle or pseudo-contact angle between the embryo and the spherical substrate, revealed extraordinary similarities between the first derivatives of the free energy change with respect to embryo radius for nucleation on spherical and flat substrates. Prompted by this discovery, it was found that there exists a local maximum in the difference between the equivalent contact angles for nucleation on spherical and flat substrates, due to a local maximum in the difference between the shape factors for nucleation on spherical and flat substrate surfaces. This helps in understanding the complexity of heterogeneous nucleation phenomena in a practical system. It was also found that the unfavorable size effect occurs primarily when R < 5r* (R: radius of substrate; r*: critical embryo radius) and diminishes rapidly with increasing R/r* beyond R/r* = 5. This finding provides a baseline for controlling size effects in heterogeneous nucleation.
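
    For orientation, the flat-substrate limit against which the spherical result is compared is the classical one (standard textbook form, not quoted from the paper): the heterogeneous nucleation barrier is the homogeneous barrier scaled by a contact-angle shape factor,

```latex
\Delta G^{*}_{\mathrm{het}} = f(\theta)\, \Delta G^{*}_{\mathrm{hom}},
\qquad
f(\theta) = \frac{(2 + \cos\theta)(1 - \cos\theta)^{2}}{4} .
```

    Fletcher's contribution was to generalize this shape factor to a convex spherical substrate, where it depends on both the contact angle and the ratio R/r*; the pseudo-contact angle introduced in this paper serves as the intermediate variable linking the two cases.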

  2. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
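
    A small illustration of why a Poisson mixture has heavier tails than a single Poisson of the same mean, which is exactly the mismatch the paper's quantile analysis detects; the weights and rates below are arbitrary assumptions, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mean = 1_000_000, 20.0

plain = rng.poisson(mean, n)                 # single-Poisson noise model

# Two-component mixture with the same overall mean: 0.9*18 + 0.1*38 = 20.
w, lam_lo, lam_hi = 0.9, 18.0, 38.0
comp = rng.random(n) < w
mix = np.where(comp, rng.poisson(lam_lo, n), rng.poisson(lam_hi, n))

for q in (0.99, 0.999, 0.9999):
    print(q, np.quantile(plain, q), np.quantile(mix, q))
# The mixture's extreme quantiles sit well above the single-Poisson ones,
# i.e. a denoiser calibrated to the single Poisson will undersmooth.
```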

  3. Extreme Response Style: Which Model Is Best?

    ERIC Educational Resources Information Center

    Leventhal, Brian

    2017-01-01

    More robust and rigorous psychometric models, such as multidimensional Item Response Theory models, have been advocated for survey applications. However, item responses may be influenced by construct-irrelevant variance factors such as preferences for extreme response options. Through empirical and simulation methods, this study evaluates the use…

  4. Rigorous derivation of the effective model describing a non-isothermal fluid flow in a vertical pipe filled with porous medium

    NASA Astrophysics Data System (ADS)

    Beneš, Michal; Pažanin, Igor

    2018-03-01

    This paper reports an analytical investigation of non-isothermal fluid flow in a thin (or long) vertical pipe filled with porous medium, via asymptotic analysis. We assume that the fluid inside the pipe is cooled (or heated) by the surrounding medium and that the flow is governed by a prescribed pressure drop between the pipe's ends. Starting from the dimensionless Darcy-Brinkman-Boussinesq system, we formally derive a macroscopic model describing the effective flow at small Brinkman-Darcy number. The asymptotic approximation is given by explicit formulae for the velocity, pressure and temperature, clearly acknowledging the effects of the cooling (heating) and the porous structure. A theoretical error analysis is carried out to indicate the order of accuracy and to provide a rigorous justification of the effective model.
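
    As a point of reference, a generic dimensional Darcy-Brinkman system with Boussinesq buoyancy has the form below; the paper works with a particular dimensionless scaling, so this is only an orienting sketch:

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\nabla p = \mu_{\mathrm{eff}}\, \Delta \mathbf{u} - \frac{\mu}{K}\, \mathbf{u}
 - \rho_{0}\bigl(1 - \beta (T - T_{0})\bigr)\, g\, \mathbf{e}_{z}, \qquad
\mathbf{u} \cdot \nabla T = \kappa\, \Delta T ,
```

    where K is the permeability, μ_eff the effective (Brinkman) viscosity and β the thermal expansion coefficient. The Brinkman-Darcy number weighs the Brinkman term against the Darcy drag, and the effective model is obtained in the limit where it is small.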

  5. COMSOL-Based Modeling and Simulation of SnO2/rGO Gas Sensor for Detection of NO2.

    PubMed

    Yaghouti Niyat, Farshad; Shahrokh Abadi, M H

    2018-02-01

    Although SIESTA and COMSOL are increasingly used for simulating the sensing mechanism in gas sensors, there are no modeling and simulation reports in the literature for rGO/SnO2 sensors for NO2 detection. In the present study, we model, simulate, and characterize an rGO/SnO2 gas sensor for NO2 using COMSOL, by solving Poisson's equation under the associated boundary conditions of mass, heat and electrical transport. To perform the simulation, we use an exposure model for supplying the required NO2, a heat transfer model to obtain the reaction temperature, and an electrical model to characterize the sensor's response in the presence of the gas. We characterize the sensor's response to different concentrations of NO2 at different working temperatures and compare the results with the experimental data reported by Zhang et al. The simulated sensor shows good agreement with the real sensor, with some inconsistencies due to differences between the practical conditions in the real chamber and the conditions applied to the analytical equations. The results also show that the method can be used to define and predict the behavior of rGO-based gas sensors before they undergo the fabrication process.

  6. Development of rigor mortis is not affected by muscle volume.

    PubMed

    Kobayashi, M; Ikegaya, H; Takase, I; Hatanaka, K; Sakurada, K; Iwase, H

    2001-04-01

    There is a hypothesis suggesting that rigor mortis progresses more rapidly in small muscles than in large muscles. We measured rigor mortis as tension determined isometrically in rat musculus erector spinae that had been cut into muscle bundles of various volumes. The muscle volume did not influence either the progress or the resolution of rigor mortis, which contradicts the hypothesis. Differences in pre-rigor load on the muscles influenced the onset and resolution of rigor mortis in a few pairs of samples, but did not influence the time taken for rigor mortis to reach its full extent after death. Moreover, the progress of rigor mortis in this muscle was biphasic; this may reflect the early rigor of red muscle fibres and the late rigor of white muscle fibres.

  7. Development of a nonlinear model for the prediction of response times of glucose affinity sensors using concanavalin A and dextran and the development of a differential osmotic glucose affinity sensor

    NASA Astrophysics Data System (ADS)

    Reis, Louis G.

    With the increasing prevalence of diabetes in the United States and worldwide, blood glucose monitoring must be accurate and reliable. Current enzymatic sensors have numerous disadvantages that make them unreliable and unfavorable among patients. Recent research in glucose affinity sensors corrects some of the problems that enzymatic sensors experience. Dextran and concanavalin A are two of the more common components used in glucose affinity sensors. When these sensors were first explored, a model was derived to predict the response time of a glucose affinity sensor using concanavalin A and dextran. However, the model assumed the system was linear and fell short of calculating times representative of the response times determined through experimental tests with the sensors. In this work, a new model that uses the Stokes-Einstein equation to capture the nonlinear behavior of the glucose affinity assay was developed to predict the response times of similar glucose affinity sensors. In addition to the device tested by the original linear model, additional devices were identified and tested with the proposed model. The nonlinear model was designed to accommodate the many variations between systems, and it was able to accurately calculate response times for sensors using the concanavalin A-dextran affinity assay with respect to the times experimentally reported by the independent research groups. Parameter studies using the nonlinear model identified possible setbacks that could compromise the response of the system. Specifically, the model showed that improper use of asymmetrical membranes could increase the response time by 20% or more as the device is miniaturized. The model also demonstrated that systems using the concanavalin A-dextran assay would experience longer response times in the hypoglycemic range. This work also attempted to replicate and improve an osmotic glucose affinity sensor. The system was designed to negate additional effects that could cause artifacts or irregular readings, such as external osmotic differences and external pressure differences. However, the experimental setup and execution faced numerous setbacks that highlighted the additional difficulty that sensors using asymmetrical ceramic membranes and the concanavalin A-dextran affinity assay may experience.
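
    The Stokes-Einstein relation invoked by the nonlinear model is the standard one linking the diffusion coefficient of a spherical solute to solution viscosity and hydrodynamic radius:

```latex
D = \frac{k_{B} T}{6 \pi \eta r},
```

    where kB is Boltzmann's constant, T the absolute temperature, η the dynamic viscosity and r the hydrodynamic radius. Since the viscosity and effective size of the concanavalin A-dextran assay change with binding state, D becomes state-dependent, a plausible source of the nonlinear behavior the model addresses.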

  8. Modeling and simulation of soft sensor design for real-time speed and position estimation of PMSM.

    PubMed

    Omrane, Ines; Etien, Erik; Dib, Wissam; Bachelier, Olivier

    2015-07-01

    This paper deals with the design of a speed soft sensor for a permanent magnet synchronous motor. At high speed, a model-based soft sensor is used and gives excellent results; however, it fails to deliver satisfactory performance at zero or very low speed, where a high-frequency soft sensor is used instead. We suggest using the model-based soft sensor together with the high-frequency soft sensor to overcome the former's limitations in the low-speed range. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Vibration analysis and experiment of giant magnetostrictive force sensor

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwen; Liu, Fang; Zhu, Xingqiao; Wang, Haibo; Xu, Jia

    2017-12-01

    In this paper, a giant magnetostrictive force sensor is proposed, and its magneto-mechanically coupled model is developed. The relationship between the output voltage of the sensor and the input excitation force is obtained, and the phenomena of accuracy degradation at high frequency and of sensor delay are explained. The experimental results show that the model can describe the actual response of the giant magnetostrictive force sensor. The new model has a simple form and is easy to analyze theoretically, which makes it useful in measurement and control applications.

  10. Assessment of the suitability of Durafet-based sensors for pH measurement in dynamic estuarine environments

    NASA Astrophysics Data System (ADS)

    Gonski, Stephen F.; Cai, Wei-Jun; Ullman, William J.; Joesoef, Andrew; Main, Christopher R.; Pettay, D. Tye; Martz, Todd R.

    2018-01-01

    The suitability of the Honeywell Durafet for the measurement of pH in productive, high-fouling, and highly turbid estuarine environments was investigated at the confluence of the Murderkill Estuary and Delaware Bay (Delaware, USA). Three different flow configurations of the SeapHOx sensor equipped with a Honeywell Durafet and its integrated internal (Ag/AgCl reference electrode containing a 4.5 M KCl gel liquid junction) and external (solid-state chloride ion selective electrode, Cl-ISE) reference electrodes were deployed for four periods between April 2015 and September 2016. In this environment, the Honeywell Durafet proved capable of making high-resolution and high-frequency pH measurements on the total scale between pH 6.8 and 8.4. Natural pH fluctuations of >1 pH unit were routinely captured over a range of timescales. The sensor pH collected between May and August 2016 using the most refined SeapHOx configuration exhibited good agreement with multiple sets of independently measured reference pH values. When deployed in conjunction with rigorous discrete sampling and calibration schemes, the sensor pH had a root-mean-square error ranging between 0.011 and 0.036 pH units across a wide range of salinity, relative to both pHT calculated from measured dissolved inorganic carbon and total alkalinity and pHNBS measured with a glass electrode corrected to pHT at in situ conditions. The present work demonstrates the viability of the Honeywell Durafet for the measurement of pH to within the weather-level precision defined by the Global Ocean Acidification Observing Network (GOA-ON, ≤ 0.02 pH units) as a part of future estuarine CO2 chemistry studies undertaken in dynamic environments.

  11. Effects of rigor status during high-pressure processing on the physical qualities of farm-raised abalone (Haliotis rufescens).

    PubMed

    Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I

    2015-01-01

    High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®

  12. Bioeconomic and market models

    Treesearch

    Richard Haynes; Darius Adams; Peter Ince; John Mills; Ralph Alig

    2006-01-01

    The United States has a century of experience with the development of models that describe markets for forest products and trends in resource conditions. In the last four decades, increasing rigor in policy debates has stimulated the development of models to support policy analysis. Increasingly, research has evolved (often relying on computer-based models) to increase...

  13. Development of a Modular, Provider Customized Airway Trainer

    DTIC Science & Technology

    2015-11-25

    [Table of contents and appendix extract] Appendix A: Instructions for Airway Model with sensors and computer (Raspberry Pi). Raspberry Pi instructions: 1. Connect the multicolor sensor cable and two blue sensor cables (blue sensor cable orientation does not matter). 2. Plug in power to the screen and Raspberry Pi (two separate

  14. A square-force cohesion model and its extraction from bulk measurements

    NASA Astrophysics Data System (ADS)

    Liu, Peiyuan; Lamarche, Casey; Kellogg, Kevin; Hrenya, Christine

    2017-11-01

    Cohesive particles remain poorly understood, with prior physics-based predictions of agglomerate size differing by orders of magnitude. A major obstacle lies in the absence of robust models of particle-particle cohesion, which precludes accurate prediction of the behavior of cohesive particles. Rigorous cohesion models commonly contain parameters related to surface roughness, to which cohesion shows extreme sensitivity; however, both the measurement of roughness and its distillation into these model parameters are challenging. Accordingly, we propose a "square-force" model in which the cohesive force remains constant up to a cut-off separation. Via DEM simulations, we demonstrate the validity of the square-force model as a surrogate for more rigorous models when its two parameters are selected to match the two key quantities governing dense and dilute granular flows, namely the maximum cohesive force and the critical cohesive energy, respectively. Perhaps more importantly, we establish a method to extract the parameters of the square-force model via defluidization, owing to its ability to isolate the effects of the two parameters. Thus, instead of relying on complicated scans of individual grains, determination of particle-particle cohesion from simple bulk measurements becomes feasible. Dow Corning Corporation.
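
    Stated compactly (with notation assumed here rather than taken from the abstract), the square-force model holds the cohesive force constant out to a cut-off separation, so its two parameters map directly onto the two matched quantities:

```latex
F_{\mathrm{coh}}(\delta) =
\begin{cases}
F_{0}, & 0 \le \delta \le \delta_{c},\\[2pt]
0, & \delta > \delta_{c},
\end{cases}
\qquad
E_{\mathrm{coh}} = \int_{0}^{\delta_{c}} F_{0}\, \mathrm{d}\delta = F_{0}\, \delta_{c} .
```

    Here F0 plays the role of the maximum cohesive force (governing dense flows) and F0 δc that of the critical cohesive energy (governing dilute flows), which is why matching these two quantities fixes both parameters.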

  15. Sensing (un)binding events via surface plasmons: effects of resonator geometry

    NASA Astrophysics Data System (ADS)

    Antosiewicz, Tomasz J.; Claudio, Virginia; Käll, Mikael

    2016-04-01

    The resonance conditions of localized surface plasmon resonances (LSPRs) can be perturbed in any number of ways, making plasmon nanoresonators viable tools for detecting, e.g., phase changes, pH, gases, and single molecules. Precise measurement of molecular concentrations via LSPR hinges on the ability to confidently count the number of molecules attached to a metal resonator and, ideally, to track binding and unbinding events in real time. These two requirements make it necessary to rigorously quantify the relations between the number of bound molecules and the response of plasmonic sensors. This endeavor is hindered, on the one hand, by the spatially varying response of a given plasmonic nanosensor; on the other hand, the movement of molecules is determined by stochastic effects (Brownian motion) as well as deterministic flow, if present, in microfluidic channels. The combination of molecular dynamics and the electromagnetic response of the LSPR yields an uncertainty which is little understood and whose effect is often disregarded in quantitative sensing experiments. Using a combination of electromagnetic finite-difference time-domain (FDTD) calculations of the plasmon resonance peak shift of various metal nanosensors (disk, cone, rod, dimer) and stochastic diffusion-reaction simulations of biomolecular interactions on a sensor surface, we clarify the interplay between position-dependent binding probability and inhomogeneous sensitivity distribution. We show how the statistical characteristics of the total signal upon molecular binding are determined. The proposed methodology is, in general, applicable to any sensor and any transduction mechanism, although the specifics of implementation will vary depending on circumstances. In this work we focus on elucidating how the interplay between electromagnetic and stochastic effects affects the feasibility of employing particular shapes of plasmonic sensors for real-time monitoring of individual binding reactions or for sensing low concentrations, and which characteristics make a given sensor optimal for a given task. We also address how particular illumination conditions affect the level of uncertainty of the measured signal upon molecular binding.

  16. Next generation sensing platforms for extended deployments in large-scale, multidisciplinary, adaptive sampling and observational networks

    NASA Astrophysics Data System (ADS)

    Cross, J. N.; Meinig, C.; Mordy, C. W.; Lawrence-Slavas, N.; Cokelet, E. D.; Jenkins, R.; Tabisola, H. M.; Stabeno, P. J.

    2016-12-01

    New autonomous sensors have dramatically increased the resolution and accuracy of oceanographic data collection, enabling rapid sampling over extremely fine scales. Innovative new autonomous platforms like floats, gliders, drones, and crawling moorings leverage the full potential of these new sensors by extending spatiotemporal reach across varied environments. During 2015 and 2016, the Innovative Technology for Arctic Exploration Program at the Pacific Marine Environmental Laboratory tested several new types of fully autonomous platform with increased speed, durability, and power and payload capacity, designed to deliver cutting-edge ecosystem assessment sensors to remote or inaccessible environments. The Expendable Ice-Tracking (EXIT) float developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) is moored near the bottom during the ice-free season and released on an autonomous timer beneath the ice during the following winter. The float collects a rapid profile during ascent and continues to collect critical, poorly accessible under-ice data until melt, when the data are transmitted via satellite. The autonomous Oculus sub-surface glider developed by the University of Washington and PMEL has a large power and payload capacity and an enhanced buoyancy engine; this 'coastal truck' is designed for the rapid water-column ascent required by optical imaging systems. The Saildrone is a solar- and wind-powered unmanned surface vessel (USV) developed by Saildrone, Inc. in partnership with PMEL. This large-payload (200 lbs), fast (1-7 kts), durable (46 kts winds) platform was equipped with 15 sensors designed for ecosystem assessment during 2016, including passive and active acoustic systems specially redesigned for autonomous vehicle deployments. The sensors deployed on these platforms met rigorous accuracy and precision standards. These innovative platforms provide new sampling capabilities and cost efficiencies in high-resolution sensor deployment, including reconnaissance for annual fisheries and marine mammal surveys, better linkages between sustained observing platforms, and adaptive deployments that can easily target anomalies as they arise.

  17. Modeling and simulation of soft sensor design for real-time speed estimation, measurement and control of induction motor.

    PubMed

    Etien, Erik

    2013-05-01

    This paper deals with the design of a speed soft sensor for an induction motor. The sensor is based on the physical model of the motor. Because the validation step highlights that the sensor cannot be validated for all operating points, the model is modified in order to obtain a fully validated sensor over the whole speed range. An original feature of the proposed approach is that the modified model is derived from a stability analysis using automatic control theory. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Analysis and modeling of leakage current sensor under pulsating direct current

    NASA Astrophysics Data System (ADS)

    Li, Kui; Dai, Yihua; Wang, Yao; Niu, Feng; Chen, Zhao; Huang, Shaopo

    2017-05-01

    In this paper, the transformation characteristics of a current sensor under pulsating DC leakage current are investigated. A mathematical model of the current sensor is proposed to accurately describe the secondary-side current and the excitation current. The transformation process of the current sensor is illustrated in detail, and the transformation error is analyzed from multiple aspects. A simulation model is built and a sensor prototype is designed for comparative evaluation; both simulation and experimental results are presented to verify the correctness of the theoretical analysis.

  19. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as the change in film resistance. Statistically validated QSAR models have been developed for a sensor array using Genetic Function Approximation (GFA) on a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not included in the training set used for model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection, and it can also be used to predict the response of an existing sensing film to new target analytes.
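
    A minimal sketch of the QSAR workflow just described: regress measured responses on a descriptor matrix, validate statistically, then predict the response to unseen analytes. The synthetic descriptors and the plain linear regression below are illustrative stand-ins for the paper's GFA-selected descriptor models:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_train = rng.normal(size=(40, 6))   # 40 training analytes x 6 descriptors (assumed)
true_w = np.array([0.8, 0.0, -0.5, 0.3, 0.0, 0.1])
y_train = X_train @ true_w + 0.05 * rng.normal(size=40)   # response: change in film resistance

model = LinearRegression().fit(X_train, y_train)
print("cross-validated R^2:",
      cross_val_score(LinearRegression(), X_train, y_train, cv=5).mean())

X_new = rng.normal(size=(5, 6))      # descriptors of analytes outside the training set
print("predicted responses:", model.predict(X_new))
```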

  20. Fractional Stochastic Differential Equations Satisfying Fluctuation-Dissipation Theorem

    NASA Astrophysics Data System (ADS)

    Li, Lei; Liu, Jian-Guo; Lu, Jianfeng

    2017-10-01

    We propose in this work a fractional stochastic differential equation (FSDE) model consistent with the over-damped limit of the generalized Langevin equation model. As a result of the 'fluctuation-dissipation theorem', the differential equations driven by fractional Brownian noise used to model memory effects should be paired with Caputo derivatives, and this FSDE model should be understood in integral form. We establish the existence of strong solutions for such equations and discuss the ergodicity and convergence to the Gibbs measure. In the linear forcing regime, we show rigorously the algebraic convergence to the Gibbs measure when the 'fluctuation-dissipation theorem' is satisfied, verifying that satisfying the 'fluctuation-dissipation theorem' indeed leads to the correct physical behavior. We further discuss possible approaches to analyzing the ergodicity and convergence to the Gibbs measure in the nonlinear forcing regime, while leaving the rigorous analysis for future work. The proposed FSDE model is suitable for systems in contact with a heat bath having a power-law memory kernel and exhibiting subdiffusive behavior.

  1. Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network

    PubMed Central

    Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan

    2014-01-01

    Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks, which assume a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, how well a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between target and sensor. Target coverage optimization algorithms based on this expected coverage value are then presented for the single-sensor single-target, multi-sensor single-target, and single-sensor multi-target problems, respectively. For the multi-sensor multi-target problem, which is NP-complete, candidate orientations are generated by rotating each sensor to cover every target falling within its FoV disk, and a genetic algorithm is used to obtain an approximate minimum subset of sensors that covers all targets in the network. Simulation results show the algorithms' performance and the effect of the number of targets on the resulting subset. PMID:25136667
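
    A minimal sketch of the FoV coverage test described above: a target is covered when it lies within the sensing radius and within half the FoV angle of the sensor's current orientation. The particular expected-coverage score combining angle and distance is an assumed illustrative form, not the paper's definition:

```python
import math

def covers(sensor_xy, orientation_rad, fov_rad, radius, target_xy):
    dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
    if math.hypot(dx, dy) > radius:
        return False
    # Deflection between target bearing and orientation, wrapped to [-pi, pi].
    defl = (math.atan2(dy, dx) - orientation_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(defl) <= fov_rad / 2

def expected_coverage(sensor_xy, orientation_rad, fov_rad, radius, target_xy):
    """Score in [0, 1]: higher for targets that are closer and more central."""
    if not covers(sensor_xy, orientation_rad, fov_rad, radius, target_xy):
        return 0.0
    dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
    dist = math.hypot(dx, dy)
    defl = (math.atan2(dy, dx) - orientation_rad + math.pi) % (2 * math.pi) - math.pi
    return (1 - dist / radius) * (1 - abs(defl) / (fov_rad / 2))

# Sensor at the origin, facing along +x with a 60-degree FoV and 10 m range:
print(expected_coverage((0, 0), 0.0, math.radians(60), 10.0, (5, 1)))
```

    In a genetic-algorithm formulation, scores like this over all sensor-target pairs form the fitness function, and the chromosome encodes which orientation each sensor adopts.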

  2. Rigorous description of holograms of particles illuminated by an astigmatic elliptical Gaussian beam

    NASA Astrophysics Data System (ADS)

    Yuan, Y. J.; Ren, K. F.; Coëtmellec, S.; Lebrun, D.

    2009-02-01

    Digital holography is a non-intrusive optical metrology technique well adapted to measuring the size and velocity field of particles in a fluid spray. The simplified model of an opaque disk is often used when processing the holograms, so the refraction and the third-dimension diffraction of the particle are not taken into account. In this paper we present a rigorous description of the holograms and evaluate the effects of refraction and third-dimension diffraction by comparison with the opaque disk model. It is found that these effects are important when the real part of the refractive index is near unity or when the imaginary part is nonzero but small.

  3. Engineering education as a complex system

    NASA Astrophysics Data System (ADS)

    Gattie, David K.; Kellam, Nadia N.; Schramski, John R.; Walther, Joachim

    2011-12-01

    This paper presents a theoretical basis for cultivating engineering education as a complex system that will prepare students to think critically and make decisions with regard to poorly understood, ill-structured issues. Integral to this theoretical basis is a solution space construct developed and presented as a benchmark for evaluating problem-solving orientations that emerge within students' thinking as they progress through an engineering curriculum. It is proposed that the traditional engineering education model, while analytically rigorous, is characterised by properties that, although necessary, are insufficient for preparing students to address complex issues of the twenty-first century. A Synthesis and Design Studio model for engineering education is proposed, which maintains the necessary rigor of analysis within a uniquely complex yet sufficiently structured learning environment.

  4. A Prospective Test of Cognitive Vulnerability Models of Depression with Adolescent Girls

    ERIC Educational Resources Information Center

    Bohon, Cara; Stice, Eric; Burton, Emily; Fudell, Molly; Nolen-Hoeksema, Susan

    2008-01-01

    This study sought to provide a more rigorous prospective test of two cognitive vulnerability models of depression with longitudinal data from 496 adolescent girls. Results supported the cognitive vulnerability model in that stressors predicted future increases in depressive symptoms and onset of clinically significant major depression for…

  5. Learning, Judgment, and the Rooted Particular

    ERIC Educational Resources Information Center

    McCabe, David

    2012-01-01

    This article begins by acknowledging the general worry that scholarship in the humanities lacks the rigor and objectivity of other scholarly fields. In considering the validity of that criticism, I distinguish two models of learning: the covering law model exemplified by the natural sciences, and the model of rooted particularity that…

  6. The output voltage model and experiment of magnetostrictive displacement sensor based on Wiedemann effect

    NASA Astrophysics Data System (ADS)

    Wang, Bowen; Li, Yuanyuan; Xie, Xinliang; Huang, Wenmei; Weng, Ling; Zhang, Changgeng

    2018-05-01

    Based on the Wiedemann effect and the inverse magnetostrictive effect, an output voltage model of a magnetostrictive displacement sensor has been established. The output voltage of the sensor is calculated for different magnetic fields, and the calculated results are found to be in agreement with the experimental ones. The theoretical and experimental results show that the output voltage of the displacement sensor is linearly related to the magnetostriction difference, (λl - λt), of the waveguide wires. The measured output voltages for the Fe-Ga and Fe-Ni wire sensors are 51.5 mV and 36.5 mV, respectively; the output voltage of the Fe-Ga wire sensor is markedly higher than that of the Fe-Ni wire sensor under the same magnetic field. The model can be used to predict the output voltage of the sensor and to guide its optimized design.

  7. Human Activity Recognition by Combining a Small Number of Classifiers.

    PubMed

    Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin

    2016-09-01

    We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the outputs of the sensors. The models are based on a soft-output combination of individual classifiers to deal with the small number of sensors, and we also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model, to be employed in supervised and semi-supervised situations, respectively. Using different real HAR databases, we compare our classifier combination models against a single classifier that employs all the signals from the sensors. Our models consistently exhibit a reduction of the error rate and an increase of robustness against sensor failures, and they also outperform other classifier combination models that do not consider soft outputs and a Markovian structure of the human activities.
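
    A minimal sketch of the general idea of combining per-sensor soft outputs with a first-order Markov chain over activities, implemented as a forward filtering pass; the transition matrix and soft outputs below are illustrative, not the paper's learned model:

```python
import numpy as np

A = np.array([[0.9, 0.1],          # transition matrix: walk -> {walk, sit}
              [0.2, 0.8]])         #                    sit  -> {walk, sit}
belief = np.array([0.5, 0.5])      # uniform prior over the two activities

# Soft outputs from two sensors at three time steps: p(activity | sensor).
sensor_probs = np.array([
    [[0.7, 0.3], [0.6, 0.4]],
    [[0.4, 0.6], [0.3, 0.7]],
    [[0.2, 0.8], [0.4, 0.6]],
])

for t, obs in enumerate(sensor_probs):
    like = obs.prod(axis=0)        # naive-Bayes fusion of the two sensors
    belief = like * (A.T @ belief) # predict with the chain, then update
    belief /= belief.sum()
    print(f"t={t}: p(walk)={belief[0]:.2f}, p(sit)={belief[1]:.2f}")
```

    The Markov prediction step is what smooths out isolated per-sensor errors and gives robustness when one sensor fails: the surviving sensors' soft outputs still combine with the temporal prior.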

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Surajit; Ladpli, Purim; Chang, Fu-Kuo

    Accurate interpretation of in-situ piezoelectric sensor signals is a challenging task. This article presents the development of a numerical compensation model based on physical insight to address the influence of structural loads on piezo-sensor signals. The model requires knowledge of the in-situ strain and temperature distribution in a structure while acquiring sensor signals. The parameters of the numerical model are obtained using experiments on a flat aluminum plate under uniaxial tensile loading. It is shown that the model parameters obtained experimentally can be used for different structures and sensor layouts. Furthermore, the combined effects of load and temperature on the piezo-sensor response are also investigated, and it is observed that both of these factors have a coupled effect on the sensor signals. It is proposed to obtain compensation model parameters under a range of operating temperatures to address this coupling effect. An important outcome of this study is a new load monitoring concept using in-situ piezoelectric sensor signals to track changes in the load paths in a structure.

  9. A Neural Network Approach for Building An Obstacle Detection Model by Fusion of Proximity Sensors Data

    PubMed Central

    Peralta, Emmanuel; Vargas, Héctor; Hermosilla, Gabriel

    2018-01-01

    Proximity sensors are broadly used in mobile robots for obstacle detection. The traditional calibration process for this kind of sensor can be time-consuming, because it is usually done by identification in a manual and repetitive way. The resulting obstacle detection models are usually nonlinear functions that can differ for each proximity sensor attached to the robot. In addition, the model is highly dependent on the type of sensor (e.g., ultrasonic or infrared), on changes in light intensity, and on properties of the obstacle such as shape, colour, and surface texture, among others. That is why, in some situations, it can be useful to gather all the measurements provided by the different kinds of sensor in order to build a unique model that estimates the distances to the obstacles around the robot. This paper presents a novel approach for obtaining an obstacle detection model based on the fusion of sensor data and automatic calibration using artificial neural networks. PMID:29495338
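
    A minimal sketch of the fusion idea: train a small neural network that maps raw readings from several proximity sensors to a distance estimate, replacing per-sensor manual calibration. The synthetic sensor response curves and network size below are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
true_dist = rng.uniform(0.1, 2.0, size=500)            # metres (assumed range)

# Two sensors with different, nonlinear responses plus noise:
ir = 1.0 / (true_dist + 0.1) + 0.05 * rng.normal(size=500)   # infrared-like
us = true_dist + 0.02 * rng.normal(size=500)                 # ultrasonic-like
X = np.column_stack([ir, us])

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(X, true_dist)

# A single fused model now serves both sensors at once:
print(net.predict([[1.0 / (0.5 + 0.1), 0.5]]))         # should be near 0.5 m
```

    The appeal of this design is that the network absorbs each sensor's nonlinearity and their relative reliability in one training step, instead of fitting and maintaining a separate calibration curve per sensor.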

  10. Charge modeling of ionic polymer-metal composites for dynamic curvature sensing

    NASA Astrophysics Data System (ADS)

    Bahramzadeh, Yousef; Shahinpoor, Mohsen

    2011-04-01

    A curvature sensor based on an Ionic Polymer-Metal Composite (IPMC) is proposed and characterized for sensing curvature variations in structures such as inflatable space structures, where a low-power, flexible curvature sensor is of high importance for dynamic monitoring of shape at desired points. The linearity of the sensor's output signal (for calibration), the effect of deflection rate at low frequencies, and the phase delay between the output signal and the input deformation of the IPMC curvature sensor are investigated. An analytical chemo-electro-mechanical model of the charge dynamics of the IPMC sensor, based on the Nernst-Planck partial differential equation, is presented and can explain the phenomena observed in the experiments. The rate dependency of the output signal and the phase delay between the applied deformation and the sensor signal are studied using the proposed model, which provides a basis for predicting the general characteristics of the IPMC sensor. It is shown that the IPMC sensor exhibits good linearity, sensitivity, and repeatability for dynamic curvature sensing of inflatable structures.
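
    For reference, the Nernst-Planck equation underlying such charge-dynamics models has the standard form below, written here for a single ionic species; the paper's specific boundary conditions and mechanical coupling are not reproduced:

```latex
\frac{\partial C}{\partial t} =
\nabla \cdot \left( D \nabla C + \frac{D z F}{R T}\, C\, \nabla \phi \right),
```

    where C is the ion concentration, D the diffusivity, z the charge number, F Faraday's constant, R the gas constant, T the temperature and φ the electric potential. Bending the polymer redistributes the mobile ions, and the resulting charge imbalance at the electrodes is what produces the sensing signal.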

  11. Proposed evaluation framework for assessing operator performance with multisensor displays

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1992-01-01

    Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The operator's performance with the sensor fusion display can be compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This allows for the determination as to when a sensor fusion system leads to: 1) poorer performance than one of the original sensor displays (clearly an undesirable system in which the fused sensor system causes some distortion or interference); 2) better performance than with either single sensor system alone, but at a sub-optimal (compared to the model predictions) level; 3) optimal performance (compared to model predictions); or, 4) super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.

  12. Preserving pre-rigor meat functionality for beef patty production.

    PubMed

    Claus, J R; Sørheim, O

    2006-06-01

    Three methods were examined for preserving pre-rigor meat functionality in beef patties. Hot-boned semimembranosus muscles were processed as follows: (1) pre-rigor ground, salted, patties immediately cooked; (2) pre-rigor ground, salted and stored overnight; (3) pre-rigor injected with brine; and (4) post-rigor ground and salted. Raw patties contained 60% lean beef, 19.7% beef fat trim, 1.7% NaCl, 3.6% starch, and 15% water. Pre-rigor processing occurred at 3-3.5h postmortem. Patties made from pre-rigor ground meat had higher pH values; greater protein solubility; firmer, more cohesive, and chewier texture; and substantially lower cooking losses than the other treatments. Addition of salt was sufficient to reduce the rate and extent of glycolysis. Brine injection of intact pre-rigor muscles resulted in some preservation of the functional properties but not as pronounced as with salt addition to pre-rigor ground meat.

  13. Virtual sensor models for real-time applications

    NASA Astrophysics Data System (ADS)

    Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin

    2016-09-01

    The increasing complexity and criticality of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. Since the perception of the sensors is crucial for driver assistance functions, the sensors themselves also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space based method is capable of modeling various types of behavior. Here, the modeling of the position estimation of an automotive radar system, including autocorrelations, is presented. To achieve real-time capability, an efficient implementation is presented.
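
    A minimal sketch of a statistical, state-space style sensor error model in the spirit described above: the radar position error is simulated as a first-order autoregressive (AR(1)) process, which reproduces the autocorrelation that a plain white-noise model misses. The coefficient and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, n = 0.9, 0.1, 1000       # AR coefficient, innovation std, samples

err = np.zeros(n)
for k in range(1, n):
    err[k] = phi * err[k - 1] + sigma * rng.normal()

true_range = np.linspace(20.0, 60.0, n)   # ground-truth target range (m)
measured = true_range + err               # what the modeled sensor reports

# Lag-1 autocorrelation is near phi, unlike uncorrelated measurement noise:
print(np.corrcoef(err[:-1], err[1:])[0, 1])
```

    Feeding a driving simulation with correlated errors like these matters because downstream tracking filters behave very differently under correlated noise than under the white noise they nominally assume.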

  14. Estimation of the time since death--reconsidering the re-establishment of rigor mortis.

    PubMed

    Anders, Sven; Kunz, Michaela; Gehl, Axel; Sehner, Susanne; Raupach, Tobias; Beck-Bornholdt, Hans-Peter

    2013-01-01

    In forensic medicine, there is little defined data on the phenomenon of re-establishment of rigor mortis after mechanical loosening, a method used in establishing time since death in forensic casework; re-establishment is thought to occur up to 8 h post-mortem. Nevertheless, the method is widely described in textbooks on forensic medicine. We examined 314 joints (elbow and knee) of 79 deceased at defined time points up to 21 h post-mortem (hpm). Data were analysed using a random intercept model. Here, we show that re-establishment occurred in 38.5% of joints at 7.5 to 19 hpm. The maximum time span for the re-establishment of rigor mortis therefore appears to be 2.5-fold longer than previously thought. These findings have a major impact on the estimation of time since death in forensic casework.

  15. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (Visible/Infrared Sensor Trades, Analyses, and Simulations), combines classical image processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of sensor systems, including imaging, tracking, and point-target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring two-dimensional array sensors, which can be used for either imaging or point-source detection.

  16. Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors

    PubMed Central

    Li, Frédéric; Nisar, Muhammad Adeel; Köping, Lukas; Grzegorzek, Marcin

    2018-01-01

    Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches—in particular deep-learning based—have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performances faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we firstly propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the codes and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long-Short-Term-Memory (LSTM) to obtain features characterising both short- and long-term time dependencies in the data. PMID:29495310

  17. Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors.

    PubMed

    Li, Frédéric; Shirahama, Kimiaki; Nisar, Muhammad Adeel; Köping, Lukas; Grzegorzek, Marcin

    2018-02-24

    Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches—in particular deep-learning based—have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performances faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we firstly propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the codes and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long-Short-Term-Memory (LSTM) to obtain features characterising both short- and long-term time dependencies in the data.

  18. History and Future for the Happy Marriage between the MODIS Land team and Fluxnet

    NASA Astrophysics Data System (ADS)

    Running, S. W.

    2015-12-01

    When I wrote the proposal to NASA in 1988 for daily global evapotranspiration and gross primary production algorithms for the MODIS sensor, I had no validation plan. Fluxnet probably saved my MODIS career by developing a global network of rigorously calibrated towers measuring water and carbon fluxes over a wide variety of ecosystems that I could not even envision at the time that first proposal was written. However, my enthusiasm for Fluxnet was not reciprocated by the Fluxnet community until we began providing 7 x 7 pixel MODIS Land datasets exactly over each of their towers every 8 days, without them having to crawl through the global datasets and make individual orders. This system, known informally as the MODIS ASCII cutouts, began in 2002 and operates at the Oak Ridge DAAC to this day, cementing a mutually beneficial data interchange between the Fluxnet and remote sensing communities. This talk will briefly discuss the history of MODIS validation with flux towers, and flux spatial scaling with MODIS data. More importantly, I will detail the future continuity of global biophysical datasets in the post-MODIS era, and what next generation sensors will provide.

  19. Development of lidar sensor for cloud-based measurements during convective conditions

    NASA Astrophysics Data System (ADS)

    Vishnu, R.; Bhavani Kumar, Y.; Rao, T. Narayana; Nair, Anish Kumar M.; Jayaraman, A.

    2016-05-01

    Atmospheric convection is a natural phenomenon associated with heat transport. Convection is strong during daylight periods and vigorous in the summer months, and severe ground heating associated with strong winds is experienced during these periods. The tropics are considered the source regions for strong convection, and the formation of thunderstorm clouds is common during this period. The location of the cloud base and its associated dynamics are important for understanding the influence of convection on the atmosphere. Lidars are sensitive to Mie scattering and are more suitable instruments for locating clouds in the atmosphere than instruments utilizing the radio frequency spectrum. Thunderstorm clouds are composed of hydrometeors and strongly scatter laser light. Recently, a lidar technique was developed at the National Atmospheric Research Laboratory (NARL), a Department of Space (DOS) unit located at Gadanki near Tirupati. The technique employs slant-path operation and provides high-resolution measurements of the cloud-base location in real time. This laser-based remote sensing technique allows measurement of the atmosphere every second at 7.5 m range resolution. The high-resolution data permit assessment of updrafts at the cloud base. The lidar also provides the real-time convective boundary layer height, using aerosols as tracers of atmospheric dynamics. The developed lidar sensor is planned to be upgraded with a scanning facility to understand cloud dynamics in the spatial dimension. In this presentation, we describe the lidar sensor technology and its use for high-resolution cloud-base measurements during convective conditions over the lidar site at Gadanki.
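
    For reference, a range resolution like the 7.5 m quoted above follows directly from receiver timing: a pulsed lidar resolves half the distance light travels in one sampling interval. A minimal sketch, assuming the figure corresponds to a roughly 50 ns sampling interval (an assumption, not stated in the abstract):

    ```python
    C = 299_792_458.0  # speed of light (m/s)

    def range_resolution(sample_interval_s: float) -> float:
        """Pulsed-lidar range resolution: the round trip halves the
        one-way distance light travels in one sampling interval."""
        return C * sample_interval_s / 2.0

    print(range_resolution(50e-9))  # ~7.49 m, matching the cited 7.5 m
    ```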

  20. The Community Cloud retrieval for CLimate (CC4CL) - Part 1: A framework applied to multiple satellite imaging sensors

    NASA Astrophysics Data System (ADS)

    Sus, Oliver; Stengel, Martin; Stapelberg, Stefan; McGarragh, Gregory; Poulsen, Caroline; Povey, Adam C.; Schlundt, Cornelia; Thomas, Gareth; Christensen, Matthew; Proud, Simon; Jerg, Matthias; Grainger, Roy; Hollmann, Rainer

    2018-06-01

    We present here the key features of the Community Cloud retrieval for CLimate (CC4CL) processing algorithm. We focus on the novel features of the framework: the optimal estimation approach in general, explicit uncertainty quantification through rigorous propagation of all known error sources into the final product, and the consistency of our long-term, multi-platform time series provided at various resolutions, from 0.5° to 0.02°. By describing all key input data and processing steps, we aim to inform the user about important features of this new retrieval framework and its potential applicability to climate studies. We provide an overview of the retrieved and derived output variables. These are analysed for four, partly very challenging, scenes collocated with CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) observations in the high latitudes and over the Gulf of Guinea-West Africa. The results show that CC4CL provides very realistic estimates of cloud top height and cover for optically thick clouds but, where optically thin clouds overlap, returns a height between the two layers. CC4CL is a unique, coherent, multi-instrument cloud property retrieval framework applicable to passive sensor data of several EO missions. Through its flexibility, CC4CL offers the opportunity for combining a variety of historic and current EO missions into one dataset, which, compared to single sensor retrievals, is improved in terms of accuracy and temporal sampling.
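
    The optimal estimation approach mentioned here is conventionally implemented as a Gauss-Newton minimisation of a cost function balancing measurement fit against a priori knowledge, with the posterior covariance providing the propagated uncertainty. A generic sketch in that spirit, not the CC4CL implementation; the forward model F, Jacobian K, prior x_a, and covariances S_a and S_y are user-supplied:

    ```python
    import numpy as np

    def optimal_estimation(y, forward, jacobian, x_a, S_a, S_y, n_iter=10):
        """Gauss-Newton optimal estimation: minimise
        J(x) = (y-F(x))^T Sy^-1 (y-F(x)) + (x-xa)^T Sa^-1 (x-xa).
        Returns the retrieved state and its posterior covariance."""
        Sa_inv = np.linalg.inv(S_a)
        Sy_inv = np.linalg.inv(S_y)
        x = x_a.copy()
        for _ in range(n_iter):
            K = jacobian(x)                 # sensitivity of radiances to state
            A = Sa_inv + K.T @ Sy_inv @ K
            b = K.T @ Sy_inv @ (y - forward(x) + K @ (x - x_a))
            x = x_a + np.linalg.solve(A, b)
        S_hat = np.linalg.inv(A)            # retrieved-state uncertainty
        return x, S_hat
    ```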

  1. Equivalent Sensor Radiance Generation and Remote Sensing from Model Parameters. Part 1; Equivalent Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.

    2013-01-01

    In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.
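
    A toy illustration of the subgrid idea, not the GEOS-5 scheme: subcolumns of total water are drawn from an assumed gamma PDF matching the grid-box mean and variance, the excess over saturation is treated as cloud condensate, and a placeholder radiance is averaged over the footprint. All functional forms and constants below are invented for illustration.

    ```python
    import numpy as np

    def equivalent_radiance(qt_mean, qt_var, q_sat, n_subcolumns=100, seed=0):
        """Draw subcolumn total water from a gamma PDF, convert the
        super-saturated excess to condensate, and average a simple
        cloud-dependent radiance over the sensor footprint."""
        rng = np.random.default_rng(seed)
        shape = qt_mean**2 / qt_var        # gamma parameters matching the
        scale = qt_var / qt_mean           # grid-mean and subgrid variance
        qt = rng.gamma(shape, scale, n_subcolumns)
        condensate = np.clip(qt - q_sat, 0.0, None)
        # Placeholder radiance model: clear-sky value darkened by cloud water.
        radiance = 100.0 * np.exp(-50.0 * condensate)
        return radiance.mean()

    print(equivalent_radiance(qt_mean=0.012, qt_var=1e-5, q_sat=0.010))
    ```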

  2. Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.

    PubMed

    Howcroft, Jennifer; Lemaire, Edward D; Kofman, Jonathan

    2016-01-01

    Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6 month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, the Community Health Activities Model Program for Seniors questionnaire, the six minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head sensor-based models had the best performance of the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should be developed using support vector machines and neural networks, with a multi-sensor single-task gait assessment.
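
    A minimal sketch of the modelling step using scikit-learn's MLPClassifier on synthetic stand-in data; the feature matrix, sizes, and scores are hypothetical, not the study's.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical feature matrix: one row per participant, columns holding
    # gait parameters from insole pressure and accelerometer signals.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))
    y = rng.integers(0, 2, size=100)        # 1 = faller, 0 = non-faller

    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    print(cross_val_score(model, X, y, cv=5, scoring="f1").mean())
    ```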

  3. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 1: A cloud ensemble/radiative parameterization for sensor response (report version)

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Raymond, William H.

    1990-01-01

    The physical retrieval of geophysical parameters based upon remotely sensed data requires a sensor response model which relates the upwelling radiances that the sensor observes to the parameters to be retrieved. In the retrieval of precipitation water contents from satellite passive microwave observations, the sensor response model has two basic components. First, a description of the radiative transfer of microwaves through a precipitating atmosphere must be considered, because it is necessary to establish the physical relationship between precipitation water content and upwelling microwave brightness temperature. Also, the spatial response of the satellite microwave sensor (or antenna pattern) must be included in the description of sensor response, since precipitation and the associated brightness temperature field can vary over a typical microwave sensor resolution footprint. A 'population' of convective cells, as well as stratiform clouds, is simulated using a computationally-efficient multi-cylinder cloud model. Ensembles of clouds selected at random from the population, distributed over a 25 km x 25 km model domain, serve as the basis for radiative transfer calculations of upwelling brightness temperatures at the SSM/I frequencies. Sensor spatial response is treated explicitly by convolving the upwelling brightness temperature field with the domain-integrated SSM/I antenna patterns. The sensor response model is utilized in precipitation water content retrievals.
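
    The antenna-pattern convolution can be sketched generically by smoothing a high-resolution brightness-temperature field with a Gaussian footprint, a common stand-in for the true SSM/I patterns used in the paper; all sizes below are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def antenna_smoothed_tb(tb_field, footprint_sigma_km, grid_km):
        """Approximate the sensor's spatial response by convolving the
        high-resolution brightness-temperature field with a Gaussian
        antenna pattern (a stand-in for the true SSM/I pattern)."""
        return gaussian_filter(tb_field, sigma=footprint_sigma_km / grid_km)

    # 25 km x 25 km domain on a 1 km grid with a warm raining core.
    tb = np.full((25, 25), 180.0)
    tb[10:15, 10:15] = 260.0
    tb_sensor = antenna_smoothed_tb(tb, footprint_sigma_km=6.0, grid_km=1.0)
    ```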

  4. A Polygon Model for Wireless Sensor Network Deployment with Directional Sensing Areas

    PubMed Central

    Wu, Chun-Hsien; Chung, Yeh-Ching

    2009-01-01

    The modeling of the sensing area of a sensor node is essential for the deployment algorithm of wireless sensor networks (WSNs). In this paper, a polygon model is proposed for the sensor node with directional sensing area. In addition, a WSN deployment algorithm is presented with topology control and scoring mechanisms to maintain network connectivity and improve sensing coverage rate. To evaluate the proposed polygon model and WSN deployment algorithm, a simulation is conducted. The simulation results show that the proposed polygon model outperforms the existing disk model and circular sector model in terms of the maximum sensing coverage rate. PMID:22303159
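
    Representing a directional sensing area as a polygon reduces coverage testing to a point-in-polygon query. A self-contained ray-casting sketch of that primitive (the paper's deployment algorithm adds topology control and scoring on top of it); the sector coordinates are invented:

    ```python
    def point_in_polygon(x, y, poly):
        """Ray-casting test: count crossings of a horizontal ray from (x, y)
        against each polygon edge; an odd count means the point is inside."""
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    # Directional sensing area approximated by a polygon: sensor at the
    # origin with a roughly 90-degree field of view opening to the right.
    sector_polygon = [(0.0, 0.0), (10.0, -10.0), (12.0, 0.0), (10.0, 10.0)]
    print(point_in_polygon(5.0, 1.0, sector_polygon))   # True: target covered
    ```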

  5. Vibration sensing in flexible structures using a distributed-effect modal domain optical fiber sensor

    NASA Technical Reports Server (NTRS)

    Reichard, Karl M.; Lindner, Douglas K.; Claus, Richard O.

    1991-01-01

    Modal domain optical fiber sensors have recently been employed in the implementation of system identification algorithms and the closed-loop control of vibrations in flexible structures. The mathematical model of the modal domain optical fiber sensor used in these applications, however, only accounted for the effects of strain in the direction of the fiber's longitudinal axis. In this paper, we extend this model to include the effects of arbitrary stress. Using this sensor model, we characterize the sensor's sensitivity and dynamic range.

  6. Statistical analysis of target acquisition sensor modeling experiments

    NASA Astrophysics Data System (ADS)

    Deaver, Dawne M.; Moyer, Steve

    2015-05-01

    The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.

  7. 75 FR 61820 - Model Specifications for Breath Alcohol Ignition Interlock Devices (BAIIDs)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-06

    ... technology to alcohol-specific sensors (such as fuel cell technology based on electro-chemical oxidation of alcohol) or other emerging sensor technologies? Or, should NHTSA not specify the sensor technology and... require alcohol- specific technology in the Model Specifications, but that the particular sensor design...

  8. CMS-Wave

    DTIC Science & Technology

    2014-10-27

    a phase-averaged spectral wind-wave generation and transformation model and its interface in the Surface-water Modeling System (SMS). Ambrose...applications of the Boussinesq (BOUSS-2D) wave model that provides more rigorous calculations for design and performance optimization of integrated...navigation systems. Together these wave models provide reliable predictions on regional and local spatial domains and cost-effective engineering solutions

  9. Deposition Of Thin-Film Sensors On Glass-Fiber/Epoxy Models

    NASA Technical Reports Server (NTRS)

    Tran, Sang Q.

    1995-01-01

    A direct-deposition process was devised for fabrication of thin-film sensors on three-dimensional, curved surfaces of models made of stainless steel covered with glass-fiber/epoxy-matrix composite material. The models are used under cryogenic conditions, and the sensors are used to detect on-line transitions between laminar and turbulent flows in wind tunnel environments. The sensors, fabricated by this process, are used at temperatures from minus 300 to 175 degrees Fahrenheit.

  10. Fiber-optic instrumentation: Cryogenic sensor model description. [for measurement of conditions in cryogenic liquid propellant tanks

    NASA Technical Reports Server (NTRS)

    Sharma, M. M.

    1979-01-01

    An assessment and determination of technology requirements for developing a demonstration model to evaluate feasibility of practical cryogenic liquid level, pressure, and temperature sensors is presented. The construction of a demonstration model to measure characteristics of the selected sensor and to develop test procedures are discussed as well as the development of an appropriate electronic subsystem to operate the sensors.

  11. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  12. Rigorous evaluation of chemical measurement uncertainty: liquid chromatographic analysis methods using detector response factor calibration

    NASA Astrophysics Data System (ADS)

    Toman, Blaza; Nelson, Michael A.; Bedner, Mary

    2017-06-01

    Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model, which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random-effects meta-analysis yields results similar to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
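
    The MC approach of GUM Supplement 1 amounts to propagating assigned input distributions through the measurement equation by sampling. A minimal sketch for a response-factor calibration of the kind described; all means and standard uncertainties are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000

    # Hypothetical LC-UV inputs: calibrant concentration and two peak areas,
    # each assigned a probability distribution per GUM Supplement 1.
    c_std    = rng.normal(50.0, 0.25, N)     # calibrant concentration (ug/g)
    a_std    = rng.normal(1.20e5, 900.0, N)  # calibrant peak area
    a_sample = rng.normal(1.05e5, 800.0, N)  # sample peak area

    rf = a_std / c_std                       # detector response factor
    c_sample = a_sample / rf                 # measurand: sample concentration

    print(f"c = {c_sample.mean():.3f} +/- {c_sample.std(ddof=1):.3f} ug/g")
    # A 95 % coverage interval straight from the Monte Carlo sample:
    print(np.percentile(c_sample, [2.5, 97.5]))
    ```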

  13. Model-based assessment of estuary ecosystem health using the latent health factor index, with application to the richibucto estuary.

    PubMed

    Chiu, Grace S; Wu, Margaret A; Lu, Lin

    2013-01-01

    The ability to quantitatively assess ecological health is of great interest to those tasked with monitoring and conserving ecosystems. For decades, biomonitoring research and policies have relied on multimetric health indices of various forms. Although indices are numbers, many are constructed based on qualitative procedures, thus limiting the quantitative rigor of the practical interpretations of such indices. The statistical modeling approach to construct the latent health factor index (LHFI) was recently developed. With ecological data that otherwise are used to construct conventional multimetric indices, the LHFI framework expresses such data in a rigorous quantitative model, integrating qualitative features of ecosystem health and preconceived ecological relationships among such features. This hierarchical modeling approach allows unified statistical inference of health for observed sites (along with prediction of health for partially observed sites, if desired) and of the relevance of ecological drivers, all accompanied by formal uncertainty statements from a single, integrated analysis. Thus far, the LHFI approach has been demonstrated and validated in a freshwater context. We adapt this approach to modeling estuarine health, and illustrate it on the previously unassessed system in Richibucto in New Brunswick, Canada, where active oyster farming is a potential stressor through its effects on sediment properties. Field data correspond to health metrics that constitute the popular AZTI marine biotic index and the infaunal trophic index, as well as abiotic predictors preconceived to influence biota. Our paper is the first to construct a scientifically sensible model that rigorously identifies the collective explanatory capacity of salinity, distance downstream, channel depth, and silt-clay content, all regarded a priori as qualitatively important abiotic drivers, towards site health in the Richibucto ecosystem. This suggests the potential effectiveness of the LHFI approach for assessing not only freshwater systems but aquatic ecosystems in general.

  14. Control systems using modal domain optical fiber sensors for smart structure applications

    NASA Technical Reports Server (NTRS)

    Lindner, Douglas K.; Reichard, Karl M.

    1991-01-01

    Recently, a new class of sensors has emerged for structural control which respond to environmental changes over a significant gauge length; these sensors are called distributed-effect sensors. These sensors can be fabricated with spatially varying sensitivity to the distributed measurand, and can be configured to measure a variety of structural parameters which cannot be measured directly using point sensors. Examples of distributed-effect sensors include piezoelectric film, holographic sensors, and modal domain optical fiber sensors. Optical fiber sensors are particularly attractive for smart structure applications because they are flexible, have low mass, and can easily be embedded directly into materials. In this paper we describe the implementation of weighted modal domain optical fiber sensors. The mathematical model of the modal domain optical fiber sensor is described and used to derive an expression for the sensor sensitivity. The effects of parameter variations on the sensor sensitivity are demonstrated to illustrate methods of spatially varying the sensor sensitivity.

  15. Chemiresistive Graphene Sensors for Ammonia Detection.

    PubMed

    Mackin, Charles; Schroeder, Vera; Zurutuza, Amaia; Su, Cong; Kong, Jing; Swager, Timothy M; Palacios, Tomás

    2018-05-09

    The primary objective of this work is to demonstrate a novel sensor system as a convenient vehicle for scaled-up repeatability and the kinetic analysis of a pixelated testbed. This work presents a sensor system capable of measuring hundreds of functionalized graphene sensors in a rapid and convenient fashion. The sensor system makes use of a novel array architecture requiring only one sensor per pixel and no selector transistor. The sensor system is employed specifically for the evaluation of Co(tpfpp)ClO4 functionalization of graphene sensors for the detection of ammonia as an extension of previous work. Co(tpfpp)ClO4-treated graphene sensors were found to provide 4-fold increased ammonia sensitivity over pristine graphene sensors. Sensors were also found to exhibit excellent selectivity over interfering compounds such as water and common organic solvents. The ability to monitor a large sensor array with 160 pixels provides insights into performance variations and reproducibility, critical factors in the development of practical sensor systems. All sensors exhibit the same linearly related responses with variations in response exhibiting Gaussian distributions, a key finding for variation modeling and quality engineering purposes. The mean correlation coefficient between sensor responses was found to be 0.999, indicating highly consistent sensor responses and excellent reproducibility of the Co(tpfpp)ClO4 functionalization. A detailed kinetic model is developed to describe sensor response profiles. The model consists of two adsorption mechanisms, one reversible and one irreversible, and is shown capable of fitting experimental data with a mean percent error of 0.01%.
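
    One plausible functional form for such a two-mechanism model is a sum of two first-order uptake terms fitted to the exposure-phase response. The parameterisation and data below are illustrative stand-ins, not the paper's model or measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def dual_site_response(t, a_rev, k_rev, a_irr, k_irr):
        """Sum of two first-order adsorption terms.  The 'irreversible'
        term would not recover on analyte removal; this fit alone only
        captures the exposure-phase profile."""
        return a_rev * (1 - np.exp(-k_rev * t)) + a_irr * (1 - np.exp(-k_irr * t))

    t = np.linspace(0, 300, 60)                      # exposure time (s)
    true = dual_site_response(t, 2.0, 0.05, 1.0, 0.005)
    noisy = true + np.random.default_rng(0).normal(0, 0.02, t.size)

    popt, _ = curve_fit(dual_site_response, t, noisy, p0=[1, 0.01, 1, 0.001])
    print(popt)   # recovered (a_rev, k_rev, a_irr, k_irr)
    ```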

  16. Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling

    NASA Astrophysics Data System (ADS)

    Röhl, S.; Bodenstedt, S.; Küderle, C.; Suwelack, S.; Kenngott, H.; Müller-Stich, B. P.; Dillmann, R.; Speidel, S.

    2012-02-01

    Minimally invasive surgery is medically complex and can benefit heavily from computer assistance. One way to help the surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the preoperative model. Haptic information, which could complement the visual sensor data, is still not established. In addition, biomechanical modeling of the surgical site can help in reflecting changes which cannot be captured by intraoperative sensors. We present a setting where a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also present our solution and first results concerning the integration of the force sensor, as well as the accuracy of the fusion of force measurements, surface reconstruction and biomechanical modeling.

  17. Secure anonymous mutual authentication for star two-tier wireless body area networks.

    PubMed

    Ibrahim, Maged Hamada; Kumari, Saru; Das, Ashok Kumar; Wazid, Mohammad; Odelu, Vanga

    2016-10-01

    Mutual authentication is a very important service that must be established between sensor nodes in a wireless body area network (WBAN) to ensure the originality and integrity of the patient's data sent by sensors distributed on different parts of the body. However, a mutual authentication service alone is not enough. An adversary can benefit from monitoring the traffic and learning which sensor is transmitting the patient's data. Observing the traffic and knowing its origin (even without disclosing the content) can reveal to the adversary information about the patient's medical conditions. Therefore, anonymity of the communicating sensors is an important service as well. Few works have been conducted in the area of mutual authentication among sensor nodes in WBANs, and none of them has considered anonymity among body sensor nodes. To the best of our knowledge, our protocol is the first attempt to consider this service in a two-tier WBAN. We propose a new secure protocol to realize anonymous mutual authentication and confidential transmission for the star two-tier WBAN topology. The proposed protocol uses simple cryptographic primitives. We prove the security of the proposed protocol using the widely accepted Burrows-Abadi-Needham (BAN) logic, and also through rigorous informal security analysis. In addition, to demonstrate the practicality of our protocol, we evaluate it using the NS-2 simulator. BAN logic and informal security analysis prove that our proposed protocol achieves the necessary security requirements and goals of an authentication service. The simulation results show the impact on various network parameters, such as end-to-end delay and throughput. The nodes in the network need to store only a few hundred bits, and need to perform only a few hash invocations, which are computationally very efficient. The communication cost of the proposed protocol is a few hundred bits in one round of communication. Due to the low computation cost, the energy consumed by the nodes is also low. Our proposed protocol is a lightweight anonymous mutual authentication protocol that mutually authenticates the sensor nodes with the controller node (hub) in a star two-tier WBAN topology. Results show that our protocol is more efficient than previously proposed protocols while achieving the necessary security requirements of a secure anonymous mutual authentication scheme. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
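
    For flavour, here is a generic nonce-based HMAC challenge-response between a sensor and a hub using only a pre-shared key. This is a textbook construction, not the authors' protocol, and it omits the anonymity service their scheme provides; the key establishment step is assumed.

    ```python
    import hmac, hashlib, os

    def mac(key: bytes, *parts: bytes) -> bytes:
        """Keyed MAC over a sequence of message parts."""
        return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

    # Pre-shared key established at sensor deployment (assumption).
    key = os.urandom(32)

    # Sensor -> hub: fresh nonce.  Hub -> sensor: its own nonce plus a MAC.
    n_sensor, n_hub = os.urandom(16), os.urandom(16)
    hub_proof = mac(key, b"hub", n_sensor, n_hub)

    # Sensor verifies the hub, then proves knowledge of the key in return.
    assert hmac.compare_digest(hub_proof, mac(key, b"hub", n_sensor, n_hub))
    sensor_proof = mac(key, b"sensor", n_hub, n_sensor)
    assert hmac.compare_digest(sensor_proof, mac(key, b"sensor", n_hub, n_sensor))
    # Both sides may now derive a session key, e.g. mac(key, b"sess", n_sensor, n_hub).
    ```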

  18. Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory Theory

    ERIC Educational Resources Information Center

    Gopnik, Alison; Wellman, Henry M.

    2012-01-01

    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework…

  19. Evaluating habitat suitability models for nesting white-headed woodpeckers in unburned forest

    Treesearch

    Quresh S. Latif; Victoria A. Saab; Kim Mellen-Mclean; Jonathan G. Dudley

    2015-01-01

    Habitat suitability models can provide guidelines for species conservation by predicting where species of interest are likely to occur. Presence-only models are widely used but typically provide only relative indices of habitat suitability (HSIs), necessitating rigorous evaluation often using independently collected presence-absence data. We refined and evaluated...

  20. Conservatoire Students' Experiences and Perceptions of Instrument-Specific Master Classes

    ERIC Educational Resources Information Center

    Long, Marion; Creech, Andrea; Gaunt, Helena; Hallam, Susan

    2014-01-01

    Historically, in the professional training of musicians, the master-apprentice model has played a central role in instilling the methods and values of the discipline, contributing to the rigorous formation of talent. Expert professional musicians advocate that certain thinking skills can be modelled through the master-apprentice model, yet its…

  1. Increasing the reliability of ecological models using modern software engineering techniques

    Treesearch

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  2. Hysteresis in the trade cycle

    NASA Astrophysics Data System (ADS)

    Mc Namara, Hugh A.; Pokrovskii, Alexei V.

    2006-02-01

    The Kaldor model, one of the first nonlinear models of macroeconomics, is modified to incorporate a Preisach nonlinearity. The new dynamical system thus created shows highly complicated behaviour. This paper presents a rigorous (computer-aided) proof of chaos in this new model, and of the existence of unstable periodic orbits of all minimal periods p>57.

  3. Designing an Educational Game with Ten Steps to Complex Learning

    ERIC Educational Resources Information Center

    Enfield, Jacob

    2012-01-01

    Few instructional design (ID) models exist which are specific for developing educational games. Moreover, those extant ID models have not been rigorously evaluated. No ID models were found which focus on educational games with complex learning objectives. "Ten Steps to Complex Learning" (TSCL) is based on the four component instructional…

  4. Vaporization and Zonal Mixing in Performance Modeling of Advanced LOX-Methane Rockets

    NASA Technical Reports Server (NTRS)

    Williams, George J., Jr.; Stiegemeier, Benjamin R.

    2013-01-01

    Initial modeling of LOX-methane reaction control engine (RCE) 100 lbf thrusters and larger, 5500 lbf thrusters with the TDK/VIPER code has shown good agreement with sea-level and altitude test data. However, the vaporization and zonal mixing upstream of the compressible-flow stage of the models leveraged empirical trends to match the sea-level data. This was necessary in part because the codes are designed primarily to handle the compressible part of the flow (i.e., contraction through expansion) and in part because there was limited data on the thrusters themselves on which to base a rigorous model. A more rigorous model has been developed which includes detailed vaporization trends based on element type and geometry, radial variations in mixture ratio within each of the "zones" associated with elements and not just between zones of different element types, and, to the extent possible, updated kinetic rates. The Spray Combustion Analysis Program (SCAP) was leveraged to support assumptions in the vaporization trends. Data from both thrusters are revisited, and the model maintains good predictive capability while addressing some of the major limitations of the previous version.

  5. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in the Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the information in the data. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in the Bayesian inversions. Hence, reliable estimation of model parameters and their uncertainties is possible without arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data from the North Korean nuclear explosion tests. Through the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties related to each of the processes, more quantitative monitoring and discrimination of seismic events is possible.
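
    At its core, a hierarchical Bayesian inversion samples the noise level alongside the model parameters rather than fixing it a priori. A toy fixed-dimension Metropolis-Hastings sketch of that idea on a linear model (the paper's trans-dimensional machinery additionally varies the number of parameters); all priors and step sizes are illustrative:

    ```python
    import numpy as np

    def log_posterior(theta, sigma, x, y):
        """Hierarchical posterior for a toy linear model: the noise level
        sigma is itself a sampled parameter."""
        if sigma <= 0:
            return -np.inf
        resid = y - (theta[0] + theta[1] * x)
        return (-0.5 * np.sum(resid**2) / sigma**2
                - len(y) * np.log(sigma)      # Gaussian likelihood
                - np.log(sigma))              # Jeffreys-style hyperprior

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.1, 50)

    theta, sigma = np.zeros(2), 1.0
    samples = []
    for _ in range(20_000):
        th_p = theta + rng.normal(0, 0.05, 2)     # random-walk proposals
        sg_p = sigma + rng.normal(0, 0.02)
        if np.log(rng.uniform()) < (log_posterior(th_p, sg_p, x, y)
                                    - log_posterior(theta, sigma, x, y)):
            theta, sigma = th_p, sg_p
        samples.append((*theta, sigma))
    print(np.mean(samples[5000:], axis=0))        # posterior means, post burn-in
    ```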

  6. A predictive model for biomimetic plate type broadband frequency sensor

    NASA Astrophysics Data System (ADS)

    Ahmed, Riaz U.; Banerjee, Sourav

    2016-04-01

    In this work, predictive model for a bio-inspired broadband frequency sensor is developed. Broadband frequency sensing is essential in many domains of science and technology. One great example of such sensor is human cochlea, where it senses a frequency band of 20 Hz to 20 KHz. Developing broadband sensor adopting the physics of human cochlea has found tremendous interest in recent years. Although few experimental studies have been reported, a true predictive model to design such sensors is missing. A predictive model is utmost necessary for accurate design of selective broadband sensors that are capable of sensing very selective band of frequencies. Hence, in this study, we proposed a novel predictive model for the cochlea-inspired broadband sensor, aiming to select the frequency band and model parameters predictively. Tapered plate geometry is considered mimicking the real shape of the basilar membrane in the human cochlea. The predictive model is intended to develop flexible enough that can be employed in a wide variety of scientific domains. To do that, the predictive model is developed in such a way that, it can not only handle homogeneous but also any functionally graded model parameters. Additionally, the predictive model is capable of managing various types of boundary conditions. It has been found that, using the homogeneous model parameters, it is possible to sense a specific frequency band from a specific portion (B) of the model length (L). It is also possible to alter the attributes of `B' using functionally graded model parameters, which confirms the predictive frequency selection ability of the developed model.

  7. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure-Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting establishes it as a predictive, as opposed to evaluative, modeling approach.
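
    A compact sketch of two of the workflow's ingredients, external validation and a distance-based applicability domain, on synthetic stand-in descriptors; the model choice, cutoff, and data are illustrative only, not the authors' modules:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 30))               # hypothetical descriptor matrix
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.3, 300)   # toy activity

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

    # Distance-based applicability domain: flag external compounds whose
    # nearest-neighbour distance to the training set exceeds a cutoff.
    d_nn = np.array([np.linalg.norm(X_tr - q, axis=1).min() for q in X_te])
    inside = d_nn < np.percentile(d_nn, 90)
    print("R^2 inside AD:", model.score(X_te[inside], y_te[inside]))
    ```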

  8. Integrating teaching and research in the field and laboratory settings

    NASA Astrophysics Data System (ADS)

    Wang, L.; Kaseke, K. F.; Daryanto, S.; Ravi, S.

    2015-12-01

    Field observations and laboratory measurements are great ways to engage students and spark their interest in science. Typically, these observations are separated from rigorous classroom teaching. Here we assessed the potential of integrating teaching and research in field and laboratory settings, both in the US and abroad, and worked with students without strong science backgrounds to utilize simple laboratory equipment and various environmental sensors to conduct innovative projects. We worked with students in Namibia and with two local high school students in Indianapolis to conduct leaf potential measurements, soil nutrient extraction, soil infiltration measurements and isotope measurements. The experience showed us the potential of integrating teaching and research in the field setting and of working with people with minimal exposure to modern scientific instrumentation to carry out creative projects.

  9. Progress in Modeling Nonlinear Dendritic Evolution in Two and Three Dimensions, and Its Mathematical Justification

    NASA Technical Reports Server (NTRS)

    Tanveer, S.; Foster, M. R.

    2002-01-01

    We report progress in three areas of investigation related to dendritic crystal growth: 1) selection of tip features in dendritic crystal growth; 2) investigation of nonlinear evolution for the two-sided model; and 3) rigorous mathematical justification.

  10. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.

  11. Resonant tunneling assisted propagation and amplification of plasmons in high electron mobility transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhardwaj, Shubhendu; Sensale-Rodriguez, Berardi; Xing, Huili Grace

    A rigorous theoretical and computational model is developed for plasma-wave propagation in high electron mobility transistor structures with electron injection from a resonant tunneling diode at the gate. We discuss the conditions in which low-loss and sustainable plasmon modes can be supported in such structures. The developed analytical model is used to derive the dispersion relation for these plasmon modes. A non-linear full-wave hydrodynamic numerical solver is also developed using a finite difference time domain algorithm. The developed analytical solutions are validated via the numerical solution. We also verify previous observations that were based on a simplified transmission line model. It is shown that at high levels of negative differential conductance, plasmon amplification is indeed possible. The proposed rigorous models can enable accurate design and optimization of practical resonant tunnel diode-based plasma-wave devices for terahertz sources, mixers, and detectors, by allowing a precise representation of their coupling when integrated with other electromagnetic structures.

  12. Including Magnetostriction in Micromagnetic Models

    NASA Astrophysics Data System (ADS)

    Conbhuí, Pádraig Ó.; Williams, Wyn; Fabian, Karl; Nagy, Lesleis

    2016-04-01

    The magnetic anomalies that identify crustal spreading are predominantly recorded by basalts formed at the mid-ocean ridges, whose magnetic signals are dominated by iron-titanium-oxides (Fe3-xTixO4), so-called "titanomagnetites", of which the Fe2.4Ti0.6O4 (TM60) phase is the most common. With sufficient quantities of titanium present, these minerals exhibit strong magnetostriction. To date, models of these grains in the pseudo-single domain (PSD) range have failed to accurately account for this effect. In particular, a popular analytic treatment provided by Kittel (1949) for describing the magnetostrictive energy as an effective increase of the anisotropy constant can produce unphysical strains for non-uniform magnetizations. I will present a rigorous approach, based on work by Brown (1966) and by Kroner (1958), for including magnetostriction in micromagnetic codes that is suitable for modelling hysteresis loops and finding remanent states in the PSD regime. Preliminary results suggest the more rigorously defined micromagnetic models exhibit higher coercivities and extended single domain ranges when compared to more simplistic approaches.

  13. Obtaining Potential Virtual Temperature Profiles, Entrainment Fluxes, and Spectra from Mini Unmanned Aerial Vehicle Data

    NASA Astrophysics Data System (ADS)

    Dias, N. L.; Gonçalves, J. E.; Freire, L. S.; Hasegawa, T.; Malheiros, A. L.

    2012-10-01

    We present a simple but effective small unmanned aerial vehicle design that is able to make high-resolution temperature and humidity measurements of the atmospheric boundary layer. The airframe is an adapted commercial design and is able to carry all the instrumentation (barometer, temperature and humidity sensor, and datalogger) required for such measurements. It is fitted with an autopilot that controls the plane's ascent and descent in a spiral to 1800 m above ground. We describe the results obtained on three different days when the plane, called Aerolemma-3, flew continuously throughout the day. Surface measurements of the sensible virtual heat flux made simultaneously allowed the calculation of all standard convective turbulence scales for the boundary layer, as well as a rigorous test of existing models for the entrainment flux at the top of the boundary layer, and for its growth. A novel approach to calculate the entrainment flux from the top-down, bottom-up model of Wyngaard and Brost is used. We also calculated temperature fluctuations by means of a spectral high-pass filter, and calculated their spectra. Although the time series are short, tapering proved ineffective in this case. The spectra from the untapered series displayed a consistent -5/3 behaviour, and from them it was possible to calculate a dimensionless dissipation function, which exhibited the expected similarity behaviour against boundary-layer bulk stability. The simplicity, ease of use and economy of such small aircraft make us optimistic about their usefulness in boundary-layer research.
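
    The spectral-slope check can be reproduced generically with a Welch periodogram and a log-log fit over the presumed inertial subrange. The surrogate series below is a random walk (true slope near -2), so it demonstrates only the procedure, not the -5/3 result; sampling rate and band limits are assumptions:

    ```python
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(0)
    fs = 10.0                                      # sampling rate (Hz), assumed
    t = np.arange(0, 600, 1 / fs)
    temp = np.cumsum(rng.normal(size=t.size))      # surrogate temperature series

    f, pxx = welch(temp, fs=fs, nperseg=1024)
    band = (f > 0.1) & (f < 2.0)                   # presumed inertial subrange
    slope = np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)[0]
    print(f"spectral slope: {slope:.2f} (Kolmogorov predicts -5/3 = -1.67)")
    ```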

  14. Staying Clear of the Dragons.

    PubMed

    Elf, Johan

    2016-04-27

    A new, game-changing approach makes it possible to rigorously disprove models without making assumptions about the unknown parts of the biological system. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Optimal Multi-Type Sensor Placement for Structural Identification by Static-Load Testing

    PubMed Central

    Papadopoulou, Maria; Vernay, Didier; Smith, Ian F. C.

    2017-01-01

    Assessing ageing infrastructure is a critical challenge for civil engineers due to the difficulty in the estimation and integration of uncertainties in structural models. Field measurements are increasingly used to improve knowledge of the real behavior of a structure; this activity is called structural identification. Error-domain model falsification (EDMF) is an easy-to-use model-based structural-identification methodology which robustly accommodates systematic uncertainties originating from sources such as boundary conditions, numerical modelling and model fidelity, as well as aleatory uncertainties from sources such as measurement error and material parameter-value estimations. In most practical applications of structural identification, sensors are placed using engineering judgment and experience. However, since sensor placement is fundamental to the success of structural identification, a more rational and systematic method is justified. This study presents a measurement system design methodology to identify the best sensor locations and sensor types using information from static-load tests. More specifically, three static-load tests were studied for the sensor system design using three types of sensors for a performance evaluation of a full-scale bridge in Singapore. Several sensor placement strategies are compared using joint entropy as an information-gain metric. A modified version of the hierarchical algorithm for sensor placement is proposed to take into account mutual information between load tests. It is shown that a carefully-configured measurement strategy that includes multiple sensor types and several load tests maximizes information gain. PMID:29240684
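
    For jointly Gaussian predictions, joint entropy reduces to a log-determinant of the covariance at the chosen locations, which suggests a greedy selection rule. A sketch under that Gaussian assumption (the paper's hierarchical algorithm and its treatment of mutual information between load tests are more involved); the covariance below is invented:

    ```python
    import numpy as np

    def greedy_placement(cov, n_sensors):
        """Greedily pick sensor locations maximising joint entropy of the
        predicted responses.  For Gaussian predictions the entropy is
        0.5 * log det(2*pi*e * Cov_S), so each step adds the location
        with the largest log-determinant gain."""
        chosen = []
        for _ in range(n_sensors):
            best, best_h = None, -np.inf
            for j in range(cov.shape[0]):
                if j in chosen:
                    continue
                idx = chosen + [j]
                _sign, logdet = np.linalg.slogdet(cov[np.ix_(idx, idx)])
                if logdet > best_h:
                    best, best_h = j, logdet
            chosen.append(best)
        return chosen

    # Covariance of predicted deflections at 6 candidate locations (illustrative).
    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 6))
    cov = A @ A.T + 1e-6 * np.eye(6)
    print(greedy_placement(cov, 3))
    ```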

  16. Percutaneous window chamber method for chronic intravital microscopy of sensor-tissue interactions.

    PubMed

    Koschwanez, Heidi E; Klitzman, Bruce; Reichert, W Monty

    2008-11-01

    A dorsal, two-sided skin-fold window chamber model was employed previously by Gough in glucose sensor research to characterize poorly understood physiological factors affecting sensor performance. We have extended this work by developing a percutaneous one-sided window chamber model for the rodent dorsum that offers both a larger subcutaneous area and a less restrictive tissue space than previous animal models. A surgical procedure for implanting a sensor into the subcutis beneath an acrylic window (15 mm diameter) is presented. Methods to quantify changes in the microvascular network and red blood cell perfusion around the sensors using noninvasive intravital microscopy and laser Doppler flowmetry are described. The feasibility of combining interstitial glucose monitoring from an implanted sensor with intravital fluorescence microscopy was explored using a bolus injection of fluorescein and dextrose to observe real-time mass transport of a small molecule at the sensor-tissue interface. The percutaneous window chamber provides an excellent model for assessing the influence of different sensor modifications, such as surface morphologies, on neovascularization using real-time monitoring of the microvascular network and tissue perfusion. However, the tissue response to an implanted sensor was variable, and some sensors migrated entirely out of the field of view and could not be observed adequately. A percutaneous optical window provides direct, real-time images of the development and dynamics of microvascular networks, microvessel patency, and fibrotic encapsulation at the tissue-sensor interface. Additionally, observing microvessels following combined bolus injections of a fluorescent dye and glucose in the local sensor environment demonstrated a valuable technique to visualize mass transport at the sensor surface.

  17. A Prototype Land Information Sensor Web: Design, Implementation and Implication for the SMAP Mission

    NASA Astrophysics Data System (ADS)

    Su, H.; Houser, P.; Tian, Y.; Geiger, J. K.; Kumar, S. V.; Gates, L.

    2009-12-01

    Land Surface Model (LSM) predictions are regular in time and space, but these predictions are influenced by errors in model structure, input variables, and parameters, and by inadequate treatment of sub-grid scale spatial variability. Consequently, LSM predictions are significantly improved through observation constraints made in a data assimilation framework. Several multi-sensor satellites are currently operating which provide multiple global observations of the land surface and its related near-atmospheric properties. However, these observations are not optimal for addressing current and future land surface environmental problems. To meet future earth system science challenges, NASA will develop constellations of smart satellites in sensor web configurations which provide timely on-demand data and analysis to users, and which can be reconfigured based on the changing needs of science and available technology. A sensor web is more than a collection of satellite sensors: it is a system composed of multiple platforms interconnected by a communication network for the purpose of performing specific observations and processing the data required to support specific science goals. Sensor webs can eclipse the value of disparate sensor components by reducing response time and increasing scientific value, especially when two-way interaction between the model and the sensor web is enabled. The prototype Land Information Sensor Web (LISW) study, sponsored by NASA, integrates the Land Information System (LIS) into a sensor web framework that allows optimal two-way information flow, which enhances land surface modeling using sensor web observations and in turn allows sensor web reconfiguration to minimize overall system uncertainty. This prototype is based on a simulated interactive sensor web, which is then used to exercise and optimize the sensor web modeling interfaces. The Land Information Sensor Web Service-Oriented Architecture (LISW-SOA) has been developed; it is the first sensor web framework developed especially for land surface studies. Synthetic experiments based on the LISW-SOA and the virtual sensor web provide a controlled environment in which to examine the end-to-end performance of the prototype, the impact of various sensor web design trade-offs, and the eventual value of sensor webs for a particular prediction or decision support application. In this paper, the design and implementation of the LISW-SOA and the implications for the Soil Moisture Active and Passive (SMAP) mission are presented. Particular attention is focused on examining the relationship between the economic investment in a sensor web (space- and air-borne, ground-based) and the accuracy of the model-predicted soil moisture that can be achieved by using such sensor observations. The study of the virtual Land Information Sensor Web (LISW) is expected to provide necessary a priori knowledge for designing and deploying the next-generation Global Earth Observing System of Systems (GEOSS).

  18. Using Sensor Web Processes and Protocols to Assimilate Satellite Data into a Forecast Model

    NASA Technical Reports Server (NTRS)

    Goodman, H. Michael; Conover, Helen; Zavodsky, Bradley; Maskey, Manil; Jedlovec, Gary; Regner, Kathryn; Li, Xiang; Lu, Jessica; Botts, Mike; Berthiau, Gregoire

    2008-01-01

    The goal of the Sensor Management Applied Research Technologies (SMART) On-Demand Modeling project is to develop and demonstrate the readiness of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) capabilities to integrate both space-based Earth observations and forecast model output into new data acquisition and assimilation strategies. The project is developing sensor web-enabled processing plans to assimilate Atmospheric Infrared Sounding (AIRS) satellite temperature and moisture retrievals into a regional Weather Research and Forecast (WRF) model over the southeastern United States.

  19. Autonomous Mission Operations for Sensor Webs

    NASA Astrophysics Data System (ADS)

    Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.

    2008-12-01

    We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents intends to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. The need for ontological conceptualizations over the agents is to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests. The current ontology is compatible with Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) Sensor Model Language (SensorML) concepts and structures. The agents are currently deployed on the U.S. Naval Academy MidSTAR-1 satellite and are actively managing the power subsystem on-orbit without the need for human intervention.

  20. Fiber-optical sensor with intensity compensation model in college teaching of physics experiment

    NASA Astrophysics Data System (ADS)

    Su, Liping; Zhang, Yang; Li, Kun; Zhang, Yu

    2017-08-01

    Optical fiber sensor technology is one of the main components of modern information technology and holds a very important position in modern science and technology. Fiber-optic sensor experiments can improve students' enthusiasm and broaden their horizons in the college physics laboratory. In this paper, the main structure and working principle of a fiber-optical sensor with an intensity compensation model are introduced. The sensor is then applied to measure micro-displacement in the Young's modulus and metal linear-expansion-coefficient experiments of the college physics laboratory. Results indicate that micro-displacement measured with the intensity-compensated fiber-optical sensor is more accurate than with traditional methods. This measurement method also helps students understand optical fibers, sensors, and the nature of micro-displacement measurement, and strengthens the relationships and compatibility between experiments, providing a new idea for the reform of experimental teaching.
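
    The compensation principle is that a reference path sees the same source fluctuation as the signal path, so taking their ratio cancels source drift; a minimal numerical illustration (invented numbers):

    ```python
    # Ratio-based intensity compensation (illustrative numbers)
    def compensated_reading(signal_i, reference_i):
        # the common source intensity cancels in the ratio
        return signal_i / reference_i

    source_drift = 0.9                 # 10% source sag affects both paths
    signal = 0.42 * source_drift       # displacement-dependent path
    reference = 1.00 * source_drift    # displacement-independent reference path
    print(compensated_reading(signal, reference))  # 0.42, unaffected by the sag
    ```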

  1. Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)

    1993-01-01

    The papers, abstracts, and presentations were presented at a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

  2. Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)

    DTIC Science & Technology

    2017-01-01

    created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools...

  3. Soft Sensors: Chemoinformatic Model for Efficient Control and Operation in Chemical Plants.

    PubMed

    Funatsu, Kimito

    2016-12-01

    A soft sensor is a statistical model that serves as an essential tool for controlling pharmaceutical, chemical, and industrial plants. I introduce soft sensors, their roles, applications, and problems, and research examples such as adaptive soft sensors, database monitoring, and efficient process control. The use of soft sensors enables chemical industrial plants to be operated more effectively and stably. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
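
    A minimal soft-sensor sketch in this spirit, assuming scikit-learn and toy historian data, regresses a lab-measured quality variable on routine plant measurements:

    ```python
    # Toy soft sensor: infer a lab-assayed quality variable from routine
    # process measurements (scikit-learn assumed; data invented).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # historian data: columns = temperature [C], pressure [bar], flow [t/h]
    X = np.array([[350, 2.1, 40], [355, 2.0, 42], [348, 2.3, 39],
                  [360, 1.9, 45], [352, 2.2, 41]], dtype=float)
    y = np.array([0.91, 0.93, 0.89, 0.95, 0.92])  # lab-assayed purity

    soft_sensor = LinearRegression().fit(X, y)
    print(soft_sensor.predict([[354.0, 2.05, 43.0]]))  # purity estimate, no assay
    # An adaptive soft sensor would refit or recursively update this model as
    # the plant drifts -- one of the problems the review discusses.
    ```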

  4. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework against GPM, TRMM and CloudSat/CALIPSO Products

    NASA Astrophysics Data System (ADS)

    Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.

    2014-12-01

    Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme was successfully implemented in the MMF, and two MMF experiments were carried out with the new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties (cloud amount, hydrometeor vertical profiles, cloud water content, etc.) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and its microphysics.

  5. Modelling the Energy Efficient Sensor Nodes for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dahiya, R.; Arora, A. K.; Singh, V. R.

    2015-09-01

    Energy efficiency is an important requirement for good wireless sensor network performance. A widely employed energy-saving technique is to place nodes in sleep mode, which lowers power consumption at the cost of reduced operational capability. In this paper, a Markov model of a sensor node that can enter a sleep mode is developed. This model is used to investigate system performance in terms of energy consumption, network capacity, and data delivery delay.
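
    A two-state (active/sleep) Markov chain already captures the bookkeeping involved; the sketch below, with invented transition probabilities and power draws, computes the stationary time split and mean power. The paper's model is richer, but the mechanics are the same.

    ```python
    # Two-state Markov model of a node (invented probabilities and powers)
    import numpy as np

    P = np.array([[0.8, 0.2],      # active -> {active, sleep}
                  [0.4, 0.6]])     # sleep  -> {active, sleep}
    power_mw = np.array([15.0, 0.3])  # draw in active and sleep states

    # stationary distribution: eigenvector of P.T for eigenvalue 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi /= pi.sum()

    print(f"fraction of time active = {pi[0]:.2f}")
    print(f"mean power = {pi @ power_mw:.2f} mW")  # energy-delay trade-off input
    ```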

  6. The wildfire experiment (WIFE): observations with airborne remote sensors

    Treesearch

    L.F. Radke; T.L. Clark; J.L. Coen; C.A. Walther; R.N. Lockwood; P.J. Riggan; J.A. Brass; R.G. Higgins

    2000-01-01

    Airborne remote sensors have long been a cornerstone of wildland fire research, and recently three-dimensional fire behaviour models fully coupled to the atmosphere have begun to show a convincing level of verisimilitude. The WildFire Experiment (WiFE) attempted the marriage of airborne remote sensors, multi-sensor observations together with fire model development and...

  7. Modeling and experimental study on characterization of micromachined thermal gas inertial sensors.

    PubMed

    Zhu, Rong; Ding, Henggao; Su, Yan; Yang, Yongjun

    2010-01-01

    Micromachined thermal gas inertial sensors based on heat convection are novel devices that, compared with conventional micromachined inertial sensors, offer the advantages of simple structure, easy fabrication, high shock resistance, and good reliability, by virtue of using a gaseous medium instead of a mechanical proof mass as the key moving and sensing element. This paper presents an analytical model of a micromachined thermal gas gyroscope integrated with signal conditioning. A simplified spring-damping model is utilized to characterize the behavior of the sensor. The model relies on fluid mechanics and heat transfer fundamentals and is validated using experimental data obtained from a test device and from simulation. Furthermore, the nonideal behavior of the sensor is addressed from both theoretical and experimental points of view. The nonlinear behavior demonstrated in experimental measurements is analyzed based on the model. It is concluded that the sources of nonlinearity are mainly attributable to the variable stiffness of the sensor system and to structural asymmetry due to nonideal fabrication.
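
    A simplified spring-damping model of this kind can be exercised numerically as below; the parameter values are invented for illustration (not the authors') and SciPy is assumed.

    ```python
    # Step response of a toy spring-damping model m*x'' + c*x' + k*x = m*a
    # (SciPy assumed; parameter values invented, not the authors')
    import numpy as np
    from scipy.integrate import solve_ivp

    m, c, k = 1e-9, 2e-6, 5e-3   # effective mass [kg], damping, stiffness
    a_in = 9.81                  # applied acceleration step [m/s^2]

    def rhs(t, y):
        x, v = y
        return [v, a_in - (c * v + k * x) / m]

    sol = solve_ivp(rhs, [0.0, 0.01], [0.0, 0.0], max_step=1e-5)
    print(sol.y[0, -1])  # settles near the static deflection m*a_in/k
    ```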

  8. Space Station racks weight and CG measurement using the rack insertion end-effector

    NASA Technical Reports Server (NTRS)

    Brewer, William V.

    1994-01-01

    The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
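
    The inverse/forward model idea can be illustrated for a three-load-cell arrangement: weight is the sum of the measured forces, and the CG follows from a moment balance. Geometry and readings below are made up.

    ```python
    # Weight and CG from three load cells (geometry and readings made up)
    import numpy as np

    xy = np.array([[0.0, 0.0], [1.2, 0.0], [0.6, 0.9]])  # cell positions [m]
    f = np.array([410.0, 385.0, 405.0])                  # cell forces [N]

    W = f.sum()              # total weight: force balance
    cg = (f @ xy) / W        # CG: moment balance about each axis
    print(f"W = {W:.0f} N, CG = ({cg[0]:.3f}, {cg[1]:.3f}) m")
    # Propagating each cell's accuracy through this forward model yields the
    # overall accuracy of the scheme, as the abstract describes.
    ```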

  9. Crack Detection in Fibre Reinforced Plastic Structures Using Embedded Fibre Bragg Grating Sensors: Theory, Model Development and Experimental Validation

    PubMed Central

    Pereira, G. F.; Mikkelsen, L. P.; McGugan, M.

    2015-01-01

    In a fibre-reinforced polymer (FRP) structure designed using the emerging damage tolerance and structural health monitoring philosophy, sensors and models that describe crack propagation will enable a structure to operate despite the presence of damage by fully exploiting the material’s mechanical properties. When applying this concept to different structures, sensor systems and damage types, a combination of damage mechanics, monitoring technology, and modelling is required. The primary objective of this article is to demonstrate such a combination. This article is divided into three main topics: the damage mechanism (delamination of FRP), the structural health monitoring technology (fibre Bragg gratings to detect delamination), and the finite element method model of the structure that incorporates these concepts into a final and integrated damage-monitoring concept. A novel method for assessing a crack growth/damage event in fibre-reinforced polymer or structural adhesive-bonded structures using embedded fibre Bragg grating (FBG) sensors is presented by combining conventional measured parameters, such as wavelength shift, with parameters associated with measurement errors, typically ignored by the end-user. Conjointly, a novel model for sensor output prediction (virtual sensor) was developed using this FBG sensor crack monitoring concept and implemented in a finite element method code. The monitoring method was demonstrated and validated using glass fibre double cantilever beam specimens instrumented with an array of FBG sensors embedded in the material and tested using an experimental fracture procedure. The digital image correlation technique was used to validate the model prediction by correlating the specific sensor response caused by the crack with the developed model. PMID:26513653
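
    For context, the textbook relation between Bragg wavelength shift and axial strain (not specific to this paper) can be coded directly; the photo-elastic coefficient value used is the commonly quoted one for silica fiber:

    ```python
    # Textbook FBG relation: dLambda/Lambda0 = (1 - p_e) * strain
    def fbg_strain(d_lambda_nm, lambda0_nm=1550.0, p_e=0.22):
        """Axial strain from a Bragg wavelength shift (silica fiber, p_e ~ 0.22)."""
        return d_lambda_nm / (lambda0_nm * (1.0 - p_e))

    print(fbg_strain(0.12) * 1e6, "microstrain")  # ~99 ue for a 0.12 nm shift
    ```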

  10. Autoregressive Modeling of Drift and Random Error to Characterize a Continuous Intravascular Glucose Monitoring Sensor.

    PubMed

    Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J

    2018-01-01

    Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to higher point accuracy errors relative to the traditional intermittent blood glucose (BG) measures currently used. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy, and found simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represent cohort sensor behavior across patients, providing a general modeling approach to any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling the analysis of what level of each type of sensor error would be necessary to obtain desired glycemic control safety and performance with a given protocol.
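
    The separation of drift and random noise can be sketched as below with an AR(1) drift plus white noise, Monte Carlo runs, and a MARD computation; all parameters are invented and this is not the paper's fitted model.

    ```python
    # AR(1) drift + white noise around a reference BG profile (parameters invented)
    import numpy as np

    rng = np.random.default_rng(0)
    bg = np.full(288, 7.0)  # reference glucose [mmol/L], 24 h at 5-min samples

    def simulate_cgm(bg, phi=0.995, drift_sd=0.01, noise_sd=0.15):
        drift = np.zeros(bg.size)
        for k in range(1, bg.size):
            drift[k] = phi * drift[k - 1] + rng.normal(0.0, drift_sd)  # slow drift
        return bg + drift + rng.normal(0.0, noise_sd, bg.size)         # + noise

    sims = np.array([simulate_cgm(bg) for _ in range(100)])  # 100 Monte Carlo runs
    mard = np.mean(np.abs(sims - bg) / bg) * 100.0
    print(f"simulated MARD = {mard:.1f}%")
    ```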

  11. Terrestrial hyperspectral image shadow restoration through fusion with terrestrial lidar

    NASA Astrophysics Data System (ADS)

    Hartzell, Preston J.; Glennie, Craig L.; Finnegan, David C.; Hauser, Darren L.

    2017-05-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from exclusively airborne observations to include terrestrial modalities. In contrast to airborne collection geometry, hyperspectral imagery captured from terrestrial cameras is prone to extensive solar shadowing on vertical surfaces leading to reductions in pixel classification accuracies or outright removal of shadowed areas from subsequent analysis tasks. We demonstrate the use of lidar spatial information for sub-pixel HSI shadow detection and the restoration of shadowed pixel spectra via empirical methods that utilize sunlit and shadowed pixels of similar material composition. We examine the effectiveness of radiometrically calibrated lidar intensity in identifying these similar materials in sun and shade conditions and further evaluate a restoration technique that leverages ratios derived from the overlapping lidar laser and HSI wavelengths. Simulations of multiple lidar wavelengths, i.e., multispectral lidar, indicate the potential for HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance of shadowed HSI pixels is quantified for imagery of a geologic outcrop through improvements in spectral shape, spectral scale, and HSI band correlation.

  12. Principles to Products: Toward Realizing MOS 2.0

    NASA Technical Reports Server (NTRS)

    Bindschadler, Duane L.; Delp, Christopher L.

    2012-01-01

    This is a report on the Operations Revitalization Initiative, part of the ongoing NASA-funded Advanced Multi-Mission Operations Systems (AMMOS) program. We are implementing products that significantly improve efficiency and effectiveness of Mission Operations Systems (MOS) for deep-space missions. We take a multi-mission approach, in keeping with our organization's charter to "provide multi-mission tools and services that enable mission customers to operate at a lower total cost to NASA." Focusing first on architectural fundamentals of the MOS, we review the effort's progress. In particular, we note the use of stakeholder interactions and consideration of past lessons learned to motivate a set of Principles that guide the evolution of the AMMOS. Thus guided, we have created essential patterns and connections (detailed in companion papers) that are explicitly modeled and support elaboration at multiple levels of detail (system, sub-system, element...) throughout a MOS. This architecture is realized in design and implementation products that provide lifecycle support to a Mission at the system and subsystem level. The products include adaptable multi-mission engineering documentation that describes essentials such as operational concepts and scenarios, requirements, interfaces and agreements, information models, and mission operations processes. Because we have adopted a model-based system engineering method, these documents and their contents are meaningfully related to one another and to the system model. This means they are both more rigorous and reusable (from mission to mission) than standard system engineering products. The use of models also enables detailed, early (e.g., formulation phase) insight into the impact of changes (e.g., to interfaces or to software) that is rigorous and complete, allowing better decisions on cost or technical trades. Finally, our work provides clear and rigorous specification of operations needs to software developers, further enabling significant gains in productivity.

  13. Rigorous Approach in Investigation of Seismic Structure and Source Characteristics in Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
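
    The flavor of a hierarchical Bayesian inversion, where an unknown noise level is sampled along with the model parameter so the data constrain the uncertainty, can be shown with a toy Metropolis-Hastings sampler (a trans-D version would additionally propose birth/death moves over the parameterization; everything below is illustrative):

    ```python
    # Toy hierarchical Metropolis-Hastings: sample a velocity v and the noise
    # level sigma jointly from synthetic data (all values illustrative).
    import numpy as np

    rng = np.random.default_rng(1)
    d_obs = 3.0 + rng.normal(0.0, 0.2, 25)  # synthetic data, true v = 3.0

    def log_post(v, sigma):
        if sigma <= 0.0 or not 1.0 < v < 6.0:   # uniform prior bounds
            return -np.inf
        r = d_obs - v                            # trivial forward model
        return -d_obs.size * np.log(sigma) - 0.5 * np.sum(r**2) / sigma**2

    v, sigma, chain = 2.0, 1.0, []
    for _ in range(20000):
        v_p = v + rng.normal(0.0, 0.1)
        s_p = sigma + rng.normal(0.0, 0.05)
        if np.log(rng.random()) < log_post(v_p, s_p) - log_post(v, sigma):
            v, sigma = v_p, s_p                  # accept the proposal
        chain.append((v, sigma))

    v_s, sig_s = np.array(chain[5000:]).T        # discard burn-in
    print(f"v = {v_s.mean():.2f} +/- {v_s.std():.2f}, sigma ~ {sig_s.mean():.2f}")
    ```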

  14. Predictability of the geospace variations and measuring the capability to model the state of the system

    NASA Astrophysics Data System (ADS)

    Pulkkinen, A.

    2012-12-01

    Empirical modeling has been the workhorse of the past decades in predicting the state of the geospace. For example, numerous empirical studies have shown that global geoeffectiveness indices such as Kp and Dst are generally well predictable from the solar wind input. These successes have been facilitated partly by the strongly externally driven nature of the system. Although characterizing the general state of the system is valuable and empirical modeling will continue playing an important role, refined physics-based quantification of the state of the system has been the obvious next step in moving toward more mature science. Importantly, more refined and localized products are needed also for space weather purposes. Predictions of local physical quantities are necessary to make physics-based links to the impacts on specific systems. As more localized predictions of the geospace state are introduced, one central question is how predictable these local quantities are. This complex question can be addressed by rigorously measuring model performance against observed data. The space sciences community has made great advances on this topic over the past few years, and there are ongoing efforts in SHINE, CEDAR and GEM to carry out community-wide evaluations of the state-of-the-art solar and heliospheric, ionosphere-thermosphere and geospace models, respectively. These efforts will help establish benchmarks and thus provide means to measure progress in the field, analogous to the monitoring of improvements in lower atmospheric weather predictions carried out rigorously since the 1980s. In this paper we will discuss some of the latest advancements in predicting local geospace parameters and give an overview of some of the community efforts to rigorously measure model performance. We will also briefly discuss some of the future opportunities for advancing the geospace modeling capability, including further development in data assimilation and ensemble modeling (e.g., taking into account uncertainty in the inflow boundary conditions).

  15. A spatial-dynamic value transfer model of economic losses from a biological invasion

    Treesearch

    Thomas P. Holmes; Andrew M. Liebhold; Kent F. Kovacs; Betsy Von Holle

    2010-01-01

    Rigorous assessments of the economic impacts of introduced species at broad spatial scales are required to provide credible information to policy makers. We propose that economic models of aggregate damages induced by biological invasions need to link microeconomic analyses of site-specific economic damages with spatial-dynamic models of value change associated with...

  16. Pedagogy and the Intuitive Appeal of Learning Styles in Post-Compulsory Education in England

    ERIC Educational Resources Information Center

    Nixon, Lawrence; Gregson, Maggie; Spedding, Trish

    2007-01-01

    Despite the rigorous and robust evaluation of learning styles theories, models and inventories, little objective evidence in support of their effectiveness has been found. The lack of unambiguous evidence in support of these models and practices leaves the continued popularity of these models and instruments as a puzzle. Two related accounts of…

  17. A New Theory-to-Practice Model for Student Affairs: Integrating Scholarship, Context, and Reflection

    ERIC Educational Resources Information Center

    Reason, Robert D.; Kimball, Ezekiel W.

    2012-01-01

    In this article, we synthesize existing theory-to-practice approaches within the student affairs literature to arrive at a new model that incorporates formal and informal theory, institutional context, and reflective practice. The new model arrives at a balance between the rigor necessary for scholarly theory development and the adaptability…

  18. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) in a manner consistent with the discrete-event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.

  19. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or produce inaccurate readings. Therefore, the integration of sensor fusion helps to solve this dilemma and enhance overall performance. This paper presents a collision-free mobile robot navigation approach based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot’s wheels; and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766

  20. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or produce inaccurate readings. Therefore, the integration of sensor fusion helps to solve this dilemma and enhance overall performance. This paper presents a collision-free mobile robot navigation approach based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot's wheels; and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.
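
    A stripped-down sketch of the fuzzy fusion idea follows, with one distance input, two membership functions, two rules, and weighted-average defuzzification; the paper's controller uses nine inputs and 24 rules, and all shapes and constants here are invented.

    ```python
    # One-input fuzzy rule sketch (membership shapes and constants invented)
    def falling(x, lo, hi):
        """1 below lo, 0 above hi, linear in between."""
        return max(min((hi - x) / (hi - lo), 1.0), 0.0)

    def wheel_speeds(front_dist_m):
        near = falling(front_dist_m, 0.1, 0.4)        # degree 'obstacle near'
        far = 1.0 - falling(front_dist_m, 0.2, 0.8)   # degree 'path clear'
        # rule 1: IF near THEN turn   (left fast, right slow)
        # rule 2: IF far  THEN cruise (both fast)
        total = near + far
        left = (near * 0.8 + far * 0.8) / total       # weighted-average defuzzify
        right = (near * 0.1 + far * 0.8) / total
        return left, right

    print(wheel_speeds(0.15))  # obstacle close -> unequal speeds, robot turns
    print(wheel_speeds(0.90))  # clear path -> equal speeds, robot goes straight
    ```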

  1. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor.

    PubMed

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-09-29

    By combining a parallel mechanism with integrated flexible joints, a large measurement range and high accuracy sensor is realized. However, the main errors of the sensor involve not only assembly errors, but also deformation errors of its flexible legs. Based on a flexible joint 6-UPUR (a kind of mechanism configuration where U-universal joint, P-prismatic joint, R-revolute joint) parallel six-axis force sensor developed in earlier work, this paper models and analyzes the assembly and deformation errors of the resulting large-measurement-range, high-accuracy sensor. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, and the deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix is solved with the synthetic error taken into account. Finally, measurement and calibration experiments are performed on the sensor's hardware and software system. Forced deformation of the force-measuring platform is detected by laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix under actual circumstances is calculated. By comparing the condition numbers and square norms of the coefficient matrices, the conclusion is drawn that accounting for the synthetic error is very important at the design stage of the sensor and helps improve the sensor's performance to meet the needs of actual working environments.
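
    The comparison metric in the conclusion, condition numbers and norms of the influence-coefficient matrix with and without the synthetic error, is easy to illustrate with NumPy; the matrices below are placeholders, not the paper's:

    ```python
    # Condition number / norm comparison (placeholder matrices, not the paper's)
    import numpy as np

    G_ideal = np.array([[1.00, 0.02],
                        [0.01, 1.00]])                 # nominal mapping
    G_actual = G_ideal + np.array([[0.06, 0.04],
                                   [0.05, 0.07]])      # + synthetic error

    for name, G in (("ideal", G_ideal), ("with synthetic error", G_actual)):
        print(f"{name}: cond = {np.linalg.cond(G):.3f}, "
              f"norm = {np.linalg.norm(G):.3f}")
    # A marked change between the two indicates the synthetic error matters at
    # the design stage -- the paper's conclusion.
    ```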

  2. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor

    PubMed Central

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-01-01

    By combining a parallel mechanism with integrated flexible joints, a large measurement range and high accuracy sensor is realized. However, the main errors of the sensor involve not only assembly errors, but also deformation errors of its flexible legs. Based on a flexible joint 6-UPUR (a kind of mechanism configuration where U-universal joint, P-prismatic joint, R-revolute joint) parallel six-axis force sensor developed in earlier work, this paper models and analyzes the assembly and deformation errors of the resulting large-measurement-range, high-accuracy sensor. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, and the deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix is solved with the synthetic error taken into account. Finally, measurement and calibration experiments are performed on the sensor's hardware and software system. Forced deformation of the force-measuring platform is detected by laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix under actual circumstances is calculated. By comparing the condition numbers and square norms of the coefficient matrices, the conclusion is drawn that accounting for the synthetic error is very important at the design stage of the sensor and helps improve the sensor's performance to meet the needs of actual working environments. PMID:28961209

  3. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon

    2009-01-01

    Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
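
    The four fault modes named in the abstract are straightforward to inject into a clean trace; magnitudes below are invented:

    ```python
    # Injecting bias, drift, scaling, and dropout faults (magnitudes invented)
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 10.0, 1000)
    clean = np.sin(t) + rng.normal(0.0, 0.02, t.size)  # healthy sensor trace

    bias = clean + 0.5                 # constant offset
    drift = clean + 0.05 * t           # slowly growing ramp
    scaling = clean * 1.3              # gain change
    dropout = clean.copy()
    dropout[600:] = clean[599]         # signal freezes (stuck value)
    # Each faulty trace can be fed to the classifier alongside simulated system
    # faults to test disambiguation, as the abstract describes.
    ```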

  4. Implementing CUAHSI and SWE observation data models in the long-term monitoring infrastructure TERENO

    NASA Astrophysics Data System (ADS)

    Klump, J. F.; Stender, V.; Schroeder, M.

    2013-12-01

    Terrestrial Environmental Observatories (TERENO) is an interdisciplinary and long-term research project spanning an Earth observation network across Germany. It includes four test sites within Germany, from the North German lowlands to the Bavarian Alps, and is operated by six research centers of the Helmholtz Association. The contribution by the participating research centers is organized as regional observatories. The challenge for TERENO and its observatories is to integrate all aspects of data management, data workflows, data modeling and visualization into the design of a monitoring infrastructure. TERENO Northeast is one of the sub-observatories of TERENO and is operated by the German Research Centre for Geosciences GFZ in Potsdam. This observatory investigates geoecological processes in the northeastern lowland of Germany by collecting large amounts of environmentally relevant data. The success of long-term projects like TERENO depends on well-organized data management, data exchange between the partners involved, and the availability of the captured data. Data discovery and dissemination are facilitated not only through data portals of the regional TERENO observatories but also through a common spatial data infrastructure, TEODOOR. TEODOOR bundles the data provided by the web services of the individual observatories and provides tools for data discovery, visualization and data access. The TERENO Northeast data infrastructure integrates data from more than 200 instruments and makes the data available through standard web services. Data are stored following the CUAHSI observation data model in combination with the 52° North Sensor Observation Service data model. The data model was implemented using the PostgreSQL/PostGIS DBMS. Especially in a long-term project such as TERENO, care has to be taken in choosing the data model. We chose to adopt the CUAHSI observational data model because it is designed to store observations and descriptive information (metadata) about the data values in combination with information about the sensor systems, and because it is supported by a large and active international user community. The 52° North SOS data model can be modeled as a sub-set of the CUAHSI data model. In our implementation, the 52° North SWE data model is implemented as database views of the CUAHSI model to avoid redundant data storage. An essential aspect in TERENO Northeast is the use of standard OGC web services to facilitate data exchange and interoperability. A uniform treatment of sensor data can be realized through OGC Sensor Web Enablement (SWE), which makes a number of standards and interface definitions available: the Observations & Measurements (O&M) model for the description of observations and measurements, the Sensor Model Language (SensorML) for the description of sensor systems, the Sensor Observation Service (SOS) for obtaining sensor observations, the Sensor Planning Service (SPS) for tasking sensors, the Web Notification Service (WNS) for asynchronous dialogues, and the Sensor Alert Service (SAS) for sending alerts.

  5. Sensors Umbra Package v 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oppel, Fred J.; Hart, Brian E.; Whitford, Gregg Douglas

    2016-08-25

    This package contains modules that model sensors in Umbra. There is a mix of modalities for both accumulating and tracking energy sensors: seismic, magnetic, and radiation. Some modules fuse information from multiple sensor types. Sensor devices (e.g., seismic sensors) detect objects such as people and vehicles that have sensor properties attached (e.g., seismic properties).

  6. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter

    PubMed Central

    Chowdhury, Amor; Sarjaš, Andrej

    2016-01-01

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation. PMID:27649197

  7. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    PubMed

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
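
    A minimal sketch of the filtering stage, assuming the third-party filterpy package and an illustrative constant-velocity model rather than the authors' FEM-derived dynamics:

    ```python
    # 1-D proximity tracking with an Unscented Kalman Filter (filterpy assumed;
    # constant-velocity model and all numbers are illustrative, not FEM-derived)
    import numpy as np
    from filterpy.kalman import MerweScaledSigmaPoints, UnscentedKalmanFilter

    dt = 0.01  # sample period [s]

    def fx(x, dt):
        """Process model: state = [position, velocity]."""
        return np.array([x[0] + dt * x[1], x[1]])

    def hx(x):
        """Measurement model: the Hall sensor observes position only."""
        return np.array([x[0]])

    points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
    ukf.x = np.array([0.0, 0.0])
    ukf.P *= 0.1
    ukf.R *= 0.05 ** 2           # measurement noise variance (illustrative)
    ukf.Q = np.eye(2) * 1e-5     # process noise (illustrative)

    for z in (0.012, 0.015, 0.014, 0.018):   # noisy proximity readings [m]
        ukf.predict()
        ukf.update(np.array([z]))
    print(ukf.x)  # filtered position/velocity estimate
    ```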

  8. Peer-to-peer model for the area coverage and cooperative control of mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Tan, Jindong; Xi, Ning

    2004-09-01

    This paper presents a novel model and distributed algorithms for the cooperation and redeployment of mobile sensor networks. A mobile sensor network is composed of a collection of wirelessly connected mobile robots equipped with a variety of sensors. In such a sensor network, each mobile node has sensing, computation, communication, and locomotion capabilities. The locomotion ability enhances the autonomous deployment of the system. The system can be rapidly deployed to hostile environments, inaccessible terrain, or disaster relief operations. The mobile sensor network is essentially a cooperative multiple-robot system. This paper first presents a peer-to-peer model to define the relationship between neighboring communicating robots. Delaunay Triangulation and Voronoi diagrams are used to define the geometrical relationship between sensor nodes. This distributed model allows formal analysis of the fusion of spatio-temporal sensory information in the network. Based on the distributed model, this paper discusses a fault-tolerant algorithm for autonomous self-deployment of the mobile robots. The algorithm considers the environment constraints, the presence of obstacles, and the nonholonomic constraints of the robots. The distributed algorithm enables the system to reconfigure itself so that the area covered by the system can be enlarged. Simulation results have shown the effectiveness of the distributed model and deployment algorithms.
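
    The geometric backbone, Delaunay edges defining communication peers and Voronoi cells defining sensing responsibility, maps directly onto scipy.spatial; node positions below are random:

    ```python
    # Delaunay peers and Voronoi responsibility regions (SciPy assumed)
    import numpy as np
    from scipy.spatial import Delaunay, Voronoi

    rng = np.random.default_rng(3)
    nodes = rng.uniform(0.0, 10.0, size=(12, 2))  # mobile-node positions

    tri = Delaunay(nodes)
    vor = Voronoi(nodes)

    # peers of node 0 = nodes sharing a Delaunay edge with it
    indptr, indices = tri.vertex_neighbor_vertices
    print("peers of node 0:", indices[indptr[0]:indptr[1]])
    print("Voronoi region of node 0:", vor.regions[vor.point_region[0]])
    ```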

  9. Experimental evaluation of rigor mortis. V. Effect of various temperatures on the evolution of rigor mortis.

    PubMed

    Krompecher, T

    1981-01-01

    Objective measurements were carried out to study the evolution of rigor mortis in rats at various temperatures. Our experiments showed that: (1) at 6 degrees C rigor mortis reaches full development between 48 and 60 hours post mortem, and is resolved at 168 hours post mortem; (2) at 24 degrees C rigor mortis reaches full development at 5 hours post mortem, and is resolved at 16 hours post mortem; (3) at 37 degrees C rigor mortis reaches full development at 3 hours post mortem, and is resolved at 6 hours post mortem; (4) the intensity of rigor mortis grows with increasing temperature (difference between values obtained at 24 degrees C and 37 degrees C); and (5) at 6 degrees C a "cold rigidity" was found, in addition to and independent of rigor mortis.

  10. Evaluation of electrolytic tilt sensors for measuring model angle of attack in wind tunnel tests

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    1992-01-01

    The results of a laboratory evaluation of electrolytic tilt sensors as potential candidates for measuring model attitude or angle of attack in wind tunnel tests are presented. The performance of eight electrolytic tilt sensors was compared with that of typical servo accelerometers used for angle-of-attack measurements. The areas evaluated included linearity, hysteresis, repeatability, temperature characteristics, roll-on-pitch interaction, sensitivity to lead-wire resistance, step response time, and rectification. Among the sensors being evaluated, the Spectron model RG-37 electrolytic tilt sensors have the highest overall accuracy in terms of linearity, hysteresis, repeatability, temperature sensitivity, and roll sensitivity. A comparison of the sensors with the servo accelerometers revealed that the accuracy of the RG-37 sensors was on the average about one order of magnitude worse. Even though a comparison indicates that the cost of each tilt sensor is about one-third the cost of each servo accelerometer, the sensors are considered unsuitable for angle-of-attack measurements. However, the potential exists for other applications such as wind tunnel wall-attitude measurements where the errors resulting from roll interaction, vibration, and response time are less and sensor temperature can be controlled.

  11. Training a Joint and Expeditionary Mindset

    DTIC Science & Technology

    2006-12-01

    associated with the JEM constructs and for using them to create effective computer-mediated training scenarios. The pedagogic model enables development of...ensure the instructional rigor of scenarios and provide a sound basis for determining performance indicators.

  12. Wisconsin's Model Academic Standards for Music.

    ERIC Educational Resources Information Center

    Nikolay, Pauli; Grady, Susan; Stefonek, Thomas

    To assist parents and educators in preparing students for the 21st century, Wisconsin citizens have become involved in the development of challenging academic standards in 12 curricular areas. Having clear standards for students and teachers makes it possible to develop rigorous local curricula and valid, reliable assessments. This model of…

  13. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  14. A framework for analyzing the impact of data integrity/quality on electricity market operations

    NASA Astrophysics Data System (ADS)

    Choi, Dae Hyun

    This dissertation examines the impact of data integrity/quality in the supervisory control and data acquisition (SCADA) system on real-time locational marginal price (LMP) in electricity market operations. Measurement noise and/or manipulated sensor errors in a SCADA system may mislead system operators about real-time conditions in a power system, which, in turn, may impact the price signals in real-time power markets. This dissertation serves as a first attempt to analytically investigate the impact of bad/malicious data on electric power market operations. In future power system operations, which will probably involve many more sensors, the impact of sensor data integrity/quality on grid operations will become increasingly important. The first part of this dissertation studies, from a market participant's perspective, a new class of malicious data attacks on state estimation, which subsequently influence the result of the newly emerging look-ahead dispatch models in the real-time power market. In comparison with prior work on cyber-attacks against static dispatch, where no inter-temporal ramping constraint is considered, we propose a novel attack strategy, named the ramp-induced data (RID) attack, with which the attacker can manipulate the limits of the ramp constraints of generators in look-ahead dispatch. It is demonstrated that the proposed attack can lead to financial profits via malicious capacity withholding of selected generators, while remaining undetected by the existing bad data detection algorithm embedded in today's state estimation software. In the second part, we investigate, from a system operator's perspective, the sensitivity of the LMP with respect to data corruption-induced state estimation error in the real-time power market. Two data corruption scenarios are considered, in which corrupted continuous data (e.g., power injection/flow and voltage magnitude) falsify the power flow estimate, whereas corrupted discrete data (e.g., the on/off status of a circuit breaker) falsify the network topology estimate, thus distorting the LMP. We present an analytical framework to quantify real-time LMP sensitivity subject to continuous and discrete data corruption via state estimation. The proposed framework offers system operators an analytical tool to identify buses and transmission lines that are economically sensitive to data corruption, and to find the sensors that impact LMP changes most significantly. This dissertation serves as a first step towards rigorous understanding of the fundamental coupling among the cyber, physical and economic layers of operations in the future smart grid.
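
    The state-estimation layer this coupling runs through can be sketched with a toy DC weighted-least-squares estimator and the classical chi-square bad-data check (2-bus system, illustrative numbers); corrupting z shifts the estimate, and hence any LMP computed from it:

    ```python
    # Toy DC state estimation with a chi-square bad-data check (2-bus system,
    # one state = voltage angle; all numbers illustrative)
    import numpy as np

    H = np.array([[1.0], [10.0], [-10.0]])  # angle -> [angle meas., flow, -flow]
    R = np.diag([1e-4, 1e-2, 1e-2])         # measurement error variances
    z = np.array([0.02, 0.21, -0.19])       # measurements (try corrupting these)

    W = np.linalg.inv(R)
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)  # WLS estimate
    r = z - H @ x_hat                                   # residuals
    J = float(r @ W @ r)   # chi-square statistic, dof = 3 - 1 = 2
    print(f"theta = {x_hat[0]:.4f} rad, J = {J:.2f} (flag bad data if J > threshold)")
    ```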

  15. A simple model for indentation creep

    NASA Astrophysics Data System (ADS)

    Ginder, Ryan S.; Nix, William D.; Pharr, George M.

    2018-03-01

    A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.

  16. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors.

    PubMed

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-08-31

    Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto the structural entities. A physical model in geotechnical engineering, which can accurately simulate the construction processes and their effects on the stability of underground caverns while satisfying the similarity principles, is an actual physical entity. Using a physical model test of the underground caverns in the Shuangjiangkou Hydropower Station, FBG sensors were used to monitor the small displacements of key monitoring points in the large-scale physical model during excavation. In building the test specimen, the most successful way to embed FBG sensors in the physical model was to make an opening and add quick-set silicone. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers. The experimental results are also in good agreement with the numerical simulation results. In conclusion, FBG sensors can effectively measure small displacements of monitoring points throughout the physical model test. The experimental results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for the in situ engineering construction.

  17. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors

    PubMed Central

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-01-01

    Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto the structural entities. A physical model in geotechnical engineering, which can accurately simulate the construction processes and their effects on the stability of underground caverns while satisfying the similarity principles, is an actual physical entity. Using a physical model test of the underground caverns in the Shuangjiangkou Hydropower Station, FBG sensors were used to monitor the small displacements of key monitoring points in the large-scale physical model during excavation. In building the test specimen, the most successful way to embed FBG sensors in the physical model was to make an opening and add quick-set silicone. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers. The experimental results are also in good agreement with the numerical simulation results. In conclusion, FBG sensors can effectively measure small displacements of monitoring points throughout the physical model test. The experimental results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for the in situ engineering construction. PMID:26404287

  18. Rigor and reproducibility in research with transcranial electrical stimulation: An NIMH-sponsored workshop.

    PubMed

    Bikson, Marom; Brunoni, Andre R; Charvet, Leigh E; Clark, Vincent P; Cohen, Leonardo G; Deng, Zhi-De; Dmochowski, Jacek; Edwards, Dylan J; Frohlich, Flavio; Kappenman, Emily S; Lim, Kelvin O; Loo, Colleen; Mantovani, Antonio; McMullen, David P; Parra, Lucas C; Pearson, Michele; Richardson, Jessica D; Rumsey, Judith M; Sehatpour, Pejman; Sommers, David; Unal, Gozde; Wassermann, Eric M; Woods, Adam J; Lisanby, Sarah H

    Neuropsychiatric disorders are a leading source of disability and require novel treatments that target mechanisms of disease. As such disorders are thought to result from aberrant neuronal circuit activity, neuromodulation approaches are of increasing interest given their potential for manipulating circuits directly. Low intensity transcranial electrical stimulation (tES) with direct currents (transcranial direct current stimulation, tDCS) or alternating currents (transcranial alternating current stimulation, tACS) represent novel, safe, well-tolerated, and relatively inexpensive putative treatment modalities. This report seeks to promote the science, technology and effective clinical applications of these modalities, identify research challenges, and suggest approaches for addressing these needs in order to achieve rigorous, reproducible findings that can advance clinical treatment. The National Institute of Mental Health (NIMH) convened a workshop in September 2016 that brought together experts in basic and human neuroscience, electrical stimulation biophysics and devices, and clinical trial methods to examine the physiological mechanisms underlying tDCS/tACS, technologies and technical strategies for optimizing stimulation protocols, and the state of the science with respect to therapeutic applications and trial designs. Advances in understanding mechanisms, methodological and technological improvements (e.g., electronics, computational models to facilitate proper dosing), and improved clinical trial designs are poised to advance rigorous, reproducible therapeutic applications of these techniques. A number of challenges were identified and meeting participants made recommendations to address them. These recommendations align with requirements in NIMH funding opportunity announcements to, among other needs, define dosimetry, demonstrate dose/response relationships, implement rigorous blinded trial designs, employ computational modeling, and demonstrate target engagement when testing stimulation-based interventions for the treatment of mental disorders. Published by Elsevier Inc.

  19. Rigor and reproducibility in research with transcranial electrical stimulation: An NIMH-sponsored workshop

    PubMed Central

    Bikson, Marom; Brunoni, Andre R.; Charvet, Leigh E.; Clark, Vincent P.; Cohen, Leonardo G.; Deng, Zhi-De; Dmochowski, Jacek; Edwards, Dylan J.; Frohlich, Flavio; Kappenman, Emily S.; Lim, Kelvin O.; Loo, Colleen; Mantovani, Antonio; McMullen, David P.; Parra, Lucas C.; Pearson, Michele; Richardson, Jessica D.; Rumsey, Judith M.; Sehatpour, Pejman; Sommers, David; Unal, Gozde; Wassermann, Eric M.; Woods, Adam J.; Lisanby, Sarah H.

    2018-01-01

    Background Neuropsychiatric disorders are a leading source of disability and require novel treatments that target mechanisms of disease. As such disorders are thought to result from aberrant neuronal circuit activity, neuromodulation approaches are of increasing interest given their potential for manipulating circuits directly. Low intensity transcranial electrical stimulation (tES) with direct currents (transcranial direct current stimulation, tDCS) or alternating currents (transcranial alternating current stimulation, tACS) represent novel, safe, well-tolerated, and relatively inexpensive putative treatment modalities. Objective This report seeks to promote the science, technology and effective clinical applications of these modalities, identify research challenges, and suggest approaches for addressing these needs in order to achieve rigorous, reproducible findings that can advance clinical treatment. Methods The National Institute of Mental Health (NIMH) convened a workshop in September 2016 that brought together experts in basic and human neuroscience, electrical stimulation biophysics and devices, and clinical trial methods to examine the physiological mechanisms underlying tDCS/tACS, technologies and technical strategies for optimizing stimulation protocols, and the state of the science with respect to therapeutic applications and trial designs. Results Advances in understanding mechanisms, methodological and technological improvements (e.g., electronics, computational models to facilitate proper dosing), and improved clinical trial designs are poised to advance rigorous, reproducible therapeutic applications of these techniques. A number of challenges were identified and meeting participants made recommendations to address them. Conclusions These recommendations align with requirements in NIMH funding opportunity announcements to, among other needs, define dosimetry, demonstrate dose/response relationships, implement rigorous blinded trial designs, employ computational modeling, and demonstrate target engagement when testing stimulation-based interventions for the treatment of mental disorders. PMID:29398575

  20. Affordable and personalized lighting using inverse modeling and virtual sensors

    NASA Astrophysics Data System (ADS)

    Basu, Chandrayee; Chen, Benjamin; Richards, Jacob; Dhinakaran, Aparna; Agogino, Alice; Martin, Rodney

    2014-03-01

    Wireless sensor networks (WSN) have great potential to enable personalized intelligent lighting systems while reducing building energy use by 50%-70%. As a result, WSN systems are being increasingly integrated in state-of-the-art intelligent lighting systems. In the future these systems will enable participation of lighting loads as ancillary services. However, such systems can be expensive to install and lack the plug-and-play quality necessary for user-friendly commissioning. In this paper we present an integrated system of wireless sensor platforms and modeling software to enable affordable and user-friendly intelligent lighting. It requires approximately 60% fewer sensor deployments compared to current commercial systems. This reduction in sensor deployments has been achieved by optimally replacing the actual photo-sensors with real-time discrete predictive inverse models. Spatially sparse and clustered sub-hourly photo-sensor data captured by the WSN platforms are used to develop and validate a piece-wise linear regression of indoor light distribution. This deterministic data-driven model accounts for sky conditions and solar position. The optimal placement of photo-sensors is performed iteratively to achieve the best predictability of the light field desired for indoor lighting control. Using two weeks of daylight and artificial light training data acquired at the Sustainability Base at NASA Ames, the model was able to predict the light level at seven monitored workstations with 80%-95% accuracy. We estimate that 10% adoption of this intelligent wireless sensor system in commercial buildings could save 0.2-0.25 quads (quadrillion BTU) of energy nationwide.
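
    A rough sketch of the discrete predictive inverse model described above: one linear illuminance regression per sky-condition cluster, fitted to sparse photo-sensor data. The feature set, the clustering rule, and the synthetic data below are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def fit_piecewise_model(features, lux, labels):
        """Fit one linear illuminance model per sky-condition cluster."""
        models = {}
        for c in np.unique(labels):
            m = labels == c
            X = np.hstack([features[m], np.ones((m.sum(), 1))])  # bias column
            models[c], *_ = np.linalg.lstsq(X, lux[m], rcond=None)
        return models

    def predict(models, features, labels):
        X = np.hstack([features, np.ones((len(features), 1))])
        return np.array([X[i] @ models[c] for i, c in enumerate(labels)])

    # Toy demo: two sky-condition clusters with different linear responses.
    rng = np.random.default_rng(0)
    F = rng.uniform(0, 1, (200, 3))       # e.g. solar altitude/azimuth, blinds
    lab = (F[:, 0] > 0.5).astype(int)     # crude clear/overcast split
    lux = np.where(lab == 0, 300 + 200 * F[:, 1], 600 + 100 * F[:, 2])
    mdl = fit_piecewise_model(F, lux, lab)
    print(np.abs(predict(mdl, F, lab) - lux).max())  # near-zero residual
    ```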

  1. A sensor simulation framework for the testing and evaluation of external hazard monitors and integrated alerting and notification functions

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Venable, Kyle; Bezawada, Rajesh; Adami, Tony; Vadlamani, Ananth K.

    2009-05-01

    This paper discusses a sensor simulator/synthesizer framework that can be used to test and evaluate various sensor integration strategies for the implementation of an External Hazard Monitor (EHM) and Integrated Alerting and Notification (IAN) function as part of NASA's Integrated Intelligent Flight Deck (IIFD) project. The IIFD project under NASA's Aviation Safety program "pursues technologies related to the flight deck that ensure crew workload and situational awareness are both safely optimized and adapted to the future operational environment as envisioned by NextGen." Within the simulation framework, various inputs to the IIFD and its subsystems, the EHM and IAN, are simulated, synthesized from actual collected data, or played back from actual flight test sensor data. Sensors and avionics included in this framework are TCAS, ADS-B, forward-looking infrared, vision cameras, GPS, inertial navigators, EGPWS, laser detection and ranging sensors, altimeters, communication links with ATC, and weather radar. The framework is implemented in Simulink, a modeling environment developed by The MathWorks. This environment allows for the test and evaluation of various sensor and communication link configurations as well as the inclusion of feedback from the pilot on the performance of the aircraft. Specifically, this paper addresses the architecture of the simulator, the sensor model interfaces, the timing and database (environment) aspects of the sensor models, the user interface of the modeling environment, and the various avionics implementations.

  2. Academic Rigor in the College Classroom: Two Federal Commissions Strive to Define Rigor in the Past 70 Years

    ERIC Educational Resources Information Center

    Francis, Clay

    2018-01-01

    Historic notions of academic rigor usually follow from critiques of the system--we often define our goals for academically rigorous work through the lens of our shortcomings. This chapter discusses how the Truman Commission in 1947 and the Spellings Commission in 2006 shaped the way we think about academic rigor in today's context.

  3. An articulated predictive model for fluid-free artificial basilar membrane as broadband frequency sensor

    NASA Astrophysics Data System (ADS)

    Ahmed, Riaz; Banerjee, Sourav

    2018-02-01

    In this article, an extremely versatile predictive model for newly developed Basilar meta-Membrane (BM2) sensors is reported, with variable engineering parameters that contribute to its frequency selection capabilities. The predictive model reported herein advances over existing methods by incorporating versatile and nonhomogeneous (e.g., functionally graded) model parameters that not only exploit the possibilities of creating complex combinations of broadband frequency sensors but also explain the unique, previously unexplained physical phenomena that prevail in BM2, e.g., tailgating waves. In recent years, a few notable attempts were made to fabricate artificial basilar membranes, mimicking the mechanics of the human cochlea within a very short range of frequencies, and a few models were proposed to explain the operation of these sensors. Here, we fundamentally argue against this "fabrication to explanation" approach and propose a model-driven predictive design process for the design of any BM2 as a broadband sensor. Inspired by the physics of the basilar membrane, a frequency-domain predictive model is proposed in which both the material and geometrical parameters can be arbitrarily varied. Broadband frequency sensing is applicable in many fields of science, engineering, and technology, such as sensors for chemical, biological, and acoustic applications. With the proposed model, which is three times faster than its FEM counterpart, it is possible to alter the attributes of a selected length of the designed sensor using complex combinations of model parameters, based on target frequency applications. Finally, the tailgating wave peaks in artificial basilar membranes that prevail in previously reported experimental studies are also explained using the proposed model.

  4. Method of Forming a Hot Film Sensor System on a Model

    NASA Technical Reports Server (NTRS)

    Tran, Sang Q. (Inventor)

    1998-01-01

    A method of forming a hot film sensor directly on a model is provided. A polyimide solution is sprayed onto the model. The model so sprayed is then heated in air. The steps of spraying and heating are repeated until a polyimide film of desired thickness is achieved on the model. The model with the polyimide film thereon is then thoroughly dried in air. One or more hot film sensors and corresponding electrical conducting leads are then applied directly onto the polyimide film.

  5. Satellite Ocean Color Sensor Design Concepts and Performance Requirements

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.; Meister, Gerhard; Monosmith, Bryan

    2014-01-01

    In late 1978, the National Aeronautics and Space Administration (NASA) launched the Nimbus-7 satellite with the Coastal Zone Color Scanner (CZCS) and several other sensors, all of which provided major advances in Earth remote sensing. The inspiration for the CZCS is usually attributed to an article in Science by Clarke et al., who demonstrated that large changes in open ocean spectral reflectance are correlated to chlorophyll-a concentrations. Chlorophyll-a is the primary photosynthetic pigment in green plants (marine and terrestrial) and is used in estimating primary production, i.e., the amount of carbon fixed into organic matter during photosynthesis. Thus, accurate estimates of global and regional primary production are key to studies of the earth's carbon cycle. Because the investigators used an airborne radiometer, they were able to demonstrate the increased radiance contribution of the atmosphere with altitude that would be a major issue for spaceborne measurements. Since 1978, there has been much progress in satellite ocean color remote sensing, such that the technique is well established and is used for climate change science and routine operational environmental monitoring. Also, the science objectives and accompanying methodologies have expanded and evolved through a succession of global missions, e.g., the Ocean Color and Temperature Sensor (OCTS), the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Medium Resolution Imaging Spectrometer (MERIS), and the Global Imager (GLI). With each advance in science objectives, new and more stringent requirements for sensor capabilities (e.g., spectral coverage) and performance (e.g., signal-to-noise ratio, SNR) are established. The CZCS had four bands for chlorophyll and aerosol corrections. The Ocean Color Imager (OCI) recommended for the NASA Pre-Aerosol, Cloud, and Ocean Ecosystems (PACE) mission includes hyperspectral coverage from 350 to 800 nanometers at 5 nm resolution, with three additional discrete near infrared (NIR) and shortwave infrared (SWIR) ocean aerosol correction bands. Also, to avoid drift in sensor sensitivity from being interpreted as environmental change, climate change research requires rigorous monitoring of sensor stability. For SeaWiFS, monthly lunar imaging tracked sensor stability to an accuracy of approximately 0.1%, which allowed the data to be used for climate studies [2]. It is now acknowledged by the international community that future missions and sensor designs need to accommodate lunar calibrations. An overview of ocean color remote sensing, a review of the progress made in the field, and the variety of research applications derived from global satellite ocean color data are provided. The purpose of this chapter is to discuss the design options for ocean color satellite radiometers, performance and testing criteria, and sensor components (optics, detectors, electronics, etc.) that must be integrated into an instrument concept. These ultimately dictate the quality and quantity of data that can be delivered as a trade against mission cost. Historically, science and sensor technology have advanced in a "leap-frog" manner in that sensor design requirements for a mission are defined many years before a sensor is launched and, by the end of the mission, perhaps 15-20 years later, science applications and requirements are well beyond the capabilities of the sensor.
Section 3 provides a summary of historical mission science objectives and sensor requirements. This progression is expected to continue in the future as long as sensor costs can be constrained to affordable levels and still allow the incorporation of new technologies without incurring unacceptable risk to mission success. The IOCCG Report Number 13 discusses future ocean biology mission Level-1 requirements in depth.

  6. Emergency cricothyrotomy for trismus caused by instantaneous rigor in cardiac arrest patients.

    PubMed

    Lee, Jae Hee; Jung, Koo Young

    2012-07-01

    Instantaneous rigor as muscle stiffening occurring in the moment of death (or cardiac arrest) can be confused with rigor mortis. If trismus is caused by instantaneous rigor, orotracheal intubation is impossible and a surgical airway should be secured. Here, we report 2 patients who had emergency cricothyrotomy for trismus caused by instantaneous rigor. This case report aims to help physicians understand instantaneous rigor and to emphasize the importance of securing a surgical airway quickly on the occurrence of trismus. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
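
    As a minimal illustration of calibration-with-uncertainty in the spirit of the analysis above, the sketch below fits a sensor response by least squares and reports 1-sigma parameter uncertainties from the estimated covariance; the response function and data are hypothetical, not the paper's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def response(angle_deg, gain, offset, quad):
        """Hypothetical inertial sensor output versus applied pitch angle."""
        return gain * np.sin(np.radians(angle_deg)) + quad * angle_deg**2 + offset

    rng = np.random.default_rng(1)
    applied = np.linspace(-30, 30, 25)              # applied pitch angles (deg)
    measured = response(applied, 1.02, 0.05, 1e-5) + rng.normal(0, 0.002, 25)

    popt, pcov = curve_fit(response, applied, measured)
    perr = np.sqrt(np.diag(pcov))                   # 1-sigma parameter errors
    print("gain, offset, quad:", popt, "+/-", perr)
    ```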

  8. Propagation Modeling and Defending of a Mobile Sensor Worm in Wireless Sensor and Actuator Networks.

    PubMed

    Wang, Tian; Wu, Qun; Wen, Sheng; Cai, Yiqiao; Tian, Hui; Chen, Yonghong; Wang, Baowei

    2017-01-13

    WSANs (Wireless Sensor and Actuator Networks) are derived from traditional wireless sensor networks by introducing mobile actuator elements. Previous studies indicated that mobile actuators can improve network performance in terms of data collection, energy supplementation, etc. However, according to our experimental simulations, the actuator's mobility also causes the sensor worm to spread faster if an attacker launches worm attacks on an actuator and compromises it successfully. Traditional worm propagation models and defense strategies did not consider the diffusion with a mobile worm carrier. To address this new problem, we first propose a microscopic mathematical model to describe the propagation dynamics of the sensor worm. Then, a two-step local defending strategy (LDS) with a mobile patcher (a mobile element which can distribute patches) is designed to recover the network. In LDS, all recovering operations are only taken in a restricted region to minimize the cost. Extensive experimental results demonstrate that our model estimations are rather accurate and consistent with the actual spreading scenario of the mobile sensor worm. Moreover, on average, the LDS outperforms other algorithms by approximately 50% in terms of the cost.
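
    A toy version of the propagation setting described above: static sensors are compromised by a single mobile infected carrier as it roams. The motion model and infection rule are illustrative assumptions, not the paper's microscopic model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    pos = rng.uniform(0, 100, (200, 2))    # 200 static sensor positions
    infected = np.zeros(200, dtype=bool)
    carrier = np.array([50.0, 50.0])       # mobile infected actuator
    R, p = 10.0, 0.5                       # radio range, infection probability

    for step in range(300):
        carrier += rng.normal(0, 3, 2)     # random carrier motion (assumed)
        near = np.linalg.norm(pos - carrier, axis=1) < R
        infected |= near & (rng.random(200) < p)
    print("infected fraction:", infected.mean())
    ```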

  9. Lumped Model Generation and Evaluation: Sensitivity and Lie Algebraic Techniques with Applications to Combustion

    DTIC Science & Technology

    1989-03-03

    address global parameter space mapping issues for first order differential equations. The rigorous criteria for the existence of exact lumping by linear projective transformations were also established.

  10. A New Calibration Method for Commercial RGB-D Sensors.

    PubMed

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-05-24

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.
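
    A minimal sketch of fitting a depth-error correction of the general kind described. Structured-light depth errors are often modeled in inverse-depth (disparity) space, so a linear fit there is assumed here; this is an illustration with invented data, not the authors' exact model.

    ```python
    import numpy as np

    measured = np.array([0.52, 1.05, 1.61, 2.18, 2.77, 3.39])  # sensor depth (m)
    truth = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00])     # reference (m)

    # Fit true_disparity = a * measured_disparity + b in inverse-depth space.
    A = np.vstack([1.0 / measured, np.ones_like(measured)]).T
    (a, b), *_ = np.linalg.lstsq(A, 1.0 / truth, rcond=None)

    def correct(depth_m):
        return 1.0 / (a / depth_m + b)

    print(correct(measured))  # corrected depths, close to the reference values
    ```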

  11. Monitoring programs to assess reintroduction efforts: A critical component in recovery

    USGS Publications Warehouse

    Muths, E.; Dreitz, V.

    2008-01-01

    Reintroduction is a powerful tool in our conservation toolbox. However, the necessary follow-up, i.e., long-term monitoring, is not commonplace, and if instituted may lack rigor. We contend that valid monitoring is possible, even with sparse data. We present a means to monitor based on demographic data and a projection model using the Wyoming toad (Bufo baxteri) as an example. Using an iterative process, existing data are built upon gradually such that demographic estimates and subsequent inferences increase in reliability. Reintroduction and defensible monitoring may become increasingly relevant as the outlook for amphibians, especially in tropical regions, continues to deteriorate and emergency collection, captive breeding, and reintroduction become necessary. Rigorous use of appropriate modeling and an adaptive approach can validate the use of reintroduction and substantially increase its value to recovery programs. © 2008 Museu de Ciències Naturals.

  12. Manyscale Computing for Sensor Processing in Support of Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.

    2014-09-01

    Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.

  13. Bayesian probabilistic approach for inverse source determination from limited and noisy chemical or biological sensor concentration measurements

    NASA Astrophysics Data System (ADS)

    Yee, Eugene

    2007-04-01

    Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.
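
    The sketch below illustrates the general Bayesian machinery with a random-walk Metropolis sampler. The 1/r^2 dilution forward model, flat prior, and noise level are crude stand-ins for the paper's adjoint-based source-receptor relationship.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    sensors = np.array([[10., 0.], [0., 10.], [10., 10.], [20., 5.]])
    true_src, true_q = np.array([4., 6.]), 50.0

    def forward(src, q):
        r2 = np.sum((sensors - src) ** 2, axis=1)
        return q / (1.0 + r2)              # toy dilution model (assumed)

    data = forward(true_src, true_q) + rng.normal(0, 0.05, len(sensors))

    def log_post(theta):
        x, y, q = theta
        if not 0.0 < q < 1000.0:           # flat prior with bounds
            return -np.inf
        resid = data - forward(np.array([x, y]), q)
        return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

    theta, lp, samples = np.array([0., 0., 10.]), -np.inf, []
    for _ in range(20000):
        prop = theta + rng.normal(0, [0.3, 0.3, 2.0])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    print("posterior mean (x, y, q):", np.mean(samples[5000:], axis=0))
    ```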

  14. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
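
    A compact example of the solution-verification half of this procedure: Richardson extrapolation over three systematically refined grids yields the observed order of accuracy and a numerical-error estimate. The sample values are invented for illustration.

    ```python
    import numpy as np

    f_coarse, f_medium, f_fine = 1.0480, 1.0120, 1.0030  # grid solutions
    r = 2.0                                              # refinement ratio

    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)  # extrapolated value
    err_fine = abs(f_fine - f_exact)                     # numerical-error estimate
    print(f"observed order p = {p:.2f}, extrapolated = {f_exact:.5f}, "
          f"fine-grid error = {err_fine:.2e}")
    ```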

  15. Component Design Report: International Transportation Energy Demand Determinants Model

    EIA Publications

    2017-01-01

    This Component Design Report discusses working design elements for a new model to replace the International Transportation Model (ITran) in the World Energy Projection System Plus (WEPS+) that is maintained by the U.S. Energy Information Administration. The key objective of the new International Transportation Energy Demand Determinants (ITEDD) model is to enable more rigorous, quantitative research related to energy consumption in the international transportation sectors.

  16. No-arbitrage, leverage and completeness in a fractional volatility model

    NASA Astrophysics Data System (ADS)

    Vilela Mendes, R.; Oliveira, M. J.; Rodrigues, A. M.

    2015-02-01

    When the volatility process is driven by fractional noise one obtains a model which is consistent with the empirical market data. Depending on whether the stochasticity generators of log-price and volatility are independent or are the same, two versions of the model are obtained with different leverage behaviors. Here, the no-arbitrage and completeness properties of the models are rigorously studied.

  17. Rigorous Results for the Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas; Reed, Stephanie

    2018-05-01

    This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
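
    A quick simulation of the uniform reshuffling model (the global-interaction version) makes the conjectured exponential limit easy to see: for an exponential distribution the mean equals the standard deviation. The parameters below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    money = np.full(2000, 10.0)            # 2000 agents, mean wealth 10
    for _ in range(200_000):
        i, j = rng.integers(0, money.size, 2)
        if i == j:
            continue
        total = money[i] + money[j]
        u = rng.random()                   # uniform reshuffling of the pair
        money[i], money[j] = u * total, (1 - u) * total
    print("mean:", money.mean(), "std:", money.std())  # both near 10
    ```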

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mou, J.I.; King, C.

    The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. Deterministic modeling techniques were used to derive models for machine performance assessment and enhancement. Sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  19. 1st Order Modeling of a SAW Delay Line using MathCAD®

    NASA Technical Reports Server (NTRS)

    Wilson, William C.; Atkinson, Gary M.

    2007-01-01

    To aid in the development of SAW sensors for Integrated Vehicle Health Monitoring applications, a first order model of a SAW delay line has been created using MathCAD®. The model implements the Impulse Response method to calculate the frequency response, impedance, and insertion loss. This paper presents the model and the results from the model for a SAW delay line design. Integrated Vehicle Health Monitoring (IVHM) of aerospace vehicles requires rugged sensors having reduced volume, mass, and power that can be used to measure a variety of phenomena. Wireless systems are preferred when retro-fitting sensors onto existing vehicles [1]. Surface Acoustic Wave (SAW) devices are capable of sensing temperature, pressure, strain, chemical species, mass loading, acceleration, and shear stress. SAW technology is low cost, rugged, lightweight, and extremely low power. Passive wireless sensors have been developed using SAW technology. For these reasons new SAW sensors are being investigated for aerospace applications.
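
    A minimal impulse-response-method sketch for a SAW delay line: each interdigital transducer (IDT) with Np finger pairs contributes a sinc-shaped frequency response centered at the synchronous frequency, and the delay line response is the product of the two IDT responses with a propagation delay. All parameter values below are illustrative, not those of the paper's design.

    ```python
    import numpy as np

    v = 3488.0     # nominal SAW velocity on YZ lithium niobate (m/s)
    f0 = 100e6     # synchronous frequency (Hz)
    Np = 50        # finger pairs per IDT
    d = 2e-3       # center-to-center IDT separation (m)

    f = np.linspace(95e6, 105e6, 2001)
    x = Np * np.pi * (f - f0) / f0
    H_idt = np.sinc(x / np.pi)           # sin(x)/x; np.sinc(t)=sin(pi t)/(pi t)
    H = H_idt**2 * np.exp(-2j * np.pi * f * d / v)   # two IDTs plus delay
    il_db = 20 * np.log10(np.abs(H) / np.abs(H).max())
    band = f[il_db > -3]
    print("3 dB bandwidth ~", (band[-1] - band[0]) / 1e6, "MHz")
    ```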

  20. Parametric modeling of wideband piezoelectric polymer sensors: Design for optoacoustic applications

    NASA Astrophysics Data System (ADS)

    Fernández Vidal, A.; Ciocci Brazzano, L.; Matteo, C. L.; Sorichetti, P. A.; González, M. G.

    2017-09-01

    In this work, we present a three-dimensional model for the design of wideband piezoelectric polymer sensors which includes the geometry and the properties of the transducer materials. The model uses FFT and numerical integration techniques in an explicit, semi-analytical approach. To validate the model, we made electrical and mechanical measurements on homemade sensors for optoacoustic applications. Each device was implemented using a polyvinylidene fluoride thin film piezoelectric polymer with a thickness of 25 μm. The sensors had detection areas in the range between 0.5 mm2 and 35 mm2 and were excited by acoustic pressure pulses of 5 ns (FWHM) from a source with a diameter around 10 μm. The experimental data obtained from the measurements agree well with the model results. We discuss the relative importance of the sensor design parameters for optoacoustic applications and we provide guidelines for the optimization of devices.

  1. Parametric modeling of wideband piezoelectric polymer sensors: Design for optoacoustic applications.

    PubMed

    Fernández Vidal, A; Ciocci Brazzano, L; Matteo, C L; Sorichetti, P A; González, M G

    2017-09-01

    In this work, we present a three-dimensional model for the design of wideband piezoelectric polymer sensors which includes the geometry and the properties of the transducer materials. The model uses FFT and numerical integration techniques in an explicit, semi-analytical approach. To validate the model, we made electrical and mechanical measurements on homemade sensors for optoacoustic applications. Each device was implemented using a polyvinylidene fluoride thin film piezoelectric polymer with a thickness of 25 μm. The sensors had detection areas in the range between 0.5 mm2 and 35 mm2 and were excited by acoustic pressure pulses of 5 ns (FWHM) from a source with a diameter around 10 μm. The experimental data obtained from the measurements agree well with the model results. We discuss the relative importance of the sensor design parameters for optoacoustic applications and we provide guidelines for the optimization of devices.

  2. Attitude Estimation for Large Field-of-View Sensors

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Crassidis, John L.; Markley, F. Landis

    2005-01-01

    The QUEST measurement noise model for unit vector observations has been widely used in spacecraft attitude estimation for more than twenty years. It was derived under the approximation that the noise lies in the tangent plane of the respective unit vector and is axially symmetrically distributed about the vector. For large field-of-view sensors, however, this approximation may be poor, especially when the measurement falls near the edge of the field of view. In this paper a new measurement noise model is derived based on a realistic noise distribution in the focal-plane of a large field-of-view sensor, which shows significant differences from the QUEST model for unit vector observations far away from the sensor boresight. An extended Kalman filter for attitude estimation is then designed with the new measurement noise model. Simulation results show that with the new measurement model the extended Kalman filter achieves better estimation performance using large field-of-view sensor observations.
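
    For reference, the QUEST model assigns a unit-vector measurement b the covariance R = sigma^2 (I - b b^T), i.e., isotropic noise confined to the tangent plane. The sketch below builds that covariance for a measurement far off boresight, the regime where the paper argues the approximation degrades; the values are illustrative.

    ```python
    import numpy as np

    sigma = 1e-4                                    # per-axis noise level (rad)
    b = np.array([np.sin(0.6), 0.0, np.cos(0.6)])   # ~34 deg off boresight

    R_quest = sigma**2 * (np.eye(3) - np.outer(b, b))
    print(R_quest)
    # Rank 2: no noise along b itself, consistent with the unit-norm constraint.
    # A focal-plane model instead propagates pixel noise through the projection,
    # giving a direction-dependent tangent-plane covariance.
    ```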

  3. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
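
    A sketch of fitting the linear signal-dependent noise model described above: for a photon-counting sensor, noise variance grows linearly with signal level (shot noise) on top of a constant floor. The data here are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    signal = np.linspace(100, 5000, 50)       # mean signal levels (counts)
    a_true, b_true = 1.0, 400.0               # shot-noise slope, read-noise floor
    var_meas = a_true * signal + b_true + rng.normal(0, 50, 50)

    A = np.vstack([signal, np.ones_like(signal)]).T
    (a_hat, b_hat), *_ = np.linalg.lstsq(A, var_meas, rcond=None)
    print(f"variance ~ {a_hat:.3f} * signal + {b_hat:.1f}")
    # In a detector, this per-band variance would replace the constant noise
    # term in the clutter covariance used by the test statistic.
    ```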

  4. Effects of room temperature aging on two cryogenic temperature sensor models used in aerospace applications

    NASA Astrophysics Data System (ADS)

    Courts, S. Scott; Krause, John

    2012-06-01

    Cryogenic temperature sensors used in aerospace applications are typically procured far in advance of the mission launch date. Depending upon the program, the temperature sensors may be stored at room temperature for extended periods, as installation and ground-based testing can take years before the actual flight. The effects of long term storage at room temperature are sometimes approximated by the use of accelerated aging at temperatures well above room temperature, but this practice can yield invalid results as the sensing material and/or electrical contacting method can be increasingly unstable with higher temperature exposure. To date, little data are available on the effects of extended room temperature aging on sensors commonly used in aerospace applications. This research examines two such temperature sensor models - the Lake Shore Cryotronics, Inc. model Cernox™ and DT-670-SD temperature sensors. Sample groups of each model type have been maintained for ten years or longer with room temperature storage between calibrations. Over an eighteen year period, the Cernox™ temperature sensors exhibited a stability of better than ±20 mK for T<30 K and better than ±0.1% of temperature for T>30 K. Over a ten year period the model DT-670-SD sensors exhibited a stability of better than ±140 mK for T<25 K and better than ±75 mK for T>25 K.

  5. Health Monitoring for Airframe Structural Characterization

    NASA Technical Reports Server (NTRS)

    Munns, Thomas E.; Kent, Renee M.; Bartolini, Antony; Gause, Charles B.; Borinski, Jason W.; Dietz, Jason; Elster, Jennifer L.; Boyd, Clark; Vicari, Larry; Ray, Asok

    2002-01-01

    This study established requirements for structural health monitoring systems, identified and characterized a prototype structural sensor system, developed sensor interpretation algorithms, and demonstrated the sensor systems on operationally realistic test articles. Fiber-optic corrosion sensors (i.e., moisture and metal ion sensors) and low-cycle fatigue sensors (i.e., strain and acoustic emission sensors) were evaluated to validate their suitability for monitoring aging degradation; characterize the sensor performance in aircraft environments; and demonstrate placement processes and multiplexing schemes. In addition, a unique micromachined multi-measurand sensor concept was developed and demonstrated. The results show that structural degradation of aircraft materials could be effectively detected and characterized using available and emerging sensors. A key component of the structural health monitoring capability is the ability to interpret the information provided by the sensor system in order to characterize the structural condition. Novel deterministic and stochastic fatigue damage development and growth models were developed for this program. These models enable real time characterization and assessment of structural fatigue damage.

  6. A new torsion pendulum for testing enhancements to the LISA Gravitational Reference Sensor

    NASA Astrophysics Data System (ADS)

    Conklin, John; Chilton, A.; Ciani, G.; Mueller, G.; Olatunde, T.; Shelley, R.

    2014-01-01

    The Laser Interferometer Space Antenna (LISA), the most mature concept for observing gravitational waves from space, consists of three Sun-orbiting spacecraft that form a million km-scale equilateral triangle. Each spacecraft houses two free-floating test masses (TM), which are protected from disturbing forces so that they follow pure geodesics in spacetime. A single test mass together with its housing and associated components is referred to as a gravitational reference sensor (GRS). Laser interferometry is used to measure the minute variations in the distance between these free-falling TMs, caused by gravitational waves. The demanding acceleration noise requirement of 3×10⁻¹⁵ m s⁻²/√Hz for the LISA GRS has motivated a rigorous testing campaign in Europe and a dedicated technology mission, LISA Pathfinder, scheduled for launch in 2015. Recently, efforts have begun in the U.S. to design and assemble a new, nearly thermally noise limited torsion pendulum for testing GRS technology enhancements and for understanding the dozens of acceleration noise sources that affect the performance of the GRS. This experimental facility is based on the design of a similar facility at the University of Trento, and will consist of a vacuum enclosed torsion pendulum that suspends mock-ups of the LISA test masses, surrounded by electrode housings. The GRS technology enhancements under development include a novel TM charge control scheme based on ultraviolet LEDs, simplified capacitive readout electronics, and a six degree-of-freedom, all-optical TM sensor. This presentation will describe the design of the torsion pendulum facility, its expected performance, and the potential technology enhancements.

  7. Study of the quality characteristics in cold-smoked salmon (Salmo salar) originating from pre- or post-rigor raw material.

    PubMed

    Birkeland, S; Akse, L

    2010-01-01

    Improved slaughtering procedures in the salmon industry have caused a delayed onset of rigor mortis and, thus, a potential for pre-rigor secondary processing. The aim of this study was to investigate the effect of rigor status at the time of processing on quality traits (color, texture, sensory, and microbiological) in injection-salted, cold-smoked Atlantic salmon (Salmo salar). Injection of pre-rigor fillets caused a significant (P<0.001) contraction (-7.9% ± 0.9%) on the caudal-cranial axis. No significant differences in instrumental color (a*, b*, C*, or h*), texture (hardness), or sensory traits (aroma, color, taste, and texture) were observed between pre- or post-rigor processed fillets; however, post-rigor fillets (1477 ± 38 g) had a significantly (P<0.05) higher fracturability than pre-rigor fillets (1369 ± 71 g). Pre-rigor fillets were significantly (P<0.01) lighter, L*, (39.7 ± 1.0) than post-rigor fillets (37.8 ± 0.8) and had significantly lower (P<0.05) aerobic plate count (APC), 1.4 ± 0.4 log CFU/g against 2.6 ± 0.6 log CFU/g, and psychrotrophic count (PC), 2.1 ± 0.2 log CFU/g against 3.0 ± 0.5 log CFU/g, than post-rigor processed fillets. This study showed that similar quality characteristics can be obtained in cold-smoked products processed either pre- or post-rigor when using suitable injection salting protocols and smoking techniques. © 2010 Institute of Food Technologists®

  8. A Smart Sensor Web for Ocean Observation: Integrated Acoustics, Satellite Networking, and Predictive Modeling

    NASA Astrophysics Data System (ADS)

    Arabshahi, P.; Chao, Y.; Chien, S.; Gray, A.; Howe, B. M.; Roy, S.

    2008-12-01

    In many areas of Earth science, including climate change research, there is a need for near real-time integration of data from heterogeneous and spatially distributed sensors, in particular in-situ and space-based sensors. The data integration, as provided by a smart sensor web, enables numerous improvements, namely, 1) adaptive sampling for more efficient use of expensive space-based sensing assets, 2) higher fidelity information gathering from data sources through integration of complementary data sets, and 3) improved sensor calibration. The specific purpose of the smart sensor web development presented here is to provide for adaptive sampling and calibration of space-based data via in-situ data. Our ocean-observing smart sensor web presented herein is composed of both mobile and fixed underwater in-situ ocean sensing assets and Earth Observing System (EOS) satellite sensors providing larger-scale sensing. An acoustic communications network forms a critical link in the web between the in-situ and space-based sensors and facilitates adaptive sampling and calibration. After an overview of primary design challenges, we report on the development of various elements of the smart sensor web. These include (a) a cable-connected mooring system with a profiler under real-time control with inductive battery charging; (b) a glider with integrated acoustic communications and broadband receiving capability; (c) satellite sensor elements; (d) an integrated acoustic navigation and communication network; and (e) a predictive model via the Regional Ocean Modeling System (ROMS). Results from field experiments, including an upcoming one in Monterey Bay (October 2008) using live data from NASA's EO-1 mission in a semi-closed-loop system, together with ocean models from ROMS, are described. Plans for future adaptive sampling demonstrations using the smart sensor web are also presented.

  9. Soft sensor for real-time cement fineness estimation.

    PubMed

    Stanišić, Darko; Jorgovanović, Nikola; Popov, Nikola; Čongradac, Velimir

    2015-03-01

    This paper describes the design and implementation of soft sensors to estimate cement fineness. Soft sensors are mathematical models that use available data to provide real-time information on process variables when the information, for whatever reason, is not available by direct measurement. In this application, soft sensors are used to provide information on a process variable normally provided by off-line laboratory tests performed at long time intervals. Cement fineness is one of the crucial parameters that define the quality of produced cement. Providing real-time information on cement fineness using soft sensors can overcome limitations and problems that originate from a lack of information between two laboratory tests. The model inputs were selected from candidate process variables using an information theoretic approach. Models based on multi-layer perceptrons were developed, and their ability to estimate the cement fineness of laboratory samples was analyzed. The models that had the best performance and the capacity to adapt to changes in the cement grinding circuit were selected to implement soft sensors. The soft sensors were tested using data from continuous cement production to demonstrate their use in real-time fineness estimation. Their performance was highly satisfactory, and the sensors proved to be capable of providing valuable information on cement grinding circuit performance. After successful off-line tests, the soft sensors were implemented and installed in the control room of a cement factory. Results on site confirm the results obtained by tests conducted during soft sensor development. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
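
    A minimal soft-sensor sketch in the spirit of the paper: a multi-layer perceptron maps routinely measured grinding-circuit variables to fineness. The variable names and data below are hypothetical stand-ins for plant measurements, not the paper's inputs.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)
    # Columns: separator speed, mill power, feed rate, elevator load (assumed).
    X = rng.normal(size=(500, 4))
    fineness = 3500 + 120 * X[:, 0] - 80 * X[:, 2] + rng.normal(0, 30, 500)

    soft_sensor = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
    )
    soft_sensor.fit(X[:400], fineness[:400])
    print("holdout R^2:", soft_sensor.score(X[400:], fineness[400:]))
    ```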

  10. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    PubMed

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology based on a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
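
    An illustrative decomposition of CGM error into the components named above: a first-order time lag between blood and interstitial glucose, a linear gain/offset calibration error, and additive noise. The structure is generic and the parameter values are not the paper's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.arange(0, 600, 5.0)                      # minutes
    bg = 120 + 40 * np.sin(2 * np.pi * t / 300)     # reference blood glucose

    tau = 9.4                                       # time-lag constant (min)
    ig = np.empty_like(bg)
    ig[0] = bg[0]
    for k in range(1, len(t)):                      # first-order lag dynamics
        ig[k] = ig[k - 1] + (bg[k] - ig[k - 1]) * (t[k] - t[k - 1]) / tau

    gain, offset = 1.05, -6.0                       # calibration-error component
    cgm = gain * ig + offset + rng.normal(0, 3, len(t))  # plus additive noise

    mard = np.mean(np.abs(cgm - bg) / bg) * 100
    print(f"simulated MARD: {mard:.1f}%")
    ```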

  11. Using sensors to measure activity in people with stroke.

    PubMed

    Fulk, George D; Sazonov, Edward

    2011-01-01

    The purpose of this study was to determine the ability of a novel shoe-based sensor that uses accelerometers, pressure sensors, and pattern recognition with a support vector machine (SVM) to accurately identify sitting, standing, and walking postures in people with stroke. Subjects with stroke wore the shoe-based sensor while randomly assuming 3 main postures: sitting, standing, and walking. A SVM classifier was used to train and validate the data to develop individual and group models, which were tested for accuracy, recall, and precision. Eight subjects participated. Both individual and group models were able to accurately identify the different postures (99.1% to 100% individual models and 76.9% to 100% group models). Recall and precision were also high for both individual (0.99 to 1.00) and group (0.82 to 0.99) models. The unique combination of accelerometer and pressure sensors built into the shoe was able to accurately identify postures. This shoe sensor could be used to provide accurate information on community performance of activities in people with stroke as well as provide behavioral enhancing feedback as part of a telerehabilitation intervention.
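
    A sketch of the posture-recognition step: window features from the shoe's accelerometer and pressure sensors feed a support vector machine that labels sitting, standing, or walking. The features and data below are simulated stand-ins for the real sensor streams.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(8)

    def window_features(label, n):
        # [mean heel pressure, mean forefoot pressure, accel variance] (assumed)
        base = {0: [0.2, 0.1, 0.01], 1: [0.8, 0.7, 0.02], 2: [0.6, 0.9, 0.5]}
        return rng.normal(base[label], 0.05, size=(n, 3))

    X = np.vstack([window_features(c, 100) for c in (0, 1, 2)])
    y = np.repeat([0, 1, 2], 100)                   # sit / stand / walk
    clf = SVC(kernel="rbf", C=10.0)
    print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```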

  12. Approach to Privacy-Preserve Data in Two-Tiered Wireless Sensor Network Based on Linear System and Histogram

    NASA Astrophysics Data System (ADS)

    Dang, Van H.; Wohlgemuth, Sven; Yoshiura, Hiroshi; Nguyen, Thuc D.; Echizen, Isao

    Wireless sensor networks (WSN) have been one of the key technologies for the future, with broad applications from the military to everyday life [1,2,3,4,5]. There are two kinds of WSN models: models with sensors for sensing data and a sink for receiving and processing queries from users, and models with special additional nodes capable of storing large amounts of data from sensors and processing queries from the sink. Among the latter type, a two-tiered model [6,7] has been widely adopted because of its storage and energy saving benefits for weak sensors, as proved by the advent of commercial storage node products such as Stargate [8] and RISE. However, by concentrating storage in certain nodes, this model becomes more vulnerable to attack. Our novel technique, called zip-histogram, contributes to solving the problems of previous studies [6,7] by protecting the stored data's confidentiality and integrity (including data from the sensors and queries from the sink) against attackers who might target storage nodes in two-tiered WSNs.

  13. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network

    PubMed Central

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection of sensor nodes, cluster-head nodes, and Sink nodes. A Cultural-Algorithm and Artificial-Fish-Swarm-Algorithm optimized Back Propagation is applied to misuse detection of the Sink node. Extensive simulations demonstrate that this integrated model has a strong intrusion detection performance. PMID:26447696

  14. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network.

    PubMed

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection of sensor nodes, cluster-head nodes, and Sink nodes. A Cultural-Algorithm and Artificial-Fish-Swarm-Algorithm optimized Back Propagation is applied to misuse detection of the Sink node. Extensive simulations demonstrate that this integrated model has a strong intrusion detection performance.

  15. Using URIs to effectively transmit sensor data and metadata

    NASA Astrophysics Data System (ADS)

    Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise; Gardner, Thomas

    2017-04-01

    Autonomous ocean observation is massively increasing the number of sensors in the ocean. Accordingly, the continuing increase in datasets produced, makes selecting sensors that are fit for purpose a growing challenge. Decision making on selecting quality sensor data, is based on the sensor's metadata, i.e. manufacturer specifications, history of calibrations etc. The Open Geospatial Consortium (OGC) has developed the Sensor Web Enablement (SWE) standards to facilitate integration and interoperability of sensor data and metadata. The World Wide Web Consortium (W3C) Semantic Web technologies enable machine comprehensibility promoting sophisticated linking and processing of data published on the web. Linking the sensor's data and metadata according to the above-mentioned standards can yield practical difficulties, because of internal hardware bandwidth restrictions and a requirement to constrain data transmission costs. Our approach addresses these practical difficulties by uniquely identifying sensor and platform models and instances through URIs, which resolve via content negotiation to either OGC's sensor meta language, sensorML or W3C's Linked Data. Data transmitted by a sensor incorporate the sensor's unique URI to refer to its metadata. Sensor and platform model URIs and descriptions are created and hosted by the British Oceanographic Data Centre (BODC) linked systems service. The sensor owner creates the sensor and platform instance URIs prior and during sensor deployment, through an updatable web form, the Sensor Instance Form (SIF). SIF enables model and instance URI association but also platform and sensor linking. The use of URIs, which are dynamically generated through the SIF, offers both practical and economical benefits to the implementation of SWE and Linked Data standards in near real time systems. Data can be linked to metadata dynamically in-situ while saving on the costs associated to the transmission of long metadata descriptions. The transmission of short URIs also enables the implementation of standards on systems where it is impractical, such as legacy hardware.
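
    A hypothetical example of the resolution step described above: the same sensor-instance URI is dereferenced with different Accept headers to obtain either SensorML or Linked Data. The URI and media types below are placeholders, not a documented BODC endpoint.

    ```python
    import requests

    def fetch_sensor_description(uri, fmt="sensorml"):
        """Resolve a sensor URI via HTTP content negotiation (illustrative)."""
        accept = {"sensorml": "application/xml", "rdf": "text/turtle"}[fmt]
        resp = requests.get(uri, headers={"Accept": accept}, timeout=10)
        resp.raise_for_status()
        return resp.text

    # e.g. fetch_sensor_description("https://example.org/id/sensor/1234", "rdf")
    ```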

  16. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models

    PubMed Central

    2018-01-01

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique, and an error-based prediction model library that is composed of a multilayer perceptron neural network, and k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving assistance user scenario, connecting a five-LiDAR-sensor network, is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds. PMID:29748521

  17. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models.

    PubMed

    Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E

    2018-05-10

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique, and an error-based prediction model library that is composed of a multilayer perceptron neural network, and k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving assistance user scenario, connecting a five-LiDAR-sensor network, is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
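
    A toy single-state Q-learning (bandit-style) sketch of the sensor-count selection step: the action is the number of LiDAR sensors to enable, and the reward trades an assumed reliability curve against per-sensor cost, standing in for the simulator feedback used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    actions = np.arange(1, 6)                 # candidate number of sensors
    Q = np.zeros(len(actions))
    alpha, eps = 0.1, 0.2                     # learning rate, exploration

    def reward(n):
        reliability = 1 - 0.5 ** n            # diminishing returns (assumed)
        return reliability - 0.05 * n + rng.normal(0, 0.02)

    for _ in range(2000):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q))
        Q[a] += alpha * (reward(actions[a]) - Q[a])   # single-state update
    print("chosen sensor count:", actions[int(np.argmax(Q))], "Q:", Q.round(3))
    ```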

  18. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    PubMed Central

    Rahim, Ruzairi Abdul; Chen, Leong Lai; San, Chan Kok; Rahiman, Mohd Hafiz Fazalul; Fea, Pang Jon

    2009-01-01

    This paper explains in detail the solution to the forward and inverse problem faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image. PMID:22291523
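
    A rough forward-problem sketch: a sensitivity map for one emitter-detector ray is approximated by sampling points along the ray and accumulating path length per pixel. The geometry is simplified and illustrative, not the paper's fan-beam layout.

    ```python
    import numpy as np

    N = 32                                        # N x N reconstruction grid

    def ray_sensitivity(p0, p1, n_samples=500):
        s = np.linspace(0, 1, n_samples)[:, None]
        pts = p0 + s * (p1 - p0)                  # points along the ray (unit box)
        idx = np.clip((pts * N).astype(int), 0, N - 1)
        S = np.zeros((N, N))
        seg = np.linalg.norm(p1 - p0) / n_samples
        np.add.at(S, (idx[:, 1], idx[:, 0]), seg) # accumulate path length
        return S

    S = ray_sensitivity(np.array([0.0, 0.5]), np.array([1.0, 0.8]))
    print("total path length ~", S.sum())         # ~ straight-line distance
    ```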

  19. A Model of Solid State Gas Sensors

    NASA Astrophysics Data System (ADS)

    Woestman, J. T.; Brailsford, A. D.; Shane, M.; Logothetis, E. M.

    1997-03-01

    Solid state gas sensors are widely used to measure the concentrations of gases such as CO, CH4, C3H6, H2, C3H8 and O2. The applications of these sensors range from air-to-fuel ratio control in combustion processes, including those in automotive engines and industrial furnaces, to leakage detection of inflammable and toxic gases in domestic and industrial environments. As the need increases to accurately measure smaller and smaller concentrations, problems such as poor selectivity, stability and response time limit the use of these sensors. In an effort to overcome some of these limitations, a theoretical model of the transient behavior of solid state gas sensors has been developed. In this presentation, a model for the transient response of an electrochemical gas sensor to gas mixtures containing O2 and one reducing species, such as CO, is discussed. This model accounts for the transport of the reactive species to the sampling electrode, the catalyzed oxidation/reduction reaction of these species, and the generation of the resulting electrical signal. The model will be shown to reproduce the results of published steady state models and to agree with experimental steady state and transient data.

  20. Using finite element modelling and experimental methods to investigate planar coil sensor topologies for inductive measurement of displacement

    NASA Astrophysics Data System (ADS)

    Moreton, Gregory; Meydan, Turgut; Williams, Paul

    2018-04-01

    The usage of planar sensors is widespread due to their non-contact nature and small size profiles; however, only a few basic design types are generally considered. In order to develop planar coil designs, we have performed extensive finite element modelling (FEM) and experimentation to understand the performance of different planar sensor topologies when used in inductive sensing. We have applied this approach to develop a novel displacement sensor. Models of different topologies with varying pitch values have been analysed using the ANSYS Maxwell FEM package; furthermore, the models incorporated a movable soft magnetic amorphous ribbon element. The different models used in the FEM were then constructed and experimentally tested with topologies that included mesh, meander, square coil, and circular coil configurations. The sensors were used to detect the displacement of the amorphous ribbon. A LabVIEW program controlled both the displacement stage and the impedance analyser, the latter capturing the varying inductance values with ribbon displacement. There was good correlation between the FEM models and the experimental data, confirming that the methodology described here offers an effective way to develop planar coil based sensors with improved performance.

  1. An Improved High-Sensitivity Airborne Transient Electromagnetic Sensor for Deep Penetration

    PubMed Central

    Chen, Shudong; Guo, Shuxu; Wang, Haofeng; He, Miao; Liu, Xiaoyan; Qiu, Yu; Zhang, Shuang; Yuan, Zhiwen; Zhang, Haiyang; Fang, Dong; Zhu, Jun

    2017-01-01

    The investigation depth of transient electromagnetic sensors can be effectively increased by reducing the system noise, which is mainly composed of sensor internal noise, electromagnetic interference (EMI), and environmental noise. A high-sensitivity airborne transient electromagnetic (AEM) sensor with low internal noise and good shielding effectiveness is of great importance for deep penetration. In this article, the design and optimization of such an AEM sensor is described in detail. To reduce sensor internal noise, a noise model with both a damping resistor and a preamplifier is established and analyzed. The results indicate that a sensor with a large diameter, low resonant frequency, and low sampling rate will have lower internal noise. To improve the electromagnetic compatibility of the sensor, an electromagnetic shielding model for a center-tapped coil is established and discussed in detail. Previous studies have shown that unclosed shields with multiple layers and center grounding can effectively suppress EMI and eddy currents. According to these studies, an improved differential AEM sensor is constructed with a diameter, resultant effective area, resonant frequency, and normalized equivalent input noise of 1.1 m, 114 m², 35.6 kHz, and 13.3 nV/m², respectively. The accuracy of the noise model and the shielding effectiveness of the sensor have been verified experimentally. The results show a good agreement between calculated and measured values for the sensor internal noise. Additionally, over 20 dB of shielding effectiveness is achieved in a complex electromagnetic environment. All of these results show a great improvement in sensor internal noise and shielding effectiveness. PMID:28106718

  2. Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management

    DTIC Science & Technology

    2016-11-16

    order for cloud computing infrastructures to be successfully deployed in real world scenarios as tools for crisis and catastrophe management, where...Statement of the Problem Studied As cloud computing becomes the dominant computational infrastructure[1] and cloud technologies make a transition to hosting...1. Formulate rigorous mathematical models representing technological capabilities and resources in cloud computing for performance modeling and

  3. Kirkpatrick and Beyond: A Review of Models of Training Evaluation. IES Report.

    ERIC Educational Resources Information Center

    Tamkin, P.; Yarnall, J.; Kerrin, M.

    Many organizations are not satisfied that their methods of evaluating training are rigorous or extensive enough to answer questions of value to them. Complaints about Kirkpatrick's popular four-step model (1959) of training evaluation are that each level is assumed to be associated with the previous and next levels and that the model is too simple…

  4. Accurate force field for molybdenum by machine learning large materials data

    NASA Astrophysics Data System (ADS)

    Chen, Chi; Deng, Zhi; Tran, Richard; Tang, Hanmei; Chu, Iek-Heng; Ong, Shyue Ping

    2017-09-01

    In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods do not provide satisfactory accuracy on many properties. We will show that by fitting to the energies, forces, and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including elastic constants, melting point, phonon spectra, surface energies, grain boundary energies, etc. We will outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large and long-time scale simulations.
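
    The abstract describes a differential evolution search over fitting hyperparameters so that the model error and the property-prediction error fall together. The sketch below shows that pattern with scipy's differential_evolution on a toy weighted ridge fit; the descriptors, weights, and error terms are stand-ins, not the actual SNAP objective.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))   # stand-in descriptors (e.g., bispectrum terms)
      y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 200)

      def objective(params):
          w_energy, ridge = params
          # weighted ridge regression as a stand-in for the SNAP linear fit
          W = np.diag(np.full(len(y), w_energy))
          coef = np.linalg.solve(X.T @ W @ X + ridge * np.eye(5), X.T @ W @ y)
          fit_err = np.mean((X @ coef - y) ** 2)
          prop_err = abs(coef[0] - 1.0)   # proxy for a property-prediction error
          return fit_err + prop_err       # both errors lowered simultaneously

      res = differential_evolution(objective, bounds=[(0.1, 10.0), (1e-6, 1.0)],
                                   maxiter=50, seed=0)
      print(res.x, res.fun)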

  5. Effect of Pre-rigor Salting Levels on Physicochemical and Textural Properties of Chicken Breast Muscles.

    PubMed

    Kim, Hyun-Wook; Hwang, Ko-Eun; Song, Dong-Heon; Kim, Yong-Jae; Ham, Youn-Kyung; Yeo, Eui-Joo; Jeong, Tae-Jun; Choi, Yun-Sang; Kim, Cheon-Jei

    2015-01-01

    This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on the physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of a high ultimate pH in chicken breast muscles at 24 h post-mortem. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salted chicken breast muscle (p<0.05). On the other hand, the increase in pre-rigor salting level inhibited myofibrillar protein degradation and accelerated lipid oxidation. However, NaCl concentrations of 3% and 4% showed no great differences in the physicochemical and textural properties attributable to pre-rigor salting (p>0.05). Therefore, our study confirmed the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that a 2% NaCl concentration is the minimum required to ensure a definite pre-rigor salting effect on chicken breast muscle.

  6. Effect of Pre-rigor Salting Levels on Physicochemical and Textural Properties of Chicken Breast Muscles

    PubMed Central

    Choi, Yun-Sang

    2015-01-01

    This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on the physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of a high ultimate pH in chicken breast muscles at 24 h post-mortem. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salted chicken breast muscle (p<0.05). On the other hand, the increase in pre-rigor salting level inhibited myofibrillar protein degradation and accelerated lipid oxidation. However, NaCl concentrations of 3% and 4% showed no great differences in the physicochemical and textural properties attributable to pre-rigor salting (p>0.05). Therefore, our study confirmed the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that a 2% NaCl concentration is the minimum required to ensure a definite pre-rigor salting effect on chicken breast muscle. PMID:26761884

  7. Use of software engineering techniques in the design of the ALEPH data acquisition system

    NASA Astrophysics Data System (ADS)

    Charity, T.; McClatchey, R.; Harvey, J.

    1987-08-01

    The SASD methodology is being used to provide a rigorous design framework for various components of the ALEPH data acquisition system. The Entity-Relationship data model is used to describe the layout and configuration of the control and acquisition systems and detector components. State Transition Diagrams are used to specify control applications such as run control and resource management, and Data Flow Diagrams assist in decomposing software tasks and defining interfaces between processes. These techniques encourage rigorous software design, leading to enhanced functionality and reliability. Improved documentation and communication ensure continuity over the system life-cycle and simplify project management.

  8. Apparatus for sensor failure detection and correction in a gas turbine engine control system

    NASA Technical Reports Server (NTRS)

    Spang, H. A., III; Wanger, R. P. (Inventor)

    1981-01-01

    A gas turbine engine control system maintains a selected level of engine performance despite the failure or abnormal operation of one or more engine parameter sensors. The control system employs a continuously updated engine model which simulates engine performance and generates signals representing real time estimates of the engine parameter sensor signals. The estimate signals are transmitted to a control computational unit which utilizes them in lieu of the actual engine parameter sensor signals to control the operation of the engine. The estimate signals are also compared with the corresponding actual engine parameter sensor signals and the resulting difference signals are utilized to update the engine model. If a particular difference signal exceeds specific tolerance limits, the difference signal is inhibited from updating the model and a sensor failure indication is provided to the engine operator.
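
    A hedged sketch of the estimate/compare/inhibit logic in the abstract: a simple tracking model stands in for the engine model, the difference signal updates the model while it stays within tolerance, and a larger difference is flagged as a sensor failure and inhibited from updating the model. Gain and tolerance values are illustrative, not from the patent.

      def monitor(readings, estimate, gain=0.2, tol=5.0):
          """Yield (sample index, model estimate, failed flag) per reading."""
          for k, y in enumerate(readings):
              diff = y - estimate
              if abs(diff) > tol:
                  yield k, estimate, True      # failure: hold the model estimate
              else:
                  estimate += gain * diff      # healthy: update the engine model
                  yield k, estimate, False

      # usage: a sensor that fails high from sample 5 onward
      data = [10.0, 10.2, 10.1, 10.3, 10.2, 99.0, 99.0, 99.0]
      for k, est, failed in monitor(data, estimate=10.0):
          print(k, round(est, 2), "FAILED" if failed else "ok")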

  9. Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Masoudi, A.; Newson, T. P.

    2017-04-01

    A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyse the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations, using a signal processing procedure similar to that used to extract the phase information from the sensing setup.

  10. Theoretical and experimental investigations of sensor location for optimal aeroelastic system state estimation

    NASA Technical Reports Server (NTRS)

    Liu, G.

    1985-01-01

    One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors, in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality, based upon the same performance index, and the total cost of installing and maintaining extra sensors. An experimental study on choosing the sensor location was conducted on an aeroelastic system. The system model, which includes the unsteady aerodynamics model developed by Stephen Rock, was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
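
    A small sketch of the sensor location index as defined above: for each candidate location, form the weighted sum of mean square estimation errors and keep the minimiser. The per-state error curves here are made-up placeholders; in practice they would come from the estimator designed for each candidate location.

      import numpy as np

      locations = np.linspace(0.0, 1.0, 11)   # candidate sensor positions
      weights = np.array([0.7, 0.3])          # weights on two estimated states

      def mse_per_state(x):
          # placeholder estimation errors as a function of sensor location
          return np.array([(x - 0.25) ** 2 + 0.01, (x - 0.8) ** 2 + 0.02])

      index = np.array([weights @ mse_per_state(x) for x in locations])
      print("sensor location index:", np.round(index, 4))
      print("best location:", locations[np.argmin(index)])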

  11. A new algorithm for construction of coarse-grained sites of large biomolecules.

    PubMed

    Li, Min; Zhang, John Z H; Xia, Fei

    2016-04-05

    The development of coarse-grained (CG) models for large biomolecules remains a challenge in multiscale simulations, including a rigorous definition of CG representations for them. In this work, we proposed a new stepwise optimization imposed with the boundary-constraint (SOBC) algorithm to construct the CG sites of large biomolecules, based on the scheme of essential dynamics CG. By means of SOBC, we can rigorously derive the CG representations of biomolecules with less computational cost. The SOBC is particularly efficient for the CG definition of large systems with thousands of residues. The resulting CG sites can be parameterized as a CG model using the normal mode analysis-based fluctuation matching method. Through normal mode analysis, the obtained modes of the CG model can accurately reflect the functionally related slow motions of biomolecules. The SOBC algorithm can be used for the construction of CG sites of large biomolecules such as F-actin and for the study of mechanical properties of biomaterials. © 2015 Wiley Periodicals, Inc.

  12. Derivation of rigorous conditions for high cell-type diversity by algebraic approach.

    PubMed

    Yoshida, Hiroshi; Anai, Hirokazu; Horimoto, Katsuhisa

    2007-01-01

    The development of a multicellular organism is a dynamic process. Starting with one or a few cells, the organism develops into different types of cells with distinct functions. We have constructed a simple model by considering the cell number increase and the cell-type order conservation, and have assessed conditions for cell-type diversity. This model is based on a stochastic Lindenmayer system with cell-to-cell interactions for three types of cells. In the present model, we have successfully derived complex but rigorous algebraic relations between the proliferation and transition rates for cell-type diversity by using a symbolic method: quantifier elimination (QE). Surprisingly, three modes for the proliferation and transition rates have emerged for large ratios of the initial cells to the developed cells. The three modes have revealed that the equality between the development rates for the highest cell-type diversity is reduced during the development process of multicellular organisms. Furthermore, we have found that the highest cell-type diversity originates from order conservation.

  13. Shear-induced opening of the coronal magnetic field

    NASA Technical Reports Server (NTRS)

    Wolfson, Richard

    1995-01-01

    This work describes the evolution of a model solar corona in response to motions of the footpoints of its magnetic field. The mathematics involved is semianalytic, with the only numerical solution being that of an ordinary differential equation. This approach, while lacking the flexibility and physical details of full MHD simulations, allows for very rapid computation along with complete and rigorous exploration of the model's implications. We find that the model coronal field bulges upward, at first slowly and then more dramatically, in response to footpoint displacements. The energy in the field rises monotonically from that of the initial potential state, and the field configuration and energy approach asymptotically those of a fully open field. Concurrently, electric currents develop and concentrate into a current sheet as the limiting case of the open field is approached. Examination of the equations shows rigorously that in the asymptotic limit of the fully open field, the current layer becomes a true ideal MHD singularity.

  14. Rigorous numerical modeling of scattering-type scanning near-field optical microscopy and spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun

    2017-11-01

    Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and material research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modeling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation that is based on realistic experimental parameters and signal extraction procedures. By directly comparing with the experiments as well as other simulation efforts, our methods offer a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.

  15. Diffraction-based overlay measurement on dedicated mark using rigorous modeling method

    NASA Astrophysics Data System (ADS)

    Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang

    2012-03-01

    Diffraction Based Overlay (DBO) has been widely evaluated by numerous authors, and results show that DBO can provide better performance than Imaging Based Overlay (IBO). However, DBO has its own problems. As is well known, modeling-based DBO (mDBO) faces challenges of low measurement sensitivity and crosstalk between various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle encountered by empirical DBO (eDBO) is that a few pads must be employed to gain sufficient information on overlay-induced diffraction signature variations, which consumes more wafer space and measuring time. Also, eDBO may suffer from mark profile asymmetry caused by processes. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which is economical and time saving. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as scatterometry technologies.

  16. Layered Plant-Growth Media for Optimizing Gaseous, Liquid and Nutrient Requirements: Modeling, Design and Monitoring

    NASA Astrophysics Data System (ADS)

    Heinse, R.; Jones, S. B.; Bingham, G.; Bugbee, B.

    2006-12-01

    Rigorous management of restricted root zones utilizing coarse-textured porous media benefits greatly from optimizing the gas-water balance within plant-growth media. Geophysical techniques can help to quantify root-zone parameters like water content, air-filled porosity, temperature and nutrient concentration to better address root-system performance. The efficiency of plant growth amid high root densities and limited volumes is critically linked to maintaining a favorable water content/air-filled porosity balance while considering adequate fluxes to replenish water at decreasing hydraulic conductivities during uptake. Volumes adjacent to roots also need to be optimized to provide adequate nutrients throughout the plant's life cycle while avoiding excessive salt concentrations. Our objectives were to (1) design and model an optimized root-zone system using optimized porous media layers, (2) verify our design by monitoring the water content distribution and tracking nutrient release and transport, and (3) mimic water and nutrient uptake using plants or wicks to draw water from the root system. We developed a unique root-zone system using layered Ottawa sands promoting vertically uniform water contents and air-filled porosities. Watering was achieved by maintaining a shallow saturated layer at the bottom of the column and allowing capillarity to draw water upward, where coarser particle sizes formed the bottom layers and finer particle sizes formed the layers above. The depth of each layer was designed to optimize water content based on measurements and modeling of the wetting water retention curves. Layer boundaries were chosen to retain saturation between 50 and 85 percent. The saturation distribution was verified by dual-probe heat-pulse water-content sensors. The nutrient experiment involved embedding slow-release fertilizer in the porous media in order to detect variations in electrical resistivity versus time during the release, diffusion and uptake of nutrients. The experiment required a specific geometry for the acquisition of ERT data using the heat-pulse water-content sensors' steel needles as electrodes. ERT data were analyzed using the sensed water contents and deriving pore-water resistivities using Archie's law. This design should provide a more optimal root-zone environment, maintaining a more uniform water content and on-demand supply of water, than designs with one particle size at all column heights. The monitoring capability offers an effective means to describe the relationship between root-system performance and plant growth.
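
    The Archie's-law step mentioned above reduces to one line once porosity and the sensed water content are known. The sketch below uses the common form rho_bulk = a * rho_w * phi^(-m) * S^(-n), solved for the pore-water resistivity rho_w; the exponents are generic values for sands, not the study's calibrated ones.

      def pore_water_resistivity(rho_bulk, porosity, theta, a=1.0, m=1.5, n=2.0):
          """Invert Archie's law for pore-water resistivity (ohm-m)."""
          s = theta / porosity                 # water saturation from water content
          return rho_bulk * porosity ** m * s ** n / a

      # e.g., ERT bulk resistivity 120 ohm-m, porosity 0.35, water content 0.25
      print(pore_water_resistivity(rho_bulk=120.0, porosity=0.35, theta=0.25))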

  17. Bayesian operational modal analysis with asynchronous data, part I: Most probable value

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Chen; Au, Siu-Kui

    2018-01-01

    In vibration tests, multiple sensors are used to obtain detailed mode shape information about the tested structure. Time synchronisation among data channels is required in conventional modal identification approaches. Modal identification can be more flexibly conducted if this is not required. Motivated by the potential gain in feasibility and economy, this work proposes a Bayesian frequency domain method for modal identification using asynchronous 'output-only' ambient data, i.e. 'operational modal analysis'. It provides a rigorous means for identifying the global mode shape taking into account the quality of the measured data and their asynchronous nature. This paper (Part I) proposes an efficient algorithm for determining the most probable values of modal properties. The method is validated using synthetic and laboratory data. The companion paper (Part II) investigates identification uncertainty and challenges in applications to field vibration data.

  18. Semantically-enabled sensor plug & play for the sensor web.

    PubMed

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented, and the measured sensor data have to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research.

  19. Semantically-Enabled Sensor Plug & Play for the Sensor Web

    PubMed Central

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented, and the measured sensor data have to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033

  20. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
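
    A hedged, discrete-time sketch of the multiple-model idea: a bank of scalar Kalman filters, each tuned to a different hypothesised sensor bias, with the smallest summed innovation energy identifying the fault value. The paper's setting is continuous-time (Kalman-Bucy) and multivariable; here the nominal state is assumed known, so each filter is anchored to it, purely for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      true_bias = 2.0
      z = 5.0 + true_bias + rng.normal(0, 0.3, 200)   # biased readings of x = 5

      def innovation_energy(meas, bias, x0=5.0, p0=0.01, q=0.0, r=0.09):
          x, p, sse = x0, p0, 0.0                     # anchored at the nominal value
          for y in meas:
              p += q
              k = p / (p + r)
              innov = (y - bias) - x                  # remove the hypothesised bias
              x += k * innov
              p *= (1 - k)
              sse += innov ** 2
          return sse

      hypotheses = [0.0, 1.0, 2.0, 3.0]               # candidate bias fault patterns
      scores = {b: innovation_energy(z, b) for b in hypotheses}
      print("identified bias:", min(scores, key=scores.get))   # prints 2.0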

  1. Linking Simulation with Formal Verification and Modeling of Wireless Sensor Network in TLA+

    NASA Astrophysics Data System (ADS)

    Martyna, Jerzy

    In this paper, we present the results of the simulation of a wireless sensor network based on the flooding technique and SPIN protocols. The wireless sensor network was specified and verified by means of the TLA+ specification language [1]. For a model of a wireless sensor network built this way, simulation was carried out with the help of specially constructed software tools. The obtained results allow us to predict the behaviour of the wireless sensor network in various topologies and spatial densities. Visualization of the output data enables precise examination of some phenomena in wireless sensor networks, such as a hidden terminal, etc.

  2. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications

    PubMed Central

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-01-01

    Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models, and the development of a cross-platform mobile application can be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models across platforms. It is a modest but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants, who would be able to check and compare their smartphone sensors against a large number of similar or identical models. PMID:27049391
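
    A minimal sketch of the parameter extraction such an application performs: bias and noise of one accelerometer axis estimated from a stationary capture, as the mean offset from gravity and the sample standard deviation. The samples below are simulated placeholders; a real capture would replace them.

      import statistics

      G = 9.80665   # expected reading on the vertical axis when stationary (m/s^2)
      samples = [9.81, 9.79, 9.83, 9.80, 9.78, 9.82, 9.80, 9.81]   # simulated capture

      bias = statistics.fmean(samples) - G
      noise = statistics.stdev(samples)
      print(f"bias = {bias:.4f} m/s^2, noise (1-sigma) = {noise:.4f} m/s^2")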

  3. Sensor Web Dynamic Measurement Techniques and Adaptive Observing Strategies

    NASA Technical Reports Server (NTRS)

    Talabac, Stephen J.

    2004-01-01

    Sensor Web observing systems may have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable environmental features and events. This improvement will come about by integrating novel data collection techniques, new or improved instruments, emerging communications technologies and protocols, sensor mark-up languages, and interoperable planning and scheduling systems. In contrast to today's observing systems, "event-driven" sensor webs will synthesize real- or near-real time measurements and information from other platforms and then react by reconfiguring the platforms and instruments to invoke new measurement modes and adaptive observation strategies. Similarly, "model-driven" sensor webs will utilize environmental prediction models to initiate targeted sensor measurements or to use a new observing strategy. The sensor web concept contrasts with today's data collection techniques and observing system operations concepts where independent measurements are made by remote sensing and in situ platforms that do not share, and therefore cannot act upon, potentially useful complementary sensor measurement data and platform state information. This presentation describes NASA's view of event-driven and model-driven Sensor Webs and highlights several research and development activities at the Goddard Space Flight Center.

  4. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor

    PubMed Central

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-01-01

    In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting improvement of the measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established in combination with the structure model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. The load calibration and data acquisition experiment systems were built, and calibration experiments were then performed. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors. PMID:27649194
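
    The abstract defines a coupling degree (ε) without giving its formula. The sketch below uses one common convention, the worst-case ratio of off-diagonal (cross-axis) to diagonal response in the calibration matrix, only to illustrate how such a figure is computed from calibration data; the matrix values are invented.

      import numpy as np

      # calibration matrix: output of channel i per unit force applied on axis j
      C = np.array([[1.00, 0.02, 0.01],
                    [0.03, 0.98, 0.02],
                    [0.01, 0.02, 1.01]])

      off_diag = C - np.diag(np.diag(C))
      eps = np.max(np.sum(np.abs(off_diag), axis=1) / np.abs(np.diag(C)))
      print(f"coupling degree epsilon = {eps:.2%}")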

  5. Simulation of the spatial frequency-dependent sensitivities of Acoustic Emission sensors

    NASA Astrophysics Data System (ADS)

    Boulay, N.; Lhémery, A.; Zhang, F.

    2018-05-01

    Typical configurations of nondestructive testing by Acoustic Emission (NDT/AE) make use of multiple sensors positioned on the tested structure for detecting evolving flaws and possibly locating them by triangulation. Sensor positions must be optimized to ensure globally sensitive coverage of AE events while minimizing the number of sensors. A simulator of NDT/AE is under development to provide help with designing testing configurations and with interpreting measurements. A global model combines sub-models simulating the various phenomena taking place at different spatial and temporal scales (crack growth, AE source and radiation, wave propagation in the structure, reception by sensors). In this context, accurate modelling of sensor behaviour must be developed. These sensors generally consist of a cylindrical piezoelectric element of radius approximately equal to its thickness, without damping and bonded to its case. Sensors themselves are bonded to the structure being tested. Here, a multiphysics finite element simulation tool is used to study the complex behaviour of an AE sensor. The simulated behaviour is shown to accurately reproduce the high-amplitude measured contributions used in AE practice.

  6. Acoustic/seismic signal propagation and sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Marlin, David H.; Mackay, Sean

    2007-04-01

    Performance, optimal employment, and interpretation of data from acoustic and seismic sensors depend strongly and in complex ways on the environment in which they operate. Software tools for guiding non-expert users of acoustic and seismic sensors are therefore much needed. However, such tools require that many individual components be constructed and correctly connected together. These components include the source signature and directionality, representation of the atmospheric and terrain environment, calculation of the signal propagation, characterization of the sensor response, and mimicking of the data processing at the sensor. Selection of an appropriate signal propagation model is particularly important, as there are significant trade-offs between output fidelity and computation speed. Attenuation of signal energy, random fading, and (for array systems) variations in wavefront angle-of-arrival should all be considered. Characterization of the complex operational environment is often the weak link in sensor modeling: important issues for acoustic and seismic modeling activities include the temporal/spatial resolution of the atmospheric data, knowledge of the surface and subsurface terrain properties, and representation of ambient background noise and vibrations. Design of software tools that address these challenges is illustrated with two examples: a detailed target-to-sensor calculation application called the Sensor Performance Evaluator for Battlefield Environments (SPEBE) and a GIS-embedded approach called Battlefield Terrain Reasoning and Awareness (BTRA).
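
    A toy version of the target-to-sensor budget such tools evaluate, assuming spherical spreading plus an atmospheric absorption term compared against a sensor noise floor. The absorption coefficient and levels are placeholders, not values from SPEBE or BTRA.

      import math

      def received_level(source_db, r_m, alpha_db_per_km=2.0):
          spreading = 20 * math.log10(max(r_m, 1.0))    # spherical spreading loss
          absorption = alpha_db_per_km * r_m / 1000.0   # atmospheric absorption
          return source_db - spreading - absorption

      noise_floor = 35.0                                # assumed sensor noise, dB
      for r in (100, 500, 1000, 3000):
          lvl = received_level(100.0, r)
          print(r, "m:", round(lvl, 1), "dB", "(detectable)" if lvl > noise_floor else "")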

  7. A cloud-based information repository for bridge monitoring applications

    NASA Astrophysics Data System (ADS)

    Jeong, Seongwoon; Zhang, Yilan; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.

    2016-04-01

    This paper describes an information repository to support bridge monitoring applications on a cloud computing platform. Bridge monitoring, with instrumentation of sensors in particular, collects a significant amount of data. In addition to sensor data, a wide variety of information, such as bridge geometry, analysis models and sensor descriptions, needs to be stored. Data management plays an important role in facilitating data utilization and data sharing. While bridge information modeling (BrIM) technologies and standards have been proposed, providing a means to enable integration and facilitate interoperability, current BrIM standards support mostly information about bridge geometry. In this study, we extend the BrIM schema to include analysis models and sensor information. Specifically, using the OpenBrIM standards as the base, we draw on CSI Bridge, a commercial software package widely used for bridge analysis and design, and SensorML, a standard schema for sensor definition, to define the data entities necessary for bridge monitoring applications. NoSQL database systems are employed for the data repository. Cloud service infrastructure is deployed to enhance the scalability, flexibility and accessibility of the data management system. The data model and systems are tested using the bridge model and the sensor data collected at the Telegraph Road Bridge, Monroe, Michigan.

  8. Design and modeling of magnetically driven electric-field sensor for non-contact DC voltage measurement in electric power systems.

    PubMed

    Wang, Decai; Li, Ping; Wen, Yumei

    2016-10-01

    In this paper, the design and modeling of a magnetically driven electric-field sensor for non-contact DC voltage measurement are presented. The magnetic drive structure of the sensor is composed of a small solenoid and a cantilever beam with a cylindrical magnet mounted on it. The interaction of the magnet and the solenoid provides the magnetic driving force for the sensor. Employing the magnetic drive structure brings the benefits of low driving voltage and large vibrating displacement, which consequently results in less interference from the drive signal. In the theoretical analyses, the capacitance calculation model between the wire and the sensing electrode is built. The expression of the magnetic driving force is derived by the method of linear fitting. The dynamical model of the magnetically driven cantilever beam actuator is built using Euler-Bernoulli theory and the distributed parameter method. Taking advantage of the theoretical model, the output voltage of the proposed sensor can be predicted. The experimental results are in good agreement with the theoretical results. The proposed sensor shows a favorable linear response characteristic, with a measuring sensitivity of 9.87 μV/(V/m) at an excitation current of 37.5 mA. The electric field intensity resolution can reach 10.13 V/m.

  9. Network hydraulics inclusion in water quality event detection using multiple sensor stations data.

    PubMed

    Oliker, Nurit; Ostfeld, Avi

    2015-09-01

    Event detection is one of the most challenging current topics in water distribution systems analysis: how regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations can be efficiently utilized to detect water quality contamination events. This study describes an integrated event detection model which combines data from multiple sensor stations with network hydraulics. To date, event detection modelling has typically been limited to a single sensor station location and dataset. Single sensor station models are detached from network hydraulics insights and, as a result, might be significantly exposed to false positive alarms. This work is aimed at reducing this limitation through integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort through discovering events with lower signatures by exploring the sensors' mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A New Calibration Method for Commercial RGB-D Sensors

    PubMed Central

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-01-01

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D is required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are needed. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695
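
    A generic sketch of the depth error-model step: residuals between sensor depths and reference depths at several ranges are fitted with a low-order polynomial, which then corrects new measurements. The paper's actual model is tied to the structured-light geometry; the numbers below are invented for illustration.

      import numpy as np

      ref = np.array([0.8, 1.2, 1.6, 2.0, 2.5, 3.0, 3.5])          # reference depths (m)
      meas = np.array([0.81, 1.22, 1.63, 2.05, 2.58, 3.11, 3.64])  # sensor depths (m)

      coeffs = np.polyfit(meas, ref - meas, deg=2)   # error grows with range
      correct = lambda d: d + np.polyval(coeffs, d)

      print(np.round(correct(meas) - ref, 4))        # residuals after correction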

  11. Sensor fusion IV: Control paradigms and data structures; Proceedings of the Meeting, Boston, MA, Nov. 12-15, 1991

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.

  12. Impact of malicious servers over trust and reputation models in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Verma, Vinod Kumar; Singh, Surinder; Pathak, N. P.

    2016-03-01

    This article deals with the impact of malicious servers on different trust and reputation models in wireless sensor networks. First, we analysed five trust and reputation models, namely BTRM-WSN, Eigen trust, peer trust, power trust, and the linguistic fuzzy trust model. Further, we proposed a wireless sensor network design for the optimisation of these models. Finally, the influence of malicious servers on the behaviour of the above-mentioned trust and reputation models is discussed. Statistical analysis has been carried out to prove the validity of our proposal.

  13. Experimental evaluation of rigor mortis. VI. Effect of various causes of death on the evolution of rigor mortis.

    PubMed

    Krompecher, T; Bergerioux, C; Brandt-Casadevall, C; Gujer, H R

    1983-07-01

    The evolution of rigor mortis was studied in cases of nitrogen asphyxia, drowning and strangulation, as well as in fatal intoxications due to strychnine, carbon monoxide and curariform drugs, using a modified method of measurement. Our experiments demonstrated that: (1) Strychnine intoxication hastens the onset and passing of rigor mortis. (2) CO intoxication delays the resolution of rigor mortis. (3) The intensity of rigor may vary depending upon the cause of death. (4) If the stage of rigidity is to be used to estimate the time of death, it is necessary: (a) to perform a succession of objective measurements of rigor mortis intensity; and (b) to verify the eventual presence of factors that could play a role in the modification of its development.

  14. Real-time GIS data model and sensor web service platform for environmental data management.

    PubMed

    Gong, Jianya; Geng, Jing; Chen, Zeqiang

    2015-01-09

    Effective environmental data management is meaningful for human health. In the past, environmental data management involved developing a specific environmental data management system, but this method often lacks real-time data retrieval and sharing/interoperation capabilities. With the development of information technology, a Geospatial Service Web method is proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. A real-time GIS (Geographic Information System) data model and a Sensor Web service platform to realize environmental data management under the Geospatial Service Web framework are proposed in this study. The real-time GIS data model manages real-time data. The Sensor Web service platform is applied to support the realization of the real-time GIS data model based on Sensor Web technologies. To support the realization of the proposed real-time GIS data model, a Sensor Web service platform is implemented. Real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in the Sensor Web service platform. In addition, two use cases of real-time air quality monitoring and real-time soil moisture monitoring based on the real-time GIS data model in the Sensor Web service platform are realized and demonstrated. The total times taken in the two experiments are 3.7 s and 9.2 s, respectively. The experimental results show that the method integrating the real-time GIS data model and the Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.

  15. A Computer Model for Teaching the Dynamic Behavior of AC Contactors

    ERIC Educational Resources Information Center

    Ruiz, J.-R. R.; Espinosa, A. G.; Romeral, L.

    2010-01-01

    Ac-powered contactors are extensively used in industry in applications such as automatic electrical devices, motor starters, and heaters. In this work, a practical session that allows students to model and simulate the dynamic behavior of ac-powered electromechanical contactors is presented. Simulation is carried out using a rigorous parametric…

  16. Testing Theoretical Models of Magnetic Damping Using an Air Track

    ERIC Educational Resources Information Center

    Vidaurre, Ana; Riera, Jaime; Monsoriu, Juan A.; Gimenez, Marcos H.

    2008-01-01

    Magnetic braking is a long-established application of Lenz's law. A rigorous analysis of the laws governing this problem involves solving Maxwell's equations in a time-dependent situation. Approximate models have been developed to describe different experimental results related to this phenomenon. In this paper we present a new method for the…

  17. The Cognitive Processes Associated with Occupational/Career Indecision: A Model for Gifted Adolescents

    ERIC Educational Resources Information Center

    Jung, Jae Yup

    2013-01-01

    This study developed and tested a new model of the cognitive processes associated with occupational/career indecision for gifted adolescents. A survey instrument with rigorous psychometric properties, developed from a number of existing instruments, was administered to a sample of 687 adolescents attending three academically selective high schools…

  18. Ocean Profile Measurements during the Seasonal Ice Zone Reconnaissance Surveys

    DTIC Science & Technology

    2012-09-30

    physical processes that occur within the BCSIZ that require data from all components of SIZRS, and improve predictive models of the SIZ through model ...the IABP (Ignatius Rigor) are approved by the USCG for operation from the ADA aircraft, but we anticipate being informed of any Safety of Flight Test

  19. Approximation Methods for Inverse Problems Governed by Nonlinear Parabolic Systems

    DTIC Science & Technology

    1999-12-17

    We present a rigorous theoretical framework for approximation of nonlinear parabolic systems with delays in the context of inverse least squares...numerical results demonstrating the convergence are given for a model of dioxin uptake and elimination in a distributed liver model that is a special case of the general theoretical framework.

  20. A Geometric Comparison of the Transformation Loci with Specific and Mobile Capital

    ERIC Educational Resources Information Center

    Colander, David; Gilbert, John; Oladi, Reza

    2008-01-01

    The authors show how the transformation loci in the specific factors model (capital specificity) and the Heckscher-Ohlin-Samuelson model (capital mobility) can be rigorously derived and easily compared by using geometric techniques on the basis of Savosnick geometry. The approach shows directly that the transformation locus with capital…

  1. Parental Maltreatment, Bullying, and Adolescent Depression: Evidence for the Mediating Role of Perceived Social Support

    ERIC Educational Resources Information Center

    Seeds, Pamela M.; Harkness, Kate L.; Quilty, Lena C.

    2010-01-01

    The support deterioration model of depression states that stress deteriorates the perceived availability and/or effectiveness of social support, which then leads to depression. The present study examined this model in adolescent depression following parent-perpetrated maltreatment and peer-perpetrated bullying, as assessed by a rigorous contextual…

  2. You've Shown the Program Model Is Effective. Now What?

    ERIC Educational Resources Information Center

    Ellickson, Phyllis L.

    2014-01-01

    Rigorous tests of theory-based programs require faithful implementation. Otherwise, lack of results might be attributable to faulty program delivery, faulty theory, or both. However, once the evidence indicates the model works and merits broader dissemination, implementation issues do not fade away. How can developers enhance the likelihood that…

  3. First Experiences with Kinect v2 Sensor for Close Range 3d Modelling

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.

    2015-02-01

    RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rate, such sensors are increasingly used for 3D acquisitions, and more generally for applications in robotics or computer vision. This kind of sensor became popular especially since the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from its first device. However, due to its initial development for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper, first experiences with the Kinect v2 sensor are related, and its ability for close range 3D modelling is investigated. For this purpose, error sources on output data as well as a calibration approach are presented.

  4. Real time monitoring of urban surface water quality using a submersible, tryptophan-like fluorescence sensor

    NASA Astrophysics Data System (ADS)

    Khamis, Kieran; Bradley, Chris; Hannah, David; Stevens, Rob

    2014-05-01

    Due to the recent development of field-deployable optical sensor technology, continuous quantification and characterization of surface water dissolved organic matter (DOM) is now possible. Tryptophan-like (T1) fluorescence has the potential to be a particularly useful indicator of human influence on water quality, as T1 peaks are associated with the input of labile organic carbon (e.g. sewage or farm waste) and its microbial breakdown. Hence, real-time recording of T1 fluorescence could be particularly useful for monitoring waste water infrastructure and treatment efficiency, and for identifying contamination events at higher temporal resolution than available hitherto. However, an understanding of sensor measurement repeatability/transferability and interaction with environmental parameters (e.g. turbidity) is required. Here, to address this practical knowledge gap, we present results from a rigorous test of a commercially available submersible tryptophan fluorometer (λex 285, λem 350). Sensor performance was first examined in the laboratory by incrementally increasing turbidity under controlled conditions. Further to this, the sensor was integrated into a multi-parameter sonde and field tests were undertaken involving: (i) a spatial sampling campaign across a range of surface water sites in the West Midlands, UK; and (ii) collection of high resolution (sub-hourly) samples from an urban stream (Bournbrook, Birmingham, UK). To determine the ability of the sensor to capture the spatiotemporal dynamics of urban waters, DOM was characterized for each site or discrete time step using Excitation Emission Matrix spectroscopy and PARAFAC. In both field and laboratory settings, fluorescence intensity was attenuated at high turbidity due to suspended particles increasing absorption and light scattering. For the spatial survey, instrument readings were compared to those obtained by a laboratory-grade fluorometer (Varian Cary Eclipse), and a strong, linear relationship was apparent (R² > 0.7). Parallel water sampling and laboratory analysis identified the potential for correction of T1 fluorescence intensity based on turbidity readings. These findings highlight the potential utility of real-time monitoring of T1 fluorescence for a range of environmental applications (e.g. monitoring sewage treatment processes and tracing polluting DOM sources). However, if high/variable suspended sediment loads are anticipated, concurrent monitoring of turbidity is required for accurate readings.
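
    A sketch of the turbidity-correction idea the survey points to: regress the attenuation of the T1 signal against turbidity on paired data, then divide field readings by the fitted attenuation. The attenuation values below are invented for illustration, not the study's calibration.

      import numpy as np

      turbidity = np.array([5.0, 20.0, 50.0, 100.0, 200.0])   # NTU
      ratio = np.array([0.98, 0.92, 0.80, 0.65, 0.45])        # observed/true T1 signal

      slope, intercept = np.polyfit(turbidity, ratio, 1)      # linear attenuation model

      def corrected_t1(raw_t1, ntu):
          return raw_t1 / max(slope * ntu + intercept, 0.1)   # floor avoids blow-up

      print(round(corrected_t1(120.0, ntu=80.0), 1))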

  5. RIGOR MORTIS AND THE INFLUENCE OF CALCIUM AND MAGNESIUM SALTS UPON ITS DEVELOPMENT.

    PubMed

    Meltzer, S J; Auer, J

    1908-01-01

    Calcium salts hasten and magnesium salts retard the development of rigor mortis, that is, when these salts are administered subcutaneously or intravenously. When injected intra-arterially, concentrated solutions of both kinds of salts cause nearly an immediate onset of a strong stiffness of the muscles which is apparently a contraction, brought on by a stimulation caused by these salts and due to osmosis. This contraction, if strong, passes over without a relaxation into a real rigor. This form of rigor may be classed as work-rigor (Arbeitsstarre). In animals, at least in frogs, with intact cords, the early contraction and the following rigor are stronger than in animals with destroyed cords. If M/8 solutions (nearly equimolecular to "physiological" solutions of sodium chloride) are used, even when injected intra-arterially, calcium salts hasten and magnesium salts retard the onset of rigor. The hastening and retardation in this case, as well as in the cases of subcutaneous and intravenous injections, are ion effects and essentially due to the cations, calcium and magnesium. In the rigor hastened by calcium the effects of the extensor muscles mostly prevail; in the rigor following magnesium injection, on the other hand, either the flexor muscles prevail or the muscles become stiff in the original position of the animal at death. There seems to be no difference in the degree of stiffness in the final rigor; only the onset and development of the rigor is hastened in the case of the one salt and retarded in the other. Calcium also hastens the development of heat rigor. No positive facts were obtained with regard to the effect of magnesium upon heat rigor. Calcium also hastens and magnesium retards the onset of rigor in the left ventricle of the heart. No definite data were gathered with regard to the effects of these salts upon the right ventricle.

  6. RIGOR MORTIS AND THE INFLUENCE OF CALCIUM AND MAGNESIUM SALTS UPON ITS DEVELOPMENT

    PubMed Central

    Meltzer, S. J.; Auer, John

    1908-01-01

    Calcium salts hasten and magnesium salts retard the development of rigor mortis, that is, when these salts are administered subcutaneously or intravenously. When injected intra-arterially, concentrated solutions of both kinds of salts cause nearly an immediate onset of a strong stiffness of the muscles which is apparently a contraction, brought on by a stimulation caused by these salts and due to osmosis. This contraction, if strong, passes over without a relaxation into a real rigor. This form of rigor may be classed as work-rigor (Arbeitsstarre). In animals, at least in frogs, with intact cords, the early contraction and the following rigor are stronger than in animals with destroyed cord. If M/8 solutions, nearly equimolecular to "physiological" solutions of sodium chloride, are used, even when injected intra-arterially, calcium salts hasten and magnesium salts retard the onset of rigor. The hastening and retardation in this case, as well as in the cases of subcutaneous and intravenous injections, are ion effects and essentially due to the cations, calcium and magnesium. In the rigor hastened by calcium the effects of the extensor muscles mostly prevail; in the rigor following magnesium injection, on the other hand, either the flexor muscles prevail or the muscles become stiff in the original position of the animal at death. There seems to be no difference in the degree of stiffness in the final rigor; only the onset and development of the rigor is hastened in the case of the one salt and retarded in the other. Calcium also hastens the development of heat rigor. No positive facts were obtained with regard to the effect of magnesium upon heat rigor. Calcium also hastens and magnesium retards the onset of rigor in the left ventricle of the heart. No definite data were gathered with regard to the effects of these salts upon the right ventricle. PMID:19867124

  7. Automated multivariate analysis of multi-sensor data submitted online: Real-time environmental monitoring.

    PubMed

    Eide, Ingvar; Westad, Frank

    2018-01-01

    A pilot study demonstrating real-time environmental monitoring with automated multivariate analysis of multi-sensor data submitted online has been performed at the cabled LoVe Ocean Observatory located at 258 m depth 20 km off the coast of Lofoten-Vesterålen, Norway. The major purpose was efficient monitoring of many variables simultaneously and early detection of changes and time-trends in the overall response pattern before changes were evident in individual variables. The pilot study was performed with 12 sensors from May 16 to August 31, 2015. The sensors provided data for chlorophyll, turbidity, conductivity, temperature (three sensors), salinity (calculated from temperature and conductivity), biomass at three different depth intervals (5-50, 50-120, 120-250 m), and current speed measured in two directions (east and north) using two sensors covering different depths with overlap. A total of 88 variables were monitored, 78 from the two current speed sensors. The time resolution varied; thus, the data had to be aligned to a common time resolution. After alignment, the data were interpreted using principal component analysis (PCA). Initially, a calibration model was established using data from May 16 to July 31. The data on current speed from the two sensors were subject to two separate PCA models, and the score vectors from these two models were combined with the other 10 variables in a multi-block PCA model. The observations from August were projected on the calibration model consecutively, one at a time, and the result was visualized in a score plot. Automated PCA of multi-sensor data submitted online is illustrated with an attached time-lapse video covering the relatively short time period used in the pilot study. Methods for statistical validation, and warning and alarm limits are described. Redundant sensors enable sensor diagnostics and quality assurance. In a future perspective, the concept may be used in integrated environmental monitoring.
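
    A minimal single-block sketch of the calibrate-then-project workflow described here (the study used a multi-block PCA; the data and alarm limits below are synthetic stand-ins, not the observatory's):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        calibration = rng.normal(size=(500, 12))   # stand-in for the May-July matrix
        august = rng.normal(size=(30, 12))         # stand-in for new observations

        scaler = StandardScaler().fit(calibration)
        pca = PCA(n_components=2).fit(scaler.transform(calibration))

        # Project new observations onto the calibration model one at a time
        # and flag scores outside a simple 3-sigma limit per component.
        cal_scores = pca.transform(scaler.transform(calibration))
        limits = 3.0 * cal_scores.std(axis=0)
        for obs in august:
            score = pca.transform(scaler.transform(obs.reshape(1, -1)))[0]
            if np.any(np.abs(score) > limits):
                print("possible change in overall response pattern:", score)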

  8. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    NASA Astrophysics Data System (ADS)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are usually designed to have a large field of view (FOV), which causes variations in the sun-target-sensor geometry in time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectance under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model was used. The semi-empirical model was first fitted by using all simulated bidirectional reflectance. Experimental results showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in time-series reflectance data were also reduced after the sun-target-sensor normalization process.
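
    The generic kernel-driven step such normalization builds on can be sketched as follows: fit R = f_iso + f_vol*K_vol + f_geo*K_geo by least squares, then evaluate the fitted model at a standard geometry. The kernel values and reflectances here are made-up placeholders, and the paper's own semi-empirical model differs in its details.

        import numpy as np

        # Suppose k_vol and k_geo are RossThick / LiSparse kernel values already
        # computed for each observation's sun-target-sensor geometry (hypothetical).
        k_vol = np.array([0.12, -0.05, 0.20, 0.01, 0.15])
        k_geo = np.array([-1.10, -0.80, -1.40, -0.95, -1.25])
        refl = np.array([0.31, 0.27, 0.35, 0.28, 0.33])   # observed reflectance

        # Least-squares fit of R = f_iso + f_vol*K_vol + f_geo*K_geo
        A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
        f_iso, f_vol, f_geo = np.linalg.lstsq(A, refl, rcond=None)[0]

        # Normalize to a standard geometry (e.g., nadir view, fixed sun angle)
        # whose kernel values would be computed the same way.
        k_vol_std, k_geo_std = 0.02, -1.00
        r_normalized = f_iso + f_vol * k_vol_std + f_geo * k_geo_std
        print("normalized reflectance:", r_normalized)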

  9. Learning optimal quantum models is NP-hard

    NASA Astrophysics Data System (ADS)

    Stark, Cyril J.

    2018-02-01

    Physical modeling translates measured data into a physical model. Physical modeling is a major objective in physics and is generally regarded as a creative process. How good are computers at solving this task? Here, we show that in the absence of physical heuristics, the inference of optimal quantum models cannot be computed efficiently (unless P = NP). This result illuminates rigorous limits to the extent to which computers can be used to further our understanding of nature.

  10. Automated Synthetic Scene Generation

    DTIC Science & Technology

    2014-07-01

    Using the Beard-Maxwell BRDF model, the BRDF from Equations (3.3) and (3.4) is composed of specular, diffuse, and volumetric terms... models help organizations developing new remote sensing instruments anticipate sensor performance by enabling the ability to create synthetic imagery... for a proposed sensor before the sensor is built. One of the largest challenges in modeling realistic synthetic imagery, however, is generating the...

  11. Acoustic Scattering by Near-Surface Inhomogeneities in Porous Media

    DTIC Science & Technology

    1990-02-21

    surfaces [8]. Recently, this empirical model has been replaced by a more rigorous microstructural model [9]. Here, the acoustical characteristics of... boundaries. A discussion of how ground acoustic characteristics are modelled then follows, with the chapter being concluded by a brief summary... of ground acoustic characteristics, with particular emphasis on the four-parameter model of Attenborough, that will be used extensively later.

  12. The relationship of rain-induced cross-polarization discrimination to attenuation for 10 to 30 GHz earth-space radio links

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Runyon, D. L.

    1984-01-01

    Rain depolarization is quantified through the cross-polarization discrimination (XPD) versus attenuation relationship. Such a relationship is derived by curve fitting to a rigorous theoretical model (the multiple scattering model) to determine the variation of the parameters involved. The resulting simple isolation model (SIM) is compared to data from several earth-space link experiments and to three other models.
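
    For reference, isolation relationships of this kind are usually written in the semi-empirical form XPD = U - V log10(A); the sketch below fits those coefficients to synthetic attenuation/XPD pairs (the numbers are illustrative, not the paper's data):

        import numpy as np
        from scipy.optimize import curve_fit

        # Illustrative (synthetic) co-polar attenuation A [dB] and XPD [dB] pairs.
        A = np.array([2.0, 5.0, 10.0, 15.0, 20.0])
        xpd = np.array([38.0, 31.0, 26.0, 22.5, 20.0])

        # Commonly used semi-empirical form: XPD = U - V * log10(A)
        def sim(a, u, v):
            return u - v * np.log10(a)

        (u, v), _ = curve_fit(sim, A, xpd)
        print(u, v)  # fitted parameters of the isolation relationship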

  13. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  14. Forecasting volatility with neural regression: a contribution to model adequacy.

    PubMed

    Refenes, A N; Holt, W T

    2001-01-01

    Neural nets' usefulness for forecasting is limited by problems of overfitting and the lack of rigorous procedures for model identification, selection and adequacy testing. This paper describes a methodology for neural model misspecification testing. We introduce a generalization of the Durbin-Watson statistic for neural regression and discuss the general issues of misspecification testing using residual analysis. We derive a generalized influence matrix for neural estimators which enables us to evaluate the distribution of the statistic. We deploy Monte Carlo simulation to compare the power of the test for neural and linear regressors. While residual testing is not a sufficient condition for model adequacy, it is nevertheless a necessary condition to demonstrate that the model is a good approximation to the data generating process, particularly as neural-network estimation procedures are susceptible to partial convergence. The work is also an important step toward developing rigorous procedures for neural model identification, selection and adequacy testing which have started to appear in the literature. We demonstrate its applicability in the nontrivial problem of forecasting implied volatility innovations using high-frequency stock index options. Each step of the model building process is validated using statistical tests to verify variable significance and model adequacy with the results confirming the presence of nonlinear relationships in implied volatility innovations.
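
    The classical Durbin-Watson statistic that the paper generalizes to neural regression is straightforward to compute on residuals; the sketch below shows only the standard form, not the paper's neural generalization or influence-matrix correction:

        import numpy as np

        def durbin_watson(residuals):
            """Classical Durbin-Watson statistic: values near 2 indicate no
            first-order autocorrelation in the residuals."""
            e = np.asarray(residuals, dtype=float)
            return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

        # Toy usage on residuals from any fitted model (neural or linear).
        rng = np.random.default_rng(1)
        print(durbin_watson(rng.normal(size=200)))  # approx. 2 for white noise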

  15. A Simple Sensor Model for THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Bryant, Robert G.

    2009-01-01

    A quasi-static (low-frequency) model is developed for THUNDER actuators configured as displacement sensors, based on a simple Rayleigh-Ritz technique. This model is used to calculate charge as a function of displacement. Using this and the calculated capacitance, voltage vs. displacement and voltage vs. electrical load curves are generated and compared with measurements. It is shown that this model gives acceptable results and is useful for determining rough estimates of sensor output for various loads, laminate configurations and thicknesses.

  16. Novel Visual Sensor Coverage and Deployment in Time Aware PTZ Wireless Visual Sensor Networks.

    PubMed

    Yap, Florence G H; Yen, Hong-Hsu

    2016-12-30

    In this paper, we consider the visual sensor deployment algorithm in Pan-Tilt-Zoom (PTZ) Wireless Visual Sensor Networks (WVSNs). With PTZ capability, a sensor's visual coverage can be extended to reduce the number of visual sensors that need to be deployed. The coverage zone of a visual sensor in a PTZ WVSN is composed of two regions, a Direct Coverage Region (DCR) and a PTZ Coverage Region (PTZCR). In the PTZCR, a visual sensor needs a mechanical pan-tilt-zoom operation to cover an object. This mechanical operation can take seconds, so the sensor might not be able to adjust the camera in time to capture the visual data. In this paper, for the first time, we study this time-aware PTZ WVSN deployment problem. We formulate it as an optimization problem where the objective is to minimize the total visual sensor deployment cost so that each area is covered either in the DCR or in the PTZCR while considering the PTZ time constraint. The proposed Time Aware Coverage Zone (TACZ) model successfully captures the PTZ visual sensor coverage in terms of camera focal range, angle span zone coverage and camera PTZ time. Then a novel heuristic, called the Time Aware Deployment with PTZ camera (TADPTZ) algorithm, is proposed to solve the problem. From our computational experiments, we found that the TACZ model outperforms the existing M coverage model under all network scenarios. In addition, as compared to the optimal solutions, the TACZ model is scalable and adaptable to different PTZ time requirements when deploying large PTZ WVSNs.

  17. Long persistence of rigor mortis at constant low temperature.

    PubMed

    Varetto, Lorenzo; Curto, Ombretta

    2005-01-06

    We studied the persistence of rigor mortis by using physical manipulation. We tested the mobility of the knee on 146 corpses kept under refrigeration at Torino's city mortuary at a constant temperature of +4 degrees C. We found a persistence of complete rigor lasting for 10 days in all the cadavers we kept under observation; in one case, rigor lasted for 16 days. Between the 11th and the 17th days, a progressively increasing number of corpses showed a change from complete into partial rigor (characterized by partial bending of the articulation). After the 17th day, all the remaining corpses showed partial rigor, and in the two cadavers that were kept under observation "à outrance" we found that complete resolution of rigor mortis occurred on the 28th day. Our results prove that it is possible to find a persistence of rigor mortis that is much longer than expected when environmental conditions resemble average outdoor winter temperatures in temperate zones. Therefore, this datum must be considered when a corpse is found in those environmental conditions, so that when estimating the time of death we are not misled by the long persistence of rigor mortis.

  18. Ultrasonic imaging of seismic physical models using a fringe visibility enhanced fiber-optic Fabry-Perot interferometric sensor.

    PubMed

    Zhang, Wenlu; Chen, Fengyi; Ma, Wenwen; Rong, Qiangzhou; Qiao, Xueguang; Wang, Ruohui

    2018-04-16

    A fringe-visibility-enhanced fiber-optic Fabry-Perot interferometer-based ultrasonic sensor is proposed and experimentally demonstrated for seismic physical model imaging. The sensor consists of a graded-index multimode fiber collimator and a PTFE (polytetrafluoroethylene) diaphragm that together form a Fabry-Perot interferometer. Owing to the increase of the sensor's spectral sideband slope and the smaller Young's modulus of the PTFE diaphragm, a high response to both continuous and pulsed ultrasound, with a high SNR of 42.92 dB at 300 kHz, is achieved when the spectral sideband filter technique is used to interrogate the sensor. The ultrasonic reconstructed images can clearly differentiate the shape of the models with high resolution.

  19. Predictive simulations and optimization of nanowire field-effect PSA sensors including screening

    NASA Astrophysics Data System (ADS)

    Baumgartner, Stefan; Heitzinger, Clemens; Vacic, Aleksandar; Reed, Mark A.

    2013-06-01

    We apply our self-consistent PDE model for the electrical response of field-effect sensors to the 3D simulation of nanowire PSA (prostate-specific antigen) sensors. The charge concentration in the biofunctionalized boundary layer at the semiconductor-electrolyte interface is calculated using the propka algorithm, and the screening of the biomolecules by the free ions in the liquid is modeled by a sensitivity factor. This comprehensive approach yields excellent agreement with experimental current-voltage characteristics without any fitting parameters. Having verified the numerical model in this manner, we study the sensitivity of nanowire PSA sensors by changing device parameters, making it possible to optimize the devices and revealing the attributes of the optimal field-effect sensor.

  1. Hybrid Atom Electrostatic System for Satellite Geodesy

    NASA Astrophysics Data System (ADS)

    Zahzam, Nassim; Bidel, Yannick; Bresson, Alexandre; Huynh, Phuong-Anh; Liorzou, Françoise; Lebat, Vincent; Foulon, Bernard; Christophe, Bruno

    2017-04-01

    The subject of this poster falls within the framework of identifying and developing new concepts for future satellite gravity missions, in continuation of the previously launched space missions CHAMP, GRACE and GOCE and of ongoing and prospective studies such as NGGM, GRACE 2 or E-GRASP. Here we focus on the inertial sensors that complete the payload of such satellites. The clearly identified instruments for space accelerometry are based on the electrostatic technology developed for many years by ONERA, which offers a high level of performance and a high degree of maturity for space applications. On the other hand, a new generation of sensors based on cold atom interferometry (AI) is emerging and seems very promising in this context. These atomic instruments have already demonstrated impressive results on the ground, especially with the development of state-of-the-art gravimeters, and should reach their full potential only in space, where the microgravity environment allows long interaction times. Each of these two types of instruments presents its own advantages: for the electrostatic sensors (ES), demonstrated short-term sensitivity and high TRL; for AI, amongst others, the absolute nature of the measurement and therefore no need for calibration processes. These two technologies seem in some aspects very complementary, and a hybrid sensor bringing together all their assets could be the opportunity to take a big step in this context of gravity space missions. We present here the first experimental association on the ground of an electrostatic accelerometer and an atomic accelerometer and underline the interest of calibrating the ES instrument with the AI. Technical methods using the ES proof-mass as the Raman mirror seem very promising for removing rotation effects of the satellite on the AI signal. We propose a roadmap to explore this attractive hybridization scheme further, in detail and more rigorously, in order to assess its potential for a future geodesy space mission through theoretical and experimental work.

  2. A data management infrastructure for bridge monitoring

    NASA Astrophysics Data System (ADS)

    Jeong, Seongwoon; Byun, Jaewook; Kim, Daeyoung; Sohn, Hoon; Bae, In Hwan; Law, Kincho H.

    2015-04-01

    This paper discusses a data management infrastructure framework for bridge monitoring applications. As sensor technologies mature and become economically affordable, their deployment for bridge monitoring will continue to grow. Data management becomes a critical issue, not only for storing the sensor data but also for integrating it with the bridge model to support other functions, such as management, maintenance and inspection. The focus of this study is on the effective data management of bridge information and sensor data, which is crucial to structural health monitoring and life-cycle management of bridge structures. We review the state of the art of bridge information modeling and sensor data management, and propose a data management framework for bridge monitoring based on NoSQL database technologies that have been shown useful in handling high-volume, time-series data and in flexibly dealing with unstructured data schemas. Specifically, Apache Cassandra and MongoDB are deployed for the prototype implementation of the framework. This paper describes the database design for an XML-based Bridge Information Modeling (BrIM) schema, and the representation of sensor data using Sensor Model Language (SensorML). The proposed prototype data management framework is validated using data collected from the Yeongjong Bridge in Incheon, Korea.
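
    A minimal sketch of the kind of document-oriented storage and time-range query such a framework relies on, using MongoDB via pymongo; the connection string, database, collection and field names are hypothetical, not the paper's BrIM/SensorML mapping:

        from datetime import datetime, timezone
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        readings = client["bridge_monitoring"]["sensor_readings"]

        # Store one time-stamped reading as a flexible (schema-light) document.
        readings.insert_one({
            "sensor_id": "ACC-07",
            "bridge": "Yeongjong",
            "timestamp": datetime.now(timezone.utc),
            "value": 0.0042,          # e.g., acceleration in g
            "unit": "g",
        })

        # Time-range query of the kind a monitoring dashboard would issue.
        recent = readings.find({
            "sensor_id": "ACC-07",
            "timestamp": {"$gte": datetime(2015, 1, 1, tzinfo=timezone.utc)},
        }).sort("timestamp", 1)
        for doc in recent:
            print(doc["timestamp"], doc["value"])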

  3. Propagation Modeling and Defending of a Mobile Sensor Worm in Wireless Sensor and Actuator Networks

    PubMed Central

    Wang, Tian; Wu, Qun; Wen, Sheng; Cai, Yiqiao; Tian, Hui; Chen, Yonghong; Wang, Baowei

    2017-01-01

    WSANs (Wireless Sensor and Actuator Networks) are derived from traditional wireless sensor networks by introducing mobile actuator elements. Previous studies indicated that mobile actuators can improve network performance in terms of data collection, energy supplementation, etc. However, according to our experimental simulations, the actuator’s mobility also causes the sensor worm to spread faster if an attacker launches worm attacks on an actuator and compromises it successfully. Traditional worm propagation models and defense strategies did not consider the diffusion with a mobile worm carrier. To address this new problem, we first propose a microscopic mathematical model to describe the propagation dynamics of the sensor worm. Then, a two-step local defending strategy (LDS) with a mobile patcher (a mobile element which can distribute patches) is designed to recover the network. In LDS, all recovering operations are only taken in a restricted region to minimize the cost. Extensive experimental results demonstrate that our model estimations are rather accurate and consistent with the actual spreading scenario of the mobile sensor worm. Moreover, on average, the LDS outperforms other algorithms by approximately 50% in terms of the cost. PMID:28098748
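
    For intuition, an epidemic-style toy model of worm spread with a mobility-boosted contact rate can be integrated as below; the paper's microscopic model is more detailed, and these equations and parameter values are illustrative assumptions only:

        from scipy.integrate import solve_ivp

        # Susceptible / infected / recovered node counts; a compromised mobile
        # actuator is mimicked by scaling the contact rate with a mobility factor.
        N, beta0, mobility, gamma = 1000.0, 0.00025, 2.0, 0.05

        def sensor_worm(t, y):
            s, i, r = y
            beta = beta0 * mobility  # mobile carrier accelerates infection
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        sol = solve_ivp(sensor_worm, (0.0, 200.0), [N - 1.0, 1.0, 0.0])
        print(sol.y[1].max())  # peak number of infected sensor nodes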

  4. An optimal state estimation model of sensory integration in human postural balance

    NASA Astrophysics Data System (ADS)

    Kuo, Arthur D.

    2005-09-01

    We propose a model for human postural balance, combining state feedback control with optimal state estimation. State estimation uses an internal model of body and sensor dynamics to process sensor information and determine body orientation. Three sensory modalities are modeled: joint proprioception, vestibular organs in the inner ear, and vision. These are mated with a two degree-of-freedom model of body dynamics in the sagittal plane. Linear quadratic optimal control is used to design state feedback and estimation gains. Nine free parameters define the control objective and the signal-to-noise ratios of the sensors. The model predicts statistical properties of human sway in terms of covariance of ankle and hip motion. These predictions are compared with normal human responses to alterations in sensory conditions. With a single parameter set, the model successfully reproduces the general nature of postural motion as a function of sensory environment. Parameter variations reveal that the model is highly robust under normal sensory conditions, but not when two or more sensors are inaccurate. This behavior is similar to that of normal human subjects. We propose that age-related sensory changes may be modeled with decreased signal-to-noise ratios, and compare the model's behavior with degraded sensors against experimental measurements from older adults. We also examine removal of the model's vestibular sense, which leads to instability similar to that observed in bilateral vestibular loss subjects. The model may be useful for predicting which sensors are most critical for balance, and how much they can deteriorate before posture becomes unstable.
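
    The state-feedback and estimation gains of such a linear-quadratic design come from two Riccati equations; the sketch below computes both for a toy unstable second-order plant standing in for the body model (all matrices and weights are assumptions, not the paper's):

        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [9.81, 0.0]])     # gravity-destabilized dynamics
        B = np.array([[0.0], [1.0]])                # ankle torque input
        C = np.array([[1.0, 0.0]])                  # noisy orientation measurement
        Q, R = np.eye(2), np.array([[1.0]])         # control objective weights
        W, V = 0.01 * np.eye(2), np.array([[0.1]])  # process / sensor noise covariances

        # LQR state-feedback gain: K = R^-1 B^T P
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)

        # Kalman (optimal estimator) gain from the dual Riccati equation
        S = solve_continuous_are(A.T, C.T, W, V)
        L = S @ C.T @ np.linalg.inv(V)
        print(K, L)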

  5. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    NASA Astrophysics Data System (ADS)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.
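
    A minimal sketch of the RF calibration idea: train a random forest on colocation data with multipollutant and environmental predictors, then check error and variable importance. The data and feature names here are synthetic stand-ins, not the RAMP data set.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        # Synthetic colocation data: raw sensor signals plus temperature and
        # humidity as predictors, a reference monitor as the target.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(2000, 5))  # [raw_CO, raw_NO2, raw_O3, T, RH]
        y = 2.0 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=2000)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
        rf = RandomForestRegressor(n_estimators=300, random_state=0)
        rf.fit(X_train, y_train)

        pred = rf.predict(X_test)
        print(np.mean(np.abs(pred - y_test)))  # mean absolute error
        print(rf.feature_importances_)         # which cross-sensitivities matter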

  6. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
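
    Under the independence assumption used in such probabilistic sensing models, the joint detection probability of multiple sensors is 1 - prod(1 - p_i). The greedy sketch below illustrates ϵ-detection selection; it is a simplified stand-in, not the paper's PSCA.

        import numpy as np

        def joint_detection(p_single):
            """Probability that at least one sensor detects the target,
            assuming independent detections: 1 - prod(1 - p_i)."""
            return 1.0 - np.prod(1.0 - np.asarray(p_single))

        def greedy_cover(probs, epsilon):
            """Keep adding the sensor with the highest detection probability
            until the joint probability reaches epsilon."""
            chosen, remaining = [], list(range(len(probs)))
            while remaining and joint_detection([probs[i] for i in chosen]) < epsilon:
                best = max(remaining, key=lambda i: probs[i])
                chosen.append(best)
                remaining.remove(best)
            return chosen

        print(greedy_cover([0.3, 0.7, 0.5, 0.2], epsilon=0.9))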

  7. Development of esMOCA RULA, Motion Capture Instrumentation for RULA Assessment

    NASA Astrophysics Data System (ADS)

    Akhmad, S.; Arendra, A.

    2018-01-01

    The purpose of this research is to build motion-capture instrumentation using sensor fusion of accelerometer and gyroscope data to assist in RULA assessment. Sensor orientation data are processed at each sensor node by a digital motion processor. Nine sensors are placed on the upper limb of the operator. A kinematic model is developed with Simulink SimMechanics; it receives streaming data from the sensors via a wireless sensor network and outputs the relative angles between upper-limb segments, visualized on the monitor. This angular information is compared with the look-up table of the RULA worksheet to give the RULA score. The assessment result of the instrument is compared with the result of assessment by RULA assessors. To sum up, there is no significant difference between assessment by the instrument and assessment by an assessor.
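
    As an illustration of the look-up step, the standard published RULA worksheet thresholds for the upper-arm score can be coded directly; posture adjustments such as shoulder raise, abduction or arm support are omitted here, and this is not the paper's implementation.

        def upper_arm_rula_score(flexion_deg):
            """Map a shoulder flexion/extension angle (degrees) to the
            RULA upper-arm score using the standard worksheet thresholds."""
            a = flexion_deg
            if -20 <= a <= 20:
                return 1               # 20 deg extension to 20 deg flexion
            if a < -20 or a <= 45:
                return 2               # >20 deg extension, or 20-45 deg flexion
            if a <= 90:
                return 3               # 45-90 deg flexion
            return 4                   # >90 deg flexion

        # Example: an angle computed from the kinematic model's sensor stream.
        print(upper_arm_rula_score(60.0))  # -> 3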

  8. Automatic Earth observation data service based on reusable geo-processing workflow

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min

    2008-12-01

    A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a catalogue service node, and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and a BPEL execution engine is adopted to run it. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and influences of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.

  9. Model-Based Assessment of Estuary Ecosystem Health Using the Latent Health Factor Index, with Application to the Richibucto Estuary

    PubMed Central

    Chiu, Grace S.; Wu, Margaret A.; Lu, Lin

    2013-01-01

    The ability to quantitatively assess ecological health is of great interest to those tasked with monitoring and conserving ecosystems. For decades, biomonitoring research and policies have relied on multimetric health indices of various forms. Although indices are numbers, many are constructed based on qualitative procedures, thus limiting the quantitative rigor of the practical interpretations of such indices. The statistical modeling approach to construct the latent health factor index (LHFI) was recently developed. With ecological data that otherwise are used to construct conventional multimetric indices, the LHFI framework expresses such data in a rigorous quantitative model, integrating qualitative features of ecosystem health and preconceived ecological relationships among such features. This hierarchical modeling approach allows unified statistical inference of health for observed sites (along with prediction of health for partially observed sites, if desired) and of the relevance of ecological drivers, all accompanied by formal uncertainty statements from a single, integrated analysis. Thus far, the LHFI approach has been demonstrated and validated in a freshwater context. We adapt this approach to modeling estuarine health, and illustrate it on the previously unassessed system in Richibucto in New Brunswick, Canada, where active oyster farming is a potential stressor through its effects on sediment properties. Field data correspond to health metrics that constitute the popular AZTI marine biotic index and the infaunal trophic index, as well as abiotic predictors preconceived to influence biota. Our paper is the first to construct a scientifically sensible model that rigorously identifies the collective explanatory capacity of salinity, distance downstream, channel depth, and silt-clay content, all regarded a priori as qualitatively important abiotic drivers, towards site health in the Richibucto ecosystem. This suggests the potential effectiveness of the LHFI approach for assessing not only freshwater systems but aquatic ecosystems in general. PMID:23785443

  10. Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor

    NASA Astrophysics Data System (ADS)

    Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar

    2018-05-01

    This manuscript describes the modeling and analysis of a zinc oxide (ZnO) thin-film based gas sensor. The conductance and sensitivity of the sensing layer are described as functions of temperature and gas concentration. The analysis has been done for both reducing and oxidizing agents. Simulation results reveal the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, the simulated results were compared against different experimentally reported works. Wolkenstein theory has been used to model the proposed sensor, and the simulation results were obtained using device simulation software.
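
    For illustration, metal-oxide sensor responses are often summarized by an empirical power law in concentration; the sketch below fits that form to made-up data. The paper's Wolkenstein-based treatment is considerably more detailed, and the numbers here are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        # Illustrative sensitivity-vs-concentration data (synthetic).
        conc_ppm = np.array([5.0, 10.0, 50.0, 100.0, 500.0])
        sensitivity = np.array([1.8, 2.4, 4.9, 6.6, 13.2])

        # Common empirical summary for metal-oxide sensors: S = 1 + A * C**b
        def power_law(c, a, b):
            return 1.0 + a * c ** b

        (a, b), _ = curve_fit(power_law, conc_ppm, sensitivity, p0=(1.0, 0.5))
        print(a, b)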

  11. Facilities Stewardship: Measuring the Return on Physical Assets.

    ERIC Educational Resources Information Center

    Kadamus, David A.

    2001-01-01

    Asserts that colleges and universities should apply the same analytical rigor to physical assets as they do financial assets. Presents a management tool, the Return on Physical Assets model, to help guide physical asset allocation decisions. (EV)

  12. Career Decision Making and Its Evaluation.

    ERIC Educational Resources Information Center

    Miller-Tiedeman, Anna

    1979-01-01

    The author discusses a career decision-making program which she designed and implemented using a pyramidal model of exploration, crystallization, choice, and classification. Her article outlines the value of rigorous evaluation techniques applied by the local practitioner. (MF)

  13. Infrared Stokes polarimetry and spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Kudenov, Michael William

    In this work, three methods of measuring the polarization state of light in the thermal infrared (3-12 μm) are modeled, simulated, calibrated and experimentally verified in the laboratory. The first utilizes the method of channeled spectropolarimetry (CP) to encode the Stokes polarization parameters onto the optical power spectrum. This channeled spectral technique is implemented with the use of two Yttrium Vanadate (YVO4) crystal retarders. A basic mathematical model for the system is presented, showing that all the Stokes parameters are directly present in the interferogram. Theoretical results are compared with real data from the system, an improved model is provided to simulate the effects of absorption within the crystal, and a modified calibration technique is introduced to account for this absorption. Lastly, effects due to interferometer instabilities on the reconstructions, including non-uniform sampling and interferogram translations, are investigated and techniques are employed to mitigate them. Second is the method of prismatic imaging polarimetry (PIP), which can be envisioned as the monochromatic application of channeled spectropolarimetry. Unlike CP, PIP encodes the 2-dimensional Stokes parameters in a scene onto spatial carrier frequencies. However, the calibration techniques derived in the infrared for CP are extremely similar to that of the PIP. Consequently, the PIP technique is implemented with a set of four YVO4 crystal prisms. A mathematical model for the polarimeter is presented in which diattenuation due to Fresnel effects and dichroism in the crystal are included. An improved polarimetric calibration technique is introduced to remove the diattenuation effects, along with the relative radiometric calibration required for the BPIP operating with a thermal background and large detector offsets. Data demonstrating emission polarization are presented from various blackbodies, which are compared to data from our Fourier transform infrared spectropolarimeter. Additionally, limitations in the PIP technique with regards to the spectral bandwidth and F/# of the imaging system are analyzed. A model able to predict the carrier frequency's fringe visibility is produced and experimentally verified, further reinforcing the PIP's limitations. The last technique is significantly different from CP or PIP and involves the simulation and calibration of a thermal infrared division of amplitude imaging Stokes polarimeter. For the first time, application of microbolometer focal plane array (FPA) technology to polarimetry is demonstrated. The sensor utilizes a wire-grid beamsplitter with imaging systems positioned at each output to analyze two orthogonal linear polarization states simultaneously. Combined with a form birefringent wave plate, the system is capable of snapshot imaging polarimetry in any one Stokes vector (S1, S2 or S3). Radiometric and polarimetric calibration procedures for the instrument are provided and the reduction matrices from the calibration are compared to rigorous coupled wave analysis (RCWA) and raytracing simulations. The design and optimization of the sensor's wire-grid beam splitter and wave plate are presented, along with their corresponding prescriptions. Polarimetric calibration error due to the spectrally broadband nature of the instrument is also overviewed.
Image registration techniques for the sensor are discussed and data from the instrument are presented, demonstrating a microbolometer's ability to measure the small intensity variations corresponding to polarized emission in natural environments.
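
    Per pixel, the division-of-amplitude measurement reduces to combining the two orthogonal-channel images; a minimal sketch with synthetic frames (real data would first be radiometrically calibrated and registered as described above):

        import numpy as np

        # Synthetic co-registered frames from the two orthogonal outputs of the
        # wire-grid beamsplitter (stand-ins for the two microbolometer FPAs).
        rng = np.random.default_rng(4)
        i_par = rng.uniform(100.0, 200.0, size=(4, 4))
        i_perp = rng.uniform(100.0, 200.0, size=(4, 4))

        s0 = i_par + i_perp              # total intensity per pixel
        s1_norm = (i_par - i_perp) / s0  # normalized linear Stokes component
        print(s1_norm)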

  14. Computer vision-based evaluation of pre- and postrigor changes in size and shape of Atlantic cod (Gadus morhua) and Atlantic salmon (Salmo salar) fillets during rigor mortis and ice storage: effects of perimortem handling stress.

    PubMed

    Misimi, E; Erikson, U; Digre, H; Skavhaug, A; Mathiassen, J R

    2008-03-01

    The present study describes the possibilities for using computer vision-based methods for the detection and monitoring of transient 2D and 3D changes in the geometry of a given product. The rigor contractions of unstressed and stressed fillets of Atlantic salmon (Salmo salar) and Atlantic cod (Gadus morhua) were used as a model system. Gradual changes in fillet shape and size (area, length, width, and roundness) were recorded for 7 and 3 d, respectively. Also, changes in fillet area and height (cross-section profiles) were tracked using a laser beam and a 3D digital camera. Another goal was to compare rigor developments of the 2 species of farmed fish, and to examine whether perimortem stress affected the appearance of the fillets. Some significant changes in fillet size and shape were found (length, width, area, roundness, height) between unstressed and stressed fish during the course of rigor mortis as well as after ice storage (postrigor). However, the observed irreversible stress-related changes were small and would hardly mean anything for postrigor fish processors or consumers. The cod were less stressed (as defined by muscle biochemistry) than the salmon after the 2 species had been subjected to similar stress bouts. Consequently, the difference between the rigor courses of unstressed and stressed fish was more extreme in the case of salmon. However, the maximal whole fish rigor strength was judged to be about the same for both species. Moreover, the reductions in fillet area and length, as well as the increases in width, were basically of similar magnitude for both species. In fact, the increases in fillet roundness and cross-section height were larger for the cod. We conclude that the computer vision method can be used effectively for automated monitoring of changes in 2D and 3D shape and size of fish fillets during rigor mortis and ice storage. In addition, it can be used for grading of fillets according to uniformity in size and shape, as well as measurement of fillet yield in terms of thickness. The methods are accurate, rapid, nondestructive, and contact-free and can therefore be regarded as suitable for industrial purposes.
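
    The 2D size and shape descriptors tracked here are standard contour measurements; a minimal OpenCV sketch, assuming a binary fillet mask 'mask.png' has already been segmented (the file name is hypothetical, and this is not the authors' pipeline):

        import cv2
        import numpy as np

        # Hypothetical binary segmentation of a fillet against the background.
        mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        fillet = max(contours, key=cv2.contourArea)  # largest contour = fillet

        area = cv2.contourArea(fillet)
        perimeter = cv2.arcLength(fillet, True)
        x, y, w, h = cv2.boundingRect(fillet)        # length/width proxies

        # Roundness (circularity): 4*pi*A / P^2, equal to 1 for a perfect circle.
        roundness = 4.0 * np.pi * area / perimeter ** 2
        print(area, w, h, roundness)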

  15. Rigorous force field optimization principles based on statistical distance minimization

    DOE PAGES

    Vlcek, Lukas; Chialvo, Ariel A.

    2015-10-12

    We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model's static measurable properties to those of the target. Here we exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of the approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.

  16. Rigor Made Easy: Getting Started

    ERIC Educational Resources Information Center

    Blackburn, Barbara R.

    2012-01-01

    Bestselling author and noted rigor expert Barbara Blackburn shares the secrets to getting started, maintaining momentum, and reaching your goals. Learn what rigor looks like in the classroom, understand what it means for your students, and get the keys to successful implementation. Learn how to use rigor to raise expectations, provide appropriate…

  17. Close Early Learning Gaps with Rigorous DAP

    ERIC Educational Resources Information Center

    Brown, Christopher P.; Mowry, Brian

    2015-01-01

    Rigorous DAP (developmentally appropriate practices) is a set of 11 principles of instruction intended to help close early childhood learning gaps. Academically rigorous learning environments create the conditions for children to learn at high levels. While academic rigor focuses on one dimension of education--academic--DAP considers the whole…

  18. The Sea-Ice Floe Size Distribution

    NASA Astrophysics Data System (ADS)

    Stern, H. L., III; Schweiger, A. J. B.; Zhang, J.; Steele, M.

    2017-12-01

    The size distribution of ice floes in the polar seas affects the dynamics and thermodynamics of the ice cover and its interaction with the ocean and atmosphere. Ice-ocean models are now beginning to include the floe size distribution (FSD) in their simulations. In order to characterize seasonal changes of the FSD and provide validation data for our ice-ocean model, we calculated the FSD in the Beaufort and Chukchi seas over two spring-summer-fall seasons (2013 and 2014) using more than 250 cloud-free visible-band scenes from the MODIS sensors on NASA's Terra and Aqua satellites, identifying nearly 250,000 ice floes between 2 and 30 km in diameter. We found that the FSD follows a power-law distribution at all locations, with a seasonally varying exponent that reflects floe break-up in spring, loss of smaller floes in summer, and the return of larger floes after fall freeze-up. We extended the results to floe sizes from 10 m to 2 km at selected time/space locations using more than 50 high-resolution radar and visible-band satellite images. Our analysis used more data and applied greater statistical rigor than any previous study of the FSD. The incorporation of the FSD into our ice-ocean model resulted in reduced sea-ice thickness, mainly in the marginal ice zone, which improved the simulation of sea-ice extent and yielded an earlier ice retreat. We also examined results from 17 previous studies of the FSD, most of which report power-law FSDs but with widely varying exponents. It is difficult to reconcile the range of results due to different study areas, seasons, and methods of analysis. We review the power-law representation of the FSD in these studies and discuss some mathematical details that are important to consider in any future analysis.
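
    The exponent of a power-law FSD is commonly estimated by maximum likelihood (in the style of Clauset et al.); a short sketch with synthetic floe diameters, not the study's data:

        import numpy as np

        def power_law_alpha(d, d_min):
            """Maximum-likelihood power-law exponent for the distribution tail:
            alpha = 1 + n / sum(ln(d_i / d_min)) over diameters d_i >= d_min."""
            d = np.asarray(d, dtype=float)
            d = d[d >= d_min]
            return 1.0 + d.size / np.sum(np.log(d / d_min))

        # Toy usage: synthetic diameters (km) with a known tail exponent; real
        # input would be the floe sizes measured from the satellite scenes.
        rng = np.random.default_rng(3)
        diam = 2.0 * rng.pareto(1.5, size=10000) + 2.0
        print(power_law_alpha(diam, d_min=2.0))  # approx. 2.5 for this sample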

  19. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    PubMed

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to utilize the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.

  20. Active vibration control using a modal-domain fiber optic sensor

    NASA Technical Reports Server (NTRS)

    Cox, David E.

    1992-01-01

    A closed-loop control experiment is described in which vibrations of a cantilevered beam are suppressed using measurements from a modal-domain fiber optic sensor. Modal-domain sensors use interference between the modes of a few-mode optical waveguide to detect strain. The fiber is bonded along the length of the beam and provides a measurement related to the strain distribution on the surface of the beam. A model for the fiber optic sensor is derived, and this model is integrated with the dynamic model of the beam. A piezoelectric actuator is also bonded to the beam and used to provide control forces. Control forces are obtained through dynamic compensation of the signal from the fiber optic sensor. The compensator is implemented with a real-time digital controller. Analytical models are verified by comparing simulations to experimental results for both open-loop and closed-loop configurations.
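
    The effect of such feedback compensation on a lightly damped mode can be illustrated with a toy state-space model and rate feedback; the numbers below are assumptions, not the experiment's beam parameters:

        import numpy as np

        # One lightly damped bending mode (illustrative values only); the output
        # stands in for the distributed strain measured by the fiber sensor.
        wn, zeta = 2.0 * np.pi * 5.0, 0.005
        A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
        B = np.array([[0.0], [1.0]])

        # Rate feedback u = -k * (dy/dt) through the actuator adds damping.
        k = 20.0
        A_closed = A - B @ np.array([[0.0, k]])

        for name, M in (("open loop", A), ("closed loop", A_closed)):
            poles = np.linalg.eigvals(M)
            print(name, poles.real)  # real parts move further left when damped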
