Sample records for time points representing

  1. Change Point Detection in Correlation Networks

    NASA Astrophysics Data System (ADS)

    Barnett, Ian; Onnela, Jukka-Pekka

    2016-01-01

    Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.
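A minimal illustration of the idea (not the authors' statistic): estimate correlation matrices in windows before and after each candidate time point and flag the point where they differ most. The window length, the Frobenius-norm distance, and the simulated data are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 variables whose correlation structure changes at t = 150.
n, p, cp = 300, 6, 150
cov_a = np.eye(p)                               # uncorrelated regime
cov_b = np.full((p, p), 0.7) + 0.3 * np.eye(p)  # strongly correlated regime
x = np.vstack([rng.multivariate_normal(np.zeros(p), cov_a, cp),
               rng.multivariate_normal(np.zeros(p), cov_b, n - cp)])

# Slide a window over the series; compare the correlation matrices before
# and after each candidate point with a Frobenius-norm distance.
w = 50
def corr_dist(t):
    c1 = np.corrcoef(x[t - w:t].T)
    c2 = np.corrcoef(x[t:t + w].T)
    return np.linalg.norm(c1 - c2)

stats = {t: corr_dist(t) for t in range(w, n - w)}
estimated_cp = max(stats, key=stats.get)
print(estimated_cp)
```

The boundary difficulty mentioned in the abstract is visible here too: no statistic is computed within `w` samples of either end of the series.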

  2. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
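The Haar-like hierarchy can be sketched as follows. This is the plain, unweighted 1D Haar transform applied to a list of point colors, not the region-adaptive (point-count-weighted) transform of the paper; it only illustrates the average/difference recursion and the exact reconstruction property.

```python
import numpy as np

def haar_levels(values):
    """Hierarchical Haar-like pass: repeatedly replace pairs by a scaled
    average (approximation band) and keep scaled differences (detail bands)."""
    v = np.asarray(values, dtype=float)
    details = []
    while len(v) > 1:
        a, b = v[0::2], v[1::2]
        details.append((b - a) / np.sqrt(2.0))  # detail (high-pass) band
        v = (a + b) / np.sqrt(2.0)              # approximation band
    return v[0], details

def haar_inverse(root, details):
    """Invert the transform: a = (approx - detail)/sqrt(2), b = (approx + detail)/sqrt(2)."""
    v = np.array([root])
    for d in reversed(details):
        a = (v - d) / np.sqrt(2.0)
        b = (v + d) / np.sqrt(2.0)
        out = np.empty(2 * len(v))
        out[0::2], out[1::2] = a, b
        v = out
    return v

# Hypothetical luminance values of 8 points (two smooth regions).
colors = [100, 102, 101, 99, 180, 182, 181, 179]
root, details = haar_levels(colors)
recon = haar_inverse(root, details)
print(np.round(recon, 6))
```

Note how the detail coefficients are near zero inside each smooth region; this energy compaction is what makes the subsequent Laplace-model arithmetic coding effective.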

  3. Strengths and weaknesses of temporal stability analysis for monitoring and estimating grid-mean soil moisture in a high-intensity irrigated agricultural landscape

    NASA Astrophysics Data System (ADS)

    Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.

    2017-01-01

    Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.

  4. Real-time Probabilistic Covariance Tracking with Efficient Model Update

    DTIC Science & Technology

    2012-05-01

…feature points inside a given rectangular region R of F. The region R is represented by the d×d covariance matrix of the feature points, C = (1/(N−1)) Σ_{i=1}^{N} (f_i − µ)(f_i − µ)^T, where N is the number of pixels in the region R and µ is the mean of the feature points. The element (i, j) of C represents
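The region covariance descriptor in the snippet computes directly; the feature choice (d = 5 per-pixel features) and the random data below are placeholders, not the report's actual features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel feature vectors f_i for a region R:
# say d = 5 features (x, y, intensity, |Ix|, |Iy|) for N = 64 pixels.
N, d = 64, 5
F = rng.random((N, d))

mu = F.mean(axis=0)
# C = 1/(N-1) * sum_i (f_i - mu)(f_i - mu)^T  -- the d x d region descriptor
C = (F - mu).T @ (F - mu) / (N - 1)

print(C.shape)
```

Element (i, j) of C is the sample covariance between features i and j over the region, so the descriptor fuses all features into a single compact d×d matrix.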

  5. Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.

    2017-12-01

When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
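The e-folding decorrelation length can be estimated from station-pair data roughly as sketched below, using synthetic R2 values generated from an assumed exponential decay; the numbers are illustrative, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic station pairs: R2 falls off as R2(d) = exp(-d / L_true),
# with multiplicative noise (all parameters are made up for illustration).
L_true = 400.0                                # decorrelation length, km
dist = rng.uniform(25, 600, 200)              # pairwise distances, km
r2 = np.exp(-dist / L_true) * np.exp(rng.normal(0.0, 0.05, dist.size))

# ln R2 = -d / L, so a least-squares line through the origin gives -1/L.
slope = np.sum(dist * np.log(r2)) / np.sum(dist ** 2)
L_est = -1.0 / slope
print(round(L_est))
```

The recovered length should sit near the assumed 400 km, mirroring how a decorrelation length is read off from observed pairwise R2 values.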

  6. [Proposal of a costing method for the provision of sterilization in a public hospital].

    PubMed

    Bauler, S; Combe, C; Piallat, M; Laurencin, C; Hida, H

    2011-07-01

To refine the billing to institutions whose sterilization operations are outsourced, a sterilization cost approach was developed. The aim of the study is to determine the value of a sterilization unit (one point "S"), which evolves according to investments, quantities processed, and types of instrumentation or packaging. The preparation time was selected from all sub-processes of sterilization to determine the value of one point S. The preparation times of sterilized large and small containers and of pouches were recorded. The reference time corresponds to one pouch (equal to one point S). Simultaneously, the annual operating cost of sterilization was defined and divided into several areas of expenditure: employees, equipment and building depreciation, supplies, and maintenance. A total of 136 container crossing times were measured. The time to prepare a pouch was estimated at one minute (one S). A small container represents four S and a large container represents ten S. By dividing the operating cost of sterilization by the total number of sterilization points over a given period, the cost of one S can be determined. This method differs from the traditional costing method in sterilization services by considering each item of expenditure. This point S will be the basis for billing subcontracts to other institutions. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  7. Split delivery vehicle routing problem with time windows: a case study

    NASA Astrophysics Data System (ADS)

    Latiffianti, E.; Siswanto, N.; Firmandani, R. A.

    2018-04-01

This paper aims to implement an extension of the vehicle routing problem (VRP), the split delivery vehicle routing problem (SDVRP) with time windows, in a case study involving pickups and deliveries of workers from several points of origin to several destinations. Each origin represents a bus stop and each destination represents either a site or an office location. An integer linear programming formulation of the SDVRP is presented. The solution was generated in three stages: defining the starting points, assigning buses, and solving the SDVRP with time windows using an exact method. Although the overall computational time was relatively lengthy, the results indicated that the produced solution was better than the existing routing and scheduling that the firm used. The produced solution was also capable of reducing fuel cost by 9%, a saving obtained from the shorter total distance travelled by the shuttle buses.

  8. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step rule for updating the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using the Bucher experimental design, with the last design point as the center point and a proposed effective length as the radius of Bucher's approach. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are demonstrated.
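The importance-sampling ingredient can be illustrated on a toy limit state with a known design point; the paper's two-step updating rule and the Bucher response-surface step are not reproduced here, and the limit state below is an assumption for the sketch.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Limit state g(u) = beta - u in standard normal space; failure when g < 0.
# The design point is u* = beta, and the exact P_f is Phi(-beta).
beta = 3.0
n = 20000

# Importance sampling: draw from a standard normal recentred at the design
# point and reweight each sample by the density ratio phi(u) / h(u).
u = rng.normal(loc=beta, scale=1.0, size=n)
log_w = -0.5 * u**2 + 0.5 * (u - beta)**2     # log phi(u) - log h(u)
pf_is = np.mean((u > beta) * np.exp(log_w))

pf_exact = 0.5 * (1.0 - erf(beta / sqrt(2.0)))  # Phi(-beta)
print(pf_is, pf_exact)
```

Because half the shifted samples land in the failure region (instead of roughly 0.1% for crude MCS), a few thousand samples already give a tight estimate of this rare-event probability.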

  9. Selecting the most appropriate time points to profile in high-throughput studies

    PubMed Central

    Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv

    2017-01-01

Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is only based on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972

  10. How to Assess the Existence of Competing Strategies in Cognitive Tasks: A Primer on the Fixed-Point Property

    PubMed Central

    van Maanen, Leendert; de Jong, Ritske; van Rijn, Hedderik

    2014-01-01

When multiple strategies can be used to solve a type of problem, the observed response time distributions are often mixtures of multiple underlying base distributions, each representing one of these strategies. For the case of two possible strategies, the observed response time distributions obey the fixed-point property. That is, there exists one reaction time that has the same probability of being observed irrespective of the actual mixture proportion of each strategy. In this paper we discuss how to compute this fixed point, and how to statistically assess the probability that the observed response times are indeed generated by two competing strategies. Accompanying this paper is a free R package that can be used to compute and test the presence or absence of the fixed-point property in response time data, allowing for easy-to-use tests of strategic behavior. PMID:25170893
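The fixed-point property itself is easy to verify numerically: mixture densities with different mixing proportions all pass through the point where the two base densities are equal. The normal base distributions below are illustrative (real response time distributions are skewed), but the property is distribution-free.

```python
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Two hypothetical strategy RT distributions (illustrative normals).
f1 = lambda x: normal_pdf(x, 0.5, 0.1)   # fast strategy
f2 = lambda x: normal_pdf(x, 0.8, 0.1)   # slow strategy

# The fixed point is where f1 = f2; for these equal-variance normals that
# is the midpoint of the means.
x_fix = 0.65

# Mixture densities for several mixture proportions all coincide there.
densities = [p * f1(x_fix) + (1 - p) * f2(x_fix) for p in (0.2, 0.5, 0.8)]
print(densities)
```

In practice the crossing point is unknown and must be estimated from density estimates of the observed mixtures, which is what the accompanying R package automates.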

  11. Extensible 3D (X3D) Graphics for Visualizing Marine Mammal Reaction to Underwater Sound on the Southern California ASW Range (SOAR)

    DTIC Science & Technology

    2007-06-01

file ARC-20060811T130816.txt, where color is used to represent points in time (red being the earliest, transitioning to orange, yellow, then white…Administration (NOAA), using passive hydrophone arrays along the mid-Atlantic ridge to listen for underwater earthquakes and volcanoes, have found that a…appeared. The earliest data points were designated red, and later points were shades of orange and yellow, until the last points (relative to the

  12. Research in Knowledge Representation for Natural Language Communication and Planning Assistance

    DTIC Science & Technology

    1987-10-01

elements of PFR. Instants of time are represented as individuals, where they form a continuum. Let "seconds" map real numbers to instants, where "seconds(n)" denotes n seconds. Points in space form a 3-dimensional continuum. Changing relations are represented as functions on instants of time. Formulas and…occupies at time t. "occ.space(x)(t)" is defined iff x is a physical object, t is an instant of time, and x exists at t. Further, x must occupy a non

  13. Diversity of human small intestinal Streptococcus and Veillonella populations.

    PubMed

    van den Bogert, Bartholomeus; Erkus, Oylum; Boekhorst, Jos; de Goffau, Marcus; Smid, Eddy J; Zoetendal, Erwin G; Kleerebezem, Michiel

    2013-08-01

    Molecular and cultivation approaches were employed to study the phylogenetic richness and temporal dynamics of Streptococcus and Veillonella populations in the small intestine. Microbial profiling of human small intestinal samples collected from four ileostomy subjects at four time points displayed abundant populations of Streptococcus spp. most affiliated with S. salivarius, S. thermophilus, and S. parasanguinis, as well as Veillonella spp. affiliated with V. atypica, V. parvula, V. dispar, and V. rogosae. Relative abundances varied per subject and time of sampling. Streptococcus and Veillonella isolates were cultured using selective media from ileostoma effluent samples collected at two time points from a single subject. The richness of the Streptococcus and Veillonella isolates was assessed at species and strain level by 16S rRNA gene sequencing and genetic fingerprinting, respectively. A total of 160 Streptococcus and 37 Veillonella isolates were obtained. Genetic fingerprinting differentiated seven Streptococcus lineages from ileostoma effluent, illustrating the strain richness within this ecosystem. The Veillonella isolates were represented by a single phylotype. Our study demonstrated that the small intestinal Streptococcus populations displayed considerable changes over time at the genetic lineage level because only representative strains of a single Streptococcus lineage could be cultivated from ileostoma effluent at both time points. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  14. Sediment storage time in a simulated meandering river's floodplain, comparisons of point bar and overbank deposits

    NASA Astrophysics Data System (ADS)

    Ackerman, T. R.; Pizzuto, J. E.

    2016-12-01

    Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank) representing the youngest 50% of eroded sediment. Then (until an age of 12 ky, representing the next 48% (point bar) or 45% (overbank) of eroding sediment), the distributions are well fit by heavy tailed power functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions become exponential (light tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. 
Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power law distribution with a truncated tail may be the most reasonable functional fit.
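A power-law survivor function of the kind fitted above shows up as a straight line of slope −α on log-log axes; the sketch below checks this on a synthetic Pareto-tailed sample. The exponent and cutoff are illustrative choices, not the model's output.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative storage-time sample with a power-law (Pareto) tail of
# exponent 1 beyond t_min = 500 years.
alpha, t_min = 1.0, 500.0
t = t_min * (1.0 + rng.pareto(alpha, 50000))   # P(T > y*t_min) = y^(-alpha)

# Empirical survivor function S(t) = P(T > t) from the sorted sample.
ts = np.sort(t)
S = 1.0 - np.arange(1, ts.size + 1) / ts.size

# On log-log axes a power-law survivor function is a line of slope -alpha.
mask = (ts > 1e3) & (ts < 1e5) & (S > 0)
slope = np.polyfit(np.log(ts[mask]), np.log(S[mask]), 1)[0]
print(round(slope, 2))
```

The same log-log slope check distinguishes the heavy-tailed middle section of the simulated storage-time distributions from their exponential (light-tailed) head and tail.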

  15. Speed Approach for UAV Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.

    2018-05-01

The article presents a new approach for detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated by two or three trajectory points obtained from the ADS-B system. In the process of determining the meeting points of trajectories, two cutoff values of the critical speed range, at which a UAV collision is possible, are calculated. As the calculation expressions for the meeting points and the cutoff values of the critical speed are given in analytical form, even if an on-board computer system has limited computational capacity, the time for calculation will be far less than the time of receiving data from ADS-B. For this reason, calculations can be updated at each cycle of new data reception, and the trajectory approximation can be bounded by straight lines. Such an approach allows developing a compact collision avoidance algorithm, even for a significant number of UAVs (more than several dozen). To demonstrate the adequacy of the approach, modeling was performed using a software system developed specifically for this purpose.

  16. Small-aperture seismic array data processing using a representation of seismograms at zero-crossing points

    NASA Astrophysics Data System (ADS)

    Brokešová, Johana; Málek, Jiří

    2018-07-01

    A new method for representing seismograms by using zero-crossing points is described. This method is based on decomposing a seismogram into a set of quasi-harmonic components and, subsequently, on determining the precise zero-crossing times of these components. An analogous approach can be applied to determine extreme points that represent the zero-crossings of the first time derivative of the quasi-harmonics. Such zero-crossing and/or extreme point seismogram representation can be used successfully to reconstruct single-station seismograms, but the main application is to small-aperture array data analysis to which standard methods cannot be applied. The precise times of the zero-crossing and/or extreme points make it possible to determine precise time differences across the array used to retrieve the parameters of a plane wave propagating across the array, namely, its backazimuth and apparent phase velocity along the Earth's surface. The applicability of this method is demonstrated using two synthetic examples. In the real-data example from the Příbram-Háje array in central Bohemia (Czech Republic) for the Mw 6.4 Crete earthquake of October 12, 2013, this method is used to determine the phase velocity dispersion of both Rayleigh and Love waves. The resulting phase velocities are compared with those obtained by employing the seismic plane-wave rotation-to-translation relations. In this approach, the phase velocity is calculated by obtaining the amplitude ratios between the rotation and translation components. Seismic rotations are derived from the array data, for which the small aperture is not only an advantage but also an applicability condition.
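Refining zero-crossing times beyond the sampling grid can be done by linear interpolation between the two samples that bracket each sign change; the sketch below applies this to a plain sine rather than to the quasi-harmonic components of a real seismogram.

```python
import numpy as np

# Sampled quasi-harmonic component (a plain 5 Hz sine, for illustration).
fs = 1000.0                        # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * t)

# Locate sign changes between adjacent samples, then refine each crossing
# time by linear interpolation between the two bracketing samples.
idx = np.where(np.sign(x[:-1]) * np.sign(x[1:]) < 0)[0]
t_zero = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

print(np.round(t_zero, 4))
```

The interpolated times are far more precise than the raw sample spacing, which is what makes sub-sample time differences across a small-aperture array usable for backazimuth and apparent-velocity estimation.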

  17. Point-to-point connectivity prediction in porous media using percolation theory

    NASA Astrophysics Data System (ADS)

    Tavagh-Mohammadi, Behnam; Masihi, Mohsen; Ganjeh-Ghazvini, Mostafa

    2016-10-01

The connectivity between two points in porous media is important for evaluating hydrocarbon recovery in underground reservoirs or toxic migration in waste disposal. For example, the connectivity between a producer and an injector in a hydrocarbon reservoir impacts the fluid dispersion throughout the system. The conventional approach, flow simulation, is computationally very expensive and time consuming. An alternative method employs percolation theory. The classical percolation approach investigates the connectivity between two lines (representing the wells) in 2D cross-sectional models, whereas we look for the connectivity between two points (representing the wells) in 2D areal models. In this study, site percolation is used to determine the fraction of permeable regions connected between two cells at various occupancy probabilities and system sizes. The master curves of mean connectivity and its uncertainty are then generated by finite-size scaling. The results help to predict well-to-well connectivity without the need for any further simulation.
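A minimal site-percolation version of the point-to-point question: occupy lattice sites with probability p and test, by breadth-first search, whether two fixed cells land in the same cluster. The lattice size, occupancy probability, and well positions are arbitrary choices for the sketch.

```python
import numpy as np
from collections import deque

def connected(grid, a, b):
    """Breadth-first search over occupied (permeable) sites, 4-neighbour."""
    if not (grid[a] and grid[b]):
        return False
    n, m = grid.shape
    seen, q = {a}, deque([a])
    while q:
        i, j = q.popleft()
        if (i, j) == b:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (i + di, j + dj)
            if 0 <= v[0] < n and 0 <= v[1] < m and grid[v] and v not in seen:
                seen.add(v)
                q.append(v)
    return False

# Monte Carlo estimate of point-to-point ("well-to-well") connectivity at a
# given occupancy probability p on an L x L site-percolation lattice.
rng = np.random.default_rng(5)
L, p, trials = 20, 0.7, 200
wells = ((0, 0), (L - 1, L - 1))   # "injector" and "producer" cells
hits = sum(connected(rng.random((L, L)) < p, *wells) for _ in range(trials))
print(hits / trials)
```

Repeating the estimate over a range of p and L and rescaling the resulting curves is the finite-size-scaling step that produces the master connectivity curves described in the abstract.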

  18. Normalization methods in time series of platelet function assays

    PubMed Central

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, discussing the most suited approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlation, as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be extracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
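The normalizations named in the abstract can be sketched for a toy assay matrix; the choice of axis distinguishes "per assay for all time points" from "per time point for all tests". The data below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy platelet-assay matrix: rows = time points, columns = assays (tests);
# each assay lives on its own arbitrary scale.
data = rng.normal(loc=[50, 120, 8], scale=[5, 20, 2], size=(10, 3))

def z_transform(x, axis):
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

def range_transform(x, axis):
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    return (x - lo) / (hi - lo)

def iqr_transform(x, axis):
    q1 = np.percentile(x, 25, axis=axis, keepdims=True)
    q3 = np.percentile(x, 75, axis=axis, keepdims=True)
    return (x - np.median(x, axis=axis, keepdims=True)) / (q3 - q1)

# axis=0 normalizes down each column: "per assay (test) for all time points".
# axis=1 would instead normalize "per time point for all tests".
z_per_assay = z_transform(data, axis=0)
r_per_assay = range_transform(data, axis=0)
print(z_per_assay.mean(axis=0).round(12))
```

Because axis=0 rescales each assay against its own history, rank order within an assay (and hence Spearman correlation) is preserved; axis=1 mixes values from different scales at each time point, which is why that direction destroys the correlation structure.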

  19. IgV gene intraclonal diversification and clonal evolution in B-cell chronic lymphocytic leukaemia.

    PubMed

    Bagnara, Davide; Callea, Vincenzo; Stelitano, Caterina; Morabito, Fortunato; Fabris, Sonia; Neri, Antonino; Zanardi, Sabrina; Ghiotto, Fabio; Ciccone, Ermanno; Grossi, Carlo Enrico; Fais, Franco

    2006-04-01

Intraclonal diversification of immunoglobulin (Ig) variable (V) genes was evaluated in leukaemic cells from a B-cell chronic lymphocytic leukaemia (B-CLL) case over a 2-year period at four time points. Intraclonal heterogeneity was analysed by sequencing 305 molecular clones derived from polymerase chain reaction amplification of B-CLL cell IgV heavy (H) and light (L) chain gene rearrangements. Sequences were compared to evaluate intraclonal variation and the nature of somatic mutations. Although IgV intraclonal variation was detected at all time points, its level decreased with time, and a parallel emergence of two more highly represented V(H)DJ(H) clones was observed. They differed by nine nucleotide substitutions, only one of which caused a conservative amino acid replacement. In addition, one V(L)J(L) rearrangement became more represented over time. Analyses of somatic mutations suggest antigen selection and impairment of negative selection of neoplastic cells. In addition, a genealogical tree representing a model of clonal evolution of the neoplastic cells was created. It is of note that, during the period of study, the patient showed clinical progression of disease. We conclude that antigen stimulation and somatic hypermutation may participate in disease progression through the selection and expansion of neoplastic subclone(s).

  20. EFL Learners' Production of Questions over Time: Linguistic, Usage-Based, and Developmental Features

    ERIC Educational Resources Information Center

    Nekrasova-Beker, Tatiana M.

    2011-01-01

    The recognition of second language (L2) development as a dynamic process has led to different claims about how language development unfolds, what represents a learner's linguistic system (i.e., interlanguage) at a certain point in time, and how that system changes over time (Verspoor, de Bot, & Lowie, 2011). Responding to de Bot and…

  1. Representativity and univocity of traffic signs and their effect on trajectory movement in a driving simulation task: Warning signs.

    PubMed

    Vilchez, Jose Luis

    2017-07-04

The effect of traffic signs on the behavior of drivers is not completely understood. Knowing how humans process the meaning of signs (not just by learning but instinctively) will improve reaction time and decision making when traveling. The economic, social, and psychological consequences of car accidents are well known. This study investigates which traffic signs are more ergonomic for participants, from a cognitive point of view, and determines, at the same time, their effect on participants' movement trajectories in a driving simulation task. Results indicate that the signs least representative of their meaning produce a greater deviation from the center of the road than the most representative ones. This study encourages both an in-depth analysis of the effect of roadside signs on movement and the study of how this effect can be modified by the context in which these signs are presented (with the aim of moving the research closer to, and analyzing the data in, real contexts). The goal is to achieve clarity of meaning and a lack of counterproductive effects on trajectory for representative signs (those that provoke fewer mistakes in the decision task).

  2. Computer simulation of the probability that endangered whales will interact with oil spills, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, M.; Jayko, K.; Bowles, A.

    1986-10-01

    A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration diving-surfacing models, and an oil-spill-trajectory model comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The distribution of animals is represented in space and time by discrete points, each of which may represent one or more whales. The movement of a whale point is governed by a random-walk algorithm which stochastically follows a migratory pathway.

  3. Nation Before Service: The Evolution of Joint Operations to a Capabilities-Based Mindset

    DTIC Science & Technology

    2013-06-01

    Identified as one of America’s acupuncture points, cyberspace represents the soft underbelly that facilitates a majority of the world and US economy.90...Corpus, “America’s Acupuncture Points,” Asia Times Online (20 October 2006), http://www.atimes.com/atimes/China/HJ19Ad01.html (accessed 28 December...2012). See also, Zimet and Barry, “Military Service,” 287. 91 Corpus, “America’s Acupuncture Points,” 1. 62 and futurist Ray Kurzweil postulated

  4. Eleventh International Laser Radar Conference, Wisconsin University-Madison, 21-25 June 1982.

    DTIC Science & Technology

    1982-06-01

an aircraft altitude, I_if(x) is the intensity of the IF beat signal in the sky at the point x, I is the laser power, γ is the albedo of the ground surface…the aircraft flight path 2) Minimize degradation or power loss to the input/output path 3) Provide variable scan time points at rates up to .25…water particles. A lidar measurement at a specific point, therefore, is not necessarily representative of the entire globe. This will be discussed with

  5. Time-Domain Method for Computing Forces and Moments Acting on Three Dimensional Surface-Piercing Ship Hulls with Forward Speed.

    DTIC Science & Technology

    1980-09-01

where φ_BD represents the instantaneous effect of the body, while φ_FS represents the free surface disturbance generated by the body over all previous…acceleration boundary condition. This determines the time-derivative of the body-induced component of the flow, φ_BD (as well as φ_BD through integration…panel with uniform density σ_i acting over a surface of area A_i is replaced by a single point source with strength s_i(t) = A_i(σ_i(t_n) + (t − t_n)…

  6. Multi-time scale analysis of the spatial representativeness of in situ soil moisture data within satellite footprints

    USDA-ARS?s Scientific Manuscript database

    We conduct a novel comprehensive investigation that seeks to prove the connection between spatial and time scales in surface soil moisture (SM) within the satellite footprint (~50 km). Modeled and measured point series at Yanco and Little Washita in situ networks are first decomposed into anomalies ...

  7. Finding Out Critical Points For Real-Time Path Planning

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    1989-03-01

    Path planning for a mobile robot is a classic topic, but path planning in a real-time environment is a different issue. The system resources, including sampling time, processing time, inter-process communication time, and memory space, are very limited for this type of application. This paper presents a method which abstracts the world representation from the sensory data and decides which point will be a potentially critical point to span the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot will determine the world map by roving through the workspace. The computational complexity of building and searching such a map is not more than O(n²). The find-path problem is well known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods to find a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph. The idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles. A path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with this approach, requiring complex geometric algorithms which are computationally expensive. 2) the direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5]. 3) the combination of techniques 1 and 2 [6], by which the space is divided into primary convex regions, overlap regions and obstacle regions; obstacle boundaries with attribute values are then represented by the vertices of a hypergraph, the primary convex regions and overlap regions are represented by hyperedges, and the centroids of the overlaps form the critical points. The difficulty lies in generating the segment graph and estimating the minimum path width. All the techniques mentioned above need previous knowledge about the world to plan a path, and their computational cost is not low. They are not usable in an unknown and uncertain environment. Given the limited system resources, such as CPU time, memory size and knowledge about the specific application, in an intelligent system (such as a mobile robot), it is necessary to use algorithms that provide a good decision that is feasible with the available resources in real time, rather than the best answer that could be achieved with unlimited time and unlimited resources. A real-time path planner should meet the following requirements: - Quickly abstract the representation of the world from the sensory data, without any previous knowledge about the robot environment. - Easily update the world model to spell out the global-path map and to reflect changes in the robot environment. - Decide where the robot must go and which direction the range sensor should point to, in real time with limited resources. The method presented here assumes that the data from the range sensors have been processed by a signal processing unit. The path planner guides the scan of the range sensor, finds critical points, decides where the robot should go and which point is a potential critical point, generates the path map, and monitors the robot's moves to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.
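
    The find-path problem described above can be illustrated with a deliberately simple baseline. The sketch below is not the paper's critical-point method; it is a generic breadth-first search over a 4-connected occupancy grid (a hypothetical toy workspace), showing what a collision-free find-path query looks like once a world map exists.

    ```python
    from collections import deque

    def bfs_path(grid, start, goal):
        """Breadth-first search for a shortest collision-free path on a
        4-connected occupancy grid (0 = free, 1 = obstacle).
        Returns the list of cells from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}          # doubles as the visited set
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:          # walk the predecessor chain back
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in prev):
                    prev[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None

    # Toy workspace: a wall with a single gap forces the path around it.
    grid = [[0, 0, 0, 0],
            [1, 1, 1, 0],
            [0, 0, 0, 0]]
    path = bfs_path(grid, (0, 0), (2, 0))
    ```

    Unlike the paper's planner, this assumes the whole map is already known; the point is only to make the find-path formulation concrete.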

  8. Memory persistency and nonlinearity in daily mean dew point across India

    NASA Astrophysics Data System (ADS)

    Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar

    2016-04-01

    Enterprising endeavour has been taken in this work to realize and estimate the persistence in memory of the daily mean dew point time series obtained from seven different weather stations viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad representing different geographical zones in India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To affirm the Hurst exponent values, five different scaling methods have been used and the corresponding results are compared to synthesize a finer and reliable conclusion out of it. The present analysis also bespeaks that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been exploited to investigate nonlinearity, and the present calculation confirms the presence of deterministic nonlinear profile in the daily mean dew point time series of the seven stations.
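
    The Hurst-exponent analysis described above can be approximated with a basic rescaled-range (R/S) estimator. This is a minimal sketch, not the authors' five-method procedure; the test series and window sizes are arbitrary illustrations.

    ```python
    import math
    import random

    def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
        """Estimate the Hurst exponent by rescaled-range (R/S) analysis.
        H < 0.5 suggests anti-persistence, H near 0.5 uncorrelated noise,
        H > 0.5 long-range persistence."""
        log_n, log_rs = [], []
        for n in window_sizes:
            rs_vals = []
            for start in range(0, len(series) - n + 1, n):
                seg = series[start:start + n]
                mean = sum(seg) / n
                dev = [v - mean for v in seg]
                # cumulative deviation from the segment mean
                cum, z = [], 0.0
                for d in dev:
                    z += d
                    cum.append(z)
                r = max(cum) - min(cum)                      # range
                s = math.sqrt(sum(d * d for d in dev) / n)   # std dev
                if s > 0:
                    rs_vals.append(r / s)
            if rs_vals:
                log_n.append(math.log(n))
                log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
        # least-squares slope of log(R/S) versus log(n) gives H
        m = len(log_n)
        mx, my = sum(log_n) / m, sum(log_rs) / m
        return (sum((a - mx) * (b - my) for a, b in zip(log_n, log_rs))
                / sum((a - mx) ** 2 for a in log_n))

    random.seed(42)
    white = [random.gauss(0.0, 1.0) for _ in range(1024)]
    H = hurst_rs(white)   # near 0.5 for uncorrelated noise
    ```

    An anti-persistent dew point series, as reported in the paper, would yield H below 0.5 under this estimator.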

  9. A CPU benchmark for protein crystallographic refinement.

    PubMed

    Bourne, P E; Hendrickson, W A

    1990-01-01

    The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.

  10. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  11. Sampled control stability of the ESA instrument pointing system

    NASA Astrophysics Data System (ADS)

    Thieme, G.; Rogers, P.; Sciacovelli, D.

    Stability analysis and simulation results are presented for the ESA Instrument Pointing System (IPS) that is to be used in Spacelab's second launch. Of the two IPS plant dynamic models used in the ESA and NASA activities, one is based on six interconnected rigid bodies that represent the IPS and its payload, while the other follows the NASA practice of defining an IPS-Spacelab 2 plant configuration through a structural finite element model, which is then used to generate modal data for various pointing directions. In both cases, the IPS dynamic plant model is truncated, then discretized at the sampling frequency and interfaced to a PID-based control law. A stability analysis has been carried out in the discrete domain for various instrument pointing directions, taking into account suitable parameter variation ranges. A number of time simulations are presented.

  12. The Origins of the Cold War: A Unit of Study for Grades 9-12.

    ERIC Educational Resources Information Center

    King, Lisa

    This unit is one of a series that represents specific moments in history from which students focus on the meanings of landmark events. The events of 1945 are regarded widely as a turning point in 20th century history, a point when the United States unequivocally took its place as a world power, at a time when Americans had a strong but…

  13. A Novel Real-Time Reference Key Frame Scan Matching Method.

    PubMed

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-05-07

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, indicating the potential of the new algorithm for real-time systems.
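
    The iterative closest point fallback mentioned above can be sketched compactly for the 2D case. The code below is a generic point-to-point ICP with nearest-neighbour matching and a closed-form rigid alignment, not the paper's RKF algorithm; the scan data are synthetic.

    ```python
    import math

    def icp_2d(source, target, iters=20):
        """Minimal point-to-point ICP in 2D: nearest-neighbour matching
        plus a closed-form rigid alignment (rotation via atan2).
        Returns the source points transformed into the target frame."""
        pts = [list(p) for p in source]
        for _ in range(iters):
            # nearest-neighbour correspondences (O(n^2); fine for a sketch)
            pairs = [(p, min(target,
                             key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2))
                     for p in pts]
            # centroids of the matched sets
            cpx = sum(p[0] for p, _ in pairs) / len(pairs)
            cpy = sum(p[1] for p, _ in pairs) / len(pairs)
            cqx = sum(q[0] for _, q in pairs) / len(pairs)
            cqy = sum(q[1] for _, q in pairs) / len(pairs)
            # closed-form rotation minimising the point-to-point error
            sxx = sum((p[0] - cpx) * (q[0] - cqx) + (p[1] - cpy) * (q[1] - cqy)
                      for p, q in pairs)
            sxy = sum((p[0] - cpx) * (q[1] - cqy) - (p[1] - cpy) * (q[0] - cqx)
                      for p, q in pairs)
            theta = math.atan2(sxy, sxx)
            c, s = math.cos(theta), math.sin(theta)
            for p in pts:
                x, y = p[0] - cpx, p[1] - cpy
                p[0] = c * x - s * y + cqx
                p[1] = s * x + c * y + cqy
        return pts

    # Synthetic "scan": the target is the source rotated 10 degrees and shifted.
    src = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]
    a = math.radians(10)
    tgt = [(math.cos(a) * x - math.sin(a) * y + 0.3,
            math.sin(a) * x + math.cos(a) * y - 0.2) for x, y in src]
    aligned = icp_2d(src, tgt)
    ```

    The outlier sensitivity the paper criticises shows up here in the nearest-neighbour step: a single spurious point can drag the closed-form solution away from the true transform.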

  14. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance equal to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. 
PMID:23861790
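
    The premise that a few well-placed charges can reproduce a multipole field away from the source can be checked numerically. The sketch below is not the OPCA optimisation itself; it compares an ideal point dipole against the textbook two-charge realisation, with hypothetical parameter values, measuring a relative RMS mismatch in the same spirit as the paper's error metric.

    ```python
    import math

    p = 1.0          # dipole moment (arbitrary units, Coulomb constant dropped)
    h = 0.1          # half-separation of the +q/-q pair along the z axis
    q = p / (2 * h)  # charge magnitude chosen so that q * 2h = p
    R = 2.0          # observation radius, well outside the pair

    def v_dipole(theta):
        """Ideal point-dipole potential at radius R, polar angle theta."""
        return p * math.cos(theta) / R ** 2

    def v_two_charges(theta):
        """Exact potential of the two point charges at z = +h and z = -h."""
        d_plus = math.sqrt(R * R + h * h - 2 * R * h * math.cos(theta))
        d_minus = math.sqrt(R * R + h * h + 2 * R * h * math.cos(theta))
        return q / d_plus - q / d_minus

    # relative RMS mismatch sampled over polar angles
    thetas = [math.pi * (k + 0.5) / 100 for k in range(100)]
    num = sum((v_two_charges(t) - v_dipole(t)) ** 2 for t in thetas)
    den = sum(v_dipole(t) ** 2 for t in thetas)
    rms_rel = math.sqrt(num / den)   # small when h << R
    ```

    Shrinking h (at fixed p) drives the mismatch toward zero, which is the far-field equivalence the paper builds on; OPCA additionally optimises charge placement for mid-field accuracy.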

  15. An integrated national mortality surveillance system for death registration and mortality surveillance, China.

    PubMed

    Liu, Shiwei; Wu, Xiaoling; Lopez, Alan D; Wang, Lijun; Cai, Yue; Page, Andrew; Yin, Peng; Liu, Yunning; Li, Yichong; Liu, Jiangmei; You, Jinling; Zhou, Maigeng

    2016-01-01

    In China, sample-based mortality surveillance systems, such as the Chinese Center for Disease Control and Prevention's disease surveillance points system and the Ministry of Health's vital registration system, have been used for decades to provide nationally representative data on health status for health-care decision-making and performance evaluation. However, neither system provided representative mortality and cause-of-death data at the provincial level to inform regional health service needs and policy priorities. Moreover, the systems overlapped to a considerable extent, thereby entailing a duplication of effort. In 2013, the Chinese Government combined these two systems into an integrated national mortality surveillance system to provide a provincially representative picture of total and cause-specific mortality and to accelerate the development of a comprehensive vital registration and mortality surveillance system for the whole country. This new system increased the surveillance population from 6 to 24% of the Chinese population. The number of surveillance points, each of which covered a district or county, increased from 161 to 605. To ensure representativeness at the provincial level, the 605 surveillance points were selected to cover China's 31 provinces using an iterative method involving multistage stratification that took into account the sociodemographic characteristics of the population. This paper describes the development and operation of the new national mortality surveillance system, which is expected to yield representative provincial estimates of mortality in China for the first time.

  16. Real-time measurement of quality during the compaction of subgrade soils.

    DOT National Transportation Integrated Search

    2012-12-01

    Conventional quality control of subgrade soils during their compaction is usually performed by monitoring moisture content and dry density at a few discrete locations. However, randomly selected points do not adequately represent the entire compacted...

  17. 40 CFR 410.32 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TEXTILE MILLS POINT SOURCE CATEGORY Low Water Use Processing... 9.0 at all times. Water Jet Weaving Pollutant or pollutant property BPT limitations Maximum for any...

  18. Sidestepping questions of legitimacy: how community representatives manoeuvre to effect change in a health service.

    PubMed

    Nathan, Sally; Stephenson, Niamh; Braithwaite, Jeffrey

    2014-01-01

    Empirical studies of community participation in health services commonly tie effectiveness to the perceived legitimacy of community representatives among health staff. This article examines the underlying assumption that legitimacy is the major pathway to influence for community representatives. It takes a different vantage point from previous research in its examination of data (primarily through 34 in-depth interviews, observation and recording of 26 meetings and other interactions documented in field notes) from a 3-year study of community representatives' action in a large health region in Australia. The analysis primarily deploys Michel de Certeau's ideas of Strategy and Tactic to understand the action and effects of the generally 'weaker players' in the spaces and places dominated by powerful institutions. Through this lens, we can see the points where community representatives are active participants following their own agenda, tactically capitalising on cracks in the armour of the health service to seize opportunities that present themselves in time to effect change. Being able to see community representatives as active producers of change, not simply passengers following the path of the health service, challenges how we view the success of community participation in health.

  19. 40 CFR 463.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pH (1) (1) 1 Within the range of 6.0 to 9.0 at all times. The permit authority will obtain the... cleaning water processes at a point source times the following pollutant concentrations: Subpart B [Cleaning water] Concentration used to calculate BPT effluent limitations Pollutant or pollutant property...

  20. Efficient Delaunay Tessellation through K-D Tree Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
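
    The k-d tree decomposition strategy can be sketched with a simple median split. This is an illustrative partitioner, not the authors' parallel implementation; the leaf size and the clustered test data are arbitrary choices.

    ```python
    import random

    def kdtree_partition(points, depth=0, leaf_size=4):
        """Recursively split a point set at the median along alternating axes
        and return the leaf buckets. Median splits keep the buckets balanced
        even when the input density is highly non-uniform."""
        if len(points) <= leaf_size:
            return [points]
        axis = depth % len(points[0])          # cycle through x, y, ...
        pts = sorted(points, key=lambda pt: pt[axis])
        mid = len(pts) // 2                    # split at the median point
        return (kdtree_partition(pts[:mid], depth + 1, leaf_size) +
                kdtree_partition(pts[mid:], depth + 1, leaf_size))

    # A deliberately unbalanced 2D point set: one dense cluster plus outliers.
    random.seed(1)
    cluster = [(random.gauss(0.0, 0.1), random.gauss(0.0, 0.1))
               for _ in range(60)]
    sparse = [(random.uniform(5.0, 10.0), random.uniform(5.0, 10.0))
              for _ in range(4)]
    leaves = kdtree_partition(cluster + sparse)
    ```

    A regular grid over the same data would put nearly all 64 points into one cell; the median split hands each process an equal share, which is the load-balancing effect the abstract credits for the speedup.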

  1. Legibility Evaluation Using Point-of-regard Measurement

    NASA Astrophysics Data System (ADS)

    Saito, Daisuke; Saito, Keiichi; Saito, Masao

    Web site visibility has become important because of the rapid dissemination of the World Wide Web, and combinations of foreground and background colors are crucial in providing high visibility. In our previous studies, the visibilities of several web-safe color combinations were examined using a psychological method. In those studies, simple stimuli were used because of experimental restrictions. In this paper, the legibility of sentences on web sites was examined using a psychophysiological method, point-of-regard measurement, to obtain additional practical data. Ten people with normal color vision, aged 21 to 29, were recruited. The number of characters per line on each page was kept the same, and the four representative achromatic web-safe colors, that is, #000000, #666666, #999999 and #CCCCCC, were examined. The reading time per character and the gaze time per line were obtained from point-of-regard measurement, and the normalized reading and gaze times of the three colors were calculated and compared. The results showed that reading and gaze times lengthen at the same ratio as the contrast decreases. Therefore, the legibility of color combinations can be estimated by point-of-regard measurement.

  2. The Geological Grading Scale: Every million Points Counts!

    NASA Astrophysics Data System (ADS)

    Stegman, D. R.; Cooper, C. M.

    2006-12-01

    The concept of geological time, ranging from thousands to billions of years, is naturally quite difficult for students to grasp initially, as it is much longer than the timescales over which they experience everyday life. Moreover, universities operate on a few key timescales (hourly lectures, weekly assignments, mid-term examinations) to which students' maximum attention is focused, largely driven by graded assessment. The geological grading scale exploits the overwhelming interest students have in grades as an opportunity to instill familiarity with geological time. With the geological grading scale, the number of possible points/marks/grades available in the course is scaled to 4.5 billion points, collapsing the entirety of Earth history into one semester. Alternatively, geological time can be compressed into each assignment, with scores for weekly homeworks worth not 100 points each, but 4.5 billion! Homeworks left incomplete with questions unanswered lose hundreds of millions of points, equivalent to missing the Paleozoic era. The expected quality of presentation for problem sets can be established with great impact in the first week by docking messy assignments an insignificant number of points, though likely more points than the students have lost in their entire schooling history combined. Use this grading scale and your students will gradually begin to appreciate exactly how much time a geological blink of the eye represents.

  3. Ten-year trends in adolescents' self-reported emotional and behavioral problems in the Netherlands.

    PubMed

    Duinhof, Elisa L; Stevens, Gonneke W J M; van Dorsselaer, Saskia; Monshouwer, Karin; Vollebergh, Wilma A M

    2015-09-01

    Changes in social, cultural, economic, and governmental systems over time may affect adolescents' development. The present study examined 10-year trends in self-reported emotional and behavioral problems among 11- to 16-year-old adolescents in the Netherlands. In addition, gender (girls versus boys), ethnic (Dutch versus non-western) and educational (vocational versus academic) differences in these trends were examined. By means of the Strengths and Difficulties Questionnaire, trends in emotional and behavioral problems were studied in adolescents belonging to one of five independent, population-representative samples (2003: n = 6,904; 2005: n = 5,183; 2007: n = 6,228; 2009: n = 5,559; 2013: n = 5,478). Structural equation models indicated rather stable levels of emotional and behavioral problems over time. Whereas some small changes were found between different time points, these changes did not represent consistent changes in problem levels. Similarly, gender, ethnic and educational differences in self-reported problems at each time point were highly comparable, indicating stable mental health inequalities between groups of adolescents over time. Future internationally comparative studies using multiple measurement moments are needed to monitor whether these persistent mental health inequalities hold over extended periods of time and in different countries.

  4. Computer models of social processes: the case of migration.

    PubMed

    Beshers, J M

    1967-06-01

    The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors. Transition probabilities can be calculated to represent effects of the whole age-by-state distribution at any given time period, too. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon. Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parameters within the class) and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.
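
    The core projection step of such a model is a distribution vector multiplied by a transition matrix. The sketch below uses a stationary two-region migration matrix with hypothetical rates; the model described above goes further by recomputing these probabilities each period from the evolving distribution.

    ```python
    def project(distribution, transition):
        """One step of a discrete-time Markov projection:
        next[j] = sum_i distribution[i] * transition[i][j]."""
        n = len(distribution)
        return [sum(distribution[i] * transition[i][j] for i in range(n))
                for j in range(n)]

    # Hypothetical two-region migration matrix (rows = origin,
    # columns = destination); a non-stationary model would update
    # these entries from job supplies, labor markets, etc.
    P = [[0.9, 0.1],   # 10% of region A's population moves to B each period
         [0.2, 0.8]]   # 20% of region B's population moves to A
    pop = [1000.0, 1000.0]
    for _ in range(3):
        pop = project(pop, P)   # drifts toward the 2:1 steady state
    ```

    Total population is conserved at each step; only its distribution across regions changes, which is what makes the matrix a (row-stochastic) transition matrix.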

  5. Reproducibility of dynamically represented acoustic lung images from healthy individuals

    PubMed Central

    Maher, T M; Gat, M; Allen, D; Devaraj, A; Wells, A U; Geddes, D M

    2008-01-01

    Background and aim: Acoustic lung imaging offers a unique method for visualising the lung. This study was designed to demonstrate reproducibility of acoustic lung images recorded from healthy individuals at different time points and to assess intra- and inter-rater agreement in the assessment of dynamically represented acoustic lung images. Methods: Recordings from 29 healthy volunteers were made on three separate occasions using vibration response imaging. Reproducibility was measured using quantitative, computerised assessment of vibration energy. Dynamically represented acoustic lung images were scored by six blinded raters. Results: Quantitative measurement of acoustic recordings was highly reproducible with an intraclass correlation score of 0.86 (very good agreement). Intraclass correlations for inter-rater agreement and reproducibility were 0.61 (good agreement) and 0.86 (very good agreement), respectively. There was no significant difference found between the six raters at any time point. Raters ranged from 88% to 95% in their ability to identically evaluate the different features of the same image presented to them blinded on two separate occasions. Conclusion: Acoustic lung imaging is reproducible in healthy individuals. Graphic representation of lung images can be interpreted with a high degree of accuracy by the same and by different reviewers. PMID:18024534

  6. Variance change point detection for fractional Brownian motion based on the likelihood ratio test

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz

    2018-01-01

    Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. Here, we also propose a statistical test for the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
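
    For ordinary Brownian-motion-like data (independent Gaussian increments), the likelihood ratio scan that this methodology extends can be written down directly. The sketch below tests every admissible split of a zero-mean sequence for a variance change; it is the baseline technique, not the paper's fractional extension, and the simulated series is an arbitrary illustration.

    ```python
    import math
    import random

    def variance_change_point(x, margin=10):
        """Scan all admissible splits k of a zero-mean Gaussian sequence for
        a single variance change, maximising the log-likelihood-ratio
        statistic n*log(s2_all) - k*log(s2_left) - (n-k)*log(s2_right)."""
        n = len(x)
        # prefix sums of squares so each split is evaluated in O(1)
        css = [0.0]
        for v in x:
            css.append(css[-1] + v * v)
        s2_all = css[n] / n
        best_k, best_stat = None, float("-inf")
        for k in range(margin, n - margin):
            s2_l = css[k] / k
            s2_r = (css[n] - css[k]) / (n - k)
            stat = (n * math.log(s2_all)
                    - k * math.log(s2_l)
                    - (n - k) * math.log(s2_r))
            if stat > best_stat:
                best_k, best_stat = k, stat
        return best_k, best_stat

    # Synthetic series: the diffusion scale jumps from 1 to 3 at t = 200.
    random.seed(0)
    x = ([random.gauss(0.0, 1.0) for _ in range(200)]
         + [random.gauss(0.0, 3.0) for _ in range(200)])
    k_hat, stat = variance_change_point(x)   # k_hat should land near 200
    ```

    Significance of the maximised statistic is then assessed against its null distribution (the paper obtains this by simulation for the fractional case).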

  7. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%

    PubMed Central

    Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.

    2016-01-01

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
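
    The 14CO2-based quantification rests on an isotopic mass balance: fossil carbon is 14C-free (Delta14C = -1000 permil), so the observed Delta14C depletion relative to background scales with the added CO2ff. The sketch below implements the widely used simplified form of that balance with hypothetical numbers; it is not necessarily the authors' exact formulation and omits the smaller correction terms (e.g., nuclear industry and heterotrophic respiration influences) a full analysis would include.

    ```python
    # Simplified 14C mass balance for the fossil-fuel CO2 ("CO2ff") mole
    # fraction added to background air. All numbers below are hypothetical.
    DELTA_FF = -1000.0  # permil: fossil carbon contains no 14C

    def co2ff(co2_obs, delta_obs, delta_bg):
        """CO2ff = CO2_obs * (Delta_obs - Delta_bg) / (Delta_ff - Delta_bg),
        from combining CO2_obs = CO2_bg + CO2_ff with the 14C budget
        CO2_obs*Delta_obs = CO2_bg*Delta_bg + CO2_ff*Delta_ff."""
        return co2_obs * (delta_obs - delta_bg) / (DELTA_FF - delta_bg)

    # Hypothetical example: 410 ppm observed, depleted by 10 permil
    # relative to a +20 permil background.
    ff = co2ff(410.0, 10.0, 20.0)   # ppm of recently added fossil CO2
    ```

    Each 10 permil of depletion corresponds to roughly 4 ppm of CO2ff at these values, which is why the time-integrated 14C sampling the paper describes needs only modest measurement precision to constrain emissions to better than 10%.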

  8. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%.

    PubMed

    Turnbull, Jocelyn Christine; Keller, Elizabeth D; Norris, Margaret W; Wiltshire, Rachael M

    2016-09-13

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 ((14)CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric (14)CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions.

  9. Assessing the Transient Gust Response of a Representative Ship Airwake using Proper Orthogonal Decomposition

    DTIC Science & Technology

    Velocimetry system was then used to acquire flow field data across a series of three horizontal planes spanning from 0.25 to 1.5 times the ship hangar height...included six separate data points at gust-frequency referenced Strouhal numbers ranging from 0.430 to 1.474. A 725-Hertz time-resolved Particle Image

  10. 40 CFR 463.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... TSS 117 36 pH (1) (1) 1 Within the range of 6.0 to 9.0 at all times. The permit authority will obtain... for the cleaning water processes at a point source times the following pollutant concentrations: Subpart B [Cleaning water] Concentration used to calculate BPT effluent limitations Pollutant or pollutant...

  11. 40 CFR 463.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... TSS 117 36 pH (1) (1) 1 Within the range of 6.0 to 9.0 at all times. The permit authority will obtain... for the cleaning water processes at a point source times the following pollutant concentrations: Subpart B [Cleaning water] Concentration used to calculate BPT effluent limitations Pollutant or pollutant...

  12. 40 CFR 463.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... TSS 117 36 pH (1) (1) 1 Within the range of 6.0 to 9.0 at all times. The permit authority will obtain... for the cleaning water processes at a point source times the following pollutant concentrations: Subpart B [Cleaning water] Concentration used to calculate BPT effluent limitations Pollutant or pollutant...

  13. Estimating the frequency interval of a regularly spaced multicomponent harmonic line signal in colored noise

    NASA Astrophysics Data System (ADS)

    Frazer, Gordon J.; Anderson, Stuart J.

    1997-10-01

    The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal-plus-noise model: x_t = s_t + [v_t + η_t] = Σ_{i=0}^{P−1} A_i e^{j2π(f_i/f_s)t} + v_t + η_t, t ∈ {0, ..., N−1}, f_i = k f_I + f_0, where the received signal x_t corresponds to the radar return from the target of interest from one azimuth-range cell. The signal has an unknown number of components, P, and unknown complex amplitudes A_i and frequencies f_i. The frequency parameters f_0 and f_I are unknown, although constrained such that f_0 < f_I/2, and the parameter k ∈ {−u, ..., −2, −1, 0, 1, 2, ..., v} is constrained such that the component frequencies f_i are bounded by (−f_s/2, f_s/2). The noise term v_t is typically colored and represents clutter, interference, and various noise sources. It is unknown, except that Σ_t v_t² < ∞; in general, v_t is not well modelled as an auto-regressive process of known order. The additional noise term η_t represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter f_I, representing the frequency interval between harmonic lines. It is desired to determine an estimate of f_I from N samples of x_t. We propose an algorithm to estimate f_I based on Thomson's harmonic-line F-test, which is part of the multi-window spectrum estimation method, and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
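    To build intuition for the model above, a much simpler stand-in for the harmonic-interval estimator can be sketched: locate the spectral lines with an FFT and take the median spacing between them as the estimate of f_I. (The abstract's actual estimator uses Thomson's multitaper harmonic-line F-test, which is far more robust in colored noise; the parameters below are illustrative and chosen so the lines fall on exact FFT bins.)

```python
import numpy as np

# Synthesize x_t = sum_k exp(j*2*pi*(k*f_I + f_0)*t) + noise, then
# estimate f_I as the median spacing between the strongest spectral bins.
fs, N = 1024.0, 4096            # sample rate chosen so lines sit on FFT bins
f_I, f_0 = 37.0, 11.0           # true harmonic interval and offset
t = np.arange(N) / fs
rng = np.random.default_rng(0)
x = sum(np.exp(2j * np.pi * (k * f_I + f_0) * t) for k in range(-3, 4))
x += 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

spec = np.abs(np.fft.fft(x))
freqs = np.fft.fftfreq(N, 1 / fs)
# Keep the 7 strongest bins as line candidates, then difference them.
lines = np.sort(freqs[np.argsort(spec)[-7:]])
f_I_hat = np.median(np.diff(lines))
print(round(f_I_hat, 2))
```

This naive peak-spacing approach fails when lines straddle bins or the noise is strongly colored, which is precisely what motivates the F-test-based estimator in the abstract.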

  14. Chronic Ethanol Exposure Produces Time- and Brain Region-Dependent Changes in Gene Coexpression Networks

    PubMed Central

    Osterndorff-Kahanek, Elizabeth A.; Becker, Howard C.; Lopez, Marcelo F.; Farris, Sean P.; Tiwari, Gayatri R.; Nunez, Yury O.; Harris, R. Adron; Mayfield, R. Dayne

    2015-01-01

    Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain region-dependent changes in gene coexpression networks in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0, 8, and 120 hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes (2,000-3,000) at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points, with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes. PMID:25803291

  15. A Novel Real-Time Reference Key Frame Scan Matching Method

    PubMed Central

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-01-01

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typical in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computation times, indicating the potential use of the new algorithm in real-time systems. PMID:28481285

  16. Dental scanning in CAD/CAM technologies: laser beams

    NASA Astrophysics Data System (ADS)

    Sinescu, Cosmin; Negrutiu, Meda; Faur, Nicolae; Negru, Radu; Romînu, Mihai; Cozarov, Dalibor

    2008-02-01

    Scanning, also called digitizing, is the process of gathering the requisite data from an object. Many different technologies are used to collect three-dimensional data. They range from mechanical and very slow to radiation-based and highly automated. Each technology has its advantages and disadvantages, and their applications and specifications overlap. The aims of this study are to establish a viable method of digitally representing artifacts of dental casts, to propose a suitable scanner and post-processing software, and to obtain 3D models for dental applications. The method consists of scanning procedures performed with different scanners on the implicated materials. Scanners are the medium of data capture. 3D scanners aim to measure and record the relative distance between the object's surface and a known point in space. This geometric data is represented in the form of point cloud data. Both contact and non-contact scanners were examined. The results show that contact scanning uses a touch probe to record the relative position of points on the object's surface. This procedure is commonly used in reverse engineering applications. Its merit is efficiency for objects with low geometric surface detail; its disadvantage is that it is time consuming, making it impractical for artifact digitization. Non-contact scanning involves laser scanning (laser triangulation technology) and photogrammetry. It can be concluded that different types of dental structure need different scanning procedures in order to obtain a competitive, complex 3D virtual model that can be used in CAD/CAM technologies.

  17. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis reveals no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width divided by mean velocity, and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
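    The stabilization check described in the abstract can be sketched on a synthetic series: track the running mean of concentration and find the time after which it stays within a tolerance of the full-record mean. (All numbers are illustrative; the study used a real LISST record and a 5.5-minute window.)

```python
import numpy as np

def time_to_stable_mean(series, dt, tol=0.05):
    """Return the first time (in the units of dt) after which the running
    mean stays within +/- tol (fractional) of the full-record mean,
    or None if it never settles within the record."""
    target = series.mean()
    running = np.cumsum(series) / np.arange(1, len(series) + 1)
    ok = np.abs(running - target) <= tol * abs(target)
    bad = np.where(~ok)[0]          # indices where the criterion fails
    if len(bad) == 0:
        return dt                   # stable from the first sample
    if bad[-1] == len(series) - 1:
        return None                 # never settles
    return (bad[-1] + 2) * dt       # first index after the last failure

# Synthetic concentration record: 5.5 minutes at 1 Hz, mean 100 mg/L
rng = np.random.default_rng(1)
conc = 100.0 + 10.0 * rng.standard_normal(330)
print(time_to_stable_mean(conc, dt=1.0) is not None)
```

Applying this to each 5.5-minute window of a record reproduces the kind of statistic quoted above (fraction of windows stabilizing within 30 s, fraction never stabilizing).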

  18. Optimizing Disaster Relief: Real-Time Operational and Tactical Decision Support

    DTIC Science & Technology

    1993-01-01

    efficiencies in completing the tasks. Allocations recognize task priorities and the logistical effects of geographic proximity. In addition...as if they are collocated. Arcs connect local pairs of zones to represent feasible direct point-to-point transportation and bear costs for...data to the desired level of aggregation. We have tested ARES manually and by replacing the decision maker with the decision simulator which

  19. Targeting SRC Family Kinases and HSP90 in Lung Cancer

    DTIC Science & Technology

    2016-12-01

    inhalation of Adeno-Cre, followed by MRI imaging at regular intervals to detect tumor initiation and growth, followed by euthanasia and processing of...experimental endpoint. 10 mice were used per time point. Representative MRI data describing tumor volume (TV) are shown in Figure 1. Quantification of data is...dasatinib, we were able to make several conclusions. Figure 1. Representative MRI images from Nedd9wt or Nedd9-null Kras-mutant mice, treated with

  20. Estimate of blow-up and relaxation time for self-gravitating Brownian particles and bacterial populations.

    PubMed

    Chavanis, P-H; Sire, C

    2004-08-01

    We determine an exact asymptotic expression of the blow-up time t_coll for self-gravitating Brownian particles or bacterial populations (chemotaxis) close to the critical point in d = 3. We show that t_coll = t_* (η − η_c)^(−1/2) with t_* = 0.91767702..., where η represents the inverse temperature (for Brownian particles) or the mass (for bacterial colonies), and η_c is the critical value of η above which the system blows up. This result is in perfect agreement with the numerical solution of the Smoluchowski-Poisson system. We also determine the exact asymptotic expression of the relaxation time close to, but above, the critical temperature and derive a large-time asymptotic expansion for the density profile exactly at the critical point.
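    The square-root divergence quoted above is easy to visualize numerically: on log-log axes, t_coll versus the distance from criticality (η − η_c) is a straight line of slope −1/2. A minimal sketch (the η_c value here is illustrative, not from the paper):

```python
import numpy as np

# Check the scaling law t_coll = t_* (eta - eta_c)^(-1/2) by recovering
# the exponent -1/2 from a log-log slope fit.
t_star, eta_c = 0.91767702, 2.0      # eta_c chosen arbitrarily for the demo
gap = np.logspace(-6, -2, 20)        # eta - eta_c, approaching criticality
t_coll = t_star * gap ** -0.5
slope = np.polyfit(np.log(gap), np.log(t_coll), 1)[0]
print(round(slope, 3))               # -> -0.5
```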

  1. Representative locations from time series of soil water content using time stability and wavelet analysis.

    PubMed

    Rivera, Diego; Lillo, Mario; Granda, Stalin

    2014-12-01

    The concept of time stability has been widely used in the design and assessment of soil moisture monitoring networks, as well as in hydrological studies, because it is a technique that identifies particular locations that represent the field-mean soil moisture. In this work, we assess how time stability calculations change as new information is added, and how they are affected over shorter periods, subsampled from the original time series, containing different amounts of precipitation. We defined two experiments to explore time stability behavior. The first experiment sequentially adds new data to the previous time series to investigate the long-term influence of new data on the results. The second experiment applies a windowing approach, taking sequential subsamples from the entire time series to investigate the influence of short-term changes associated with the precipitation in each window. Our results from an operating network (seven monitoring points, each equipped with four sensors, in a 2-ha blueberry field) show that as information is added to the time series, the location of the most stable point (MSP) changes, and that in moving 21-day windows most of the variability of soil water content is associated with both the amount and the intensity of rainfall. The change of the MSP over each window depends on the amount of water entering the soil and the previous state of the soil water content. For our case study, the upper strata are proxies for hourly to daily changes in soil water content, while the deeper strata are proxies for medium-range stored water. Thus, different locations and depths are representative of processes at different time scales. This situation must be taken into account when water management depends on soil water content values from fixed locations.
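    The time-stability calculation behind the MSP can be sketched in a few lines: for each location, compute the mean relative difference (MRD) of its soil moisture from the field mean over time; the most stable point is the one whose MRD is closest to zero (with small spread). Synthetic data below; seven points as in the abstract, but all numbers are illustrative.

```python
import numpy as np

# Sketch of the time-stability (mean relative difference) calculation.
rng = np.random.default_rng(0)
n_points, n_times = 7, 200
field_mean = 0.30 + 0.05 * rng.standard_normal(n_times)      # field-average theta(t)
offsets = np.array([-0.06, -0.03, -0.01, 0.0, 0.02, 0.05, 0.08])
theta = field_mean + offsets[:, None] + 0.005 * rng.standard_normal((n_points, n_times))

rel_diff = (theta - field_mean) / field_mean    # delta_i(t), relative difference
mrd = rel_diff.mean(axis=1)                     # mean relative difference per point
sdrd = rel_diff.std(axis=1)                     # spread of the relative difference
msp = np.argmin(np.abs(mrd))                    # most stable point index
print(msp)                                      # point with zero offset wins
```

Re-running this on growing or windowed subsets of the time series is exactly the kind of experiment the abstract describes: the identity of the MSP can change as data are added or as rainfall enters a window.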

  2. 40 CFR 461.31 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead Subcategory....860 0.410 pH (1) (1) 1 Within the range of 7.5 to 10.0 at all times. (5) Subpart C—Battery Wash (with... Within the range of 7.5 to 10.0 at all times. (6) Subpart C—Battery Wash (Water Only). BPT Effluent...

  3. 40 CFR 461.31 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead....860 0.410 pH (1) (1) 1 Within the range of 7.5 to 10.0 at all times. (5) Subpart C—Battery Wash (with... Within the range of 7.5 to 10.0 at all times. (6) Subpart C—Battery Wash (Water Only). BPT Effluent...

  4. 40 CFR 461.31 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead....860 0.410 pH (1) (1) 1 Within the range of 7.5 to 10.0 at all times. (5) Subpart C—Battery Wash (with... Within the range of 7.5 to 10.0 at all times. (6) Subpart C—Battery Wash (Water Only). BPT Effluent...

  5. 40 CFR 461.31 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead Subcategory....860 0.410 pH (1) (1) 1 Within the range of 7.5 to 10.0 at all times. (5) Subpart C—Battery Wash (with... Within the range of 7.5 to 10.0 at all times. (6) Subpart C—Battery Wash (Water Only). BPT Effluent...

  6. 40 CFR 461.31 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead....860 0.410 pH (1) (1) 1 Within the range of 7.5 to 10.0 at all times. (5) Subpart C—Battery Wash (with... Within the range of 7.5 to 10.0 at all times. (6) Subpart C—Battery Wash (Water Only). BPT Effluent...

  7. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
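    The FFT-based plane-to-plane propagation that makes the depth-layer approach fast can be sketched with the standard angular spectrum method: two FFTs and a frequency-domain transfer function per layer. A minimal sketch (grid size, wavelength, and aperture are illustrative, not from the paper):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a 2-D complex field a distance z between parallel planes
    using the angular spectrum method (dx is the sample pitch)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Round trip: propagating forward then backward recovers the field
# (no evanescent content at this sampling, so H is unitary).
u0 = np.zeros((64, 64), complex)
u0[28:36, 28:36] = 1.0                        # simple square aperture
u1 = angular_spectrum(u0, wavelength=633e-9, dx=10e-6, z=1e-3)
u2 = angular_spectrum(u1, 633e-9, 10e-6, -1e-3)
err = np.max(np.abs(u2 - u0))
print(err < 1e-6)
```

For a depth-layer hologram, each depth slice of the scene is propagated to the hologram plane this way and the contributions are summed, which is why the method scales with the number of layers rather than the number of points.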

  8. ImpulseDE: detection of differentially expressed genes in time series data using impulse models.

    PubMed

    Sander, Jil; Schultze, Joachim L; Yosef, Nir

    2017-03-01

    Perturbations in the environment lead to distinctive gene expression changes within a cell. Observed over time, those variations can be characterized by single impulse-like progression patterns. ImpulseDE is an R package suited to capture these patterns in high-throughput time series datasets. By fitting a representative impulse model to each gene, it reports differentially expressed genes across time points from a single time course or between two time courses from two experiments. To optimize running time, the code uses clustering and multi-threading. By applying ImpulseDE, we demonstrate its power to represent the underlying biology of gene expression in microarray and RNA-Seq data. ImpulseDE is available on Bioconductor (https://bioconductor.org/packages/ImpulseDE/).
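    Impulse-like expression patterns of the kind ImpulseDE fits are commonly modelled as the product of an onset sigmoid and an offset sigmoid: expression rises from a baseline h0 to a peak h1, then settles at a steady state h2. A sketch of that impulse function (parameter values are illustrative; consult the package documentation for its exact parameterization):

```python
import numpy as np

# Sketch of an impulse-shaped expression profile: product of an onset
# sigmoid and an offset sigmoid, scaled by the peak height h1.
def impulse(t, h0, h1, h2, t1, t2, beta):
    s = lambda x: 1.0 / (1.0 + np.exp(-x))          # logistic sigmoid
    onset = h0 + (h1 - h0) * s(beta * (t - t1))     # rise at t1
    offset = h2 + (h1 - h2) * s(-beta * (t - t2))   # settle at t2
    return onset * offset / h1

t = np.linspace(0, 10, 101)
y = impulse(t, h0=1.0, h1=6.0, h2=2.0, t1=2.0, t2=6.0, beta=4.0)
print(round(y[0], 1), round(y[-1], 1))              # baseline and steady state
```

Fitting the six parameters per gene and comparing the fit between conditions is what lets an impulse-model approach report differential expression across an entire time course rather than at isolated time points.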

  9. Avengers Assemble! Using Pop-Culture Icons to Communicate Science

    ERIC Educational Resources Information Center

    Zehr, E. Paul

    2014-01-01

    Engaging communication of complex scientific concepts with the general public requires more than simplification. Compelling, relevant, and timely points of linkage between scientific concepts and the experiences and interests of the general public are needed. Pop-culture icons such as superheroes can represent excellent opportunities for exploring…

  10. Quantum point contact displacement transducer for a mechanical resonator at sub-Kelvin temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okazaki, Yuma; Mahboob, Imran; Onomitsu, Koji

    Highly sensitive displacement transduction of a 1.67 MHz mechanical resonator with a quantum point contact (QPC) formed in a GaAs heterostructure is demonstrated. By positioning the QPC at the point of maximum mechanical strain on the resonator and operating at 80 mK, a displacement responsivity of 3.81 A/m is measured, which represents a two-order-of-magnitude improvement on previous QPC-based devices. By further analyzing the QPC transport characteristics, a sub-Poisson-noise-limited displacement sensitivity of 25 fm/Hz^(1/2) is determined, which corresponds to a position resolution that is 23 times the standard quantum limit.

  11. Study of the model of calibrating differences of brightness temperature from geostationary satellite generated by time zone differences

    NASA Astrophysics Data System (ADS)

    Li, Weidong; Shan, Xinjian; Qu, Chunyan

    2010-11-01

    In comparison with polar-orbiting satellites, geostationary satellites have a higher time resolution and a wider field of view, covering eleven time zones (an image covers about one third of the Earth's surface). In a geostationary satellite panorama at a single point of time, the brightness temperature of different zones cannot represent the thermal radiation of the surface at the same point of time, because the zones receive different solar radiation. It is therefore necessary to calibrate the brightness temperature of different zones to a common point of time. A model for calibrating the differences in brightness temperature of a geostationary satellite generated by time zone differences is suggested in this study. A total of 16 curves for four positions in four different stages are given through sample statistics of brightness temperature from 5-day synthetic data for four different time zones (time zones 4, 6, 8, and 9). The four stages span January-March (winter), April-June (spring), July-September (summer), and October-December (autumn). Three correction situations and corresponding correction formulas, based on the curve changes, are able to better eliminate the brightness temperature rise or drop caused by time zone differences.

  12. Efficient Open Source Lidar for Desktop Users

    NASA Astrophysics Data System (ADS)

    Flanagan, Jacob P.

    Lidar --- Light Detection and Ranging --- is a remote sensing technology that utilizes a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured. The distance to the object is easily calculated using the speed of light. For lidar, this laser is moved (primarily in a rotational movement, usually accompanied by a translational movement) and the distances to objects are recorded several thousand times per second. From this, a 3-dimensional structure can be procured in the form of a point cloud. A point cloud is a collection of 3-dimensional points with at least an x, a y, and a z attribute. These 3 attributes represent the position of a single point in 3-dimensional space. Other attributes can be associated with the points, including properties such as the intensity of the return pulse, the color of the target, or even the time the point was recorded. Another very useful, post-processed attribute is point classification, where a point is associated with the type of object it represents (e.g., ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets, so the need to handle this data more efficiently has become pressing: processing, visualizing, or even simply loading lidar can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; its implementation is an afterthought and therefore inefficient. Newer, more optimized software for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks more advanced processing tools, requires higher-end computers, and is very costly.
    Existing open source lidar software approaches the loading and processing of lidar in an iterative fashion that requires batch coding and processing times that can run to months for a standard lidar dataset. This project attempts to build software with the best approach for creating, importing and exporting, manipulating, and processing lidar, especially in the environmental field. Development of this software is described in 3 sections: (1) explanation of the search methods for efficiently extracting the "area of interest" (AOI) data from disk (file space), (2) using file space (for storage), budgeting memory space (for efficient processing), and moving between the two, and (3) method development for creating lidar products (usually raster based) used in environmental modeling and analysis (i.e.: hydrology feature extraction, geomorphological studies, ecology modeling, etc.).
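    Two of the basics described above are easy to sketch: the time-of-flight range equation (d = c·t/2, since the pulse travels out and back), and the AOI extraction step, which at its simplest is a bounding-box filter over the point array. Both functions below are illustrative sketches, not part of the project's codebase:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_seconds):
    """Range from a lidar pulse's round-trip time: d = c * t / 2."""
    return C * round_trip_seconds / 2.0

def clip_aoi(points, xmin, xmax, ymin, ymax):
    """Return only the points whose x, y fall inside the AOI box.
    `points` is an (N, 3) array of x, y, z coordinates."""
    m = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
         (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
    return points[m]

print(round(tof_range(1e-6), 1))   # 1 microsecond round trip -> 149.9 m
cloud = np.array([[0.0, 0.0, 1.0], [5.0, 5.0, 2.0], [9.0, 1.0, 3.0]])
print(len(clip_aoi(cloud, 0.0, 6.0, 0.0, 6.0)))   # 2 points inside the box
```

The efficiency problem the project targets is that a naive box filter requires touching every point; production tools avoid this with spatial indexing on disk, which is what section (1) of the description addresses.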

  13. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    PubMed Central

    2011-01-01

    Background Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. PMID:21851598
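    For readers unfamiliar with the base algorithm that DTW-S extends, classic dynamic time warping can be written in a few lines: a dynamic program over the pairwise cost matrix. This is intuition only; the published DTW-S adds interpolated time points, open end points, and the significance test on the estimated shift.

```python
import numpy as np

# Minimal classic DTW distance between two 1-D series.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 40)
ref = np.sin(t)
shifted = np.sin(t - 0.5)      # same shape, shifted in time
other = np.cos(3 * t)          # genuinely different shape
print(dtw_distance(ref, ref) == 0.0)
print(dtw_distance(ref, shifted) < dtw_distance(ref, other))
```

The warping path recovered from the matrix D is what yields a per-time-point alignment, and hence a per-time-point shift estimate, which is the quantity DTW-S attaches significance values to.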

  14. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    PubMed

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.

  15. Spectral reconstruction analysis for enhancing signal-to-noise in time-resolved spectroscopies

    NASA Astrophysics Data System (ADS)

    Wilhelm, Michael J.; Smith, Jonathan M.; Dai, Hai-Lung

    2015-09-01

    We demonstrate a new spectral analysis for the enhancement of the signal-to-noise ratio (SNR) in time-resolved spectroscopies. Unlike the simple linear average, which produces a single representative spectrum with enhanced SNR, this Spectral Reconstruction analysis (SRa) improves the SNR (by a factor of ca. 0.6√n) for all n experimentally recorded time-resolved spectra. SRa operates by eliminating noise in the temporal domain, thereby attenuating noise in the spectral domain, as follows: temporal profiles at each measured frequency are fit to a generic mathematical function that best represents the temporal evolution; spectra at each time are then reconstructed with data points from the fitted profiles. The SRa method is validated with simulated control spectral data sets. Finally, we apply SRa to two distinct experimentally measured sets of time-resolved IR emission spectra: (1) UV photolysis of carbonyl cyanide and (2) UV photolysis of vinyl cyanide.
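    The reconstruction idea can be sketched on synthetic data: fit a smooth model to the temporal profile at each frequency channel, then rebuild every spectrum from the fitted values. Here a cubic polynomial stands in for the paper's "generic mathematical function", and the decaying profiles and noise level are illustrative:

```python
import numpy as np

# Sketch of spectral reconstruction: denoise along the time axis at each
# frequency, then reassemble the spectra from the fits.
rng = np.random.default_rng(0)
n_t, n_f = 60, 32
t = np.linspace(0.0, 1.0, n_t)
tau = np.linspace(0.2, 0.8, n_f)
truth = np.exp(-t[:, None] / tau[None, :])    # (time, frequency) decay profiles
noisy = truth + 0.2 * rng.standard_normal((n_t, n_f))

recon = np.empty_like(noisy)
for k in range(n_f):                           # fit each frequency channel in time
    coef = np.polyfit(t, noisy[:, k], deg=3)
    recon[:, k] = np.polyval(coef, t)

mse_raw = np.mean((noisy - truth) ** 2)
mse_rec = np.mean((recon - truth) ** 2)
print(mse_rec < mse_raw)                       # reconstruction reduces the error
```

Because each fitted value draws on all n time points in that channel, every reconstructed spectrum benefits, which is the qualitative source of the ~0.6√n SNR gain quoted in the abstract.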

  16. Creative use of pilot points to address site and regional scale heterogeneity in a variable-density model

    USGS Publications Warehouse

    Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.

    2010-01-01

    Pilot points for parameter estimation were used creatively to address heterogeneity at both the well-field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers: one set of inner pilot points (totaling 158) with high spatial density to represent hydraulic conductivity at the site, and a second set of outer points (totaling 36) of lower spatial density to represent hydraulic conductivity farther from the site. Using a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent in order to simulate proper boundary conditions that have a limited effect on the area of interest. The parameters associated with the inner pilot points were log-transformed hydraulic conductivity multipliers of the conductivity field obtained by interpolation from the outer pilot points. The use of this dual inner-outer scale parameterization (with inner parameters constituting multipliers for outer parameters) allowed a smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale, where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept at a minimum to enable reasonable calibration run times.
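    The dual-scale parameterization can be sketched numerically: interpolate the outer pilot points to the model grid for the regional conductivity field, interpolate the inner points as log-transformed multipliers, and multiply. Inverse-distance weighting stands in here for the geostatistical interpolation normally used with pilot points, and all coordinates and values are illustrative:

```python
import numpy as np

def idw(xy_pts, vals, xy_grid, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered values to grid points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps keeps exact hits finite
    return (w * vals).sum(axis=1) / w.sum(axis=1)

# Outer pilot points: sparse, carry the regional conductivity field (m/d).
outer_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
outer_k = np.array([100.0, 120.0, 80.0, 90.0])
# Inner pilot points: dense near the site, carry log10 multipliers.
inner_xy = np.array([[4.0, 4.0], [6.0, 6.0]])
inner_logmult = np.array([0.3, -0.2])

grid = np.array([[5.0, 5.0], [0.0, 0.0]])
k_outer = idw(outer_xy, outer_k, grid)              # regional field
mult = 10.0 ** idw(inner_xy, inner_logmult, grid)   # site-scale adjustment
k_final = k_outer * mult
print(np.round(k_outer[1], 1))   # IDW honors the outer point at (0, 0)
```

Because the inner parameters act multiplicatively on the interpolated outer field, the conductivity transitions smoothly from the densely parameterized site to the sparsely parameterized far field, which is the behavior the abstract describes.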

  17. Time-domain damping models in structural acoustics using digital filtering

    NASA Astrophysics Data System (ADS)

    Parret-Fréaud, Augustin; Cotté, Benjamin; Chaigne, Antoine

    2016-02-01

    This paper describes a new approach to formulating well-posed time-domain damping models able to represent various frequency-domain profiles of damping properties. The novelty of this approach is to represent the behavior law of a given material directly in a discrete-time framework as a digital filter, which is synthesized for each material from a discrete set of frequency-domain data (such as the complex modulus) through an optimization process. A key point is the addition of specific constraints to this process in order to guarantee stability, causality, and satisfaction of the second law of thermodynamics when transposing the resulting discrete-time behavior law into the time domain. Thus, this method offers a framework which is particularly suitable for time-domain simulations in structural dynamics and acoustics for a wide range of materials (polymers, wood, foam, etc.), allowing one to control and even reduce the distortion effects induced by time-discretization schemes on the frequency response of continuous-time behavior laws.

  18. Systematic identification of an integrative network module during senescence from time-series gene expression.

    PubMed

    Park, Chihyun; Yun, So Jeong; Ryu, Sung Jin; Lee, Soyoung; Lee, Young-Sam; Yoon, Youngmi; Park, Sang Chul

    2017-03-15

    Cellular senescence irreversibly arrests the growth of human diploid cells. In addition, recent studies have indicated that senescence is a multi-step, evolving process related to important complex biological processes. Most studies have analyzed only the genes and their functions representing each senescence phase, without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism, inferred from the affected genes and their interactions, underlying the senescence process. We suggest a novel computational approach to identify an integrative network which profiles an underlying genotypic signature from time-series gene expression data. The relatively perturbed genes were selected for each time point based on a proposed scoring measure, termed the perturbation score. The selected genes were then integrated with protein-protein interactions to construct time-point-specific networks. From these networks, the edges conserved across time points were extracted to form the common network, and a statistical test was performed to demonstrate that the network could explain the phenotypic alteration. As a result, it was confirmed that the difference in the average perturbation scores of the common network between the two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified a strong association with the phenotypic alteration. Remarkably, we observed that the identified cell-cycle-specific common network played an important role in replicative senescence as a key regulator. Heretofore, network analysis of time-series gene expression data has focused on how the topological structure changes over time. Conversely, we focused on the structure that is conserved while its context changes over time, and showed that it can explain the phenotypic changes. We expect that the proposed method will help to elucidate biological mechanisms left unrevealed by existing approaches.
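The common-network step, keeping only edges conserved across all time-point-specific networks, amounts to a set intersection over order-free edge pairs. The gene names and toy networks below are hypothetical, purely to illustrate the operation.

```python
# Extract the edges conserved across every time-point-specific network.
# Edges are stored as frozensets so (a, b) and (b, a) count as the same edge.
def common_network(networks):
    common = {frozenset(e) for e in networks[0]}
    for net in networks[1:]:
        common &= {frozenset(e) for e in net}
    return common

# Hypothetical time-point-specific networks (gene interaction edge lists).
net_t1 = [("CDK1", "CCNB1"), ("TP53", "MDM2"), ("RB1", "E2F1")]
net_t2 = [("CCNB1", "CDK1"), ("TP53", "MDM2"), ("CDK4", "CCND1")]
conserved = common_network([net_t1, net_t2])
```

The conserved structure is then the object whose changing context (here, the perturbation scores of its genes at each time point) is tested against the phenotypic alteration.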

  19. BRMS1 Suppresses Breast Cancer Metastasis to Bone via Its Regulation of microRNA-125b and Downstream Attenuation of TNF-Alpha and HER2 Signaling Pathways

    DTIC Science & Technology

    2014-04-01

    cytoskeleton genes and genes regulating focal adhesion assembly, such as a5 integrin, Tenascin C, Talin-1, Profilin 1, and Actinin [35]. Intravital ...and allowed to adhere for time indicated, at which point cells were fixed and stained with crystal violet. Representative images for times 5, 10, 15...matrix milieu and imaged by time-lapse microscopy for 1 h or fixed and stained with crystal violet at times indicated. As shown in Figure 1C and

  20. Anticipatory control of xenon in a pressurized water reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Impink, A.J. Jr.

    1987-02-10

    A method is described for automatically dampening xenon-135 spatial transients in the core of a pressurized water reactor having control rods which regulate reactor power level, comprising the steps of: measuring the neutron flux in the reactor core at a plurality of axially spaced locations on a real-time, on-line basis; repetitively generating from the neutron flux measurements, on a point-by-point basis, signals representative of the current axial distribution of xenon-135, and signals representative of the current rate of change of the axial distribution of xenon-135; generating from the xenon-135 distribution signals and the rate of change of xenon distribution signals, control signals for reducing the xenon transients; and positioning the control rods as a function of the control signals to dampen the xenon-135 spatial transients.

  1. Calibration of a subcutaneous amperometric glucose sensor implanted for 7 days in diabetic patients. Part 2. Superiority of the one-point calibration method.

    PubMed

    Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K

    2002-08-01

    Calibration, i.e. the transformation in real time of the signal I(t) generated by the glucose sensor at time t into an estimate of the glucose concentration G(t), represents a key issue for the development of a continuous glucose monitoring system. The objective was to compare two calibration procedures. In the one-point calibration, which assumes that the background current I(o) is negligible, S is simply determined as the ratio I/G, and G(t) = I(t)/S. The two-point calibration consists of determining a sensor sensitivity S and a background current I(o) by plotting two values of the sensor signal versus the concomitant blood glucose concentrations. The subsequent estimate of G(t) is given by G(t) = (I(t)-I(o))/S. A glucose sensor was implanted in the abdominal subcutaneous tissue of nine type 1 diabetic patients for 3 (n = 2) or 7 days (n = 7). The one-point calibration was performed a posteriori either once per day before breakfast, twice per day before breakfast and dinner, or three times per day before each meal. The two-point calibration was performed each morning during breakfast. The percentages of points falling in zones A and B of the Clarke Error Grid were significantly higher when the system was calibrated using the one-point calibration. Use of two one-point calibrations per day before meals was virtually as accurate as three one-point calibrations. This study demonstrates the feasibility of a simple method for calibrating a continuous glucose monitoring system.
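The two calibration rules quoted above translate directly into a few lines of code. The function and variable names are illustrative (I is the sensor current, G the blood glucose concentration); this is a sketch of the formulas, not the authors' software.

```python
def one_point_calibration(I_cal, G_cal):
    # Assumes the background current I(o) is negligible:
    # S = I/G, and thereafter G(t) = I(t)/S.
    S = I_cal / G_cal
    return lambda I_t: I_t / S

def two_point_calibration(I1, G1, I2, G2):
    # Solves for sensitivity S and background current I(o) from two
    # (signal, glucose) pairs: G(t) = (I(t) - I(o)) / S.
    S = (I2 - I1) / (G2 - G1)
    I_o = I1 - S * G1
    return lambda I_t: (I_t - I_o) / S
```

For a sensor with zero background current both rules coincide; when the true background is nonzero, the one-point estimate absorbs it into S, which is exactly the trade-off the study evaluates clinically.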

  2. The Adult Life Cycle: Exploration and Implications.

    ERIC Educational Resources Information Center

    Baile, Susan

    Most of the frameworks that have been constructed to mark off the changes in the cycle of adulthood are characterized by a particular focus such as developmental ages, the role of age and timing, or ego development. The theory of Erik Erikson, based upon his clinical observations, represents these crucial turning points in human development: ages…

  3. Real-Time ambient carbon monoxide and ultrafine particle concentration mapping in a near-road environment

    EPA Science Inventory

    Ambient air quality has traditionally been monitored using a network of fixed point sampling sites that are strategically placed to represent regional (e.g., county or town) rather than local (e.g., neighborhood) air quality trends. This type of monitoring data has been used to m...

  4. Multi-Level Information Systems. AIR Forum Paper 1978.

    ERIC Educational Resources Information Center

    Jones, Leighton D.; Trautman, DeForest L.

    To support informational needs of day-to-day and long-range decision-making, many universities have developed their own data collection devices and institutional reporting systems. Often these models only represent a single point in time and do not effectively support needs at college and departmental levels. This paper identifies some of the more…

  5. The "One Voice" Project: A Case of Complexity in Community-Driven Education Reform

    ERIC Educational Resources Information Center

    Davis, Natalie R.; Monroe, Xavier J.; Drake, Thomas M.

    2018-01-01

    This case represents an effort to connect academic learning with educational reform in real time. It describes a district initiative to meaningfully engage a skeptical community and the subsequent attempts of university-based researchers to provide further entry points into the perspectives and concerns of educational partners. What is compelling…

  6. Impact of a personalised active labour market programme for persons with disabilities.

    PubMed

    Adamecz-Völgyi, Anna; Lévay, Petra Zsuzsa; Bördős, Katalin; Scharle, Ágota

    2018-02-01

    The paper estimates the impact of a supported employment programme implemented in Hungary. This is a non-experimental evaluation using a matching identification strategy supported by rich data on individual characteristics, personal employment and unemployment history and the local labour market situation. We use a time-window approach to ensure that programme participants and matched controls entered unemployment at the same point in time, and thus faced very similar labour market conditions. We find that the programme had a positive effect of 16 percentage points on the probability of finding a job among men and 25 percentage points among women. The alternative outcome indicator of not re-entering the unemployment registry shows somewhat smaller effects in the case of women. In comparison to similarly costly programmes that do not facilitate employment in the primary labour market, rehabilitation services represent a viable alternative.

  7. Voters' Fickleness: A Mathematical Model

    NASA Astrophysics Data System (ADS)

    Boccara, Nino

    This paper presents a spatial agent-based model in order to study the evolution of voters' choice during the campaign of a two-candidate election. Each agent, represented by a point inside a two-dimensional square, is under the influence of its neighboring agents, located at a Euclidean distance less than or equal to d, and under the equal influence of both candidates seeking to win its support. Moreover, each agent located at time t at a given point moves at the next timestep to a randomly selected neighboring location distributed normally around its position at time t. Besides their location in space, agents are characterized by their level of awareness, a real a ∈ [0, 1], and their opinion ω ∈ {-1, 0, +1}, where -1 and +1 represent the respective intentions to cast a ballot in favor of one of the two candidates while 0 indicates either disinterest or refusal to vote. The essential purpose of the paper is qualitative; its aim is to show that voters' fickleness is strongly correlated to the level of voters' awareness and the efficiency of candidates' propaganda.
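For concreteness, one discrete-time step of the model described above might be sketched as follows. The abstract specifies the state variables (position, awareness a, opinion ω) but not the exact influence rule, so the rule used here (an agent adopts the local majority opinion with probability 1 − a) is purely an illustrative assumption, and the candidates' propaganda terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, sigma = 200, 0.1, 0.02   # agents, neighborhood radius, move spread

pos = rng.random((n, 2))                    # points in the unit square
awareness = rng.random(n)                   # a in [0, 1]
opinion = rng.choice([-1, 0, 1], size=n)    # w in {-1, 0, +1}

def step(pos, opinion, awareness):
    # Each agent moves to a normally distributed neighboring location,
    # clipped to the unit square ...
    new_pos = np.clip(pos + rng.normal(0.0, sigma, pos.shape), 0.0, 1.0)
    new_op = opinion.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist <= d) & (dist > 0)     # Euclidean neighbors within d
        # ... and the less aware an agent is, the more likely it is to
        # follow the majority opinion of its neighbors (assumed rule).
        if nbrs.any() and rng.random() > awareness[i]:
            vals, counts = np.unique(opinion[nbrs], return_counts=True)
            new_op[i] = vals[counts.argmax()]
    return new_pos, new_op

pos, opinion = step(pos, opinion, awareness)
```

Iterating this step over a campaign and recording opinion switches per agent would give the "fickleness" statistic the paper correlates with awareness.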

  8. Coronae on stars

    NASA Technical Reports Server (NTRS)

    Haisch, B. M.

    1986-01-01

    Three lines of evidence are noted to point to a flare heating source for stellar coronae: a strong correlation between time-averaged flare energy release and coronal X-ray luminosity, the high temperature flare-like component of the spectral signature of coronal X-ray emission, and the observed short time scale variability that indicates continuous flare activity. It is presently suggested that flares may represent only the extreme high energy tail of a continuous distribution of coronal energy release events.

  9. Mapping extent and change in surface mines within the United States for 2001 to 2006

    USGS Publications Warehouse

    Soulard, Christopher E.; Acevedo, William; Stehman, Stephen V.; Parker, Owen P.

    2016-01-01

    A complete, spatially explicit dataset illustrating the 21st century mining footprint for the conterminous United States does not exist. To address this need, we developed a semi-automated procedure to map the country's mining footprint (30-m pixel) and establish a baseline to monitor changes in mine extent over time. The process uses mine seed points derived from the U.S. Energy Information Administration (EIA), U.S. Geological Survey (USGS) Mineral Resources Data System (MRDS), and USGS National Land Cover Dataset (NLCD) and recodes patches of barren land that meet a "distance to seed" requirement and a patch area requirement before mapping a pixel as mining. Seed points derived from EIA coal points, an edited MRDS point file, and 1992 NLCD mine points were used in three separate efforts using different distance and patch area parameters for each. The three products were then merged to create a 2001 map of moderate-to-large mines in the United States, which was subsequently manually edited to reduce omission and commission errors. This process was replicated using NLCD 2006 barren pixels as a base layer to create a 2006 mine map and a 2001–2006 mine change map focusing on areas with surface mine expansion. In 2001, 8,324 km2 of surface mines were mapped. The footprint increased to 9,181 km2 in 2006, representing a 10.3% increase over 5 years. These methods exhibit merit as a timely approach to generate wall-to-wall, spatially explicit maps representing the recent extent of a wide range of surface mining activities across the country.

  10. Effects of Above Real Time Training (ARTT) On Individual Skills and Contributions to Crew/Team Performance

    NASA Technical Reports Server (NTRS)

    Ali, Syed Firasat; Khan, M. Javed; Rossi, Marcia J.; Crane, Peter; Guckenberger, Dutch; Bageon, Kellye

    2001-01-01

    Above Real Time Training (ARTT) is training acquired on a real-time simulator modified to present events at a faster pace than normal. Experiments on the training of pilots performed by NASA engineers and others have indicated that real time training (RTT) reinforced with ARTT offers an effective training strategy for tasks that require significant time and workload management. A study was conducted to determine how ARTT and RTT complement each other in training novice pilot-navigator teams to fly a required route. In the experiment, each participating pilot-navigator team was required to conduct simulator flights on a prescribed two-legged ground track while maintaining required air speed and altitude. At any instant in a flight, the distance between the actual spatial location of the airplane and the required spatial point was used as a measure of deviation from the required route; a smaller deviation represented better performance. Over a segment of flight or over a complete flight, the average deviation represented consolidated performance. The deviations were computed from information on latitude, longitude, and altitude. In the combined ARTT and RTT program, ARTT at intermediate training intervals was beneficial in improving the real-time performance of the trainees. It was observed that the team interaction between pilot and navigator helped maintain high motivation and active participation throughout the training program.

  11. Temporal Variability of Microplastic Concentrations in Freshwater Streams

    NASA Astrophysics Data System (ADS)

    Watkins, L.; Walter, M. T.

    2016-12-01

    Plastic pollution, specifically the size fraction less than 5 mm known as microplastics, is an emerging contaminant in waterways worldwide. The ability of microplastics to adsorb and transport contaminants and microbes, as well as to be ingested by organisms, makes them a concern in both freshwater and marine ecosystems. Recent efforts to determine the extent of microplastic pollution are increasingly focused on freshwater systems, but most studies have reported concentrations at a single time point; few have begun to uncover how plastic concentrations in riverine systems change through time. We hypothesize that the time of day and season of sampling influence the concentrations of microplastics in water samples and, more specifically, that daytime stormflow samples contain the highest microplastic concentrations due to maximized runoff and wastewater discharge. To test this hypothesis, we sampled two similar streams in Ithaca, New York using a 333 µm mesh net deployed within the thalweg. Repeat samples were collected to identify diurnal patterns as well as monthly variation. Samples were processed in the laboratory following the NOAA wet peroxide oxidation protocol. This work improves our ability to interpret existing single-time-point survey results by providing information on how microplastic concentrations change over time and whether concentrations in existing stream studies are likely representative of their location. Additionally, these results will inform future studies by providing insight into representative sample timing and by capturing temporal trends for the purposes of modeling and of developing regulations for microplastic pollution.

  12. Real-time global illumination on mobile device

    NASA Astrophysics Data System (ADS)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC due to the limited computing resources of mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.

  13. Universal statistics of terminal dynamics before collapse

    NASA Astrophysics Data System (ADS)

    Lenner, Nicolas; Eule, Stephan; Wolf, Fred

    Recent developments in biology have drastically increased both the precision and the amount of generated data, allowing a switch from pure mean-value characterization of the process under consideration to an analysis of the whole ensemble, exploiting the stochastic nature of biology. We focus on the general class of non-equilibrium processes with distinguished terminal points, as found in cell-fate decisions, checkpoints, or cognitive neuroscience. Aligning the data to a terminal point (e.g., represented as an absorbing boundary) allows us to devise a general methodology to characterize and reverse-engineer the terminating history. Using a small-noise approximation, we derive the mean, variance, and covariance of the aligned data for general finite-time singularities.

  14. Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.

    PubMed

    Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko

    2008-01-01

    To evaluate visually induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects viewed a 2-min-long first-person-view video section five times (10 min in total) continuously. Measured biosignals (the RR interval, respiration, and blood pressure) were used to estimate indices related to autonomic nervous activity (ANA). We then determined the trigger points and some sensation sections based on the time-varying behavior of the ANA-related indices. We found that there was a suitable combination of biosignals to present the symptoms of visually induced motion sickness. Based on this combination, integrating trigger points and subjective scores allowed us to represent the time distribution of subjective responses during visual exposure, and helped us to understand which types of camera motion cause visually induced motion sickness.

  15. Open Solar Physics Questions - What Can Orbiter Do That Could Not Be Addressed By Existing Missions?

    NASA Technical Reports Server (NTRS)

    Antiochos, S. K.

    2009-01-01

    Solar Orbiter represents a revolutionary advance in observing the Sun. Orbiter will carry optical and XUV telescopes that will deliver high-resolution images and spectra from vantage points that have never been possible before, close to the Sun and at high latitudes. At the same time, Orbiter will measure in situ the properties of the solar wind that originates from the observed solar photosphere and corona. In this presentation, I will describe how, with its unique vantage points and capabilities, Orbiter will allow us to answer, for the first time, some of the major questions in solar physics, such as: Where does the slow wind originate? How do CMEs initiate and evolve? What is the heating mechanism in coronal loops?

  16. Spatial transformation abilities and their relation to later mathematics performance.

    PubMed

    Frick, Andrea

    2018-04-10

    Using a longitudinal approach, this study investigated the relational structure of different spatial transformation skills at kindergarten age, and how these spatial skills relate to children's later mathematics performance. Children were tested at three time points, in kindergarten, first grade, and second grade (N = 119). Exploratory factor analyses revealed two subcomponents of spatial transformation skills: one representing egocentric transformations (mental rotation and spatial scaling), and one representing allocentric transformations (e.g., cross-sectioning, perspective taking). Structural equation modeling suggested that egocentric transformation skills showed their strongest relation to the part of the mathematics test tapping arithmetic operations, whereas allocentric transformations were strongly related to Numeric-Logical and Spatial Functions as well as geometry. The present findings point to a tight connection between early mental transformation skills, particularly the ones requiring a high level of spatial flexibility and a strong sense for spatial magnitudes, and children's mathematics performance at the beginning of their school career.

  17. Associations between Language and Problem Behavior: A Systematic Review and Correlational Meta-Analysis

    ERIC Educational Resources Information Center

    Chow, Jason C.; Wehby, Joseph H.

    2018-01-01

    A growing body of evidence points to the common co-occurrence of language and behavioral difficulties in children. Primary studies often focus on this relation in children with identified deficits. However, it is unknown whether this relation holds across other children at risk or representative samples of children or over time. The purpose of…

  18. More "Private" than Private Institutions: Public Institutions of Higher Education and Financial Management

    ERIC Educational Resources Information Center

    Adams, Olin L., III; Robichaux, Rebecca R.; Guarino, A. J.

    2010-01-01

    This research compares the status of managerial accounting practices in public four-year colleges and universities and in private four-year colleges and universities. The investigators surveyed a national sample of chief financial officers (CFOs) at two points in time, 1998-99 and 2003-04. In 1998-99 CFOs representing private institutions reported…

  19. NLS Handbook, 2005. National Longitudinal Surveys

    ERIC Educational Resources Information Center

    Bureau of Labor Statistics, 2006

    2006-01-01

    The National Longitudinal Surveys (NLS), sponsored by the U.S. Bureau of Labor Statistics (BLS), are a set of surveys designed to gather information at multiple points in time on the labor market experiences of groups of men and women. Each of the cohorts has been selected to represent all people living in the United States at the initial…

  20. An Examination of the Pathways of Depressive Symptoms and Heavy Drinking from Adolescence to Adulthood

    ERIC Educational Resources Information Center

    Gustafson, Emily

    2011-01-01

    This study examined the dynamic interaction of heavy alcohol use and depressive symptoms at three points over a time period of 11 years from adolescence to adulthood using a subset of data from the nationally representative, multi-year, longitudinal data source, the National Longitudinal Study of Adolescent Health (Add Health). Results revealed…

  1. 40 CFR 463.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... for the finishing water processes at a point source times the following pollutant concentrations. Subpart C (finishing water), concentration used to calculate BPT effluent limitations, maximum for any 1 day / maximum for monthly average (mg/l): TSS 130 / 37; pH (1) / (1). 1...

  2. 40 CFR 463.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... finishing water processes at a point source times the following pollutant concentrations. Subpart C (finishing water), concentration used to calculate BPT effluent limitations, maximum for any 1 day / maximum for monthly average (mg/l): TSS 130 / 37; pH (1) / (1). (1) Within the range...

  3. 40 CFR 463.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... for the finishing water processes at a point source times the following pollutant concentrations. Subpart C (finishing water), concentration used to calculate BPT effluent limitations, maximum for any 1 day / maximum for monthly average (mg/l): TSS 130 / 37; pH (1) / (1). 1...

  4. 40 CFR 463.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... for the finishing water processes at a point source times the following pollutant concentrations. Subpart C (finishing water), concentration used to calculate BPT effluent limitations, maximum for any 1 day / maximum for monthly average (mg/l): TSS 130 / 37; pH (1) / (1). 1...

  5. Trends in Adolescent Emotional Problems in England: A Comparison of Two National Cohorts Twenty Years Apart

    ERIC Educational Resources Information Center

    Collishaw, Stephan; Maughan, Barbara; Natarajan, Lucy; Pickles, Andrew

    2010-01-01

    Background: Evidence about trends in adolescent emotional problems (depression and anxiety) is inconclusive, because few studies have used comparable measures and samples at different points in time. We compared rates of adolescent emotional problems in two nationally representative English samples of youth 20 years apart using identical symptom…

  6. Laser acupuncture causes thermal changes in small intestine meridian pathway.

    PubMed

    de Souza, Regina Célia; Pansini, Mario; Arruda, Gisele; Valente, Caroline; Brioschi, Marcos Leal

    2016-11-01

    The acupuncture meridians represent the flow of corporal energy and contain the acupuncture points. Laser acupuncture is a form of acupuncture stimulation using a laser. Thermographic images represent the propagation of heat in micro-environmental systems. The objective of this study was to investigate the use of thermographic images to document changes in the small intestine meridian (S.I.M.) when it is submitted to laser acupuncture. Another important question concerns whether the flow direction is upward when stimulated at acupuncture points. For this work, a laser acupuncture pen was used on points of the S.I.M. Two healthy male volunteers were selected (18 and 60 years old, respectively), and doses of 576.92 J/cm2 were applied with low-power infrared laser equipment with a wavelength of 780 nm at the SI.3 and SI.19 points. An infrared thermal camera was used to measure the temperature of the S.I.M. during the 6 min laser acupuncture pen stimulus. When the laser acupuncture was conducted at the SI.3 point in both volunteers, it produced hyper-radiation of the hemiface on the same side, far from the application site. When it was applied at the SI.19 point, hyper-radiation at the same point and a temperature lowering at the end of the meridian were observed. The laser energy caused thermal changes along the path of the S.I.M., distal and proximal at the same time, supporting the existence of the S.I.M.

  7. BRMS1 Suppresses Breast Cancer Metastasis to Bone via Its Regulation of MicroRNA-125b and Downstream Attenuation of TNF-alpha and HER2 Signaling Pathways

    DTIC Science & Technology

    2013-10-01

    genes regulating focal adhesion assembly, such as a5 integrin, Tenascin C, Talin-1, Profilin 1, and Actinin [35]. Intravital microscopy had shown...adhere for time indicated, at which point cells were fixed and stained with crystal violet. Representative images for times 5, 10, 15, 30, and 60 min... imaged by time-lapse microscopy for 1 h or fixed and stained with crystal violet at times indicated. As shown in Figure 1C and quantified in Figure 1D

  8. Extremal values of the sojourn time

    NASA Astrophysics Data System (ADS)

    Astaburuaga, M. A.; Cortés, V. H.; Duclos, P.

    2010-11-01

    Consider a self-adjoint operator H on a separable Hilbert space ℋ with a non-trivial absolutely continuous component. We study the general properties of the real-valued functional τ_H(ψ) = ∫_ℝ |(e^{-itH}ψ, ψ)|² dt, which in quantum mechanics represents the sojourn time (or lifetime) of an initial state ψ ∈ ℋ. We characterize the critical points of the sojourn time τ_X of the operator of multiplication by x in L²(ℝ), and prove that it attains a global maximum on the unit sphere of the Sobolev space W^{1,2}(ℝ).

  9. Solution of the advection-dispersion equation by a finite-volume eulerian-lagrangian local adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1992-01-01

    A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.
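The forward-tracking idea at the heart of the method can be illustrated for pure 1-D advection on a periodic domain: integration points placed within each cell carry their mass from one time level to the next. This is only a sketch of that one ingredient (no dispersion, uniform velocity, periodic boundaries are all simplifying assumptions, not the authors' full scheme).

```python
import numpy as np

nc, ppc = 100, 4                  # grid cells, integration points per cell
dx = 1.0 / nc
v, dt = 0.3, 0.5                  # uniform velocity, one large time step

x_cells = (np.arange(nc) + 0.5) * dx
c = np.exp(-((x_cells - 0.3) / 0.05) ** 2)   # initial concentration pulse

# Place integration points equally spaced within each cell, each carrying
# an equal share of that cell's mass ...
offsets = (np.arange(ppc) + 0.5) / ppc * dx
x_pts = (np.arange(nc)[:, None] * dx + offsets).ravel()
mass = np.repeat(c, ppc) * dx / ppc

# ... forward track the points up to the next time level (periodic wrap)
# and redeposit their mass into the cells they land in.
x_new = (x_pts + v * dt) % 1.0
c_new = np.zeros(nc)
np.add.at(c_new, (x_new / dx).astype(int) % nc, mass)
c_new /= dx
```

Because mass is attached to the tracked points rather than interpolated, the update is mass conservative by construction even for a Courant number well above one (here v·dt/dx = 15).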

  10. Explaining Leibniz equivalence as difference of non-inertial appearances: Dis-solution of the Hole Argument and physical individuation of point-events

    NASA Astrophysics Data System (ADS)

    Lusanna, Luca; Pauri, Massimo

    "The last remnant of physical objectivity of space-time" is disclosed in the case of a continuous family of spatially non-compact models of general relativity (GR). The physical individuation of point-events is furnished by the autonomous degrees of freedom of the gravitational field (viz., the Dirac observables) which represent-as it were-the ontic part of the metric field. The physical role of the epistemic part (viz. the gauge variables) is likewise clarified as embodying the unavoidable non-inertial aspects of GR. At the end the philosophical import of the Hole Argument is substantially weakened and in fact the Argument itself dissolved, while a specific four-dimensional holistic and structuralist view of space-time (called point-structuralism) emerges, including elements common to the tradition of both substantivalism and relationism. The observables of our models undergo real temporal change: this gives new evidence to the fact that statements like the frozen-time character of evolution, as other ontological claims about GR, are model dependent.

  11. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e., structure from motion) allow for fast and flexible high-resolution data supply. Within geoscience applications, and especially for small-scale surface topography, high-resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing current states as well as for multi-temporal studies. However, there are still limitations regarding robust registration and accuracy demands (e.g., systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Post-processing of 3D point clouds can therefore greatly enhance data quality. Here the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes the distances between corresponding points in two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric, and minimization. To this end, an agriculturally used field was surveyed twice, simultaneously with terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors (once covered with sparse vegetation and once as bare soil). Because of the different perspectives, the two data sets differ in their shadowed areas and thus gaps, so that merging them would yield a more consistent surface reconstruction. Although the photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. 
Statistical analyses of the results show that the final success of the registration, and therefore data quality, depends particularly on the parameterization and the choice of error metric, especially for error-prone data sets such as the one with sparse vegetation cover. Here the point-to-point metric is more sensitive to data noise than the point-to-plane metric, resulting in considerably higher cloud-to-cloud distances. In conclusion, given the accuracy demands of high-resolution surface reconstruction, and because ground control surveys can reach their limits in both time expenditure and terrain accessibility, the ICP algorithm is a valuable tool for refining a rough initial alignment. The different variants of its registration modules allow the application to be tailored to the quality of the input data.
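    The role of the error metric can be illustrated with a minimal sketch (hypothetical synthetic clouds, not the study's data): for a moving cloud offset vertically from a planar reference, the point-to-plane residual isolates the true offset, while the point-to-point residual also absorbs in-plane sampling differences.

```python
import numpy as np

# Sketch of one ICP error-metric evaluation (illustrative; real pipelines add
# point selection, weighting and rejection stages before minimization).
rng = np.random.default_rng(0)
ref = rng.uniform(0, 1, (200, 3)); ref[:, 2] = 0.0        # reference cloud on z = 0 plane
ref_normals = np.tile([0.0, 0.0, 1.0], (200, 1))          # plane normals
mov = rng.uniform(0, 1, (50, 3)); mov[:, 2] = 0.05        # moving cloud, 5 cm z-offset

# point matching: brute-force nearest neighbour in the reference cloud
d2 = ((mov[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
nn = d2.argmin(axis=1)

diff = mov - ref[nn]
e_point = (diff ** 2).sum(-1).mean()                       # point-to-point metric
e_plane = ((diff * ref_normals[nn]).sum(-1) ** 2).mean()   # point-to-plane metric

# The point-to-plane error isolates the true 5 cm vertical offset, while the
# point-to-point error also picks up in-plane sampling "noise".
print(e_plane, e_point)
```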

  12. Application of spatial time domain reflectometry measurements in heterogeneous, rocky substrates

    NASA Astrophysics Data System (ADS)

    Gonzales, C.; Scheuermann, A.; Arnold, S.; Baumgartl, T.

    2016-10-01

    Measurement of soil moisture across depths using sensors is currently limited to point measurements or remote sensing technologies. Point measurements have limitations on spatial resolution, while the latter, although covering large areas, may not represent real-time hydrologic processes, especially near the surface. The objective of the study was to determine the efficacy of elongated soil moisture probes—spatial time domain reflectometry (STDR)—and to describe transient soil moisture dynamics of unconsolidated mine waste rock materials. The probes were calibrated under controlled conditions in the glasshouse. Transient soil moisture content was measured using the gravimetric method and STDR. Volumetric soil moisture content derived from weighing was compared with values generated from a numerical model simulating the drying process. A calibration function was generated and applied to STDR field data sets. The use of elongated probes effectively assists in the real-time determination of the spatial distribution of soil moisture. It also allows hydrologic processes to be uncovered in the unsaturated zone, especially for water balance calculations that are commonly based on point measurements. The elongated soil moisture probes can potentially describe transient substrate processes and delineate heterogeneity in terms of the pore size distribution in a seasonally wet but otherwise arid environment.
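    The calibration step described above can be sketched as a simple least-squares fit (hypothetical probe readings and gravimetric reference values, not the study's data): a polynomial maps raw sensor output to volumetric water content and is then applied to field readings.

```python
import numpy as np

# Sketch of deriving and applying a probe calibration function.
raw = np.array([0.10, 0.18, 0.27, 0.35, 0.46, 0.58])      # raw probe readings
vwc = np.array([0.05, 0.10, 0.15, 0.20, 0.27, 0.35])      # gravimetric reference (m3/m3)

coeffs = np.polyfit(raw, vwc, deg=2)                      # quadratic calibration function
calibrate = np.poly1d(coeffs)

field_raw = np.array([0.22, 0.40, 0.51])                  # readings from the field
print(calibrate(field_raw))                               # calibrated VWC values
```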

  13. HIV-1 infections with multiple founders are associated with higher viral loads than infections with single founders.

    PubMed

    Janes, Holly; Herbeck, Joshua T; Tovanabutra, Sodsai; Thomas, Rasmi; Frahm, Nicole; Duerr, Ann; Hural, John; Corey, Lawrence; Self, Steve G; Buchbinder, Susan P; McElrath, M Juliana; O'Connell, Robert J; Paris, Robert M; Rerks-Ngarm, Supachai; Nitayaphan, Sorachai; Pitisuttihum, Punnee; Kaewkungwal, Jaranit; Robb, Merlin L; Michael, Nelson L; Mullins, James I; Kim, Jerome H; Gilbert, Peter B; Rolland, Morgane

    2015-10-01

    Given the variation in the HIV-1 viral load (VL) set point across subjects, as opposed to a fairly stable VL over time within an infected individual, it is important to identify the characteristics of the host and virus that affect the VL set point. Although recently infected individuals with multiple phylogenetically linked HIV-1 founder variants represent a minority of HIV-1 infections, we found, in two different cohorts, that more diverse HIV-1 populations in early infection were associated with significantly higher VL 1 year after HIV-1 diagnosis.

  14. MSL: A Measure to Evaluate Three-dimensional Patterns in Gene Expression Data

    PubMed Central

    Gutiérrez-Avilés, David; Rubio-Escudero, Cristina

    2015-01-01

    Microarray technology is widely used in biological research environments due to its ability to monitor RNA concentration levels. The analysis of the data generated represents a computational challenge due to the characteristics of these data. Clustering techniques are widely applied to create groups of genes that exhibit a similar behavior. Biclustering relaxes the constraints for grouping, allowing genes to be evaluated only under a subset of the conditions. Triclustering emerged for the analysis of longitudinal experiments in which the genes are evaluated under certain conditions at several time points. These triclusters provide hidden information in the form of behavior patterns from temporal experiments with microarrays, relating subsets of genes, experimental conditions, and time points. We present an evaluation measure for triclusters called the Multi Slope Measure, based on the similarity among the angles of the slopes of each profile formed by the genes, conditions, and time points of the tricluster. PMID:26124630

  15. Unidirectional invisibility induced by parity-time symmetric circuit

    NASA Astrophysics Data System (ADS)

    Lv, Bo; Fu, Jiahui; Wu, Bian; Li, Rujiang; Zeng, Qingsheng; Yin, Xinhua; Wu, Qun; Gao, Lei; Chen, Wan; Wang, Zhefei; Liang, Zhiming; Li, Ao; Ma, Ruyu

    2017-01-01

    Parity-time (PT) symmetric structures exhibit unidirectional invisibility at the spontaneous PT-symmetry breaking point. In this paper, we propose a PT-symmetric circuit consisting of a resistor and a microwave tunnel diode (TD), which represent attenuation and amplification, respectively. Based on the scattering matrix method, the circuit can exhibit ideal unidirectional performance at the spontaneous PT-symmetry breaking point by tuning the transmission lines between the lumped elements. Additionally, the resistance of the reactance component can flexibly alter the bandwidth of the unidirectional invisibility. Furthermore, the electromagnetic simulation of the proposed circuit validates the unidirectional invisibility and its synchronization with the input energy. Our work not only provides a unidirectionally invisible circuit based on PT symmetry, but also proposes a potential solution for highly selective filter or cloaking applications.

  16. Eating Problems and Their Risk Factors: A 7-Year Longitudinal Study of a Population Sample of Norwegian Adolescent Girls

    ERIC Educational Resources Information Center

    Kansi, Juliska; Wichstrom, Lars; Bergman, Lars R.

    2005-01-01

    The longitudinal stability of eating problems and their relationships to risk factors were investigated in a representative population sample of 623 Norwegian girls aged 13-14 followed over 7 years (3 time points). Three eating problem symptoms were measured: Restriction, Bulimia-food preoccupation, and Diet, all taken from the 12-item Eating…

  17. A Longitudinal Examination of 10-Year Change in Vocational and Educational Activities for Adults with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Taylor, Julie Lounds; Mailick, Marsha R.

    2014-01-01

    The transition from adolescence to adulthood has been shown to be a time of amplified risk for individuals with autism spectrum disorders (ASD). It is unknown, however, whether problems in educational attainment and employment in the years after high school exit represent momentary perturbations in development or a turning point with long-lasting…

  18. Validity of body mass index as a measurement of adiposity in infancy

    USDA-ARS?s Scientific Manuscript database

    To assess the validity of body mass index (BMI) and age- and sex-standardized BMI z-score (BMIZ) as surrogates for adiposity (body fat percentage [BF%], fat mass, and fat mass index [kg/m2]) at 3 time points in infancy (1, 4, and 7 months) and to assess the extent to which the change in BMIZ represe...

  19. Evaluation of the AFWA WRF 4-KM Moving Nest Model Predictions for Western North Pacific Tropical Cyclones

    DTIC Science & Technology

    2006-03-01

    [Contents and figure-caption fragments] Infrared satellite image of Tropical Storm Mawar (center) and the seedling convection of what would become Typhoon Mawar (from the Digital Typhoon website). The red triangular points represent the period covered by the two 72-h ARW integrations. The large red dot indicates the ending time of…

  20. Gender Differences in the Development of Dieting from Adolescence to Early Adulthood: A Longitudinal Study

    ERIC Educational Resources Information Center

    von Soest, Tilmann; Wichstrom, Lars

    2009-01-01

    This study examines gender differences in the development of dieting among a representative sample of 1,368 Norwegian boys and girls. The respondents were followed over 3 time points from ages 13/14 to 20/21. Latent growth curve analyses were conducted showing that girls' dieting scores increased while boys' scores remained constant. Gender…

  1. Fast generation of computer-generated holograms using wavelet shrinkage.

    PubMed

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficients so that the approximated complex amplitudes are expressed using only a few representative wavelet coefficients.
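    The shrinkage step can be sketched with a one-level Haar transform (an illustrative stand-in for the authors' wavelet machinery; the chirp-like test field and the quantile threshold are our choices): small detail coefficients are zeroed and the field is reconstructed from the few that remain.

```python
import numpy as np

# Sketch of wavelet shrinkage: one-level Haar transform of a complex field,
# discard small detail coefficients, then invert the transform.
def haar_shrink(signal, keep_fraction=0.1):
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)    # approximation coefficients
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)    # detail coefficients
    thresh = np.quantile(np.abs(d), 1 - keep_fraction)
    d = np.where(np.abs(d) >= thresh, d, 0)           # shrink small details to zero
    out = np.empty_like(signal)
    out[0::2] = (a + d) / np.sqrt(2)                  # inverse Haar step
    out[1::2] = (a - d) / np.sqrt(2)
    return out

x = np.linspace(0, 1, 256)
field = np.exp(1j * 40 * np.pi * x**2)                # chirp-like complex amplitude
approx = haar_shrink(field)
print(np.abs(field - approx).max())                   # approximation error
```

With `keep_fraction=1.0` the reconstruction is exact, so the error above is purely the price of the shrinkage.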

  2. 40 CFR 463.17 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... contact cooling and heating water processes at a point source times the following pollutant concentrations. Subpart A (contact cooling and heating water), concentration used to calculate BCT effluent limitations, maximum for any 1 day (mg/l): BOD5, 26; oil and grease, 29; TSS, 19; pH, (1)…

  3. 40 CFR 463.17 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... contact cooling and heating water processes at a point source times the following pollutant concentrations. Subpart A (contact cooling and heating water), concentration used to calculate BCT effluent limitations, maximum for any 1 day (mg/l): BOD5, 26; oil and grease, 29; TSS, 19; pH, (1)…

  4. 40 CFR 463.17 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... contact cooling and heating water processes at a point source times the following pollutant concentrations. Subpart A (contact cooling and heating water), concentration used to calculate BCT effluent limitations, maximum for any 1 day (mg/l): BOD5, 26; oil and grease, 29; TSS, 19; pH, (1)…

  5. 40 CFR 463.17 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... cooling and heating water processes at a point source times the following pollutant concentrations. Subpart A (contact cooling and heating water), concentration used to calculate BCT effluent limitations, maximum for any 1 day (mg/l): BOD5, 26; oil and grease, 29; TSS, 19; pH, (1)…

  6. Parent-Youth Closeness and Youth's Suicidal Ideation: The Moderating Effects of Gender, Stages of Adolescence, and Race or Ethnicity

    ERIC Educational Resources Information Center

    Liu, Ruth X.

    2005-01-01

    Data from a nationally representative sample of adolescents studied at two points in time are used to examine gender-specific influence of parent-youth closeness on youth's suicidal ideation and its variations by stages of adolescence and race or ethnicity. Logistic regression analyses yielded interesting findings: (a) Closeness with fathers…

  7. Break point on the auto-correlation function of Elsässer variable z- in the super-Alfvénic solar wind fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, X.; Tu, C. Y.; He, J.; Wang, L.

    2017-12-01

    The nature of the Elsässer variable z- observed in the Alfvénic solar wind has been a longstanding debate. It is widely believed that z- represents inward-propagating Alfvén waves and undergoes non-linear interaction with z+ to produce the energy cascade. However, z- variations sometimes show the nature of convective structures. Here we present a new analysis of z- autocorrelation functions to obtain some definite information on its nature. We find that there is usually a break point in the z- autocorrelation function when the fluctuations show nearly pure Alfvénicity. The break point observed by the Helios 2 spacecraft near 0.3 AU is at the first time lag (~81 s), where the autocorrelation coefficient is lower than the zero-lag value by more than 0.4. Breaks in the autocorrelation function also appear in the WIND observations near 1 AU. The break separates the z- autocorrelation function into two parts, a fast-decreasing part and a slowly decreasing part, which cannot be described as a whole by an exponential formula. The breaks in the z- autocorrelation function suggest that the z- time series is composed of high-frequency white noise and low-frequency apparent structures, which correspond to the flat and steep parts of the function, respectively. This explanation is supported by a simple test with a superposition of an artificial random data series and a smoothed random data series. Since in many cases the z- autocorrelation function does not decrease very quickly at large time lags and cannot be considered of Lanczos type, no reliable value for the correlation time can be derived. Our results show that in these cases with high Alfvénicity, z- should not be considered an inward-propagating wave. The power-law spectrum of z+ is then likely produced by a Kolmogorov-type fluid turbulence cascade.
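    The superposition test mentioned above can be sketched as follows (illustrative parameters, not the authors' data): adding white noise to a smoothed random series produces an autocorrelation function with a sharp drop at the first lag followed by a slow decay.

```python
import numpy as np

# Superpose white noise on a smoothed (structured) random series and look
# for a break in the autocorrelation function at the first lag.
rng = np.random.default_rng(2)
n = 20000
noise = rng.normal(size=n)
structure = np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same")
z = structure + 0.3 * noise

def acf(series, lag):
    s = series - series.mean()
    return (s[:-lag] * s[lag:]).mean() / s.var() if lag else 1.0

# Sharp drop from lag 0 to lag 1 (the white-noise part decorrelates
# immediately), then slow decay from the smoothed component.
print([round(acf(z, k), 3) for k in (0, 1, 2, 5, 10)])
```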

  8. Multiscale Poincaré plots for visualizing the structure of heartbeat time series.

    PubMed

    Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L

    2016-02-09

    Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
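    The coarse-graining construction can be sketched as follows (a minimal sketch with simulated RR intervals; plotting and the optional color-coding step are omitted): for white-noise-like data the spread of the Poincaré point cloud shrinks as the scale grows, as the abstract describes.

```python
import numpy as np

# Sketch of the MSP construction: coarse-grain an RR series at several scales
# and form the Poincaré point pairs (RR_n, RR_{n+1}) at each scale.
def coarse_grain(rr, scale):
    m = len(rr) // scale
    return rr[: m * scale].reshape(m, scale).mean(axis=1)

def poincare_points(rr):
    return np.column_stack([rr[:-1], rr[1:]])     # (RR_n, RR_{n+1}) pairs

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.normal(size=1000)           # simulated RR intervals (s)

for scale in (1, 2, 4):
    pts = poincare_points(coarse_grain(rr, scale))
    # For white noise the phase-space spread shrinks as the scale grows.
    print(scale, len(pts), round(pts.std(), 4))
```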

  9. Tipping point analysis of ocean acoustic noise

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2018-02-01

    We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
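    The potential analysis mentioned above can be sketched roughly as follows (a toy version with synthetic bistable data, not the ocean acoustic record): the effective potential is estimated as the negative log of the empirical density, and its wells mark the system states.

```python
import numpy as np

# Sketch of potential analysis: estimate an effective potential
# U(x) ~ -log p(x) from the histogram of a series; the number of
# potential wells indicates the number of system states.
rng = np.random.default_rng(5)
states = np.repeat([0.0, 1.0, 0.0, 1.0], 2500)      # synthetic state switching
series = states + 0.15 * rng.normal(size=states.size)

hist, edges = np.histogram(series, bins=40, density=True)
mask = hist > 0
U = -np.log(hist[mask])                  # effective potential (up to a constant)

# local minima of U correspond to system states
wells = [i for i in range(1, len(U) - 1) if U[i] < U[i - 1] and U[i] < U[i + 1]]
print(len(wells))
```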

  10. Recombinase polymerase amplification as a promising tool in hepatitis C virus diagnosis.

    PubMed

    Zaghloul, Hosam; El-Shahat, Mahmoud

    2014-12-27

    Hepatitis C virus (HCV) infection represents a significant health problem and represents a heavy load on some countries like Egypt in which about 20% of the total population are infected. Initial infection is usually asymptomatic and result in chronic hepatitis that give rise to complications including cirrhosis and hepatocellular carcinoma. The management of HCV infection should not only be focus on therapy, but also to screen carrier individuals in order to prevent transmission. In the present, molecular detection and quantification of HCV genome by real time polymerase chain reaction (PCR) represent the gold standard in HCV diagnosis and plays a crucial role in the management of therapeutic regimens. However, real time PCR is a complicated approach and of limited distribution. On the other hand, isothermal DNA amplification techniques have been developed and offer molecular diagnosis of infectious dieses at point-of-care. In this review we discuss recombinase polymerase amplification technique and illustrate its diagnostic value over both PCR and other isothermal amplification techniques.

  11. The Intelligence-Religiosity Nexus: A Representative Study of White Adolescent Americans

    ERIC Educational Resources Information Center

    Nyborg, Helmuth

    2009-01-01

    The present study examined whether IQ relates systematically to denomination and income within the framework of the "g" nexus, using representative data from the National Longitudinal Study of Youth (NLSY97). Atheists score 1.95 IQ points higher than Agnostics, 3.82 points higher than Liberal persuasions, and 5.89 IQ points higher than…

  12. Network of dedicated processors for finding lowest-cost map path

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P. (Inventor)

    1991-01-01

    A method and associated apparatus are disclosed for finding the lowest cost path of several variable paths. The paths are comprised of a plurality of linked cost-incurring areas existing between an origin point and a destination point. The method comprises the steps of connecting a plurality of nodes together in the manner of the cost-incurring areas; programming each node to have a cost associated therewith corresponding to one of the cost-incurring areas; injecting a signal into one of the nodes representing the origin point; propagating the signal through the plurality of nodes from inputs to outputs; reducing the signal in magnitude at each node as a function of the respective cost of the node; and, starting at one of the nodes representing the destination point and following a path having the least reduction in magnitude of the signal from node to node back to one of the nodes representing the origin point whereby the lowest cost path from the origin point to the destination point is found.
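    In software terms, the propagate-and-backtrack scheme amounts to shortest-path relaxation (a toy re-implementation on a 3×3 cost grid of our choosing, not the patented circuit): signal attenuation plays the role of accumulated cost, and following the least-attenuated neighbour back from the destination recovers the lowest-cost path.

```python
import numpy as np

# Each node attenuates the signal by its cost; iteratively propagating the
# strongest signal to every node is Bellman-Ford-style relaxation, and the
# backtrack follows the smallest accumulated cost from the destination.
cost = np.array([[1, 4, 1],
                 [1, 9, 1],
                 [1, 1, 1]], dtype=float)
best = np.full(cost.shape, np.inf)
best[0, 0] = cost[0, 0]                           # inject signal at the origin

for _ in range(cost.size):                        # relax until settled
    for r in range(3):
        for c in range(3):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < 3 and 0 <= cc < 3:
                    best[rr, cc] = min(best[rr, cc], best[r, c] + cost[rr, cc])

# Backtrack from the destination along the least accumulated cost.
path, node = [(2, 2)], (2, 2)
while node != (0, 0):
    r, c = node
    nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < 3 and 0 <= c + dc < 3]
    node = min(nbrs, key=lambda p: best[p])
    path.append(node)
print(best[2, 2], path[::-1])
```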

  13. Gravitational perturbations and metric reconstruction: Method of extended homogeneous solutions applied to eccentric orbits on a Schwarzschild black hole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopper, Seth; Evans, Charles R.

    2010-10-15

    We calculate the gravitational perturbations produced by a small mass in eccentric orbit about a much more massive Schwarzschild black hole and use the numerically computed perturbations to solve for the metric. The calculations are initially made in the frequency domain and provide Fourier-harmonic modes for the gauge-invariant master functions that satisfy inhomogeneous versions of the Regge-Wheeler and Zerilli equations. These gravitational master equations have specific singular sources containing both delta function and derivative-of-delta function terms. We demonstrate in this paper successful application of the method of extended homogeneous solutions, developed recently by Barack, Ori, and Sago, to handle source terms of this type. The method allows transformation back to the time domain, with exponential convergence of the partial mode sums that represent the field. This rapid convergence holds even in the region of r traversed by the point mass and includes the time-dependent location of the point mass itself. We present numerical results of mode calculations for certain orbital parameters, including highly accurate energy and angular momentum fluxes at infinity and at the black hole event horizon. We then address the issue of reconstructing the metric perturbation amplitudes from the master functions, the latter being weak solutions of a particular form to the wave equations. The spherical harmonic amplitudes that represent the metric in Regge-Wheeler gauge can themselves be viewed as weak solutions. They are in general a combination of (1) two differentiable solutions that adjoin at the instantaneous location of the point mass (a result that has order of continuity C^{-1} typically) and (2) (in some cases) a delta function distribution term with a computable time-dependent amplitude.

  14. Constraints on a Late Cretaceous uplift, denudation, and incision of the Grand Canyon region, southwestern Colorado Plateau, USA, from U-Pb dating of lacustrine limestone

    NASA Astrophysics Data System (ADS)

    Hill, Carol A.; Polyak, Victor J.; Asmerom, Yemane; P. Provencio, Paula

    2016-04-01

    The uplift and denudation of the Colorado Plateau is important in reconstructing the geomorphic and tectonic evolution of western North America. A Late Cretaceous (64 ± 2 Ma) U-Pb age for the Long Point limestone on the Coconino Plateau, which overlies a regional erosional surface developed on Permo-Triassic formations, supports unroofing of the Coconino Plateau part of Grand Canyon by that time. U-Pb analyses of three separate outcrops of this limestone gave ages of 64.0 ± 0.7, 60.5 ± 4.6, and 66.3 ± 3.9 Ma, older than a fossil-based early Eocene age. Samples of the Long Point limestone were dated using the isotope dilution isochron method on well-preserved carbonates having high-uranium and low-lead concentrations. Our U-Pb ages on the Long Point limestone place important constraints on the (1) time of tectonic uplift of the southwestern Colorado Plateau and Kaibab arch, (2) time of denudation of the Coconino Plateau, and (3) Late Cretaceous models of paleocanyon incision west of, or across, the Kaibab arch. We propose that the age of the Long Point limestone, interbedded within the Music Mountain Formation in the Long Point area, represents a period of regional aggradation and a time of drainage blockage northward and eastward across the Kaibab arch, with possible diversion of northward drainage on the Coconino Plateau westward around the arch via a Laramide paleo-Grand Canyon.
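    The isochron idea behind U-Pb dating of such carbonates can be sketched numerically (illustrative synthetic ratios, not the paper's measurements): subsamples with varying U/Pb lie on a line whose slope records the age through the 238U decay law, slope = exp(λ₂₃₈ t) − 1.

```python
import numpy as np

# Recover an age from a synthetic U-Pb isochron.
LAMBDA_238 = 1.55125e-10              # 238U decay constant (1/yr)
t_true = 64e6                         # 64 Ma

rng = np.random.default_rng(6)
u_pb = np.linspace(50, 500, 8)        # 238U/204Pb ratios across subsamples
slope_true = np.exp(LAMBDA_238 * t_true) - 1
pb_pb = 18.0 + slope_true * u_pb      # 206Pb/204Pb, common-Pb intercept 18.0
pb_pb += rng.normal(0, 0.005, size=pb_pb.size)   # measurement scatter

slope, intercept = np.polyfit(u_pb, pb_pb, 1)    # isochron regression
age_ma = np.log1p(slope) / LAMBDA_238 / 1e6
print(round(age_ma, 1))               # recovers an age near 64 Ma
```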

  15. Increasing large scale windstorm damage in Western, Central and Northern European forests, 1951-2010

    NASA Astrophysics Data System (ADS)

    Gregow, H.; Laaksonen, A.; Alper, M. E.

    2017-04-01

    Using reports of forest losses caused directly by large scale windstorms (or primary damage, PD) from the European forest institute database (comprising 276 PD reports from 1951-2010), total growing stock (TGS) statistics of European forests and the daily North Atlantic Oscillation (NAO) index, we identify a statistically significant change in storm intensity in Western, Central and Northern Europe (17 countries). Using the validated set of storms, we found that the year 1990 represents a change-point at which the average intensity of the most destructive storms indicated by PD/TGS > 0.08% increased by more than a factor of three. A likelihood ratio test provides strong evidence that the change-point represents a real shift in the statistical behaviour of the time series. All but one of the seven catastrophic storms (PD/TGS > 0.2%) occurred since 1990. Additionally, we detected a related decrease in September-November PD/TGS and an increase in December-February PD/TGS. Our analyses point to the possibility that the impact of climate change on the North Atlantic storms hitting Europe has started during the last two and a half decades.
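    A likelihood-ratio change-point scan of the kind invoked above can be sketched as follows (an illustrative Gaussian mean-shift version on synthetic numbers, not the authors' exact test on the PD/TGS record): each candidate split is scored by how much a two-mean model improves on a single-mean model.

```python
import numpy as np

# Likelihood-ratio change-point scan for a shift in mean (unknown common
# variance): stat = n * log(SS_total / SS_split) is the -2 log likelihood
# ratio of the one-mean model against the best two-mean model.
def lr_changepoint(x):
    n = len(x)
    ss_total = ((x - x.mean()) ** 2).sum()
    best_k, best_stat = None, -np.inf
    for k in range(2, n - 2):
        ss_split = ((x[:k] - x[:k].mean()) ** 2).sum() \
                 + ((x[k:] - x[k:].mean()) ** 2).sum()
        stat = n * np.log(ss_total / ss_split)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

rng = np.random.default_rng(4)
series = np.concatenate([rng.normal(0.02, 0.01, 40),    # pre-shift regime
                         rng.normal(0.08, 0.02, 20)])   # post-shift regime
k, stat = lr_changepoint(series)
print(k, round(stat, 1))   # split index lands near the true change at 40
```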

  16. Increasing large scale windstorm damage in Western, Central and Northern European forests, 1951–2010

    PubMed Central

    Gregow, H.; Laaksonen, A.; Alper, M. E.

    2017-01-01

    Using reports of forest losses caused directly by large scale windstorms (or primary damage, PD) from the European forest institute database (comprising 276 PD reports from 1951–2010), total growing stock (TGS) statistics of European forests and the daily North Atlantic Oscillation (NAO) index, we identify a statistically significant change in storm intensity in Western, Central and Northern Europe (17 countries). Using the validated set of storms, we found that the year 1990 represents a change-point at which the average intensity of the most destructive storms indicated by PD/TGS > 0.08% increased by more than a factor of three. A likelihood ratio test provides strong evidence that the change-point represents a real shift in the statistical behaviour of the time series. All but one of the seven catastrophic storms (PD/TGS > 0.2%) occurred since 1990. Additionally, we detected a related decrease in September–November PD/TGS and an increase in December–February PD/TGS. Our analyses point to the possibility that the impact of climate change on the North Atlantic storms hitting Europe has started during the last two and a half decades. PMID:28401947

  17. Uranium-Series Ages of Marine Terrace Corals from the Pacific Coast of North America and Implications for Last-Interglacial Sea Level History

    USGS Publications Warehouse

    Muhs, D.R.; Kennedy, G.L.; Rockwell, T.K.

    1994-01-01

    Few of the marine terraces along the Pacific coast of North America have been dated using uranium-series techniques. Ten terrace sequences from southern Oregon to southern Baja California Sur have yielded fossil corals in quantities suitable for U-series dating by alpha spectrometry. U-series-dated terraces representing the ~80,000 yr sea-level high stand are identified in five areas (Bandon, Oregon; Point Arena, San Nicolas Island, and Point Loma, California; and Punta Banda, Baja California); terraces representing the ~125,000 yr sea-level high stand are identified in eight areas (Cayucos, San Luis Obispo Bay, San Nicolas Island, San Clemente Island, and Point Loma, California; Punta Banda and Isla Guadalupe, Baja California; and Cabo Pulmo, Baja California Sur). On San Nicolas Island, Point Loma, and Punta Banda, both the ~80,000 and the ~125,000 yr terraces are dated. Terraces that may represent the ~105,000 yr sea-level high stand are rarely preserved and none has yielded corals for U-series dating. Similarity of coral ages from midlatitude, erosional marine terraces with coral ages from emergent, constructional reefs on tropical coastlines suggests a common forcing mechanism, namely glacioeustatically controlled fluctuations in sea level superimposed on steady tectonic uplift. The low marine terrace dated at ~125,000 yr on Isla Guadalupe, Baja California, presumed to be tectonically stable, supports evidence from other localities for a +6-m sea level at that time. Data from the Pacific Coast and a compilation of data from other coasts indicate that sea levels at ~80,000 and ~105,000 yr may have been closer to present sea level (within a few meters) than previous studies have suggested.

  18. Time Frame Affects Vantage Point in Episodic and Semantic Autobiographical Memory: Evidence from Response Latencies

    PubMed Central

    Karylowski, Jerzy J.; Mrozinski, Blazej

    2017-01-01

Previous research suggests that, with the passage of time, representations of self in episodic memory become less dependent on their initial (internal) vantage point and shift toward an external perspective that is normally characteristic of how other people are represented. The present experiment examined this phenomenon in both episodic and semantic autobiographical memory using latency of self-judgments as a measure of accessibility of the internal vs. the external perspective. Results confirmed that in the case of representations of the self retrieved from recent autobiographical memories, trait judgments regarding unobservable self-aspects (internal perspective) were faster than trait judgments regarding observable self-aspects (external perspective). Yet, in the case of self-representations retrieved from memories of a more distant past, judgments regarding observable self-aspects were faster. These results occurred both for self-representations retrieved from episodic memory and for representations retrieved from semantic memory. In addition, regardless of the effect of time, greater accessibility of unobservable (vs. observable) self-aspects was associated with episodic rather than semantic autobiographical memory. These results were modified neither by the declared trait's self-descriptiveness (yes vs. no responses) nor by its desirability (highly desirable vs. moderately desirable traits). Implications for compatibility between how self and others are represented and for the role of self in social perception are discussed. PMID:28473793

  20. Model-based screening for critical wet-weather discharges related to micropollutants from urban areas.

    PubMed

    Mutzner, Lena; Staufer, Philipp; Ort, Christoph

    2016-11-01

Wet-weather discharges contribute to anthropogenic micropollutant loads entering the aquatic environment. Thousands of wet-weather discharges exist in Swiss sewer systems, and we do not have the capacity to monitor them all. We consequently propose a model-based approach designed to identify critical discharge points in order to support effective monitoring. We applied a dynamic substance flow model to four substances representing different entry routes: indoor (Triclosan, Mecoprop, Copper) as well as rainfall-mobilized (Glyphosate, Mecoprop, Copper) inputs. The accumulation on different urban land-use surfaces in dry weather and subsequent substance-specific wash-off is taken into account. For evaluation, we use a conservative screening approach to detect critical discharge points. This approach considers only local dilution generated onsite from natural, unpolluted areas, i.e. excluding upstream dilution. Despite our conservative assumptions, we find that the environmental quality standards for Glyphosate and Mecoprop are not exceeded during any 10-min time interval over a representative one-year simulation period for all 2500 Swiss municipalities. In contrast, the environmental quality standard is exceeded during at least 20% of the discharge time at 83% of all modelled discharge points for Copper and at 71% for Triclosan. For Copper, this corresponds to a total median duration of approximately 19 days per year. For Triclosan, discharged only via combined sewer overflows, this means a median duration of approximately 10 days per year. In general, stormwater outlets contribute more to the calculated effect than combined sewer overflows for rainfall-mobilized substances. We further evaluate the Urban Index (A_urban,impervious/A_natural) as a proxy for critical discharge points: catchments where Triclosan and Copper exceed the corresponding environmental quality standard often have an Urban Index >0.03.
A dynamic substance flow analysis allows us to identify the most critical discharge points to be prioritized for more detailed analyses and monitoring. This forms a basis for the efficient mitigation of pollution. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Voronoi distance based prospective space-time scans for point data sets: a dengue fever cluster analysis in a southeast Brazilian town

    PubMed Central

    2011-01-01

Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cell boundaries intercepted by the line segment joining two case points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees, which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application for dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction.
Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predicted value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
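The Voronoi distance described above (the number of Voronoi cell boundaries intercepted by the segment joining two case points) can be approximated by sampling the segment densely and counting changes of nearest population point; a sketch, assuming `scipy` and a fine-enough sampling (the exact computation in VBScan may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_distance(p, q, population, n_samples=1000):
    """Approximate Voronoi distance between case points p and q: the number
    of Voronoi cell boundaries crossed by the segment from p to q, counted
    as nearest-population-point changes along sampled segment points."""
    tree = cKDTree(population)
    t = np.linspace(0.0, 1.0, n_samples)
    samples = np.outer(1.0 - t, p) + np.outer(t, q)  # points along the segment
    _, nearest = tree.query(samples)                  # index of nearest individual
    return int(np.count_nonzero(np.diff(nearest)))    # count cell changes
```

For four collinear population points one unit apart, the segment between the two endpoints crosses three cell boundaries, so the distance is 3.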

  2. Determining the sources of suspended sediment in a Mediterranean groundwater-dominated river: the Na Borges basin (Mallorca, Spain).

    NASA Astrophysics Data System (ADS)

    Estrany, Joan; Martinez-Carreras, Nuria

    2013-04-01

Tracers have been acknowledged as a useful tool to identify sediment sources, based upon a variety of techniques and chemical and physical sediment properties. Sediment fingerprinting supports the notion that changes in sedimentation rates are not just related to increased/reduced erosion and transport in the same areas, but also to the establishment of different pathways increasing sediment connectivity. The Na Borges is a Mediterranean lowland agricultural river basin (319 km2) where traditional soil and water conservation practices have been applied over millennia to provide effective protection of cultivated land. During the twentieth century, industrialisation and pressure from tourism activities have increased urbanised surfaces, which have impacts on the processes that control streamflow. Within this context, source material sampling in Na Borges focused on obtaining representative samples from potential sediment sources (comprising topsoil, i.e., 0-2 cm) susceptible to mobilisation by water and subsequent routing to the river channel network, while those representing channel bank sources were collected from actively eroding channel margins and ditches. Samples of road dust and of solids from sewage treatment plants were also collected. During two hydrological years (2004-2006), representative suspended sediment samples for use in source fingerprinting studies were collected at four flow gauging stations and at eight secondary sampling points using time-integrating samplers. Likewise, representative bed-channel sediment samples were obtained using the resuspension approach at eight sampling points in the main stem of the Na Borges River. These deposits represent the fine sediment temporarily stored in the bed-channel and were also used for tracing source contributions. A total of 102 individual time-integrated sediment samples, 40 bulk samples and 48 bed-sediment samples were collected.
Upon return to the laboratory, source material samples were oven-dried at 40 °C, disaggregated using a pestle and mortar, and dry sieved to

  3. Development of γ-ray tracking detectors

    DOE PAGES

    Lieder, R. M.; Gast, W.; Jäger, H. M.; ...

    2001-12-01

The next generation of 4π arrays for high-precision γ-ray spectroscopy, AGATA, will consist of γ-ray tracking detectors. These combine highly segmented Ge detectors with front-end electronics based on digital signal processing techniques, which allow energy, timing and spatial information on the interactions of a γ-ray in the Ge detector to be extracted by pulse shape analysis of its signals. Utilizing the information on the positions of the interaction points and the energies released at each point, the tracks of the γ-rays in a Ge shell can be reconstructed in three dimensions on the basis of the Compton-scattering formula.
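The Compton-scattering formula underlying the tracking step relates the scattered γ-ray energy to the scattering angle, E' = E / (1 + (E/mec²)(1 − cos θ)). A small helper for checking candidate interaction sequences against this relation might look like:

```python
import math

ME_C2_KEV = 510.998950  # electron rest energy m_e c^2 in keV

def compton_scattered_energy(e_kev, theta_rad):
    """Energy of a gamma ray of initial energy e_kev after Compton
    scattering through angle theta_rad (the Compton formula)."""
    return e_kev / (1.0 + (e_kev / ME_C2_KEV) * (1.0 - math.cos(theta_rad)))
```

Tracking algorithms compare the angle implied by the measured interaction positions with the angle implied by the measured energy deposits via this formula to score candidate scattering sequences.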

  4. Use of microcomputer in mapping depth of stratigraphic horizons in National Petroleum Reserve in Alaska

    USGS Publications Warehouse

    Payne, Thomas G.

    1982-01-01

REGIONAL MAPPER is a menu-driven system in the BASIC language for computing and plotting (1) time, depth, and average velocity to geologic horizons, (2) interval time, thickness, and interval velocity of stratigraphic intervals, and (3) subcropping and onlapping intervals at unconformities. The system consists of three programs: FILER, TRAVERSER, and PLOTTER. A control point is a shot point with velocity analysis or a shot point at or near a well with a velocity check-shot survey. Reflection time to and code number of seismic horizons are filed by digitizing tablet from record sections. TRAVERSER starts at a point of geologic control and, in traversing to another, parallels seismic events, records loss of horizons by onlap and truncation, and stores reflection time for geologic horizons at traversed shot points. TRAVERSER is basically a phantoming procedure. Permafrost thickness and velocity variations, buried canyons with low-velocity fill, and error in seismically derived velocity cause velocity anomalies that complicate depth mapping. Two depths to the top of the pebble shale are computed for each control point. One depth (Zs) is based on seismically derived velocity. The other (Zw) is based on interval velocity interpolated linearly between wells and multiplied by interval time (isochron) to give interval thickness. Zw is computed for all geologic horizons by downward summation of interval thickness. Unknown true depth (Z) to the pebble shale may be expressed as Z = Zs + es and Z = Zw + ew, where the e terms represent error. Equating the two expressions gives the depth difference D = Zs - Zw = ew - es. A plot of D for the top of the pebble shale is readily contourable, but smoothing is required to produce a reasonably simple surface. Seismically derived velocity used in computing Zs includes the effect of velocity anomalies but is subject to some large randomly distributed errors, resulting in depth errors (es).
Well-derived velocity used in computing Zw does not include the effect of velocity anomalies, but the error (ew) should reflect these anomalies and should be contourable (non-random). The D surface as contoured with smoothing is assumed to represent ew, that is, the depth effect of variations in permafrost thickness and velocity and buried canyon depth. Estimated depth (Zest) to each geologic horizon is the sum of Zw for that horizon and ew as contoured for the pebble shale, which is the first highly continuous seismic horizon below the zone of anomalous velocity. Results of this 'depthing' procedure are compared with those of Tetra Tech, Inc., the subcontractor responsible for geologic and geophysical interpretation and mapping.
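The depthing logic can be sketched with hypothetical numbers: since D = Zs − Zw = ew − es, smoothing D (a simple mean below stands in for the contouring step) recovers the contourable well-velocity error ew when the random seismic errors es average out, and then Zest = Zw + ew.

```python
import numpy as np

def estimated_depth(z_s, z_w):
    """Illustrative depthing step: form D = Zs - Zw = ew - es at the control
    points, smooth it (here: the mean, a crude stand-in for contouring the
    D surface) to estimate ew, and return Zest = Zw + ew."""
    z_s = np.asarray(z_s, dtype=float)
    z_w = np.asarray(z_w, dtype=float)
    d = z_s - z_w          # D = Zs - Zw = ew - es
    ew = d.mean()          # random es averages out, smooth ew remains
    return z_w + ew        # Zest = Zw + ew
```

With a constant ew of 50 m and zero-mean random es, the estimate reproduces the true depths exactly in this toy setup.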

  5. Local spectrum analysis of field propagation in an anisotropic medium. Part II. Time-dependent fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.

  6. Implicit assimilation for marine ecological models

    NASA Astrophysics Data System (ADS)

    Weir, B.; Miller, R.; Spitz, Y. H.

    2012-12-01

    We use a new data assimilation method to estimate the parameters of a marine ecological model. At a given point in the ocean, the estimated values of the parameters determine the behaviors of the modeled planktonic groups, and thus indicate which species are dominant. To begin, we assimilate in situ observations, e.g., the Bermuda Atlantic Time-series Study, the Hawaii Ocean Time-series, and Ocean Weather Station Papa. From there, we estimate the parameters at surrounding points in space based on satellite observations of ocean color. Given the variation of the estimated parameters, we divide the ocean into regions meant to represent distinct ecosystems. An important feature of the data assimilation approach is that it refines the confidence limits of the optimal Gaussian approximation to the distribution of the parameters. This enables us to determine the ecological divisions with greater accuracy.

  7. Time scales of radiation damage decay in four optical materials

    NASA Astrophysics Data System (ADS)

    Grupp, Frank; Geis, Norbert; Katterloher, Reinhard; Bender, Ralf

    2017-09-01

In the framework of the qualification campaigns for the near-infrared spectrometer and photometer instrument (NISP) on board the ESA/EUCLID satellite, six optical materials were characterized with respect to their transmission losses after a radiation dose representing the mission exposure to high-energy particles at the outer Lagrange point L2. Data was taken between 500 and 2000 nm on six 25 mm thick coated probes, with thickness and coating representative of the NISP flight configuration. With this paper we present results following up on the radiation damage shown in [1]. We were able to follow the decay of the radiation damage over almost one year under ambient conditions. This allows us to distinguish between curing effects that happen on different time scales. While for some of the materials no radiation damage, and thus no curing, was detected, all materials that showed significant radiation damage in the measured passband showed two clearly distinguished time scales of curing. Up to 70% of the transmission losses cured on half-decay time scales of several tens of days, while the rest of the damage cures on time scales of years.
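The two-time-scale curing behaviour can be modelled as a sum of two exponential decays parameterized by half-decay times; a sketch with purely illustrative parameter values (the measured fractions and half-lives are in the paper, not reproduced here):

```python
def transmission_loss(t_days, loss0, fast_frac, t_half_fast, t_half_slow):
    """Remaining radiation-induced transmission loss after t_days of curing:
    a fraction fast_frac of the initial loss loss0 decays with half-life
    t_half_fast (tens of days), the rest with half-life t_half_slow (years)."""
    fast = fast_frac * 0.5 ** (t_days / t_half_fast)
    slow = (1.0 - fast_frac) * 0.5 ** (t_days / t_half_slow)
    return loss0 * (fast + slow)
```

Fitting such a two-component model to transmission measurements taken over a year is one way to separate the fast and slow curing contributions.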

  8. A pointing facilitation system for motor-impaired users combining polynomial smoothing and time-weighted gradient target prediction models.

    PubMed

    Blow, Nikolaus; Biswas, Pradipta

    2017-01-01

    As computers become more and more essential for everyday life, people who cannot use them are missing out on an important tool. The predominant method of interaction with a screen is a mouse, and difficulty in using a mouse can be a huge obstacle for people who would otherwise gain great value from using a computer. If mouse pointing were to be made easier, then a large number of users may be able to begin using a computer efficiently where they may previously have been unable to. The present article aimed to improve pointing speeds for people with arm or hand impairments. The authors investigated different smoothing and prediction models on a stored data set involving 25 people, and the best of these algorithms were chosen. A web-based prototype was developed combining a polynomial smoothing algorithm with a time-weighted gradient target prediction model. The adapted interface gave an average improvement of 13.5% in target selection times in a 10-person study of representative users of the system. A demonstration video of the system is available at https://youtu.be/sAzbrKHivEY.
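Polynomial smoothing of a cursor trajectory can be sketched by fitting low-degree polynomials x(t), y(t) over a recent window of samples and evaluating them at the latest timestamp; the degree used here is an assumption, not the authors' tuned value:

```python
import numpy as np

def smooth_pointer(xs, ys, ts, degree=2):
    """Polynomial smoothing of a pointer trajectory: fit x(t) and y(t)
    polynomials over a recent window of samples and return the smoothed
    current position (the fitted value at the latest timestamp)."""
    cx = np.polyfit(ts, xs, degree)
    cy = np.polyfit(ts, ys, degree)
    t_now = ts[-1]
    return float(np.polyval(cx, t_now)), float(np.polyval(cy, t_now))
```

In a pointing-facilitation pipeline this smoothed position (and the fitted polynomial's gradient) would then feed the target prediction stage.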

  9. The Health and Retirement Study: Analysis of Associations Between Use of the Internet for Health Information and Use of Health Services at Multiple Time Points.

    PubMed

    Shim, Hyunju; Ailshire, Jennifer; Zelinski, Elizabeth; Crimmins, Eileen

    2018-05-25

    The use of the internet for health information among older people is receiving increasing attention, but how it is associated with chronic health conditions and health service use at concurrent and subsequent time points using nationally representative data is less known. This study aimed to determine whether the use of the internet for health information is associated with health service utilization and whether the association is affected by specific health conditions. The study used data collected in a technology module from a nationally representative sample of community-dwelling older Americans aged 52 years and above from the 2012 Health and Retirement Study (HRS; N=991). Negative binomial regressions were used to examine the association between use of Web-based health information and the reported health service uses in 2012 and 2014. Analyses included additional covariates adjusting for predisposing, enabling, and need factors. Interactions between the use of the internet for health information and chronic health conditions were also tested. A total of 48.0% (476/991) of Americans aged 52 years and above reported using Web-based health information. The use of Web-based health information was positively associated with the concurrent reports of doctor visits, but not over 2 years. However, an interaction of using Web-based health information with diabetes showed that users had significantly fewer doctor visits compared with nonusers with diabetes at both times. The use of the internet for health information was associated with higher health service use at the concurrent time, but not at the subsequent time. The interaction between the use of the internet for health information and diabetes was significant at both time points, which suggests that health-related internet use may be associated with fewer doctor visits for certain chronic health conditions. 
Results provide some insight into how Web-based health information may provide an alternative health care resource for managing chronic conditions. ©Hyunju Shim, Jennifer Ailshire, Elizabeth Zelinski, Eileen Crimmins. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 25.05.2018.

  10. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages for such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent a more complex mixing, with mixing of water of two different age distributions. The problem related to these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the required information to constrain the parameters of the binary mixing model.
We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
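A binary mixing model can be sketched as a weighted sum of two simple LPM transfer functions (exponential mixing is used below purely for illustration; the appropriate LPM and its parameters are site-specific), with the tracer concentration at the well given by convolving the atmospheric input history with the age distribution:

```python
import numpy as np

def exponential_age_pdf(tau, mean_age):
    """Exponential-mixing transfer function, one common LPM choice."""
    return np.exp(-tau / mean_age) / mean_age

def binary_mixture_pdf(tau, mean_young, mean_old, frac_young):
    """Binary mixing model: weighted sum of two age distributions (here
    both exponential), giving up to five parameters in general."""
    return (frac_young * exponential_age_pdf(tau, mean_young)
            + (1.0 - frac_young) * exponential_age_pdf(tau, mean_old))

def tracer_output(input_history, age_pdf, dt):
    """Tracer concentration at the well: convolve the tracer input history
    (oldest value first, most recent last, spacing dt) with the age pdf."""
    taus = np.arange(len(input_history)) * dt
    weights = age_pdf(taus) * dt
    return float(np.sum(weights * np.asarray(input_history)[::-1]))
```

The mixture's mean residence time is the weighted mean of the two component means, which is what multi-tracer time series help to disentangle.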

  11. Representativeness of the ground observational sites and up-scaling of the point soil moisture measurements

    NASA Astrophysics Data System (ADS)

    Chen, Jinlei; Wen, Jun; Tian, Hui

    2016-02-01

Soil moisture plays an increasingly important role in the cycle of energy-water exchange, climate change, and hydrologic processes. It is usually measured at a point site, but regional soil moisture is essential for validating remote sensing products and numerical modeling results. In the study reported in this paper, the minimal number of required sites (NRS) for establishing a research observational network and the representative single sites for regional soil moisture estimation are discussed using the soil moisture data derived from the "Maqu soil moisture observational network" (101°40‧-102°40‧E, 33°30‧-35°45‧N), which is supported by the Chinese Academy of Sciences. Furthermore, the best up-scaling method suitable for this network has been studied by evaluating four commonly used up-scaling methods. The results showed that (1) under a given accuracy requirement R ⩾ 0.99, RMSD ⩽ 0.02 m3/m3, NRS at both 5 and 10 cm depth is 10. (2) Representativeness of the sites has been validated by time stability analysis (TSA), time sliding correlation analysis (TSCA) and optimal combination of sites (OCS). NST01 is the most representative site at 5 cm depth for the first two methods; NST07 and NST02 are the most representative sites at 10 cm depth. The optimum combination sites at 5 cm depth are NST01, NST02, and NST07. NST05, NST08, and NST13 are the best group at 10 cm depth. (3) Linear fitting, compared with the other three methods, is the best up-scaling method for all types of representative sites obtained above, and linear regression equations between a single site and regional soil moisture are established hereafter. The "single site" obtained by OCS has the greatest up-scaling effect, and TSCA takes second place. (4) Linear fitting equations show good practicability in estimating the variation of regional soil moisture from July 3, 2013 to July 3, 2014, when a large number of observed soil moisture data are lost.
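The time stability analysis (TSA) used to rank representative sites can be sketched as follows (a common formulation in which the best site minimizes the combined mean and spread of its relative difference from the spatial mean; the exact criterion used in the study may differ):

```python
import numpy as np

def most_stable_site(theta):
    """Time stability analysis sketch. theta is a (n_times, n_sites) array
    of soil moisture. For each site, compute the relative difference from
    the spatial mean at each time; the most representative site minimizes
    sqrt(mean(rel_diff)^2 + std(rel_diff)^2)."""
    theta = np.asarray(theta, dtype=float)
    spatial_mean = theta.mean(axis=1, keepdims=True)
    rel_diff = (theta - spatial_mean) / spatial_mean
    score = np.sqrt(rel_diff.mean(axis=0) ** 2 + rel_diff.std(axis=0) ** 2)
    return int(np.argmin(score))
```

A site that consistently tracks the network's spatial mean scores near zero and is the natural candidate for single-site up-scaling via the linear regressions mentioned above.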

  12. Registration of terrestrial mobile laser data on 2D or 3D geographic database by use of a non-rigid ICP approach.

    NASA Astrophysics Data System (ADS)

    Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.

    2013-10-01

This article presents a generic and efficient method to register terrestrial mobile data with imperfect location on a geographic database with better overall accuracy but less detail. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point"). The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects. Other applications include fine geometric modelling and fine façade texturing, and object extraction such as trees, poles, road signs and marks, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by an Inertial Navigation System (INS), which can cause a significant drift. As this drift varies non-linearly, but slowly in time, it is modelled by a translation defined as a piecewise-linear function of time whose variation over time is minimized (rigidity term). For each iteration of the ICP, the drift is estimated in order to minimise the distance between laser points and planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points registered on a 3D model of approximately 71,400 triangles).
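The piecewise-linear drift model can be sketched as a per-axis translation interpolated between knot times, with a rigidity term penalizing variation between consecutive knots (knot placement and the exact penalty form are assumptions here, not the paper's specification):

```python
import numpy as np

def make_drift(knot_times, knot_offsets):
    """Piecewise-linear drift: a 3D translation defined as a piecewise-linear
    function of time, evaluated by interpolating per-axis between knots."""
    knot_offsets = np.asarray(knot_offsets, dtype=float)
    def drift(t):
        t = np.atleast_1d(t)
        return np.stack([np.interp(t, knot_times, knot_offsets[:, d])
                         for d in range(knot_offsets.shape[1])], axis=-1)
    return drift

def rigidity_term(knot_offsets):
    """Penalty on the drift's variation: sum of squared differences between
    consecutive knot offsets (kept small so the correction stays smooth)."""
    d = np.diff(np.asarray(knot_offsets, dtype=float), axis=0)
    return float(np.sum(d * d))
```

Each ICP iteration would solve for the knot offsets minimizing the point-to-plane data term plus this rigidity term, then shift every laser point by `drift(t_point)`.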

  13. Combining 3D Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.
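The plane-fitting step in the second phase can be sketched with a standard least-squares fit via SVD (one common choice; the study's exact fitting algorithm is not specified here):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point group: returns the centroid and
    the unit normal, taken as the right singular vector associated with the
    smallest singular value of the centered point matrix."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]  # direction of least variance = plane normal
    return centroid, normal
```

Fitting a plane per point group yields the faces from which the polyhedral "bare-bones" volume elements are assembled.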

  14. The time-delayed inverted pendulum: Implications for human balance control

    NASA Astrophysics Data System (ADS)

    Milton, John; Cabrera, Juan Luis; Ohira, Toru; Tajima, Shigeru; Tonosaki, Yukinori; Eurich, Christian W.; Campbell, Sue Ann

    2009-06-01

The inverted pendulum is frequently used as a starting point for discussions of how human balance is maintained during standing and locomotion. Here we examine three experimental paradigms of time-delayed balance control: (1) mechanical inverted time-delayed pendulum, (2) stick balancing at the fingertip, and (3) human postural sway during quiet standing. Measurements of the transfer function (mechanical stick balancing) and the two-point correlation function (Hurst exponent) for the movements of the fingertip (real stick balancing) and the fluctuations in the center of pressure (postural sway) demonstrate that the upright fixed point is unstable in all three paradigms. These observations imply that the balanced state represents a more complex and bounded time-dependent state than a fixed-point attractor. Although mathematical models indicate that a sufficient condition for instability is that the time delay to make a corrective movement, τn, be greater than a critical delay τc that is proportional to the length of the pendulum, this condition is satisfied only in the case of human stick balancing at the fingertip. Thus it is suggested that a common cause of instability in all three paradigms stems from the difficulty of controlling both the angle of the inverted pendulum and the position of the controller simultaneously using time-delayed feedback. Considerations of the problematic nature of control in the presence of delay and random perturbations ("noise") suggest that neural control for the upright position likely resembles an adaptive-type controller in which the displacement angle is allowed to drift for small displacements with active corrections made only when θ exceeds a threshold. This mechanism draws attention to an overlooked type of passive control that arises from the interplay between retarded variables and noise.

  15. The Golden Age of Greece: Imperial Democracy 500-400 B.C. A Unit of Study for Grades 6-12.

    ERIC Educational Resources Information Center

    Cheoros, Peter; And Others

    This unit is one of a series that represents specific moments in history from which students focus on the meanings of landmark events. This unit explores Greece's most glorious century, the high point of Athenian culture. Rarely has so much genius been concentrated in one small region over such a short period of time. Students discover in studying…

  16. Repetitive Breech Presentations at Term

    PubMed Central

    Zigo, Imrich; Sivakova, Jana; Moricova, Petra; Kapustova, Ivana; Krivus, Stefan; Danko, Jan

    2013-01-01

The authors present a case of a 38-year-old laboring woman with four-time repetitive breech presentation of the fetus at term. This rare condition affects the mode of delivery and represents a serious obstetrical problem, as it is associated with increased perinatal morbidity and mortality. The authors give details on risk factors for breech presentation and its diagnosis, and discuss possible causes leading to repetitive breeches in laboring women. PMID:23984133

  17. La Stella di Betlemme in arte e scienza

    NASA Astrophysics Data System (ADS)

    Sigismondi, Costantino

    2014-05-01

The star of Bethlehem has been represented in many artworks, starting from the II century AD in the Priscilla Catacombs in Rome. The 14-pointed silver star of 1717, located at the place of birth of Jesus in Bethlehem, recalls the number of generations, 14, repeated three times from Abraham to Jesus in Matthew 1:1-17. Finally, the hypothesis of Mira Ceti as the star of Bethlehem is reviewed.

  18. On power series representing solutions of the one-dimensional time-independent Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Trotsenko, N. P.

    2017-06-01

For the equation χ″(x) = u(x)χ(x) with infinitely smooth u(x), the general solution χ(x) is found in the form of a power series. The coefficients of the series are expressed via all derivatives u^(m)(y) of the function u(x) at a fixed point y. Examples of solutions for particular functions u(x) are considered.
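The recurrence behind such a power-series solution is easy to sketch. Writing χ(x) = Σ aₙ(x−y)ⁿ and u(x) = Σ bₙ(x−y)ⁿ, the equation χ″ = uχ gives (n+2)(n+1)aₙ₊₂ = Σₖ bₖaₙ₋ₖ (a Cauchy product). A minimal illustration, with function names of our own rather than the paper's:

```python
import math

def power_series_solution(u_taylor, chi0, chi1, n_terms=30):
    """Coefficients a_n of chi(x) = sum a_n x^n solving chi'' = u*chi.
    u_taylor[k] is the k-th Taylor coefficient b_k = u^(k)(y)/k! about the
    expansion point; chi0, chi1 are chi and chi' at that point."""
    a = [chi0, chi1]
    for n in range(n_terms - 2):
        # (n+2)(n+1) a_{n+2} = sum_{k=0}^{n} b_k a_{n-k}  (Cauchy product)
        s = sum(u_taylor[k] * a[n - k] for k in range(min(n, len(u_taylor) - 1) + 1))
        a.append(s / ((n + 2) * (n + 1)))
    return a

def evaluate(a, x):
    return sum(c * x**k for k, c in enumerate(a))

# Example: u(x) = 1 gives chi'' = chi, so chi(x) = cosh(x) for chi(0)=1, chi'(0)=0.
coeffs = power_series_solution([1.0], chi0=1.0, chi1=0.0)
```

With 30 terms the truncated series matches cosh(1) to machine precision, since the tail is bounded by 1/30!.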

  19. Do trigeminal autonomic cephalalgias represent primary diagnoses or points on a continuum?

    PubMed

    Charleston, Larry

    2015-06-01

The question of whether the trigeminal autonomic cephalalgias (TACs) represent primary diagnoses or points on a continuum has been debated for a number of years. Patients with TACs may present with similar clinical characteristics, and occasionally, TACs respond to similar treatments. Prima facie, these disorders may seem to be intimately related. However, given the current evidence, it would be challenging to conclude accurately whether they represent different primary headache diagnoses or the same primary headache disorder represented by different points on the same continuum. Ultimately, the TACs may utilize similar pathways and activate nociceptive responses that result in similar clinical phenotypes, but the "original and initiating" etiology may differ, and these disorders may not be points on the same continuum. This paper seeks to provide a brief comparison of the TACs via diagnostic criteria, secondary causes, a brief overview of pathophysiology, and the use of some key treatments and their mechanisms of action to illustrate the TACs' similarities and differences.

  20. Gel point and fractal microstructure of incipient blood clots are significant new markers of hemostasis for healthy and anticoagulated blood.

    PubMed

    Evans, Phillip A; Hawkins, Karl; Morris, Roger H K; Thirumalai, Naresh; Munro, Roger; Wakeman, Lisa; Lawrence, Matthew J; Williams, P Rhodri

    2010-10-28

Here we report the first application of a fractal analysis to the viscoelastic properties of incipient blood clots. We sought to ascertain whether the incipient clot's fractal dimension, D(f), could be used as a functional biomarker of hemostasis. The incipient clot is formed at the gel point (GP) of coagulating blood, the GP demarcating a functional change from a viscoelastic liquid to a viscoelastic solid. Incipient clots formed in whole healthy blood show a clearly defined value of D(f) within a narrow range that represents an index of clotting in health, where D(f) = 1.74 (± 0.07). A significant relationship is found between the incipient clot formation time, T(GP), and the activated partial thromboplastin time, whereas the association of D(f) with the microstructural characteristics of the incipient clot is supported by its significant correlation with fibrinogen. Our study reveals that unfractionated heparin not only prolongs the onset of clot formation but has a significant effect on its fractal microstructure. A progressive increase in unfractionated heparin concentration results in a linear decrease in D(f) and a corresponding prolongation in T(GP). The results represent a new, quantitative measure of clot quality derived from measurements on whole blood samples.

  1. An analytical approach for the simulation of flow in a heterogeneous confined aquifer with a parameter zonation structure

    NASA Astrophysics Data System (ADS)

    Huang, Ching-Sheng; Yeh, Hund-Der

    2016-11-01

This study introduces an analytical approach to estimate drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution for drawdown in each zone is expressed as a series in terms of the Theis equation with unknown volumetric rates from the source points. The rates are then determined from the aquifer boundary conditions and the continuity requirements. The drawdown estimated by the present approach agrees well with a finite element solution developed using the Mathematica function NDSolve. Compared with existing numerical approaches, the present approach has the merit of directly computing the drawdown at any given location and time, and therefore takes much less computing time to obtain the required results in engineering applications.
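The superposition idea at the core of such an approach can be sketched in a few lines: each source point contributes a Theis drawdown, and the total drawdown is the sum of the contributions. The sketch below is illustrative only (function names and parameters are ours, not the paper's); it evaluates the Theis well function W(u) = E1(u) by its convergent series, which is adequate for the small u typical of field problems:

```python
import math

def theis_W(u, terms=60):
    """Theis well function W(u) = E1(u) via the convergent series
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    s = -0.5772156649015329 - math.log(u)
    sign = 1.0
    for n in range(1, terms + 1):
        s += sign * u**n / (n * math.factorial(n))
        sign = -sign
    return s

def drawdown(Q, T, S, r, t):
    """Theis drawdown at distance r and time t for pumping rate Q,
    transmissivity T, storativity S."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * theis_W(u)

def total_drawdown(sources, T, S, x, y, t):
    """Superposition over source points, each (xs, ys, rate)."""
    return sum(drawdown(Q, T, S, math.hypot(x - xs, y - ys), t)
               for xs, ys, Q in sources)
```

In the paper's method the source-point rates are unknowns solved from the boundary and interface conditions; here they are simply given.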

  2. Method and apparatus for automatically detecting patterns in digital point-ordered signals

    DOEpatents

    Brudnoy, David M.

    1998-01-01

    The present invention is a method and system for detecting a physical feature of a test piece by detecting a pattern in a signal representing data from inspection of the test piece. The pattern is detected by automated additive decomposition of a digital point-ordered signal which represents the data. The present invention can properly handle a non-periodic signal. A physical parameter of the test piece is measured. A digital point-ordered signal representative of the measured physical parameter is generated. The digital point-ordered signal is decomposed into a baseline signal, a background noise signal, and a peaks/troughs signal. The peaks/troughs from the peaks/troughs signal are located and peaks/troughs information indicating the physical feature of the test piece is output.
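A rough analogue of such an additive decomposition, not the patented method but a sketch of the same idea, splits a point-ordered signal into a running-median baseline, a noise floor estimated robustly from the residual, and the peaks/troughs that exceed it:

```python
import statistics

def decompose(signal, window=11, k=3.0):
    """Illustrative additive decomposition of a point-ordered signal into
    baseline + background noise + peaks/troughs (not the patented method)."""
    half = window // 2
    n = len(signal)
    # Baseline: running median, robust to isolated peaks.
    baseline = [statistics.median(signal[max(0, i - half):min(n, i + half + 1)])
                for i in range(n)]
    residual = [s - b for s, b in zip(signal, baseline)]
    # Noise scale from the median absolute deviation of the residual.
    mad = statistics.median(abs(r) for r in residual) or 1e-12
    peaks = [r if abs(r) > k * 1.4826 * mad else 0.0 for r in residual]
    noise = [r - p for r, p in zip(residual, peaks)]
    return baseline, noise, peaks

sig = [0.0] * 50
sig[25] = 10.0  # a single sharp peak on a flat baseline
baseline, noise, peaks = decompose(sig)
```

Because the signal need not be periodic, only local statistics are used, mirroring the patent's emphasis on handling non-periodic signals.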

  3. Method and apparatus for automatically detecting patterns in digital point-ordered signals

    DOEpatents

    Brudnoy, D.M.

    1998-10-20

    The present invention is a method and system for detecting a physical feature of a test piece by detecting a pattern in a signal representing data from inspection of the test piece. The pattern is detected by automated additive decomposition of a digital point-ordered signal which represents the data. The present invention can properly handle a non-periodic signal. A physical parameter of the test piece is measured. A digital point-ordered signal representative of the measured physical parameter is generated. The digital point-ordered signal is decomposed into a baseline signal, a background noise signal, and a peaks/troughs signal. The peaks/troughs from the peaks/troughs signal are located and peaks/troughs information indicating the physical feature of the test piece is output. 14 figs.

  4. Min-Cut Based Segmentation of Airborne LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.

    2012-07-01

Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into 3-D segments that comply with the nature of actual objects is affected by the neighborhood, scale, features, and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within each point's local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, used especially in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph, connecting the points with each other and with nodes representing feature clusters, hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem, and allows an approximate solution to this NP-hard minimization problem by min-cuts in low-order polynomial time. We test our method on an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm (RMSE). We present the effects of neighborhood and feature determination on the segmentation results, and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that a smoothness cost considering only a simple distance parameter does not conform well to the natural structure of the points. Including shape information within the energy function, by assigning costs based on local properties, may help to achieve a better representation for segmentation.
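In the binary case, this kind of construction reduces to an s-t min cut: terminal links carry data costs, neighbor links carry smoothness costs, and the min cut yields the energy-minimizing labeling. A self-contained toy sketch follows (Edmonds-Karp max flow; the graph and capacities are illustrative, not from the paper):

```python
from collections import deque

def max_flow_min_cut(n, edges, s, t):
    """Edmonds-Karp max flow on an undirected graph given as (u, v, capacity)
    triples; returns the set of nodes on the source side of the min cut."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        f, v = float("inf"), t
        while v != s:
            f = min(f, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= f
            cap[v][parent[v]] += f
            v = parent[v]
    # Nodes still reachable from s in the residual graph form the source side.
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in seen and cap[u][v] > 0:
                seen.add(v)
                q.append(v)
    return seen

# Toy labeling: nodes 0-3 are points, 4 is the source (label A), 5 the sink (label B).
# Terminal-link capacities encode data costs; point-to-point links encode smoothness.
edges = [(4, 0, 5), (4, 1, 5), (0, 5, 1), (1, 5, 1),   # 0 and 1 prefer label A
         (2, 5, 5), (3, 5, 5), (4, 2, 1), (4, 3, 1),   # 2 and 3 prefer label B
         (0, 1, 2), (1, 2, 2), (2, 3, 2)]              # smoothness along a chain
side_A = max_flow_min_cut(6, edges, 4, 5)
```

The cut separates points 0-1 from 2-3, the cheapest place to break the smoothness chain given the data costs. Multi-label problems like the paper's are handled by repeated binary cuts (e.g., move-making algorithms), which this sketch does not cover.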

  5. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling large numbers of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene directly, by considering the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
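The voxel-voting step can be sketched directly: annotated reference points vote for the label of the voxel they fall in, and a new, already co-registered cloud is then annotated by voxel lookup. A minimal single-resolution illustration (names and values are ours; the paper uses an octree of multiple resolutions):

```python
from collections import Counter, defaultdict

def voxel_key(p, size):
    """Integer voxel index of a 3D point for a given voxel edge length."""
    return tuple(int(c // size) for c in p)

def build_label_grid(ref_points, ref_labels, size):
    """Vote the label of each voxel from the annotated reference points it contains."""
    votes = defaultdict(Counter)
    for p, lab in zip(ref_points, ref_labels):
        votes[voxel_key(p, size)][lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(points, grid, size, default="unlabeled"):
    """Label a new (already co-registered) point cloud by voxel lookup."""
    return [grid.get(voxel_key(p, size), default) for p in points]

ref = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.2), (1.4, 0.1, 0.0)]
grid = build_label_grid(ref, ["ground", "ground", "building"], size=1.0)
labels = annotate([(0.5, 0.5, 0.5), (1.9, 0.2, 0.3), (5.0, 5.0, 5.0)], grid, 1.0)
```

Points falling in voxels never seen in the reference cloud keep a default label, which is where the paper's coarser octree levels would take over.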

  6. Epidemic models for phase transitions: application to a physical gel

    NASA Astrophysics Data System (ADS)

    Bilge, A. H.; Pekcan, O.; Kara, S.; Ogrenci, A. S.

    2017-09-01

Carrageenan gels are characterized by reversible sol-gel and gel-sol transitions under cooling and heating, and these transitions are approximated by generalized logistic growth curves. We express the transitions of the carrageenan-water system, as a representative of reversible physical gels, in terms of a modified Susceptible-Infected-Susceptible epidemic model, as opposed to the Susceptible-Infected-Removed model used to represent (irreversible) chemical gel formation in previous work. We locate the gel point Tc of the sol-gel and gel-sol transitions and find that, for the sol-gel transition (cooling), Tc > Tsg (the transition temperature), i.e. Tc is earlier in time for all carrageenan contents and moves forward in time, getting closer to Tsg as the carrageenan content increases. For the gel-sol transition (heating), Tc is relatively closer to Tgs; it is greater than Tgs, i.e. later in time, for low carrageenan contents and moves backward as the carrageenan content increases.

  7. The positive impact of simultaneous implementation of the BD FocalPoint GS Imaging System and lean principles on the operation of gynecologic cytology.

    PubMed

    Wong, Rebecca; Levi, Angelique W; Harigopal, Malini; Schofield, Kevin; Chhieng, David C

    2012-02-01

    Our cytology laboratory, like many others, is under pressure to improve quality and provide test results faster while decreasing costs. We sought to address these issues by introducing new technology and lean principles. To determine the combined impact of the FocalPoint Guided Screener (GS) Imaging System (BD Diagnostics-TriPath, Burlington, North Carolina) and lean manufacturing principles on the turnaround time (TAT) and productivity of the gynecologic cytology operation. We established a baseline measure of the TAT for Papanicolaou tests. We then compared that to the performance after implementing the FocalPoint GS Imaging System and lean principles. The latter included value-stream mapping, workflow modification, and a first in-first out policy. The mean (SD) TAT for Papanicolaou tests before and after the implementation of FocalPoint GS Imaging System and lean principles was 4.38 (1.28) days and 3.20 (1.32) days, respectively. This represented a 27% improvement in the average TAT, which was statistically significant (P < .001). In addition, the productivity of staff improved 17%, as evidenced by the increase in slides screened from 8.85/h to 10.38/h. The false-negative fraction decreased from 1.4% to 0.9%, representing a 36% improvement. In our laboratory, the implementation of FocalPoint GS Imaging System in conjunction with lean principles resulted in a significant decrease in the average TAT for Papanicolaou tests and a substantial increase in the productivity of cytotechnologists while maintaining the diagnostic quality of gynecologic cytology.

  8. The impact of HMOs on hospital-based uncompensated care.

    PubMed

    Thorpe, K E; Seiber, E E; Florence, C S

    2001-06-01

Managed care in general and HMOs in particular have become the vehicle of choice for controlling health care spending in the private sector. By several accounts, managed care has achieved its cost-containment objectives. At the same time, the percentage of Americans without health insurance coverage continues to rise. For-profit and not-for-profit hospitals have traditionally financed care for the uninsured from profits derived from patients with insurance. Thus the relationship among growth in managed care and HMOs, hospital "profits," and care for the uninsured represents an important policy question. Using national data over an eight-year period, we find that a ten-percentage-point increase in managed care penetration is associated with a two-percentage-point reduction in hospital total profit margin and a 0.6-percentage-point decrease in uncompensated care.

  9. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stack of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.

  10. Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna

Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.

  11. Role of Erosion in Shaping Point Bars

    NASA Astrophysics Data System (ADS)

    Moody, J.; Meade, R.

    2012-04-01

    A powerful metaphor in fluvial geomorphology has been that depositional features such as point bars (and other floodplain features) constitute the river's historical memory in the form of uniformly thick sedimentary deposits waiting for the geomorphologist to dissect and interpret the past. For the past three decades, along the channel of Powder River (Montana USA) we have documented (with annual cross-sectional surveys and pit trenches) the evolution of the shape of three point bars that were created when an extreme flood in 1978 cut new channels across the necks of two former meander bends and radically shifted the location of a third bend. Subsequent erosion has substantially reshaped, at different time scales, the relic sediment deposits of varying age. At the weekly to monthly time scale (i.e., floods from snowmelt or floods from convective or cyclonic storms), the maximum scour depth was computed (by using a numerical model) at locations spaced 1 m apart across the entire point bar for a couple of the largest floods. The maximum predicted scour is about 0.22 m. At the annual time scale, repeated cross-section topographic surveys (25 during 32 years) indicate that net annual erosion at a single location can be as great as 0.5 m, and that the net erosion is greater than net deposition during 8, 16, and 32% of the years for the three point bars. On average, the median annual net erosion was 21, 36, and 51% of the net deposition. At the decadal time scale, an index of point bar preservation often referred to as completeness was defined for each cross section as the percentage of the initial deposit (older than 10 years) that was still remaining in 2011; computations indicate that 19, 41, and 36% of the initial deposits of sediment were eroded. 
Initial deposits were not uniform in thickness and often consisted of thicker pods of sediment connected by thin layers of sediment, or even isolated pods at different elevations across the point bar, in response to multiple floods during a water year. Erosion was often preferential and removed part or all of the pods at lower elevations, in time leaving what appears to be a random arrangement of sediment pods forming the point bar. Thus, we conclude that the erosional process is as important as the depositional process in shaping the final form of the point bar, and that point bars are not uniformly aggradational or transgressive deposits of sediment in which the age of the deposit increases monotonically downward at all locations across the point bar.

  12. Representativeness and optimal use of body mass index (BMI) in the UK Clinical Practice Research Datalink (CPRD)

    PubMed Central

    Bhaskaran, Krishnan; Forbes, Harriet J; Douglas, Ian; Leon, David A; Smeeth, Liam

    2013-01-01

Objectives To assess the completeness and representativeness of body mass index (BMI) data in the Clinical Practice Research Datalink (CPRD), and determine an optimal strategy for their use. Design Descriptive study. Setting Electronic healthcare records from primary care. Participants A random sample of one million patients from the UK CPRD primary care database, aged ≥16 years. Primary and secondary outcome measures BMI completeness in CPRD was evaluated by age, sex and calendar period. CPRD-based summary BMI statistics for each calendar year (2003–2010) were age-standardised and sex-standardised and compared with equivalent statistics from the Health Survey for England (HSE). Results BMI completeness increased over calendar time from 37% in 1990–1994 to 77% in 2005–2011, was higher among females and increased with age. When BMI at specific time points was assigned based on the most recent record, calendar-year-specific mean BMI statistics underestimated equivalent HSE statistics by 0.75–1.1 kg/m2. Restriction to those with a recent (≤3 years) BMI record resulted in mean BMI estimates closer to HSE (≤0.28 kg/m2 underestimation), but excluded up to 47% of patients. An alternative strategy of imputing up-to-date BMI based on modelled changes in BMI over time since the last available record also led to mean BMI estimates that were close to HSE (≤0.37 kg/m2 underestimation). Conclusions Completeness of BMI in CPRD increased over time and varied by age and sex. At a given point in time, a large proportion of the most recent BMIs are unlikely to reflect current BMI; consequent BMI misclassification might be reduced by employing model-based imputation of current BMI. PMID:24038008
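The two strategies compared above, restriction to a recent record versus model-based imputation, can be combined into one lookup rule. A hedged sketch, where the annual drift value is an assumed placeholder rather than the paper's fitted model:

```python
def bmi_at(records, query_year, max_age=3, annual_drift=0.15):
    """records: list of (year, bmi) in any order. Return the most recent BMI if
    it is within max_age years of query_year; otherwise impute a current BMI by
    adding a modelled mean annual change per elapsed year. The drift of
    0.15 kg/m2/year is an illustrative assumption, not the paper's estimate."""
    past = [(y, b) for y, b in records if y <= query_year]
    if not past:
        return None
    year, bmi = max(past)          # most recent record at or before query_year
    age = query_year - year
    if age <= max_age:
        return bmi                 # recent enough: use as-is
    return bmi + annual_drift * age  # stale: impute forward in time

recent = bmi_at([(2003, 26.0), (2006, 27.0)], 2008)  # recent record, used as-is
stale = bmi_at([(2000, 25.0)], 2008)                 # stale record, imputed upward
```

A real implementation would model drift by age and sex, as the paper's completeness patterns suggest a single constant would be too crude.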

  13. Simulating Ice Shelf Response to Potential Triggers of Collapse Using the Material Point Method

    NASA Astrophysics Data System (ADS)

    Huth, A.; Smith, B. E.

    2017-12-01

    Weakening or collapse of an ice shelf can reduce the buttressing effect of the shelf on its upstream tributaries, resulting in sea level rise as the flux of grounded ice into the ocean increases. Here we aim to improve sea level rise projections by developing a prognostic 2D plan-view model that simulates the response of an ice sheet/ice shelf system to potential triggers of ice shelf weakening or collapse, such as calving events, thinning, and meltwater ponding. We present initial results for Larsen C. Changes in local ice shelf stresses can affect flow throughout the entire domain, so we place emphasis on calibrating our model to high-resolution data and precisely evolving fracture-weakening and ice geometry throughout the simulations. We primarily derive our initial ice geometry from CryoSat-2 data, and initialize the model by conducting a dual inversion for the ice viscosity parameter and basal friction coefficient that minimizes mismatch between modeled velocities and velocities derived from Landsat data. During simulations, we implement damage mechanics to represent fracture-weakening, and track ice thickness evolution, grounding line position, and ice front position. Since these processes are poorly represented by the Finite Element Method (FEM) due to mesh resolution issues and numerical diffusion, we instead implement the Material Point Method (MPM) for our simulations. In MPM, the ice domain is discretized into a finite set of Lagrangian material points that carry all variables and are tracked throughout the simulation. Each time step, information from the material points is projected to a Eulerian grid where the momentum balance equation (shallow shelf approximation) is solved similarly to FEM, but essentially treating the material points as integration points. The grid solution is then used to determine the new positions of the material points and update variables such as thickness and damage in a diffusion-free Lagrangian frame. 
The grid does not store any variables permanently, and can be replaced at any time step. MPM naturally tracks the ice front and grounding line at a subgrid scale. MPM also facilitates the implementation of rift propagation in arbitrary directions, and therefore shows promise for predicting calving events. To our knowledge, this is the first application of MPM to ice flow modeling.
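The particle-grid-particle cycle described above can be sketched in 1D with linear shape functions. The following PIC-style toy is ours, not the authors' model: it omits stresses, damage, and all forces, so it only illustrates the projection (P2G), grid solve, and advection (G2P) steps; with no forces, total momentum is conserved exactly:

```python
def mpm_step(xs, vs, ms, dt, dx, n_nodes):
    """One PIC-style material point method step in 1D with linear shape
    functions: particles -> grid (mass, momentum), grid velocity,
    grid -> particles (velocity interpolation and advection)."""
    grid_m = [0.0] * n_nodes
    grid_p = [0.0] * n_nodes
    # P2G: scatter particle mass and momentum to the two surrounding nodes.
    for x, v, m in zip(xs, vs, ms):
        i = int(x / dx)
        w = x / dx - i          # weight of the right node; (1 - w) for the left
        for node, wt in ((i, 1.0 - w), (i + 1, w)):
            grid_m[node] += wt * m
            grid_p[node] += wt * m * v
    # Grid "solve": with no forces, nodal velocity is just momentum / mass.
    grid_v = [p / m if m > 0 else 0.0 for p, m in zip(grid_p, grid_m)]
    # G2P: interpolate nodal velocities back and advect the particles.
    new_xs, new_vs = [], []
    for x, v, m in zip(xs, vs, ms):
        i = int(x / dx)
        w = x / dx - i
        v_new = (1.0 - w) * grid_v[i] + w * grid_v[i + 1]
        new_xs.append(x + dt * v_new)
        new_vs.append(v_new)
    return new_xs, new_vs

xs, vs, ms = [0.4, 0.6, 1.1], [1.0, 1.0, -1.0], [1.0, 1.0, 1.0]
xs2, vs2 = mpm_step(xs, vs, ms, dt=0.01, dx=0.5, n_nodes=8)
```

In the full method, the grid step would solve the shallow-shelf momentum balance with the material points as integration points, and each point would also carry and update thickness and damage in its Lagrangian frame.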

  14. Volatility of linear and nonlinear time series

    NASA Astrophysics Data System (ADS)

    Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo

    2005-07-01

Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui, can be detected and quantified by studying the correlations in the magnitude series |ui|, the “volatility.” However, the origin of this empirical observation remains unclear, and the exact relation between the correlations in ui and the correlations in |ui| is still unknown. Here we develop analytical relations between the scaling exponent of a linear series ui and that of its magnitude series |ui|. Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long-range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (that represents the magnitude series) with an uncorrelated time series [that represents the sign series sgn(ui)]. We apply our techniques to daily deep ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) a broad multifractal spectrum.
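The proposed model can be sketched directly: generate a long-range correlated series by Fourier filtering, take its magnitudes as the volatility series, and multiply by uncorrelated random signs. The spectral exponent below is illustrative, and the O(n²) DFT is used only to stay dependency-free:

```python
import cmath
import math
import random

def long_range_correlated(n, beta, seed=0):
    """Real series with an approximate 1/f^beta power spectrum: inverse DFT of
    amplitudes k^(-beta/2) with random phases (Hermitian-symmetric spectrum)."""
    rng = random.Random(seed)
    spec = [0j] * n
    for k in range(1, n // 2):
        amp = k ** (-beta / 2.0)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spec[k] = amp * cmath.exp(1j * phase)
        spec[n - k] = spec[k].conjugate()  # keeps the series real-valued
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real for t in range(n)]

def multifractal_series(n, beta=0.8, seed=0):
    """The paper's construction in sketch form: a long-range correlated
    magnitude series multiplied by an uncorrelated random sign series."""
    rng = random.Random(seed + 1)
    mags = [abs(x) for x in long_range_correlated(n, beta, seed)]
    return [m * rng.choice((-1.0, 1.0)) for m in mags]

series = multifractal_series(256)
```

By construction, |series| carries the long-range correlations while the signs are white, which is exactly the separation of linear and nonlinear information the abstract describes.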

  15. Estimation of main diversification time-points of hantaviruses using phylogenetic analyses of complete genomes.

    PubMed

    Castel, Guillaume; Tordo, Noël; Plyusnin, Alexander

    2017-04-02

Because of the great variability of their reservoir hosts, hantaviruses are excellent models for evaluating the dynamics of virus-host co-evolution. Intriguing questions remain about the timescale of the diversification events that influenced this evolution. In this paper we attempted to provide the first estimate of the timing of hantavirus diversification, based on thirty-five available complete genomes representing five major groups of hantaviruses and the assumption of co-speciation of hantaviruses with their respective mammal hosts. Phylogenetic analyses were used to estimate the main diversification points during hantavirus evolution in mammals, while host diversification was mostly estimated from independent calibrators taken from fossil records. Our results support an earlier developed hypothesis of co-speciation of known hantaviruses with their respective mammal hosts, and hence a common ancestor for all hantaviruses carried by placental mammals. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Understanding the representativeness of FLUXNET for upscaling carbon flux from eddy covariance measurements

    DOE PAGES

    Kumar, Jitendra; Hoffman, Forrest M.; Hargrove, William W.; ...

    2016-08-23

Eddy covariance data from regional flux networks are direct in situ measurements of carbon, water, and energy fluxes and are of vital importance for understanding the spatio-temporal dynamics of the global carbon cycle. FLUXNET links regional networks of eddy covariance sites across the globe to quantify the spatial and temporal variability of fluxes at regional to global scales and to detect emergent ecosystem properties. This study presents an assessment of the representativeness of FLUXNET based on the recently released FLUXNET2015 data set. We present a detailed high-resolution analysis of the evolving representativeness of FLUXNET through time. Results provide quantitative insights into the extent to which various biomes are sampled by the network of networks, the role of the spatial distribution of the sites in the network-scale representativeness at any given time, and how that representativeness has changed through time due to changing operational status and data availability at sites in the network. To realize the full potential of FLUXNET observations for understanding emergent ecosystem properties at regional and global scales, we present an approach for upscaling eddy covariance measurements. Informed by the representativeness of observations at the flux sites in the network, the upscaled data reflect the spatio-temporal dynamics of the carbon cycle captured by the in situ measurements. In conclusion, this study presents a method for optimal use of the rich point measurements from FLUXNET to derive an understanding of upscaled carbon fluxes, which can be routinely updated as new data become available, and to direct network expansion by identifying regions poorly sampled by the current network.

  17. Time-lapse analysis of methane quantity in Mary Lee group of coal seams using filter-based multiple-point geostatistical simulation

    USGS Publications Warehouse

    Karacan, C. Özgen; Olea, Ricardo A.

    2013-01-01

The systematic approach presented in this paper marks the first time in the literature that history matching, TIs of GIPs, and filter simulations are used for degasification performance evaluation and for assessing GIP for mining safety. Results from this study showed that production history matching of coalbed methane wells to determine time-lapsed reservoir data could be used to compute spatial GIP and representative GIP TIs generated through Voronoi decomposition. Furthermore, filter simulations using point-wise data and TIs could be used to predict methane quantity in coal seams subjected to degasification. During the course of the study, it was shown that the material balance of gas produced by wellbores and the GIP reductions in coal seams predicted using filter simulations compared very well, showing the success of filter simulations for continuous variables in this case study. Quantitative results from filter simulations of GIP within the studied area showed that GIP was reduced from an initial ∼73 Bcf (median) to ∼46 Bcf (2011), representing a 37% decrease and varying spatially through degasification. It is forecast that there will be an additional ∼2 Bcf reduction in methane quantity between 2011 and 2015. This study and the presented results showed that the applied methodology and utilized techniques can be used to map GIP and its change within coal seams after degasification, which can further be used for ventilation design for methane control in coal mines.

  18. Image and information management system

    NASA Technical Reports Server (NTRS)

    Robertson, Tina L. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Kent, Peter C. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)

    2009-01-01

    A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places "hot spots", or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.
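The hierarchy of pictorials with actionable regions described above can be sketched as a small data structure. This is only an illustration of the idea, not the patented implementation; the names `Pictorial` and `drill_down` and the `(type, payload)` item records are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Pictorial:
    name: str
    hotspots: dict = field(default_factory=dict)   # region name -> child Pictorial
    items: list = field(default_factory=list)      # (type, payload) content records

def drill_down(view, region):
    # activating a hot spot yields the more detailed pictorial plus
    # counts-by-type of the content available in that narrower context
    child = view.hotspots[region]
    counts = {}
    for kind, _payload in child.items:
        counts[kind] = counts.get(kind, 0) + 1
    return child, counts
```

In this sketch the counts-by-type feedback falls out of a simple tally over the content records attached to the selected node.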

  19. Image and information management system

    NASA Technical Reports Server (NTRS)

    Robertson, Tina L. (Inventor); Kent, Peter C. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)

    2007-01-01

    A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places hot spots, or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.

  20. Causal structure of oscillations in gene regulatory networks: Boolean analysis of ordinary differential equation attractors.

    PubMed

    Sun, Mengyang; Cheng, Xianrui; Socolar, Joshua E S

    2013-06-01

    A common approach to the modeling of gene regulatory networks is to represent activating or repressing interactions using ordinary differential equations for target gene concentrations that include Hill function dependences on regulator gene concentrations. An alternative formulation represents the same interactions using Boolean logic with time delays associated with each network link. We consider the attractors that emerge from the two types of models in the case of a simple but nontrivial network: a figure-8 network with one positive and one negative feedback loop. We show that the different modeling approaches give rise to the same qualitative set of attractors with the exception of a possible fixed point in the ordinary differential equation model in which concentrations sit at intermediate values. The properties of the attractors are most easily understood from the Boolean perspective, suggesting that time-delay Boolean modeling is a useful tool for understanding the logic of regulatory networks.
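A minimal sketch of the Boolean side of this comparison: exhaustive attractor enumeration for a small synchronous Boolean network (no explicit time delays, which the paper adds). The two-node negative feedback rule below is an illustrative stand-in, not the paper's figure-8 network; the same enumeration applies to any update rule.

```python
from itertools import product

def attractors(update, n):
    # exhaustively iterate the synchronous update from every one of the
    # 2**n states until the trajectory revisits a state, then record the
    # terminal cycle in a canonical rotation so duplicates collapse
    found = set()
    for state in product((0, 1), repeat=n):
        seen = []
        while state not in seen:
            seen.append(state)
            state = update(state)
        cycle = seen[seen.index(state):]
        k = cycle.index(min(cycle))
        found.add(tuple(cycle[k:] + cycle[:k]))
    return found

# illustrative rule set: x activates y, y represses x (a negative feedback loop)
negative_loop = lambda s: (1 - s[1], s[0])
```

For this rule all four states form a single length-4 cycle, the Boolean analogue of the sustained oscillation a negative feedback loop produces in the ODE picture.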

  1. Ask your doctor: the construction of smoking in advertising posters produced in 1946 and 2004.

    PubMed

    Street, Annette F

    2004-12-01

    This paper examines two full-page A3 poster advertisements in mass magazines produced at two time points over a 60-year period depicting smoking and its effects, with particular relation to lung cancer. Each poster represents the social and cultural milieu of its time. The writings of Foucault are used to explore the disciplinary technologies of sign systems as depicted in the two posters. The relationships between government, tobacco companies and drug companies and the technologies of production are examined with regard to the development of smoking cessation strategies. The technologies of power are associated with the constructions of risk and lifestyles. The technologies of the self locate smokers as culpable subjects responsible for their individual health. Finally, the meshing of these technologies places the doctor in the frame as "authoritative knower" and representative of expert systems.

  2. Episodic-like memory trace in awake replay of hippocampal place cell activity sequences.

    PubMed

    Takahashi, Susumu

    2015-10-20

    Episodic memory retrieval of events at a specific place and time is effective for future planning. Sequential reactivation of the hippocampal place cells along familiar paths while the animal pauses is well suited to such a memory retrieval process. It is, however, unknown whether this awake replay represents events occurring along the path. Using a subtask switching protocol in which the animal experienced three subtasks as 'what' information in a maze, I here show that the replay represents a trial type, consisting of path and subtask, in terms of neuronal firing timings and rates. The actual trial type to be rewarded could only be reliably predicted from replays that occurred at the decision point. This trial-type representation implies that not only 'where and when' but also 'what' information is contained in the replay. This result supports the view that awake replay is an episodic-like memory retrieval process.

  3. Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey

    1995-01-01

    An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
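The low-order polynomial fit through the key Pareto designs can be sketched as follows. The function name and the (mass, frequency) sample values are illustrative only; the actual key points come from the Taguchi screening described above.

```python
import numpy as np

def pareto_curve(key_points):
    # fit a low-order polynomial through the key designs (e.g. the
    # minimum-mass, maximum-frequency, and maximum frequency-to-mass
    # ratio trusses) as a proxy for the full Pareto-optimal set
    mass, freq = zip(*key_points)
    return np.polynomial.Polynomial.fit(mass, freq, deg=len(key_points) - 1)
```

The returned polynomial can then predict frequency for any optimized-truss mass between the key points, which is how the Pareto-optimal design curve is used for preliminary sizing.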

  4. 40 CFR 430.113 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Fine and Lightweight Papers from Purchased Pulp Subcategory § 430.113 Effluent limitations... existing point source subject to this subpart shall achieve the following effluent limitations representing...

  5. 40 CFR 430.123 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and Paperboard From Purchased Pulp Subcategory § 430.123 Effluent... existing point source subject to this subpart shall achieve the following effluent limitations representing...

  6. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. Then, to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane converts the original 3D data into a 2D image. To determine the location of curbs in the image, image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable to both laser scanner and stereo vision 3D data because it is independent of the scanning geometry. The method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. 
That point cloud comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
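The rasterization and morphological steps of such a pipeline can be sketched in a few lines. This is a simplified illustration under assumed parameters (cell size, curb height), not the paper's algorithm: it keeps the maximum elevation per cell and flags curb-sized height discontinuities with a 3x3 morphological gradient.

```python
import numpy as np

def rasterize(points, cell=0.1):
    # project the 3D cloud onto the XY plane: one cell per (x, y) bin,
    # keeping the maximum elevation (z) observed in each cell
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)
    grid = np.full((xy[:, 0].max() + 1, xy[:, 1].max() + 1), -np.inf)
    for (i, j), z in zip(xy, points[:, 2]):
        grid[i, j] = max(grid[i, j], z)
    return grid  # empty cells stay -inf; a real pipeline would fill them

def morph_gradient(img):
    # 3x3 morphological gradient (dilation minus erosion) via shifted copies
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return stack.max(axis=0) - stack.min(axis=0)

def curb_mask(points, cell=0.1, jump=0.08):
    # flag raster cells whose local height discontinuity is curb-sized
    return morph_gradient(rasterize(points, cell)) >= jump
```

On a synthetic road/sidewalk step of 12 cm, the mask lights up only along the discontinuity, which is the behavior the thresholding-plus-morphology stage relies on.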

  7. A novel method for sampling the suspended sediment load in the tidal environment using bi-directional time-integrated mass-flux sediment (TIMS) samplers

    NASA Astrophysics Data System (ADS)

    Elliott, Emily A.; Monbureau, Elaine; Walters, Glenn W.; Elliott, Mark A.; McKee, Brent A.; Rodriguez, Antonio B.

    2017-12-01

    Identifying the source and abundance of sediment transported within tidal creeks is essential for studying the connectivity between coastal watersheds and estuaries. The fine-grained suspended sediment load (SSL) makes up a substantial portion of the total sediment load carried within an estuarine system and efficient sampling of the SSL is critical to our understanding of nutrient and contaminant transport, anthropogenic influence, and the effects of climate. Unfortunately, traditional methods of sampling the SSL, including instantaneous measurements and automatic samplers, can be labor intensive, expensive and often yield insufficient mass for comprehensive geochemical analysis. In estuaries this issue is even more pronounced due to bi-directional tidal flow. This study tests the efficacy of a time-integrated mass-flux sediment (TIMS) sampler design, originally developed for uni-directional flow within the fluvial environment, modified in this work for implementation in the tidal environment under bi-directional flow conditions. Our new TIMS design utilizes an 'L' shaped outflow tube to prevent backflow, and when deployed in mirrored pairs, each sampler collects sediment uniquely in one direction of tidal flow. Laboratory flume experiments using dye and particle image velocimetry (PIV) were used to characterize the flow within the sampler, specifically, to quantify the settling velocities and identify stagnation points. Further laboratory tests of sediment indicate that bidirectional TIMS capture up to 96% of incoming SSL across a range of flow velocities (0.3-0.6 m s-1). The modified TIMS design was tested in the field at two distinct sampling locations within the tidal zone. Single-time point suspended sediment samples were collected at high and low tide and compared to time-integrated suspended sediment samples collected by the bi-directional TIMS over the same four-day period. 
Particle-size compositions from the bi-directional TIMS were representative of the array of single-time-point samples but yielded greater mass, representative of the flow and sediment-concentration conditions at the site throughout the deployment period. This work demonstrates the efficacy of the modified bi-directional TIMS design, offering a novel tool for collecting suspended sediment in the tidally dominated portion of the watershed.

  8. Leaps and lulls in the developmental transcriptome of Dictyostelium discoideum.

    PubMed

    Rosengarten, Rafael David; Santhanam, Balaji; Fuller, Danny; Katoh-Kurasawa, Mariko; Loomis, William F; Zupan, Blaz; Shaulsky, Gad

    2015-04-13

    Development of the soil amoeba Dictyostelium discoideum is triggered by starvation. When placed on a solid substrate, the starving solitary amoebae cease growth, communicate via extracellular cAMP, aggregate by tens of thousands and develop into multicellular organisms. Early phases of the developmental program are often studied in cells starved in suspension while cAMP is provided exogenously. Previous studies revealed massive shifts in the transcriptome under both developmental conditions and a close relationship between gene expression and morphogenesis, but were limited by the sampling frequency and the resolution of the methods. Here, we combine the superior depth and specificity of RNA-seq-based analysis of mRNA abundance with high frequency sampling during filter development and cAMP pulsing in suspension. We found that the developmental transcriptome exhibits mostly gradual changes interspersed by a few instances of large shifts. For each time point we treated the entire transcriptome as a single phenotype, and were able to characterize development as groups of similar time points separated by gaps. The grouped time points represented gradual changes in mRNA abundance, or molecular phenotype, and the gaps represented times during which many genes are differentially expressed rapidly, and thus the phenotype changes dramatically. Comparing developmental experiments revealed that gene expression in filter-developed cells lagged behind that in cells treated with exogenous cAMP in suspension. The high sampling frequency revealed many genes whose regulation is reproducibly more complex than indicated by previous studies. Gene Ontology enrichment analysis suggested that the transition to multicellularity coincided with rapid accumulation of transcripts associated with DNA processes and mitosis. Later development included the up-regulation of organic signaling molecules and co-factor biosynthesis. 
Our analysis also demonstrated a high level of synchrony among the developing structures throughout development. Our data describe D. discoideum development as a series of coordinated cellular and multicellular activities. Coordination occurred within fields of aggregating cells and among multicellular bodies, such as mounds or migratory slugs that experience both cell-cell contact and various soluble signaling regimes. These time courses, sampled at the highest temporal resolution to date in this system, provide a comprehensive resource for studies of developmental gene expression.
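The "groups of similar time points separated by gaps" idea can be sketched as a simple segmentation of a time course: treat each snapshot as a vector, compute step sizes between consecutive snapshots, and call a step a "leap" when it is much larger than the typical step. The threshold rule (`gap_factor` times the median step) is an assumption for illustration, not the paper's method.

```python
import numpy as np

def segment_timepoints(profiles, gap_factor=2.0):
    # distances between consecutive transcriptome snapshots; a 'leap' is a
    # step much larger than the typical (median) step, and the runs of time
    # points between leaps represent gradually changing molecular phenotypes
    X = np.asarray(profiles, dtype=float)
    steps = np.linalg.norm(np.diff(X, axis=0), axis=1)
    cutoff = gap_factor * np.median(steps)
    groups, current = [], [0]
    for i, s in enumerate(steps, start=1):
        if s > cutoff:
            groups.append(current)
            current = []
        current.append(i)
    groups.append(current)
    return groups
```

A time course with two plateaus separated by one large shift segments into two groups of time points.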

  9. [Corporate and technological changes in São Paulo medicine in 1930].

    PubMed

    Mota, André; Schraiber, Lilia Blima

    2009-01-01

    Through a historical study of the corporate and technological changes experienced by doctors in São Paulo in the 1930s, we seek to identify how changes in equipment and knowledge accompanied the emergence of specialties, which led to corporate changes and rearrangements in the face of the dilemmas introduced by the Getúlio Vargas government and its policy of centralizing power. Symbolic and representational connections are identified between doctors considered 'old school' and those who represented the 'new' times in medicine, evidencing the clashes between these currents vis-à-vis the specialization movement and particular landmarks in the history of São Paulo medicine.

  10. User's manual for the Graphical Constituent Loading Analysis System (GCLAS)

    USGS Publications Warehouse

    Koltun, G.F.; Eberle, Michael; Gray, J.R.; Glysson, G.D.

    2006-01-01

    This manual describes the Graphical Constituent Loading Analysis System (GCLAS), an interactive cross-platform program for computing the mass (load) and average concentration of a constituent that is transported in stream water over a period of time. GCLAS computes loads as a function of an equal-interval streamflow time series and an equal- or unequal-interval time series of constituent concentrations. The constituent-concentration time series may be composed of measured concentrations or a combination of measured and estimated concentrations. GCLAS is not intended for use in situations where concentration data (or an appropriate surrogate) are collected infrequently or where an appreciable number of the concentration values are censored. It is assumed that the constituent-concentration time series used by GCLAS adequately represents the true time-varying concentration. Commonly, measured constituent concentrations are collected at a frequency that is less than ideal (from a load-computation standpoint), so estimated concentrations must be inserted in the time series to better approximate the expected chemograph. GCLAS provides tools to facilitate estimation and entry of instantaneous concentrations for that purpose. Water-quality samples collected for load computation frequently are collected in a single vertical or at a single point in a stream cross section. Several factors, some of which may vary as a function of time and (or) streamflow, can affect whether the sample concentrations are representative of the mean concentration in the cross section. GCLAS provides tools to aid the analyst in assessing whether concentrations in samples collected in a single vertical or at a single point in a stream cross section exhibit systematic bias with respect to the mean concentrations. In cases where bias is evident, the analyst can construct coefficient relations in GCLAS to reduce or eliminate the observed bias. 
GCLAS can export load and concentration data in formats suitable for entry into the U.S. Geological Survey's National Water Information System. GCLAS can also import and export data in formats that are compatible with various commonly used spreadsheet and statistics programs.
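The core load computation described above amounts to integrating instantaneous mass flux (discharge times concentration) over the record. A minimal sketch, assuming equal-interval series and trapezoidal integration (the function names and unit choices are illustrative, not GCLAS's internals):

```python
import numpy as np

def constituent_load(q_cms, c_mgl, dt_s):
    # instantaneous flux: Q [m^3/s] * C [mg/L] = g/s, since 1 mg/L == 1 g/m^3;
    # integrate over the equal-interval record with the trapezoidal rule
    flux = np.asarray(q_cms, float) * np.asarray(c_mgl, float)      # g/s
    return float(np.sum((flux[1:] + flux[:-1]) / 2.0) * dt_s)       # grams

def flow_weighted_mean_conc(q_cms, c_mgl, dt_s):
    # average concentration = total mass / total water volume (g/m^3 == mg/L)
    q = np.asarray(q_cms, float)
    volume = float(np.sum((q[1:] + q[:-1]) / 2.0) * dt_s)           # m^3
    return constituent_load(q_cms, c_mgl, dt_s) / volume
```

For example, a steady 2 m^3/s flow at 3 mg/L over four 10-second intervals carries 240 g, and the flow-weighted mean concentration recovers 3 mg/L.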

  11. The Relationship between OCT-measured Central Retinal Thickness and Visual Acuity in Diabetic Macular Edema

    PubMed Central

    2008-01-01

    Objective To compare optical coherence tomography (OCT)-measured retinal thickness and visual acuity in eyes with diabetic macular edema (DME) both before and after macular laser photocoagulation. Design Cross-sectional and longitudinal study. Participants 210 subjects (251 eyes) with DME enrolled in a randomized clinical trial of laser techniques. Methods Retinal thickness was measured with OCT and visual acuity was measured with the electronic-ETDRS procedure. Main Outcome Measures OCT-measured center point thickness and visual acuity. Results The correlation coefficients for visual acuity versus OCT center point thickness were 0.52 at baseline and 0.49, 0.36, and 0.38 at 3.5, 8, and 12 months post-laser photocoagulation. The slope of the best fit line to the baseline data was approximately 4.4 letters (95% C.I.: 3.5, 5.3) better visual acuity for every 100 microns decrease in center point thickness at baseline with no important difference at follow-up visits. Approximately one-third of the variation in visual acuity could be predicted by a linear regression model that incorporated OCT center point thickness, age, hemoglobin A1C, and severity of fluorescein leakage in the center and inner subfields. The correlation between change in visual acuity and change in OCT center point thickening 3.5 months after laser treatment was 0.44 with no important difference at the other follow-up times. A subset of eyes showed paradoxical improvements in visual acuity with increased center point thickening (7%–17% at the three time points) or paradoxical worsening of visual acuity with a decrease in center point thickening (18%–26% at the three time points). Conclusions There is modest correlation between OCT-measured center point thickness and visual acuity, and modest correlation of changes in retinal thickening and visual acuity following focal laser treatment for DME. 
However, a wide range of visual acuity may be observed for a given degree of retinal edema, and paradoxical increases in center point thickening with increases in visual acuity, as well as paradoxical decreases in center point thickening with decreases in visual acuity, were not uncommon. Thus, although OCT measurements of retinal thickness represent an important tool in clinical evaluation, they cannot reliably substitute for visual acuity at a given point in time. This study does not address whether short-term changes on OCT are predictive of long-term effects on visual acuity. PMID:17123615
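The reported slope (about 4.4 letters of visual acuity per 100 microns of thinning) is just the coefficient of an ordinary least-squares line. A minimal sketch on synthetic, exactly linear data (the function name and sample values are illustrative, not the study's data):

```python
import numpy as np

def letters_per_100_microns(thickness_um, va_letters):
    # slope of the best-fit line, rescaled to letters gained
    # per 100 um decrease in center point thickness
    slope, _intercept = np.polyfit(thickness_um, va_letters, 1)
    return -slope * 100.0
```

Fitting data generated with a slope of -0.044 letters/um recovers 4.4 letters per 100 um.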

  12. Multi-star processing and gyro filtering for the video inertial pointing system

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.

    1976-01-01

    The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and consideration of error performance and singularities leads to star-pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter which uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro stabilized gimbal is developed and used to validate the approach to the gyro filtering.
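The essence of a steady-state (fixed-gain) gyro filter can be sketched in one axis: propagate attitude by integrating the bias-corrected gyro, then correct both attitude and gyro-bias estimates with the star-position residual. This is a generic single-axis illustration with assumed gains, not the VIP filter; `gyro_star_update`, `k1`, and `k2` are hypothetical names.

```python
def gyro_star_update(theta, bias, gyro_rate, star_theta, dt, k1=0.2, k2=0.05):
    # predict attitude from the bias-corrected gyro measurement
    theta_pred = theta + (gyro_rate - bias) * dt
    # residual between the star-derived pointing error and the prediction
    resid = star_theta - theta_pred
    # fixed-gain corrections to attitude and gyro-bias estimates
    return theta_pred + k1 * resid, bias - k2 * resid
```

With a constant 0.1 rad/s gyro bias and a stationary star reference, repeated updates drive the bias estimate to 0.1 and the attitude error to zero, which is the drift-removal role the Kalman filter plays in the VIP loop.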

  13. Using Pattern Recognition and Discriminance Analysis to Predict Critical Events in Large Signal Databases

    NASA Astrophysics Data System (ADS)

    Feller, Jens; Feller, Sebastian; Mauersberg, Bernhard; Mergenthaler, Wolfgang

    2009-09-01

    Many applications in plant management require close monitoring of equipment performance, in particular with the objective of preventing certain critical events. At each point in time, the information available to classify the criticality of the process is represented through the historic signal database as well as the actual measurement. This paper presents an approach to detect and predict critical events, based on pattern recognition and discriminance analysis.
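A minimal discriminant-style classifier in this spirit assigns a new measurement to the class ('normal' vs 'critical') whose centroid in the historic signal database is nearest. This nearest-centroid sketch is an illustrative simplification of discriminance analysis, not the paper's method.

```python
import numpy as np

def nearest_centroid_classify(history, labels, signal):
    # centroid of the historic signals for each class, then assign the
    # new measurement to the class with the closest centroid
    history = np.asarray(history, float)
    signal = np.asarray(signal, float)
    classes = sorted(set(labels))
    cents = {c: history[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(signal - cents[c]))
```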

  14. Long term economic relationships from cointegration maps

    NASA Astrophysics Data System (ADS)

    Vicente, Renato; Pereira, Carlos de B.; Leite, Vitor B. P.; Caticha, Nestor

    2007-07-01

    We employ the Bayesian framework to define a cointegration measure aimed to represent long term relationships between time series. For visualization of these relationships we introduce a dissimilarity matrix and a map based on the sorting points into neighborhoods (SPIN) technique, which has been previously used to analyze large data sets from DNA arrays. We exemplify the technique in three data sets: US interest rates (USIR), monthly inflation rates and gross domestic product (GDP) growth rates.

  15. The Hotel Industry's Role in Combatting Sex Trafficking

    DTIC Science & Technology

    2017-12-01

    expectations that society has of organizations at a given point in time.”10 Carroll posits that businesses must first, “produce goods and services that...reuse programs that offer guests the option to forego daily laundering services are now commonly used throughout the lodging industry to conserve...million in funding—which represents an increase of $5.9 million over FY2015—to 33 victim service providers.47 Government funding offers agencies and NGOs

  16. New Compatible Estimators for Survivor Growth and Ingrowth from Remeasured Horizontal Point Samples

    Treesearch

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1989-01-01

    Forest volume growth between two measurements is often decomposed into the components of survivor growth (S), ingrowth (I), mortality (M), and cut (C) (for example, Beers 1962 or Van Deusen et al. 1986). Net change between volumes at times 1 and 2 (V2 - V1) is then represented by the equation V2 - V1 = S + I - M - C. Two new compatible pairs of estimators for S and I in this...
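The decomposition above is a simple mass balance, shown here directly (the function name is illustrative):

```python
def net_volume_change(survivor_growth, ingrowth, mortality, cut):
    # V2 - V1 = S + I - M - C
    return survivor_growth + ingrowth - mortality - cut
```

For example, S = 10, I = 4, M = 3, C = 2 gives a net volume change of 9.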

  17. Guidance, Navigation, and Control Performance for the GOES-R Spacecraft

    NASA Technical Reports Server (NTRS)

    Chapel, Jim; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Gaylor, David; Freesland, Doug; Krimchansky, Alexander

    2014-01-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the first of the next generation geostationary weather satellites. The series represents a dramatic increase in Earth observation capabilities, with 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands. GOES-R also provides unprecedented availability, with less than 120 minutes per year of lost observation time. This paper presents the Guidance Navigation & Control (GN&C) requirements necessary to realize the ambitious pointing, knowledge, and Image Navigation and Registration (INR) objectives of GOES-R. Because the suite of instruments is sensitive to disturbances over a broad spectral range, a high fidelity simulation of the vehicle has been created with modal content over 500 Hz to assess the pointing stability requirements. Simulation results are presented showing acceleration, shock response spectra (SRS), and line of sight (LOS) responses for various disturbances from 0 Hz to 512 Hz. Simulation results demonstrate excellent performance relative to the pointing and pointing stability requirements, with LOS jitter for the isolated instrument platform of approximately 1 micro-rad. Attitude and attitude rate knowledge are provided directly to the instrument with an accuracy defined by the Integrated Rate Error (IRE) requirements. The data are used internally for motion compensation. The final piece of the INR performance is orbit knowledge, which GOES-R achieves with GPS navigation. Performance results are shown demonstrating compliance with the 50 to 75 m orbit position accuracy requirements. As presented in this paper, the GN&C performance supports the challenging mission objectives of GOES-R.

  18. Simultaneous optical flow and source estimation: Space–time discretization and preconditioning

    PubMed Central

    Andreev, R.; Scherzer, O.; Zulehner, W.

    2015-01-01

    We consider the simultaneous estimation of an optical flow field and an illumination source term in a movie sequence. The particular optical flow equation is obtained by assuming that the image intensity is a conserved quantity up to possible sources and sinks which represent varying illumination. We formulate this problem as an energy minimization problem and propose a space–time simultaneous discretization for the optimality system in saddle-point form. We investigate a preconditioning strategy that renders the discrete system well-conditioned uniformly in the discretization resolution. Numerical experiments complement the theory. PMID:26435561

  19. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) has an important role in global radiative forcing of climate but its emission estimates have larger uncertainties than carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation time for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents CH4 background density before or after targeting a point source. By combining satellite-measured enhancement of the CH4 column density and surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The observation of the cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using time series of satellite data. 
We will propose that the next generation instruments for accurate anthropogenic CO2 and CH4 flux estimation have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
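The basic mass-balance idea behind combining a column enhancement with wind data can be sketched as a single product: the mass flowing through a vertical plane across the plume is the column enhancement times the advection speed times the crosswind extent. This is a textbook-style simplification with an illustrative function name, not the authors' retrieval.

```python
def plume_flux_g_s(delta_column_g_m2, wind_m_s, width_m):
    # emission estimate from a plume transect:
    # column enhancement x wind speed x crosswind plume width
    return delta_column_g_m2 * wind_m_s * width_m
```

For example, a 0.01 g/m2 enhancement advected at 5 m/s across a 2 km wide plume implies roughly 100 g/s.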

  20. Instance-based learning: integrating sampling and repeated decisions from experience.

    PubMed

    Gonzalez, Cleotilde; Dutt, Varun

    2011-10-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options, which causes them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options for a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, where IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making. © 2011 American Psychological Association
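The claim that the two paradigms share machinery, differing only in the stopping point and in whether outcomes are consequential, can be sketched with one function. This is a deliberately stripped-down illustration (round-robin exploration, sample-mean "blending"), not the IBLT/ACT-R model; all names are hypothetical.

```python
def blended_value(observations):
    # sample mean as a simple stand-in for IBLT's activation-weighted blending
    return sum(observations) / len(observations)

def decide_from_experience(options, payoff, n_trials, sampling=True):
    # one loop serves both paradigms: in the sampling paradigm the
    # exploration trials are consequence-free and only the final choice
    # pays; in repeated-choice every trial contributes to earnings
    names = list(options)
    obs = {o: [] for o in names}
    earnings = 0.0
    for t in range(n_trials):          # the stopping point: fixed here,
        o = names[t % len(names)]      # variable in the sampling paradigm
        x = payoff(o)
        obs[o].append(x)
        if not sampling:
            earnings += x
    final = max(names, key=lambda o: blended_value(obs[o]))
    return final, earnings
```

With a deterministic better option, both modes converge on the same final choice; only the earnings bookkeeping differs.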

  1. Gradient-free determination of isoelectric points of proteins on chip.

    PubMed

    Łapińska, Urszula; Saar, Kadi L; Yates, Emma V; Herling, Therese W; Müller, Thomas; Challa, Pavan K; Dobson, Christopher M; Knowles, Tuomas P J

    2017-08-30

    The isoelectric point (pI) of a protein is a key characteristic that influences its overall electrostatic behaviour. The majority of conventional methods for the determination of the isoelectric point of a molecule rely on the use of spatial gradients in pH, although significant practical challenges are associated with such techniques, notably the difficulty in generating a stable and well-controlled pH gradient. Here, we introduce a gradient-free approach, exploiting a microfluidic platform which allows us to perform rapid pH changes on chip and probe the electrophoretic mobility of species in a controlled field. In particular, in this approach, the pH of the electrolyte solution is modulated in time rather than in space, as is the case for conventional determinations of the isoelectric point. To demonstrate the general applicability of this platform, we have measured the isoelectric points of a representative set of seven proteins, bovine serum albumin, β-lactoglobulin, ribonuclease A, ovalbumin, human transferrin, ubiquitin and myoglobin, in microlitre sample volumes. The ability to conduct measurements in free solution thus provides the basis for the rapid determination of isoelectric points of proteins under a wide variety of solution conditions and in small volumes.
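The readout of such a measurement reduces to locating the pH at which the electrophoretic mobility changes sign. A minimal sketch, with invented pH-mobility pairs standing in for on-chip measurements:

```python
import numpy as np

# hypothetical electrophoretic mobilities measured at a series of buffer pHs;
# the isoelectric point is the pH at which the mobility crosses zero
ph = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
mobility = np.array([2.1, 1.2, 0.3, -0.6, -1.4])  # arbitrary units

def isoelectric_point(ph, mu):
    # locate the sign change and interpolate linearly to mu = 0
    i = np.where(np.diff(np.sign(mu)) != 0)[0][0]
    return ph[i] - mu[i] * (ph[i + 1] - ph[i]) / (mu[i + 1] - mu[i])

pi_est = isoelectric_point(ph, mobility)
print(round(pi_est, 2))  # prints 6.33
```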

  2. Growing Degree Vegetation Production Index (GDVPI): A Novel and Data-Driven Approach to Delimit Season Cycles

    NASA Astrophysics Data System (ADS)

    Graham, W. D.; Spruce, J.; Ross, K. W.; Gasser, J.; Grulke, N.

    2014-12-01

    Growing Degree Vegetation Production Index (GDVPI) is a parametric approach to delimiting vegetation seasonal growth and decline cycles using incremental growing degree days (GDD) and NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) normalized difference vegetation index (NDVI) 8-day composite cumulative integral data. We obtain a specific location's daily minimum and maximum temperatures from the nearest National Oceanic and Atmospheric Administration (NOAA) weather stations posted on the National Climate Data Center (NCDC) Climate Data Online (CDO) archive and compute GDD. The date range for this study is January 1, 2000 through December 31, 2012. We employ a novel process, a repeating logistic product (RLP), to compensate for short-term weather variability and data drops from the recording stations and fit a curve to the median daily GDD values, adjusting for asymmetry, amplitude, and phase shift to minimize the sum of squared errors between the observed and predicted GDD. The resulting curve, here referred to as the surrogate GDD, is the time-temperature phasing parameter used to convert Cartesian NDVI values into polar coordinate pairs, multiplying the NDVI values as the radial coordinate by the cosine and sine of the surrogate GDD as the angular coordinate. Depending on the vegetation type and the original NDVI curve, the polar NDVI curve may be nearly circular, kidney-shaped, or pear-shaped for conifers, deciduous vegetation, or agricultural crops, respectively. We examine the points of tangency about the polar coordinate NDVI curve, identifying slope values of 1, 0, -1, or infinity, as each of these represents a natural inflection point. Lines connecting the origin to each tangent point illustrate and quantify the parametric segmentation of the growing season based on the ostensible dependency between GDD and NDVI. Furthermore, the area contained by each segment represents the apparent vegetation production. 
A particular benefit is that the inflection points are determined near real-time, as MODIS NDVI, 8-day composite data become available, affording an effective forecasting and hindcasting tool.
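The Cartesian-to-polar conversion described above can be sketched in a few lines. The sinusoidal surrogate GDD and synthetic NDVI series below are invented stand-ins for the fitted RLP curve and the MODIS composites:

```python
import numpy as np

days = np.arange(365)
# assumed surrogate GDD phase (0..1 over the year); the paper instead fits
# an RLP curve to median daily GDD from NOAA station data
surrogate_gdd = 0.5 * (1 - np.cos(2 * np.pi * days / 365))
theta = 2 * np.pi * surrogate_gdd                           # angular coordinate
ndvi = 0.4 + 0.3 * np.sin(2 * np.pi * (days - 60) / 365)    # synthetic NDVI

# NDVI as the radial coordinate, surrogate GDD as the angle
x = ndvi * np.cos(theta)
y = ndvi * np.sin(theta)
```

Inflection-point candidates are then the places on the (x, y) curve where the tangent slope dy/dx equals 1, 0, -1, or becomes infinite.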

  3. Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter

    PubMed Central

    Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.

    2016-01-01

    Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
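The encoding-decoding idea can be illustrated with a kernel-density sketch: the joint intensity over position and mark is estimated from training spikes, and a new spike's mark is decoded by evaluating that density across positions. The one-dimensional mark, the linear mark-position relationship, and the bandwidths below are invented and far simpler than the paper's four-dimensional tetrode amplitudes and full marked-point-process filter.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical training data: spike positions on a 100-unit track and 1-D
# mark amplitudes that happen to increase along the track
train_pos = rng.uniform(0, 100, 500)
train_mark = 0.5 * train_pos + rng.normal(0, 2, 500)

grid = np.linspace(0, 100, 201)

def decode(mark, pos_bw=5.0, mark_bw=2.0):
    # KDE of the joint mark intensity lambda(x, m), evaluated on the grid;
    # the argmax plays the role of the Bayes-rule position estimate
    w = np.exp(-0.5 * ((mark - train_mark) / mark_bw) ** 2)
    kern = np.exp(-0.5 * ((grid[:, None] - train_pos[None, :]) / pos_bw) ** 2)
    dens = (w[None, :] * kern).sum(axis=1)
    return grid[np.argmax(dens)]

# a spike with amplitude ~30 should decode near position 60 in this toy setup
print(decode(30.0))
```

No spike sorting appears anywhere: the mark itself, not a cluster label, carries the information.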

  4. Robust estimation of pulse wave transit time using group delay.

    PubMed

    Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C

    2014-03-01

    To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves. Flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point, using the half-maximum of the curves, and TT-wave, using cross-correlation. High temporal resolution flow images were examined at multiple downsampling rates to assess the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While the TT-GD and the TT-wave produced comparable results for velocity and flow waveforms, the TT-point resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8±2.7 msec; coefficient of variability: 8.7%). The TT-GD method was the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer. Copyright © 2013 Wiley Periodicals, Inc.
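The frequency-domain idea can be sketched as reading the delay off the slope of the cross-spectrum phase (the group delay). The synthetic Gaussian pulse, the energy threshold, and the single-band fit below are illustrative simplifications of the TT-GD filter model:

```python
import numpy as np

def group_delay(x, y, dt):
    """Estimate the delay of y relative to x from the cross-spectrum phase slope."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    cross = X * np.conj(Y)                      # phase = +2*pi*f*delay when y lags x
    f = np.fft.rfftfreq(len(x), dt)
    keep = np.abs(X) > 0.01 * np.abs(X).max()   # drop noisy low-energy bins
    phase = np.unwrap(np.angle(cross[keep]))
    slope = np.polyfit(f[keep], phase, 1)[0]
    return slope / (2 * np.pi)

# synthetic "ascending" pulse and a copy delayed by 10 samples (40 ms)
dt = 0.004
t = np.arange(512) * dt
x = np.exp(-((t - 0.5) / 0.05) ** 2)
y = np.roll(x, 10)
print(round(group_delay(x, y, dt) * 1000, 1))  # prints 40.0
```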

  5. The interaction between atomic displacement cascades and tilt symmetrical grain boundaries in α-zirconium

    NASA Astrophysics Data System (ADS)

    Kapustin, P.; Svetukhin, V.; Tikhonchev, M.

    2017-06-01

    Atomic displacement cascade simulations near symmetric tilt grain boundaries (GBs) in hexagonal close-packed zirconium were performed in this paper, followed by analysis of the resulting defect structures. Four symmetrical tilt GBs - ∑14?, ∑14? with the axis of rotation [0 0 0 1] and ∑32?, ∑32? with the axis of rotation ? - were considered. The molecular dynamics method was used to simulate the atomic displacement cascades. The point defects produced in a cascade tended to accumulate near the GB plane, which acted as an obstacle to the spread of the cascade. Clustering statistics for the point defects produced in the cascades were obtained: defects of both types occurred mainly as single point defects, while vacancies also formed clusters of large size (more than 20 vacancies per cluster) and self-interstitial atom clusters remained small.

  6. Theoretical Models for Evaluation of Volatile Emissions to Air During Dredged Material Disposal with Applications to New Bedford Harbor, Massachusetts

    DTIC Science & Technology

    1989-05-01

    model otherwise. 7. One or more partial differential equations can be presented to describe the minutia of chemical behavior in the various locales of...enjoys a voluminous literature heritage beyond the point of realistic applications to the natural environment in many cases. Hill, Myers, and Brannon...represented by the point A in Figure 2. This point may be representative of the recently deposited and exposed surface of dredged sediment in the delta

  7. Region 9 NPDES Outfalls 2012

    EPA Pesticide Factsheets

    Point geospatial dataset representing locations of NPDES outfalls/dischargers for facilities, which generally represent the site of the discharge. NPDES (National Pollutant Discharge Elimination System) is an EPA permit program that regulates direct discharges of treated wastewater into waters of the US. Facilities are issued NPDES permits regulating their discharge as required by the Clean Water Act. A facility may have one or more dischargers. The location represents the discharge point of a discrete conveyance such as a pipe or man-made ditch.

  8. Development and application of a reactive plume-in-grid model: evaluation over Greater Paris

    NASA Astrophysics Data System (ADS)

    Korsakissok, I.; Mallet, V.

    2010-09-01

    Emissions from major point sources are poorly represented by classical Eulerian models. Overestimation of the horizontal plume dilution, poor representation of the vertical diffusion, and incorrect estimates of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region over six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance against observations at measurement stations is assessed. A sensitivity study is also carried out on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Ozone is mostly sensitive to the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. 
The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.
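The local-scale building block of the plume-in-grid approach is the Gaussian puff kernel. A minimal sketch, with constant dispersion sigmas standing in for the stability-dependent coefficients a real puff model would use:

```python
import numpy as np

def puff_concentration(q, dx, dy, dz, sx, sy, sz):
    """Concentration from a single Gaussian puff of mass q at offset (dx, dy, dz)
    from the puff centre, with dispersion parameters (sx, sy, sz)."""
    norm = q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
    return norm * np.exp(-0.5 * (dx**2 / sx**2 + dy**2 / sy**2 + dz**2 / sz**2))

# peak concentration at the centre of a 1 kg puff with 50 m sigmas
print(puff_concentration(1.0, 0.0, 0.0, 0.0, 50.0, 50.0, 50.0))
```

In a plume-in-grid scheme, many such puffs are emitted per time step, advected, grown, and eventually injected back into the Eulerian grid.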

  9. Legendre submanifolds in contact manifolds as attractors and geometric nonequilibrium thermodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goto, Shin-itiro, E-mail: sgoto@ims.ac.jp

    It has been proposed that equilibrium thermodynamics is described on Legendre submanifolds in contact geometry. It is shown in this paper that Legendre submanifolds embedded in a contact manifold can be expressed as attractors in phase space for a certain class of contact Hamiltonian vector fields. By giving a physical interpretation that points outside the Legendre submanifold can represent nonequilibrium states of thermodynamic variables, while points on a given Legendre submanifold represent equilibrium states of the variables, this class of contact Hamiltonian vector fields is physically interpreted as a class of relaxation processes, in which thermodynamic variables reach an equilibrium state from a nonequilibrium state through a time evolution, a typical nonequilibrium phenomenon. Geometric properties of such vector fields on contact manifolds are characterized after introducing a metric tensor field on a contact manifold. It is also shown that a contact manifold and a strictly convex function induce a lower-dimensional dually flat space used in information geometry, where a geometrization of equilibrium statistical mechanics is constructed. Legendre duality on contact manifolds is explicitly stated throughout.

  10. Estimating parameter values of a socio-hydrological flood model

    NASA Astrophysics Data System (ADS)

    Holkje Barendrecht, Marlies; Viglione, Alberto; Kreibich, Heidi; Vorogushyn, Sergiy; Merz, Bruno; Blöschl, Günter

    2018-06-01

    Socio-hydrological modelling studies published so far show that dynamic coupled human-flood models are a promising tool to represent the phenomena and the feedbacks in human-flood systems. So far these models are mostly generic and have not been developed and calibrated to represent specific case studies. We believe that applying and calibrating these types of models to real-world case studies can help us further develop our understanding of the phenomena that occur in these systems. In this paper we propose a method to estimate the parameter values of a socio-hydrological model and test it by applying it to an artificial case study. We postulate a model that describes the feedbacks between floods, awareness and preparedness. After simulating hypothetical time series with a given combination of parameters, we sample a few data points for our variables and estimate the parameters from these data points using Bayesian inference. The results show that, if we are able to collect data for our case study, we would, in theory, be able to estimate the parameter values for our socio-hydrological flood model.
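The estimation step can be illustrated with a toy grid-based Bayesian inference. The one-parameter exponential "awareness decay" model, the noise level, and the observation times below are invented; the paper's model couples floods, awareness and preparedness:

```python
import numpy as np

# toy Bayesian estimate of a single memory-decay rate mu from a few
# sampled points of a hypothetical awareness time series y(t) = exp(-mu*t)
rng = np.random.default_rng(42)
true_mu, sigma = 0.1, 0.02
t_obs = np.array([1.0, 5.0, 10.0, 20.0])              # few observation times
y_obs = np.exp(-true_mu * t_obs) + rng.normal(0, sigma, t_obs.size)

mu_grid = np.linspace(0.01, 0.3, 300)
pred = np.exp(-np.outer(mu_grid, t_obs))              # model prediction per mu
log_lik = -0.5 * ((y_obs - pred) ** 2).sum(axis=1) / sigma**2
posterior = np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()                          # flat prior on the grid

mu_hat = mu_grid[np.argmax(posterior)]
print(mu_hat)  # close to the true decay rate of 0.1
```

With only a handful of points the posterior stays broad, which mirrors the paper's point that sparse case-study data still constrain the parameters.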

  11. Efficient iris recognition by characterizing key local variations.

    PubMed

    Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin

    2004-06-01

    Unlike other biometrics such as fingerprints and face, the distinctiveness of the iris comes from its randomly distributed features. This leads to its high reliability for personal identification and, at the same time, to the difficulty of effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearance or disappearance of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on the exclusive OR operation to compute the similarity between a pair of position sequences. Experimental results on 2255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithm found in the current literature.
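The exclusive-OR matching idea can be sketched with a normalized Hamming distance between binary feature sequences. The code length, the noise level, and the random codes below are invented for illustration, not the paper's wavelet-derived features:

```python
import numpy as np

def hamming(code_a, code_b):
    """Normalized Hamming distance via bitwise XOR: fraction of differing bits."""
    return np.count_nonzero(code_a ^ code_b) / code_a.size

rng = np.random.default_rng(7)
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
same_eye = enrolled.copy()
same_eye[rng.choice(2048, 100, replace=False)] ^= 1   # ~5% bit noise
other_eye = rng.integers(0, 2, 2048, dtype=np.uint8)

h_same = hamming(enrolled, same_eye)
h_other = hamming(enrolled, other_eye)
print(h_same, h_other)  # genuine comparison is small; impostor is near 0.5
```

A threshold between the two regimes then accepts or rejects a match; XOR plus a popcount is why this comparison is fast.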

  12. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error of estimated to test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
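One reading of the stepping idea is that each outcome is advanced by a ratio of two-parameter Weibull density values at successive time points, times the prior outcome. The sketch below uses that reading with arbitrary placeholder shape and scale values, whereas SPRE estimates these parameters at the change point from short-term OLS data:

```python
import math

def weibull_pdf(t, shape, scale):
    """Two-parameter Weibull density f(t; k, lam)."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def step_outcomes(y_change, times, shape=1.2, scale=30.0):
    """Step an outcome forward from its value at the change point by
    multiplying by the ratio of Weibull densities at successive times."""
    y, out = y_change, []
    for t_prev, t_next in zip(times, times[1:]):
        y *= weibull_pdf(t_next, shape, scale) / weibull_pdf(t_prev, shape, scale)
        out.append(y)
    return out

weeks = list(range(4, 27, 2))    # hypothetical biweekly points after the change point
traj = step_outcomes(90.0, weeks)
print(traj[:3])
```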

  13. Early-life predictors of leisure-time physical inactivity in midadulthood: findings from a prospective British birth cohort.

    PubMed

    Pinto Pereira, Snehal M; Li, Leah; Power, Chris

    2014-12-01

    Much adult physical inactivity research ignores early-life factors from which later influences may originate. In the 1958 British birth cohort (followed from 1958 to 2008), leisure-time inactivity, defined as activity frequency of less than once a week, was assessed at ages 33, 42, and 50 years (n = 12,776). Early-life factors (at ages 0-16 years) were categorized into 3 domains (i.e., physical, social, and behavioral). We assessed associations of adult inactivity 1) with factors within domains, 2) with the 3 domains combined, and 3) allowing for adult factors. At each age, approximately 32% of subjects were inactive. When domains were combined, factors associated with inactivity (e.g., at age 50 years) were prepubertal stature (5% lower odds per 1-standard deviation higher height), hand control/coordination problems (14% higher odds per 1-point increase on a 4-point scale), cognition (10% lower odds per 1-standard deviation greater ability), parental divorce (21% higher odds), institutional care (29% higher odds), parental social class at child's birth (9% higher odds per 1-point reduction on a 4-point scale), minimal parental education (13% higher odds), household amenities (2% higher odds per increase (representing poorer amenities) on a 19-point scale), inactivity (8% higher odds per 1-point reduction in activity on a 4-point scale), low sports aptitude (13% higher odds), and externalizing behaviors (i.e., conduct problems) (5% higher odds per 1-standard deviation higher score). Adjustment for adult covariates weakened associations slightly. Factors from early life were associated with adult leisure-time inactivity, allowing for early identification of groups vulnerable to inactivity. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Moral Severity is Represented as a Domain-General Magnitude.

    PubMed

    Powell, Derek; Horne, Zachary

    2017-03-01

    The severity of moral violations can vary by degree. For instance, although both are immoral, murder is a more severe violation than lying. Though this point is well established in ethics and the law, relatively little research has examined how moral severity is represented psychologically. Most prominent moral psychological theories are aimed at explaining first-order moral judgments and are silent on second-order metaethical judgments, such as comparisons of severity. Here, the relative severity of 20 moral violations was established in a preliminary study. Then, a second group of participants was asked to decide which of two moral violations was more severe for all possible combinations of these 20 violations. Participants' response times exhibited two signatures of domain-general magnitude comparisons: we observed both a distance effect and a semantic congruity effect. These findings suggest that moral severity is represented in a similar fashion to other continuous magnitudes.

  15. The evolving block universe and the meshing together of times.

    PubMed

    Ellis, George F R

    2014-10-01

    It has been proposed that spacetime should be regarded as an evolving block universe, bounded to the future by the present time, which continually extends to the future. This future boundary is defined at each time by measuring proper time along Ricci eigenlines from the start of the universe. A key point, then, is that physical reality can be represented at many different scales: hence, the passage of time may be seen as different at different scales, with quantum gravity determining the evolution of spacetime itself at the Planck scale, but quantum field theory and classical physics determining the evolution of events within spacetime at larger scales. The fundamental issue then arises as to how the effective times at different scales mesh together, leading to the concepts of global and local times. © 2014 New York Academy of Sciences.

  16. "I Do Feel Like a Scientist at Times": A Qualitative Study of the Acceptability of Molecular Point-Of-Care Testing for Chlamydia and Gonorrhoea to Primary Care Professionals in a Remote High STI Burden Setting.

    PubMed

    Natoli, Lisa; Guy, Rebecca J; Shephard, Mark; Causer, Louise; Badman, Steven G; Hengel, Belinda; Tangey, Annie; Ward, James; Coburn, Tony; Anderson, David; Kaldor, John; Maher, Lisa

    2015-01-01

    Point-of-care tests for chlamydia (CT) and gonorrhoea (NG) could increase the uptake and timeliness of testing and treatment, contribute to improved disease control and reduce reproductive morbidity. The GeneXpert (Xpert CT/NG assay), suited to use at the point-of-care, is being used in the TTANGO randomised controlled trial (RCT) in 12 remote Australian health services with a high burden of sexually transmissible infections (STIs). This represents the first ever routine use of a molecular point-of-care diagnostic for STIs in primary care. The purpose of this study was to explore the acceptability of the GeneXpert to primary care staff in remote Australia. In-depth qualitative interviews were conducted with 16 staff (registered or enrolled nurses and Aboriginal Health Workers/Practitioners) trained and experienced with GeneXpert testing. Interviews were digitally recorded and transcribed verbatim prior to content analysis. Most participants displayed positive attitudes, indicating the test was both easy to use and useful in their clinical context. Participants indicated that point-of-care testing had improved management of STIs, resulting in more timely and targeted treatment, earlier commencement of partner notification, and reduced follow up efforts associated with client recall. Staff expressed confidence in point-of-care test results and treating patients on this basis, and reported greater job satisfaction. While point-of-care testing did not negatively impact on client flow, several found the manual documentation processes time consuming, suggesting that improved electronic connectivity and test result transfer between the GeneXpert and patient management systems could overcome this. Managing positive test results in a shorter time frame was challenging for some but most found it satisfying to complete episodes of care more quickly. 
In the context of a RCT, health professionals working in remote primary care in Australia found the GeneXpert highly acceptable. These findings have implications for use in other primary care settings around the world.

  17. Inception of a national multidisciplinary registry for stereotactic radiosurgery.

    PubMed

    Sheehan, Jason P; Kavanagh, Brian D; Asher, Anthony; Harbaugh, Robert E

    2016-01-01

    Stereotactic radiosurgery (SRS) represents a multidisciplinary approach to the delivery of ionizing high-dose radiation to treat a wide variety of disorders. Much of the radiosurgical literature is based upon retrospective single-center studies along with a few randomized controlled clinical trials. More timely and effective evidence is needed to enhance the consistency and quality of care and the clinical outcomes achieved with SRS. The authors summarize the creation and implementation of a national SRS registry. The American Association of Neurological Surgeons (AANS), through NeuroPoint Alliance, Inc., started a successful registry effort with its lumbar spine initiative. Following a similar approach, the AANS and NeuroPoint Alliance collaborated with corporate partners and the American Society for Radiation Oncology to devise a data dictionary for an SRS registry. Through administrative and financial support from professional societies and corporate partners, a framework for implementation of the registry was created. Initial plans were devised for a 3-year effort encompassing 30 high-volume SRS centers across the country. Device-specific web-based data-extraction platforms were built by the corporate partners. Data uploaders were then used to port the data to a common repository managed by Quintiles, a national and international health care trials company. Audits of the data for completeness and veracity will be undertaken by Quintiles to ensure data fidelity. Data governance and analysis are overseen by an SRS board comprising equal numbers of representatives from the AANS and NeuroPoint Alliance. Over time, quality outcome assessments and post hoc research can be performed to advance the field of SRS. Stereotactic radiosurgery offers a high-technology approach to treating complex intracranial disorders. Improvements in the consistency and quality of care delivered to patients who undergo SRS should be afforded by the national registry effort that is underway.

  18. A cross sectional study of the association between walnut consumption and cognitive function among adult US populations represented in NHANES.

    PubMed

    Arab, L; Ang, A

    2015-03-01

    To examine the association between walnut consumption and measures of cognitive function in the US population. Nationally representative cross-sectional study using 24-hour dietary recalls of intakes to assess walnut and other nut consumption as compared to the group reporting no nut consumption. 1988-1994 and 1999-2002 rounds of the National Health and Nutrition Examination Survey (NHANES). Representative weighted sample of US adults 20 to 90 years of age. The Neurobehavioral Evaluation System 2 (NES2), consisting of simple reaction time (SRTT), symbol digit substitution (SDST), single digit learning (SDLT), Story Recall (SRT) and digit-symbol substitution (DSST) tests. Adults 20-59 years old reporting walnut consumption of an average of 10.3 g/d required 16.4 ms less time to respond on the SRTT, P=0.03, and 0.39 s less on the SDST, P=0.01. SDLT scores were also significantly lower, by 2.38 s (P=0.05). Similar results were obtained when tertiles of walnut consumption were examined in trend analyses. Significantly better outcomes were noted in all cognitive test scores among those with higher walnut consumption (P < 0.01). Among adults 60 years and older, walnut consumers averaged 13.1 g/d and scored 7.1 percentile points higher on the SRT, P=0.03, and 7.3 percentile points higher on the DSST, P=0.05. Here also trend analyses indicate significant improvements in all cognitive test scores (P < 0.01) except for the SRTT (P = 0.06) in the fully adjusted models. These significant, positive associations between walnut consumption and cognitive function among all adults, regardless of age, gender or ethnicity, suggest that daily walnut intake may be a simple beneficial dietary behavior.

  19. Using the Shuttle In Situ Window and Radiator Data for Meteoroid Measurements

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2015-01-01

    Every time NASA's Space Shuttle flew in orbit, it was exposed to the natural meteoroid and artificial debris environment. NASA Johnson Space Center maintains a database of impact cratering data for 60 Shuttle missions flown since the mid-1990s that were inspected after flight. These represent a total net exposure time to the space environment of 2 years. Impact damage was recorded on the windows and radiators, and in many cases information on the impactor material was determined by later analysis of the crater residue. This information was used to segregate damage caused by natural meteoroids and artificial space debris. The windows represent a total area of 3.565 sq m, and were capable of resolving craters down to about 10 micrometers in size. The radiators represent a total area of 119.26 sq m, and saw damage from objects up to approximately 1 mm in diameter. These data were used extensively in the development of NASA's ORDEM 3.0 Orbital Debris Environment Model, and give a continuous picture of the orbital debris environment in material type and size ranging from about 10 micrometers to 1 mm. However, the meteoroid data from the Shuttles have never been fully analyzed. For the orbital debris work, special "as flown" files were created that tracked the pointing of the surface elements and their shadowing by structure (such as the ISS during docking). Unfortunately, such files for the meteoroid environment have not yet been created. This talk will introduce these unique impact data and describe how they were used for orbital debris measurements. We will then discuss some simple first-order analyses of the meteoroid data, and point the way for future analyses.

  20. Cross-cultural differences in mental representations of time: evidence from an implicit nonlinguistic task.

    PubMed

    Fuhrman, Orly; Boroditsky, Lera

    2010-11-01

    Across cultures people construct spatial representations of time. However, the particular spatial layouts created to represent time may differ across cultures. This paper examines whether people automatically access and use culturally specific spatial representations when reasoning about time. In Experiment 1, we asked Hebrew and English speakers to arrange pictures depicting temporal sequences of natural events, and to point to the hypothesized location of events relative to a reference point. In both tasks, English speakers (who read left to right) arranged temporal sequences to progress from left to right, whereas Hebrew speakers (who read right to left) arranged them from right to left, replicating previous work. In Experiments 2 and 3, we asked the participants to make rapid temporal order judgments about pairs of pictures presented one after the other (i.e., to decide whether the second picture showed a conceptually earlier or later time-point of an event than the first picture). Participants made responses using two adjacent keyboard keys. English speakers were faster to make "earlier" judgments when the "earlier" response needed to be made with the left response key than with the right response key. Hebrew speakers showed exactly the reverse pattern. Asking participants to use a space-time mapping inconsistent with the one suggested by writing direction in their language created interference, suggesting that participants were automatically creating writing-direction consistent spatial representations in the course of their normal temporal reasoning. It appears that people automatically access culturally specific spatial representations when making temporal judgments even in nonlinguistic tasks. Copyright © 2010 Cognitive Science Society, Inc.

  1. MS Malenchenko tapes brackets in Zvezda during STS-106

    NASA Image and Video Library

    2000-09-13

    S106-E-5175 (13 September) --- Cosmonaut Yuri I. Malenchenko, representing the Russian Aviation and Space Agency, tapes brackets for the Zvezda during work on the service module. The mission specialist and the other STS-106 astronauts and cosmonaut are continuing electrical work and transfer activities as they near the halfway point of docked operations with the International Space Station. In all, the crew will have 189 hours, 40 minutes of planned Atlantis-ISS docked time.

  2. Representative Atmospheric Plume Development for Elevated Releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Lowrey, Justin D.; McIntyre, Justin I.

    2014-02-01

    An atmospheric explosion of a low-yield nuclear device will produce a large number of radioactive isotopes, some of which can be measured with airborne detection systems. However, properly equipped aircraft may not arrive in the region where an explosion occurred for a number of hours after the event. Atmospheric conditions will have caused the radioactive plume to move and diffuse before the aircraft arrives. The science behind predicting atmospheric plume movement has advanced enough that the location of the maximum concentrations in the plume can be determined reasonably accurately in real time, or near real time. Given the assumption that an aircraft can follow a plume, this study addresses the amount of atmospheric dilution expected to occur in a representative plume as a function of time past the release event. The approach models atmospheric transport of hypothetical releases from a single location for every day in a year using the publicly available HYSPLIT code. The effective dilution factors for the point of maximum concentration in an elevated plume based on a release of a non-decaying, non-depositing tracer can vary by orders of magnitude depending on the day of the release, even for the same number of hours after the release event. However, the median of the dilution factors based on releases for 365 consecutive days at one site follows a power law relationship in time, as shown in Figure S-1. The relationship is good enough to provide a general rule of thumb for estimating typical future dilution factors in a plume starting at the same point. However, the coefficients of the power law function may vary for different release point locations. Radioactive decay causes the effective dilution factors to decrease more quickly with the time past the release event than the dilution factors based on a non-decaying tracer. 
An analytical expression for the dilution factors of isotopes with different half-lives can be developed from the power-law expression for the non-decaying tracer. If the power-law equation for the median dilution factor, Df, based on a non-decaying tracer has the general form Df = a×t^(-b) for time t after the release event, then the equation takes the form Df = e^(-λt)×a×t^(-b) for a radioactive isotope, where λ is the decay constant of the isotope.
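The two power-law forms above can be sketched in a few lines; the coefficients a and b below are illustrative placeholders (the report notes they vary by release site), not values from the study:

```python
import math

# Hypothetical power-law fit for one release site; a and b are assumptions.
a, b = 1.0e-6, 1.5

def dilution_factor(t_hours, half_life_hours=None):
    """Median dilution factor Df = a * t**(-b) for a non-decaying tracer,
    multiplied by exp(-lambda * t) for a radioactive isotope."""
    df = a * t_hours ** (-b)
    if half_life_hours is not None:
        lam = math.log(2.0) / half_life_hours  # decay constant lambda
        df *= math.exp(-lam * t_hours)
    return df
```

For example, an isotope whose half-life equals the elapsed time has exactly half the dilution factor of a stable tracer at that time.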

  3. A simple method for correcting spatially resolved solar intensity oscillation observations for variations in scattered light

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Duvall, T. L., Jr.

    1991-01-01

    A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.

  4. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes theoretical basis of improved method of computing trajectories followed by robotic manipulators. In method, generalized velocity-vector-blending technique provides relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames," which represent specified robot poses. Linear-velocity-blending functions chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities blended by use of first-order approximation of previous orientation-matrix-blending formulation. Angular-velocity approximation yields small residual error, quantified and corrected. Method offers both relative simplicity and speed needed for generation of robot-manipulator trajectories in real time.
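A minimal sketch of the cycloidal blending option named above (an illustration of the general idea, not the flight code; the function names are invented):

```python
import math

def cycloidal_blend(s):
    """Cycloidal blending weight: rises smoothly from 0 to 1 over
    s in [0, 1] with zero slope at both ends."""
    return s - math.sin(2.0 * math.pi * s) / (2.0 * math.pi)

def blend_velocity(v_prev, v_next, s):
    """Blend two task-space velocity vectors (e.g. the linear velocities
    of the segments before and after a via frame) over the normalized
    blend interval s in [0, 1]."""
    w = cycloidal_blend(s)
    return [p + w * (n - p) for p, n in zip(v_prev, v_next)]
```

The zero end slopes of the cycloidal weight are what keep acceleration continuous as the manipulator passes a via frame.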

  5. Simulation and analysis of chemical release in the ionosphere

    NASA Astrophysics Data System (ADS)

    Gao, Jing-Fan; Guo, Li-Xin; Xu, Zheng-Wen; Zhao, Hai-Sheng; Feng, Jie

    2018-05-01

    Ionospheric inhomogeneous plasma produced by a single-point chemical release has a simple space-time structure and cannot affect radio waves at frequencies above the Very High Frequency (VHF) band. In order to produce a more complicated ionospheric plasma perturbation structure and trigger instability phenomena, a multiple-point chemical release scheme is presented in this paper. The effects of chemical release on the low-latitude ionospheric plasma are estimated using linear instability growth rate theory, in which a high growth rate corresponds to strong irregularities, a high probability of ionospheric scintillation occurrence, and high scintillation intensity during the scintillation interval. The amplitude and phase scintillations at 150 MHz, 400 MHz, and 1000 MHz are calculated based on the theory of multiple phase screens (MPS) as the waves propagate through the disturbed area.

  6. Emotion Estimation Algorithm from Facial Image Analyses of e-Learning Users

    NASA Astrophysics Data System (ADS)

    Shigeta, Ayuko; Koike, Takeshi; Kurokawa, Tomoya; Nosu, Kiyoshi

    This paper proposes an emotion estimation algorithm based on facial images of e-Learning users. The algorithm's characteristics are as follows: the criteria used to relate an e-Learning user's emotion to a representative emotion were obtained from time-sequential analysis of the user's facial expressions. By examining the emotions of the e-Learning users and the positional changes of facial feature points in the experimental results, the following procedures are introduced to improve estimation reliability: (1) effective feature points are chosen for the emotion estimation; (2) subjects are divided into two groups by the change rates of the facial feature points; (3) eigenvectors of the variance-covariance matrices are selected (cumulative contribution rate >= 95%); (4) the emotion is calculated using the Mahalanobis distance.
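The final classification step can be sketched as a minimal Mahalanobis-distance classifier. The emotion labels, feature dimensions, and training data below are invented for illustration, and the paper's eigenvector-selection step (cumulative contribution rate >= 95%) is omitted for brevity:

```python
import numpy as np

# Toy training data standing in for facial feature-point change rates;
# class names and dimensions are assumptions, not from the paper.
rng = np.random.default_rng(0)
classes = {
    "joy":     rng.normal(0.5, 0.1, size=(50, 4)),
    "neutral": rng.normal(0.0, 0.1, size=(50, 4)),
}

def mahalanobis_sq(x, samples):
    """Squared Mahalanobis distance from x to a class sample cloud."""
    mu = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    d = x - mu
    return float(d @ cov_inv @ d)

def estimate_emotion(x):
    """Assign x to the class with the smallest Mahalanobis distance."""
    return min(classes, key=lambda c: mahalanobis_sq(x, classes[c]))
```

Unlike plain Euclidean distance, the Mahalanobis distance accounts for the variance and correlation of each class's feature distribution.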

  7. On the improvement of blood sample collection at clinical laboratories

    PubMed Central

    2014-01-01

    Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). The algorithm, implemented in C, runs in a few seconds and obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters are changed substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140

  8. Method for contour extraction for object representation

    DOEpatents

    Skourikhine, Alexei N.; Prasad, Lakshman

    2005-08-30

    Contours are extracted to represent a pixelated object in a background pixel field. An object pixel that is the start of a new contour for the object is located and identified as the first pixel of the new contour. A first contour point is then located at the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points on mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is again encountered, completing the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
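A simplified sketch of mid-point contour tracing on a binary pixel grid. This follows the spirit of the described method, not the patented implementation, and assumes 4-connectivity with no diagonally touching object pixels:

```python
def extract_contours(grid):
    """Trace closed contours of a binary pixel grid, emitting contour
    points at the mid-points of object/background transition edges."""
    rows, cols = len(grid), len(grid[0])

    def occupied(r, c):
        return 0 <= r < rows and 0 <= c < cols and bool(grid[r][c])

    # Directed lattice edges around each object pixel, oriented so the
    # object stays on one side and the edges chain into closed loops.
    nxt = {}
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            if not occupied(r - 1, c):      # top transition edge
                nxt[(r, c)] = (r, c + 1)
            if not occupied(r, c + 1):      # right
                nxt[(r, c + 1)] = (r + 1, c + 1)
            if not occupied(r + 1, c):      # bottom
                nxt[(r + 1, c + 1)] = (r + 1, c)
            if not occupied(r, c - 1):      # left
                nxt[(r + 1, c)] = (r, c)

    contours = []
    while nxt:                              # one pass per closed contour
        start = next(iter(nxt))
        contour, prev = [], start
        while True:
            cur = nxt.pop(prev)
            # contour point = mid-point of the transition edge
            contour.append(((prev[0] + cur[0]) / 2, (prev[1] + cur[1]) / 2))
            if cur == start:
                break
            prev = cur
        contours.append(contour)
    return contours
```

A 2x2 block of object pixels, for instance, yields a single closed contour of eight edge mid-points.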

  9. Mapping Cortical Morphology in Youth with Velo-Cardio-Facial (22q11.2 Deletion) Syndrome

    PubMed Central

    Kates, Wendy R.; Bansal, Ravi; Fremont, Wanda; Antshel, Kevin M.; Hao, Xuejun; Higgins, Anne Marie; Liu, Jun; Shprintzen, Robert J.; Peterson, Bradley S.

    2010-01-01

    Objective Velo-cardio-facial syndrome (VCFS; 22q11.2 deletion syndrome) represents one of the highest known risk factors for schizophrenia. Insofar as up to thirty percent of individuals with this genetic disorder develop schizophrenia, VCFS constitutes a unique, etiologically homogeneous model for understanding the pathogenesis of schizophrenia. Method Using a longitudinal, case-control design, we acquired anatomic magnetic resonance images to investigate both cross-sectional and longitudinal alterations in surface cortical morphology in a cohort of adolescents with VCFS and age-matched typical controls. All participants were scanned at two time points. Results Relative to controls, youth with VCFS exhibited alterations in inferior frontal, dorsal frontal, occipital, and cerebellar brain regions at both time points. We observed little change over time in surface morphology of either study group. However, within the VCFS group only, worsening psychosocial functioning over time was associated with Time 2 surface contractions in left middle and inferior temporal gyri. Further, prodromal symptoms at Time 2 were associated with surface contractions in left and right orbitofrontal, temporal and cerebellar regions, as well as surface protrusions of supramarginal gyrus. Conclusions These findings advance our understanding of cortical disturbances in VCFS that produce vulnerability for psychosis in this high risk population. PMID:21334567

  10. Effects of psychosocial work characteristics on hair cortisol - findings from a post-trial study.

    PubMed

    Herr, Raphael M; Barrech, Amira; Gündel, Harald; Lang, Jessica; Quinete, Natalia Soares; Angerer, Peter; Li, Jian

    2017-07-01

    Prolonged work stress, as indicated by the effort-reward imbalance (ERI) model, jeopardizes health. Cortisol represents a candidate mechanism connecting stress to ill health. However, previous findings appear inconclusive, and recommendations were made to assess work stress at multiple time points and also to investigate ERI (sub-)components. This study therefore examines the effects of two single time points, as well as the mean and change scores between time points of ERI and its components, on hair cortisol concentration (HCC), a long-term cortisol measurement. Participants were 66 male factory workers (age: 40.68 ± 6.74 years; HCC: 9.00 ± 7.11 pg/mg), who were followed up after a stress management intervention (2006-2008). In 2008 (T1) and 2015 (T2), participants completed a 23-item ERI questionnaire, assessing effort, the three reward components (esteem, job security, job promotion) and over-commitment. In 2015, participants also provided a 3-cm hair segment close to the scalp for HCC analysis, as well as information on relevant confounders (i.e. medication intake, age, work characteristics, socioeconomic and lifestyle factors, number of stressful life events). Linear regressions revealed hardly any cross-sectional or longitudinal effect of ERI and its components on HCC. Only the change score in job security between T1 and T2 was negatively associated with HCC in unadjusted (β = -.320; p = .009) and adjusted (β = -.288; p = .044) models. In this study, only a decrease of perceived job security over time was significantly associated with higher HCC, and other predictors were not related to this outcome. Especially after correction for multiple testing, this study revealed just a weak association of different psychosocial work measurements with HCC. Lay summary This study showed that an increase in perceived job insecurity is correlated with higher levels of the stress hormone cortisol. 
The higher levels of cortisol might represent a biological explanation for the negative health effects of job insecurity. The association was, however, relatively weak, and a growing number of voices question whether cortisol in hair is a reliable marker for perceived work stress.

  11. Predicting the Impacts of Climate Change on Runoff and Sediment Processes in Agricultural Watersheds: A Case Study from the Sunflower Watershed in the Lower Mississippi Basin

    NASA Astrophysics Data System (ADS)

    Elkadiri, R.; Momm, H.; Yasarer, L.; Armour, G. L.

    2017-12-01

    Climatic conditions play a major role in the physical processes governing the detachment and transport of soil and agrochemicals from/in agricultural watersheds. In addition, these climatic conditions are projected to vary significantly, both spatially and temporally, in the 21st century, leading to vast uncertainties about the future of sediment and non-point source pollution transport in agricultural watersheds. In this study, we selected the Sunflower watershed in the lower Mississippi River basin, USA, to contribute to the understanding of how climate change affects watershed processes and the transport of pollutant loads. The climate projections used in this study were retrieved from the archive of the World Climate Research Programme's (WCRP) Coupled Model Intercomparison Project Phase 5 (CMIP5). The CMIP5 dataset was selected because it contains the most up-to-date spatially downscaled and bias-corrected climate projections. A subset of ten GCMs representing a range of projected climates was spatially downscaled for the Sunflower watershed. Statistics derived from downscaled GCM output representing the 2011-2040, 2041-2070 and 2071-2100 time periods were used to generate maximum/minimum temperature and precipitation on a daily time step using the USDA Synthetic Weather Generator, SYNTOR. These downscaled climate data were then used as inputs to the Annualized Agricultural Non-Point Source (AnnAGNPS) pollution watershed model to estimate time series of runoff, sediment, and nutrient loads produced from the watershed. For baseline conditions, a simulation of the watershed was created and validated using historical data from 2000 to 2015.

  12. Saturn Ring

    NASA Image and Video Library

    2007-12-12

    Like Earth, Saturn has an invisible ring of energetic ions trapped in its magnetic field. This feature is known as a "ring current." This ring current has been imaged with a special camera on Cassini sensitive to energetic neutral atoms. This is a false color map of the intensity of the energetic neutral atoms emitted from the ring current through a process called charge exchange. In this process a trapped energetic ion steals an electron from a cold gas atom, becomes neutral, and escapes the magnetic field. The Cassini Magnetospheric Imaging Instrument's ion and neutral camera records the intensity of the escaping particles, which provides a map of the ring current. In this image, the colors represent the intensity of the neutral emission, which is a reflection of the trapped ions. This "ring" is much farther from Saturn (roughly five times farther) than Saturn's famous icy rings. Red in the image represents the higher intensity of the particles, while blue is less intense. Saturn's ring current had not been mapped on a global scale before; only "snippets" or small areas were mapped previously, and not in this detail. This instrument allows scientists to produce movies (see PIA10083) that show how this ring changes over time. These movies reveal a dynamic system, which is usually not as uniform as depicted in this image. The ring current is doughnut shaped, but in some instances it appears as if someone took a bite out of it. This image was obtained on March 19, 2007, at a latitude of about 54.5 degrees and radial distance 1.5 million kilometres (920,000 miles). Saturn is at the center, and the dotted circles represent the orbits of the moons Rhea and Titan. The Z axis points parallel to Saturn's spin axis, the X axis points roughly sunward in the sun-spin axis plane, and the Y axis completes the system, pointing roughly toward dusk. The ion and neutral camera's field of view is marked by the white line and accounts for the cut-off of the image on the left. 
The image is an average of the activity over a (roughly) 3-hour period. http://photojournal.jpl.nasa.gov/catalog/PIA10094

  13. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure

    PubMed Central

    Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978

  14. Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations

    NASA Astrophysics Data System (ADS)

    Mirloo, Mahsa; Ebrahimnezhad, Hosein

    2018-03-01

    In this paper, a novel method is proposed to detect salient points of a 3D object that is robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points of the object's protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previous salient points, a new point is added to the set in each iteration. As each salient point is added, the decision function is updated; this creates a condition under which the next point is not extracted from the same protrusion part, guaranteeing that a representative point is drawn from every protrusion part. The method is stable against model variations under isometric transformations, scaling, and noise of different strengths, because it uses a feature robust to isometric variations and considers the relations between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity than other salient point detection algorithms.

  15. Superposition and alignment of labeled point clouds.

    PubMed

    Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke

    2011-01-01

    Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
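To make the labeled point cloud idea concrete, here is a toy similarity between two clouds that are assumed already superposed: the fraction of points in one cloud whose nearest neighbor in the other carries the same label. This is a crude stand-in for the paper's fuzzy similarity measure, for illustration only:

```python
import numpy as np

def labeled_similarity(cloud_a, cloud_b):
    """Fraction of points in cloud_a whose nearest neighbor in cloud_b
    has the same label. Each cloud is a list of (position, label) pairs
    with positions as 3D numpy arrays."""
    matches = 0
    for point, label in cloud_a:
        dists = [np.linalg.norm(point - q) for q, _ in cloud_b]
        nearest = cloud_b[int(np.argmin(dists))]
        matches += (nearest[1] == label)
    return matches / len(cloud_a)
```

A real measure would also solve the superposition problem and treat label agreement gradually (fuzzily) rather than as an all-or-nothing match.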

  16. Expression profiles of differentially regulated genes during the early stages of apple flower infection with Erwinia amylovora

    PubMed Central

    Sarowar, Sujon; Zhao, Youfu; Soria-Guerra, Ruth Elena; Ali, Shahjahan; Zheng, Danman; Wang, Dongping; Korban, Schuyler S.

    2011-01-01

    To identify genes involved in the response to the fire blight pathogen Erwinia amylovora in apple (Malus×domestica), expression profiles were investigated using an apple oligo (70-mer) array representing 40,000 genes. Blossoms of the fire blight-susceptible apple cultivar Gala were collected from trees growing in the orchard, placed on a tray in the laboratory, and spray-inoculated with a suspension of E. amylovora at a concentration of 10^8 cfu ml^-1. Uninoculated detached flowers served as controls at each time point. Expression profiles were captured at three different time points post-inoculation, at 2, 8, and 24 h, together with those at 0 h (uninoculated). A total of about 3500 genes were found to be significantly modulated in response to at least one of the three time points. Among those, a total of 770, 855, and 1002 genes were up-regulated by 2-fold at 2, 8, and 24 h following inoculation, respectively; while 748, 1024, and 1455 genes were down-regulated by 2-fold at 2, 8, and 24 h following inoculation, respectively. Over the three time points post-inoculation, 365 genes were commonly up-regulated and 374 genes were commonly down-regulated. Both sets of genes were classified based on their functional categories. The majority of up-regulated genes were involved in metabolism, signal transduction, signaling, transport, and stress response. A number of transcripts encoding proteins/enzymes known to be up-regulated under particular biotic and abiotic stresses were also up-regulated following E. amylovora treatment. The up- and down-regulated genes encode transcription factors, signaling components, defense-related proteins, transporters, and metabolic enzymes, all of which have been associated with disease responses in Arabidopsis and rice, suggesting that similar response pathways are involved in apple blossoms. PMID:21725032

  17. Factors That Influence the Rating of Perceived Exertion After Endurance Training.

    PubMed

    Roos, Lilian; Taube, Wolfgang; Tuch, Carolin; Frei, Klaus Michael; Wyss, Thomas

    2018-03-15

    Session rating of perceived exertion (sRPE) is an often-used measure to assess athletes' training load. However, little is known about which factors could optimize the quality of its data collection. The aim of the present study was to investigate the effects of (i) the survey methods and (ii) the time points when sRPE was assessed on the correlation between subjective (sRPE) and objective (heart rate training impulse; TRIMP) assessment of training load. In the first part, 45 well-trained subjects (30 men, 15 women) performed 20 running sessions with a heart rate monitor and reported sRPE 30 minutes after training cessation. For the reporting, the subjects were grouped into three survey method groups (paper-pencil, online questionnaire, and mobile device). In the second part of the study, another 40 athletes (28 men, 12 women) performed 4x5 running sessions with the four sRPE reporting time points randomly assigned (directly after training cessation, 30 minutes post-exercise, in the evening of the same day, and the next morning directly after waking up). The assessment of sRPE is influenced by time point, survey method, TRIMP, sex, and training type. It is recommended to assess sRPE values via a mobile device or online tool, as the survey method "paper" displayed lower correlations between sRPE and TRIMP. Subjective training load measures are highly individual. When compared at the same relative intensity, lower sRPE values were reported by women, for the training types representing slow runs, and for time points with greater duration between training cessation and sRPE assessment. The assessment method for sRPE should be kept constant for each athlete, and comparisons between athletes or sexes are not recommended.
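The objective load measure referenced above, TRIMP, is commonly computed with Banister's heart-rate formula. The study does not specify which TRIMP variant it used, so the sketch below shows the widely published Banister version with its sex-specific weighting constants:

```python
import math

def banister_trimp(duration_min, hr_ex, hr_rest, hr_max, sex="m"):
    """Banister heart-rate training impulse (TRIMP): session duration
    weighted by fractional heart-rate reserve and an exponential
    intensity factor (0.64*e^(1.92*dHR) for men, 0.86*e^(1.67*dHR)
    for women, the commonly cited Banister constants)."""
    dhr = (hr_ex - hr_rest) / (hr_max - hr_rest)  # fractional HR reserve
    if sex == "m":
        return duration_min * dhr * 0.64 * math.exp(1.92 * dhr)
    return duration_min * dhr * 0.86 * math.exp(1.67 * dhr)
```

The exponential weighting makes a hard hour count far more than two easy half-hours, which is why TRIMP is preferred over raw duration as the objective comparator for sRPE.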

  18. Spatially localized phosphorous metabolism of skeletal muscle in Duchenne muscular dystrophy patients: 24-month follow-up.

    PubMed

    Hooijmans, M T; Doorenweerd, N; Baligand, C; Verschuuren, J J G M; Ronen, I; Niks, E H; Webb, A G; Kan, H E

    2017-01-01

    To assess the changes in phosphodiester (PDE) levels, detected by 31P magnetic resonance spectroscopy (MRS), over 24 months, to determine the potential of PDE as a marker for muscle tissue changes in Duchenne Muscular Dystrophy (DMD) patients. Spatially resolved phosphorous datasets were acquired in the right lower leg of 18 DMD patients (range: 5-15.4 years) and 12 age-matched healthy controls (range: 5-14 years) at three time-points (baseline, 12 months, and 24 months) using a 7T MR-System (Philips Achieva). 3-point Dixon images were acquired at 3T (Philips Ingenia) to determine muscle fat fraction. Analyses were done for six muscles that represent different stages of muscle wasting. Differences between groups and time-points were assessed with non-parametric tests with correction for multiple comparisons. Coefficients of variation (CV) were determined for PDE in four healthy adult volunteers in high and low signal-to-noise ratio (SNR) datasets. PDE-levels were significantly higher (two-fold) in DMD patients compared to controls in all analyzed muscles at almost every time point and did not change over the study period. Fat fraction was significantly elevated in all muscles at all time points compared to healthy controls, and increased significantly over time, except in the tibialis posterior muscle. The mean within-subject CV for PDE-levels was 4.3% in datasets with high SNR (>10:1) and 5.7% in datasets with low SNR. The stable two-fold increase in PDE-levels found in DMD patients in muscles with different levels of muscle wasting over the 2-year period, including DMD patients as young as 5.5 years old, suggests that PDE-levels may increase very rapidly early in the disease process and remain elevated thereafter. The low CV values in high and low SNR datasets show that PDE-levels can be accurately and reproducibly quantified in all conditions. Our data confirm the great potential of PDE as a marker for muscle tissue changes in DMD patients.

  19. Spatially localized phosphorous metabolism of skeletal muscle in Duchenne muscular dystrophy patients: 24–month follow-up

    PubMed Central

    Doorenweerd, N.; Baligand, C.; Verschuuren, J. J. G. M.; Ronen, I.; Niks, E. H.; Webb, A. G.; Kan, H. E.

    2017-01-01

    Objectives To assess the changes in phosphodiester (PDE) levels, detected by 31P magnetic resonance spectroscopy (MRS), over 24 months, to determine the potential of PDE as a marker for muscle tissue changes in Duchenne Muscular Dystrophy (DMD) patients. Methods Spatially resolved phosphorous datasets were acquired in the right lower leg of 18 DMD patients (range: 5–15.4 years) and 12 age-matched healthy controls (range: 5–14 years) at three time-points (baseline, 12 months, and 24 months) using a 7T MR-System (Philips Achieva). 3-point Dixon images were acquired at 3T (Philips Ingenia) to determine muscle fat fraction. Analyses were done for six muscles that represent different stages of muscle wasting. Differences between groups and time-points were assessed with non-parametric tests with correction for multiple comparisons. Coefficients of variation (CV) were determined for PDE in four healthy adult volunteers in high and low signal-to-noise ratio (SNR) datasets. Results PDE-levels were significantly higher (two-fold) in DMD patients compared to controls in all analyzed muscles at almost every time point and did not change over the study period. Fat fraction was significantly elevated in all muscles at all time points compared to healthy controls, and increased significantly over time, except in the tibialis posterior muscle. The mean within-subject CV for PDE-levels was 4.3% in datasets with high SNR (>10:1) and 5.7% in datasets with low SNR. Discussion and conclusion The stable two-fold increase in PDE-levels found in DMD patients in muscles with different levels of muscle wasting over the 2-year period, including DMD patients as young as 5.5 years old, suggests that PDE-levels may increase very rapidly early in the disease process and remain elevated thereafter. The low CV values in high and low SNR datasets show that PDE-levels can be accurately and reproducibly quantified in all conditions. 
Our data confirms the great potential of PDE as a marker for muscle tissue changes in DMD patients. PMID:28763477
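The within-subject coefficient of variation (CV) reported for the repeated PDE measurements can be sketched in a few lines. The repeated values below are hypothetical, purely to illustrate the computation:

```python
import statistics

def within_subject_cv(repeats):
    """Within-subject coefficient of variation (%) for repeated
    measurements in one volunteer: 100 * SD / mean."""
    return 100.0 * statistics.stdev(repeats) / statistics.mean(repeats)

# Hypothetical repeated PDE levels (arbitrary units) for one volunteer
cv = within_subject_cv([1.02, 0.98, 1.05, 0.97])
```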

  20. Investigating the Accuracy of Point Clouds Generated for Rock Surfaces

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.

    2016-12-01

Point clouds produced by different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume, and area. These point clouds can be generated by laser scanning and by close-range photogrammetry. Laser scanning is the most common method: the scanner produces a 3D point cloud at regular intervals. In close-range photogrammetry, a point cloud can be produced from photographs taken under appropriate conditions, a capability that depends on developing hardware and software technology; many photogrammetric software packages, both open source and commercial, now support point cloud generation. The two methods are close to each other in terms of accuracy: with a qualified digital camera and laser scanner, accuracies in the mm and cm range can be obtained. With both methods, field work is completed in less time than with conventional techniques. In close-range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. Although the data are comparable, the two methods differ considerably in cost. In this study, we investigate whether a point cloud produced from photographs can be used instead of one produced by a laser scanner. For this purpose, rock surfaces with complex, irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a 30 m x 10 m section of rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the two point clouds are similar and can be used as alternatives to each other.
This confirms that a point cloud produced from photographs, which is more economical and can be generated in less time, can be used in many studies instead of a point cloud produced by a laser scanner.

  1. Does competitive food and beverage legislation hurt meal participation or revenues in high schools?

    PubMed

    Peart, Tasha; Kao, Janice; Crawford, Patricia B; Samuels, Sarah E; Craypo, Lisa; Woodward-Lopez, Gail

    2012-08-01

There is limited evidence to evaluate the influence of competitive food and beverage legislation on school meal program participation and revenues. A representative sample of 56 California high schools was recruited to collect school-level data before (2006–2007) and the year after (2007–2008) policies regarding limiting competitive foods and beverages were required to be implemented. Data were obtained from school records, observations, and questionnaires. Paired t-tests assessed the significance of change between the two time points. Average participation in lunch increased from 21.7% to 25.3% (p < 0.001), representing a 17.0% increase, while average participation in breakfast increased from 8.9% to 10.3% (p = 0.02), representing a 16.0% increase. There was a significant (23.0%) increase in average meal revenue, from $0.70 to $0.86 (per student per day) (p < 0.001). There was a nonsignificant decrease (18.0%) in average sales from à la carte foods, from $0.45 to $0.37 (per student per day). Compliance with food and beverage standards also increased significantly. At the end point, compliance with beverage standards was higher (71.0%) than compliance with food standards (65.7%). Competitive food and beverage legislation can increase food service revenues when accompanied by increased rates of participation in the meal program. Future studies collecting expense data will be needed to determine impact on net revenues.
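The paired t-test used to compare the two time points can be sketched as follows. The per-school participation rates are hypothetical, and in practice the p-value would be read from a t-distribution with the returned degrees of freedom:

```python
import math
import statistics

def paired_t(before, after):
    """Paired t-test statistic for change between two time points.
    Returns (t, df); |t| is compared against a t-distribution with
    df degrees of freedom to judge significance."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample SD of the differences
    return mean_d / (sd_d / math.sqrt(n)), n - 1

# Hypothetical per-school lunch participation rates (%) at the two time points
before = [20.0, 22.5, 19.8, 23.1, 21.0, 24.0]
after  = [24.5, 26.0, 23.2, 27.0, 24.8, 26.9]
t, df = paired_t(before, after)
```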

  2. "The Effect of Alternative Representations of Lake ...

    EPA Pesticide Factsheets

Lakes can play a significant role in regional climate, modulating inland extremes in temperature and enhancing precipitation. Representing these effects becomes more important as regional climate modeling (RCM) efforts focus on simulating smaller scales. When using the Weather Research and Forecasting (WRF) model to downscale future global climate model (GCM) projections into RCM simulations, model users typically must rely on the GCM to represent temperatures at all water points. However, GCMs have insufficient resolution to adequately represent even large inland lakes, such as the Great Lakes. Some interpolation methods, such as setting lake surface temperatures (LSTs) equal to the nearest water point, can result in inland lake temperatures being set from sea surface temperatures (SSTs) that are hundreds of km away. In other cases, a single point is tasked with representing multiple large, heterogeneous lakes. Similar consequences can result from interpolating ice from GCMs to inland lake points, resulting in lakes as large as Lake Superior freezing completely in the space of a single timestep. The use of a computationally efficient inland lake model can improve RCM simulations where the input data is too coarse to adequately represent inland lake temperatures and ice (Gula and Peltier 2012). This study examines three scenarios under which ice and LSTs can be set within the WRF model when applied as an RCM to produce 2-year simulations at 12 km grid spacing.

  3. Dynamics of a linear system coupled to a chain of light nonlinear oscillators analyzed through a continuous approximation

    NASA Astrophysics Data System (ADS)

    Charlemagne, S.; Ture Savadkoohi, A.; Lamarque, C.-H.

    2018-07-01

    The continuous approximation is used in this work to describe the dynamics of a nonlinear chain of light oscillators coupled to a linear main system. A general methodology is applied to an example where the chain has local nonlinear restoring forces. The slow invariant manifold is detected at fast time scale. At slow time scale, equilibrium and singular points are sought around this manifold in order to predict periodic regimes and strongly modulated responses of the system. Analytical predictions are in good accordance with numerical results and represent a potent tool for designing nonlinear chains for passive control purposes.

  4. Single-mode fiber systems for deep space communication network

    NASA Technical Reports Server (NTRS)

    Lutes, G.

    1982-01-01

    The present investigation is concerned with the development of single-mode optical fiber distribution systems. It is pointed out that single-mode fibers represent potentially a superior medium for the distribution of frequency and timing reference signals and wideband (400 MHz) IF signals. In this connection, single-mode fibers have the potential to improve the capability and precision of NASA's Deep Space Network (DSN). Attention is given to problems related to precise time synchronization throughout the DSN, questions regarding the selection of a transmission medium, and the function of the distribution systems, taking into account specific improvements possible by an employment of single-mode fibers.

  5. Indirect synchronization control in a starlike network of phase oscillators

    NASA Astrophysics Data System (ADS)

    Kuptsov, Pavel V.; Kuptsova, Anna V.

    2018-04-01

A starlike network of non-identical phase oscillators is considered, consisting of a hub and three rays, each having a single node. In such a network an effect of indirect synchronization control is reported: by changing the natural frequency and the coupling strength of one of the peripheral oscillators, one can switch the synchronization of the others on and off. The controlling oscillator itself is not synchronized with them and has a frequency approximately four times higher than the synchronization frequency. Parameter planes showing the corresponding synchronization tongue are presented, and time dependencies of the phase differences are plotted for points within and outside the tongue.
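The abstract does not give the paper's equations; a minimal sketch of a starlike network of phase oscillators, assuming standard Kuramoto-type sine coupling in which peripherals interact only through the hub, looks like this:

```python
import math

def step(theta_hub, thetas, omega_hub, omegas, K, h):
    """One forward-Euler step for a star of phase oscillators with
    Kuramoto-type coupling: peripheral nodes couple only to the hub."""
    d_hub = omega_hub + K * sum(math.sin(t - theta_hub) for t in thetas)
    new_thetas = [t + h * (w + K * math.sin(theta_hub - t))
                  for t, w in zip(thetas, omegas)]
    return theta_hub + h * d_hub, new_thetas

# Hub plus three peripheral oscillators; equal frequencies make locking easy
theta_hub, thetas = 0.0, [0.3, 0.0, -0.2]
for _ in range(5000):
    theta_hub, thetas = step(theta_hub, thetas, omega_hub=1.0,
                             omegas=[1.0, 1.0, 1.0], K=0.5, h=0.01)
diff = abs(thetas[0] - thetas[1])   # peripheral phase difference after locking
```

Detuning one peripheral frequency and its coupling, as in the paper, is a matter of changing one entry of `omegas` and giving that node its own coupling constant.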

  6. Cavity master equation for the continuous time dynamics of discrete-spin models.

    PubMed

    Aurell, E; Del Ferraro, G; Domínguez, E; Mulet, R

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.

  7. Cavity master equation for the continuous time dynamics of discrete-spin models

    NASA Astrophysics Data System (ADS)

    Aurell, E.; Del Ferraro, G.; Domínguez, E.; Mulet, R.

    2017-05-01

    We present an alternate method to close the master equation representing the continuous time dynamics of interacting Ising spins. The method makes use of the theory of random point processes to derive a master equation for local conditional probabilities. We analytically test our solution studying two known cases, the dynamics of the mean-field ferromagnet and the dynamics of the one-dimensional Ising system. We present numerical results comparing our predictions with Monte Carlo simulations in three different models on random graphs with finite connectivity: the Ising ferromagnet, the random field Ising model, and the Viana-Bray spin-glass model.

  8. Real-time in situ nanoclustering during initial stages of artificial aging of Al-Cu alloys

    NASA Astrophysics Data System (ADS)

    Zatsepin, Nadia A.; Dilanian, Ruben A.; Nikulin, Andrei Y.; Gao, Xiang; Muddle, Barry C.; Matveev, Victor N.; Sakata, Osami

    2010-01-01

    We report an experimental demonstration of real-time in situ x-ray diffraction investigations of clustering and dynamic strain in early stages of nanoparticle growth in Al-Cu alloys. Simulations involving a simplified model of local strain are well correlated with the x-ray diffraction data, suggesting a redistribution of point defects and the formation of nanoscale clusters in the bulk material. A modal, representative nanoparticle size is determined subsequent to the final stage of artificial aging. Such investigations are imperative for the understanding, and ultimately the control, of nanoparticle nucleation and growth in this technologically important alloy.

  9. A new continuous light source for high-speed imaging

    NASA Astrophysics Data System (ADS)

    Paton, R. T.; Hall, R. E.; Skews, B. W.

    2017-02-01

Xenon arc lamps have been identified as a suitable continuous light source for high-speed imaging, specifically high-speed schlieren and shadowgraphy. One issue when setting up such systems is the time that it takes to reduce a finite source to the approximation of a point source for z-type schlieren. A preliminary design of a compact compound lens for use with a commercial xenon arc lamp was tested for suitability. While it was found that there is some dimming of the illumination at the spot periphery, the overall spectral and luminance distribution of the compact source is quite acceptable, especially considering the time benefit that it represents.

  10. Real-time, interactive animation of deformable two- and three-dimensional objects

    DOEpatents

    Desbrun, Mathieu; Schroeder, Peter; Meyer, Mark; Barr, Alan H.

    2003-06-03

    A method of updating in real-time the locations and velocities of mass points of a two- or three-dimensional object represented by a mass-spring system. A modified implicit Euler integration scheme is employed to determine the updated locations and velocities. In an optional post-integration step, the updated locations are corrected to preserve angular momentum. A processor readable medium and a network server each tangibly embodying the method are also provided. A system comprising a processor in combination with the medium, and a system comprising the server in combination with a client for accessing the server over a computer network, are also provided.
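The patent text above does not spell out the modified scheme; as a minimal sketch, a standard (unmodified) implicit Euler step for a single damped 1-D spring can be solved in closed form, which illustrates why implicit integration suits real-time mass-spring animation — the update is unconditionally stable even at large time steps:

```python
def implicit_euler_spring(x, v, k, m, c, h, steps):
    """Implicit (backward) Euler for a damped 1-D spring: m*v' = -k*x - c*v.
    Substituting x_{n+1} = x_n + h*v_{n+1} into the implicit velocity update
    gives a closed-form solve per step, stable for any step size h."""
    for _ in range(steps):
        v = (v - h * k * x / m) / (1.0 + h * c / m + h * h * k / m)
        x = x + h * v
    return x, v

# Stiff spring, large time step: explicit Euler would blow up here
x, v = implicit_euler_spring(x=1.0, v=0.0, k=100.0, m=1.0, c=0.5,
                             h=0.05, steps=400)
```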

  11. Rapid update of discrete Fourier transform for real-time signal processing

    NASA Astrophysics Data System (ADS)

    Sherlock, Barry G.; Kakad, Yogendra P.

    2001-10-01

In many identification and target recognition applications, the incoming signal will have properties that render it amenable to analysis or processing in the Fourier domain. In such applications, however, it is usually essential that the identification or target recognition be performed in real time. An important constraint upon real-time processing in the Fourier domain is the time taken to perform the Discrete Fourier Transform (DFT). Ideally, a new Fourier transform should be obtained after the arrival of every new data point. However, the Fast Fourier Transform (FFT) algorithm requires on the order of N log2 N operations, where N is the length of the transform, and this usually makes calculation of the transform for every new data point computationally prohibitive. In this paper, we develop an algorithm to update the existing DFT to represent the new data series that results when a new signal point is received. Updating the DFT in this way reduces the computational cost by a factor of log2 N, from O(N log2 N) to O(N). The algorithm can be modified to work in the presence of data window functions. This is a considerable advantage, because windowing is often necessary to reduce edge effects that occur because the implicit periodicity of the Fourier transform is not exhibited by the real-world signal. Versions are developed in this paper for use with the boxcar window, the split triangular, Hanning, Hamming, and Blackman windows. Generalization of these results to 2D is also presented.
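The windowed variants are more involved, but the core rectangular-window (boxcar) case can be sketched as a sliding DFT: when the oldest sample leaves the length-N window and a new one enters, each bin is corrected and re-rotated, costing O(N) in total rather than O(N log N) for a full recomputation:

```python
import cmath

def update_dft(X, x_old, x_new):
    """Sliding-window DFT update: given the DFT X of the previous N samples,
    return the DFT of the window with x_old dropped and x_new appended.
    X'_k = (X_k - x_old + x_new) * exp(+2j*pi*k/N)."""
    N = len(X)
    return [(Xk - x_old + x_new) * cmath.exp(2j * cmath.pi * k / N)
            for k, Xk in enumerate(X)]

def dft(x):
    """Direct O(N^2) DFT, used here only to verify the update."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

signal = [0.5, 1.0, -0.3, 0.8, 0.1, -0.6, 0.9, 0.2]
X = dft(signal)
x_new = 0.7
X_updated = update_dft(X, signal[0], x_new)       # O(N) incremental update
X_direct = dft(signal[1:] + [x_new])              # full recomputation
err = max(abs(a - b) for a, b in zip(X_updated, X_direct))
```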

  12. Interpreting the handling qualities of aircraft with stability and control augmentation

    NASA Technical Reports Server (NTRS)

    Hodgkinson, J.; Potsdam, E. H.; Smith, R. E.

    1990-01-01

    The general process of designing an aircraft for good flying qualities is first discussed. Lessons learned are pointed out, with piloted evaluation emerging as a crucial element. Two sources of rating variability in performing these evaluations are then discussed. First, the finite endpoints of the Cooper-Harper scale do not bias parametric statistical analyses unduly. Second, the wording of the scale does introduce some scatter. Phase lags generated by augmentation systems, as represented by equivalent time delays, often cause poor flying qualities. An analysis is introduced which allows a designer to relate any level of time delay to a probability of loss of aircraft control. This view of time delays should, it is hoped, allow better visibility of the time delays in the design process.

  13. Corrigendum: First principles calculation of field emission from nanostructures using time-dependent density functional theory: A simplified approach

    NASA Astrophysics Data System (ADS)

    Tawfik, Sherif A.; El-Sheikh, S. M.; Salem, N. M.

    2016-09-01

    Recently we have become aware that the description of the quantum wave functions in Sec. 2.1 is incorrect. In the published version of the paper, we have stated that the states are expanded in terms of plane waves. However, the correct description of the quantum states in the context of the real space implementation (using the Octopus code) is that states are represented by discrete points in a real space grid.

  14. Wave Information Studies of US Coastlines: Hindcast Wave Information for the Great Lakes: Lake Erie

    DTIC Science & Technology

    1991-10-01

    total ice cover) for individual grid cells measuring 5 km square. 42. The GLERL analyzed each half-month data set to provide the maximum, minimum...average, median, and modal ice concentrations for each 5-km cell . The median value, which represents an estimate of the 50-percent point of the ice...incorporating the progression and decay of the time-dependent ice cover was complicated by the fact that different grid cell sizes were used for mapping the ice

  15. A New Application of the Channel Packet Method for Low Energy 1-D Elastic Scattering

    DTIC Science & Technology

    2006-09-01

    matter. On a cosmic scale, we wonder if a collision between an asteroid and Earth led to the extinction of the dinosaurs . Collisions are important...in Figure 12. In an effort to have the computation time reasonable was chosen to be for this simulation. In order to represent the intermediate...linear regions joined by the two labeled points. However, based on Figure 13 the two potential functions are reasonably close and so one would not

  16. Research in Seismology

    DTIC Science & Technology

    1978-03-31

    detailed analysis of the data is made in an attempt to reach more definitive conclusion on that matter. Analysis of Data The largest foreshock (OT-II:22...represented with a trapezoid of unit area defined with three time segments (2.5, 1.0, 2.5 seconds). The same pattern is seen in the foreshock as shown in...parameters were taken to be the same as in the case of the aftershock. In the previous report it was pointed out that foreshock shows a secondary arrival

  17. MS Morukov prepares Zvezda for habitation during STS-106

    NASA Image and Video Library

    2000-09-13

    S106-E-5173 (13 September 2000) --- Cosmonaut Boris V. Morukov, mission specialist representing the Russian Aviation and Space Agency, is part of the team effort to ready the International Space Station (ISS) for permanent habitation. The STS-106 astronauts and cosmonauts are continuing electrical work and transfer activities as they near the halfway point of docked operations with the International Space Station. In all, the crew will have 189 hours, 40 minutes of planned Atlantis-ISS docked time.

  18. Surface representations of two- and three-dimensional fluid flow topology

    NASA Technical Reports Server (NTRS)

    Helman, James L.; Hesselink, Lambertus

    1990-01-01

    We discuss our work using critical point analysis to generate representations of the vector field topology of numerical flow data sets. Critical points are located and characterized in a two-dimensional domain, which may be either a two-dimensional flow field or the tangential velocity field near a three-dimensional body. Tangent curves are then integrated out along the principal directions of certain classes of critical points. The points and curves are linked to form a skeleton representing the two-dimensional vector field topology. When generated from the tangential velocity field near a body in a three-dimensional flow, the skeleton includes the critical points and curves which provide a basis for analyzing the three-dimensional structure of the flow separation. The points along the separation curves in the skeleton are used to start tangent curve integrations to generate surfaces representing the topology of the associated flow separations.
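Characterizing a two-dimensional critical point comes down to the eigenstructure of the velocity-field Jacobian at that point; a minimal classifier using the standard trace-determinant criterion (degenerate cases with zero determinant ignored for brevity):

```python
def classify_critical_point(J):
    """Classify a 2-D critical point from the velocity-field Jacobian
    J = [[du/dx, du/dy], [dv/dx, dv/dy]] via its trace and determinant.
    Negative determinant -> saddle; otherwise the sign of the trace gives
    attraction/repulsion and the discriminant separates nodes from foci."""
    (a, b), (c, d) = J
    tr, det = a + d, a * d - b * c
    if det < 0:
        return "saddle"
    if tr == 0:
        return "center"
    disc = tr * tr - 4 * det
    kind = "node" if disc >= 0 else "focus"
    return ("attracting " if tr < 0 else "repelling ") + kind

print(classify_critical_point([[0, 1], [-1, 0]]))        # pure rotation
print(classify_critical_point([[1, 0], [0, -1]]))        # stretch/compress
print(classify_critical_point([[-1, 0.5], [-0.5, -1]]))  # damped swirl
```

In the skeleton construction above, tangent curves would then be integrated out along the eigenvector directions of saddles and similar points.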

  19. How Mathematics Describes Life

    NASA Astrophysics Data System (ADS)

    Teklu, Abraham

    2017-01-01

The circle of life is something we have all heard of, but we don't usually try to calculate it. For some time we have been working on analyzing a predator-prey model to better understand how mathematics can describe life, in particular the interaction between two different species. The model we are analyzing is the Holling-Tanner model, which cannot be solved analytically. The Holling-Tanner model is very common in population dynamics because it is a simple descriptor of how predators and prey interact. The model is a system of two differential equations. It is not specific to any particular pair of species, so it can describe predator-prey systems ranging from lions and zebras to white blood cells and infections. One thing all these systems have in common is critical points. A critical point is a pair of population values that keeps both populations constant; it is important because there the differential equations are equal to zero. For this model there are two critical points: a predator-free critical point and a coexistence critical point. Most of our analysis is of the coexistence critical point, because the predator-free critical point is always unstable and frankly less interesting. We considered two regimes for the differential equations, large B and small B. A, B, and C are parameters in the differential equations that control the system: B measures how responsive the predators are to changes in the population, A represents predation of the prey, and C represents the satiation point of the prey population. For the large B case we were able to approximate the system of differential equations by a single scalar equation. For the small B case we were able to predict the limit cycle, a process in which the predator and prey populations grow and shrink periodically.
This model has a limit cycle in the regime of small B, which we solved for numerically. With some assumptions to reduce the differential equations, we were able to create a system of equations and unknowns that predicts the behavior of the limit cycle for small B.
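The abstract does not write out the equations; a sketch assuming one common nondimensional form of the Holling-Tanner model shows how the coexistence critical point is located. In this form the predator equation forces the two populations to be equal at coexistence, and substitution into the prey equation leaves a quadratic:

```python
import math

# Assumed nondimensional Holling-Tanner form (one common variant):
#   x' = x(1 - x) - A*x*y/(x + C)    (prey with type-II predation, A = predation)
#   y' = B*y*(1 - y/x)               (predator; B = responsiveness)
# At coexistence both derivatives vanish with x, y > 0: the predator equation
# gives y = x, and the prey equation reduces to x^2 + (A + C - 1)x - C = 0.

def coexistence_point(A, C):
    """Positive root of the coexistence quadratic; note B does not enter,
    matching the abstract's point that B controls dynamics, not location."""
    b = A + C - 1.0
    x = (-b + math.sqrt(b * b + 4.0 * C)) / 2.0
    return x, x                                   # y* = x*

def derivatives(x, y, A, B, C):
    dx = x * (1 - x) - A * x * y / (x + C)
    dy = B * y * (1 - y / x)
    return dx, dy

xs, ys = coexistence_point(A=1.0, C=0.5)
dx, dy = derivatives(xs, ys, A=1.0, B=0.2, C=0.5)  # both should vanish
```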

  20. A catalog of rules, variables, and definitions applied to accelerometer data in the National Health and Nutrition Examination Survey, 2003-2006.

    PubMed

    Tudor-Locke, Catrine; Camhi, Sarah M; Troiano, Richard P

    2012-01-01

    The National Health and Nutrition Examination Survey (NHANES) included accelerometry in the 2003-2006 data collection cycles. Researchers have used these data since their release in 2007, but the data have not been consistently treated, examined, or reported. The objective of this study was to aggregate data from studies using NHANES accelerometry data and to catalogue study decision rules, derived variables, and cut point definitions to facilitate a more uniform approach to these data. We conducted a PubMed search of English-language articles published (or indicated as forthcoming) from January 2007 through December 2011. Our initial search yielded 74 articles, plus 1 article that was not indexed in PubMed. After excluding 21 articles, we extracted and tabulated details on 54 studies to permit comparison among studies. The 54 articles represented various descriptive, methodological, and inferential analyses. Although some decision rules for treating data (eg, criteria for minimal wear-time) were consistently applied, cut point definitions used for accelerometer-derived variables (eg, time spent in various intensities of physical activity) were especially diverse. Unique research questions may require equally unique analytical approaches; some inconsistency in approaches must be tolerated if scientific discovery is to be encouraged. This catalog provides a starting point for researchers to consider relevant and/or comparable accelerometer decision rules, derived variables, and cut point definitions for their own research questions.

  1. Advanced yellow fever virus genome detection in point-of-care facilities and reference laboratories.

    PubMed

    Domingo, Cristina; Patel, Pranav; Yillah, Jasmin; Weidmann, Manfred; Méndez, Jairo A; Nakouné, Emmanuel Rivalyn; Niedrig, Matthias

    2012-12-01

    Reported methods for the detection of the yellow fever viral genome are beset by limitations in sensitivity, specificity, strain detection spectra, and suitability to laboratories with simple infrastructure in areas of endemicity. We describe the development of two different approaches affording sensitive and specific detection of the yellow fever genome: a real-time reverse transcription-quantitative PCR (RT-qPCR) and an isothermal protocol employing the same primer-probe set but based on helicase-dependent amplification technology (RT-tHDA). Both assays were evaluated using yellow fever cell culture supernatants as well as spiked and clinical samples. We demonstrate reliable detection by both assays of different strains of yellow fever virus with improved sensitivity and specificity. The RT-qPCR assay is a powerful tool for reference or diagnostic laboratories with real-time PCR capability, while the isothermal RT-tHDA assay represents a useful alternative to earlier amplification techniques for the molecular diagnosis of yellow fever by field or point-of-care laboratories.

  2. Coordinates for Representing Radiation Belt Particle Flux

    NASA Astrophysics Data System (ADS)

    Roederer, Juan G.; Lejosne, Solène

    2018-02-01

    Fifty years have passed since the parameter "L-star" was introduced in geomagnetically trapped particle dynamics. It is thus timely to review the use of adiabatic theory in present-day studies of the radiation belts, with the intention of helping to prevent common misinterpretations and the frequent confusion between concepts like "distance to the equatorial point of a field line," McIlwain's L-value, and the trapped particle's adiabatic L* parameter. And too often do we miss in the recent literature a proper discussion of the extent to which some observed time and space signatures of particle flux could simply be due to changes in magnetospheric field, especially insofar as off-equatorial particles are concerned. We present a brief review on the history of radiation belt parameterization, some "recipes" on how to compute adiabatic parameters, and we illustrate our points with a real event in which magnetospheric disturbance is shown to adiabatically affect the particle fluxes measured onboard the Van Allen Probes.

  3. A Possible Approach to Inclusion of Space and Time in Frame Fields of Quantum Representations of Real and Complex Numbers

    DOE PAGES

    Benioff, Paul

    2009-01-01

This work is based on the field of reference frames based on quantum representations of real and complex numbers described in other work. Here frame domains are expanded to include space and time lattices. Strings of qukits are described as hybrid systems as they are both mathematical and physical systems. As mathematical systems they represent numbers. As physical systems in each frame the strings have a discrete Schrödinger dynamics on the lattices. The frame field has an iterative structure such that the contents of a stage j frame have images in a stage j - 1 (parent) frame. A discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames. The resulting association of energy with images of lattice point locations, as hybrid system states, is discussed. Representations and images of other physical systems in the different frames are also described.

  4. A novel method for the line-of-response and time-of-flight reconstruction in TOF-PET detectors based on a library of synchronized model signals

    NASA Astrophysics Data System (ADS)

    Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.

    2015-03-01

A novel method of hit time and hit position reconstruction in scintillator detectors is described. The method is based on comparison of detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the signal from the library which is most similar to the measured signal. The time of the interaction is determined as a relative time between the measured signal and the most similar one in the library. The degree of similarity of measured and model signals is defined as the distance between the points representing the measured and model signals in the multi-dimensional measurement space. The novelty of the method also lies in the proposed way of synchronizing the model signals, enabling direct determination of the difference between the times of flight (TOF) of annihilation quanta from the annihilation point to the detectors. The introduced method was validated using experimental data obtained by means of the double-strip prototype of the J-PET detector and the 22Na sodium isotope as a source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both sides to photomultipliers, from which signals were sampled by means of a Serial Data Analyzer. Using the introduced method, spatial and TOF resolutions of about 1.3 cm (σ) and 125 ps (σ) were established, respectively.
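The position-matching step can be sketched as a nearest-neighbor search over the signal library, with similarity taken as Euclidean distance in the space spanned by the sampled signal values. The library entries below are hypothetical pulse shapes, purely for illustration:

```python
import math

def nearest_model(measured, library):
    """Return the library position whose model signal is closest to the
    measured signal (Euclidean distance in the sampled-signal space) --
    a simplified sketch of the J-PET hit-position reconstruction step."""
    best_pos, best_d = None, math.inf
    for position, model in library.items():
        d = math.dist(measured, model)
        if d < best_d:
            best_pos, best_d = position, d
    return best_pos, best_d

# Hypothetical library: sampled pulse shapes keyed by scintillation position (cm)
library = {
    5.0:  [0.1, 0.9, 0.4, 0.1],
    15.0: [0.2, 0.7, 0.6, 0.2],
    25.0: [0.3, 0.5, 0.7, 0.4],
}
measured = [0.19, 0.72, 0.58, 0.22]
pos, dist = nearest_model(measured, library)
```

In the actual method the hit time is then obtained from the relative shift between the measured signal and the matched model signal.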

  5. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximation of the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For a particular case of a linear accumulation model and absolutely dated tie points an analytical solution is found suggesting the Beta-distributed probability density on age estimates along the length of a proxy archive. In a general situation of uncertainties in the ages of the tie points the proposed method employs MCMC simulations of age-depth profiles yielding empirical confidence intervals on the constructed piecewise linear best guess timescale. It is suggested that the approach can be further extended to a more general case of a time-varying expected accumulation between the tie points. The approach is illustrated by using two ice and two lake/marine sediment cores representing the typical examples of paleoproxy archives with age models based on tie points of mixed origin.
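A minimal Monte Carlo sketch of the idea, assuming accumulation between two absolutely dated tie points is a sum of i.i.d. Gamma increments (a discrete Gamma process): the age assigned to an interior depth is the tie-point interval scaled by the accumulated fraction, which for this linear-accumulation case is Beta-distributed, as the paper derives. The layer count, shape parameter, and tie-point ages below are illustrative:

```python
import random

def simulate_age(depth_idx, n_layers, shape, t0, t1, rng):
    """One realization of the age at layer depth_idx, for n_layers of
    Gamma(shape, 1) accumulation between tie points dated t0 and t1.
    The accumulated fraction is Beta(depth_idx*shape, (n-depth_idx)*shape)."""
    incs = [rng.gammavariate(shape, 1.0) for _ in range(n_layers)]
    frac = sum(incs[:depth_idx]) / sum(incs)
    return t0 + frac * (t1 - t0)

rng = random.Random(42)
ages = sorted(simulate_age(50, 100, 2.0, 0.0, 1000.0, rng)
              for _ in range(2000))
lo, hi = ages[50], ages[1949]       # ~95% empirical confidence interval
mean_age = sum(ages) / len(ages)
```

With uncertain tie-point ages, the paper's MCMC approach would additionally sample t0 and t1 from their dating distributions.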

  6. Upper Limb Kinematics in Stroke and Healthy Controls Using Target-to-Target Task in Virtual Reality.

    PubMed

    Hussain, Netha; Alt Murphy, Margit; Sunnerhagen, Katharina S

    2018-01-01

Kinematic analysis in a virtual reality (VR) environment provides quantitative assessment of upper limb movements. This technique has rarely been used in evaluating motor function in stroke despite its availability in stroke rehabilitation. To determine the discriminative validity of VR-based kinematics during a target-to-target pointing task in individuals with mild or moderate arm impairment following stroke and in healthy controls. Sixty-seven participants with moderate (32-57 points) or mild (58-65 points) stroke impairment as assessed with the Fugl-Meyer Assessment for Upper Extremity were included from the Stroke Arm Longitudinal study at the University of Gothenburg-SALGOT cohort of non-selected individuals within the first year of stroke. The stroke groups and 43 healthy controls performed the target-to-target pointing task, where 32 circular targets appear one after the other and disappear when pointed at by the haptic handheld stylus in a three-dimensional VR environment. The kinematic parameters captured by the stylus included movement time, velocities, and smoothness of movement. The movement time, mean velocity, and peak velocity were discriminative between groups with moderate and mild stroke impairment and healthy controls. The movement time was longer and mean and peak velocity were lower for individuals with stroke. The number of velocity peaks, representing smoothness, was also discriminative and significantly higher in both stroke groups (mild, moderate) compared to controls. Movement trajectories in stroke more frequently showed clustering (spider's web) close to the target, indicating deficits in movement precision. The target-to-target pointing task can provide valuable and specific information about sensorimotor impairment of the upper limb following stroke that might not be captured using traditional clinical scales. The trial was registered with register number NCT01115348 at clinicaltrials.gov, on May 4, 2010.
URL: https://clinicaltrials.gov/ct2/show/NCT01115348.

  7. Optimal impulsive time-fixed orbital rendezvous and interception with path constraints

    NASA Technical Reports Server (NTRS)

    Taur, D.-R.; Prussing, J. E.; Coverstone-Carroll, V.

    1990-01-01

    Minimum-fuel, impulsive, time-fixed solutions are obtained for the problem of orbital rendezvous and interception with interior path constraints. Transfers between coplanar circular orbits in an inverse-square gravitational field are considered, subject to a circular path constraint representing a minimum or maximum permissible orbital radius. Primer vector theory is extended to incorporate path constraints. The optimal number of impulses, their times and positions, and the presence of initial or final coasting arcs are determined. The existence of constraint boundary arcs and boundary points is investigated as well as the optimality of a class of singular arc solutions. To illustrate the complexities introduced by path constraints, an analysis is made of optimal rendezvous in field-free space subject to a minimum radius constraint.

  8. Episodic-like memory trace in awake replay of hippocampal place cell activity sequences

    PubMed Central

    Takahashi, Susumu

    2015-01-01

    Episodic memory retrieval of events at a specific place and time is effective for future planning. Sequential reactivation of the hippocampal place cells along familiar paths while the animal pauses is well suited to such a memory retrieval process. It is, however, unknown whether this awake replay represents events occurring along the path. Using a subtask switching protocol in which the animal experienced three subtasks as ‘what’ information in a maze, I here show that the replay represents a trial type, consisting of path and subtask, in terms of neuronal firing timings and rates. The actual trial type to be rewarded could only be reliably predicted from replays that occurred at the decision point. This trial-type representation implies that not only ‘where and when’ but also ‘what’ information is contained in the replay. This result supports the view that awake replay is an episodic-like memory retrieval process. DOI: http://dx.doi.org/10.7554/eLife.08105.001 PMID:26481131

  9. Influence of well-known risk factors for hearing loss in a longitudinal twin study.

    PubMed

    Johnson, Ann-Christin; Bogo, Renata; Farah, Ahmed; Karlsson, Kjell K; Muhr, Per; Sjöström, Mattias; Svensson, Eva B; Skjönsberg, Åsa; Svartengren, Magnus

    2017-01-01

    The aim was to investigate the influence of environmental exposures on hearing loss in a twin cohort. Male twins born 1914-1958, representing an unscreened population, were tested for hearing loss on two occasions, 18 years apart. Clinical audiometry and a questionnaire were administered at both time points in this longitudinal study. Noise and solvent exposure were assessed using occupational work codes and a job exposure matrix. Hearing impairment was investigated using two different pure tone averages: PTA4 (0.5, 1, 2, and 4 kHz) and HPTA4 (3, 4, 6, and 8 kHz). Age affected all outcome measures. Noise exposure between time points one and two affected the threshold shifts of PTA4 and HPTA4 more in participants with a pre-existing hearing loss at time point one. Lifetime occupational noise exposure was a risk factor especially for the low-frequency hearing threshold PTA4. Firearm use was a statistically significant risk factor for all outcome measures. Pre-existing hearing loss can increase the risk of hearing impairment due to occupational noise exposure. An increased risk for noise-induced hearing loss (NIHL) was also seen in the group with exposures below 85 dB(A), a result that indicates awareness of NIHL should be raised even for those working in environments where sound levels are below 85 dB(A).
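As a minimal illustration of the two pure tone averages named above (the threshold values below are hypothetical, not study data):

```python
# Pure tone averages over the study's two frequency sets (kHz).
# Threshold values (dB HL) are invented for illustration only.
thresholds = {0.5: 15, 1: 20, 2: 30, 3: 40, 4: 45, 6: 55, 8: 60}

def pta(thresholds, freqs):
    """Mean hearing threshold (dB HL) over the given frequencies."""
    return sum(thresholds[f] for f in freqs) / len(freqs)

pta4 = pta(thresholds, [0.5, 1, 2, 4])   # low/mid-frequency average
hpta4 = pta(thresholds, [3, 4, 6, 8])    # high-frequency average
print(pta4, hpta4)  # 27.5 50.0
```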

  10. The study of infrared target recognition at sea background based on visual attention computational model

    NASA Astrophysics Data System (ADS)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images at sea background are notorious for their low signal-to-noise ratio, and target recognition in such images is therefore very difficult with traditional methods. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image-processing techniques are combined in a manner that utilizes the strengths of both. The visual attention algorithm automatically searches for salient regions, represents them by a set of winner points, and marks them as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate filtering and segmentation; a labeling operation is then added selectively as required. Using the labeled information, we obtain the position of the region of interest from the final segmentation result, mark the centroid on the corresponding original image, and complete localization of the target. The computation time depends on the salient regions rather than on the size of the image, so the processing time is greatly reduced. The method was applied to the recognition of several kinds of real infrared images, and the experimental results demonstrate the effectiveness of the algorithm presented in this paper.

  11. 7 CFR 800.72 - Explanation of additional service fees for services performed in the United States only.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... representative to the service location (at other than a specified duty point) is more than 25 miles from an FGIS... representative will be assessed from the FGIS office to the service point and return. When commercial modes of transportation (e.g., airplanes) are required, the actual expense incurred for the round-trip travel will be...

  12. 7 CFR 205.309 - Agricultural products in other than packaged form at the point of retail sale that are sold...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the point of retail sale that are sold, labeled, or represented as “made with organic (specified... of retail sale that are sold, labeled, or represented as “made with organic (specified ingredients or... food group(s)),” to modify the name of the product in retail display, labeling, and display containers...

  13. 7 CFR 205.309 - Agricultural products in other than packaged form at the point of retail sale that are sold...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the point of retail sale that are sold, labeled, or represented as “made with organic (specified... of retail sale that are sold, labeled, or represented as “made with organic (specified ingredients or... food group(s)),” to modify the name of the product in retail display, labeling, and display containers...

  14. 7 CFR 205.309 - Agricultural products in other than packaged form at the point of retail sale that are sold...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the point of retail sale that are sold, labeled, or represented as “made with organic (specified... of retail sale that are sold, labeled, or represented as “made with organic (specified ingredients or... food group(s)),” to modify the name of the product in retail display, labeling, and display containers...

  15. Determination of geostatistically representative sampling locations in Porsuk Dam Reservoir (Turkey)

    NASA Astrophysics Data System (ADS)

    Aksoy, A.; Yenilmez, F.; Duzgun, S.

    2013-12-01

    Several factors, such as wind action, the bathymetry and shape of a lake or reservoir, inflows, outflows, and point and diffuse pollution sources, result in spatial and temporal variations in the water quality of lakes and reservoirs. The guides by the United Nations Environment Programme and the World Health Organization for designing and implementing water quality monitoring programs suggest that even a single monitoring station near the center or at the deepest part of a lake will be sufficient to observe long-term trends if there is good horizontal mixing. In stratified water bodies, several samples can be required. According to the guide on sampling and analysis under the Turkish Water Pollution Control Regulation, a minimum of five sampling locations should be employed to characterize the water quality in a reservoir or a lake. The European Union Water Framework Directive (2000/60/EC) calls for selecting a sufficient number of monitoring sites to assess the magnitude and impact of point and diffuse sources and hydromorphological pressures when designing a monitoring program. Although existing regulations and guidelines include frameworks for the determination of sampling locations in surface waters, most of them do not specify a procedure for establishing representative sampling locations in lakes and reservoirs. In this study, geostatistical tools are used to determine representative sampling locations in the Porsuk Dam Reservoir (PDR). Kernel density estimation and kriging were used in combination to select the representative sampling locations. Dissolved oxygen (DO) and specific conductivity were measured at 81 points, sixteen of which were used for validation. In selecting the representative sampling locations, care was taken to preserve the spatial structure of the distributions of the measured parameters, and a procedure was proposed for that purpose. Results indicated that the spatial structure was lost with fewer than 30 sampling points, a consequence of varying water quality in the reservoir due to inflows, point and diffuse inputs, and reservoir hydromorphology. Moreover, hot spots were determined based on kriging and standard error maps, along with the locations of the minimum number of sampling points that represent the actual spatial structure of the DO distribution in the PDR.
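The kernel-density step of selecting representative sampling locations can be sketched as follows; the coordinates and values are synthetic, and the study's kriging and standard-error analysis are not reproduced here:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical measurement locations (x, y) and dissolved-oxygen values;
# 81 points, as in the study, but with invented coordinates.
xy = rng.uniform(0, 10, size=(2, 81))
do = 8 + 0.3 * xy[0] + rng.normal(0, 0.2, 81)

# Kernel density estimate of sampling-point density over the reservoir.
kde = gaussian_kde(xy)
density = kde(xy)  # estimated density at each measured location

# One simple heuristic: keep the n points in the least densely sampled
# areas as candidate representative locations (the paper combines KDE
# with kriging standard-error maps rather than using density alone).
n = 30
candidates = np.argsort(density)[:n]
print(len(candidates))  # 30
```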

  16. Three-Dimensional Structure and Evolution of Extreme-Ultraviolet Bright Points Observed by STEREO/SECCHI/EUVI

    NASA Technical Reports Server (NTRS)

    Kwon, Ryun Young; Chae, Jongchul; Davila, Joseph M.; Zhang, Jie; Moon, Yong-Jae; Poomvises, Watanachak; Jones, Shaela I.

    2012-01-01

    We unveil the three-dimensional structure of quiet-Sun EUV bright points and their temporal evolution by applying a triangulation method to time series of images taken by SECCHI/EUVI on board the STEREO twin spacecraft. For this study we examine the heights and lengths as the components of the three-dimensional structure of EUV bright points and their temporal evolutions. Among them we present three bright points which show three distinct changes in the height and length: decreasing, increasing, and steady. We show that the three distinct changes are consistent with the motions (converging, diverging, and shearing, respectively) of their photospheric magnetic flux concentrations. Both growth and shrinkage of the magnetic fluxes occur during their lifetimes and they are dominant in the initial and later phases, respectively. They are all multi-temperature loop systems which have hot loops (approximately 10^6.2 K) overlying cooler ones (approximately 10^6.0 K) with cool legs (approximately 10^4.9 K) during their whole evolutionary histories. Our results imply that the multi-thermal loop system is a general character of EUV bright points. We conclude that EUV bright points are flaring loops formed by magnetic reconnection and their geometry may represent the reconnected magnetic field lines rather than the separator field lines.

  17. 21 CFR 111.80 - What representative samples must you collect?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Process Control System § 111.80 What representative samples must you collect? The representative samples... unique lot within each unique shipment); (b) Representative samples of in-process materials for each manufactured batch at points, steps, or stages, in the manufacturing process as specified in the master...

  18. Method and apparatus for modeling interactions

    DOEpatents

    Xavier, Patrick G.

    2000-08-08

    A method and apparatus for modeling interactions between bodies. The method comprises representing two bodies undergoing translations and rotations by two hierarchical swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention can serve as a practical tool in motion planning, CAD systems, simulation systems, safety analysis, and applications that require modeling time-based interactions. A body can be represented in the present invention by a union of convex polygons and convex polyhedra. As used generally herein, polyhedron includes polygon, and polyhedra includes polygons. The body undergoing translation can be represented by a swept body representation, where the swept body representation comprises a hierarchical bounding volume representation whose leaves each contain a representation of the region swept by a section of the body during the translation, and where the union of the regions is a superset of the region swept by the surface of the body during translation. Interactions between two bodies thus represented can be modeled by modeling interactions between the convex hulls of the finite sets of discrete points in the swept body representations.

  19. [Deficiency, disability, neurology and art].

    PubMed

    Cano de la Cuerda, Roberto; Collado-Vazquez, Susana

    2010-07-16

    Disability is a complex phenomenon, and the ways it has been conceived, explained and treated have varied notably throughout history. As the years go by, human beings have evolved and, at the same time, so have medicine and art. And therein lies the extraordinary value, from the ontological point of view, of many works of art, which would never have been produced without the intervention of disease and the practice of the medical art. The aim of this work is to address the study of some deficiencies, disabilities and neurological pathologies that have been represented in paintings at different times in history. This article begins with the study of pictures that deal with dwarves and other misnamed freaks of nature that have been represented by painters from Velazquez to Titian or Rubens. The study looks at paintings of cripples, pictures containing the mentally disabled, with examples by Bruegel the Elder or Munch, as well as certain neurological disorders that have been portrayed in paintings, such as Escaping criticism by Pere Borrell or Sad inheritance by Sorolla. Likewise, we also reflect on the trite concept of disease and artistic creativity. The artistic representation of deficiency and disability has evolved in parallel to the feelings of men and women in each period of history and, at the same time, their social evolution. Nowadays, this concept continues to advance and some artists no longer represent the sick person, but instead the illness itself.

  20. Profiles of verbal working memory growth predict speech and language development in children with cochlear implants.

    PubMed

    Kronenberger, William G; Pisoni, David B; Harris, Michael S; Hoen, Helena M; Xu, Huiping; Miyamoto, Richard T

    2013-06-01

    Verbal short-term memory (STM) and working memory (WM) skills predict speech and language outcomes in children with cochlear implants (CIs) even after conventional demographic, device, and medical factors are taken into account. However, prior research has focused on single end point outcomes as opposed to the longitudinal process of development of verbal STM/WM and speech-language skills. In this study, the authors investigated relations between profiles of verbal STM/WM development and speech-language development over time. Profiles of verbal STM/WM development were identified through the use of group-based trajectory analysis of repeated digit span measures over at least a 2-year time period in a sample of 66 children (ages 6-16 years) with CIs. Subjects also completed repeated assessments of speech and language skills during the same time period. Clusters representing different patterns of development of verbal STM (digit span forward scores) were related to the growth rate of vocabulary and language comprehension skills over time. Clusters representing different patterns of development of verbal WM (digit span backward scores) were related to the growth rate of vocabulary and spoken word recognition skills over time. Different patterns of development of verbal STM/WM capacity predict the dynamic process of development of speech and language skills in this clinical population.

  1. Looking for Off-Fault Deformation and Measuring Strain Accumulation During the Past 70 years on a Portion of the Locked San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Vadman, M.; Bemis, S. P.

    2017-12-01

    Even at high tectonic rates, detection of possible off-fault plastic/aseismic deformation and of variability in far-field strain accumulation requires high-spatial-resolution data and likely decades of measurements. Because variability in interseismic deformation could influence the timing, size, and location of future earthquakes and the calculation of modern geodetic estimates of strain, we attempt to use historical aerial photographs to constrain deformation through time across a locked fault. Modern photo-based 3D reconstruction techniques facilitate the creation of dense point clouds from historical aerial photograph collections. We use these tools to generate a time series of high-resolution point clouds that span 10-20 km across the Carrizo Plain segment of the San Andreas fault. We chose this location for the high tectonic rates along the San Andreas fault and the lack of vegetation that could obscure tectonic signals. We use ground control points collected with differential GPS to establish scale and georeference the aerial photograph-derived point clouds. With a locked-fault assumption, point clouds can be co-registered (to one another and/or to the 1.7 km wide B4 airborne lidar dataset) along the fault trace to calculate relative displacements away from the fault. We use CloudCompare to compute 3D surface displacements, which reflect the interseismic strain accumulation that occurred in the time interval between photo collections. As expected, we do not observe clear surface displacements along the primary fault trace in our comparisons of the B4 lidar data against the aerial photograph-derived point clouds. However, there may be small-scale variations within the lidar swath area that represent near-fault plastic deformation. 
With large-scale historical photographs available for the Carrizo Plain extending back to at least the 1940s, we can potentially sample nearly half the interseismic period since the last major earthquake on this portion of this fault (1857). Where sufficient aerial photograph coverage is available, this approach has the potential to illuminate complex fault zone processes for this and other major strike-slip faults.
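The cloud-to-cloud comparison described above can be sketched with a nearest-neighbour query; everything below (cloud sizes, the 5 cm displacement, the noise level) is hypothetical illustration data, not the study's measurements:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
# Synthetic reference cloud (standing in for the lidar data) and a later
# photogrammetric cloud offset by a small displacement (invented values).
reference = rng.uniform(0, 100, size=(5000, 3))
displacement = np.array([0.05, 0.0, 0.0])   # 5 cm shift in x
compared = reference + displacement + rng.normal(0, 0.01, size=(5000, 3))

# Cloud-to-cloud (C2C) distances: for each point in the later cloud,
# the distance to its nearest reference neighbour -- the same basic
# comparison CloudCompare performs.
tree = cKDTree(reference)
d, _ = tree.query(compared)
print(round(float(d.mean()), 3))
```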

  2. Crystallization of supercooled liquids

    NASA Astrophysics Data System (ADS)

    Odagaki, Takashi; Shikuya, Yuuna

    2014-03-01

    We investigate the crystallization process on the basis of the free energy landscape (FEL) approach to non-equilibrium systems. In this approach, the crystallization time is given by the first passage time of the representative point arriving at the crystalline basin in the FEL. We devise an efficient method to obtain the first passage time exploiting a specific boundary condition. Applying this formalism to a model system, we show that the first passage time is determined by two competing effects; one is the difference in the free energy of the initial and the final basins, and the other is the slow relaxation. As the temperature is reduced, the former accelerates the crystallization and the latter retards it. We show that these competing effects give rise to the typical nose-shape form of the time-temperature transformation curve and that the retardation of the crystallization is related to the mean waiting time of the jump motion.
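A minimal sketch of the first-passage-time idea, assuming a toy one-dimensional free-energy landscape and overdamped Langevin dynamics (the landscape, units, and parameters are invented for illustration, not the paper's model):

```python
import numpy as np

def first_passage_time(T, x0=-1.0, x_cryst=1.0, dt=1e-3,
                       max_steps=200_000, seed=1):
    """First passage time of a representative point on a toy tilted
    double-well landscape F(x) = x^4 - 2x^2 - 0.5x, where the right
    (lower) well stands in for the crystalline basin."""
    rng = np.random.default_rng(seed)
    x = x0
    for step in range(max_steps):
        force = -(4 * x**3 - 4 * x - 0.5)            # -dF/dx
        x += force * dt + np.sqrt(2 * T * dt) * rng.normal()
        if x >= x_cryst:                             # reached crystalline basin
            return step * dt
    return np.inf  # did not crystallize within the simulated time

# At moderate temperature the barrier is crossed in finite time.
print(first_passage_time(T=0.5) > 0.0)  # True
```

Lowering T in this toy model both increases the free-energy drive and slows the barrier crossing, mimicking the two competing effects behind the nose-shaped time-temperature transformation curve described above.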

  3. Reliability of Travel Time: Challenges Posed by a Multimodal Transport Participation

    NASA Astrophysics Data System (ADS)

    Wanjek, Monika; Hauger, Georg

    2017-10-01

    Travel time reliability represents an essential component in the individual decision-making processes of transport participants, particularly regarding mode choice. As a criterion describing the quality of both transportation systems and transportation modes, travel time reliability is already frequently compiled, analysed and quoted as an argument. Currently, however, travel time reliability is considered only for monomodal trips and has remained unconsidered for multimodal transport participation. Given that multimodality has gained significantly in importance, it is crucial to discuss how travel time reliability could be determined for multimodal trips. This paper points out the challenges that arise in applying travel time reliability to multimodal transport participation, illustrated with examples. To ground the theoretical ideas, trips and influencing factors that could be expected within the everyday transport behaviour of commuters in a (sub)urban area are described.

  4. Probabilistic seismic hazard in the San Francisco Bay area based on a simplified viscoelastic cycle model of fault interactions

    USGS Publications Warehouse

    Pollitz, F.F.; Schwartz, D.P.

    2008-01-01

    We construct a viscoelastic cycle model of plate boundary deformation that includes the effects of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of the stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, times of last earthquake (for prehistoric ruptures) and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.

  5. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables to the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre; if the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing step in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; the method is also not very accurate, as it smoothes the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, between which linear interpolation is later applied in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions. 
To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
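A simplified sketch of the conservation requirement, not the published algorithm itself: choose piecewise-linear node values so that each interval's trapezoidal integral reproduces the accumulated amount:

```python
def remap(totals, dt=1.0, v0=0.0):
    """Node values v[0..n] for a piecewise-linear rate function such
    that the trapezoidal integral over interval i equals totals[i].
    Simplified illustration: clipping at zero enforces non-negativity
    but can break exact conservation; the published algorithm instead
    adds interior support points within each interval."""
    v = [v0]
    for p in totals:
        v.append(max(0.0, 2.0 * p / dt - v[-1]))
    return v

totals = [2.0, 4.0, 2.0]   # accumulated precipitation per interval
v = remap(totals)
# Verify per-interval conservation (no clipping is triggered here):
integrals = [(v[i] + v[i + 1]) / 2.0 for i in range(len(totals))]
print(v, integrals)  # [0.0, 4.0, 4.0, 0.0] [2.0, 4.0, 2.0]
```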

  6. Mobile-Based Nutrition and Child Health Monitoring to Inform Program Development: An Experience From Liberia.

    PubMed

    Guyon, Agnes; Bock, Ariella; Buback, Laura; Knittel, Barbara

    2016-12-23

    Implementing complex nutrition and other public health projects and tracking nutrition interventions, such as women's diet and supplementation and infant and young child feeding practices, requires reliable routine data to identify potential program gaps and to monitor trends in behaviors in real time. However, current monitoring and evaluation practices generally do not create an environment for this real-time tracking. This article describes the development and application of a mobile-based nutrition and health monitoring system, which collected monitoring data on project activities, women's nutrition, and infant and young child feeding practices in real time. The Liberia Agricultural Upgrading Nutrition and Child Health (LAUNCH) project implemented a nutrition and health monitoring system between April 2012 and June 2014. The LAUNCH project analyzed project monitoring and outcome data from the system and shared selected behavioral and programmatic indicators with program managers through a short report, which later evolved into a visual data dashboard, during program-update meetings. The project designed protocols to ensure representativeness of program participants. LAUNCH made programmatic adjustments in response to findings from the monitoring system; these changes were then reflected in subsequent quarterly trends, indicating that the availability of timely data allowed for the project to react quickly to issues and adapt the program appropriately. Such issues included lack of participation in community groups and insufficient numbers of food distribution points. Likewise, the system captured trends in key outcome indicators such as breastfeeding and complementary feeding practices, linking them to project activities and external factors including seasonal changes and national health campaigns. Digital data collection platforms can play a vital role in improving routine programmatic functions. 
Fixed gathering locations such as food distribution points represent an opportunity to easily access program participants and enable managers to identify strengths and weaknesses in project implementation. For programs that track individuals over time, a mobile tool combined with a strong database can greatly improve efficiency and data visibility and reduce resource leakages. © Guyon et al.

  7. Decoding the spatial signatures of multi-scale climate variability - a climate network perspective

    NASA Astrophysics Data System (ADS)

    Donner, R. V.; Jajcay, N.; Wiedermann, M.; Ekhtiari, N.; Palus, M.

    2017-12-01

    During the last years, the application of complex networks as a versatile tool for analyzing complex spatio-temporal data has gained increasing interest. Establishing this approach as a new paradigm in climatology has already provided valuable insights into key spatio-temporal climate variability patterns across scales, including novel perspectives on the dynamics of the El Nino Southern Oscillation or the emergence of extreme precipitation patterns in monsoonal regions. In this work, we report first attempts to employ network analysis for disentangling multi-scale climate variability. Specifically, we introduce the concept of scale-specific climate networks, which comprises a sequence of networks representing the statistical association structure between variations at distinct time scales. For this purpose, we consider global surface air temperature reanalysis data and subject the corresponding time series at each grid point to a complex-valued continuous wavelet transform. From this time-scale decomposition, we obtain three types of signals per grid point and scale - amplitude, phase and reconstructed signal, the statistical similarity of which is then represented by three complex networks associated with each scale. We provide a detailed analysis of the resulting connectivity patterns reflecting the spatial organization of climate variability at each chosen time-scale. Global network characteristics like transitivity or network entropy are shown to provide a new view on the (global average) relevance of different time scales in climate dynamics. Beyond expected trends originating from the increasing smoothness of fluctuations at longer scales, network-based statistics reveal different degrees of fragmentation of spatial co-variability patterns at different scales and zonal shifts among the key players of climate variability from tropically to extra-tropically dominated patterns when moving from inter-annual to decadal scales and beyond. 
The obtained results demonstrate the potential usefulness of systematically exploiting scale-specific climate networks, whose general patterns are in line with existing climatological knowledge, but provide vast opportunities for further quantifications at local, regional and global scales that are yet to be explored.
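The construction of a correlation network for one time scale can be sketched on synthetic data; the smoothing step below merely stands in for one band of the wavelet decomposition used in the study, and all sizes and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_t = 20, 500
# Synthetic "grid point" series: two spatially coherent groups plus noise.
common = rng.normal(size=(2, n_t))
group = np.repeat([0, 1], n_nodes // 2)
series = common[group] + 0.8 * rng.normal(size=(n_nodes, n_t))

# Scale-specific step (sketch): a moving average isolates slower
# variability, standing in for one band of a wavelet decomposition.
kernel = np.ones(10) / 10.0
slow = np.array([np.convolve(s, kernel, mode="valid") for s in series])

# Correlation network: link nodes whose |correlation| exceeds a threshold.
corr = np.corrcoef(slow)
adj = (np.abs(corr) > 0.5) & ~np.eye(n_nodes, dtype=bool)
degree = adj.sum(axis=1)  # node degrees of the scale-specific network
print(adj.shape)  # (20, 20)
```

Repeating this for several bands yields one network per scale, whose global statistics (transitivity, entropy, degree structure) can then be compared across scales as described above.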

  8. Pedestrian Pathfinding in Urban Environments: Preliminary Results

    NASA Astrophysics Data System (ADS)

    López-Pazos, G.; Balado, J.; Díaz-Vilariño, L.; Arias, P.; Scaioni, M.

    2017-12-01

    With the rise of urban populations, many initiatives focus on the smart city concept, in which the mobility of citizens is one of the main components. Updated and detailed spatial information on outdoor environments is needed for accurate path planning for pedestrians, especially for people with reduced mobility, for whom physical barriers must be considered. This work presents a methodology for using point clouds to direct path planning. The starting point is a classified point cloud in which ground elements have been previously classified as roads, sidewalks, crosswalks, curbs and stairs; the remaining points compose the obstacle class. The methodology starts by individualizing ground elements and simplifying them into representative points, which are used as nodes in the graph creation. The region of influence of obstacles is used to refine the graph. Edges of the graph are weighted according to the distance between nodes and to their accessibility for wheelchairs. As a result, we obtain a very accurate graph representing the as-built environment. The methodology has been tested in a couple of real case studies, and the Dijkstra algorithm was used for pathfinding. The resulting paths are optimal with respect to motor skills and safety.
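Shortest-path search with Dijkstra's algorithm, as used above, can be sketched as follows; the small graph and its edge costs are hypothetical:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph {node: [(neighbour, cost), ...]}.
    Costs could encode distance plus accessibility penalties for
    wheelchairs, as in the weighting scheme described above."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back to the start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
     "C": [("D", 1)], "D": []}
print(dijkstra(g, "A", "D"))  # (['A', 'B', 'C', 'D'], 3.0)
```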

  9. Predicting Grain Growth in Nanocrystalline Materials: A Thermodynamic and Kinetic-Based Model Informed by High Temperature X-ray Diffraction Experiments

    DTIC Science & Technology

    2014-10-01

    [Figure captions recovered from the report: best-fit linear regressions of the model parameters a) Hseg, b) QL, c) γ0 and d) Γb0 at 1 h, where the scatter of the data points is due to variation in the other parameters; a white square data point marks the experimental data used for fitting the concentration x0 for the nanocrystalline Fe–Zr system.]

  10. Hidden topological constellations and polyvalent charges in chiral nematic droplets

    NASA Astrophysics Data System (ADS)

    Posnjak, Gregor; Čopar, Simon; Muševič, Igor

    2017-02-01

    Topology has an increasingly important role in the physics of condensed matter, quantum systems, materials science, photonics and biology, with spectacular realizations of topological concepts in liquid crystals. Here we report on long-lived hidden topological states in thermally quenched, chiral nematic droplets, formed from string-like, triangular and polyhedral constellations of monovalent and polyvalent singular point defects. These topological defects are regularly packed into a spherical liquid volume and stabilized by the elastic energy barrier due to the helical structure and confinement of the liquid crystal in the micro-sphere. We observe, for the first time, topological three-dimensional point defects of the quantized hedgehog charge q=-2, -3. These higher-charge defects act as ideal polyvalent artificial atoms, binding the defects into polyhedral constellations representing topological molecules.

  11. Monitoring dynamic loads on wind tunnel force balances

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T.; White, William C.

    1989-01-01

    Two devices have been developed at NASA Langley to monitor the dynamic loads incurred during wind-tunnel testing. The Balance Dynamic Display Unit (BDDU) displays and monitors the combined static and dynamic forces and moments in the orthogonal axes. The Balance Critical Point Analyzer scales and sums each normalized signal from the BDDU to obtain combined dynamic and static signals that represent the dynamic loads at predefined high-stress points. The display of each instrument multiplexes six analog signals so that each channel is displayed sequentially as one-sixth of the horizontal axis on a single oscilloscope trace. This display format thus permits the operator to quickly and easily monitor the combined static and dynamic levels of up to six channels at the same time.

  12. Hidden topological constellations and polyvalent charges in chiral nematic droplets

    PubMed Central

    Posnjak, Gregor; Čopar, Simon; Muševič, Igor

    2017-01-01

    Topology has an increasingly important role in the physics of condensed matter, quantum systems, materials science, photonics and biology, with spectacular realizations of topological concepts in liquid crystals. Here we report on long-lived hidden topological states in thermally quenched, chiral nematic droplets, formed from string-like, triangular and polyhedral constellations of monovalent and polyvalent singular point defects. These topological defects are regularly packed into a spherical liquid volume and stabilized by the elastic energy barrier due to the helical structure and confinement of the liquid crystal in the micro-sphere. We observe, for the first time, topological three-dimensional point defects of the quantized hedgehog charge q=−2, −3. These higher-charge defects act as ideal polyvalent artificial atoms, binding the defects into polyhedral constellations representing topological molecules. PMID:28220770

  13. Composite analysis for Escherichia coli at coastal beaches

    USGS Publications Warehouse

    Bertke, E.E.

    2007-01-01

    At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often compared to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield data as accurate as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in significant cost savings.

  14. Designing for time-dependent material response in spacecraft structures

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Oleksuk, Lynda L. S.; Bowles, D. E.

    1992-01-01

    To study the influence on overall deformations of the time-dependent constitutive properties of fiber-reinforced polymeric matrix composite materials being considered for use in orbiting precision segmented reflectors, simple sandwich beam models are developed. The beam models include layers representing the face sheets, the core, and the adhesive bonding of the face sheets to the core. A three-layer model lumps the adhesive layers with the face sheets or core, while a five-layer model considers the adhesive layers explicitly. The deformation response of the three-layer and five-layer sandwich beam models to a midspan point load is studied. This elementary loading leads to a simple analysis, and it is easy to create this loading in the laboratory. Using the correspondence principle of viscoelasticity, the models representing the elastic behavior of the two beams are transformed into time-dependent models. Representative cases of time-dependent material behavior for the face-sheet material, the core material, and the adhesive are used to evaluate the influence of the time dependence of these constituents on the deformations of the beam. As an example of the results presented, if it is assumed that, as a worst case, the polymer-dominated shear properties of the core behave as a Maxwell fluid such that under constant shear stress the shear strain increases by a factor of 10 in 20 years, then it is shown that the beam deflection increases by a factor of 1.4 during that time. In addition to quantitative conclusions, several assumptions are discussed which simplify the analyses for use with more complicated material models. Finally, it is shown that the simpler three-layer model suffices in many situations.

  15. Exploring the use of memory colors for image enhancement

    NASA Astrophysics Data System (ADS)

    Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly

    2014-02-01

    Memory colors refer to those colors recalled in association with familiar objects. While some previous work has introduced this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, has not been appropriately investigated. In addition, the resulting adjustment methods are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.

  16. Measurement of reach envelopes with a four-camera Selective Spot Recognition (SELSPOT) system

    NASA Technical Reports Server (NTRS)

    Stramler, J. H., Jr.; Woolford, B. J.

    1983-01-01

    The basic Selective Spot Recognition (SELSPOT) system uses infrared LEDs and a 'camera' with an infrared-sensitive photodetector, a focusing lens, and A/D electronics to produce a digital output representing an X and Y coordinate for each LED for each camera. When the data are synthesized across all cameras with appropriate calibrations, an XYZ set of coordinates is obtained for each LED at a given point in time. Attention is given to the operating modes, a system checkout, and reach envelopes and software. The Video Recording Adapter (VRA) represents the main addition to the basic SELSPOT system. The VRA contains a microprocessor and other electronics which permit user selection of several options and some interaction with the system.

  17. Method and apparatus for fiber optic multiple scattering suppression

    NASA Technical Reports Server (NTRS)

    Ackerson, Bruce J. (Inventor)

    2000-01-01

    The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.
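    The cross-correlation idea behind this apparatus can be illustrated with a toy numpy sketch: two synthetic detector signals share a common (single-scattering) component but carry independent (multiple-scattering) parts, so the zero-lag cross-correlation retains only the shared component while the autocorrelation keeps everything. The signal model and amplitudes below are assumptions for illustration, not the patented apparatus.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
common = rng.standard_normal(n)    # single-scattering part seen by both detectors
noise1 = rng.standard_normal(n)    # detector-specific (multiple-scattering) parts
noise2 = rng.standard_normal(n)
sig1 = common + noise1
sig2 = common + noise2

def zero_lag_corr(a, b):
    """Normalized correlation at zero lag."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).mean() / (a.std() * b.std())

# Autocorrelation of one detector retains the uncorrelated part...
auto = zero_lag_corr(sig1, sig1)
# ...while the cross-correlation between detectors keeps only the shared
# single-scattering component: var(common) / var(sig) = 0.5 in this model.
cross = zero_lag_corr(sig1, sig2)
```

Repeating the cross-correlation at several detector separations, as the abstract describes, would trace out the composite single/multiple scattering curve.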

  18. Dataset reporting the perceiver identification rates of basic emotions expressed by male, female and ambiguous gendered walkers in full-light, point-light and synthetically modelled point-light walkers.

    PubMed

    Halovic, Shaun; Kroos, Christian

    2017-12-01

    This data set describes the experimental data collected and reported in the research article "Walking my way? Walker gender and display format confounds the perception of specific emotions" (Halovic and Kroos, in press) [1]. The data set represents perceiver identification rates for different emotions (happiness, sadness, anger, fear and neutral), as displayed by full-light, point-light and synthetic point-light walkers. The perceiver identification scores have been transformed into Ht rates, which represent proportions/percentages of correct identifications above what would be expected by chance. This data set also provides Ht rates separately for male, female and ambiguously gendered walkers.

  19. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    DTIC Science & Technology

    2018-02-15

    [Figure captions recovered from the report: ground-truth creation based on marked building feature points in two different views 50 frames apart; epipolar lines for the point correspondences between two views for BA4S, with an analogous assessment between one camera and all other cameras within the dataset.]

  20. Compensation for unfavorable characteristics of irregular individual shift rotas.

    PubMed

    Knauth, Peter; Jung, Detlev; Bopp, Winfried; Gauderer, Patric C; Gissel, Andreas

    2006-01-01

    Some employees of TV companies, such as those who produce remote TV programs, have to cope with very irregular rotas and many short-term schedule deviations. Many of these employees complain about the negative effects of such schedules on their well-being and private lives. Therefore, a working group of employers, council representatives, and researchers developed a so-called bonus system. Based on the criteria of the BESIAK system, the following list of criteria for the ergonomic assessment of irregular shift systems was developed: proportion of night hours worked between 22:00 and 01:00 h and between 06:00 and 07:00 h, proportion of night hours worked between 01:00 and 06:00 h, number of successive night shifts, number of successive working days, number of shifts longer than 9 h, proportion of phase advances, off hours on weekends, work hours between 17:00 and 23:00 h from Monday to Friday, number of working days with leisure time at remote places, and sudden deviations from the planned shift rota. Each individual rota was evaluated in retrospect. If pre-defined thresholds for the criteria were surpassed, bonus points were added to the worker's account. In general, more bonus points add up to more free time; only in particular cases was monetary compensation possible for some criteria. The bonus point system, implemented in 2002 for about 850 employees of the TV company, has the advantages of greater transparency concerning the unfavorable characteristics of working-time arrangements, an incentive for superiors to design "good" rosters that avoid the bonus point thresholds (to reduce costs), positive short-term effects on employees' social lives, and expected positive long-term effects on their health. In general, the most promising approach to coping with the problems of shift workers in irregular and flexible shift systems seems to be to increase their influence on the arrangement of working times.
If this is not possible, bonus point systems may help to achieve greater transparency and fairness in the distribution of unfavorable working-time arrangements within a team, and even reduce the unnecessary unfavorable aspects of shift systems.
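    A bonus-point account of this kind is straightforward to sketch. The thresholds and point values below are hypothetical (the article does not publish the actual tariff); only the structure, where surpassing a pre-defined threshold adds bonus points to the worker's account, follows the description above.

```python
# Hypothetical thresholds and point values: {criterion: (threshold, points)}.
CRITERIA = {
    "night_hours_01_06": (10, 2),
    "successive_night_shifts": (3, 1),
    "successive_working_days": (6, 1),
    "shifts_over_9h": (2, 1),
    "short_term_deviations": (0, 2),
}

def bonus_points(rota_stats):
    """Sum bonus points for every criterion whose threshold is surpassed."""
    points = 0
    for key, (threshold, value) in CRITERIA.items():
        if rota_stats.get(key, 0) > threshold:
            points += value
    return points

# One worker's (made-up) monthly rota statistics, evaluated in retrospect.
month = {"night_hours_01_06": 14, "successive_night_shifts": 4,
         "successive_working_days": 5, "shifts_over_9h": 3,
         "short_term_deviations": 1}
pts = bonus_points(month)
```

Here four of the five thresholds are surpassed (successive working days stay within the limit), so the account is credited with 2 + 1 + 1 + 2 points.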

  1. Watershed modeling of dissolved oxygen and biochemical oxygen demand using a hydrological simulation Fortran program.

    PubMed

    Liu, Zhijun; Kieffer, Janna M; Kingery, William L; Huddleston, David H; Hossain, Faisal

    2007-11-01

    Several inland water bodies in the St. Louis Bay watershed have been identified as potentially impaired due to low levels of dissolved oxygen (DO). In order to calculate total maximum daily loads (TMDL), a standard watershed model supported by the U.S. Environmental Protection Agency, the Hydrological Simulation Program Fortran (HSPF), was used to simulate water temperature, DO, and biochemical oxygen demand (BOD). Both point and non-point sources of BOD were included in the watershed modeling. The developed model was calibrated over two time periods, 1978 to 1986 and 2000 to 2001, with simulated DO closely matching the observed data and capturing the seasonal variations. The model represented the general trend and average condition of observed BOD. Water temperature and BOD decay are the major factors that affect DO simulation, whereas nutrient processes, including nitrification, denitrification, and the phytoplankton cycle, have slight impacts. The calibrated water quality model provides a representative linkage between the sources of BOD and in-stream DO/BOD concentrations. The input parameters developed in this research could be extended to similar coastal watersheds for TMDL determination and Best Management Practice (BMP) evaluation.

  2. The Hunters Point cogeneration project: Environmental justice in power plant siting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosloff, L.H.; Varanini, E.E. III

    1997-12-31

    The recent Hunters Point, San Francisco power plant siting process in California represents the first time that environmental justice has arisen as a major power plant siting issue. Intervenors argued that the siting process was racially and economically biased and were supported by leading environmental justice activists at the Golden Gate Law School's Environmental Justice Clinic, a leading voice in this field. The applicant argued that environmental justice charges cannot realistically be made against a modern natural-gas energy facility with state-of-the-art environmental controls. The applicant also argued that environmental justice concerns were fully addressed through the extensive environmental and socioeconomic review carried out by California Energy Commission staff. After extensive testimony and cross-examination, the Commission agreed with the applicant. This case has important lessons for companies that could be charged with environmental justice violations and for environmental justice activists who must decide where to target their efforts most effectively. This paper reviews the proceeding and its lessons and makes recommendations regarding the future applicability of environmental justice issues to the power generation sector. The authors represented the applicant in the facility siting proceeding.

  3. Invariant functionals in higher-spin theory

    NASA Astrophysics Data System (ADS)

    Vasiliev, M. A.

    2017-03-01

    A new construction for gauge invariant functionals in the nonlinear higher-spin theory is proposed. Being supported by differential forms closed by virtue of the higher-spin equations, invariant functionals are associated with central elements of the higher-spin algebra. In the on-shell AdS4 higher-spin theory we identify a four-form conjectured to represent the generating functional for 3d boundary correlators and a two-form argued to support charges for black hole solutions. Two actions for 3d boundary conformal higher-spin theory are associated with the two parity-invariant higher-spin models in AdS4. The peculiarity of the spinorial formulation of the on-shell AdS3 higher-spin theory, where the invariant functional is supported by a two-form, is conjectured to be related to the holomorphic factorization at the boundary. The nonlinear part of the star-product function F* (B (x)) in the higher-spin equations is argued to lead to divergences in the boundary limit, representing singularities at coinciding boundary space-time points of the factors of B (x), which can be regularized by point splitting. An interpretation of the RG flow in terms of the proposed construction is briefly discussed.

  4. Dynamic miRNA-mRNA regulations are essential for maintaining Drosophila immune homeostasis during Micrococcus luteus infection.

    PubMed

    Wei, Guanyun; Sun, Lianjie; Li, Ruimin; Li, Lei; Xu, Jiao; Ma, Fei

    2018-04-01

    Pathogen bacteria infections can lead to dynamic changes in microRNA (miRNA) and mRNA expression profiles, which may synergistically control the outcome of immune responses. To reveal the role of dynamic miRNA-mRNA regulation in Drosophila innate immune responses, we analyzed in detail the paired miRNA and mRNA expression profiles at three time points in adult male Drosophila infected with Micrococcus luteus (M. luteus), using RNA- and small RNA-seq data. Our results demonstrate that differentially expressed miRNAs and mRNAs undergo extensive dynamic changes over the three time points. Pathway enrichment analysis indicates that the differentially expressed genes are involved in diverse signaling pathways, including Toll and Imd as well as other signaling pathways. Remarkably, the dynamic change in miRNA expression lags behind that of mRNA expression over the three time points, implying that the "time" parameter should be considered when the function of a miRNA/mRNA is studied further. In particular, the dynamic miRNA-mRNA regulatory networks show that miRNAs may synergistically regulate gene expression in different signaling pathways to promote or inhibit innate immune responses and maintain homeostasis in Drosophila, and some new regulators involved in the Drosophila innate immune response have been identified. Our findings strongly suggest that miRNA regulation is a key mechanism for cooperatively fine-tuning gene expression across diverse signaling pathways to maintain innate immune response and homeostasis in Drosophila. Taken together, the present study reveals a novel role of dynamic miRNA-mRNA regulation in the immune response to bacterial infection and provides new insight into the underlying molecular regulatory mechanisms of Drosophila innate immune responses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Associations of stressful life events with coping strategies of 12-15-year-old Norwegian adolescents.

    PubMed

    Undheim, Anne Mari; Sund, Anne Mari

    2017-08-01

    Successful adaptation to the environment requires strategies to cope with stressful situations. The aim of this study was to examine the role of stressful life events in coping strategies during early adolescence. A representative sample of 2464 adolescents in Norway was assessed at two time-points, one year apart (i.e., at T1, mean age 13.7 years, and at T2, mean age 14.9 years), with identical questionnaires. The participation rate was 88.3% at T1. Stressful life events and daily hassles were measured by questionnaires constructed for this study. Coping with stress was measured by a modified version of the Coping Inventory for Stressful Situations (CISS), which measures three coping dimensions: emotional, task and avoidance coping. Depressive symptoms were assessed by the Mood and Feelings Questionnaire (MFQ). Standard multiple linear regression methods were applied. Different domains of stressful life events were associated with the coping strategies, and these relationships differed across time-points and by gender. In sum, school stress and stressful life events in one's network (network stress) were more strongly associated with coping strategies among girls, while family and miscellaneous stress showed stronger associations among boys. These relationships were partly mediated by depressive symptom levels, more strongly in cross-sectional than in longitudinal analyses. However, daily hassles seemed to represent smaller events of no importance for coping strategies. In preventive work, reducing stressful events, treating depression and teaching healthier coping strategies are important.

  6. Efficient generation of discontinuity-preserving adaptive triangulations from range images.

    PubMed

    Garcia, Miguel Angel; Sappa, Angel Domingo

    2004-10-01

    This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image, in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a 2.5-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
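    The curvature-driven sampling of the first stage can be sketched with numpy alone. Here a synthetic range image with a sharp ridge stands in for real range data, and the magnitude of the discrete Laplacian serves as a simple curvature proxy; the floor probability and sample size are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical range image: a flat plane with a raised ridge in the middle.
H = W = 64
z = np.zeros((H, W))
z[:, 30:34] = 5.0

# Curvature proxy: magnitude of the discrete Laplacian (large at crease edges).
lap = np.abs(np.roll(z, 1, 0) + np.roll(z, -1, 0) +
             np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

# Adaptive sampling: high-curvature pixels are drawn more often, but every
# pixel keeps a small floor probability so flat regions are not left empty.
p = (lap + 0.01).ravel()
p /= p.sum()
idx = rng.choice(H * W, size=300, replace=False, p=p)
rows, cols = np.unravel_index(idx, (H, W))

# Fraction of sampled points that cluster near the ridge (columns 28-35).
near_ridge = ((cols >= 28) & (cols <= 35)).mean()
```

The sampled (row, col) points would then feed the 2.5D Delaunay triangulation of the second stage.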

  7. Some limitations of frequency as a component of risk: an expository note.

    PubMed

    Cox, Louis Anthony

    2009-02-01

    Students of risk analysis are often taught that "risk is frequency times consequence" or, more generally, that risk is determined by the frequency and severity of adverse consequences. But is it? This expository note reviews the concepts of frequency as average annual occurrence rate and as the reciprocal of mean time to failure (MTTF) or mean time between failures (MTBF) in a renewal process. It points out that if two risks (represented as two (frequency, severity) pairs for adverse consequences) have identical values for severity but different values of frequency, then it is not necessarily true that the one with the smaller value of frequency is preferable, and this is true no matter how frequency is defined. In general, there is not necessarily an increasing relation between the reciprocal of the mean time until an event occurs, its long-run average occurrences per year, and other criteria, such as the probability or expected number of times that it will happen over a specific interval of interest, such as the design life of a system. Risk depends on more than the frequency and severity of consequences. It also depends on other information about the probability distribution for the time of a risk event that can become lost in simple measures of event "frequency." More flexible descriptions of risky processes, such as point process models, can avoid these limitations.
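    A small worked example makes the point concrete: the two renewal processes below have the identical frequency of 0.1 events per year, yet the probability of at least one event during a 10-year design life differs sharply. The numbers are illustrative; the formulas are the standard Poisson and deterministic-renewal results.

```python
import math

rate = 0.1       # events per year; identical "frequency" for both processes
horizon = 10.0   # design life of interest, in years

# Poisson process (exponential inter-event times, memoryless):
# P(at least one event in horizon) = 1 - exp(-rate * horizon)
p_poisson = 1.0 - math.exp(-rate * horizon)

# Deterministic renewal process: exactly one event every 1/rate years.
# Over any window at least 1/rate long, an event occurs with certainty.
p_deterministic = 1.0 if horizon >= 1.0 / rate else horizon * rate
```

So two risks with the same frequency (0.1 per year) can carry a probability of roughly 0.63 versus exactly 1 of striking within the design life, which is the distributional information a bare frequency number discards.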

  8. Sound-conducting mechanisms for echolocation hearing of a dolphin

    NASA Astrophysics Data System (ADS)

    Ryabov, Vyacheslav A.

    2005-09-01

    A morphological study of the lower jaw of a dolphin (Tursiops truncatus p.), together with modeling and calculation of its structures from the acoustic point of view, has been conducted. It was determined that the cross-sectional area of the mandibular canal (MC) increases exponentially; the MC thus represents an acoustical horn. The mental foramina (MFs) are positioned in the horn throat, forming a nonequidistant array of waveguide delay lines (NAWDL). The acoustical horn ensures traveling-wave conditions inside the MC and intensifies sonar echoes up to 1514 times. This 'ideal' traveling wave antenna is created by nature, representing the combination of the NAWDL and the acoustical horn. The dimensions and sequence of morphological structures of the lower jaw are optimal both for reception and forming the beam pattern, and for the amplification and transmission of sonar echoes up to the bulla tympani. Morphological structures of the lower jaw are considered as components of the peripheral section of the dolphin echolocation hearing.

  9. FORTRAN programs for calculating nonlinear seismic ground response in two dimensions

    USGS Publications Warehouse

    Joyner, W.B.

    1978-01-01

    The programs described here were designed for calculating the nonlinear seismic response of a two-dimensional configuration of soil underlain by a semi-infinite elastic medium representing bedrock. There are two programs. One is for plane strain motions, that is, motions in the plane perpendicular to the long axis of the structure, and the other is for antiplane strain motions, that is, motions parallel to the axis. The seismic input is provided by specifying what the motion of the rock-soil boundary would be if the soil were absent and the boundary were a free surface. This may be done by supplying a magnetic tape containing the values of particle velocity for every boundary point at every instant of time. Alternatively, a punch card deck may be supplied giving acceleration values at every instant of time. In the plane strain program it is assumed that the acceleration values apply simultaneously to every point on the boundary; in the antiplane strain program it is assumed that the acceleration values characterize a plane shear wave propagating upward in the underlying elastic medium at a specified angle with the vertical. The nonlinear hysteretic behavior of the soil is represented by a three-dimensional rheological model. A boundary condition is used which takes account of finite rigidity in the elastic substratum. The computations are performed by an explicit finite-difference scheme that proceeds step by step in space and time. Computations are done in terms of stress departures from an unspecified initial state. Source listings are provided here along with instructions for preparing the input. A more detailed discussion of the method is presented elsewhere.

  10. “I Do Feel Like a Scientist at Times”: A Qualitative Study of the Acceptability of Molecular Point-Of-Care Testing for Chlamydia and Gonorrhoea to Primary Care Professionals in a Remote High STI Burden Setting

    PubMed Central

    Natoli, Lisa; Guy, Rebecca J.; Shephard, Mark; Causer, Louise; Badman, Steven G.; Hengel, Belinda; Tangey, Annie; Ward, James; Coburn, Tony; Anderson, David; Kaldor, John; Maher, Lisa

    2015-01-01

    Background Point-of-care tests for chlamydia (CT) and gonorrhoea (NG) could increase the uptake and timeliness of testing and treatment, contribute to improved disease control and reduce reproductive morbidity. The GeneXpert (Xpert CT/NG assay), suited to use at the point-of-care, is being used in the TTANGO randomised controlled trial (RCT) in 12 remote Australian health services with a high burden of sexually transmissible infections (STIs). This represents the first ever routine use of a molecular point-of-care diagnostic for STIs in primary care. The purpose of this study was to explore the acceptability of the GeneXpert to primary care staff in remote Australia. Methods In-depth qualitative interviews were conducted with 16 staff (registered or enrolled nurses and Aboriginal Health Workers/Practitioners) trained and experienced with GeneXpert testing. Interviews were digitally-recorded and transcribed verbatim prior to content analysis. Results Most participants displayed positive attitudes, indicating the test was both easy to use and useful in their clinical context. Participants indicated that point-of-care testing had improved management of STIs, resulting in more timely and targeted treatment, earlier commencement of partner notification, and reduced follow up efforts associated with client recall. Staff expressed confidence in point-of-care test results and treating patients on this basis, and reported greater job satisfaction. While point-of-care testing did not negatively impact on client flow, several found the manual documentation processes time consuming, suggesting that improved electronic connectivity and test result transfer between the GeneXpert and patient management systems could overcome this. Managing positive test results in a shorter time frame was challenging for some but most found it satisfying to complete episodes of care more quickly. 
Conclusions In the context of a RCT, health professionals working in remote primary care in Australia found the GeneXpert highly acceptable. These findings have implications for use in other primary care settings around the world. PMID:26713441

  11. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed-contour matching is a popular kind of template matching. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the model construction process, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image; the rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each point on the distance image represents the nearest distance between the current point and the points on the template contour. During matching, triples of the searching image are created by the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. We can then obtain the initial RST (rotation, scaling and translation) parameters mapping the searching contour to the template contour. To speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication.
In the fine searching process, the initial RST parameters are discrete to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real time applications.
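The distance-image verification step described above can be sketched briefly. This is an illustrative Python sketch, not the authors' implementation: `distance_image` and `mean_contour_distance` are hypothetical helpers, and a brute-force distance transform stands in for whatever optimized transform the paper uses.

```python
import math

def distance_image(contour, h, w):
    # Brute-force distance transform: each pixel of the h-by-w image
    # stores the distance to the nearest point of the template contour.
    return [[min(math.hypot(x - cx, y - cy) for cx, cy in contour)
             for x in range(w)] for y in range(h)]

def mean_contour_distance(dist_img, points):
    # Verification: project a candidate contour into the distance image
    # and average the looked-up distances (a perfect match scores 0).
    return sum(dist_img[int(round(y))][int(round(x))]
               for x, y in points) / len(points)
```

Because the distance image is precomputed offline, verifying one candidate pose online costs only one lookup and one addition per contour point, which is what makes the coarse search fast.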

  12. A calibration protocol for population-specific accelerometer cut-points in children.

    PubMed

    Mackintosh, Kelly A; Fairclough, Stuart J; Stratton, Gareth; Ridgers, Nicola D

    2012-01-01

To test a field-based protocol using intermittent activities representative of children's physical activity behaviours, to generate behaviourally valid, population-specific accelerometer cut-points for sedentary behaviour, moderate, and vigorous physical activity. Twenty-eight children (46% boys) aged 10-11 years wore a hip-mounted uniaxial GT1M ActiGraph and engaged in 6 activities representative of children's play. A validated direct observation protocol was used as the criterion measure of physical activity. Receiver Operating Characteristics (ROC) curve analyses were conducted with four semi-structured activities to determine the accelerometer cut-points. To examine classification differences, cut-points were cross-validated with free-play and DVD viewing activities. Cut-points of ≤ 372, >2160 and >4806 counts·min⁻¹ representing sedentary, moderate and vigorous intensity thresholds, respectively, provided the optimal balance between the related needs for sensitivity (accurately detecting activity) and specificity (limiting misclassification of the activity). Cross-validation data demonstrated that these values yielded the best overall kappa scores (0.97; 0.71; 0.62), and a high classification agreement (98.6%; 89.0%; 87.2%), respectively. Specificity values of 96-97% showed that the developed cut-points accurately detected physical activity, and sensitivity values (89-99%) indicated that minutes of activity were seldom incorrectly classified as inactivity. The development of an inexpensive and replicable field-based protocol to generate behaviourally valid and population-specific accelerometer cut-points may improve the classification of physical activity levels in children, which could enhance subsequent intervention and observational studies.
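ROC-based cut-point selection like that described above can be illustrated with a minimal sketch. The function below is hypothetical and simply scans candidate count thresholds for the one maximizing the Youden index (sensitivity + specificity − 1), a common balance criterion in ROC analysis; the study's exact procedure may differ.

```python
def optimal_cutpoint(counts, is_active):
    # Scan candidate count thresholds and keep the one maximizing the
    # Youden index (sensitivity + specificity - 1), balancing correct
    # detection of activity against misclassification of inactivity.
    best_t, best_j = None, -1.0
    for t in sorted(set(counts)):
        tp = sum(1 for c, a in zip(counts, is_active) if a and c > t)
        fn = sum(1 for c, a in zip(counts, is_active) if a and c <= t)
        tn = sum(1 for c, a in zip(counts, is_active) if not a and c <= t)
        fp = sum(1 for c, a in zip(counts, is_active) if not a and c > t)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1.0 > best_j:
            best_t, best_j = t, sens + spec - 1.0
    return best_t
```

Here `counts` are accelerometer counts per epoch and `is_active` the criterion labels from direct observation.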

  13. Long term variability of the annual hydrological regime and sensitivity to temperature phase shifts in Saxony/Germany

    NASA Astrophysics Data System (ADS)

    Renner, M.; Bernhofer, C.

    2011-01-01

The timing of the seasons strongly affects ecosystems and human activities. Recently, there has been increasing evidence of changes in the timing of the seasons, such as earlier spring seasons detected in phenological records, advanced seasonal timing of surface temperature, and earlier snow melt or streamflow timing. For water resources management there is a need to quantitatively describe the variability in the timing of hydrological regimes and to understand how climatic changes control the seasonal water budget of river basins on the regional scale. In this study, changes in the annual cycle of hydrological variables are analysed for 27 river basins in Saxony/Germany. Monthly series of basin runoff ratios (the ratio of runoff to basin precipitation) are investigated for changes and variability of their annual periodicity over the period 1930-2009. Approximating the annual cycle by harmonic functions gave acceptable results while requiring only two parameters, phase and amplitude. The annual phase of the runoff ratio, representing the timing of the hydrological regime, shows considerable year-to-year variability that is concurrent among basins in similar hydro-climatic conditions. Two distinct basin classes have been identified, with basin elevation as the delimiting factor. The importance of snow for the basin water balance increases with elevation and mainly governs the temporal variability of the annual timing of hydrological regimes. There is further evidence of coincident changes in trend direction (change points in 1971 and 1988) in snow-melt-influenced basins. In these basins the timing of the runoff ratio is significantly correlated with the timing of temperature, and effects on runoff of temperature phase changes are even amplified. Interestingly, temperature effects may explain the low-frequency variability of the second change point until today. However, the first change point cannot be explained by temperature alone, and other causes have to be considered.
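Approximating an annual cycle by its first harmonic, as described above, reduces a monthly series to an amplitude and a phase. A minimal sketch, assuming a 12-value monthly series and a simple discrete Fourier projection (the study's estimation details may differ):

```python
import math

def annual_harmonic(monthly):
    # Project a monthly series onto the first annual harmonic:
    # x_t ~ mean + a*cos(2*pi*t/n) + b*sin(2*pi*t/n), with n = 12.
    n = len(monthly)
    a = sum(x * math.cos(2 * math.pi * t / n) for t, x in enumerate(monthly)) * 2 / n
    b = sum(x * math.sin(2 * math.pi * t / n) for t, x in enumerate(monthly)) * 2 / n
    # Amplitude and phase summarize the strength and timing of the cycle.
    return math.hypot(a, b), math.atan2(b, a)
```

A phase shift between years then directly measures a shift in the timing of the hydrological regime.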

  14. Development and application of a reactive plume-in-grid model: evaluation over Greater Paris

    NASA Astrophysics Data System (ADS)

    Korsakissok, I.; Mallet, V.

    2010-02-01

Emissions from major point sources are poorly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a poor representation of the vertical diffusion, as well as an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region over six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance against observations at measurement stations is assessed. A sensitivity study is also carried out on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment, with a decrease in RMSE of up to about 17% for SO2 and 7% for NO at measurement stations. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Reactive species are mostly sensitive to the local-scale parameters, such as the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment.
Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.

  15. A three-dimensional mapping of the ocean based on environmental data

    USGS Publications Warehouse

    Sayre, Roger; Wright, Dawn J.; Breyer, Sean P.; Butler, Kevin; Van Graafeiland, Keith; Costello, Mark J.; Harris, Peter T.; Goodin, Kathleen; Guinotte, John M.; Basher, Zeenatul; Kavanaugh, Maria T.; Halpin, Patrick N.; Monaco, Mark E.; Cressie, Noel; Aniello, Peter; Frye, Charles; Stephens, Drew

    2017-01-01

    The existence, sources, distribution, circulation, and physicochemical nature of macroscale oceanic water bodies have long been a focus of oceanographic inquiry. Building on that work, this paper describes an objectively derived and globally comprehensive set of 37 distinct volumetric region units, called ecological marine units (EMUs). They are constructed on a regularly spaced ocean point-mesh grid, from sea surface to seafloor, and attributed with data from the 2013 World Ocean Atlas version 2. The point attribute data are the means of the decadal averages from a 57-year climatology of six physical and chemical environment parameters (temperature, salinity, dissolved oxygen, nitrate, phosphate, and silicate). The database includes over 52 million points that depict the global ocean in x, y, and z dimensions. The point data were statistically clustered to define the 37 EMUs, which represent physically and chemically distinct water volumes based on spatial variation in the six marine environmental characteristics used. The aspatial clustering to produce the 37 EMUs did not include point location or depth as a determinant, yet strong geographic and vertical separation was observed. Twenty-two of the 37 EMUs are globally or regionally extensive, and account for 99% of the ocean volume, while the remaining 15 are smaller and shallower, and occur around coastal features. We assessed the vertical distribution of EMUs in the water column and placed them into classical depth zones representing epipelagic (0 m to 200 m), mesopelagic (200 m to 1,000 m), bathypelagic (1,000 m to 4,000 m) and abyssopelagic (>4,000 m) layers. The mapping and characterization of the EMUs represent a new spatial framework for organizing and understanding the physical, chemical, and ultimately biological properties and processes of oceanic water bodies. 
The EMUs are an initial objective partitioning of the ocean using long-term historical average data, and could be extended in the future by adding new classification variables and by introducing functionality to develop time-specific EMU distribution maps. The EMUs are an open-access resource, and as both a standardized geographic framework and a baseline physicochemical characterization of the oceanic environment, they are intended to be useful for disturbance assessments, ecosystem accounting exercises, conservation priority setting, and marine protected area network design, along with other research and management applications.
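The classical depth-zone assignment used above to place EMUs in the water column can be written directly. A minimal sketch (the function name is hypothetical; the boundaries are the zones quoted in the abstract, with depth in meters, positive downward):

```python
def depth_zone(depth_m):
    # Classical pelagic layers used to characterize EMU depth placement.
    if depth_m <= 200:
        return "epipelagic"
    if depth_m <= 1000:
        return "mesopelagic"
    if depth_m <= 4000:
        return "bathypelagic"
    return "abyssopelagic"
```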

  16. Global Dynamic Modeling of Space-Geodetic Data

    NASA Technical Reports Server (NTRS)

    Bird, Peter

    1995-01-01

The proposal had outlined a year for program conversion, a year for testing and debugging, and two years for numerical experiments. We kept to that schedule. In the first (partial) year, the author designed a finite element for isostatic thin-shell deformation on a sphere, derived all of its algebraic and stiffness properties, and embedded it in a new finite element code which derives its basic solution strategy (and some critical subroutines) from earlier flat-Earth codes. The author also designed and programmed a new fault element to represent faults along plate boundaries, wrote a preliminary version of a spherical graphics program for the display of output, and tested the new code for accuracy on individual model plates. Estimates of the computer-time/cost efficiency of the code for whole-earth grids were reasonable. Finally, an interactive graphical grid-designer program was converted from Cartesian to spherical geometry to permit the beginning of serious modeling. For reasons of cost efficiency, models are isostatic and do not consider the local effects of unsupported loads or bending stresses. The requirements are: (1) ability to represent rigid rotation on a sphere; (2) ability to represent a spatially uniform strain-rate tensor in the limit of small elements; and (3) continuity of velocity across all element boundaries. The author designed a 3-node triangular shell element which has two different sets of basis functions to represent (vector) velocity and all other (scalar) variables. Such elements can be shown to converge to the formulas for plane triangles in the limit of small size, but can also be applied to cover any area smaller than a hemisphere. The difficult volume integrals involved in computing the stiffness of such elements are performed numerically using 7 Gauss integration points on the surface of the sphere, beneath each of which a vertical integral is performed using about 100 points.

  17. Light beam range finder

    DOEpatents

    McEwan, Thomas E.

    1998-01-01

A "laser tape measure" for measuring distance which includes a transmitter such as a laser diode which transmits a sequence of electromagnetic pulses in response to a transmit timing signal. A receiver samples reflections from objects within the field of the sequence of visible electromagnetic pulses with controlled timing, in response to a receive timing signal. The receiver generates a sample signal in response to the samples which indicates distance to the object causing the reflections. The timing circuit supplies the transmit timing signal to the transmitter and supplies the receive timing signal to the receiver. The receive timing signal causes the receiver to sample the reflection such that the time between transmission of pulses in the sequence and sampling by the receiver sweeps over a range of delays. The transmit timing signal causes the transmitter to transmit the sequence of electromagnetic pulses at a pulse repetition rate, and the receive timing signal sweeps over the range of delays in a sweep cycle such that reflections are sampled at the pulse repetition rate and with different delays in the range of delays, such that the sample signal represents received reflections in equivalent time. The receiver according to one aspect of the invention includes an avalanche photodiode and a sampling gate coupled to the photodiode which is responsive to the receive timing signal. The transmitter includes a laser diode which supplies a sequence of visible electromagnetic pulses. A bright spot projected onto the target clearly indicates the point that is being measured, and the user can read the range to that point with precision of better than 0.1%.

  18. Light beam range finder

    DOEpatents

    McEwan, T.E.

    1998-06-16

A ``laser tape measure`` for measuring distance is disclosed which includes a transmitter such as a laser diode which transmits a sequence of electromagnetic pulses in response to a transmit timing signal. A receiver samples reflections from objects within the field of the sequence of visible electromagnetic pulses with controlled timing, in response to a receive timing signal. The receiver generates a sample signal in response to the samples which indicates distance to the object causing the reflections. The timing circuit supplies the transmit timing signal to the transmitter and supplies the receive timing signal to the receiver. The receive timing signal causes the receiver to sample the reflection such that the time between transmission of pulses in the sequence and sampling by the receiver sweeps over a range of delays. The transmit timing signal causes the transmitter to transmit the sequence of electromagnetic pulses at a pulse repetition rate, and the receive timing signal sweeps over the range of delays in a sweep cycle such that reflections are sampled at the pulse repetition rate and with different delays in the range of delays, such that the sample signal represents received reflections in equivalent time. The receiver according to one aspect of the invention includes an avalanche photodiode and a sampling gate coupled to the photodiode which is responsive to the receive timing signal. The transmitter includes a laser diode which supplies a sequence of visible electromagnetic pulses. A bright spot projected onto the target clearly indicates the point that is being measured, and the user can read the range to that point with precision of better than 0.1%. 7 figs.

  19. Determination system for solar cell layout in traffic light network using dominating set

    NASA Astrophysics Data System (ADS)

    Eka Yulia Retnani, Windi; Fambudi, Brelyanes Z.; Slamin

    2018-04-01

Graph Theory is one of the fields in Mathematics that solves discrete problems. In daily life, applications of Graph Theory are used to solve various problems. One topic in Graph Theory used to solve such problems is the dominating set; the concept is used, for example, to locate objects systematically. In this study, a dominating set is used to determine the dominating points for solar panels, where each vertex represents a traffic light point and each edge represents a connection between traffic light points. A greedy algorithm is used to search for the dominating points and thereby determine the locations of the solar panels. This research produced an application that determines the locations of solar panels with optimal results, that is, a minimum set of dominating points.
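The greedy search for a small dominating set can be sketched briefly. This is a generic textbook heuristic, not the paper's application code; it assumes the traffic-light network is given as an adjacency list mapping each vertex to its neighbours.

```python
def greedy_dominating_set(adj):
    # Greedy heuristic: repeatedly pick the vertex that dominates the
    # most still-uncovered vertices (itself plus its neighbours) until
    # every traffic-light vertex is covered.
    uncovered = set(adj)
    chosen = []
    while uncovered:
        v = max(adj, key=lambda u: len(({u} | set(adj[u])) & uncovered))
        chosen.append(v)
        uncovered -= {v} | set(adj[v])
    return chosen
```

The greedy choice does not guarantee a minimum dominating set (that problem is NP-hard), but it gives a good approximation and is simple enough for interactive planning tools.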

  20. The time-dependent response of 3- and 5-layer sandwich beams

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.

    1992-01-01

    Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.

  1. Uncertainty Quantification of Water Quality in Tamsui River in Taiwan

    NASA Astrophysics Data System (ADS)

    Kao, D.; Tsai, C.

    2017-12-01

In Taiwan, modeling of non-point source pollution is unavoidably associated with uncertainty. The main purpose of this research is to better understand water contamination in the metropolitan Taipei area, and also to provide a new analysis method for government or companies to establish related control and design measures. In this research, three methods are utilized to carry out the uncertainty analysis step by step with Mike 21, which is widely used for hydrodynamics and water quality modeling; the study area is the Tamsui river watershed. First, a sensitivity analysis is conducted to rank the influence of parameters and variables such as dissolved oxygen, nitrate, ammonia and phosphorus. Then first-order error analysis (FOEA) is used to determine the number of parameters that significantly affect the variability of simulation results. Finally, a state-of-the-art method for uncertainty analysis called the perturbance moment method (PMM) is applied, which is more efficient than Monte Carlo simulation (MCS). For MCS, the calculations may become cumbersome when multiple uncertain parameters and variables are involved. For PMM, three representative points are used for each random variable, and the statistical moments (e.g., mean value, standard deviation) of the output can be expressed through the representative points and perturbance moments based on the parallel axis theorem. Under the assumption of independent parameters and variables, calculation time is significantly reduced for PMM as opposed to MCS for comparable modeling accuracy.
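The three-representative-point idea behind PMM can be illustrated with a generic point-estimate sketch. The code below is not the authors' PMM formulation: it uses the standard three-point Gauss-Hermite rule for a single normal variable (points μ and μ ± √3·σ with weights 2/3, 1/6, 1/6) to approximate the output mean and standard deviation from just three model evaluations instead of thousands of Monte Carlo samples.

```python
import math

def three_point_moments(f, mu, sigma):
    # Generic three-point estimate of the mean and standard deviation
    # of f(X) for X ~ N(mu, sigma): evaluate f at three representative
    # points and combine with Gauss-Hermite weights.
    pts = [(mu, 2 / 3),
           (mu - math.sqrt(3) * sigma, 1 / 6),
           (mu + math.sqrt(3) * sigma, 1 / 6)]
    mean = sum(w * f(x) for x, w in pts)
    var = sum(w * (f(x) - mean) ** 2 for x, w in pts)
    return mean, math.sqrt(var)
```

For a water quality model, `f` would be one run of the simulator with the uncertain parameter set to the representative point; independence of the parameters lets the per-variable contributions be combined.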

  2. Can Detectability Analysis Improve the Utility of Point Counts for Temperate Forest Raptors?

    EPA Science Inventory

    Temperate forest breeding raptors are poorly represented in typical point count surveys because these birds are cryptic and typically breed at low densities. In recent years, many new methods for estimating detectability during point counts have been developed, including distanc...

  3. A cognitive vulnerability model on sleep and mood in adolescents under naturalistically restricted and extended sleep opportunities.

    PubMed

    Bei, Bei; Wiley, Joshua F; Allen, Nicholas B; Trinder, John

    2015-03-01

School terms and vacations represent naturally occurring periods of restricted and extended sleep opportunities. A cognitive model of the relationships among objective sleep, subjective sleep, and negative mood was tested across these periods, with sleep-specific (i.e., dysfunctional beliefs and attitudes about sleep) and global (i.e., dysfunctional attitudes) cognitive vulnerabilities as moderators. Longitudinal study over the last week of a school term (Time-E), the following 2-week vacation (Time-V), and the first week of the next term (Time-S). General community. 146 adolescents, 47.3% male, mean age = 16.2 years (standard deviation 1 year). N/A. Objective sleep was measured continuously by actigraphy. Sociodemographics and cognitive vulnerabilities were assessed at Time-E; subjective sleep, negative mood (anxiety and depressive symptoms), and academic stress were measured at each time point. Controlling for academic stress and sex, subjective sleep quality mediated the relationship between objective sleep and negative mood at all time points. During extended (Time-V), but not restricted (Time-E and Time-S) sleep opportunity, this mediation was moderated by global cognitive vulnerability, with the indirect effects stronger at higher vulnerability. Further, at Time-E and Time-V, but not Time-S, greater sleep-specific and global cognitive vulnerabilities were associated with poorer subjective sleep quality and mood, respectively. Results highlight the importance of subjective sleep perception in the development of sleep-related mood problems, and support the role of cognitive vulnerabilities as potential mechanisms in the relationships between objective sleep, subjective sleep, and negative mood. Adolescents with higher cognitive vulnerability are more susceptible to perceived poor sleep and sleep-related mood problems. These findings have practical implications for interventions. © 2015 Associated Professional Sleep Societies, LLC.

  4. Mapping with MAV: Experimental Study on the Contribution of Absolute and Relative Aerial Position Control

    NASA Astrophysics Data System (ADS)

    Skaloud, J.; Rehak, M.; Lichti, D.

    2014-03-01

This study highlights the benefit of precise aerial position control in the context of mapping using frame-based imagery taken by small UAVs. We execute several flights with a custom Micro Aerial Vehicle (MAV) octocopter over a small calibration field equipped with 90 signalized targets and 25 ground control points. The octocopter carries a consumer-grade RGB camera, modified to ensure precise GPS time stamping of each exposure, as well as a multi-frequency/constellation GNSS receiver. The GNSS antenna and camera are rigidly mounted together on a one-axis gimbal that allows control of the obliquity of the captured imagery. The presented experiments focus on including absolute and relative aerial control. We confirm practically that both approaches are very effective: the absolute control allows omission of ground control points, while the relative control requires only a minimal number of control points. Indeed, the latter method represents an attractive alternative in the context of MAVs for two reasons. First, the procedure is somewhat simplified (e.g. the lever-arm between the camera perspective center and the antenna phase center does not need to be determined) and, second, its principle allows employing a single-frequency antenna and carrier-phase GNSS receiver. This reduces the cost of the system as well as the payload, which in turn increases the flying time.

  5. FIR signature verification system characterizing dynamics of handwriting features

    NASA Astrophysics Data System (ADS)

    Thumwarin, Pitak; Pernwong, Jitawat; Matsuura, Takenobu

    2013-12-01

This paper proposes an online signature verification method based on a finite impulse response (FIR) system characterizing the time-frequency characteristics of dynamic handwriting features. First, the barycenter, determined from both the center point of the signature and two adjacent pen-point positions in the signing process instead of a single pen-point position, is used to reduce the fluctuation of handwriting motion. Among the available dynamic handwriting features, motion pressure and area pressure are employed to investigate handwriting behavior. The stable dynamic handwriting features can thus be described by the relation between the time-frequency characteristics of the dynamic handwriting features. In this study, that relation is represented by an FIR system with the wavelet coefficients of the dynamic handwriting features as both input and output of the system. The impulse response of the FIR system is used as the individual feature for a particular signature. In short, a signature can be verified by evaluating the difference between the impulse responses of the FIR systems for a reference signature and the signature to be verified. The signature verification experiments in this paper were conducted using the SUBCORPUS MCYT-100 signature database consisting of 5,000 signatures from 100 signers. The proposed method yielded an equal error rate (EER) of 3.21% on skilled forgeries.
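Estimating an FIR impulse response from input/output sequences, as the verification scheme requires, is a least-squares problem. A self-contained sketch under simplifying assumptions: `fit_fir` is a hypothetical helper working on raw sequences, whereas the paper fits the system on wavelet coefficients of the handwriting features.

```python
def fit_fir(x, y, order):
    # Least-squares estimate of FIR coefficients h (length `order`)
    # such that y[n] ~ sum_k h[k] * x[n-k], via the normal equations.
    rows = [[x[n - k] if n >= k else 0.0 for k in range(order)]
            for n in range(len(y))]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(order)] for i in range(order)]
    b = [sum(r[i] * yn for r, yn in zip(rows, y)) for i in range(order)]
    # Gaussian elimination with partial pivoting on A h = b.
    for c in range(order):
        p = max(range(c, order), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, order):
            f = A[r][c] / A[c][c]
            for j in range(c, order):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    h = [0.0] * order
    for r in range(order - 1, -1, -1):
        h[r] = (b[r] - sum(A[r][j] * h[j] for j in range(r + 1, order))) / A[r][r]
    return h
```

The fitted coefficient vector `h` then plays the role of the per-signer feature; verification compares `h` for a questioned signature against that of a reference.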

  6. Broadband Tomography System: Direct Time-Space Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    Biagi, E.; Capineri, Lorenzo; Castellini, Guido; Masotti, Leonardo F.; Rocchi, Santina

    1989-10-01

In this paper a new ultrasound tomographic imaging algorithm is presented. A complete laboratory system was built to test the algorithm under experimental conditions. The proposed system is based on a physical model consisting of a two-dimensional distribution of single scattering elements. Multiple scattering is neglected, so the Born approximation is assumed. This tomographic technique only requires two orthogonal scanning sections. For each rotational position of the object, data are collected by means of the complete data set method in transmission mode. After numeric envelope detection, the received signals are back-projected into the space domain through a scalar function. The reconstruction of each scattering element is accomplished by correlating the ultrasound time of flight and attenuation with the locus of possible positions of the scattering element. This locus is an ellipse with its foci located at the transmitter and receiver positions. In the image matrix the ellipses' contributions are coherently summed at the position of the scattering element. Computer simulations of cylindrical objects have demonstrated the performance of the reconstruction algorithm. Preliminary experimental results show the laboratory system's capabilities. On the basis of these results an experimental procedure to test the confidence and repeatability of ultrasonic measurements on the human carotid vessel is proposed.
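The elliptical back-projection step can be sketched as follows. This is an illustrative version, assuming a uniform medium so that a measured time of flight maps directly to a path length; the function name, pixel units, and tolerance are hypothetical.

```python
import math

def backproject(image, tx, rx, path_len, tol=0.5, weight=1.0):
    # Accumulate one transmitter/receiver measurement: every pixel whose
    # summed distance to the transmitter and receiver matches the measured
    # time-of-flight path length lies on the ellipse locus (foci at tx, rx)
    # and receives the echo's weight.
    for y, row in enumerate(image):
        for x in range(len(row)):
            d = math.hypot(x - tx[0], y - tx[1]) + math.hypot(x - rx[0], y - rx[1])
            if abs(d - path_len) < tol:
                row[x] += weight
```

Summing the ellipses from many transmitter/receiver pairs makes the contributions add coherently only at the true scatterer positions.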

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, C; Kamal, H

Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphical Processing Unit) cluster. The multicriteria optimization algorithm running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  8. Dynamic laser speckle analyzed considering inhomogeneities in the biological sample

    NASA Astrophysics Data System (ADS)

    Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico

    2017-04-01

The dynamic laser speckle phenomenon allows a contactless and nondestructive way to monitor biological changes, which are quantified by second-order statistics applied to the images over time using a secondary matrix known as the time history of the speckle pattern (THSP). To limit computation time, the traditional way to build the THSP restricts the data to a single line or column. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatially random approach to collecting the points that form the THSP. Cells in a culture medium and drying paint, representing homogeneous samples at different levels, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
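Building a THSP from randomly selected points, as proposed, can be sketched in a few lines. The function below is hypothetical and assumes grayscale frames stored as nested lists; each THSP row tracks one randomly chosen pixel across all frames.

```python
import random

def thsp(frames, n_points, rng=None):
    # Build a time history of the speckle pattern (THSP) by tracking
    # n_points pixels chosen uniformly at random over the whole frame,
    # instead of restricting them to a single row or column.
    rng = rng or random.Random(0)
    h, w = len(frames[0]), len(frames[0][0])
    coords = [(rng.randrange(h), rng.randrange(w)) for _ in range(n_points)]
    # Each THSP row is one tracked pixel; each column is one time instant.
    return [[frame[y][x] for frame in frames] for y, x in coords]
```

The Gaussian variant described in the abstract would replace the uniform draws with samples concentrated around a region of interest.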

  9. Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Lith, Janneke

    The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.

  10. Automated Planar Tracking the Waving Bodies of Multiple Zebrafish Swimming in Shallow Water.

    PubMed

    Wang, Shuo Hong; Cheng, Xi En; Qian, Zhi-Ming; Liu, Ye; Chen, Yan Qiu

    2016-01-01

Zebrafish (Danio rerio) is one of the most widely used model organisms in collective behavior research. Multi-object tracking with a high-speed camera is currently the most feasible way to accurately measure their motion states for quantitative study of their collective behavior. However, due to difficulties such as their similar appearance, complex body deformation and frequent occlusions, it is a big challenge for an automated system to reliably track the body geometry of each individual fish. To accomplish this task, we propose a novel fish body model that uses a chain of rectangles to represent the fish body. In the detection stage, the point of maximum curvature along the fish boundary is detected and set as the fish nose point. In the tracking stage, we first apply a Kalman filter to track the fish head, then use rectangle-chain fitting to fit the fish body, which also validates the head tracking results and removes incorrect ones. Finally, a tracklet relinking stage resolves trajectory fragmentation due to occlusion. Experiment results show that the proposed tracking system can track a group of zebrafish with their body geometry accurately even when occlusion occurs from time to time.
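Detecting the nose as the boundary point of maximum curvature can be sketched with a discrete turning-angle proxy. This is an illustrative version (the paper's curvature estimator may differ); `nose_point` is a hypothetical helper and `k` controls the neighbor spacing used to stabilize the angle estimate.

```python
import math

def nose_point(boundary, k=2):
    # Discrete curvature proxy on a closed boundary: at each point,
    # measure the absolute turning angle between the vectors from the
    # k-th previous point and to the k-th next point; the fish "nose"
    # is taken as the index of maximum turning.
    n = len(boundary)
    def turn(i):
        (x0, y0), (x1, y1), (x2, y2) = (boundary[i - k], boundary[i],
                                        boundary[(i + k) % n])
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # Wrap the angle difference into (-pi, pi] before taking |.|.
        return abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
    return max(range(n), key=turn)
```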

  11. Automated Planar Tracking the Waving Bodies of Multiple Zebrafish Swimming in Shallow Water

    PubMed Central

    Wang, Shuo Hong; Cheng, Xi En; Qian, Zhi-Ming; Liu, Ye; Chen, Yan Qiu

    2016-01-01

    Zebrafish (Danio rerio) is one of the most widely used model organisms in collective behavior research. Multi-object tracking with a high-speed camera is currently the most feasible way to accurately measure their motion states for quantitative study of their collective behavior. However, due to difficulties such as their similar appearance, complex body deformation and frequent occlusions, reliably tracking the body geometry of each individual fish is a major challenge for an automated system. To accomplish this task, we propose a novel fish body model that represents the fish body as a chain of rectangles. In the detection stage, the point of maximum curvature along the fish boundary is detected and set as the fish nose point. In the tracking stage, we first apply a Kalman filter to track the fish head, then fit the fish body by rectangle-chain fitting, which at the same time validates the head tracking results and removes incorrect ones. Finally, a tracklet relinking stage resolves trajectory fragmentation due to occlusion. Experimental results show that the proposed system can accurately track the body geometry of a group of zebrafish even when occlusions occur from time to time. PMID:27128096
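
    The nose-point detection step described in this abstract, locating the contour point of maximum curvature, can be sketched as follows. This is a minimal hypothetical illustration, not the authors' implementation; the contour parametrisation and the finite-difference curvature estimator are assumptions.

```python
import numpy as np

def nose_point(contour):
    """Return the contour point of maximum discrete curvature.

    `contour` is an (N, 2) array of boundary points ordered along the
    silhouette. Curvature is estimated with finite differences; the
    formula is invariant to the parametrisation step size.
    """
    x, y = contour[:, 0], contour[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return contour[np.nanargmax(kappa)]

# Toy boundary: an ellipse, whose curvature peaks at the ends of the
# long axis -- the analogue of a pointed "nose" on a smooth body.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
boundary = np.column_stack([2 * np.cos(t), np.sin(t)])
print(nose_point(boundary))  # a point near (+/-2, 0)
```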

  12. Stability and chaos in Kustaanheimo-Stiefel space induced by the Hopf fibration

    NASA Astrophysics Data System (ADS)

    Roa, Javier; Urrutxua, Hodei; Peláez, Jesús

    2016-07-01

    The need for the extra dimension in Kustaanheimo-Stiefel (KS) regularization is explained by the topology of the Hopf fibration, which defines the geometry and structure of KS space. A trajectory in Cartesian space is represented by a four-dimensional manifold called the fundamental manifold. Based on geometric and topological aspects, classical concepts of stability are translated into KS language. The separation between manifolds of solutions generalizes the concept of Lyapunov stability. The dimension-raising nature of the fibration transforms fixed points, limit cycles, attractive sets, and Poincaré sections to higher dimensional subspaces. From these concepts chaotic systems are studied. In strongly perturbed problems, the numerical error can break the topological structure of KS space: points in a fibre are no longer transformed to the same point in Cartesian space. An observer in three dimensions will see orbits departing from the same initial conditions but diverging in time. This apparent randomness of the integration can only be understood in four dimensions. The concept of topological stability results in a simple method for estimating the time scale over which numerical simulations can be trusted. Ideally, all trajectories departing from the same fibre should be KS transformed to a unique trajectory in three-dimensional space, because the fundamental manifold that they constitute is unique. By monitoring how trajectories departing from one fibre separate from the fundamental manifold, a critical time, equivalent to the Lyapunov time, is estimated. These concepts are tested on N-body examples: the Pythagorean problem, and an example of field stars interacting with a binary.

  13. Microarray characterization of gene expression changes in blood during acute ethanol exposure

    PubMed Central

    2013-01-01

    Background As part of the civil aviation safety program to define the adverse effects of ethanol on flying performance, we performed a DNA microarray analysis of human whole blood samples from a five-time-point study of subjects administered ethanol orally, with breathalyzer monitoring of blood alcohol concentration (BAC), to discover significant gene expression changes in response to ethanol exposure. Methods Subjects were administered either orange juice or orange juice with ethanol. Blood samples were taken based on BAC, and total RNA was isolated from PAXgene™ blood tubes. The amplified cDNA was used in microarray and quantitative real-time polymerase chain reaction (RT-qPCR) analyses to evaluate differential gene expression. Microarray data were summarized and normalized in a pipeline fashion, and the results were evaluated for relative expression across time points with multiple methods. Candidate genes showing distinctive expression patterns in response to ethanol were clustered by pattern and further analyzed for related function, pathway membership and common transcription factor binding within and across clusters. RT-qPCR was used with representative genes to confirm the relative transcript levels across time detected in the microarrays. Results Microarray analysis of samples representing 0%, 0.04%, 0.08%, return to 0.04%, and 0.02% wt/vol BAC showed that changes in gene expression could be detected across the time course. The expression changes were verified by RT-qPCR. The candidate genes of interest (GOI) identified from the microarray analysis and clustered by expression pattern across the five BAC points formed seven coordinately expressed groups. Analysis showed function-based networks, shared transcription factor binding sites and signaling pathways for members of the clusters. These include hematological functions, innate immunity and inflammation functions, metabolic functions expected of ethanol metabolism, and pancreatic and hepatic function. Five of the seven clusters showed links to the p38 MAPK pathway. Conclusions The results of this study provide a first look at changing gene expression patterns in human blood during an acute rise in blood ethanol concentration and its depletion through metabolism and excretion, and demonstrate that it is possible to detect changes in gene expression using total RNA isolated from whole blood. The analysis approach serves as a workflow to investigate the biology linked to expression changes across a time course and, from these changes, to identify target genes that could serve as biomarkers linked to pilot performance. PMID:23883607

  14. Maternal employment and childhood overweight in Germany.

    PubMed

    Meyer, Sophie-Charlotte

    2016-12-01

    A widespread finding among studies from the US and the UK is that maternal employment is correlated with an increased risk of child overweight, even in a causal manner, whereas studies from other countries obtain less conclusive results. As evidence for Germany is still scarce, the purpose of this study is to identify the effect of maternal employment on childhood overweight in Germany using two sets of representative micro data. We further explore potential underlying mechanisms that might explain this relationship. In order to address the selection into maternal full-time employment, we use an instrumental variable strategy exploiting the number of younger siblings in the household as an instrument. While the OLS models suggest that maternal full-time employment is related to a 5-percentage-point higher probability of the child being overweight, IV estimates indicate a 25-percentage-point higher overweight probability due to maternal full-time employment. Exploring various possible pathways, we find that maternal full-time employment promotes unhealthy dietary and activity behavior, which might explain the positive effect of maternal employment on child overweight to some extent. Although there are limitations to our IV approach, several sensitivity analyses confirm the robustness of our findings. Copyright © 2016 Elsevier B.V. All rights reserved.
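
    The instrumental-variable logic in this abstract can be illustrated with a toy two-stage least squares (2SLS) estimate. Everything below is a hypothetical simulation (the variable names merely echo the study's setting): an unobserved confounder biases OLS upward, while an instrument that shifts the exposure but not the outcome recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: `confound` is unobserved and
# drives both employment and weight; `siblings` only shifts employment.
confound = rng.normal(size=n)
siblings = rng.poisson(1.0, size=n)
employment = (0.5 * confound - 0.4 * siblings + rng.normal(size=n) > 0).astype(float)
true_effect = 0.25
overweight = true_effect * employment + 0.5 * confound + rng.normal(size=n)

def tsls(y, x, z):
    """Two-stage least squares with a constant term."""
    Z = np.column_stack([np.ones(len(z)), z])
    # Stage 1: project the endogenous regressor onto the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress the outcome on the fitted values.
    X = np.column_stack([np.ones(len(x_hat)), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

ols = np.linalg.lstsq(np.column_stack([np.ones(n), employment]), overweight,
                      rcond=None)[0][1]
print(f"OLS: {ols:.2f}  IV: {tsls(overweight, employment, siblings):.2f}")
```

    As in the paper, the naive OLS slope exceeds the causal effect because selection into employment is correlated with the confounder.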

  15. Characteristic dynamics near two coalescing eigenvalues incorporating continuum threshold effects

    NASA Astrophysics Data System (ADS)

    Garmon, Savannah; Ordonez, Gonzalo

    2017-06-01

    It has been reported in the literature that the survival probability P(t) near an exceptional point where two eigenstates coalesce should generally exhibit an evolution P(t) ~ t^2 e^{-Γt}, in which Γ is the decay rate of the coalesced eigenstate; this has been verified in a microwave billiard experiment [B. Dietz et al., Phys. Rev. E 75, 027201 (2007)]. However, the heuristic effective Hamiltonian that is usually employed to obtain this result ignores the possible influence of the continuum threshold on the dynamics. By contrast, in this work we employ an analytical approach starting from the microscopic Hamiltonian representing two simple models in order to show that the continuum threshold has a strong influence on the dynamics near exceptional points in a variety of circumstances. To report our results, we divide the exceptional points in Hermitian open quantum systems into two cases: at an EP2A two virtual bound states coalesce before forming a resonance/anti-resonance pair with complex conjugate eigenvalues, while at an EP2B two resonances coalesce before forming two different resonances. For the EP2B, which is the case studied in the microwave billiard experiment, we verify that the survival probability exhibits the previously reported modified exponential decay on intermediate time scales, but this is replaced with an inverse power law on very long time scales. Meanwhile, for the EP2A the influence from the continuum threshold is so strong that the evolution is non-exponential on all time scales and the heuristic approach fails completely. When the EP2A appears very near the threshold, we obtain the novel evolution P(t) ~ 1 - C_1 √t on intermediate time scales, while further away the parabolic decay (Zeno dynamics) on short time scales is enhanced.

  16. Ground penetrating radar imaging and time-domain modelling of the infiltration of diesel fuel in a sandbox experiment

    NASA Astrophysics Data System (ADS)

    Bano, Maksim; Loeffler, Olivier; Girard, Jean-François

    2009-10-01

    Ground penetrating radar (GPR) is a non-destructive method which, over the past 10 years, has been successfully used not only to estimate the water content of soil, but also to detect and monitor the infiltration of pollutants on sites contaminated by light non-aqueous phase liquids (LNAPL). We represented a model water table aquifer (72 cm depth) by injecting water into a sandbox that also contains several buried objects. The GPR measurements were carried out with shielded antennae of 900 and 1200 MHz, respectively, for common mid point (CMP) and constant offset (CO) profiles. We extended the work reported by Loeffler and Bano by injecting 100 L of diesel fuel (LNAPL) from the top of the sandbox. We used the same acquisition procedure and the same profile configuration as before fuel injection. The GPR data acquired on the polluted sand did not show any clear reflections from the plume pollution; nevertheless, travel times are very strongly affected by the presence of the fuel and the main changes are on the velocity anomalies. We can notice that the reflection from the bottom of the sandbox, which is recorded at a constant time when no fuel is present, is deformed by the pollution. The area close to the fuel injection point is characterized by a higher velocity than the area situated further away. The area farther away from the injection point shows a low velocity anomaly which indicates an increase in travel time. It seems that pore water has been replaced by fuel as a result of a lateral flow. We also use finite-difference time-domain (FDTD) numerical GPR modelling in combination with dielectric property mixing models to estimate the volume and the physical characteristics of the contaminated sand.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hraber, Peter; Korber, Bette; Wagh, Kshitij

    Within-host genetic sequencing from samples collected over time provides a dynamic view of how viruses evade host immunity. Immune-driven mutations might stimulate neutralization breadth by selecting antibodies adapted to cycles of immune escape that generate within-subject epitope diversity. Comprehensive identification of immune-escape mutations is experimentally and computationally challenging. With current technology, many more viral sequences can readily be obtained than can be tested for binding and neutralization, making down-selection necessary. Typically, this is done manually, by picking variants that represent different time-points and branches on a phylogenetic tree. Such strategies are likely to miss many relevant mutations and combinations of mutations, and to be redundant for other mutations. Longitudinal Antigenic Sequences and Sites from Intrahost Evolution (LASSIE) uses transmitted founder loss to identify virus “hot-spots” under putative immune selection and chooses sequences that represent recurrent mutations in selected sites. LASSIE favors earliest sequences in which mutations arise. Here, with well-characterized longitudinal Env sequences, we confirmed selected sites were concentrated in antibody contacts and selected sequences represented diverse antigenic phenotypes. Finally, practical applications include rapidly identifying immune targets under selective pressure within a subject, selecting minimal sets of reagents for immunological assays that characterize evolving antibody responses, and for immunogens in polyvalent “cocktail” vaccines.
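
    The core "transmitted-founder loss" statistic can be sketched on a toy alignment. This illustrates the idea only and is not the LASSIE code; the sequences and the 0.5 hot-spot threshold are invented for the example.

```python
def founder_loss(founder, sequences):
    """Per-site fraction of sequences that have lost the founder residue."""
    n = len(sequences)
    return [sum(seq[i] != aa for seq in sequences) / n
            for i, aa in enumerate(founder)]

founder = "MKTAYIA"
# Toy later-time-point alignment: site 2 (0-based) escapes repeatedly,
# suggesting putative immune selection at that position.
later = ["MKSAYIA", "MKSAYIA", "MKSAYTA", "MKTAYIA", "MKSAYIA"]
loss = founder_loss(founder, later)
hotspots = [i for i, f in enumerate(loss) if f >= 0.5]
print(hotspots)  # -> [2]
```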

  18. Spatial variability of turbulent fluxes in the roughness sublayer of an even-aged pine forest

    USGS Publications Warehouse

    Katul, G.; Hsieh, C.-I.; Bowling, D.; Clark, K.; Shurpali, N.; Turnipseed, A.; Albertson, J.; Tu, K.; Hollinger, D.; Evans, B. M.; Offerle, B.; Anderson, D.; Ellsworth, D.; Vogel, C.; Oren, R.

    1999-01-01

    The spatial variability of turbulent flow statistics in the roughness sublayer (RSL) of a uniform even-aged 14 m (= h) tall loblolly pine forest was investigated experimentally. Using seven existing walkup towers at this stand, high frequency velocity, temperature, water vapour and carbon dioxide concentrations were measured at 15.5 m above the ground surface from October 6 to 10 in 1997. These seven towers were separated by at least 100 m from each other. The objective of this study was to examine whether single tower turbulence statistics measurements represent the flow properties of RSL turbulence above a uniform even-aged managed loblolly pine forest as a best-case scenario for natural forested ecosystems. From the intensive space-time series measurements, it was demonstrated that the standard deviations of longitudinal and vertical velocities (σ_u, σ_w) and temperature (σ_T) are more planar homogeneous than their vertical flux of momentum (u*²) and sensible heat (H) counterparts. Also, the measured H is more horizontally homogeneous when compared to fluxes of other scalar entities such as CO2 and water vapour. While the spatial variability in fluxes was significant (> 15%), this unique data set confirmed that single tower measurements represent the 'canonical' structure of single-point RSL turbulence statistics, especially flux-variance relationships. Implications for extending the 'moving-equilibrium' hypothesis to RSL flows are discussed. The spatial variability in all RSL flow variables was not constant in time and varied strongly with the spatially averaged friction velocity u*, especially when u* was small. It is shown that flow properties derived from two-point temporal statistics such as correlation functions are more sensitive to local variability in leaf area density when compared to single-point flow statistics. Specifically, the local relationship between the reciprocal of the vertical velocity integral time scale (I_w) and the arrival frequency of organized structures (ū/h) predicted from mixing-layer theory exhibited dependence on the local leaf area index. The broader implications of these findings for the measurement and modelling of RSL flows are also discussed.

  19. Data-Driven Residential Load Modeling and Validation in GridLAB-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gotseff, Peter; Lundstrom, Blake

    Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snap shots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls and hence, impacts on the distribution system over a given time period. Unfortunately, the high time resolution DER source and load data required for model inputs is often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a statistically similar load to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition as well as realistic occupancy schedules. House model calibration and validation was performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.

  20. A structural and a functional aspect of stable information processing by the brain

    PubMed Central

    2007-01-01

    The brain is expert at producing the same output from a particular set of inputs, even in a very noisy environment. In this article a model of a neural circuit in the brain, composed of cyclic sub-circuits, is proposed. A big loop is defined as consisting of a feed-forward path from the sensory neurons to the highest processing area of the brain and feedback paths from that region back to close to the same sensory neurons. It is shown mathematically how some smaller cycles can amplify signals; a big loop processes information by a contrast-and-amplify principle. A method for identifying a pair of presynaptic and postsynaptic neurons by exact synchronization detection is also described. It is assumed that the spike train coming out of a firing neuron encodes all the information produced by it as output. This information can be extracted over a period of time by Fourier transforms: the Fourier coefficients, arranged as a vector, uniquely represent the neural spike train over that period. The information emanating from all the neurons in a given neural circuit over a period of time can then be represented by a collection of points in a multidimensional vector space, and this cluster of points represents the functional or behavioral form of the neural circuit. It is proposed that a particular cluster of vectors, as the representation of a new behavior, is chosen by the brain interactively with respect to the memory stored in that circuit and the amount of emotion involved; that in this situation a Coulomb-force-like expression governs the dynamics of the circuit; and that stability is reached at the minimum of all the minima of a potential function derived from this force-like expression. The calculations are done with respect to a pseudometric defined in a multidimensional vector space. PMID:19003500
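
    The representation described here, a spike train summarised by a vector of Fourier coefficients, can be sketched as follows. The binning, window length and number of retained coefficients are arbitrary choices for illustration, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_vector(spike_times, t_max, n_bins=256, n_coeffs=8):
    """Represent a spike train over [0, t_max) by low-order Fourier coefficients.

    The binned train is Fourier transformed and the first few complex
    coefficients are stacked into a real feature vector, which can then
    be treated as a point in a multidimensional vector space.
    """
    binned, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_max))
    coeffs = np.fft.rfft(binned)[:n_coeffs]
    return np.concatenate([coeffs.real, coeffs.imag])

# Two trains with the same spike count but different temporal structure
# map to distinct points in the vector space.
regular = np.arange(0.0, 1.0, 0.05)           # regular firing, 20 spikes
irregular = np.sort(rng.uniform(0.0, 1.0, 20))  # irregular, 20 spikes
v1, v2 = spike_vector(regular, 1.0), spike_vector(irregular, 1.0)
print(np.linalg.norm(v1 - v2))
```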

  1. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    PubMed

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). When the fixed effects analysis was combined with the instrumental variable analysis, a 1-point increase in psychosocial job quality was related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
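
    The fixed-effects component, removing time-invariant confounding via the within transformation, can be sketched on simulated panel data. Everything below is hypothetical (not the cohort data or the paper's model); the "true effect" of 1.3 merely echoes the scale of the reported estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_waves = 500, 13

# Hypothetical panel: a fixed individual trait confounds the naive estimate.
trait = rng.normal(size=(n_people, 1))                  # time-invariant
job_quality = trait + rng.normal(size=(n_people, n_waves))
true_effect = 1.3
mhi5 = true_effect * job_quality + 2.0 * trait + rng.normal(size=(n_people, n_waves))

def slope(x, y):
    x, y = x.ravel(), y.ravel()
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

pooled = slope(job_quality, mhi5)
# Within transformation: subtract each person's own mean, wave by wave,
# which sweeps out every time-invariant confounder.
fe = slope(job_quality - job_quality.mean(axis=1, keepdims=True),
           mhi5 - mhi5.mean(axis=1, keepdims=True))
print(f"pooled OLS: {pooled:.2f}  fixed effects: {fe:.2f}")
```

    The pooled slope absorbs the trait's contribution, while the demeaned (fixed effects) slope recovers the within-person effect.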

  2. Shared decision making and behavioral impairment: a national study among children with special health care needs

    PubMed Central

    2012-01-01

    Background The Institute of Medicine has prioritized shared decision making (SDM), yet little is known about the impact of SDM over time on behavioral outcomes for children. This study examined the longitudinal association of SDM with behavioral impairment among children with special health care needs (CSHCN). Method CSHCN aged 5-17 years in the 2002-2006 Medical Expenditure Panel Survey were followed for 2 years. The validated Columbia Impairment Scale measured impairment. SDM was measured with 7 items addressing the 4 components of SDM. The main exposures were (1) the mean level of SDM across the 2 study years and (2) the change in SDM over the 2 years. Using linear regression, we measured the association of SDM and behavioral impairment. Results Among 2,454 subjects representing 10.2 million CSHCN, SDM increased among 37% of the population, decreased among 36% and remained unchanged among 27%. For CSHCN impaired at baseline, the change in SDM was significant, with each 1-point increase in SDM over time associated with a 2-point decrease in impairment (95% CI: 0.5, 3.4), whereas the mean level of SDM was not associated with impairment. In contrast, among those below the impairment threshold, the mean level of SDM was significant, with each 1-point increase in the mean level of SDM associated with a 1.1-point decrease in impairment (0.4, 1.7), but the change was not associated with impairment. Conclusion Although the change in SDM may be more important for children with behavioral impairment and the mean level over time for those below the impairment threshold, results suggest that both the change in SDM and the mean level may impact behavioral health for CSHCN. PMID:22998626

  3. Big Geo Data Services: From More Bytes to More Barrels

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2016-04-01

    The data deluge is affecting the oil and gas industry just as much as many other industries. However, aside from the sheer volume there is the challenge of data variety, such as regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as the proverbial "department of reformatting". The notion of a coverage can actually accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), which is currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) together with its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e. a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh part through a unified query model. The Holy Grail envisioned is that these approaches can be merged into a single service interface at some time. We present both grid and point cloud / mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.

  4. Is the time and effort worth it? One library's evaluation of using social networking tools for outreach.

    PubMed

    Vucovich, Lee A; Gordon, Valerie S; Mitchell, Nicole; Ennis, Lisa A

    2013-01-01

    Librarians are using social networking sites as one means of sharing information and connecting with users from diverse groups. Usage statistics and other metrics compiled in 2011 for the library's Facebook page, representative library blogs, and the library YouTube channel were analyzed in an effort to understand how patrons use the library's social networking sites. Librarians also hoped to get a sense of these tools' effectiveness in reaching users at the point of need and engaging them in different ways.

  5. MS Burbank and MS Malenchenko working in Zvezda during STS-106

    NASA Image and Video Library

    2000-09-13

    S106-E-5174 (13 September 2000) --- Cosmonaut Yuri I. Malenchenko (left), representing the Russian Aviation and Space Agency, and astronaut Daniel C. Burbank are part of the team effort to ready the International Space Station (ISS) for permanent habitation. These two mission specialists and the other STS-106 astronauts and cosmonaut are continuing electrical work and transfer activities as they near the halfway point of docked operations with the International Space Station. In all the crew will have 189 hours, 40 minutes of planned Atlantis-ISS docked time.

  6. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaicked LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its agreement with ground truth) will be measured in subsequent experiments. 
To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.

  7. Human action recognition based on spatial-temporal descriptors using key poses

    NASA Astrophysics Data System (ADS)

    Hu, Shuo; Chen, Yuxin; Wang, Huaibao; Zuo, Yaqing

    2014-11-01

    Human action recognition is an important area of pattern recognition today due to its direct application and need in various occasions like surveillance and virtual reality. In this paper, a simple and effective human action recognition method is presented based on the key poses of the human silhouette and a spatio-temporal feature. Firstly, the contour points of the human silhouette are extracted, and the key poses are learned by K-means clustering based on the Euclidean distance between each contour point and the centre point of the human silhouette; the type of each action is then labeled for later matching. Secondly, we obtain the trajectory of the centre point across frames and create a spatio-temporal feature value, denoted W, to describe the motion direction and speed of each action. The value W contains the information of location and temporal order of each point on the trajectory. Finally, the matching stage is performed by comparing the key poses and W between training sequences and test sequences; the nearest-neighbor sequence is found and its label supplies the final result. Experiments on the publicly available Weizmann dataset show the proposed method can improve accuracy by distinguishing ambiguous poses and increase suitability for real-time applications by reducing the computational cost.
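
    The key-pose step, describing each silhouette by contour-point distances to its centre and clustering the descriptors with K-means, can be sketched as below. This is a toy illustration with synthetic elliptical "silhouettes" and a hand-rolled Lloyd's loop, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def pose_descriptor(contour):
    """Euclidean distance from each contour point to the silhouette centre."""
    return np.linalg.norm(contour - contour.mean(axis=0), axis=1)

def kmeans(X, k, iters=20):
    """Minimal Lloyd's algorithm with farthest-first initialisation."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels

# Toy silhouettes: 10 round poses and 10 stretched poses.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
round_poses = [np.column_stack([np.cos(t), np.sin(t)])
               + rng.normal(0, 0.02, (64, 2)) for _ in range(10)]
long_poses = [np.column_stack([2 * np.cos(t), np.sin(t)])
              + rng.normal(0, 0.02, (64, 2)) for _ in range(10)]
X = np.array([pose_descriptor(c) for c in round_poses + long_poses])
_, labels = kmeans(X, 2)
print(labels)  # the two pose types fall into two clusters
```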

  8. An analytical model for the calculation of the change in transmembrane potential produced by an ultrawideband electromagnetic pulse.

    PubMed

    Hart, Francis X; Easterly, Clay E

    2004-05-01

    The electric field pulse shape and change in transmembrane potential produced at various points within a sphere by an intense, ultrawideband pulse are calculated in a four-stage analytical procedure. Spheres of two sizes are used to represent the head of a human and the head of a rat. In the first stage, the pulse is decomposed into its Fourier components. In the second stage, Mie scattering analysis (MSA) is performed for a particular point in the sphere on each of the Fourier components, and the resulting electric field pulse shape is obtained for that point. In the third stage, the long wavelength approximation (LWA) is used to obtain the change in transmembrane potential in a cell at that point. In the final stage, an energy analysis is performed. These calculations are performed at 45 points within each sphere. Large electric fields and transmembrane potential changes on the order of a millivolt are produced within the brain, but on a time scale on the order of nanoseconds. The pulse shape within the brain differs considerably from that of the incident pulse. Comparison of the results for spheres of different sizes indicates that scaling of such pulses across species is complicated. Published 2004 Wiley-Liss, Inc.

  9. Feature relevance assessment for the semantic interpretation of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.; Mallet, C.

    2013-10-01

    The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for a lack of knowledge, a crucial task such as scene interpretation can be carried out with only a few versatile features and even improved accuracy.
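
    The classifier-independent ranking idea (combining several feature-selection orderings into a single relevance ordering) might be sketched as a mean-rank consensus. This is a simplified stand-in for the paper's general relevance metric over 7 strategies; the aggregation rule and names are assumptions.

```python
# Sketch: aggregate several best-first feature orderings into one consensus
# ordering by mean rank position (lower mean rank = more relevant).
import numpy as np

def consensus_ranking(rankings):
    """rankings: list of arrays, each a permutation of feature indices,
    best-first. Returns feature indices sorted by mean rank position."""
    n = len(rankings[0])
    mean_rank = np.zeros(n)
    for order in rankings:
        positions = np.empty(n)
        positions[np.asarray(order)] = np.arange(n)  # rank of each feature
        mean_rank += positions
    mean_rank /= len(rankings)
    return np.argsort(mean_rank)
```

    A compact feature subset is then simply the first k entries of the consensus ordering.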

  10. Beach response dynamics of a littoral cell using a 17-year single-point time series of sand thickness

    USGS Publications Warehouse

    Barnard, P.L.; Hubbard, D.M.; Dugan, J.E.

    2012-01-01

    A 17-year time series of near-daily sand thickness measurements at a single intertidal location was compared with 5 years of semi-annual 3-dimensional beach surveys at the same beach, and at two other beaches within the same littoral cell. The daily single-point measurements correlated extremely well with the mean beach elevation and shoreline position of ten high-spatial-resolution beach surveys. Correlations were statistically significant at all spatial scales, even for beach surveys tens of kilometers downcoast, and therefore variability at the single-point monitoring site was representative of regional coastal behavior, allowing us to examine nearly two decades of continuous coastal evolution. The annual cycle of beach oscillations dominated the signal, typical of this region, with additional, less intense spectral peaks associated with seasonal wave energy fluctuations (~45 to 90 days), as well as full lunar (~29 days) and semi-lunar (~13 days; spring-neap cycle) tidal cycles. Sand thickness variability was statistically linked to wave energy with a 2-month peak lag, as well as to the average of the previous 7-8 months of wave energy. Longer-term anomalies in sand thickness were also apparent on time scales up to 15 months. Our analyses suggest that spatially-limited morphological data sets can be extremely valuable (with robust validation) for understanding the details of beach response to wave energy over timescales that are not resolved by typical survey intervals, as well as the regional behavior of coastal systems.
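
    Spectral peaks like the annual cycle reported above can be located with a plain FFT periodogram of a daily-sampled series. The sketch below uses synthetic data and is not the authors' analysis; the noise level and record length are assumptions.

```python
# Sketch: find the dominant period of a daily-sampled series by taking the
# strongest non-DC peak of its FFT periodogram.
import numpy as np

def dominant_period_days(series):
    """Return the period (days) of the strongest non-DC spectral peak."""
    x = series - series.mean()                  # detrend by removing the mean
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)      # cycles per day
    k = 1 + np.argmax(power[1:])                # skip the DC bin
    return 1.0 / freqs[k]

# Synthetic 17-year daily record with an annual oscillation plus noise.
rng = np.random.default_rng(0)
days = np.arange(17 * 365)
thickness = (1.5 * np.sin(2 * np.pi * days / 365.0)
             + 0.3 * rng.normal(size=days.size))
```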

  11. Perceived Barriers by University Students in the Practice of Physical Activities

    PubMed Central

    Gómez-López, Manuel; Gallegos, Antonio Granero; Extremera, Antonio Baena

    2010-01-01

    The main goal of this research is to study in detail the main characteristics of university students in order to find out why they have adopted an inactive lifestyle. To this end, a questionnaire on sports habits and lifestyle was given to 323 students, drawn from a representative sample of 1834 students, who had indicated at the time of the fieldwork that they did not practice any sport in their spare time. Our findings point to diverse reasons for this: on the one hand, external barriers such as lack of time; on the other, internal barriers such as not liking physical activity, not seeing its practicality or usefulness, feeling lazy or apathetic, or believing oneself not competent in this type of activity. Other reasons, such as the lack of social support, are grouped within the external barriers. Finally, it is important to stress that there are also differences based on gender with respect to motivation. Key points: External barriers prevail in university students; the lack of time is among the most highlighted. Statistically significant results were found regarding the gender variable. The results provide valuable information for university institutions when guiding and diversifying their offer of physical and sport activities, and can guide the design of support policies and national sport management guidelines. PMID:24149629

  13. Advanced Stent Graft Treatment of Venous Stenosis Affecting Hemodialysis Vascular Access: Case Illustrations

    PubMed Central

    Patel, Darshan; Ray, Charles E.; Lokken, R. Peter; Bui, James T.; Lipnik, Andrew J.; Gaba, Ron C.

    2016-01-01

    Surgically placed dialysis access is an important component of dialysis replacement therapy. The vast majority of patients undergoing dialysis will have surgically placed accesses at some point in the course of their disease, and for many patients these accesses may represent their definitive renal replacement option. Most, if not all, arteriovenous fistulae and grafts will require interventions at some point in time. Percutaneous angioplasty is the typical first treatment performed for venous stenoses, with stents and stent grafts being reserved for patients in whom angioplasty and surgical options are exhausted. In some salvage situations, stent graft placement may be the only or best option for patients. This article describes, using case illustrations, placement of stent grafts in such patients; a focus will also be made on the techniques utilized in such salvage situations. PMID:27011426

  14. All about Eve: Secret Sharing using Quantum Effects

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    2005-01-01

    This document discusses the nature of light (including classical light and photons), encryption, quantum key distribution (QKD), light polarization, and beamsplitters, and their application to information communication. A quantum of light, called a photon, represents the smallest possible subdivision of radiant energy (light). The QKD key-generation sequence is outlined: the receiver broadcasts an initial signal indicating reception availability; the sender emits timing pulses to provide a reference for gated detection of photons; the sender generates photons with random polarization while the receiver detects photons with random polarization; and the two parties communicate via a data link to mutually establish random keys. The QKD network vision includes inter-SATCOM, point-to-point ground-fiber, and SATCOM-fiber nodes. QKD offers an unconditionally secure method of exchanging encryption keys. Ongoing research will focus on how to increase the key generation rate.
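
    The key-generation sequence described above follows the familiar BB84 pattern: bits survive into the shared key only where the sender's and receiver's randomly chosen polarization bases happen to coincide. A toy sifting simulation, with no real optics, channel noise, or eavesdropper, and all names illustrative:

```python
# Sketch of BB84-style key sifting: random bits and random bases on both
# sides; keep only the positions where the bases match (about half of them).
import random

def bb84_sift(n_photons, seed=42):
    rng = random.Random(seed)
    sender_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    sender_bases = [rng.choice("+x") for _ in range(n_photons)]  # rectilinear/diagonal
    recv_bases   = [rng.choice("+x") for _ in range(n_photons)]
    # A measurement in the matching basis returns the sent bit; otherwise discard.
    return [b for b, sb, rb in zip(sender_bits, sender_bases, recv_bases)
            if sb == rb]

key = bb84_sift(1000)   # roughly half the photons survive sifting
```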

  15. E-recruitment based clinical research: notes for Research Ethics Committees/Institutional Review Boards.

    PubMed

    Refolo, P; Sacchini, D; Minacori, R; Daloiso, V; Spagnolo, A G

    2015-01-01

    Patient recruitment is a critical point of today's clinical research. Several proposals have been made for improving it, but the effectiveness of these measures is actually uncertain. The use of Internet (e-recruitment) could represent a great chance to improve patient enrolment, even though the effectiveness of this implementation is not so evident. E-recruitment could bring some advantages, such as better interaction between clinical research demand and clinical research supply, time and resources optimization, and reduction of data entry errors. It raises some issues too, such as sampling errors, validity of informed consent, and protection of privacy. Research Ethics Committees/Institutional Review Boards should consider these critical points. The paper deals with Internet recruitment for clinical research. It also attempts to provide Research Ethics Committees/Institutional Review Boards with notes for assessing e-recruitment based clinical protocols.

  16. A cluster merging method for time series microarray with production values.

    PubMed

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully, combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured at the same time points) and merging them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.
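
    The merging idea, counting how often two genes land in the same cluster across the per-replicate clusterings, can be sketched with a co-association matrix. The greedy grouping rule below is a simplified stand-in for the paper's method; the threshold and names are assumptions.

```python
# Sketch: co-association consensus. C[i, j] is the fraction of replicate
# clusterings in which genes i and j were assembled into the same cluster.
import numpy as np

def co_association(labelings):
    """labelings: list of 1-D label arrays, one per replicate clustering."""
    n = len(labelings[0])
    C = np.zeros((n, n))
    for labels in labelings:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(labelings)

def merge_groups(C, threshold=0.5):
    """Greedy grouping: a gene joins a group if co-associated with its seed."""
    unassigned = list(range(len(C)))
    groups = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed] + [g for g in unassigned if C[seed, g] >= threshold]
        unassigned = [g for g in unassigned if g not in group]
        groups.append(group)
    return groups
```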

  17. Spatio-temporal evaluation of organic contaminants and their transformation products along a river basin affected by urban, agricultural and industrial pollution.

    PubMed

    Gómez, María José; Herrera, Sonia; Solé, David; García-Calvo, Eloy; Fernández-Alba, Amadeo R

    2012-03-15

    This study aims to assess the occurrence, fate and temporal and spatial distribution of anthropogenic contaminants in a river subjected to different pressures (industrial, agricultural, wastewater discharges). For this purpose, the Henares River basin (central Spain) can be considered a representative basin within a continental Mediterranean climate. As the studied river runs through several residential, industrial and agricultural areas, the chemical water quality would be expected to change along its course. The selection of sampling points and the timing of sample collection are therefore critical factors in the monitoring of a river basin. In this study, six different monitoring campaigns were performed in 2010, and contaminants were measured at the effluent point of the main wastewater treatment plant (WWTP) in the river basin and at five different points upstream and downstream from the WWTP emission point. The target compounds evaluated were personal care products (PCPs), polycyclic aromatic hydrocarbons (PAHs) and pesticides. Results show that the river is clearly influenced by wastewater discharges and also by its proximity to agricultural areas. The contaminants detected at the highest concentrations were the PCPs. The spatial distribution of the contaminants indicates that the studied contaminants persist along the river. In the time period studied, no great seasonal variations of PCPs at the river collection points were observed. In contrast, a temporal trend of pesticides and PAHs was observed. Besides the target compounds, other new contaminants were identified and evaluated in the water samples, some of them being investigated for the first time in the aquatic environment. The behaviour of three important transformation products was also evaluated: 9,10-anthracenodione, galaxolide-lactone and 4-amino-musk xylene.
These were found at higher concentrations than their parent compounds, indicating the significance of including the study of transformation products in the monitoring programmes. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. A model for the implementation of a two-shift municipal solid waste and recyclable material collection plan that offers greater convenience to residents.

    PubMed

    Lin, Hung-Yueh; Tsai, Zong-Pei; Chen, Guan-Hwa; Kao, Jehng-Jung

    2011-01-01

    Separating recyclables from municipal solid waste (MSW) before collection reduces not only the quantity of MSW that needs to be treated but also the depletion of resources. However, the participation of residents is essential for a successful recycling program, and the level of participation usually depends on the degree of convenience associated with accessing recycling collection points. The residential accessing convenience (RAC) of a collection plan is determined by the proximity of its collection points to all residents and its temporal flexibility in response to resident requirements. The degree of proximity to all residents is determined by using a coverage radius that represents the maximum distance residents need to travel to access a recycling point. The temporal flexibility is assessed by the availability of proximal recycling points at times suitable to the lifestyles of all residents concerned. In Taiwan, the MSW collection is implemented at fixed locations and at fixed times. Residents must deposit their garbage directly into the collection vehicle. To facilitate the assignment of collection vehicles and to encourage residents to thoroughly separate their recyclables, in Taiwan MSW and recyclable materials are usually collected at the same time by different vehicles. A heuristic procedure including an integer programming (IP) model and ant colony optimization (ACO) is explored in this study to determine an efficient two-shift collection plan that takes into account RAC factors. The IP model has been developed to determine convenient collection points in each shift on the basis of proximity, and then the ACO algorithm is applied to determine the most effective routing plan of each shift. With the use of a case study involving a city in Taiwan, this study has demonstrated that collection plans generated using the above procedure are superior to current collection plans on the basis of proximity and total collection distance.
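
    The proximity part of the IP model is a coverage problem: choose collection points so that every resident is within the coverage radius of at least one. A greedy set-cover heuristic, which is not the paper's IP formulation and is not guaranteed optimal, illustrates the structure:

```python
# Sketch: greedy set cover for siting collection points. Each candidate
# "covers" the residents within the coverage radius; repeatedly pick the
# candidate covering the most still-uncovered residents.
import math

def greedy_cover(residents, candidates, radius):
    """residents, candidates: lists of (x, y); returns indices of chosen points."""
    covers = [
        {i for i, r in enumerate(residents) if math.dist(r, c) <= radius}
        for c in candidates
    ]
    uncovered = set(range(len(residents)))
    chosen = []
    while uncovered:
        best = max(range(len(candidates)), key=lambda j: len(covers[j] & uncovered))
        if not covers[best] & uncovered:
            break                      # some residents are unreachable
        chosen.append(best)
        uncovered -= covers[best]
    return chosen
```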

  19. Analysis of dynamically stable patterns in a maze-like corridor using the Wasserstein metric.

    PubMed

    Ishiwata, Ryosuke; Kinukawa, Ryota; Sugiyama, Yuki

    2018-04-23

    The two-dimensional optimal velocity (2d-OV) model represents a dissipative system with asymmetric interactions, and is thus suitable for reproducing behaviours such as pedestrian dynamics and the collective motion of living organisms. In this study, we found that particles in the 2d-OV model form optimal patterns in a maze-like corridor. We then estimated the stability of such patterns using the Wasserstein metric. Furthermore, we mapped these patterns into the Wasserstein metric space and represented them as points in a plane. As a result, we discovered that the stability of the dynamical patterns is strongly affected by the model sensitivity, which controls the motion of each particle. In addition, we verified the existence of two macroscopic patterns which were cohesive, stable, and appeared regularly over the time evolution of the model.
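
    The metric itself is easy to illustrate in one dimension: for two equal-size 1-D samples, the 1-Wasserstein (earth mover's) distance reduces to the mean absolute difference of the sorted samples. The paper works with 2-D particle patterns; this sketch only shows the quantity being computed.

```python
# Sketch: 1-Wasserstein distance between two equal-size 1-D point sets.
# Sorting pairs each point with its optimal-transport partner.
def wasserstein_1d(a, b):
    assert len(a) == len(b)
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```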

  20. MICA: Multiple interval-based curve alignment

    NASA Astrophysics Data System (ADS)

    Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf

    2018-01-01

    MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA was already successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analyses pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA
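
    The landmark-registration idea can be illustrated in one dimension: piecewise-linearly warp a profile so its characteristic points land on the reference profile's landmarks. This sketch assumes the landmark correspondence is already known and is not MICA's actual algorithm.

```python
# Sketch: piecewise-linear warping of one sampled profile onto a reference,
# given matching landmark positions (including both end points).
import numpy as np

def warp_to_reference(curve, landmarks, ref_landmarks):
    """curve: 1-D samples; landmarks/ref_landmarks: matching index positions.
    Returns the curve resampled on the reference grid."""
    n = len(curve)
    grid = np.arange(n, dtype=float)
    # Map every reference position to a (fractional) position on the input curve.
    src_pos = np.interp(grid, ref_landmarks, landmarks)
    return np.interp(src_pos, grid, curve)
```

    After warping each profile onto a common reference, point-wise averaging yields a representative consensus curve.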

  1. Isolation and identification of chitin in the black coral Parantipathes larix (Anthozoa: Cnidaria).

    PubMed

    Bo, Marzia; Bavestrello, Giorgio; Kurek, Denis; Paasch, Silvia; Brunner, Eike; Born, René; Galli, Roberta; Stelling, Allison L; Sivkov, Viktor N; Petrova, Olga V; Vyalikh, Denis; Kummer, Kurt; Molodtsov, Serguei L; Nowak, Dorota; Nowak, Jakub; Ehrlich, Hermann

    2012-01-01

    Until now, there has been a lack of knowledge about the presence of chitin in numerous representatives of the corals (Cnidaria). However, investigations concerning the chitin-based skeletal organization in different coral taxa are significant from biochemical, structural, developmental, ecological and evolutionary points of view. In this paper, we present a thorough screening for the presence of chitin within the skeletal formations of a poorly investigated Mediterranean black coral, Parantipathes larix (Esper, 1792), a typical representative of the family Schizopathidae. Using a wide variety of techniques ((13)C solid-state NMR, Fourier transform infrared (FTIR), Raman, NEXAFS, the Morgan-Elson assay and Calcofluor White staining), we unambiguously show for the first time that chitin is an important component within the skeletal stalks as well as the pinnules of this coral. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Gut Microbiota and Salivary Diagnostics: The Mouth Is Salivating to Tell Us Something.

    PubMed

    Kodukula, Krishna; Faller, Douglas V; Harpp, David N; Kanara, Iphigenia; Pernokas, Julie; Pernokas, Mark; Powers, Whitney R; Soukos, Nikolaos S; Steliou, Kosta; Moos, Walter H

    2017-01-01

    The microbiome of the human body represents a symbiosis of microbial networks spanning multiple organ systems. Bacteria predominantly represent the diversity of human microbiota, but not to be forgotten are fungi, viruses, and protists. Mounting evidence points to the fact that the "microbial signature" is host-specific and relatively stable over time. As our understanding of the human microbiome and its relationship to the health of the host increases, it is becoming clear that many and perhaps most chronic conditions have a microbial involvement. The oral and gastrointestinal tract microbiome constitutes the bulk of the overall human microbial load, and thus presents unique opportunities for advancing human health prognosis, diagnosis, and therapy development. This review is an attempt to catalog a broad diversity of recent evidence and focus it toward opportunities for prevention and treatment of debilitating illnesses.

  3. Rapid production of Candida albicans chlamydospores in liquid media under various incubation conditions.

    PubMed

    Alicia, Zavalza-Stiker; Blanca, Ortiz-Saldivar; Mariana, García-Hernández; Magdalena, Castillo-Casanova; Alexandro, Bonifaz

    2006-01-01

    The production of chlamydospores is a diagnostic tool used to identify Candida albicans; these structures also represent a model for morphogenetic research. The time required to produce them with standard methods is 48-72 hours in rice meal agar with tensoactive agents. This time can be shortened using liquid media such as cornmeal broth (CMB) and dairy supplements. Five media were tested: CMB plus 1% Tween-80, CMB plus 5% milk, CMB plus 5% milk serum, milk serum, and milk serum plus 1% Tween-80, under different incubation conditions: at 28 degrees C and 37 degrees C in a metabolic bath stirring at 150 rpm, and at 28 degrees C in a culture stove. The reading time points were established at 8 and 16 hours. The best results were obtained at 16 hours with CMB plus 5% milk under incubation at 28 degrees C and stirring at 150 rpm. The next most efficient methods were CMB plus 5% milk serum and CMB plus 1% Tween-80, under the same incubation conditions. The other media were ineffective in producing chlamydospores. The absence of stirring at 28 degrees C prevented the formation of chlamydospores within the set time points, and incubation at 37 degrees C decreased their production. This paper reports that the time to form C. albicans chlamydospores can be reduced.

  4. Kelvin-Voigt model of wave propagation in fragmented geomaterials with impact damping

    NASA Astrophysics Data System (ADS)

    Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady

    2017-04-01

    When a wave propagates through real materials, energy dissipation occurs. The effect of energy loss in homogeneous materials can be accounted for by simple viscous models; however, a reliable model representing the effect in fragmented geomaterials has not yet been established. The main reason is the mechanism by which vibrations are transmitted between the elements (fragments) of these materials. It is hypothesised that the fragments strike against each other in the process of oscillation, and the impacts lead to the energy loss. We assume that the energy loss is well represented by the restitution coefficient. The principal element of this concept is the interaction of two adjacent blocks. We model it by a simple linear oscillator (a mass on an elastic spring) with an additional condition: each time the system travels through the neutral point, where the displacement is equal to zero, the velocity is multiplied by the restitution coefficient, which characterises an impact of the fragments. This additional condition renders the system non-linear. We show that the behaviour of such a model, averaged over times much larger than the system period, can approximately be represented by a conventional linear oscillator with linear damping characterised by a damping coefficient expressible through the restitution coefficient. Based on this, wave propagation at times considerably greater than the resonance period of oscillations of the neighbouring blocks can be modelled using the Kelvin-Voigt model. The wave velocities and the dispersion relations are obtained.
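
    The block-interaction model described above is simple to simulate numerically: integrate a linear oscillator and multiply the velocity by the restitution coefficient e each time the displacement crosses zero. The sketch below uses semi-implicit Euler stepping and assumed parameters; the recorded displacement maxima decay roughly geometrically, mimicking linear damping on average.

```python
# Sketch: linear oscillator with impact damping. At every zero crossing of
# the displacement the velocity is scaled by the restitution coefficient e.
import math

def simulate(e=0.9, omega=2 * math.pi, dt=1e-4, t_end=5.0, x0=1.0):
    x, v = x0, 0.0
    prev_x, prev_v = x, v
    peaks, t = [], 0.0
    while t < t_end:
        v -= omega ** 2 * x * dt      # semi-implicit Euler step
        x += v * dt
        if prev_x * x < 0:            # crossed the neutral point: an impact
            v *= e
        if prev_v > 0 >= v:           # velocity sign flip: a displacement maximum
            peaks.append(abs(x))
        prev_x, prev_v = x, v
        t += dt
    return peaks                      # successive maxima shrink by roughly e**2

peaks = simulate()
```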

  5. Social imagery, tobacco independence, and the truthsm campaign.

    PubMed

    Evans, W Douglas; Price, Simani; Blahut, Steven; Hersey, James; Niederdeppe, Jeffrey; Ray, Sarah

    2004-01-01

    This study investigated relationships among exposure to the truthsm campaign, differences in social imagery about not smoking and related measures, and smoking behavior. We asked, "How does truthsm work? Through what psychological mechanisms does it affect smoking behavior?" We developed a framework to explain how receptivity to truthsm ads might influence youth cognitive states and subsequent effects on progression to established smoking. The main hypotheses were that social imagery about not smoking and related beliefs and attitudes about tobacco use mediate the relationship between truthsm exposure and smoking status. The study was based on data from the Legacy Media Tracking Survey (LMTS), waves I-III, which were conducted at three time points from 1999 through 2001. A nationally representative sample of 20,058 respondents aged 12-24 from the three time points was used in the analysis. We developed a structural equation model (SEM) based on constructs drawn from the LMTS. We investigated the model and tested our hypotheses about the psychological and behavioral effects of campaign exposure. We tested our constructs and model using a two-stage structural equation modeling approach. We first conducted a confirmatory factor analysis (CFA) to test the measurement model. Our model achieved satisfactory fit, and we conducted the SEM to test our hypotheses. We found that social imagery and perceived tobacco independence mediate the relationship between truthsm exposure and smoking status. We found meaningful differences between paths for segmented samples based on age, gender, and race/ethnicity subgroups and over time. The truthsm campaign operates through individuals' sense of tobacco independence and social imagery about not smoking. This study indicates that the campaign's strategy has worked as predicted and represents an effective model for social marketing to change youth risk behaviors. 
Future studies should further investigate subgroup differences in campaign reactions and utilize contextual information about the truthsm campaign's evolution to explain changes in reactions over time.

  6. Non-invasive cortisol measurements as indicators of physiological stress responses in guinea pigs

    PubMed Central

    Pschernig, Elisabeth; Wallner, Bernard; Millesi, Eva

    2016-01-01

    Non-invasive measurements of glucocorticoid (GC) concentrations, including cortisol and corticosterone, serve as reliable indicators of adrenocortical activities and physiological stress loads in a variety of species. As an alternative to invasive analyses based on plasma, GC concentrations in saliva still represent single-point-of-time measurements, suitable for studying short-term or acute stress responses, whereas fecal GC metabolites (FGMs) reflect overall stress loads and stress responses after a species-specific time frame in the long-term. In our study species, the domestic guinea pig, GC measurements are commonly used to indicate stress responses to different environmental conditions, but the biological relevance of non-invasive measurements is widely unknown. We therefore established an experimental protocol based on the animals’ natural stress responses to different environmental conditions and compared GC levels in plasma, saliva, and fecal samples during non-stressful social isolations and stressful two-hour social confrontations with unfamiliar individuals. Plasma and saliva cortisol concentrations were significantly increased directly after the social confrontations, and plasma and saliva cortisol levels were strongly correlated. This demonstrates a high biological relevance of GC measurements in saliva. FGM levels measured 20 h afterwards, representing the reported mean gut passage time based on physiological validations, revealed that the overall stress load was not affected by the confrontations, but also no relations to plasma cortisol levels were detected. We therefore measured FGMs in two-hour intervals for 24 h after another social confrontation and detected significantly increased levels after four to twelve hours, reaching peak concentrations already after six hours. 
Our findings confirm that non-invasive GC measurements in guinea pigs are highly biologically relevant in indicating physiological stress responses compared to circulating levels in plasma in the short- and long-term. Our approach also underlines the importance of detailed investigations on how to use and interpret non-invasive measurements, including the determination of appropriate time points for sample collections. PMID:26839750

  7. Who gets the gametes? An argument for a points system for fertility patients

    PubMed Central

    Jenkins, Simon; Ives, Jonathan; Avery, Sue

    2017-01-01

    Abstract This paper argues that the convention of allocating donated gametes on a ‘first come, first served’ basis should be replaced with an allocation system that takes into account more morally relevant criteria than waiting time. This conclusion was developed using an empirical bioethics methodology, which involved a study of the views of 18 staff members from seven U.K. fertility clinics, and 20 academics, policy‐makers, representatives of patient groups, and other relevant professionals, on the allocation of donated sperm and eggs. Against these views, we consider some nuanced ways of including criteria in a points allocation system. We argue that such a system is more ethically robust than ‘first come, first served’, but we acknowledge that our results suggest that a points system will meet with resistance from those working in the field. We conclude that criteria such as a patient's age, potentially damaging substance use, and parental status should be used to allocate points and determine which patients receive treatment and in what order. These and other factors should be applied according to how they bear on considerations like child welfare, patient welfare, and the effectiveness of the proposed treatment. PMID:29194680

  8. Circular Data Images for Directional Data

    NASA Technical Reports Server (NTRS)

    Morpet, William J.

    2004-01-01

    Directional data include vectors, points on a unit sphere, axis orientations, angular directions, and circular or periodic data. The theoretical statistics for circular data (random points on a unit circle) or spherical data (random points on a unit sphere) are a recent development. An overview of existing graphical methods for the display of directional data is given. Cross-over occurs when periodic data are measured on a scale intended for linear variables. For example, if angle is represented by a linear color gradient changing uniformly from dark blue at -180 degrees to bright red at +180 degrees, the color image will be discontinuous at +180 degrees and -180 degrees, which are the same location. The resultant color would depend on the direction of approach to the cross-over point. A new graphical method for imaging directional data is described, which affords high resolution without color discontinuity from "cross-over". It is called the circular data image. The circular data image uses a circular color scale in which colors repeat periodically. Some examples of the circular data image include the direction of earth winds on a global scale, rocket motor internal flow, earth global magnetic field direction, and rocket motor nozzle vector direction vs. time.
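
    The circular color scale can be realized by mapping angle to hue on the HSV color wheel, which is itself periodic, so -180 and +180 degrees receive the identical color and no cross-over discontinuity can occur. A minimal sketch (the specific hue assignment is an assumption, not the paper's exact scale):

```python
# Sketch: periodic angle-to-color mapping via HSV hue, avoiding the
# cross-over discontinuity of a linear color gradient.
import colorsys

def angle_to_rgb(angle_deg):
    """Map an angle in degrees to an RGB triple via periodic hue."""
    hue = (angle_deg % 360.0) / 360.0            # hue wraps, just like angle
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```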

  9. In-Flight Guidance, Navigation, and Control Performance Results for the GOES-16 Spacecraft

    NASA Technical Reports Server (NTRS)

    Chapel, Jim; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Freesland, Doug; Reth, Alan; Early, Derrick; Walsh, Tim; hide

    2017-01-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R), which launched in November 2016, is the first of the next-generation geostationary weather satellites. GOES-R provides 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands for Earth observations compared with its predecessor spacecraft. Additionally, Earth-relative and Sun-relative pointing and pointing stability requirements are maintained throughout reaction wheel desaturation events and station-keeping activities, allowing GOES-R to provide continuous Earth and Sun observations. This paper reviews the pointing control, pointing stability, attitude knowledge, and orbit knowledge requirements necessary to realize the ambitious Image Navigation and Registration (INR) objectives of GOES-R. This paper presents a comparison between low-frequency on-orbit pointing results and simulation predictions for both the Earth Pointed Platform (EPP) and Sun Pointed Platform (SPP). Results indicate excellent agreement between simulation predictions and observed on-orbit performance, and compliance with pointing performance requirements. The EPP instrument suite includes 6 seismic accelerometers sampled at 2 kHz, allowing in-flight verification of jitter responses and comparison back to simulation predictions. This paper presents flight results of acceleration, shock response spectrum (SRS), and instrument line-of-sight responses for various operational scenarios and instrument observation modes. The results demonstrate the effectiveness of the dual-isolation approach employed on GOES-R. The spacecraft provides attitude and rate data to the primary Earth-observing instrument at 100 Hz, which are used to adjust instrument scanning. These data must meet the accuracy and latency requirements defined by the Integrated Rate Error (IRE) specification. This paper discusses the on-orbit IRE results, showing compliance to these requirements with margin.
During the spacecraft checkout period, IRE disturbances were observed and subsequently attributed to thermal control of the Inertial Measurement Unit (IMU) mounting interface. Adjustments of IMU thermal control and the resulting improvements in IRE are presented. Orbit knowledge represents the final element of INR performance. Extremely accurate orbital position is achieved by GPS navigation at Geosynchronous Earth Orbit (GEO). On-orbit performance results are shown demonstrating compliance with the stringent orbit position accuracy requirements of GOES-R, including during station keeping activities and momentum desaturation events. As we show in this paper, the on-orbit performance of the GNC design provides the necessary capabilities to achieve GOES-R mission objectives.

  10. Localised burst reconstruction from space-time PODs in a turbulent channel

    NASA Astrophysics Data System (ADS)

    Garcia-Gutierrez, Adrian; Jimenez, Javier

    2017-11-01

    The traditional proper orthogonal decomposition of the turbulent velocity fluctuations in a channel is extended to time under the assumption that the attractor is statistically stationary and can be treated as periodic for long-enough times. The objective is to extract space- and time-localised eddies that optimally represent the kinetic energy (and two-event correlation) of the flow. Using time-resolved data of a small-box simulation at Reτ = 1880, minimal for y/h ≲ 0.25, PODs are computed from the two-point spectral-density tensor Φ(kx, kz, y, y', ω). They are Fourier components in x, z and time, and depend on y and on the temporal frequency ω, or, equivalently, on the convection velocity c = ω/kx. Although the latter depends on y, a spatially and temporally localised `burst' can be synthesised by adding a range of PODs with specific phases. The results are localised bursts that are amplified and tilted, in a time-periodic version of Orr-like behaviour. Funded by the ERC COTURB project.
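At each fixed (kx, kz, ω), the PODs are the eigenvectors of the Hermitian two-point spectral-density tensor Φ(y, y'), ranked by their eigenvalues (energy content). A toy sketch of that eigen-decomposition, using a synthetic positive semi-definite stand-in for Φ rather than actual channel data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the spectral-density tensor Phi(y, y') at one fixed
# (kx, kz, omega): a Hermitian positive semi-definite matrix over the
# wall-normal grid. In practice Phi comes from Fourier-transformed,
# time-resolved simulation data.
ny = 32
A = rng.standard_normal((ny, ny)) + 1j * rng.standard_normal((ny, ny))
phi = A @ A.conj().T  # Hermitian PSD by construction

# PODs are the eigenvectors of Phi; eigenvalues rank their energy.
evals, evecs = np.linalg.eigh(phi)
evals, evecs = evals[::-1], evecs[:, ::-1]  # sort by decreasing energy

# Fraction of (toy) energy captured by the five leading modes:
energy_fraction = evals[:5].sum() / evals.sum()
```

Summing such modes over a range of frequencies with chosen phases is what produces the spatially and temporally localised "burst" described in the abstract.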

  11. 40 CFR 424.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric... representing the degree of effluent reduction attainable by the application of the best available technology...

  12. 40 CFR 424.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric... representing the degree of effluent reduction attainable by the application of the best available technology...

  13. 40 CFR 424.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric... representing the degree of effluent reduction attainable by the application of the best available technology...

  14. 40 CFR 424.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric... representing the degree of effluent reduction attainable by the application of the best available technology...

  15. 40 CFR 424.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric... representing the degree of effluent reduction attainable by the application of the best available technology...

  16. 40 CFR 463.33 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PLASTICS MOLDING AND FORMING POINT SOURCE CATEGORY Finishing Water Subcategory § 463.33 Effluent limitations guidelines representing the degree of effluent reduction...

  17. 40 CFR 463.23 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... representing the degree of effluent reduction attainable by the application of the best available technology... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PLASTICS MOLDING AND FORMING POINT SOURCE CATEGORY Cleaning Water Subcategory § 463.23 Effluent limitations guidelines representing the degree of effluent reduction...

  18. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    PubMed

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

    The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironment than two-dimensional (2D) assays. Viability of 3D multicellular tumor spheroids has commonly been measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we have demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of propidium iodide (PI) and caspase 3/7 stains to measure viability and apoptosis of 3D multicellular tumor spheroids in real time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single-cell suspension to directly measure viability in a 2D assay and determine the potential toxicity of PI. Finally, extensive data analysis was performed correlating the time-dependent PI and caspase 3/7 fluorescent intensities with spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, as it allows researchers to determine time-dependent drug effects that are usually not captured by end point assays.
This would improve current tumor spheroid analysis methods and potentially allow better identification of qualified cancer drug candidates for drug discovery research. © 2017 International Society for Advancement of Cytometry.

  19. A physical data model for fields and agents

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; de Bakker, Merijn; Karssenberg, Derek

    2016-04-01

    Two approaches exist in simulation modeling: agent-based and field-based modeling. In agent-based (or individual-based) simulation modeling, the entities representing the system's state are represented by objects, which are bounded in space and time. Individual objects, like an animal, a house, or a more abstract entity like a country's economy, have properties representing their state. In an agent-based model this state is manipulated. In field-based modeling, the entities representing the system's state are represented by fields. Fields capture the state of a continuous property within a spatial extent, examples of which are elevation, atmospheric pressure, and water flow velocity. With respect to the technology used to create these models, the domains of agent-based and field-based modeling have often been separate worlds. In environmental modeling, widely used logical data models include feature data models for point, line and polygon objects, and the raster data model for fields. Simulation models are often either agent-based or field-based, even though the modeled system might contain both entities that are better represented by individuals and entities that are better represented by fields. We think that the reason for this dichotomy in kinds of models might be that the traditional object and field data models underlying those models are relatively low level. We have developed a higher level conceptual data model for representing both non-spatial and spatial objects, and spatial fields (De Bakker et al. 2016). Based on this conceptual data model we designed a logical and physical data model for representing many kinds of data, including the kinds used in earth system modeling (e.g. hydrological and ecological models). The goal of this work is to be able to create high level code and tools for the creation of models in which entities are representable by both objects and fields. 
Our conceptual data model is capable of representing the traditional feature data models and the raster data model, among many other data models. Our physical data model is capable of storing a first set of kinds of data, like omnipresent scalars, mobile spatio-temporal points and property values, and spatio-temporal rasters. With our poster we will provide an overview of the physical data model expressed in HDF5 and show examples of how it can be used to capture both object- and field-based information. References De Bakker, M, K. de Jong, D. Karssenberg. 2016. A conceptual data model and language for fields and agents. European Geosciences Union, EGU General Assembly, 2016, Vienna.
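A physical data model of this kind, storing agent properties alongside field rasters in one HDF5 file, can be sketched with h5py. All group names, dataset names, and shapes below are illustrative assumptions, not the authors' actual schema:

```python
import numpy as np
import h5py

# Minimal sketch: one HDF5 file holding both object-based data (mobile
# spatio-temporal points with per-agent property values) and field-based
# data (a raster time series). Names and layout are illustrative only.
with h5py.File('model.h5', 'w', driver='core', backing_store=False) as f:
    agents = f.create_group('agents')
    # Positions of 3 agents over 4 time steps: shape (time, agent, xy).
    agents.create_dataset('position', data=np.zeros((4, 3, 2)))
    # A scalar property tracked per agent and time step.
    agents.create_dataset('biomass', data=np.ones((4, 3)))

    fields = f.create_group('fields')
    # An elevation raster for the same 4 time steps on a 10 x 10 grid.
    dem = fields.create_dataset('elevation',
                                data=np.random.default_rng(0).random((4, 10, 10)))
    dem.attrs['cell_size'] = 100.0  # illustrative metadata, e.g. metres

    n_agents = agents['position'].shape[1]
    n_timesteps = fields['elevation'].shape[0]
```

The in-memory `core` driver is used here only so the sketch leaves no file behind; a model run would write to disk with the default driver.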

  20. The morphology and electrical geometry of rat jaw-elevator motoneurones.

    PubMed Central

    Moore, J A; Appenteng, K

    1991-01-01

    1. The aim of this work was to quantify both the morphology and electrical geometry of the dendritic trees of jaw-elevator motoneurones. To do this we have made intracellular recordings from identified motoneurones in anaesthetized rats, determined their membrane properties and then filled them with horseradish peroxidase by ionophoretic ejection. Four neurones were subsequently fully reconstructed and the lengths and diameters of all the dendritic segments measured. 2. The mean soma diameter was 25 microns and values of mean dendritic length for individual cells ranged from 514 to 773 microns. Dendrites branched on average 9.1 times to produce 10.2 end-terminations. Dendritic segments could be represented as constant-diameter cylinders between branch points. Values of dendritic surface area ranged from 1.08 to 2.52 × 10^5 square microns, and the ratio of dendritic to total surface area ranged from 98 to 99%. 3. At branch points, the ratio of the sum of the daughter-dendrite diameters raised to the 3/2 power to the parent-dendrite diameter raised to the 3/2 power was exactly 1.0. Therefore the individual branch points could be collapsed into a single cylinder. Furthermore, for an individual dendrite the diameter of this cylinder remained constant with increasing electrical distance from the soma. Thus individual dendrites can be represented electrically as cylinders of constant diameter. 4. However, dendrites of a given neurone terminated at different electrical distances from the soma. The equivalent-cylinder diameter of the combined dendritic tree remained constant over the proximal half and then showed a pronounced reduction over the distal half. The reduction in equivalent diameter could be ascribed to the termination of dendrites at differing electrical distances from the soma. Therefore the complete dendritic tree of these motoneurones is best represented as a cylinder over the proximal half of its electrical length but as a cone over the distal half. PMID:1804966
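The branch-point result in point 3 is Rall's 3/2 power rule: when the summed 3/2-power diameters of the daughter branches equal the 3/2-power diameter of the parent (ratio of 1.0), the branch point can be collapsed into a single equivalent cylinder. A small illustrative check, with made-up diameters and a function name of our own:

```python
def rall_ratio(parent_diam, daughter_diams):
    """Ratio of summed daughter diameters^(3/2) to parent diameter^(3/2).

    A value of 1.0 satisfies Rall's 3/2 power rule, licensing the collapse
    of the branch point into one equivalent cylinder, as reported for
    these jaw-elevator motoneurones.
    """
    return sum(d ** 1.5 for d in daughter_diams) / parent_diam ** 1.5

# Example: a 4-micron parent splitting into two equal daughters. Solving
# 2 * d^(3/2) = 4^(3/2) gives d = 4 / 2^(2/3), about 2.52 microns.
d = 4.0 / 2.0 ** (2.0 / 3.0)
ratio = rall_ratio(4.0, [d, d])
```

An unbranched continuation (one daughter of equal diameter) trivially gives a ratio of 1.0 as well, which is why constant-diameter segments between branch points fit the same equivalent-cylinder picture.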

  1. Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael

    2016-01-01

    It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design however, making changes to the design becomes far more difficult to enact in both cost and schedule. Primarily this has been due to a lack of detailed data usually uncovered later during the Preliminary and Detailed design phases. In our current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design which minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing the costs present within such concerns as propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design its constraints and requirements must be known, however as the design cycle proceeds it is all but inevitable that the conditions will change. Upon that change, the previously optimized trajectory may no longer be optimal, or meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory, as the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combatting these challenges [5]. 
    In order to bring more detailed information into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. To do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling: a parametric polynomial equation that approximates the tool's response. A surrogate model must be informed by data from the tool, with enough points to represent the solution space for the chosen number of variables at an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space to maximize information gained on the design space while minimizing the number of data points required. To represent a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work that would take an inordinate amount of time under the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, determining whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. The tool was then augmented with multiprocessing capability to enable analysis of multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work, the authors discuss issues encountered with the method and their solutions.
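The DOE-plus-surrogate workflow can be illustrated in a few lines: sample the design space with a space-filling design, evaluate the analysis tool at those points, and fit a polynomial response surface by least squares. Everything below is a toy sketch; `toy_trajectory_metric` stands in for a POST run and is not a real vehicle model:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_trajectory_metric(x):
    """Illustrative stand-in for the trajectory tool: some performance
    metric as a function of two normalized design variables."""
    return 3.0 + 2.0 * x[..., 0] - 1.5 * x[..., 1] + 0.5 * x[..., 0] * x[..., 1]

# Simple Latin-hypercube-style DOE over the unit square: one stratified
# sample per row and column, in random order.
n = 40
X = np.stack([(rng.permutation(n) + rng.random(n)) / n for _ in range(2)],
             axis=1)
y = toy_trajectory_metric(X)

# Fit a quadratic response-surface surrogate by least squares.
x1, x2 = X[:, 0], X[:, 1]
basis = np.stack([np.ones(n), x1, x2, x1 * x2, x1**2, x2**2], axis=1)
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

def surrogate(x1, x2):
    """Cheap polynomial approximation usable in place of the tool."""
    return coef @ np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])
```

Because the toy metric lies in the span of the quadratic basis, the surrogate here reproduces it almost exactly; for a real tool such as POST the fit error must be checked against held-out runs.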

  2. The interpersonal effects of Facebook reassurance seeking.

    PubMed

    Clerkin, Elise M; Smith, April R; Hames, Jennifer L

    2013-11-01

    Social networking sites like Facebook represent a potentially valuable means for individuals with low self-esteem or interpersonal difficulties to connect with others; however, recent research indicates that individuals who are most in need of social benefits from Facebook may be ineffective in their communication strategies, and thereby sabotage their potential to benefit interpersonally. The current study tested whether reassurance seeking via Facebook negatively influenced self-esteem, and whether this change in self-esteem mediated the relationship between Facebook reassurance seeking and greater thwarted belongingness and perceived burdensomeness. Participants completed measures online at two time-points approximately 24 days apart. Results provided evidence that Facebook reassurance seeking predicted lower levels of self-esteem, which in turn predicted increased feelings that one does not belong and that one is a burden. Key limitations to this study include our use of a predominantly young, female, Caucasian sample, a novel reassurance seeking measure, and only evaluating two time points. These results suggest that Facebook usage has the potential for negative and far-reaching influences on one's interpersonal functioning. Published by Elsevier B.V.

  3. Reproducibility of electronic tooth colour measurements.

    PubMed

    Ratzmann, Anja; Klinke, Thomas; Schwahn, Christian; Treichel, Anja; Gedrange, Tomasz

    2008-10-01

    Clinical methods of investigation, such as tooth colour determination, should be simple, quick and reproducible. The determination of tooth colours usually relies upon manual comparison of a patient's tooth colour with a colour ring. After some days, however, measurement results frequently lack unequivocal reproducibility. This study aimed to examine an electronic method for reliable colour measurement. The colours of teeth 14 to 24 were determined by three different examiners in 10 subjects using the colour-measuring device Shade Inspector. In total, 12 measurements per tooth were taken. Measurements were scheduled at two time points, namely at study onset (T1) and after 6 months (T2). At each time point, two measurement series per subject were taken by the different examiners at 2-week intervals. The inter-examiner and intra-examiner agreement of the measurement results was assessed. The concordance for lightness and colour intensity (saturation) was represented by the intra-class correlation coefficient. The categorical variable colour shade (hue) was assessed using the kappa statistic. The study results show that tooth colour can be measured independently of the examiner. Good agreement was found between the examiners.
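The kappa statistic used for the categorical shade (hue) comparison is Cohen's chance-corrected agreement between two raters. A generic sketch with made-up shade labels, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two examiners on a categorical
    variable, corrected for the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the raters' marginal category frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Illustrative shade codes assigned by two examiners to the same teeth:
a = ['A1', 'A2', 'A2', 'B1', 'A1', 'A2']
b = ['A1', 'A2', 'A1', 'B1', 'A1', 'A2']
kappa = cohens_kappa(a, b)
```

Kappa of 1.0 means perfect agreement and 0 means chance-level agreement; the continuous lightness and saturation values are instead compared with the intra-class correlation coefficient.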

  4. Non-musicians also have a piano in the head: evidence for spatial-musical associations from line bisection tracking.

    PubMed

    Hartmann, Matthias

    2017-02-01

    The spatial representation of ordinal sequences (numbers, time, tones) seems to be a fundamental cognitive property. While an automatic association between horizontal space and pitch height (left-low pitch, right-high pitch) is constantly reported in musicians, the evidence for such an association in non-musicians is mixed. In this study, 20 non-musicians performed a line bisection task while listening to irrelevant high- and low-pitched tones and white noise (control condition). While pitch height had no influence on the final bisection point, participants' movement trajectories showed systematic biases: When approaching the line and touching the line for the first time (initial bisection point), the mouse cursor was directed more rightward for high-pitched tones compared to low-pitched tones and noise. These results show that non-musicians also have a subtle but nevertheless automatic association between pitch height and the horizontal space. This suggests that spatial-musical associations do not necessarily depend on constant sensorimotor experiences (as it is the case for musicians) but rather reflect the seemingly inescapable tendency to represent ordinal information on a horizontal line.

  5. Complexity and multifractal behaviors of multiscale-continuum percolation financial system for Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Zeng, Yayun; Wang, Jun; Xu, Kaixuan

    2017-04-01

    A new financial agent-based time series model is developed and investigated by multiscale-continuum percolation system, which can be viewed as an extended version of continuum percolation system. In this financial model, for different parameters of proportion and density, two Poisson point processes (where the radii of points represent the ability of receiving or transmitting information among investors) are applied to model a random stock price process, in an attempt to investigate the fluctuation dynamics of the financial market. To validate its effectiveness and rationality, we compare the statistical behaviors and the multifractal behaviors of the simulated data derived from the proposed model with those of the real stock markets. Further, the multiscale sample entropy analysis is employed to study the complexity of the returns, and the cross-sample entropy analysis is applied to measure the degree of asynchrony of return autocorrelation time series. The empirical results indicate that the proposed financial model can simulate and reproduce some significant characteristics of the real stock markets to a certain extent.
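The two Poisson point processes underlying the model can be sketched directly: a Poisson-distributed number of points scattered uniformly over a region, each carrying a random radius. Parameter values and the exponential radius distribution below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(2)

def poisson_points(density, size, radius_scale):
    """Sample a homogeneous Poisson point process on a size x size square.

    Each point gets a random radius, standing in for an investor's ability
    to receive or transmit information (illustrative choice of distribution).
    """
    n = rng.poisson(density * size * size)   # Poisson-distributed count
    xy = rng.uniform(0.0, size, (n, 2))      # locations uniform given n
    radii = rng.exponential(radius_scale, n)
    return xy, radii

# Two processes with different densities and radius scales, mirroring the
# model's two parameterized point processes:
xy_a, r_a = poisson_points(density=0.5, size=20.0, radius_scale=1.0)
xy_b, r_b = poisson_points(density=0.1, size=20.0, radius_scale=2.0)
```

In a continuum-percolation reading, two investors interact when their discs overlap, and the resulting clusters drive the simulated price fluctuations.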

  6. Women with breast cancer: self-reported distress in early survivorship.

    PubMed

    Lester, Joanne; Crosthwaite, Kara; Stout, Robin; Jones, Rachel N; Holloman, Christopher; Shapiro, Charles; Andersen, Barbara L

    2015-01-01

    To identify and compare levels of distress and sources of problems among patients with breast cancer in early survivorship. Descriptive, cross-sectional. A National Cancer Institute-designated comprehensive cancer center. 100 breast cancer survivors were selected to represent four time points in the cancer trajectory. Distress was self-reported using the Distress Thermometer and its 38-item problem list. Analysis of variance and chi-square analyses were performed as appropriate. Distress scores, problem reports, and time groups. Participants scored in range of the cutoff of more than 4 (range = 4.1-5.1) from treatment through three months post-treatment. At six months post-treatment, distress levels were significantly lower. Significant differences were found between groups on the total problem list score (p = 0.007) and emotional (p = 0.01) and physical subscale scores (p = 0.003). Comparison of groups at different points in the cancer trajectory found similar elevated levels from diagnosis through three months. Distress remained elevated in early survivorship but significantly decreased at six months post-treatment. Interventions to reduce or prevent distress may improve outcomes in early survivorship.

  7. A building extraction approach for Airborne Laser Scanner data utilizing the Object Based Image Analysis paradigm

    NASA Astrophysics Data System (ADS)

    Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas

    2016-10-01

    In the past two decades, Object-Based Image Analysis (OBIA) has established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image sources such as Airborne Laser Scanner (ALS) point clouds. ALS data are represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We rasterized the ALS data into a height raster to generate a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Objects were then generated in conjunction with point statistics from the linked point cloud. With the use of class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). In order to demonstrate the possibility of adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy of above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.
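The rasterization step, turning the ALS point cloud into a height raster for the DSM, is commonly done by keeping the maximum return height per grid cell. A minimal illustration of that idea, not the authors' implementation; function name and test points are ours:

```python
import numpy as np

def rasterize_dsm(points, cell, extent):
    """Rasterize an (x, y, z) point cloud to a height grid, keeping the
    maximum z per cell; a simple way to build a Digital Surface Model.
    Cells containing no points remain NaN."""
    x0, y0, x1, y1 = extent
    ncols = int(np.ceil((x1 - x0) / cell))
    nrows = int(np.ceil((y1 - y0) / cell))
    dsm = np.full((nrows, ncols), np.nan)
    cols = ((points[:, 0] - x0) / cell).astype(int)
    rows = ((points[:, 1] - y0) / cell).astype(int)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if 0 <= r < nrows and 0 <= c < ncols:
            if np.isnan(dsm[r, c]) or z > dsm[r, c]:
                dsm[r, c] = z
    return dsm

# Three illustrative ALS returns on a 2 x 1 m extent with 1 m cells:
pts = np.array([[0.5, 0.5, 10.0], [0.6, 0.4, 12.0], [1.5, 0.5, 3.0]])
dsm = rasterize_dsm(pts, cell=1.0, extent=(0.0, 0.0, 2.0, 1.0))
```

Subtracting a ground-only raster (the DEM, e.g. from last returns or min-z filtering) from this DSM yields the normalized heights from which building candidates are segmented.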

  8. Inference of sigma factor controlled networks by using numerical modeling applied to microarray time series data of the germinating prokaryote.

    PubMed

    Strakova, Eva; Zikova, Alice; Vohradsky, Jiri

    2014-01-01

    A computational model of gene expression was applied to a novel test set of microarray time series measurements to reveal regulatory interactions between transcriptional regulators represented by 45 sigma factors and the genes expressed during germination of a prokaryote Streptomyces coelicolor. Using microarrays, the first 5.5 h of the process was recorded in 13 time points, which provided a database of gene expression time series on genome-wide scale. The computational modeling of the kinetic relations between the sigma factors, individual genes and genes clustered according to the similarity of their expression kinetics identified kinetically plausible sigma factor-controlled networks. Using genome sequence annotations, functional groups of genes that were predominantly controlled by specific sigma factors were identified. Using external binding data complementing the modeling approach, specific genes involved in the control of the studied process were identified and their function suggested.

  9. Modeling fatigue.

    PubMed

    Sumner, Walton; Xu, Jin Zhong

    2002-01-01

    The American Board of Family Practice is developing a patient simulation program to evaluate diagnostic and management skills. The simulator must give temporally and physiologically reasonable answers to symptom questions such as "Have you been tired?" A three-step process generates symptom histories. In the first step, the simulator determines points in time where it should calculate instantaneous symptom status. In the second step, a Bayesian network implementing a roughly physiologic model of the symptom generates a value on a severity scale at each sampling time. Positive, zero, and negative values represent increased, normal, and decreased status, as applicable. The simulator plots these values over time. In the third step, another Bayesian network inspects this plot and reports how the symptom changed over time. This mechanism handles major trends, multiple and concurrent symptom causes, and gradually effective treatments. Other temporal insights, such as observations about short-term symptom relief, require complementary mechanisms.

  10. Economic Efficiency and Investment Timing for Dual Water Systems

    NASA Astrophysics Data System (ADS)

    Leconte, Robert; Hughes, Trevor C.; Narayanan, Rangesan

    1987-10-01

    A general methodology to evaluate the economic feasibility of dual water systems is presented. In a first step, a static analysis (evaluation at a single point in time) is developed. The analysis requires the evaluation of consumers' and producer's surpluses from water use and the capital cost of the dual (outdoor) system. The analysis is then extended to a dynamic approach where the water demand increases with time (as a result of a population increase) and where the dual system is allowed to expand. The model determines whether construction of a dual system represents a net benefit, and if so, what is the best time to initiate the system (corresponding to maximization of social welfare). Conditions under which an analytic solution is possible are discussed and results of an application are summarized (including sensitivity to different parameters). The analysis allows identification of key parameters influencing attractiveness of dual water systems.

  11. Real-Time Model and Simulation Architecture for Half- and Full-Bridge Modular Multilevel Converters

    NASA Astrophysics Data System (ADS)

    Ashourloo, Mojtaba

    This work presents an equivalent model and simulation architecture for real-time electromagnetic transient analysis of either half-bridge or full-bridge modular multilevel converters (MMC) with 400 sub-modules (SMs) per arm. The proposed CPU/FPGA-based architecture is optimized for the parallel implementation of the presented MMC model on the FPGA and benefits from a high-throughput floating-point computational engine. The developed real-time simulation architecture is capable of simulating MMCs with 400 SMs per arm with a time step of 825 nanoseconds. To address the difficulties of implementing the sorting process, a modified odd-even bubble sort is presented in this work. The comparison of results under various test scenarios reveals that the proposed real-time simulator reproduces the system responses of its corresponding off-line counterpart obtained from the PSCAD/EMTDC program.
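The odd-even (transposition) sort underlying the paper's modified variant alternates passes over disjoint neighbour pairs, so every comparison in a pass can run in parallel in hardware, which is what makes it attractive for FPGA implementation. A plain software sketch of the standard algorithm (the paper's modification is not reproduced here), applied to illustrative capacitor voltages:

```python
def odd_even_sort(values):
    """Odd-even transposition sort: alternating odd and even passes each
    compare disjoint neighbour pairs, so a pass maps naturally onto
    parallel compare-and-swap units in FPGA logic."""
    a = list(values)
    n = len(a)
    done = False
    while not done:
        done = True
        for start in (1, 0):            # odd pass, then even pass
            for i in range(start, n - 1, 2):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    done = False
    return a

# E.g. ranking SM capacitor voltages to decide which sub-modules to insert:
sorted_v = odd_even_sort([3.1, 2.9, 3.3, 2.7, 3.0])
```

In an MMC balancing controller, such a ranking of capacitor voltages (ascending or descending depending on arm current direction) selects which SMs to switch in each control period.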

  12. Structural evolution of the Sarandí del Yí Shear Zone, Uruguay: kinematics, deformation conditions and tectonic significance

    NASA Astrophysics Data System (ADS)

    Oriolo, S.; Oyhantçabal, P.; Heidelbach, F.; Wemmer, K.; Siegesmund, S.

    2015-10-01

    The Sarandí del Yí Shear Zone is a crustal-scale shear zone that separates the Piedra Alta Terrane from the Nico Pérez Terrane and the Dom Feliciano Belt in southern Uruguay. It represents the eastern margin of the Río de la Plata Craton and, consequently, one of the main structural features of the Precambrian basement of Western Gondwana. This shear zone first underwent dextral shearing under upper to middle amphibolite facies conditions, giving rise to the reactivation of pre-existing crustal fabrics in the easternmost Piedra Alta Terrane. Afterwards, pure-shear-dominated sinistral shearing with contemporaneous magmatism took place under lower amphibolite to upper greenschist facies conditions. The mylonites resulting from this event were then locally reactivated by cataclastic deformation. This evolution points to strain localization under progressively retrograde conditions, indicating that the Sarandí del Yí Shear Zone represents an example of a thinning shear zone related to the collisional to post-collisional evolution of the Dom Feliciano Belt, which occurred between Meso- to Neoproterozoic (>600 Ma) and late Ediacaran to early Cambrian times.

  13. NMP22 BladderChek Test: point-of-care technology with life- and money-saving potential.

    PubMed

    Tomera, Kevin M

    2004-11-01

    A new, relatively obscure tumor marker assay, the NMP22 BladderChek Test (Matritech, Inc.), represents a paradigm shift in the diagnosis and management of urinary bladder cancer (transitional cell carcinoma). Specifically, BladderChek should be employed every time a cystoscopy is performed, with corresponding changes in the diagnostic protocol and in the American Urological Association guidelines for the diagnosis and management of bladder cancer. Currently, cystoscopy is the reference standard, and the NMP22 BladderChek Test used in combination with it improves cystoscopy's performance. At every stage of disease, BladderChek provides higher sensitivity for the detection of bladder cancer than cytology, which now represents the adjunctive standard of care; moreover, BladderChek is four times more sensitive than cytology and is available at half the cost. Early detection of bladder cancer improves prognosis, quality of life, and survival. BladderChek may be analogous to the prostate-specific antigen test and may eventually expand beyond the urologic setting into primary care for the testing of high-risk patients characterized by smoking history, occupational exposures, or age.

  14. The LISST-SL streamlined isokinetic suspended-sediment profiler

    USGS Publications Warehouse

    Gray, John R.; Agrawal, Yogesh C.; Pottsmith, H. Charles

    2004-01-01

    The new manually deployed Laser In Situ Scattering Transmissometer-StreamLined profiler (LISST-SL) represents a major technological advance for suspended-sediment measurements in rivers. The LISST-SL is being designed to provide real-time data on sediment concentrations and particle-size distributions. A pressure sensor and a current meter provide real-time depth and ambient-velocity data, respectively. The velocity data are also used to control pumping past an internal laser so that the intake velocity constantly matches the ambient stream velocity. Such isokinetic withdrawal is necessary for obtaining representative sediment measurements in streamflow and ensures compliance with established practices. The velocity and sediment-concentration data are used to compute fluxes for up to 32 particle-size classes at points, along verticals, or over the entire stream cross section. All data are stored internally and transmitted to the operator via a 2-wire conductor using a specially developed communication protocol. The LISST-SL's performance will be measured against published sedimentological accuracy criteria, and a performance summary will be published online.
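    As a rough sketch of the per-class flux computation the abstract describes: suspended-sediment flux for each particle-size class is concentration times ambient velocity times the sampled cross-section area. This is a hypothetical simplification for illustration only; the function name, units, and omission of calibration and sampling detail are assumptions, not the instrument's actual processing.

    ```python
    def sediment_flux(conc_by_class, velocity, area):
        """Per-size-class sediment flux in mg/s, given concentrations in
        mg/L, ambient velocity in m/s, and sampled cross-section area in
        m^2 (the factor 1000 converts L to m^3). Illustrative only."""
        return [c * velocity * area * 1000.0 for c in conc_by_class]
    ```

    Summing such point fluxes over verticals and across the section would give a total cross-sectional sediment discharge.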

  15. 40 CFR 427.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.32 Effluent limitations guidelines representing...

  16. 40 CFR 427.62 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Roofing Subcategory § 427.62 Effluent limitations guidelines representing the degree of...

  17. 40 CFR 427.63 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Roofing Subcategory § 427.63 Effluent limitations guidelines representing the degree of effluent...

  18. 40 CFR 427.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Floor Tile Subcategory § 427.72 Effluent limitations guidelines representing the degree...

  19. 40 CFR 427.42 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Elastomeric Binder) Subcategory § 427.42 Effluent limitations guidelines representing the...

  20. 40 CFR 427.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Sheet Subcategory § 427.22 Effluent limitations guidelines representing the degree...

  1. 40 CFR 427.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Floor Tile Subcategory § 427.72 Effluent limitations guidelines representing the degree of...

  2. 40 CFR 427.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.32 Effluent limitations guidelines representing the degree...

  3. 40 CFR 427.33 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.33 Effluent limitations guidelines representing the degree...

  4. 40 CFR 427.62 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Roofing Subcategory § 427.62 Effluent limitations guidelines representing the degree of effluent...

  5. 40 CFR 427.23 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Sheet Subcategory § 427.23 Effluent limitations guidelines representing the degree of...

  6. 40 CFR 427.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Pipe Subcategory § 427.12 Effluent limitations guidelines representing the degree of...

  7. 40 CFR 427.53 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Millboard Subcategory § 427.53 Effluent limitations guidelines representing the degree of effluent...

  8. 40 CFR 427.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Pipe Subcategory § 427.12 Effluent limitations guidelines representing the degree...

  9. 40 CFR 427.62 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Roofing Subcategory § 427.62 Effluent limitations guidelines representing the degree of...

  10. 40 CFR 427.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Sheet Subcategory § 427.22 Effluent limitations guidelines representing the degree of...

  11. 40 CFR 427.33 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.33 Effluent limitations guidelines representing the degree...

  12. 40 CFR 427.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Pipe Subcategory § 427.12 Effluent limitations guidelines representing the degree...

  13. 40 CFR 427.82 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Coating or Finishing of Asbestos Textiles Subcategory § 427.82 Effluent limitations guidelines representing...

  14. 40 CFR 427.43 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Elastomeric Binder) Subcategory § 427.43 Effluent limitations guidelines representing the...

  15. 40 CFR 427.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.32 Effluent limitations guidelines representing the degree...

  16. 40 CFR 427.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Floor Tile Subcategory § 427.72 Effluent limitations guidelines representing the degree of...

  17. 40 CFR 427.52 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Millboard Subcategory § 427.52 Effluent limitations guidelines representing the degree of...

  18. 40 CFR 427.52 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Millboard Subcategory § 427.52 Effluent limitations guidelines representing the degree of...

  19. 40 CFR 427.62 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Roofing Subcategory § 427.62 Effluent limitations guidelines representing the degree of effluent...

  20. 40 CFR 427.52 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Millboard Subcategory § 427.52 Effluent limitations guidelines representing the degree of effluent...

  1. 40 CFR 427.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.32 Effluent limitations guidelines representing...

  2. 40 CFR 427.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Pipe Subcategory § 427.12 Effluent limitations guidelines representing the degree...

  3. 40 CFR 427.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Pipe Subcategory § 427.13 Effluent limitations guidelines representing the degree of...

  4. 40 CFR 427.63 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Roofing Subcategory § 427.63 Effluent limitations guidelines representing the degree of effluent...

  5. 40 CFR 427.43 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Elastomeric Binder) Subcategory § 427.43 Effluent limitations guidelines representing the...

  6. 40 CFR 427.23 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Sheet Subcategory § 427.23 Effluent limitations guidelines representing the degree of...

  7. 40 CFR 427.53 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Millboard Subcategory § 427.53 Effluent limitations guidelines representing the degree of effluent...

  8. 40 CFR 427.52 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Millboard Subcategory § 427.52 Effluent limitations guidelines representing the degree of...

  9. 40 CFR 427.62 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Roofing Subcategory § 427.62 Effluent limitations guidelines representing the degree of...

  10. 40 CFR 427.73 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Floor Tile Subcategory § 427.73 Effluent limitations guidelines representing the degree of...

  11. 40 CFR 427.43 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Elastomeric Binder) Subcategory § 427.43 Effluent limitations guidelines representing the...

  12. 40 CFR 427.42 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Elastomeric Binder) Subcategory § 427.42 Effluent limitations guidelines representing the...

  13. 40 CFR 427.73 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Floor Tile Subcategory § 427.73 Effluent limitations guidelines representing the degree of...

  14. 40 CFR 427.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Sheet Subcategory § 427.22 Effluent limitations guidelines representing the degree...

  15. 40 CFR 427.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Sheet Subcategory § 427.22 Effluent limitations guidelines representing the degree of...

  16. 40 CFR 427.53 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Millboard Subcategory § 427.53 Effluent limitations guidelines representing the degree of effluent...

  17. 40 CFR 427.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.32 Effluent limitations guidelines representing...

  18. 40 CFR 427.23 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Sheet Subcategory § 427.23 Effluent limitations guidelines representing the degree of...

  19. 40 CFR 427.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Pipe Subcategory § 427.13 Effluent limitations guidelines representing the degree of...

  20. 40 CFR 427.33 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.33 Effluent limitations guidelines representing the degree...

  1. 40 CFR 427.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Floor Tile Subcategory § 427.72 Effluent limitations guidelines representing the degree...

  2. 40 CFR 427.73 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Floor Tile Subcategory § 427.73 Effluent limitations guidelines representing the degree of...

  3. 40 CFR 427.63 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Roofing Subcategory § 427.63 Effluent limitations guidelines representing the degree of effluent...

  4. 40 CFR 427.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Pipe Subcategory § 427.13 Effluent limitations guidelines representing the degree of...

  5. 40 CFR 427.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Floor Tile Subcategory § 427.72 Effluent limitations guidelines representing the degree...

  6. 40 CFR 427.52 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Millboard Subcategory § 427.52 Effluent limitations guidelines representing the degree of effluent...

  7. 40 CFR 427.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Sheet Subcategory § 427.22 Effluent limitations guidelines representing the degree...

  8. 40 CFR 427.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos-Cement Pipe Subcategory § 427.12 Effluent limitations guidelines representing the degree of...

  9. 40 CFR 418.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERTILIZER MANUFACTURING POINT SOURCE CATEGORY Mixed and Blend Fertilizer Production Subcategory § 418.72 Effluent limitations guidelines representing the...

  10. 40 CFR 418.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERTILIZER MANUFACTURING POINT SOURCE CATEGORY Mixed and Blend Fertilizer Production Subcategory § 418.72 Effluent limitations guidelines representing the...

  11. 40 CFR 418.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERTILIZER MANUFACTURING POINT SOURCE CATEGORY Mixed and Blend Fertilizer Production Subcategory § 418.72 Effluent limitations guidelines representing the...

  12. 40 CFR 426.132 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Hand Pressed and Blown Glass Manufacturing Subcategory § 426.132 Effluent limitations guidelines representing...

  13. 40 CFR 426.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Sheet Glass Manufacturing Subcategory § 426.22 Effluent limitations guidelines representing the degree of...

  14. 40 CFR 426.132 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Hand Pressed and Blown Glass Manufacturing Subcategory § 426.132 Effluent limitations guidelines representing...

  15. 40 CFR 426.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Rolled Glass Manufacturing Subcategory § 426.32 Effluent limitations guidelines representing the degree of...

  16. 40 CFR 426.32 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Rolled Glass Manufacturing Subcategory § 426.32 Effluent limitations guidelines representing the degree of...

  17. 40 CFR 426.42 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Plate Glass Manufacturing Subcategory § 426.42 Effluent limitations guidelines representing the degree of...

  18. 40 CFR 426.22 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Sheet Glass Manufacturing Subcategory § 426.22 Effluent limitations guidelines representing the degree of...

  19. 40 CFR 426.42 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Plate Glass Manufacturing Subcategory § 426.42 Effluent limitations guidelines representing the degree of...

  20. 40 CFR 418.72 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERTILIZER MANUFACTURING POINT SOURCE CATEGORY Mixed and Blend Fertilizer Production Subcategory § 418.72 Effluent limitations guidelines representing the...

  1. 40 CFR 429.143 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TIMBER PRODUCTS PROCESSING POINT SOURCE CATEGORY Particleboard Manufacturing Subcategory § 429.143 Effluent limitations representing the degree of effluent reduction...

  2. 40 CFR 429.142 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TIMBER PRODUCTS PROCESSING POINT SOURCE CATEGORY Particleboard Manufacturing Subcategory § 429.142 Effluent limitations representing the degree of effluent...

  3. 40 CFR 429.141 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TIMBER PRODUCTS PROCESSING POINT SOURCE CATEGORY Particleboard Manufacturing Subcategory § 429.141 Effluent limitations representing the degree of effluent reduction...

  4. Robotics virtual rail system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-07-05

    A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path composed of one or more segments representing the virtual track for the robot. Start and end points are assigned to the desired path, and a velocity is associated with each segment. A waypoint file is then generated containing positions along the virtual track, ordered from the start point to the end point, together with the velocity of each segment. The waypoint file is sent to the robot, which traverses the virtual track.
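    The waypoint-generation step might be sketched as follows. The `Waypoint` fields, the straight-line segment representation, and the fixed sampling interval are assumptions for illustration; the patent does not specify this format.

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class Waypoint:
        x: float
        y: float
        velocity: float  # commanded speed for this stretch of track

    def build_waypoints(segments, spacing=0.5):
        """Sample each straight segment ((x0, y0), (x1, y1), velocity) at
        a fixed spacing, producing an ordered waypoint list from the
        path's start point to its end point."""
        points = []
        for (x0, y0), (x1, y1), v in segments:
            dist = math.hypot(x1 - x0, y1 - y0)
            steps = max(1, int(dist / spacing))
            for k in range(steps):
                t = k / steps
                points.append(Waypoint(x0 + t * (x1 - x0), y0 + t * (y1 - y0), v))
        # append the final end point of the last segment
        (_, _), (x1, y1), v = segments[-1]
        points.append(Waypoint(x1, y1, v))
        return points
    ```

    Serializing this list (one position and velocity per line, say) would yield the kind of waypoint file the robot consumes.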

  5. Urinary Tract Infection Antibiotic Trial Study Design: A Systematic Review.

    PubMed

    Basmaci, Romain; Vazouras, Konstantinos; Bielicki, Julia; Folgori, Laura; Hsia, Yingfen; Zaoutis, Theoklis; Sharland, Mike

    2017-12-01

    Urinary tract infections (UTIs) represent common bacterial infections in children, yet no guidance exists on the conduct of pediatric febrile UTI clinical trials (CTs). We aimed to assess the criteria used for patient selection and the efficacy end points in febrile pediatric UTI CTs. Medline, Embase, Cochrane central databases, and clinicaltrials.gov were searched between January 1, 1990, and November 24, 2016. We combined Medical Subject Headings terms and free-text terms for "urinary tract infections," "therapeutics," and "clinical trials" in children (0-18 years), identifying 3086 articles. Two independent reviewers assessed study quality and performed data extraction. We included 40 CTs in which a total of 4381 cases of pediatric UTIs were investigated. Positive urine culture results and fever were the most common inclusion criteria (93% and 78%, respectively). Urine sampling method, pyuria, and colony thresholds were highly variable. Clinical and microbiological end points were assessed in 88% and 93% of the studies, respectively. Timing of end point assessment was highly variable, and only 3 (17%) of the 18 studies performed after the 1998 Food and Drug Administration guidance publication assessed primary and secondary end points consistently with that guidance. Our limitations included a mixed population of healthy children and children with underlying conditions. In 6 trials, researchers studied a subgroup of patients with afebrile UTI. We observed wide variability in the microbiological inclusion criteria and the timing of end point assessment. The available guidance for adults appears not to be used by pediatricians and does not seem applicable to childhood UTI. A harmonized design for pediatric UTI CTs is necessary. Copyright © 2017 by the American Academy of Pediatrics.

  6. Dike emplacement and the birth of the Yellowstone hotspot, western USA

    NASA Astrophysics Data System (ADS)

    Glen, J. M.; Ponce, D. A.; Nomade, S.; John, D. A.

    2003-04-01

    The birth of the Yellowstone hotspot in middle Miocene time was marked by extensive flood basalt volcanism. Prominent aeromagnetic anomalies (referred to collectively as the Northern Nevada rifts), extending hundreds of kilometers across Nevada, are thought to represent dike swarms injected at the time of flood volcanism. Until now, however, dikes from only one of these anomalies (eastern) have been documented, sampled, and dated (40Ar/39Ar ages range from 15.4 +/-0.2 to 16.7 +/-0.5 Ma; John et al., 2000, ages recalculated using the FCS standard age of 28.02 +/-0.28 Ma). We present new paleomagnetic data and a 40Ar/39Ar age of 16.6 +/-0.3 Ma for a mafic dike, suggesting that all the anomalies likely originate from the same mid-Miocene fracturing event. The magnetic anomalies, together with the trends of dike swarms, faults, and fold axes, produce a radiating pattern that converges on a point near the Oregon-Idaho border. We speculate that this pattern formed by stresses imposed by the impact of the Yellowstone hotspot. Glen and Ponce (2002) propose a simple stress model to account for this fracture pattern that consists of a point source of stress at the base of the crust and a regional stress field aligned with the presumed middle Miocene stress direction. Overlapping point and regional stresses result in stress trajectories that form a radiating pattern near the point source (i.e., hotspot). Far from the influence of the point stress, however, stress trajectories verge towards the NNW-trending regional stress direction (i.e., plate boundary stresses), similar to the pattern of dike swarm traces. Glen and Ponce, 2002, Geology, 30, 7, 647-650; John et al., 2000, Geol. Soc. Nev. Sym. Proc., May 15-18, 2000, 127-154

  7. Correction of measured Gamma-Knife output factors for angular dependence of diode detectors and PinPoint ionization chamber.

    PubMed

    Hršak, Hrvoje; Majer, Marija; Grego, Timor; Bibić, Juraj; Heinrich, Zdravko

    2014-12-01

    Dosimetry for the Gamma-Knife requires detectors with high spatial resolution and minimal angular dependence of response. Angular dependence and end effect time for p-type silicon detectors (PTW Diode P and Diode E) and a PTW PinPoint ionization chamber were measured with Gamma-Knife beams. Weighted angular dependence correction factors were calculated for each detector, and the Gamma-Knife output factors were corrected for angular dependence and end effect time. For the Gamma-Knife beam angle range of 84°-54°, Diode P shows considerable angular dependence: 9% for the 18 mm collimator and 8% for the 14, 8, and 4 mm collimators. For Diode E this dependence is about 4% for all collimators. The PinPoint ionization chamber shows angular dependence of less than 3% for the 18, 14, and 8 mm helmets and 10% for the 4 mm collimator due to the volumetric averaging effect in a small photon beam. Corrected output factors for the 14 mm helmet are in very good agreement (within ±0.3%) with published data and values recommended by the vendor (Elekta AB, Stockholm, Sweden). For the 8 mm collimator the diodes are still in good agreement with recommended values (within ±0.6%), while the PinPoint gives a value 3% lower. For the 4 mm helmet, Diodes P and E show an over-response of 2.8% and 1.8%, respectively. For the PinPoint chamber, the output factor of the 4 mm collimator is 25% lower than the Elekta value, which is generally not a consequence of angular dependence but of the volumetric averaging effect and the lack of lateral electronic equilibrium. Diodes P and E represent a good choice for Gamma-Knife dosimetry. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  8. Childhood Self-Control Predicts Smoking Throughout Life: Evidence From 21,000 Cohort Study Participants

    PubMed Central

    2016-01-01

    Objective: Low self-control has been linked with smoking, yet it remains unclear whether childhood self-control underlies the emergence of lifetime smoking patterns. We examined the contribution of childhood self-control to early smoking initiation and smoking across adulthood. Methods: 21,132 participants were drawn from 2 nationally representative cohort studies: the 1970 British Cohort Study (BCS) and the 1958 National Child Development Study (NCDS). Child self-control was teacher-rated at age 10 in the BCS and at ages 7 and 11 in the NCDS. Participants reported their smoking status and number of cigarettes smoked per day at 5 time-points in the BCS (ages 26–42) and 6 time-points in the NCDS (ages 23–55). Both studies controlled for socioeconomic background, cognitive ability, psychological distress, gender, and parental smoking; the NCDS also controlled for an extended set of background characteristics. Results: Early self-control made a substantial graded contribution to (not) smoking throughout life. In adjusted regression models, a 1-SD increase in self-control predicted a 6.9 percentage point lower probability of smoking in the BCS, and this was replicated in the NCDS (a 5.2 percentage point reduced risk). Adolescent smoking explained over half of the association between self-control and adult smoking. Childhood self-control was positively related to smoking cessation and negatively related to smoking initiation, relapse to smoking, and the number of cigarettes smoked in adulthood. Conclusions: This study provides strong evidence that low childhood self-control predicts an increased risk of smoking throughout adulthood and points to adolescent smoking as a key pathway through which this may occur. PMID:27607137

  9. On the construction, comparison, and variability of airsheds for interpreting semivolatile organic compounds in passively sampled air.

    PubMed

    Westgate, John N; Wania, Frank

    2011-10-15

    Air mass origin as determined by back trajectories often aids in explaining some of the short-term variability in the atmospheric concentrations of semivolatile organic contaminants. Airsheds, constructed by amalgamating large numbers of back trajectories, capture average air mass origins over longer time periods and thus have found use in interpreting air concentrations obtained by passive air samplers. To explore some of their key characteristics, airsheds for 54 locations on Earth were constructed and compared for roundness, seasonality, and interannual variability. To avoid the so-called "pole problem" and to simplify the calculation of roundness, a "geodesic grid" was used to bin the back-trajectory end points. Departures from roundness were seen to occur at all latitudes and to correlate significantly with local slope but no strong relationship between latitude and roundness was revealed. Seasonality and interannual variability vary widely enough to imply that static models of transport are not sufficient to describe the proximity of an area to potential sources of contaminants. For interpreting an air measurement an airshed should be generated specifically for the deployment time of the sampler, especially when investigating long-term trends. Samples taken in a single season may not represent the average annual atmosphere, and samples taken in linear, as opposed to round, airsheds may not represent the average atmosphere in the area. Simple methods are proposed to ascertain the significance of an airshed or individual cell. It is recommended that when establishing potential contaminant source regions only end points with departure heights of less than ∼700 m be considered.

  10. Gender trends in authorship in oral and maxillofacial surgery literature: A 30-year analysis.

    PubMed

    Nkenke, Emeka; Seemann, Rudolf; Vairaktaris, Elefterios; Schaller, Hans-Günter; Rohde, Maximilian; Stelzle, Florian; Knipfer, Christian

    2015-07-01

    The aim of the present study was to perform a bibliometric analysis of the gender distribution of first and senior authorships in important oral and maxillofacial journals over the 30-year period from 1980 to 2010. Articles published in three representative oral and maxillofacial surgery journals were selected. The years 1980, 1990, 2000, and 2010 were chosen as representative points in time for article selection. Original research, case reports, technical notes, and reviews were included in the analysis. Case reports and technical notes were pooled in one group. For each article, the gender of the first author as well as that of the senior author was determined, based on inspection of their first names. The type of article was determined and the country of origin of the article was documented. A total of 1412 articles were subjected to data analysis. A significant increase in female authorship in oral and maxillofacial surgery could be identified over the chosen 30-year period. However, the number of publications by male authors was still significantly higher at all points in time, exceeding those of female authors by at least 3.8-fold in 2010. As there is a trend towards feminization of medicine and dentistry, the results of the present study may serve as the basis for further analysis of the current situation, and the identification of necessary actions to accelerate the closure of the gender gap in publishing in oral and maxillofacial surgery. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  11. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  12. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  13. Beyond Slopes and Points: Teaching Students How Graphs Describe the Relationships between Scientific Phenomena

    ERIC Educational Resources Information Center

    Harris, David; Gomez Zwiep, Susan

    2013-01-01

    Graphs represent complex information. They show relationships and help students see patterns and compare data. Students often do not appreciate the illuminating power of graphs, interpreting them literally rather than as symbolic representations (Leinhardt, Zaslavsky, and Stein 1990). Students often read graphs point by point instead of seeing…

  14. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  15. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  16. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  17. When Do Infants Begin to Follow a Point?

    ERIC Educational Resources Information Center

    Bertenthal, Bennett I.; Boyer, Ty W.; Harding, Samuel

    2014-01-01

    Infants' understanding of a pointing gesture represents a major milestone in their communicative development. The current consensus is that infants are not capable of following a pointing gesture until 9-12 months of age. In this article, we present evidence from 4- and 6-month-old infants challenging this conclusion. Infants were tested with…

  18. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...
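
The linearity check excerpted above can be sketched as follows: fit a least-squares straight line to the calibration points and flag any point whose deviation exceeds two percent of that point's value. The function name and the example data are illustrative, not from the regulation.

```python
import numpy as np

def linear_fit_ok(conc, response, tol=0.02):
    """True if every point lies within `tol` (fractional) of a best-fit line."""
    conc = np.asarray(conc, float)
    response = np.asarray(response, float)
    slope, intercept = np.polyfit(conc, response, 1)   # least-squares line
    deviation = np.abs(response - (slope * conc + intercept))
    return bool(np.all(deviation <= tol * np.abs(response)))

conc = [10, 20, 30, 40, 50]                 # hypothetical span points, ppm C
response = [10.1, 19.9, 30.2, 39.8, 50.0]   # hypothetical analyzer readings
print(linear_fit_ok(conc, response))
```

When this check fails, the excerpt directs the operator to instead use a best-fit non-linear equation that reproduces the data within two percent at each test point.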

  19. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  20. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  1. 40 CFR 90.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization. Prior...

  2. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  3. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... deviation from a least-squares best-fit straight line is two percent or less of the value at each data point... exceeds two percent at any point, use the best-fit non-linear equation which represents the data to within two percent of each test point to determine concentration. (d) Oxygen interference optimization...

  4. Drawing Lines with Light in Holographic Space

    NASA Astrophysics Data System (ADS)

    Chang, Yin-Ren; Richardson, Martin

    2013-02-01

    This paper explores the dynamic and expressive possibilities of holographic art through a comparison of art history and technical media such as photography, film and holographic technologies. Examples of modern art and the creative expression of time and motion are examined using the early 20th-century art movement Cubism, in which subjects are portrayed so as to be seen simultaneously from different angles. Folding space is represented as subject matter, as it can depict space from multiple points of time. The paper also investigates the way holographic art has explored time and space. The lenticular lens-based media reveal a more subjective, poetic art in the form of lyrical images and messages as spectators pass through time, or walk along with the piece of work, through an interactive process. It is argued that photographic practice is another example of artistic representation in the form of an aesthetic medium of time movement and as such shares common ground with other dynamic expressions that require time-based interaction.

  5. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each point, the regression coefficient and standard error from a Cox proportional hazards regression is obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
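
A minimal sketch of the slope property used by these plots: for points through the origin, the no-intercept least-squares slope is sum(x*y)/sum(x*x), which is what a line fit to the pictured coordinates recovers. The data below are synthetic, and this illustrates only the through-the-origin fit, not the Cox partial-likelihood construction of the adjusted coordinates.

```python
import numpy as np

def slope_through_origin(x, y):
    """No-intercept least-squares slope: sum(x*y) / sum(x*x)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return float(np.sum(x * y) / np.sum(x * x))

# Synthetic "adjusted variable" coordinates with true slope 0.8:
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=0.1, size=200)
print(slope_through_origin(x, y))   # close to 0.8
```

The appeal of the plot is exactly this: the regression coefficient can be read off as the slope of a simple fitted line, while outliers stand out as points far from it.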

  6. Separation of Dynamics in the Free Energy Landscape

    NASA Astrophysics Data System (ADS)

    Ekimoto, Toru; Odagaki, Takashi; Yoshimori, Akira

    2008-02-01

    The dynamics of a representative point in a model free energy landscape (FEL) is analyzed by the Langevin equation with the FEL as the driving potential. From the detailed analysis of the generalized susceptibility, fast, slow and Johari-Goldstein (JG) processes are shown to be well described by the FEL. Namely, the fast process is determined by the stochastic motion confined in a basin of the FEL and the relaxation time is related to the curvature of the FEL at the bottom of the basin. The jump motion among basins gives rise to the slow relaxation whose relaxation time is determined by the distribution of the barriers in the FEL and the JG process is produced by weak modulation of the FEL.
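
The separation of fast (intra-basin) and slow (inter-basin) motion described above can be illustrated with a generic toy model, not the authors' FEL: overdamped Langevin dynamics in a double-well potential V(x) = (x^2 - 1)^2, integrated with the Euler-Maruyama scheme. All parameter values here are arbitrary choices for the illustration.

```python
import numpy as np

# dx = -V'(x) dt + sqrt(2*T*dt) * N(0,1), with V(x) = (x^2 - 1)^2.
# Fast process: jitter confined to one basin (set by the curvature at the
# basin bottom). Slow process: rare hops over the barrier between x = -1
# and x = +1, whose rate is set by the barrier height relative to T.

def simulate(n_steps=200_000, dt=1e-3, temperature=0.4, seed=1):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = -1.0  # start at the bottom of the left basin
    for i in range(1, n_steps):
        force = -4.0 * x[i - 1] * (x[i - 1] ** 2 - 1.0)  # -dV/dx
        noise = np.sqrt(2.0 * temperature * dt) * rng.standard_normal()
        x[i] = x[i - 1] + force * dt + noise
    return x

traj = simulate()
core = traj[np.abs(traj) > 0.5]                 # ignore points near the barrier top
hops = int(np.sum(np.diff(np.sign(core)) != 0)) # count basin-to-basin jumps
print("basin hops:", hops)
```

Lowering the temperature suppresses the hop rate roughly as exp(-barrier/T) while leaving the fast intra-basin relaxation largely intact, which is the separation of timescales the abstract attributes to the FEL picture.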

  7. Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions

    NASA Technical Reports Server (NTRS)

    Hartung, Lin C.

    1991-01-01

    A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.

  8. Lensless microscopy technique for static and dynamic colloidal systems.

    PubMed

    Alvarez-Palacio, D C; Garcia-Sucerquia, J

    2010-09-15

    We present the application of a lensless microscopy technique known as digital in-line holographic microscopy (DIHM) to image dynamic and static colloidal systems of microspheres. DIHM has been perfected up to the point that submicrometer lateral resolution with several hundreds of micrometers depth of field is achieved with visible light; it is shown that the lateral resolution of DIHM is enough to resolve self-assembled colloidal monolayers built up from polystyrene spheres with submicrometer diameters. The time resolution of DIHM is of the order of 4 frames/s at 2048 x 2048 pixels, which represents a 16-fold improvement over the time resolution of confocal scanning microscopy. This feature is applied to the visualization of the migration of dewetting fronts in dynamic colloidal systems and the formation of front-like arrangements of particles. Copyright 2010 Elsevier Inc. All rights reserved.

  9. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
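
A minimal sketch of the equivalent-dimension (ED) transformation under stated assumptions: here the unknown cumulative distribution is approximated by the empirical CDF (the paper uses a model-free kernel estimate), and the earthquake parameters are synthetic. Each parameter maps onto a linear [0, 1] scale, after which Euclidean distance between events is meaningful.

```python
import numpy as np

def to_equivalent_dimension(values):
    """Map each value to its empirical CDF value in (0, 1]."""
    values = np.asarray(values, float)
    ranks = np.argsort(np.argsort(values)) + 1   # ranks 1..n
    return ranks / len(values)

# Two earthquake parameters on very different scales:
magnitude = np.array([1.2, 3.5, 2.0, 4.1, 2.8])
depth_km = np.array([5.0, 12.0, 7.5, 30.0, 9.0])

ed = np.column_stack([to_equivalent_dimension(magnitude),
                      to_equivalent_dimension(depth_km)])
# Distance between events 0 and 1 in the 2-D ED space:
print(np.linalg.norm(ed[0] - ed[1]))
```

Because both coordinates are CDF values, neither the magnitude scale nor the kilometre scale dominates the distance, which is the metric problem the transformation is designed to remove.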

  10. Wildland Fire Forecasting: Predicting Wildfire Behavior, Growth, and Feedbacks on Weather

    NASA Astrophysics Data System (ADS)

    Coen, J. L.

    2005-12-01

    Recent developments in wildland fire research models have represented more complex fire behavior, at the cost of increased computational requirements. When operational constraints are included, such as the need to produce forecasts faster than real time, the challenge becomes balancing how much complexity (with corresponding gains in realism) and accuracy can be achieved in producing the quantities of interest while meeting those constraints. Current field tools are calculator- or Palm-Pilot-based algorithms such as BEHAVE and BEHAVE Plus that produce timely estimates of instantaneous fire spread rate, flame length, and fire intensity at a point, using readily estimated inputs of fuel model, terrain slope, and atmospheric wind speed at a point. At the cost of requiring a PC and slower calculation, FARSITE represents two-dimensional fire spread and adds capabilities including a parameterized representation of crown fire ignition. This work describes how a coupled atmosphere-fire model previously used as a research tool has been adapted for production of real-time forecasts of fire growth and its interactions with weather over a domain focusing on Colorado during summer 2004. The coupled atmosphere-wildland fire-environment (CAWFE) model is composed of a 3-dimensional atmospheric prediction model two-way coupled with an empirical fire spread model. The models are connected in that atmospheric conditions (and fuel conditions influenced by the atmosphere) affect the rate and direction of fire propagation, which releases sensible and latent heat (i.e., thermal and water vapor fluxes) to the atmosphere that in turn alter the winds and atmospheric structure around the fire.
Thus, the model can represent temporally and spatially varying weather and the fire's feedbacks on the atmosphere, which are at the heart of sudden changes in fire behavior and of extreme fire behavior such as blow-ups, which are not predictable with current tools. Although this work shows that it is possible to perform more detailed simulations in real time, fire behavior forecasting remains a challenging problem. This is due to challenges in weather prediction, particularly at the fine spatial and temporal scales considered "nowcasting" (0-6 hrs), uncertainties in fire behavior even with known meteorological conditions, limitations in quantitative datasets on fuel properties such as fuel loading, and verification. This work describes efforts to advance these capabilities with input from remote sensing data on fuel characteristics, dynamic steering, and object-based verification with remotely sensed fire perimeters.

  11. Investigating why and for whom management ethnic representativeness influences interpersonal mistreatment in the workplace.

    PubMed

    Lindsey, Alex P; Avery, Derek R; Dawson, Jeremy F; King, Eden B

    2017-11-01

    Preliminary research suggests that employees use the demographic makeup of their organization to make sense of diversity-related incidents at work. The authors build on this work by examining the impact of management ethnic representativeness-the degree to which the ethnic composition of managers in an organization mirrors or is misaligned with the ethnic composition of employees in that organization. To do so, they integrate signaling theory and a sense-making perspective into a relational demography framework to investigate why and for whom management ethnic representativeness may have an impact on interpersonal mistreatment at work. Specifically, in three complementary studies, the authors examine the relationship between management ethnic representativeness and interpersonal mistreatment. First, they analyze the relationship between management ethnic representativeness and perceptions of harassment, bullying, and abuse the next year, as moderated by individuals' ethnic similarity to others in their organizations in a sample of 60,602 employees of Britain's National Health Service. Second, a constructive replication investigates perceived behavioral integrity as an explanatory mechanism that can account for the effects of representativeness using data from a nationally representative survey of working adults in the United States. Third and finally, online survey data collected at two time points replicated these patterns and further integrated the effects of representativeness and dissimilarity when they are measured using both objective and subjective strategies. Results support the authors' proposed moderated mediation model in which management ethnic representation is negatively related to interpersonal mistreatment through the mediator of perceived behavioral integrity, with effects being stronger for ethnically dissimilar employees. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. 40 CFR 447.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) INK FORMULATING POINT SOURCE CATEGORY Oil-Base Solvent Wash Ink Subcategory § 447.12 Effluent limitations guidelines representing the degree...

  13. 40 CFR 447.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) INK FORMULATING POINT SOURCE CATEGORY Oil-Base Solvent Wash Ink Subcategory § 447.12 Effluent limitations guidelines representing the degree...

  14. 40 CFR 447.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) INK FORMULATING POINT SOURCE CATEGORY Oil-Base Solvent Wash Ink Subcategory § 447.12 Effluent limitations guidelines representing the degree...

  15. 40 CFR 427.83 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Coating or Finishing of Asbestos Textiles Subcategory § 427.83 Effluent limitations guidelines representing the degree...

  16. 40 CFR 427.33 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Starch Binder) Subcategory § 427.33 Effluent limitations guidelines representing the degree of effluent...

  17. 40 CFR 427.43 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ASBESTOS MANUFACTURING POINT SOURCE CATEGORY Asbestos Paper (Elastomeric Binder) Subcategory § 427.43 Effluent limitations guidelines representing the degree of effluent...

  18. 40 CFR 429.73 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS TIMBER PRODUCTS PROCESSING POINT SOURCE CATEGORY Wood Preserving-Water Borne or Nonpressure Subcategory § 429.73 Effluent limitations representing the degree of effluent...

  19. 40 CFR 429.73 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS TIMBER PRODUCTS PROCESSING POINT SOURCE CATEGORY Wood Preserving-Water Borne or Nonpressure Subcategory § 429.73 Effluent limitations representing the degree of effluent...

  20. 40 CFR 418.77 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERTILIZER MANUFACTURING POINT SOURCE CATEGORY Mixed and Blend Fertilizer Production Subcategory § 418.77 Effluent limitations guidelines representing the degree of...
