Computationally Efficient Resampling of Nonuniform Oversampled SAR Data
2010-05-01
noncoherently. The resampled data are calculated using both a simple average and a weighted average of the demodulated data. The average nonuniform...trials with randomly varying accelerations. The results are shown in Fig. 5 for the noncoherent power difference and Fig. 6 for the coherent power...simple average. Figure 5. Noncoherent difference between SAR imagery generated with uniform sampling and nonuniform sampling that was resampled
An optical systems analysis approach to image resampling
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
1997-01-01
All types of image registration require some type of resampling, either during the registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the ending resolution is not the resolution at which the data was observed with. The registration algorithm designer and end product user are presented with a multitude of possible resampling methods each of which modify the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show how an imaging system modifies the scene from an end to end optical systems analysis approach, (2) to develop a generalized resampling model, and (3) empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The result scene data will be convolved with a point spread function representing the optical response. The resultant scene will be convolved with the detection systems response and subsampled to the desired resolution. The resultant data product will be subsequently resampled to the correct grid using the Hanning windowed sinc interpolator and the results and errors tabulated and discussed.
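As a rough illustration of the kind of Hanning-windowed sinc interpolator developed in that work (the exact kernel and parameters of the paper are not reproduced here; the kernel half-width and signal names below are assumptions), a minimal sketch in Python:

```python
import numpy as np

def hanning_windowed_sinc_resample(signal, new_positions, half_width=8):
    """Resample a uniformly sampled 1-D signal at arbitrary (fractional)
    positions using a truncated, Hanning-windowed sinc kernel."""
    n = len(signal)
    out = np.zeros(len(new_positions))
    for k, x in enumerate(new_positions):
        lo = max(int(np.floor(x)) - half_width + 1, 0)
        hi = min(int(np.floor(x)) + half_width + 1, n)
        idx = np.arange(lo, hi)
        arg = idx - x                                              # distance to the target position
        taper = 0.5 * (1.0 + np.cos(np.pi * arg / half_width))     # Hanning taper
        taper[np.abs(arg) > half_width] = 0.0
        out[k] = np.dot(signal[idx], np.sinc(arg) * taper)         # windowed-sinc weights
    return out

# usage: resample a band-limited test signal onto a non-integer grid
t = np.arange(256)
sig = np.sin(2 * np.pi * 0.03 * t)
resampled = hanning_windowed_sinc_resample(sig, np.arange(0.0, 255.0, 1.7))
```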
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling involved in re-size and rotation transformations is an essential building block in a typical digital image alteration. Fortunately, traces left from such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The involved process includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
Arctic Acoustic Workshop Proceedings, 14-15 February 1989.
1989-06-01
measurements. The measurements reported by Levine et al. (1987) were taken from current and temperature sensors moored in two triangular grids. The internal...requires a resampling of the data series on a uniform depth-time grid. Statistics calculated from the resampled series will be used to test numerical...from an isolated keel. Figure 2: 2-D Modeling Geometry - The model is based on a 2-D Cartesian grid with an axis of symmetry on the left. A pulsed
Reconstruction of dynamical systems from resampled point processes produced by neuron models
NASA Astrophysics Data System (ADS)
Pavlova, Olga N.; Pavlov, Alexey N.
2018-04-01
Characterization of dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals such as sequences of interspike intervals (ISIs). This theoretical background confirms the ability of attractor reconstruction from ISIs generated by chaotically driven neuron models. The quality of such reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction for short amounts of data and show that this effect is observed for different types of spike-generation mechanisms.
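A minimal sketch of the attractor-reconstruction step from ISIs, assuming spike times are available as a 1-D array; the embedding dimension and lag are illustrative choices, not those used in the study:

```python
import numpy as np

def isi_delay_embedding(spike_times, dim=3, lag=1):
    """Build time-delay embedding vectors from the interspike intervals (ISIs)
    of a point process, as used for attractor reconstruction."""
    isi = np.diff(np.asarray(spike_times))               # non-uniformly sampled ISI sequence
    n_vectors = len(isi) - (dim - 1) * lag
    return np.column_stack([isi[i * lag:i * lag + n_vectors] for i in range(dim)])

# usage with synthetic spike times
spikes = np.cumsum(np.random.exponential(scale=0.1, size=500))
embedded = isi_delay_embedding(spikes, dim=3, lag=2)      # shape (n_vectors, 3)
```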
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows us to effectively reduce the number of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
Global Data Spatially Interrelate System for Scientific Big Data Spatial-Seamless Sharing
NASA Astrophysics Data System (ADS)
Yu, J.; Wu, L.; Yang, Y.; Lei, X.; He, W.
2014-04-01
A good data sharing system with spatially seamless services spares scientists the tedious and time-consuming work of spatial transformation, and hence encourages the use of scientific data and increases scientific innovation. Having been adopted as the framework for Earth datasets by the Group on Earth Observation (GEO), the Earth System Spatial Grid (ESSG) has the potential to be the spatial reference for Earth datasets. Based on SDOG-ESSG, an implementation of ESSG, a data sharing system named the global data spatially interrelate system (GASE) was designed to make data sharing spatially seamless. The architecture of GASE is introduced, and the implementation of its two key components, V-Pools and the interrelating engine, is presented together with a prototype. Any dataset is first resampled into SDOG-ESSG, divided into small blocks, and then mapped into the hierarchical system of the distributed file system in V-Pools, which together allows the data to be served at a uniform spatial reference with high efficiency. In addition, datasets from different data centres are interrelated by the interrelating engine at the uniform spatial reference of SDOG-ESSG, which enables the system to share open datasets on the internet in a spatially seamless manner.
Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.
Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui
2018-01-13
Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems is neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so it is named quasi scattered data in this paper. The points can therefore be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatter, it can be demonstrated that the error introduced by resampling is negligible and that the method is therefore feasible.
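The row-wise curve fitting and resampling steps can be sketched with an ordinary cubic B-spline in place of NURBS; the function below is a simplified stand-in under that assumption, not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def resample_row(points, n_samples=100):
    """Fit a cubic B-spline through one measured row of (x, y, z) points and
    resample it at uniformly spaced parameter values."""
    tck, _ = splprep(points.T, s=0.0, k=3)                # interpolating cubic spline
    u_new = np.linspace(0.0, 1.0, n_samples)               # uniform parameter grid
    return np.column_stack(splev(u_new, tck))

# usage: a noisy helical row of measurement points
t = np.linspace(0.0, 2.0 * np.pi, 40)
row = np.column_stack([np.cos(t), np.sin(t), 0.1 * t]) + 0.001 * np.random.randn(40, 3)
resampled_row = resample_row(row, n_samples=64)
```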
NASA Astrophysics Data System (ADS)
Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2009-02-01
Images of mastectomy breast specimens have been acquired with a bench-top experimental cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast into those for a compressed breast without altering the breast volume or regional breast density. With this technique, 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view. The image data were first deformed by distorting the voxels with a uniform distortion ratio. These deformed data were then deformed again using distortion ratios varying with the breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest-wall-to-nipple direction while shrinking it in the mediolateral direction, after which the data were re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.
Assessing Uncertainties in Surface Water Security: A Probabilistic Multi-model Resampling approach
NASA Astrophysics Data System (ADS)
Rodrigues, D. B. B.
2015-12-01
Various uncertainties are involved in the representation of processes that characterize interactions between societal needs, ecosystem functioning, and hydrological conditions. Here, we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multi-model and resampling framework. We consider several uncertainty sources including those related to: i) observed streamflow data; ii) hydrological model structure; iii) residual analysis; iv) the definition of the Environmental Flow Requirement method; v) the definition of critical conditions for water provision; and vi) the critical demand imposed by human activities. We estimate the overall uncertainty coming from the hydrological model by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multi-model framework with those provided by each model uncertainty estimation approach. The method is general and can be easily extended, forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
Restoration and reconstruction from overlapping images
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Kaiser, Daniel J.; Hanson, Andrew L.; Li, Jing
1997-01-01
This paper describes a technique for restoring and reconstructing a scene from overlapping images. In situations where there are multiple, overlapping images of the same scene, it may be desirable to create a single image that most closely approximates the scene, based on all of the data in the available images. For example, successive swaths acquired by NASA's planned Moderate Imaging Spectrometer (MODIS) will overlap, particularly at wide scan angles, creating a severe visual artifact in the output image. Resampling the overlapping swaths to produce a more accurate image on a uniform grid requires restoration and reconstruction. The one-pass restoration and reconstruction technique developed in this paper yields mean-square-optimal resampling, based on a comprehensive end-to-end system model that accounts for image overlap, and subject to user-defined and data-availability constraints on the spatial support of the filter.
NASA Astrophysics Data System (ADS)
Baisden, W. T.; Prior, C.; Lambie, S.; Tate, K.; Bruhn, F.; Parfitt, R.; Schipper, L.; Wilde, R. H.; Ross, C.
2006-12-01
Soil organic matter contains more C than terrestrial biomass and atmospheric CO2 combined, and reacts to climate and land-use change on timescales requiring long-term experiments or monitoring. The direction and uncertainty of soil C stock changes has been difficult to predict and incorporate in decision support tools for climate change policies. Moreover, standardization of approaches has been difficult because historic methods of soil sampling have varied regionally, nationally and temporally. The most common and uniform type of historic sampling is soil profiles, which have commonly been collected, described and archived in the course of both soil survey studies and research. Resampling soil profiles has considerable utility in carbon monitoring and in parameterizing models to understand the ecosystem responses to global change. Recent work spanning seven soil orders in New Zealand's grazed pastures has shown that, averaged over approximately 20 years, 31 soil profiles lost 106 g C m-2 y-1 (p=0.01) and 9.1 g N m-2 y-1 (p=0.002). These losses are unexpected and appear to extend well below the upper 30 cm of soil. Following on these recent results, additional advantages of resampling soil profiles can be emphasized. One of the most powerful applications afforded by resampling archived soils is the use of the pulse label of radiocarbon injected into the atmosphere by thermonuclear weapons testing circa 1963 as a tracer of soil carbon dynamics. This approach allows estimation of the proportion of soil C that is 'passive' or 'inert' and therefore unlikely to respond to global change. Evaluation of resampled soil horizons in a New Zealand soil chronosequence confirms that the approach yields consistent values for the proportion of 'passive' soil C, reaching 25% of surface horizon soil C over 12,000 years. Across whole profiles, radiocarbon data suggest that the proportion of 'passive' C in New Zealand grassland soil can be less than 40% of total soil C. Below 30 cm, 1 kg C m-2 or more may be reactive on decadal timescales, supporting evidence of soil C losses from throughout the soil profiles. Information from resampled soil profiles can be combined with additional contemporary measurements to test hypotheses about mechanisms for soil C changes. For example, Δ14C in excess of 200‰ in water extractable dissolved organic C (DOC) from surface soil horizons supports the hypothesis that decadal movement of DOC represents an important translocation of soil C. These preliminary results demonstrate that resampling whole soil profiles can support substantial progress in C cycle science, ranging from updating operational C accounting systems to the frontiers of research. Resampling can be complementary or superior to fixed-depth interval sampling of surface soil layers. Resampling must however be undertaken with relative urgency to maximize the potential interpretive power of bomb-derived radiocarbon.
NASA Astrophysics Data System (ADS)
Müller, H.; Haberlandt, U.
2018-01-01
Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal rainfall disaggregation of daily data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and Δt = 3.75 min). Results are compared both with observations and with an often-used approach based on the assumption that time steps of Δt = 5.625 min, which result if a branching number of 2 is applied throughout, can be replaced with Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. The best overall performance regarding rainfall statistics is obtained by the method in which the disaggregation process ends at time steps of 7.5 min duration, deriving the 5 min time steps by linear interpolation. With subsequent resampling, this method leads to a good representation of manhole flooding and combined sewer overflow volume in the hydrological simulations and outperforms the 1280 min approach.
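A minimal sketch of one uniform-splitting multiplicative cascade, assuming a branching number of 3 and i.i.d. Dirichlet weights (an illustrative weight model, not the calibrated cascade generator of the study):

```python
import numpy as np

def cascade_disaggregate(daily_totals, n_levels=5, branching=3, rng=None):
    """Disaggregate coarse rainfall totals by repeatedly splitting each interval
    into `branching` sub-intervals with random weights that sum to one
    (uniform-splitting multiplicative random cascade)."""
    rng = np.random.default_rng(rng)
    series = np.asarray(daily_totals, dtype=float)
    for _ in range(n_levels):
        weights = rng.dirichlet(np.ones(branching), size=len(series))
        series = (series[:, None] * weights).ravel()        # each value split into `branching` parts
    return series

# usage: one week of daily totals, five cascade levels -> 3**5 = 243 intervals per day
fine_series = cascade_disaggregate([12.0, 0.0, 3.5, 0.0, 0.0, 8.2, 1.1], n_levels=5)
```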
Resampling method for applying density-dependent habitat selection theory to wildlife surveys.
Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel
2015-01-01
Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists in randomly placing blocks over the survey area and dividing those blocks in two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large geographic extents.
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-09-01
Vibration analysis for diagnosis of faults in rolling element bearings is complicated when the rotor speed is variable or slow. In the former case, the time intervals between the fault-induced impact responses in the vibration signal are non-uniform and the signal strength is variable. In the latter case, the fault-induced impact response strength is weak and generally gets buried in the noise, i.e. noise dominates the signal. This article proposes a diagnosis scheme based on a combination of a few signal processing techniques. The proposed scheme initially represents the vibration signal in terms of uniformly resampled angular position of the rotor shaft by using the interpolated instantaneous angular position measurements. Thereafter, intrinsic mode functions (IMFs) are generated through empirical mode decomposition (EMD) of the resampled vibration signal, which is followed by thresholding of IMFs and signal reconstruction to de-noise the signal, and envelope order tracking to diagnose the faults. Data for validating the proposed diagnosis scheme are initially generated from a multi-body simulation model of a rolling element bearing which is developed using the bond graph approach. This bond graph model includes the ball and cage dynamics, localized fault geometry, contact mechanics, rotor unbalance, and friction and slip effects. The diagnosis scheme is finally validated with experiments performed with the help of a machine fault simulator (MFS) system. Some fault scenarios which could not be experimentally recreated are then generated through simulations and analyzed through the developed diagnosis scheme.
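The initial resampling step, representing the vibration signal at uniform shaft-angle increments from interpolated instantaneous angular position measurements, can be sketched as follows (linear interpolation and the variable names are assumptions of this example):

```python
import numpy as np

def resample_to_uniform_angle(time, vibration, key_times, key_angles, samples_per_rev=256):
    """Convert a time-sampled vibration signal into one sampled at uniform
    shaft-angle increments (computed order tracking).

    time, vibration       : vibration signal and its time stamps
    key_times, key_angles : instantaneous angular position measurements (e.g. encoder pulses)
    """
    angle = np.interp(time, key_times, key_angles)          # shaft angle at every vibration sample
    d_theta = 2.0 * np.pi / samples_per_rev
    uniform_angle = np.arange(angle[0], angle[-1], d_theta)
    uniform_time = np.interp(uniform_angle, angle, time)    # assumes monotonically increasing angle
    return uniform_angle, np.interp(uniform_time, time, vibration)

# usage sketch: shaft at ~20 Hz, vibration dominated by the 7th shaft order
t = np.arange(0.0, 1.0, 1e-4)
vib = np.sin(2 * np.pi * 20 * 7 * t)
enc_t = np.arange(0.0, 1.0, 1e-3)
theta, vib_theta = resample_to_uniform_angle(t, vib, enc_t, 2 * np.pi * 20 * enc_t)
```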
A novel fruit shape classification method based on multi-scale analysis
NASA Astrophysics Data System (ADS)
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is one of the major concerns in automated inspection and sorting of fruits and remains a difficult problem. In this research, we propose the multi-scale energy distribution (MSED) for object shape description, and explore the relationship between an object's shape and its boundary energy distribution across scales for shape extraction. MSED offers not only the main energy, which represents primary shape information at the lower scales, but also subordinate energy, which represents local shape information at higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification, namely: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, 3) energy decomposition by wavelet and classification by a BP neural network. Here, shape resampling consists of resampling 256 boundary pixels from a curve that approximates the original boundary using a cubic spline, in order to obtain uniform raw data. A probability function is defined and an effective method to select a start point is given through maximal expectation, which overcomes the inconvenience of traditional methods and yields rotation invariance. The experiments on relatively normal citrus and seriously abnormal fruit give a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
Stereo reconstruction from multiperspective panoramas.
Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard
2004-01-01
A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, problems as in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to the first order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparable high quality depth maps which can be used for applications such as view interpolation.
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr
2015-01-01
ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1 due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
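A hedged sketch of using a measured residual error rate together with the Poisson distribution to set a cutoff for calling minority variants; the error rate and coverage in the usage line are illustrative values, not the study's data:

```python
from scipy.stats import poisson

def minority_variant_cutoff(error_rate, coverage, alpha=0.001):
    """Smallest count k such that observing >= k mutant reads at one position
    by residual sequencing error alone has probability below alpha."""
    expected_errors = error_rate * coverage                # Poisson mean for errors at a site
    k = 1
    while poisson.sf(k - 1, expected_errors) >= alpha:     # sf(k-1) = P(X >= k)
        k += 1
    return k

# usage: residual error ~1 in 10,000 nt and 5,000 template consensus sequences (illustrative)
print(minority_variant_cutoff(1e-4, 5000))
```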
Forensic identification of resampling operators: A semi non-intrusive approach.
Cao, Gang; Zhao, Yao; Ni, Rongrong
2012-03-10
Recently, several new resampling operators have been proposed and have successfully invalidated the existing resampling detectors. However, the reliability of such anti-forensic techniques is unknown and needs to be investigated. In this paper, we focus on the forensic identification of digital image resampling operators, including the traditional type and the anti-forensic type which hides the trace of traditional resampling. Various resampling algorithms, involving geometric distortion (GD)-based, dual-path-based and postprocessing-based ones, are investigated. The identification is achieved in a semi non-intrusive manner, supposing that the resampling software can be accessed. Given an input pattern of a monotone signal, the polarity aberration of the GD-based resampled signal's first derivative is analyzed theoretically and measured by an effective feature metric. Dual-path-based and postprocessing-based resampling can also be identified by feeding proper test patterns. Experimental results on various parameter settings demonstrate the effectiveness of the proposed approach. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) in radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
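A minimal sketch of leave-one-out evaluation of an SVM predictor scored by Spearman rank correlation, assuming a small feature matrix (e.g. GTV volume and V75) and a binary outcome; it is not the authors' exact modeling pipeline:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

def loo_spearman_svm(X, y):
    """Leave-one-out prediction with an RBF-kernel SVM, scored by the Spearman
    rank correlation between the predicted scores and the observed outcomes."""
    preds = np.empty(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.decision_function(X[test_idx])
    rs, p_value = spearmanr(preds, y)
    return rs, p_value

# usage with synthetic features standing in for dosimetric variables
rng = np.random.default_rng(0)
X = rng.normal(size=(56, 2))                               # e.g. columns: GTV volume, V75
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=56) > 0).astype(int)
print(loo_spearman_svm(X, y))
```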
Assessing uncertainties in surface water security: An empirical multimodel approach
NASA Astrophysics Data System (ADS)
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo M.; Oliveira, Paulo Tarso S.
2015-11-01
Various uncertainties are involved in the representation of processes that characterize interactions among societal needs, ecosystem functioning, and hydrological conditions. Here we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multimodel and resampling framework. We consider several uncertainty sources including those related to (i) observed streamflow data; (ii) hydrological model structure; (iii) residual analysis; (iv) the method for defining Environmental Flow Requirement; (v) the definition of critical conditions for water provision; and (vi) the critical demand imposed by human activities. We estimate the overall hydrological model uncertainty by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multimodel framework and the uncertainty estimates provided by each model uncertainty estimation approach. The range of values obtained for the water security indicators suggests that the models/methods are robust and perform well in a range of plausible situations. The method is general and can be easily extended, thereby forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
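A minimal sketch of the residual block-bootstrap idea: contiguous blocks of hydrological-model residuals are resampled and added back to the simulated series to obtain pointwise 95% intervals. The block length, replicate count, and variable names are illustrative assumptions:

```python
import numpy as np

def block_bootstrap_intervals(simulated, residuals, block_len=30, n_boot=500, rng=None):
    """Resample model residuals in contiguous blocks, add them to the simulated
    series, and return pointwise 95% confidence bounds."""
    rng = np.random.default_rng(rng)
    simulated = np.asarray(simulated, dtype=float)
    residuals = np.asarray(residuals, dtype=float)
    n = len(residuals)
    replicates = np.empty((n_boot, n))
    for b in range(n_boot):
        pieces = []
        while sum(len(p) for p in pieces) < n:
            start = rng.integers(0, n - block_len + 1)
            pieces.append(residuals[start:start + block_len])   # one contiguous block
        replicates[b] = simulated + np.concatenate(pieces)[:n]
    lower, upper = np.percentile(replicates, [2.5, 97.5], axis=0)
    return lower, upper

# usage with synthetic simulated streamflow and residuals
rng0 = np.random.default_rng(0)
sim = 10 + np.sin(np.linspace(0.0, 20.0, 365))
res = rng0.normal(scale=0.5, size=365)
lo95, hi95 = block_bootstrap_intervals(sim, res, block_len=30, n_boot=200)
```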
Kozak, M; Karaman, M
2001-07-01
Digital beamforming based on oversampled delta-sigma (delta sigma) analog-to-digital (A/D) conversion can reduce the overall cost, size, and power consumption of phased array front-end processing. The signal resampling involved in dynamic delta sigma beamforming, however, disrupts synchronization between the modulators and demodulator, causing significant degradation in the signal-to-noise ratio. As a solution to this, we have explored a new digital beamforming approach based on non-uniform oversampling delta sigma A/D conversion. Using this approach, the echo signals received by the transducer array are sampled at time instants determined by the beamforming timing and then digitized by single-bit delta sigma A/D conversion prior to the coherent beam summation. The timing information involves a non-uniform sampling scheme employing different clocks at each array channel. The delta sigma coded beamsums obtained by adding the delayed 1-bit coded RF echo signals are then processed through a decimation filter to produce final beamforming outputs. The performance and validity of the proposed beamforming approach are assessed by means of emulations using experimental raw RF data.
Evaluation of burst-mode LDA spectra with implications
NASA Astrophysics Data System (ADS)
Velte, Clara; George, William
2009-11-01
Burst-mode LDA spectra, as described in [1], are compared to spectra obtained from corresponding HWA measurements using the FFT in a round jet and cylinder wake experiment. The phrase "burst-mode LDA" refers to an LDA which operates with at most one particle present in the measuring volume at a time. Due to the random sampling and velocity bias of the LDA signal, the Direct Fourier Transform with accompanying weighting by the measured residence times was applied to obtain a correct interpretation of the spectral estimate. Further, the self-noise was removed as described in [2]. In addition, resulting spectra from common interpolation and uniform resampling techniques are compared to the above mentioned estimates. The burst-mode LDA spectra are seen to concur well with the HWA spectra up to the emergence of the noise floor, caused mainly by the intermittency of the LDA signal. The interpolated and resampled counterparts yield unphysical spectra, which are buried in frequency dependent noise and step noise, except at very high LDA data rates where they perform well up to a limited frequency. [1] Buchhave, P. PhD Thesis, SUNY/Buffalo, 1979. [2] Velte, C.M. PhD Thesis, DTU/Copenhagen, 2009.
NASA Astrophysics Data System (ADS)
Angrisano, Antonio; Maratea, Antonio; Gaglione, Salvatore
2018-01-01
In the absence of obstacles, a GPS device is generally able to provide continuous and accurate estimates of position, while in urban scenarios buildings can generate multipath and echo-only phenomena that severely affect the continuity and the accuracy of the provided estimates. Receiver autonomous integrity monitoring (RAIM) techniques are able to reduce the negative consequences of large blunders in urban scenarios, but require both good redundancy and low contamination to be effective. In this paper a resampling strategy based on the bootstrap is proposed as an alternative to RAIM, in order to estimate position accurately in cases of low redundancy and multiple blunders: starting with the pseudorange measurement model, at each epoch the available measurements are bootstrapped, that is, randomly sampled with replacement, and the generated a posteriori empirical distribution is exploited to derive the final position. Compared to the standard bootstrap, in this paper the sampling probabilities are not uniform but vary according to an indicator of the measurement quality. The proposed method has been compared with two different RAIM techniques on a data set collected in critical conditions, resulting in a clear improvement on all considered figures of merit.
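A sketch of the quality-weighted resampling-with-replacement step, assuming per-measurement weights (e.g. derived from carrier-to-noise density or elevation) and a user-supplied position solver; the solver in the usage line is a toy stand-in:

```python
import numpy as np

def weighted_bootstrap_position(measurements, weights, solve_position, n_boot=200, rng=None):
    """Bootstrap a position fix: draw measurements with replacement using
    non-uniform probabilities reflecting measurement quality, solve the
    position for each replicate, and average the empirical distribution."""
    rng = np.random.default_rng(rng)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                                         # sampling probabilities
    n = len(measurements)
    solutions = []
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True, p=p)
        solutions.append(solve_position([measurements[i] for i in idx]))
    return np.mean(np.asarray(solutions), axis=0)

# usage with a toy "solver" that simply averages the drawn values
meas = [20000.1, 20000.3, 19999.8, 20000.0, 20000.6]
quality = [45, 40, 30, 50, 25]                              # hypothetical C/N0-based weights
print(weighted_bootstrap_position(meas, quality, solve_position=np.mean))
```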
0-2 Ma Paleomagnetic Field Behavior from Lava Flow Data Sets
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Constable, C.; Tauxe, L.; Cromwell, G.
2010-12-01
The global time-averaged (TAF) structure of the paleomagnetic field and paleosecular variation (PSV) provide important constraints for numerical geodynamo simulations. Studies of the TAF have sought to characterize the nature of non-geocentric-axial dipole contributions to the field, in particular any such contributions that may be diagnostic of the influence of core-mantle boundary conditions on field generation. Similarly geographical variations in PSV are of interest, in particular the long-standing debate concerning anomalously low VGP (virtual geomagnetic pole) dispersion at Hawaii. Here, we analyze updated global directional data sets from lava flows. We present global models for the time-averaged field for the Brunhes and Matuyama epochs. New TAF models based on lava flow directional data for the Brunhes show longitudinal structure. In particular, high latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. Anomalous TAF structure is also observed in the region around Hawaii. At Hawaii, previous inferences of the anomalous TAF (large inclination anomaly) and PSV (low VGP dispersion) have been argued to be the result of temporal sampling bias toward young flows. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling. Resampling of the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but an estimate for VGP dispersion that is increased relative to that obtained from the unevenly sampled data. Future investigations will incorporate the temporal resampling procedures into TAF modeling efforts, as well as recent progress in modeling the 0-2 Ma paleomagnetic dipole moment.
NASA Technical Reports Server (NTRS)
Benner, R.; Young, W.
1977-01-01
The results of an experimental study conducted to determine the geometric and radiometric effects of double resampling (bi-resampling) performed on image data in the process of performing map projection transformations are reported.
Lawrence, E.O.; Brobeck, W.M.
1959-04-14
An ion source is described for a calutron especially designed to improve the uniformity of charge vapor flow when the vapor encounters the arc. The inventive feature of the source consists of a specific source block construction wherein heater means prevents condensation from taking place within the block, and a separate vapor generator is supported on the wall of the block by a hollow thimble. The thimble communicates with a bore cavity in the block and the vapor flows therethrough into the cavity and uniformly out a slot along the length of the cavity where the arc discharge is located.
Framework for computing the spatial coherence effects of polycapillary x-ray optics
Zysk, Adam M.; Schoonover, Robert W.; Xu, Qiaofeng; Anastasio, Mark A.
2012-01-01
Despite the extensive use of polycapillary x-ray optics for focusing and collimating applications, there remains a significant need for characterization of the coherence properties of the output wavefield. In this work, we present the first quantitative computational method for calculation of the spatial coherence effects of polycapillary x-ray optical devices. This method employs the coherent mode decomposition of an extended x-ray source, geometric optical propagation of individual wavefield modes through a polycapillary device, output wavefield calculation by ray data resampling onto a uniform grid, and the calculation of spatial coherence properties by way of the spectral degree of coherence. PMID:22418154
Watermarking on 3D mesh based on spherical wavelet transform.
Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng
2004-03-01
In this paper we propose a robust watermarking algorithm for 3D meshes. The algorithm is based on the spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using the spherical wavelet transform; the watermark is then embedded into the different levels of detail. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, watermark embedding, spherical wavelet inverse transform, and finally resampling of the watermarked mesh to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.
Preprocessing the Nintendo Wii Board Signal to Derive More Accurate Descriptors of Statokinesigrams.
Audiffren, Julien; Contal, Emile
2016-08-01
During the past few years, the Nintendo Wii Balance Board (WBB) has been used in postural control research as an affordable but less reliable replacement for laboratory-grade force platforms. However, the WBB suffers from some limitations, such as a lower accuracy and an inconsistent sampling rate. In this study, we focus on the latter, namely the non-uniform acquisition frequency. We show that this problem, combined with the poor signal-to-noise ratio of the WBB, can drastically decrease the quality of the obtained information if not handled properly. We propose a new resampling method, Sliding Window Average with Relevance Interval Interpolation (SWARII), specifically designed with the WBB in mind, for which we provide an open source implementation. We compare it with several existing methods commonly used in postural control, both on synthetic and experimental data. The results show that some methods, such as linear and piecewise constant interpolations, should definitely be avoided, particularly when the resulting signal is differentiated, which is necessary to estimate speed, an important feature in postural control. Other methods, such as averaging on sliding windows or SWARII, perform significantly better on the synthetic dataset and produce results more similar to the laboratory-grade AMTI force plate (AFP) during experiments. Those methods should be preferred when resampling data collected from a WBB.
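As a simplified illustration of resampling an irregularly sampled board signal onto a uniform grid by averaging over sliding windows (not the full SWARII algorithm, which additionally applies relevance-interval interpolation):

```python
import numpy as np

def sliding_window_average_resample(t, x, fs=25.0, window=0.08):
    """Resample an irregularly sampled signal onto a uniform time grid by
    averaging all raw samples inside a window centred on each output instant;
    empty windows fall back to linear interpolation."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    out = np.empty_like(t_uniform)
    for i, tc in enumerate(t_uniform):
        mask = np.abs(t - tc) <= window / 2.0
        out[i] = x[mask].mean() if mask.any() else np.interp(tc, t, x)
    return t_uniform, out

# usage: jittery ~30 Hz acquisition resampled to a uniform 25 Hz grid
raw_t = np.cumsum(np.random.uniform(0.025, 0.035, size=300))
raw_x = np.sin(raw_t)
tu, xu = sliding_window_average_resample(raw_t, raw_x)
```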
Two-dimensional segmentation for analyzing Hi-C data
Lévy-Leduc, Celine; Delattre, M.; Mary-Huard, T.; Robin, S.
2014-01-01
Motivation: The spatial conformation of the chromosome has a deep influence on gene regulation and expression. Hi-C technology allows the evaluation of the spatial proximity between any pair of loci along the genome. It results in a data matrix where blocks corresponding to (self-)interacting regions appear. The delimitation of such blocks is critical to better understand the spatial organization of the chromatin. From a computational point of view, it results in a 2D segmentation problem. Results: We focus on the detection of cis-interacting regions, which appear to be prominent in observed data. We define a block-wise segmentation model for the detection of such regions. We prove that the maximization of the likelihood with respect to the block boundaries can be rephrased in terms of a 1D segmentation problem, for which the standard dynamic programming applies. The performance of the proposed methods is assessed by a simulation study on both synthetic and resampled data. A comparative study on public data shows good concordance with biologically confirmed regions. Availability and implementation: The HiCseg R package is available from the Comprehensive R Archive Network and from the Web page of the corresponding author. Contact: celine.levy-leduc@agroparistech.fr PMID:25161224
System for monitoring non-coincident, nonstationary process signals
Gross, Kenneth C.; Wegerich, Stephan W.
2005-01-04
An improved system for monitoring non-coincident, non-stationary process signals. The mean, variance, and length of a reference signal are defined by an automated system, followed by the identification of the leading and falling edges of a monitored signal and the length of the monitored signal. The monitored signal is compared to the reference signal, and the monitored signal is resampled in accordance with the reference signal. The reference signal is then correlated with the resampled monitored signal such that the reference signal and the resampled monitored signal are coincident in time with each other. The resampled monitored signal is then compared to the reference signal to determine whether the resampled monitored signal is within a set of predesignated operating conditions.
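A minimal sketch of the resample-then-correlate idea described in this record, assuming both signals are 1-D arrays whose relevant segments have already been extracted; np.interp and np.correlate stand in for the system's internals:

```python
import numpy as np

def align_to_reference(reference, monitored):
    """Resample a monitored signal to the length of a reference signal and
    shift it so that the two are coincident in time (maximum correlation)."""
    src = np.linspace(0.0, 1.0, len(monitored))
    dst = np.linspace(0.0, 1.0, len(reference))
    resampled = np.interp(dst, src, monitored)              # resample onto the reference grid
    ref0 = reference - np.mean(reference)
    mon0 = resampled - np.mean(resampled)
    lag = np.argmax(np.correlate(mon0, ref0, mode="full")) - (len(ref0) - 1)
    # circular shift used for simplicity; a real monitor would pad or truncate instead
    return np.roll(resampled, -lag), lag
```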
Communication Optimizations for a Wireless Distributed Prognostic Framework
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2009-01-01
Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes are also presented. A battery health management system is used as a target application. A new resampling scheme for the distributed implementation of particle filters has thus been presented in this paper, along with an analysis and comparison of this new scheme with existing resampling schemes in the context of minimizing communication overhead. Our proposed resampling scheme performs significantly better than other schemes, reducing both the communication message length and the total number of communication messages exchanged while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as a full implementation of the new scheme on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
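The paper's parameterized resampling scheme is not reproduced here; for context, the sketch below shows standard systematic resampling, the particle-filter step whose communication cost the distributed setting must address:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Return particle indices drawn by systematic resampling from normalized
    importance weights (the resampling step of a particle filter)."""
    rng = np.random.default_rng(rng)
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n           # one stratified draw per particle
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                    # guard against round-off
    return np.searchsorted(cumulative, positions)

# usage: weights collapsed onto two particles
print(systematic_resample(np.array([0.05, 0.05, 0.60, 0.25, 0.05])))
```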
NASA Astrophysics Data System (ADS)
Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating
2018-06-01
The non-uniform interval resampling method has been widely used in frequency modulated continuous wave (FMCW) laser ranging. In large-bandwidth and long-distance measurements, the range peak is deteriorated due to fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using an adjacent-point replacement and spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute distance precision better than 45 μm within 8 m.
Evaluating video digitizer errors
NASA Astrophysics Data System (ADS)
Peterson, C.
2016-01-01
Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.
Resampling: A Marriage of Computers and Statistics. ERIC/TM Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Shafer, Mary Morello
Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…
Exchangeability, extreme returns and Value-at-Risk forecasts
NASA Astrophysics Data System (ADS)
Huang, Chun-Kai; North, Delia; Zewotir, Temesgen
2017-07-01
In this paper, we propose a new approach to extreme value modelling for the forecasting of Value-at-Risk (VaR). In particular, the block maxima and the peaks-over-threshold methods are generalised to exchangeable random sequences. This caters for the dependencies, such as serial autocorrelation, of financial returns observed empirically. In addition, this approach allows for parameter variations within each VaR estimation window. Empirical prior distributions of the extreme value parameters are attained by using resampling procedures. We compare the results of our VaR forecasts to that of the unconditional extreme value theory (EVT) approach and the conditional GARCH-EVT model for robust conclusions.
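For context, a sketch of the classical (non-exchangeable) block-maxima step with a GEV fit, returning the quantile of the block-maximum distribution as a VaR-style figure; the resampling-based empirical priors and the exchangeable generalisation of the paper are not shown:

```python
import numpy as np
from scipy.stats import genextreme

def block_maxima_var(losses, block_size=20, quantile=0.99):
    """Fit a GEV distribution to block maxima of a loss series and return the
    level exceeded by a block maximum with probability 1 - quantile."""
    losses = np.asarray(losses, dtype=float)
    n_blocks = len(losses) // block_size
    maxima = losses[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
    shape, loc, scale = genextreme.fit(maxima)              # GEV parameters (scipy sign convention)
    return genextreme.ppf(quantile, shape, loc=loc, scale=scale)

# usage with simulated heavy-tailed daily losses
rng = np.random.default_rng(1)
print(block_maxima_var(rng.standard_t(df=4, size=2000), block_size=20, quantile=0.99))
```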
An add-in implementation of the RESAMPLING syntax under Microsoft EXCEL.
Meineke, I
2000-10-01
The RESAMPLING syntax defines a set of powerful commands, which allow the programming of probabilistic statistical models with few, easily memorized statements. This paper presents an implementation of the RESAMPLING syntax using Microsoft EXCEL with Microsoft WINDOWS(R) as a platform. Two examples are given to demonstrate typical applications of RESAMPLING in biomedicine. Details of the implementation with special emphasis on the programming environment are discussed at length. The add-in is available electronically to interested readers upon request. The use of the add-in facilitates numerical statistical analyses of data from within EXCEL in a comfortable way.
ERIC Educational Resources Information Center
Hand, Michael L.
1990-01-01
Use of the bootstrap resampling technique (BRT) is assessed in its application to resampling analysis associated with measurement of payment allocation errors by federally funded Family Assistance Programs. The BRT is applied to a food stamp quality control database in Oregon. This analysis highlights the outlier-sensitivity of the…
Application of a New Resampling Method to SEM: A Comparison of S-SMART with the Bootstrap
ERIC Educational Resources Information Center
Bai, Haiyan; Sivo, Stephen A.; Pan, Wei; Fan, Xitao
2016-01-01
Among the commonly used resampling methods of dealing with small-sample problems, the bootstrap enjoys the widest applications because it often outperforms its counterparts. However, the bootstrap still has limitations when its operations are contemplated. Therefore, the purpose of this study is to examine an alternative, new resampling method…
Assessment of Person Fit Using Resampling-Based Approaches
ERIC Educational Resources Information Center
Sinharay, Sandip
2016-01-01
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Assessment of resampling methods for causality testing: A note on the US inflation behavior.
Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees
2017-01-01
Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms.
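The two resampling techniques named in the abstract can be pictured as follows; this is a generic illustration of time-shifted surrogates and the stationary bootstrap applied to a driving series, not the authors' partial transfer entropy test itself. The mean block length and function names are assumptions.

```python
import numpy as np

def time_shifted_surrogate(x, rng):
    """Circularly shift the driving series by a random lag, destroying any
    lagged coupling to the response while preserving its own dynamics."""
    lag = rng.integers(1, len(x))
    return np.roll(x, lag)

def stationary_bootstrap(x, mean_block=20, rng=None):
    """Politis-Romano stationary bootstrap: concatenate blocks whose lengths
    are geometrically distributed with mean `mean_block`."""
    rng = rng or np.random.default_rng()
    n, p = len(x), 1.0 / mean_block
    out, i = np.empty(n), rng.integers(n)
    for t in range(n):
        out[t] = x[i]
        # start a new block with probability p, otherwise continue (wrapping)
        i = rng.integers(n) if rng.random() < p else (i + 1) % n
    return out
```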
Ghoshal, Tandra; Holmes, Justin D; Morris, Michael A
2018-05-08
In an effort to develop block copolymer lithography for creating high-aspect-ratio vertical pore arrangements in a substrate surface, we have used a microphase-separated poly(ethylene oxide)-b-polystyrene (PEO-b-PS) block copolymer (BCP) thin film in which (most unusually) PS, not PEO, is the cylinder-forming phase and PEO is the majority block. Compared to previous work, we can amplify etch contrast by including hard-mask material in the matrix block, allowing the cylinder-forming polymer to be removed and the exposed substrate to be deep etched, thereby generating uniform, well-arranged, sub-25 nm cylindrical nanopore arrays. Briefly, selective metal-ion inclusion into the PEO matrix and subsequent processing (etching/modification) was applied to create iron oxide nanohole arrays. The oxide nanoholes (22 nm diameter) were cylindrical, of uniform diameter, and mimicked the original BCP nanopatterns. The oxide nanohole network is demonstrated as a resistant mask for fabricating ultra-dense, well-ordered silicon nanopore arrays with good sidewall profiles on the substrate surface through a pattern transfer approach. The Si nanopores have uniform diameter and smooth sidewalls throughout their depth. The depth of the porous structure can be controlled via the etch process.
Katayama, R; Sakai, S; Sakaguchi, T; Maeda, T; Takada, K; Hayabuchi, N; Morishita, J
2008-07-20
PURPOSE/AIM OF THE EXHIBIT: The purpose of this exhibit is: 1. To explain "resampling", an image data processing step performed by digital radiographic systems based on flat panel detectors (FPD). 2. To show the influence of "resampling" on the basic imaging properties. 3. To present accurate measurement methods for the basic imaging properties of the FPD system. CONTENT: 1. The relationship between the matrix sizes of the output image and the image data acquired on the FPD, which changes automatically depending on the selected image size (FOV). 2. An explanation of the "resampling" image data processing. 3. Evaluation results for the basic imaging properties of the FPD system using two types of DICOM images to which "resampling" was applied: characteristic curves, presampled MTFs, noise power spectra, and detective quantum efficiencies. CONCLUSION/SUMMARY: The major points of the exhibit are as follows: 1. The influence of "resampling" should not be disregarded in the evaluation of the basic imaging properties of the flat panel detector system. 2. The basic imaging properties should be measured using DICOM images to which no "resampling" has been applied.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment, their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and demonstrated the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, even when the images are rounded off.
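A compact numerical illustration of the three schemes compared in the comment: binomial thinning of the original counts (the "Poisson resampling" idea) versus direct Poisson and Gaussian redrawing at half the counts. This is an assumed, self-contained sketch on a synthetic image, not the author's Matlab code.

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.poisson(lam=50.0, size=(256, 256))        # synthetic full-count image

# Poisson resampling: thin each pixel binomially, keeping each count with p=0.5.
half_resampled = rng.binomial(full, 0.5)

# Direct redrawing from Poisson / Gaussian distributions at half the counts.
half_poisson = rng.poisson(full * 0.5)
half_gauss = np.clip(np.rint(rng.normal(full * 0.5, np.sqrt(full * 0.5))), 0, None)

for name, img in [("resampled", half_resampled),
                  ("poisson", half_poisson), ("gauss", half_gauss)]:
    print(f"{name}: mean half/full ratio = {img.mean() / full.mean():.3f}")
```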
Off-axis full-field swept-source optical coherence tomography using holographic refocusing
NASA Astrophysics Data System (ADS)
Hillmann, Dierck; Franke, Gesa; Hinkel, Laura; Bonin, Tim; Koch, Peter; Hüttmann, Gereon
2013-03-01
We demonstrate full-field swept-source OCT using an off-axis geometry of the reference illumination. By using holographic refocusing techniques, a uniform lateral resolution is achieved over a measurement depth of approximately 80 Rayleigh lengths. Compared to a standard on-axis setup, artifacts and autocorrelation signals are suppressed, and the measurement depth is doubled by resolving the complex conjugate ambiguity. Holographic refocusing was performed efficiently by Fourier-domain resampling, as demonstrated before in inverse scattering and holoscopy. It allowed a complete volume to be reconstructed with about 10 μm resolution over the complete measurement depth of more than 10 mm. Off-axis full-field swept-source OCT enables high measurement depths, spanning many Rayleigh lengths, with reduced artifacts.
Improving the quality of extracting dynamics from interspike intervals via a resampling approach
NASA Astrophysics Data System (ADS)
Pavlova, O. N.; Pavlov, A. N.
2018-04-01
We address the problem of improving the quality of characterizing chaotic dynamics based on point processes produced by different types of neuron models. Despite the presence of embedding theorems for non-uniformly sampled dynamical systems, the case of short data analysis requires additional attention because the selection of algorithmic parameters may have an essential influence on estimated measures. We consider how the preliminary processing of interspike intervals (ISIs) can increase the precision of computing the largest Lyapunov exponent (LE). We report general features of characterizing chaotic dynamics from point processes and show that independently of the selected mechanism for spike generation, the performed preprocessing reduces computation errors when dealing with a limited amount of data.
Wavelet analysis in ecology and epidemiology: impact of statistical tests.
Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario
2014-02-06
Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.
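For context, one of the resampling null models discussed here (red noise) can be generated as sketched below; the beta-surrogate and hidden-Markov approaches favoured by the authors are more involved and are not reproduced. The lag-1 estimator and names are assumptions.

```python
import numpy as np

def ar1_surrogate(x, rng):
    """Generate a red-noise (AR(1)) surrogate matching the lag-1
    autocorrelation and variance of the input series."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    rho = np.corrcoef(xc[:-1], xc[1:])[0, 1]          # lag-1 autocorrelation
    sigma_eps = np.sqrt(np.var(xc) * (1.0 - rho**2))  # innovation std
    s = np.empty_like(xc)
    s[0] = rng.normal(0.0, np.std(xc))
    for t in range(1, len(xc)):
        s[t] = rho * s[t - 1] + rng.normal(0.0, sigma_eps)
    return s + x.mean()

# An ensemble of surrogates forms the null distribution for a wavelet statistic.
series = np.sin(np.linspace(0, 20, 500)) + np.random.default_rng(0).normal(size=500)
surrogates = [ar1_surrogate(series, np.random.default_rng(i)) for i in range(100)]
```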
Lu, Jennifer Q; Yi, Sung Soo
2006-04-25
A monolayer of gold-containing surface micelles has been produced by spin-coating solution micelles formed by the self-assembly of a gold-modified polystyrene-b-poly(2-vinylpyridine) block copolymer in toluene. After the block copolymer template was removed by oxygen plasma, highly ordered and uniformly sized nanoparticles were generated. Unlike other published methods that require reduction treatments to form gold nanoparticles in the zero-valent state, these as-synthesized nanoparticles are in the form of metallic gold. These gold nanoparticles have been demonstrated to be an excellent catalyst system for growing small-diameter silicon nanowires. The uniformly sized gold nanoparticles have enabled the controllable synthesis of silicon nanowires with a narrow diameter distribution. Because of the ability to form a monolayer of surface micelles with a high degree of order, evenly distributed gold nanoparticles have been produced on the surface. As a result, uniformly distributed, high-density silicon nanowires have been generated. The process described herein is fully compatible with existing semiconductor processing techniques and can be readily integrated into device fabrication.
Improving MRI surface coil decoupling to reduce B1 distortion
NASA Astrophysics Data System (ADS)
Larson, Christian
As clinical MRI systems continue to advance, greater focus is being given to image uniformity. Good image uniformity begins with generating uniform magnetic fields, which are easily distorted by currents induced on receive-only surface coils. It has become an industry standard to combat these induced currents by placing RF blocking networks on surface coils. This paper explores the effect of the blocking-network impedance of phased-array surface coils on B1 distortion. It has been found, and verified, that traditional approaches to blocking-network design in complex phased arrays can leave undesirable B1 distortions at 3 Tesla. The traditional LC tank blocking approach is explored, but the work moves away from the idea that higher impedance always means less B1 distortion at 3 T. The result is a new design principle for a tank with a finite inductive reactance at the Larmor frequency. The solution is demonstrated via simulation using a simple, single, large tuning loop. The same loop, along with a smaller loop, is used to derive the new design principle, which is then applied to a complex phased-array structure.
Experimental and numerical modeling research of rubber material during microwave heating process
NASA Astrophysics Data System (ADS)
Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling
2018-05-01
This paper investigates the heating behavior of block rubber by experimental and numerical methods. The COMSOL Multiphysics 5.0 software was used for the numerical simulation work. The effects of microwave frequency, power and sample size on the temperature distribution are examined. The effect of frequency on the temperature distribution is pronounced: the maximum and minimum temperatures of the block rubber first increase and then decrease with increasing frequency. The microwave heating efficiency is highest at a microwave frequency of 2450 MHz, although a more uniform temperature distribution is obtained at the other frequencies. The influence of microwave power on the temperature distribution is also notable: the smaller the power, the more uniform the temperature distribution in the block rubber, while the effect of power on heating efficiency is small. The effect of sample size on the temperature distribution is also evident: the smaller the sample, the more uniform the temperature distribution, but the lower the microwave heating efficiency. The results can serve as a reference for research on heating rubber material with microwave technology.
Resampling methods in Microsoft Excel® for estimating reference intervals.
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
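A minimal sketch of a percentile-bootstrap reference interval of the kind the paper implements in Excel, written here in Python for illustration; the sample size, number of resamples, and percentile interpolation are assumptions and do not reproduce the specific Excel functions recommended in the paper.

```python
import numpy as np

def bootstrap_reference_interval(values, n_boot=1000, seed=0):
    """Estimate the 2.5th and 97.5th percentiles of a reference sample by
    resampling with replacement (simplified percentile bootstrap)."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    lows, highs = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(values, size=len(values), replace=True)
        lows[b], highs[b] = np.percentile(sample, [2.5, 97.5])
    return lows.mean(), highs.mean()

# e.g. ~40 reference individuals, with at least 500-1000 resamples as recommended.
ref = np.random.default_rng(1).lognormal(mean=1.0, sigma=0.3, size=40)
lower, upper = bootstrap_reference_interval(ref)
```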
Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.
Liu, Siwei; Molenaar, Peter
2016-01-01
This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
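Phase resampling in this sense builds surrogate series by randomizing Fourier phases while keeping amplitudes, so the spectrum (and hence the linear auto-structure) is preserved while phase relations are destroyed. A generic single-series sketch, with all names assumed and without the multivariate bookkeeping the frequency-domain Granger tests require:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Return a surrogate with the same amplitude spectrum as x but with
    Fourier phases drawn uniformly at random (single-series illustration)."""
    x = np.asarray(x, dtype=float)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.shape)
    surrogate_spec = np.abs(spec) * np.exp(1j * phases)
    surrogate_spec[0] = spec[0]                 # keep the mean unchanged
    if len(x) % 2 == 0:
        surrogate_spec[-1] = spec[-1]           # keep the Nyquist bin real
    return np.fft.irfft(surrogate_spec, n=len(x))

# Null distribution of some statistic from an ensemble of phase-resampled series.
x = np.random.default_rng(0).normal(size=512)
null_stats = [phase_randomized_surrogate(x, np.random.default_rng(k)).std()
              for k in range(200)]
```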
Ordered porous mesostructured materials from nanoparticle-block copolymer self-assembly
Warren, Scott; Wiesner, Ulrich; DiSalvo, Jr., Francis J
2013-10-29
The invention provides mesostructured materials and methods of preparing mesostructured materials including metal-rich mesostructured nanoparticle-block copolymer hybrids, porous metal-nonmetal nanocomposite mesostructures, and ordered metal mesostructures with uniform pores. The nanoparticles can be metal, metal alloy, metal mixture, intermetallic, metal-carbon, metal-ceramic, semiconductor-carbon, semiconductor-ceramic, insulator-carbon or insulator-ceramic nanoparticles, or combinations thereof. A block copolymer/ligand-stabilized nanoparticle solution is cast, resulting in the formation of a metal-rich (or semiconductor-rich or insulator-rich) mesostructured nanoparticle-block copolymer hybrid. The hybrid is heated to an elevated temperature, resulting in the formation of an ordered porous nanocomposite mesostructure. A nonmetal component (e.g., carbon or ceramic) is then removed to produce an ordered mesostructure with ordered and large uniform pores.
Tensile Film Clamps And Mounting Block For Viscoelastometers
NASA Technical Reports Server (NTRS)
Stoakley, Diane M.; St. Clair, Anne K.; Little, Bruce D.
1989-01-01
Set of clamps and mounting block developed for use in determining tensile moduli and damping properties of films in manually operated or automated commercial viscoelastometer. These clamps and block provide uniformity of sample gripping and alignment in instrument. Dependence on operator and variability of data greatly reduced.
Effect of aerated concrete blockwork joints on the heat transfer performance uniformity
NASA Astrophysics Data System (ADS)
Pukhkal, Viktor; Murgul, Vera
2018-03-01
An analysis of data on the effect of joints between aerated concrete blocks on the heat transfer uniformity of exterior walls was carried out. It was concluded that the values of the heat transfer performance uniformity factor in the literature were obtained for a regular fragment of the wall construction by approximate addition of thermal conductivities. Heat flow patterns for aerated concrete exterior walls, for different values of the thermal conductivity factor and a design ambient air temperature of -26 °C, were calculated using the "ELCUT" software, which models thermal patterns by the finite element method. Values were determined for the heat transfer performance uniformity factor, the reduced total thermal resistance and the heat-flux density of the exterior walls. The calculated values of the heat transfer performance uniformity factor, as a function of the thermal conductivity of the aerated concrete blocks, differ from the published data and rest on a more rigorous thermal and physical substantiation.
NASA Astrophysics Data System (ADS)
Adjorlolo, Clement; Mutanga, Onisimo; Cho, Moses A.; Ismail, Riyad
2013-04-01
In this paper, a user-defined inter-band correlation filter function was used to resample hyperspectral data and thereby mitigate the problem of multicollinearity in classification analysis. The proposed resampling technique convolves the spectral dependence information between a chosen band-centre and its shorter- and longer-wavelength neighbours. A weighting threshold of inter-band correlation (WTC, Pearson's r) was calculated, with r = 1 at the band-centre. Various WTC values (r = 0.99, r = 0.95 and r = 0.90) were assessed, and bands with coefficients beyond a chosen threshold were assigned r = 0. The resulting data were used in a random forest analysis to classify in situ C3 and C4 grass canopy reflectance. The respective WTC datasets yielded improved classification accuracies (kappa = 0.82, 0.79 and 0.76) with less correlated wavebands when compared to resampled Hyperion bands (kappa = 0.76). Overall, the results suggest that resampling of hyperspectral data should account for spectral dependence information to improve overall classification accuracy and reduce the problem of multicollinearity.
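One possible reading of the filter described above, sketched under assumptions (the band-centre list, neighbourhood window, threshold handling, and weighting scheme are illustrative, not the authors' exact formulation): correlations between a chosen band-centre and its neighbours are computed across samples, weights below the threshold are zeroed, and the resampled band is the correlation-weighted average of its neighbourhood.

```python
import numpy as np

def wtc_resample(spectra, band_centres, window=10, r_threshold=0.95):
    """Resample hyperspectral data (samples x bands) at the given band-centres,
    weighting neighbouring bands by their Pearson r with the centre band and
    dropping bands whose r falls below the threshold (illustrative)."""
    n_samples, n_bands = spectra.shape
    out = np.empty((n_samples, len(band_centres)))
    for j, c in enumerate(band_centres):
        lo, hi = max(0, c - window), min(n_bands, c + window + 1)
        neigh = spectra[:, lo:hi]
        r = np.array([np.corrcoef(spectra[:, c], neigh[:, k])[0, 1]
                      for k in range(neigh.shape[1])])
        w = np.where(r >= r_threshold, r, 0.0)   # centre band always has r = 1
        out[:, j] = neigh @ w / w.sum()
    return out

cube = np.random.default_rng(0).random((60, 220))      # 60 samples, 220 bands
resampled = wtc_resample(cube, band_centres=[30, 90, 150])
```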
Modeling Equity for Alternative Water Rate Structures
NASA Astrophysics Data System (ADS)
Griffin, R.; Mjelde, J.
2011-12-01
The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have only weakly penetrated policy. In this presentation, recent evidence is reviewed regarding long-term trends in urban rate structures, observing the economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted with uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experience for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months. This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility, we compute the effect of switching rate structures on the welfare of households of differing types. Some of the empirical findings are as follows. In the absence of water scarcity, households of opposing characters such as low versus high income do not have strong preferences regarding rate structure selection. This changes as water scarcity rises and as water's opportunity costs are allowed to influence uniform rates. The welfare results of these exercises indicate that popular conceptions about increasing block rates may be incorrect insofar as the scarcity-endogenous uniform rate favors low-income households. That is, under scarcity conditions a switch from increasing block rates to full-price uniform rates redistributes welfare so as to place more of the welfare burden of conservation on high-income households. Similarly, any household characteristic that tends to accompany low water use (e.g. low property value) generates the same rate structure preference. These results are an intriguing addition to existing knowledge on the properties of increasing block rates and uniform rates with respect to criteria such as efficiency, simplicity, effectiveness, and (now) equity.
Comparison of parametric and bootstrap method in bioequivalence test.
Ahn, Byung-Jin; Yim, Dong-Seok
2009-10-01
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver. 3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on the resampled datasets. Histograms and density curves of formulation effects obtained from the resampled datasets were similar to those of a normal distribution. However, in 2 of the 3 resampled log(AUC) datasets, the estimates of formulation effects did not follow a Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one type of nonparametric CI of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
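A simplified sketch of the bootstrap side of such a comparison, assuming paired test/reference log(AUC) values per subject; the data layout, names, and use of a plain percentile CI are illustrative (the original work used SAS and also examined BCa intervals).

```python
import numpy as np

def bootstrap_be_ci(log_auc_test, log_auc_ref, n_boot=1000, seed=0):
    """Percentile-bootstrap 90% CI of the geometric mean test/reference AUC
    ratio from paired log(AUC) data (simplified illustration)."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(log_auc_test) - np.asarray(log_auc_ref)   # per subject
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(diffs, size=len(diffs), replace=True)
        ratios[b] = np.exp(resample.mean())
    lo, hi = np.percentile(ratios, [5.0, 95.0])
    return lo, hi, (0.80 <= lo) and (hi <= 1.25)                 # 80-125% rule

rng = np.random.default_rng(3)
ref = rng.normal(4.0, 0.3, size=24)
test = ref + rng.normal(0.02, 0.1, size=24)
print(bootstrap_be_ci(test, ref))
```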
Method and apparatus for determining two-phase flow in rock fracture
Persoff, Peter; Pruess, Karsten; Myer, Larry
1994-01-01
An improved method and apparatus are disclosed for measuring the permeability of multiple phases through a rock fracture. The improvement in the method comprises delivering the respective phases through manifolds so as to uniformly deliver and collect them to and from opposite edges of the rock fracture in a distributed manner across the edge of the fracture. The improved apparatus comprises first and second manifolds comprising bores extending within porous blocks parallel to the rock fracture for distributing and collecting the wetting phase to and from surfaces of the porous blocks, which respectively face the opposite edges of the rock fracture. The improved apparatus further comprises other manifolds in the form of plenums located adjacent to the respective porous blocks for uniform delivery of the non-wetting phase to parallel grooves disposed on the respective surfaces of the porous blocks facing the opposite edges of the rock fracture and generally perpendicular to the rock fracture.
Image restoration techniques as applied to Landsat MSS and TM data
Meyer, David
1987-01-01
Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data, because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.
Pabon, Peter; Ternström, Sten; Lamarche, Anick
2011-06-01
To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the contour, is assessed and also is compared to density-based VRP averaging methods that use the overlap count. VRP contours can be usefully described and compared using FDs. The method also permits the visualization of the local covariation along the contour average. For example, the FD-based analysis shows that the population variance for ensembles of VRP contours is usually smallest at the upper left part of the VRP. To illustrate the method's advantages and possible further application, graphs are given that compare the averaged contours from different authors and recording devices--for normal, trained, and untrained male and female voices as well as for child voices. The proposed technique allows any VRP shape to be brought to the same uniform base. On this uniform base, VRP contours or contour elements coming from a variety of sources may be placed within the same graph for comparison and for statistical analysis.
Tensile film clamps and mounting block for the rheovibron and autovibron viscoelastometer
NASA Technical Reports Server (NTRS)
Stoakley, Diane M. (Inventor); St.clair, Anne K. (Inventor); Little, Bruce D. (Inventor)
1989-01-01
A set of film clamps and a mounting block for use in the determination of tensile modulus and damping properties of films in a manually operated or automated Rheovibron is diagrammed. These clamps and mounting block provide uniformity of sample gripping and alignment in the instrument. Operator dependence and data variability are greatly reduced.
Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong
2017-12-01
Measurement uncertainty (MU) is a metrological concept which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is employed to simulate the unknown distribution based on mathematical statistics using an existing small sample of data, with the aim of transforming the small sample into a large sample. However, there have been no reports of the use of this method in medical laboratories. This study therefore applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U [k=2]). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
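A schematic of the combination described above, with the within-laboratory component bootstrapped from IQC results and expanded with k = 2. It is not the authors' MATLAB implementation, and the Nordtest bias term is simplified (the uncertainty of the EQA assigned values is omitted); all data and names are invented for illustration.

```python
import numpy as np

def expanded_uncertainty(iqc_values, eqa_bias_percent, n_boot=1000, seed=0):
    """Nordtest-style expanded MU (k=2, in %): u = sqrt(u_Rw^2 + u_bias^2),
    with the within-lab component u_Rw bootstrapped from IQC results."""
    rng = np.random.default_rng(seed)
    iqc = np.asarray(iqc_values, dtype=float)
    cvs = np.empty(n_boot)
    for b in range(n_boot):
        s = rng.choice(iqc, size=len(iqc), replace=True)
        cvs[b] = 100.0 * s.std(ddof=1) / s.mean()
    u_rw = cvs.mean()                                        # bootstrapped CV (%)
    u_bias = np.sqrt(np.mean(np.square(eqa_bias_percent)))   # RMS bias (%)
    return 2.0 * np.sqrt(u_rw**2 + u_bias**2)

iqc_wbc = np.random.default_rng(4).normal(6.2, 0.12, size=120)   # simulated IQC
eqa_bias = [1.1, -0.8, 0.5, 1.4]                                 # simulated EQA bias (%)
print(f"U (k=2) = {expanded_uncertainty(iqc_wbc, eqa_bias):.2f}%")
```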
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
Amplifying (Im)perfection: The Impact of Crystallinity in Discrete and Disperse Block Co-oligomers.
van Genabeek, Bas; Lamers, Brigitte A G; de Waal, Bas F M; van Son, Martin H C; Palmans, Anja R A; Meijer, E W
2017-10-25
Crystallinity is seldom utilized as part of the microphase segregation process in ultralow-molecular-weight block copolymers. Here, we show the preparation of two types of discrete, semicrystalline block co-oligomers, comprising an amorphous oligodimethylsiloxane block and a crystalline oligo-l-lactic acid or oligomethylene block. The self-assembly of these discrete materials results in lamellar structures with unforeseen uniformity in the domain spacing. A systematic introduction of dispersity reveals the extreme sensitivity of the microphase segregation process toward chain-length dispersity in the crystalline block.
Lawrence, Gregory B.; Fernandez, Ivan J.; Richter, Daniel D.; Ross, Donald S.; Hazlett, Paul W.; Bailey, Scott W.; Oiumet, Rock; Warby, Richard A.F.; Johnson, Arthur H.; Lin, Henry; Kaste, James M.; Lapenis, Andrew G.; Sullivan, Timothy J.
2013-01-01
Environmental change is monitored in North America through repeated measurements of weather, stream and river flow, air and water quality, and most recently, soil properties. Some skepticism remains, however, about whether repeated soil sampling can effectively distinguish between temporal and spatial variability, and efforts to document soil change in forest ecosystems through repeated measurements are largely nascent and uncoordinated. In eastern North America, repeated soil sampling has begun to provide valuable information on environmental problems such as air pollution. This review synthesizes the current state of the science to further the development and use of soil resampling as an integral method for recording and understanding environmental change in forested settings. The origins of soil resampling reach back to the 19th century in England and Russia. The concepts and methodologies involved in forest soil resampling are reviewed and evaluated through a discussion of how temporal and spatial variability can be addressed with a variety of sampling approaches. Key resampling studies demonstrate the type of results that can be obtained through differing approaches. Ongoing, large-scale issues such as recovery from acidification, long-term N deposition, C sequestration, effects of climate change, impacts from invasive species, and the increasing intensification of soil management all warrant the use of soil resampling as an essential tool for environmental monitoring and assessment. Furthermore, with better awareness of the value of soil resampling, studies can be designed with a long-term perspective so that information can be efficiently obtained well into the future to address problems that have not yet surfaced.
Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.
Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida
2016-06-28
During the past few years, various content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA) metrics, the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to evaluate the quality degradation directly as in traditional IQA. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate geometric change estimation as a backward registration problem with a Markov Random Field (MRF) and provide an effective solution. The geometric change aims to provide evidence of how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity (ARS) metric to evaluate the visual quality of retargeted images by exploiting the local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS can predict the visual quality of retargeted images more accurately than state-of-the-art IRQA metrics.
Zhang, Yeqing; Wang, Meiling; Li, Yafeng
2018-02-24
For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90-94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7-5.6% per millisecond, with most satellites acquired successfully.
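The two strategies can be pictured with a toy acquisition loop: the received signal is resampled to a lower rate before FFT-based circular correlation, and detection uses the ratio of the highest to the second-highest correlation value. The signal model, rates, shoulder-bin exclusion around the main peak, and the omission of the carrier-frequency search are all simplifying assumptions, not the authors' receiver.

```python
import numpy as np
from scipy.signal import resample

def acquire(received, code_replica, resampled_len, threshold=2.0):
    """Toy acquisition: resample both signals to a lower rate, circularly
    correlate via FFT, and decide from the main-to-second peak ratio
    (bins next to the main peak are excluded from the second peak)."""
    x = resample(received, resampled_len)
    c = resample(code_replica, resampled_len)
    corr = np.abs(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(c))))
    peak = int(np.argmax(corr))
    mask = np.ones(resampled_len, dtype=bool)
    mask[(peak + np.arange(-2, 3)) % resampled_len] = False
    ratio = corr[peak] / corr[mask].max()
    return ratio > threshold, peak, ratio

rng = np.random.default_rng(5)
code = np.sign(rng.standard_normal(20000))          # stand-in for a PRN code
rx = np.roll(code, 3171) + 0.5 * rng.standard_normal(20000)
print(acquire(rx, code, resampled_len=5000))        # ~4x fewer samples
```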
Code of Federal Regulations, 2010 CFR
2010-10-01
... (Community Services; Preventive Health and Health Services; Alcohol, Drug Abuse, and Mental Health Services... Block Grant and Part C of Title V, Mental Health Service for the Homeless Block Grant). (3) Grants to... DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS...
Code of Federal Regulations, 2012 CFR
2012-10-01
... (Community Services; Preventive Health and Health Services; Alcohol, Drug Abuse, and Mental Health Services... Block Grant and Part C of Title V, Mental Health Service for the Homeless Block Grant). (3) Grants to... DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS...
Code of Federal Regulations, 2011 CFR
2011-10-01
... (Community Services; Preventive Health and Health Services; Alcohol, Drug Abuse, and Mental Health Services... Block Grant and Part C of Title V, Mental Health Service for the Homeless Block Grant). (3) Grants to... DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS...
Biases in Time-Averaged Field and Paleosecular Variation Studies
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Constable, C.
2009-12-01
Challenges to constructing time-averaged field (TAF) and paleosecular variation (PSV) models of Earth’s magnetic field over million year time scales are the uneven geographical and temporal distribution of paleomagnetic data and the absence of full vector records of the magnetic field variability at any given site. Recent improvements in paleomagnetic data sets now allow regional assessment of the biases introduced by irregular temporal sampling and the absence of full vector information. We investigate these effects over the past few Myr for regions with large paleomagnetic data sets, where the TAF and/or PSV have been of previous interest (e.g., significant departures of the TAF from the field predicted by a geocentric axial dipole). We calculate the effects of excluding paleointensity data from TAF calculations, and find these to be small. For example, at Hawaii, we find that for the past 50 ka, estimates of the TAF direction are minimally affected if only paleodirectional data versus the full paleofield vector are used. We use resampling techniques to investigate biases incurred by the uneven temporal distribution. Key to the latter issue is temporal information on a site-by-site basis. At Hawaii, resampling of the paleodirectional data onto a uniform temporal distribution, assuming no error in the site ages, reduces the magnitude of the inclination anomaly for the Brunhes, Gauss and Matuyama epochs. However inclusion of age errors in the sampling procedure leads to TAF estimates that are close to those reported for the original data sets. We discuss the implications of our results for global field models.
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Wang, Y.; Shu, C.
2017-12-01
This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
Memory hierarchy using row-based compression
Loh, Gabriel H.; O'Connor, James M.
2016-10-25
A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
The conditional resampling model STARS: weaknesses of the modeling concept and development
NASA Astrophysics Data System (ADS)
Menz, Christoph
2016-04-01
The Statistical Analogue Resampling Scheme (STARS) is based on a modeling concept of Werner and Gerstengarbe (1997). The model uses a conditional resampling technique to create a simulation time series from daily observations. Unlike other time series generators (such as stochastic weather generators), STARS only needs a linear regression specification of a single variable as the target condition for the resampling. Since its first implementation the algorithm has been extended to allow for a spatially distributed trend signal and to preserve the seasonal cycle and the autocorrelation of the observation time series (Orlovsky, 2007; Orlovsky et al., 2008). This evolved version was successfully used in several climate impact studies. However, a detailed evaluation of the simulations revealed two fundamental weaknesses of the resampling technique used. 1. Restricting the resampling condition to a single variable can lead to a misinterpretation of the change signal of other variables when the model is applied to a multivariate time series (F. Wechsung and M. Wechsung, 2014). As one example, the short-term correlation between precipitation and temperature (cooling of the near-surface air layer after a rainfall event) can be misinterpreted as a climatic change signal in the simulation series. 2. The model restricts the linear regression specification to the annual mean time series, precluding the specification of seasonally varying trends. To overcome these fundamental weaknesses, the whole algorithm was redeveloped. The poster discusses the main weaknesses of the earlier model implementation and the methods applied to overcome them in the new version. Based on the new model, idealized simulations were conducted to illustrate the improvements.
Resampling probability values for weighted kappa with multiple raters.
Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E
2008-04-01
A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
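A minimal two-rater illustration of the resampling idea (the paper itself treats multiple raters and also covers exact computation): linearly weighted kappa is computed, and a Monte Carlo permutation of one rater's classifications supplies an approximate probability value. The data and the linear weighting scheme below are assumptions for the example.

```python
# Two-rater sketch: Monte Carlo resampling p-value for linearly weighted kappa,
# obtained by permuting one rater's classifications to approximate the null.
import numpy as np

def weighted_kappa(a, b, n_cat):
    # linear agreement weights: 1 on the diagonal, decreasing with disagreement
    w = 1.0 - np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat))) / (n_cat - 1)
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= len(a)
    exp = np.outer(obs.sum(1), obs.sum(0))
    return (np.sum(w * obs) - np.sum(w * exp)) / (1.0 - np.sum(w * exp))

rng = np.random.default_rng(1)
rater1 = rng.integers(0, 4, size=60)
rater2 = np.clip(rater1 + rng.integers(-1, 2, size=60), 0, 3)   # mostly agrees

k_obs = weighted_kappa(rater1, rater2, 4)
null = [weighted_kappa(rater1, rng.permutation(rater2), 4) for _ in range(2000)]
p_value = (1 + np.sum(np.array(null) >= k_obs)) / (1 + len(null))
print(round(k_obs, 3), round(p_value, 4))
```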
Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S
2017-08-01
Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
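A numpy-only sketch of the within-cluster resampling idea under informative cluster size: one observation is drawn from each cluster, the marginal model is refit, and the estimates are averaged over replicates. A simple linear model stands in for the generalized linear mixed-effects models used in the study, and the simulated data are assumptions.

```python
# Minimal sketch of within-cluster resampling (WCR): repeatedly draw one
# observation per cluster, refit the model, and average the estimates.
# Cluster size here is informative (larger clusters have larger outcomes).
import numpy as np

rng = np.random.default_rng(2)
clusters = []
for _ in range(200):
    m = rng.integers(2, 9)                        # informative cluster size
    x = rng.standard_normal(m)
    y = 1.5 * x + 0.3 * m + rng.standard_normal(m)
    clusters.append((x, y))

def fit_slope(x, y):                              # slope of y on x with intercept
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

estimates = []
for _ in range(500):                              # WCR replicates
    idx = [rng.integers(len(x)) for x, _ in clusters]
    xs = np.array([x[i] for (x, _), i in zip(clusters, idx)])
    ys = np.array([y[i] for (_, y), i in zip(clusters, idx)])
    estimates.append(fit_slope(xs, ys))

print(np.mean(estimates))                         # averaged WCR estimate of the x effect
```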
Numbers of center points appropriate to blocked response surface experiments
NASA Technical Reports Server (NTRS)
Holms, A. G.
1979-01-01
Tables are given for the numbers of center points to be used with blocked sequential designs of composite response surface experiments as used in empirical optimum seeking. The star point radii for exact orthogonal blocking are presented. The center point options varied from a lower limit of one to an upper limit equal to the numbers proposed by Box and Hunter for approximate rotatability and uniform variance, and exact orthogonal blocking. Some operating characteristics of the proposed options are described.
WE-E-18A-01: Large Area Avalanche Amorphous Selenium Sensors for Low Dose X-Ray Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheuermann, J; Goldan, A; Zhao, W
2014-06-15
Purpose: A large area indirect flat panel imager (FPI) with avalanche gain is being developed to achieve x-ray quantum noise limited low dose imaging. It uses a thin optical sensing layer of amorphous selenium (a-Se), known as High-Gain Avalanche Rushing Photoconductor (HARP), to detect optical photons generated from a high resolution x-ray scintillator. We will report initial results in the fabrication of a solid-state HARP structure suitable for a large area FPI. Our objective is to establish the blocking layer structures and defect suppression mechanisms that provide stable and uniform avalanche gain. Methods: Samples were fabricated as follows: (1) ITO signal electrode. (2) Electron blocking layer. (3) A 15 micron layer of intrinsic a-Se. (4) Transparent hole blocking layer. (5) Multiple semitransparent bias electrodes to investigate avalanche gain uniformity over a large area. The sample was exposed to 50ps optical excitation pulses through the bias electrode. Transient time of flight (TOF) and integrated charge was measured. A charge transport simulation was developed to investigate the effects of varying blocking layer charge carrier mobility on defect suppression, avalanche gain and temporal performance. Results: Avalanche gain of ∼200 was achieved experimentally with our multi-layer HARP samples. Simulations using the experimental sensor structure produced the same magnitude of gain as a function of electric field. The simulation predicted that the high dark current at a point defect can be reduced by two orders of magnitude by blocking layer optimization which can prevent irreversible damage while normal operation remained unaffected. Conclusion: We presented the first solid state HARP structure directly scalable to a large area FPI. We have shown reproducible and uniform avalanche gain of 200. By reducing mobility of the blocking layers we can suppress defects and maintain stable avalanche. Future work will optimize the blocking layers to prevent lag and ghosting.
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
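The following sketch illustrates resampling standardized residuals to test covariance homogeneity. It uses ordinary sample moments and a permutation-style reallocation of the standardized residuals; the paper's contribution of robust moment estimation is omitted, and the Bartlett-type statistic, group sizes and data are assumptions for the example.

```python
# Sketch: standardize each group's data by its sample mean and covariance,
# then resample (here: permute) the standardized residuals into new groups to
# approximate the null distribution of a Bartlett-type homogeneity statistic.
import numpy as np

def bartlett_stat(groups):
    k = len(groups)
    ns = np.array([len(g) for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (ns.sum() - k)
    return (ns.sum() - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(c)) for n, c in zip(ns, covs))

def standardize(g):
    L = np.linalg.cholesky(np.linalg.inv(np.cov(g, rowvar=False)))
    return (g - g.mean(axis=0)) @ L               # residuals with unit covariance

rng = np.random.default_rng(3)
groups = [rng.standard_normal((40, 3)), 1.5 * rng.standard_normal((60, 3))]
t_obs = bartlett_stat(groups)

resid = np.vstack([standardize(g) for g in groups])
null = []
for _ in range(999):
    perm = rng.permutation(resid)
    null.append(bartlett_stat([perm[:40], perm[40:]]))
p = (1 + np.sum(np.array(null) >= t_obs)) / (1 + len(null))
print(round(t_obs, 2), round(p, 3))
```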
Spectral classification with the International Ultraviolet Explorer: An atlas of B-type spectra
NASA Technical Reports Server (NTRS)
Rountree, Janet; Sonneborn, George
1993-01-01
New criteria for the spectral classification of B stars in the ultraviolet show that photospheric absorption lines in the 1200-1900A wavelength region can be used to classify the spectra of B-type dwarfs, subgiants, and giants on a 2-D system consistent with the optical MK system. This atlas illustrates a large number of such spectra at the scale used for classification. These spectra provide a dense matrix of standard stars, and also show the effects of rapid stellar rotation and stellar winds on the spectra and their classification. The observational material consists of high-dispersion spectra from the International Ultraviolet Explorer archives, resampled to a resolution of 0.25 A, uniformly normalized, and plotted at 10 A/cm. The atlas should be useful for the classification of other IUE high-dispersion spectra, especially for stars that have not been observed in the optical.
Introduction to Permutation and Resampling-Based Hypothesis Tests
ERIC Educational Resources Information Center
LaFleur, Bonnie J.; Greevy, Robert A.
2009-01-01
A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presence of…
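A textbook example of the kind of permutation test the article introduces, assuming two small samples and a difference-in-means statistic:

```python
# Permutation test for a difference in group means: the group labels are
# repeatedly shuffled to build the null distribution of the test statistic.
import numpy as np

rng = np.random.default_rng(4)
a = rng.normal(0.0, 1.0, size=15)
b = rng.normal(0.8, 1.0, size=12)

observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])

perm_stats = []
for _ in range(10000):
    shuffled = rng.permutation(pooled)
    perm_stats.append(shuffled[:15].mean() - shuffled[15:].mean())

p_two_sided = np.mean(np.abs(perm_stats) >= abs(observed))
print(round(observed, 3), round(p_two_sided, 4))
```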
Method of making high breakdown voltage semiconductor device
Arthur, Stephen D.; Temple, Victor A. K.
1990-01-01
A semiconductor device having at least one P-N junction and a multiple-zone junction termination extension (JTE) region which uniformly merges with the reverse blocking junction is disclosed. The blocking junction is graded into multiple zones of lower concentration dopant adjacent termination to facilitate merging of the JTE to the blocking junction and placing of the JTE at or near the high field point of the blocking junction. Preferably, the JTE region substantially overlaps the graded blocking junction region. A novel device fabrication method is also provided which eliminates the prior art step of separately diffusing the JTE region.
Sariyar, Murat; Hoffmann, Isabell; Binder, Harald
2014-02-26
Molecular data, e.g. arising from microarray technology, is often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.
ERIC Educational Resources Information Center
Fan, Xitao
This paper empirically and systematically assessed the performance of the bootstrap resampling procedure as it was applied to a regression model. Parameter estimates from Monte Carlo experiments (repeated sampling from the population) and bootstrap experiments (repeated resampling from one original bootstrap sample) were generated and compared. Sample…
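For concreteness, a minimal case-resampling bootstrap of a regression slope (an illustrative sketch, not the design of the study summarized above; data and replicate counts are assumptions):

```python
# Case-resampling bootstrap for a regression slope with a percentile interval.
import numpy as np

rng = np.random.default_rng(5)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.7 * x + rng.standard_normal(n)

def slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample cases with replacement
    boot.append(slope(x[idx], y[idx]))

print(round(slope(x, y), 3), np.round(np.percentile(boot, [2.5, 97.5]), 3))
```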
Anomalous change detection in imagery
Theiler, James P [Los Alamos, NM; Perkins, Simon J [Santa Fe, NM
2011-05-31
A distribution-based anomaly detection platform is described that identifies a non-flat background that is specified in terms of the distribution of the data. A resampling approach is also disclosed employing scrambled resampling of the original data with one class specified by the data and the other by the explicit distribution, and solving using binary classification.
De-Dopplerization of Acoustic Measurements
2017-08-10
band energy obtained from fractional octave band digital filters generates a de-Dopplerized spectrum without complex resampling algorithms. An equation...fractional octave representation and smearing that occurs within the spectrum, digital filtering techniques were not considered by these earlier
Thematic mapper design parameter investigation
NASA Technical Reports Server (NTRS)
Colby, C. P., Jr.; Wheeler, S. G.
1978-01-01
This study simulated the multispectral data sets to be expected from three different Thematic Mapper configurations, and the ground processing of these data sets by three different resampling techniques. The simulated data sets were then evaluated by processing them for multispectral classification, and the Thematic Mapper configuration and resampling technique that provided the best classification accuracy were identified.
permGPU: Using graphics processing units in RNA microarray association studies.
Shterev, Ivo D; Jung, Sin-Ho; George, Stephen L; Owzar, Kouros
2010-06-16
Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
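A CPU-side sketch of the kind of permutation resampling that permGPU accelerates on graphics hardware; the mean-difference statistic, the max-statistic family-wise adjustment, and the simulated expression matrix are assumptions chosen for brevity, not the six statistics offered by the package.

```python
# Permutation resampling over many genes at once: a two-sample statistic is
# computed per gene, labels are permuted, and max-statistic (family-wise)
# adjusted p-values are formed. permGPU runs this kind of loop on the GPU.
import numpy as np

rng = np.random.default_rng(6)
n_genes, n_a, n_b = 5000, 20, 20
expr = rng.standard_normal((n_genes, n_a + n_b))
expr[:50, :n_a] += 1.0                            # 50 truly associated genes
labels = np.array([0] * n_a + [1] * n_b)

def stat(expr, labels):
    return expr[:, labels == 0].mean(1) - expr[:, labels == 1].mean(1)

t_obs = stat(expr, labels)
max_null = np.empty(1000)
for b in range(1000):
    max_null[b] = np.abs(stat(expr, rng.permutation(labels))).max()

p_adj = (1 + (max_null[None, :] >= np.abs(t_obs)[:, None]).sum(1)) / 1001.0
print((p_adj < 0.05).sum(), "genes significant after family-wise adjustment")
```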
Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors
NASA Astrophysics Data System (ADS)
Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.
2012-12-01
Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem because it is theoretically nearly intractable. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package: bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise moving block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
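A simplified pairwise moving-block bootstrap for Pearson's correlation, the resampling ingredient named above; the parametric timescale simulation and interval calibration discussed in the abstract are omitted, and the data, block length and percentile interval are assumptions.

```python
# Pairwise moving-block bootstrap: blocks of consecutive time indices are drawn
# with replacement and applied to BOTH series, preserving autocorrelation and
# the pairing, before the correlation is recomputed.
import numpy as np

rng = np.random.default_rng(7)
n, block_len = 300, 15
x = np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")
y = 0.6 * x + 0.4 * np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")

def pairwise_mbb(x, y, l, rng):
    starts = rng.integers(0, len(x) - l + 1, size=int(np.ceil(len(x) / l)))
    idx = np.concatenate([np.arange(s, s + l) for s in starts])[:len(x)]
    return x[idx], y[idx]                         # same indices for both series

r_obs = np.corrcoef(x, y)[0, 1]
boot = [np.corrcoef(*pairwise_mbb(x, y, block_len, rng))[0, 1] for _ in range(2000)]
print(round(r_obs, 3), np.round(np.percentile(boot, [2.5, 97.5]), 3))
```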
NASA Astrophysics Data System (ADS)
Zhang, Shengjun; Sandwell, David T.; Jin, Taoyong; Li, Dawei
2017-02-01
The accuracy and resolution of a marine gravity field derived from satellite altimetry depend mainly on the range precision and the density of the spatial distribution. This paper aims at modeling a regional marine gravity field with improved accuracy and higher resolution (1′ × 1′) over the Southeastern China Seas using additional data from CryoSat-2 as well as new data from AltiKa. Three approaches are used to enhance the precision level of satellite-derived gravity anomalies. Firstly, we evaluate a suite of published retracking algorithms and find that the two-step retracker is optimal for open ocean waveforms. Secondly, we evaluate the filtering and resampling procedure used to reduce the full 20 or 40 Hz data to a lower rate having lower noise. We adopt a uniform low-pass filter for all altimeter missions, resample at 5 Hz, and then perform a second editing based on sea surface slope estimates from previous models. Thirdly, we selected the WHU12 model to update the corrections provided in the geophysical data records. We finally calculated the 1′ × 1′ marine gravity field model by using the EGM2008 model as the reference field during the remove/restore procedure. The root mean squares of the discrepancies between the new result and the DTU10, DTU13, V23.1 and EGM2008 models are within the range of 1.8-3.9 mGal, while the verification with respect to shipboard gravity data shows that the accuracy of the new result reached a level comparable with DTU13 and was slightly superior to the V23.1, DTU10 and EGM2008 models. Moreover, the new result has a 2 mGal better accuracy over open seas than over coastal areas with shallow water depth.
A Sequential Ensemble Prediction System at Convection Permitting Scales
NASA Astrophysics Data System (ADS)
Milan, M.; Simmer, C.
2012-04-01
A Sequential Assimilation Method (SAM) following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasts. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and nonlinear state space evolution due to convectively driven processes. One way to take full account of nonlinear state developments is particle filter methods, whose basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood with respect to the observations. In particular, the particle filter with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we replace the likelihood-based definition of the weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this end we propose a combination of resampling that takes account of simulated state space clustering and nudging. By keeping cluster representatives during resampling and filtering, the method maintains the potential for nonlinear system state development. We assume that a particle cluster with initially low likelihood may evolve into a state space with higher likelihood at a subsequent filter time, thus mimicking nonlinear system state developments (e.g. sudden convection initiation) and remedying timing errors for convection due to model errors and/or imperfect initial conditions. We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated; for the model evolution, one particle of each pair evolves using the forward model, while the second particle is nudged towards the radar and satellite observations during its evolution based on the forward model.
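For reference, a generic SIR resampling step in Python (not the clustered, nudged SAM variant described above): low-weight particles are abandoned and the ensemble size is restored by duplicating high-weight members. The state dimension, ensemble size and weight definition are assumptions.

```python
# Generic SIR (multinomial) resampling step for a particle ensemble.
import numpy as np

def sir_resample(particles, weights, rng):
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), replace=True, p=w)
    # after resampling, all particles carry equal weight again
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(8)
particles = rng.standard_normal((100, 3))          # 100 members, 3 state variables
obs = np.array([1.0, 0.0, -1.0])
# likelihood-style weights: members closer to the observation get larger weight
weights = np.exp(-0.5 * np.sum((particles - obs) ** 2, axis=1))
resampled, new_w = sir_resample(particles, weights, rng)
print(resampled.shape, new_w[0])
```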
Multistage Computerized Adaptive Testing with Uniform Item Exposure
ERIC Educational Resources Information Center
Edwards, Michael C.; Flora, David B.; Thissen, David
2012-01-01
This article describes a computerized adaptive test (CAT) based on the uniform item exposure multi-form structure (uMFS). The uMFS is a specialization of the multi-form structure (MFS) idea described by Armstrong, Jones, Berliner, and Pashley (1998). In an MFS CAT, the examinee first responds to a small fixed block of items. The items comprising…
49 CFR Appendix A to Part 23 - Uniform Report of ACDBE Participation
Code of Federal Regulations, 2011 CFR
2011-10-01
... BUSINESS ENTERPRISE IN AIRPORT CONCESSIONS Pt. 23, App. A Appendix A to Part 23—Uniform Report of ACDBE... purchased by the airport itself or by concessionaires and management contractors from certified DBEs. Block... concessionaires (prime and sub) and purchases of goods and services (ACDBE and non-ACDBE combined) at the airport...
49 CFR Appendix A to Part 23 - Uniform Report of ACDBE Participation
Code of Federal Regulations, 2010 CFR
2010-10-01
... BUSINESS ENTERPRISE IN AIRPORT CONCESSIONS Pt. 23, App. A Appendix A to Part 23—Uniform Report of ACDBE... purchased by the airport itself or by concessionaires and management contractors from certified DBEs. Block... concessionaires (prime and sub) and purchases of goods and services (ACDBE and non-ACDBE combined) at the airport...
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
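A plain Latin Hypercube Sampling routine of the kind GRAPE could use to place parameter samples; this is an illustrative sketch under assumed parameter bounds, not the GRAPE implementation.

```python
# Latin Hypercube Sampling: one point per stratum in every dimension, with the
# strata permuted independently per dimension, so the sample count need not
# grow with the number of parameters.
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    d = len(bounds)
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):                            # decorrelate strata across dimensions
        u[:, j] = rng.permutation(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(9)
samples = latin_hypercube(8, bounds=[(0.1, 2.0), (10.0, 50.0), (-1.0, 1.0)], rng=rng)
print(samples)
```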
Method for hot pressing irregularly shaped refractory articles
Steinkamp, William E.; Ballard, Ambrose H.
1982-01-01
The present invention is directed to a method for hot pressing irregularly shaped refractory articles with these articles of varying thickness being provided with high uniform density and dimensional accuracy. Two partially pressed compacts of the refractory material are placed in a die cavity between displaceable die punches having compact-contacting surfaces of the desired article configuration. A floating, rotatable block is disposed between the compacts. The displacement of the die punches towards one another causes the block to rotate about an axis normal to the direction of movement of the die punches to uniformly distribute the pressure loading upon the compacts for maintaining substantially equal volume displacement of the powder material during the hot pressing operation.
The Beginner's Guide to the Bootstrap Method of Resampling.
ERIC Educational Resources Information Center
Lane, Ginny G.
The bootstrap method of resampling can be useful in estimating the replicability of study results. The bootstrap procedure creates a mock population from a given sample of data from which multiple samples are then drawn. The method extends the usefulness of the jackknife procedure as it allows for computation of a given statistic across a maximal…
ERIC Educational Resources Information Center
Nevitt, Jonathan; Hancock, Gregory R.
2001-01-01
Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…
Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models
ERIC Educational Resources Information Center
Williams, Jason; MacKinnon, David P.
2008-01-01
Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…
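A single-mediator illustration of the resampling approach discussed above: a percentile-bootstrap confidence interval for the indirect effect a*b in an X -> M -> Y model. The simulated data and interval choice are assumptions; the article's focus is on more complex multi-path models.

```python
# Percentile bootstrap for the indirect (mediated) effect a*b.
import numpy as np

rng = np.random.default_rng(10)
n = 200
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)              # a-path
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)    # b-path plus direct effect

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # slope of M on X
    Xm = np.column_stack([np.ones(len(x)), m, x])
    b = np.linalg.lstsq(Xm, y, rcond=None)[0][1]  # slope of Y on M given X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect(x[idx], m[idx], y[idx]))

ci = np.percentile(boot, [2.5, 97.5])
print(round(indirect(x, m, y), 3), np.round(ci, 3), "significant:", not (ci[0] <= 0 <= ci[1]))
```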
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.
NASA Astrophysics Data System (ADS)
Tian, Pengyi; Tao, Dashuai; Yin, Wei; Zhang, Xiangjun; Meng, Yonggang; Tian, Yu
2016-09-01
Comprehension of stick-slip motion is very important for understanding tribological principles. The transition from creep-dominated to inertia-dominated stick-slip as the increase of sliding velocity has been described by researchers. However, the associated micro-contact behavior during this transition has not been fully disclosed yet. In this study, we investigated the stick-slip behaviors of two polymethyl methacrylate blocks actively modulated from the creep-dominated to inertia-dominated dynamics through a non-uniform loading along the interface by slightly tilting the angle of the two blocks. Increasing the tilt angle increases the critical transition velocity from creep-dominated to inertia-dominated stick-slip behaviors. Results from finite element simulation disclosed that a positive tilt angle led to a higher normal stress and a higher temperature on blocks at the opposite side of the crack initiating edge, which enhanced the creep of asperities during sliding friction. Acoustic emission (AE) during the stick-slip has also been measured, which is closely related to the different rupture modes regulated by the distribution of the ratio of shear to normal stress along the sliding interface. This study provided a more comprehensive understanding of the effect of tilted non-uniform loading on the local stress ratio, the local temperature, and the stick-slip behaviors.
NASA Technical Reports Server (NTRS)
Zell, Peter
2012-01-01
A document describes a new way to integrate thermal protection materials on external surfaces of vehicles that experience the severe heating environments of atmospheric entry from space. Cured blocks of thermal protection materials are bonded into a compatible, large-cell honeycomb matrix that can be applied on the external surfaces of the vehicles. The honeycomb matrix cell size, and corresponding thermal protection material block size, is envisioned to be between 1 and 4 in. (about 2.5 and 10 cm) on a side, with a depth required to protect the vehicle. The cell wall thickness is thin, between 0.01 and 0.10 in. (about 0.025 and 0.25 cm). A key feature is that the honeycomb matrix is attached to the vehicle's unprotected external surface prior to insertion of the thermal protection material blocks. The attachment integrity of the honeycomb can then be confirmed over the full range of temperature and loads that the vehicle will experience. Another key feature of the innovation is the use of uniform-sized thermal protection material blocks. This feature allows for the mass production of these blocks at a size that is convenient for quality control inspection. The honeycomb that receives the blocks must have cells with a compatible set of internal dimensions. The innovation involves the use of a faceted subsurface under the honeycomb. This provides a predictable surface with perpendicular cell walls for the majority of the blocks. Some cells will have positive tapers to accommodate mitered joints between honeycomb panels on each facet of the subsurface. These tapered cells have dimensions that may fall within the boundaries of the uniform-sized blocks.
Yu, H; Qiu, X; Behzad, A R; Musteata, V; Smilgies, D-M; Nunes, S P; Peinemann, K-V
2016-10-04
Membranes with a hierarchical porous structure could be manufactured from a block copolymer blend by pure solvent evaporation. Uniform pores in a 30 nm thin skin layer supported by a macroporous structure were formed. This new process is attractive for membrane production because of its simplicity and the lack of liquid waste.
Research on Near Field Pattern Effects.
1981-01-01
High frequency solutions; prolate spheroid mounted antennas; Uniform Geometrical Theory of Diffraction; airborne antenna pattern prediction...Geometrical Theory of Diffraction solutions which were developed previously were...be used later to simulate the fuselage of a general aircraft. The general uniform Geometrical Theory of Diffraction (GTD) solutions [1] which are
NASA Astrophysics Data System (ADS)
Olafsdottir, Kristin B.; Mudelsee, Manfred
2013-04-01
Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate sciences. Various methods are used to estimate confidence intervals to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), where the main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique, which basically performs a second bootstrap loop, or resamples from the bootstrap resamples. It offers, like the non-calibrated bootstrap confidence intervals, robustness against the data distribution. Pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard error based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables when there is a 10 year lag between them, which is more or less the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
NASA Astrophysics Data System (ADS)
Collins, Jarrod A.; Heiselman, Jon S.; Weis, Jared A.; Clements, Logan W.; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.
2017-03-01
In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface based IGLS registration methods. In this work, we characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e. a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw data result by 14% at clinically realistic levels and were less susceptible to noise across the range of noise investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.
Accelerated spike resampling for accurate multiple testing controls.
Harrison, Matthew T
2013-02-01
Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.
Berry, K J; Mielke, P W
2000-12-01
Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
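A Python analogue of the described Monte Carlo resampling procedure for the Wilcoxon-Mann-Whitney test (the paper itself provides FORTRAN programs): mid-ranks compensate for ties and the permutation distribution replaces the asymptotic approximation. The data below are assumptions.

```python
# Monte Carlo resampling p-value for the rank-sum statistic with tied data.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(11)
a = rng.integers(0, 5, size=18).astype(float)     # heavily tied ordinal data
b = (rng.integers(0, 5, size=22) + 1).astype(float)

def rank_sum(a, b):
    ranks = rankdata(np.concatenate([a, b]))      # average (mid-) ranks for ties
    return ranks[:len(a)].sum()

w_obs = rank_sum(a, b)
pooled = np.concatenate([a, b])
null = []
for _ in range(20000):
    perm = rng.permutation(pooled)
    null.append(rank_sum(perm[:len(a)], perm[len(a):]))
null = np.array(null)
p = np.mean(np.abs(null - null.mean()) >= abs(w_obs - null.mean()))   # two-sided
print(w_obs, round(p, 4))
```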
Cellular neural network-based hybrid approach toward automatic image registration
NASA Astrophysics Data System (ADS)
Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar
2013-01-01
Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural network (CNN), scale invariant feature transform (SIFT), coreset, and cellular automata is proposed. CNN has been found to be effective in improving the feature matching as well as resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network approach-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. This system has dynamically used spectral and spatial information for representing contextual knowledge using a CNN-prolog approach. This methodology is also illustrated to be effective in providing intelligent interpretation and adaptive resampling.
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Pesticides in Wyoming Groundwater, 2008-10
Eddy-Miller, Cheryl A.; Bartos, Timothy T.; Taylor, Michelle L.
2013-01-01
Groundwater samples were collected from 296 wells during 1995-2006 as part of a baseline study of pesticides in Wyoming groundwater. In 2009, a previous report summarized the results of the baseline sampling and the statistical evaluation of the occurrence of pesticides in relation to selected natural and anthropogenic (human-related) characteristics. During 2008-10, the U.S. Geological Survey, in cooperation with the Wyoming Department of Agriculture, resampled a subset (52) of the 296 wells sampled during 1995-2006 baseline study in order to compare detected compounds and respective concentrations between the two sampling periods and to evaluate the detections of new compounds. The 52 wells were distributed similarly to sites used in the 1995-2006 baseline study with respect to geographic area and land use within the geographic area of interest. Because of the use of different types of reporting levels and variability in reporting-level values during both the 1995-2006 baseline study and the 2008-10 resampling study, analytical results received from the laboratory were recensored. Two levels of recensoring were used to compare pesticides—a compound-specific assessment level (CSAL) that differed by compound and a common assessment level (CAL) of 0.07 microgram per liter. The recensoring techniques and values used for both studies, with the exception of the pesticide 2,4-D methyl ester, were the same. Twenty-eight different pesticides were detected in samples from the 52 wells during the 2008-10 resampling study. Pesticide concentrations were compared with several U.S. Environmental Protection Agency drinking-water standards or health advisories for finished (treated) water established under the Safe Drinking Water Act. All detected pesticides were measured at concentrations smaller than U.S. Environmental Protection Agency drinking-water standards or health advisories where applicable (many pesticides did not have standards or advisories). One or more pesticides were detected at concentrations greater than the CAL in water from 16 of 52 wells sampled (about 31 percent) during the resampling study. Detected pesticides were classified into one of six types: herbicides, herbicide degradates, insecticides, insecticide degradates, fungicides, or fungicide degradates. At least 95 percent of detected pesticides were classified as herbicides or herbicide degradates. The number of different pesticides detected in samples from the 52 wells was similar between the 1995-2006 baseline study (30 different pesticides) and 2008-2010 resampling study (28 different pesticides). Thirteen pesticides were detected during both studies. The change in the number of pesticides detected (without regard to which pesticide was detected) in groundwater samples from each of the 52 wells was evaluated and the number of pesticides detected in groundwater did not change for most of the wells (32). Of those that did have a difference between the two studies, 17 wells had more pesticide detections in groundwater during the 1995-2006 baseline study, whereas only 3 wells had more detections during the 2008-2010 resampling study. The difference in pesticide concentrations in groundwater samples from each of the 52 wells was determined. Few changes in concentration between the 1995-2006 baseline study and the 2008-2010 resampling study were seen for most detected pesticides. Seven pesticides had a greater concentration detected in the groundwater from the same well during the baseline sampling compared to the resampling study. 
Concentrations of prometon, which was detected in 17 wells, were greater in the baseline study sample compared to the resampling study sample from the same well 100 percent of the time. The change in the number of pesticides detected (without regard to which pesticide was detected) in groundwater samples from each of the 52 wells with respect to land use and geographic area was calculated. All wells with land use classified as agricultural had the same or a smaller number of pesticides detected in the resampling study compared to the baseline study. All wells in the Bighorn Basin geographic area also had the same or a smaller number of pesticides detected in the resampling study compared to the baseline study.
Experimental study of digital image processing techniques for LANDSAT data
NASA Technical Reports Server (NTRS)
Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.
1976-01-01
The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
General Syntheses of Nanotubes Induced by Block Copolymer Self-Assembly.
Zhao, Jianming; Huang, Wei; Si, Pengchao; Ulstrup, Jens; Diao, Fangyuan; Zhang, Jingdong
2018-06-01
Amphiphilic block copolymer templating strategies are extensively used for syntheses of mesoporous materials. However, monodisperse tubular nanostructures are limited. Here, a general method is developed to synthesize monodisperse nanotubes with narrow diameter distribution induced by self-assembly of block copolymer. 3-Aminophenol (AP) and formaldehyde (F) polymerize and self-assemble with cylindrical PS-b-PEO micelles into worm-like PS-b-PEO@APF composites with uniform diameter (49 ± 3 nm). After template extraction, worm-like APF polymer nanotubes are formed. The structure and morphology of the polymer nanotubes can be tuned by regulating the synthesis conditions. Furthermore, PS-b-PEO@APF composites are uniformly converted to isomorphic carbon nanotubes with large surface area of 662 m² g⁻¹, abundant hierarchical porous frameworks and nitrogen doping. The synthesis can be extended to silica nanotubes. These findings open an avenue to the design of porous materials with controlled structural framework, composition, and properties for a wide range of applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Baker, Bruce D.; Ramsey, Matthew J.
2010-01-01
Over the past few decades, a handful of states have chosen to provide state financing of special education programs through a method referred to as "Census-Based" funding--an approach which involves allocated block-grant funding on an assumed basis of uniform distribution of children with disabilities across school districts. The…
Maximum a posteriori resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, John A.; Jenkins, Chris; Calder, Brian
2006-08-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
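The per-point update at the heart of the method can be sketched as the product of two Gaussian densities, whose maximum is the precision-weighted mean. In the Python sketch below, a crude inverse-distance local estimate stands in for the kriging predictor described in the paper, and the conditional variance, noise levels and test field are assumptions; the averaging over random processing orders follows the description above.

```python
# Sketch of maximum a posteriori resampling: each noisy value is replaced by
# the maximum of the posterior formed by its error pdf N(d_i, s_i^2) and a
# conditional pdf N(mu_i, k_i^2) predicted from neighbouring values; for
# Gaussians this is the precision-weighted mean. Results from several random
# processing orders are averaged.
import numpy as np

rng = np.random.default_rng(12)
x = np.sort(rng.uniform(0, 10, 60))
truth = np.sin(x)
sigma = np.where(rng.random(60) < 0.2, 0.8, 0.1)   # a few very uncertain points
d = truth + sigma * rng.standard_normal(60)

def local_estimate(i, values):
    # crude stand-in for kriging: inverse-distance weighted mean of the others
    w = 1.0 / (np.abs(x - x[i]) + 0.1)
    w[i] = 0.0
    return np.sum(w * values) / np.sum(w), 0.2 ** 2   # assumed conditional variance

def map_resample(d, n_orders=20):
    out = np.zeros_like(d)
    for _ in range(n_orders):                      # average over random orders
        v = d.copy()
        for i in rng.permutation(len(d)):
            mu, k2 = local_estimate(i, v)
            v[i] = (d[i] / sigma[i] ** 2 + mu / k2) / (1 / sigma[i] ** 2 + 1 / k2)
        out += v
    return out / n_orders

print(np.round(np.abs(map_resample(d) - truth).mean(), 3),
      "vs raw error", np.round(np.abs(d - truth).mean(), 3))
```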
Anisotropic scene geometry resampling with occlusion filling for 3DTV applications
NASA Astrophysics Data System (ADS)
Kim, Jangheon; Sikora, Thomas
2006-02-01
Image and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability at free viewpoints. However, two major limitations are ghosting and blurring, which stem from their sampling-based mechanism. The scene geometry that supports the selection of accurate sampling positions is proposed using a global method (i.e. an approximate depth plane) and a local method (i.e. disparity estimation). This paper focuses on the local method since it can yield more accurate rendering quality without a large number of cameras. The local scene geometry has two difficulties, the geometrical density and the uncovered areas including hidden information, which are serious drawbacks for reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in the scene geometry, and anisotropic diffusion prevents the filtering from blurring the visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space, and the nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low density are efficiently removed by isotropic filtering, and the edge blurring can be solved by the anisotropic method in one process. Owing to the differing sizes of the sampling gaps, the resampling condition is defined considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve the coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling with a large scale is iteratively refined to recover the detailed scene structure. Simulation results show marked improvements in rendering quality.
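As a point of reference for the nonlinear gradient weighting mentioned above, here is classical Perona-Malik edge-preserving diffusion in Python; it is not the authors' tensor-based coarse-to-fine resampler, and the test image, edge-stopping function and parameters are assumptions.

```python
# Perona-Malik diffusion: the nonlinear weight g(|gradient|) suppresses
# diffusion across strong edges while smoothing low-contrast noise.
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.5, dt=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)        # edge-stopping function
    for _ in range(n_iter):
        # one-sided differences to the four neighbours (periodic boundaries for brevity)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# toy "scene geometry": a step edge corrupted by noise
rng = np.random.default_rng(13)
img = np.zeros((64, 64))
img[:, 32:] = 1.0
noisy = img + 0.2 * rng.standard_normal(img.shape)
smoothed = perona_malik(noisy)
print(np.round(np.abs(smoothed - img).mean(), 3), "vs", np.round(np.abs(noisy - img).mean(), 3))
```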
A multistate dynamic site occupancy model for spatially aggregated sessile communities
Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi
2017-01-01
Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.
NASA Astrophysics Data System (ADS)
Tweedie, C. E.; Ebert-May, D.; Hollister, R. D.; Johnson, D. R.; Lara, M. J.; Villarreal, S.; Spasojevic, M.; Webber, P.
2010-12-01
The International Polar Year-Back to the Future (IPY-BTF) is an endorsed International Polar Year project (IPY project #214). The overarching goal of this program is to determine how key structural and functional characteristics of high latitude/altitude terrestrial ecosystems have changed over the past 25 or more years and to assess whether such trajectories of change are likely to continue in the future. By rescuing data, revisiting and re-sampling historic research sites, and assessing environmental change over time, we aim to provide greater understanding of how tundra is changing and what the possible drivers of these changes are. Resampling of sites established by Patrick J. Webber between 1964 and 1975 in northern Baffin Island, northern Alaska, and the Rocky Mountains forms a key contribution to the BTF project. Here we report on resampling efforts at each of these locations and initial results of a synthesis effort that finds similarities and differences in change between sites. Results suggest that although shifts in plant community composition are detectable at each location, the magnitude and direction of change differ among locations. Vegetation shifts along soil moisture gradients are apparent at most of the sites resampled. Interestingly, however, wet communities seem to have changed more than dry communities in the Arctic locations, while plant communities at the alpine site appear to be becoming more distinct regardless of soil moisture status. Ecosystem function studies performed in conjunction with plant community change suggest that there has been an increase in plant productivity at most sites resampled, especially in wet and mesic land cover types.
ISAP: ISO Spectral Analysis Package
NASA Astrophysics Data System (ADS)
Ali, Babar; Bauer, Otto; Brauher, Jim; Buckley, Mark; Harwood, Andrew; Hur, Min; Khan, Iffat; Li, Jing; Lord, Steve; Lutz, Dieter; Mazzarella, Joe; Molinari, Sergio; Morris, Pat; Narron, Bob; Seidenschwang, Karla; Sidher, Sunil; Sturm, Eckhard; Swinyard, Bruce; Unger, Sarah; Verstraete, Laurent; Vivares, Florence; Wieprecht, Ecki
2014-03-01
ISAP, written in IDL, simplifies the process of visualizing, subsetting, shifting, rebinning, masking, combining scans with weighted means or medians, filtering, and smoothing Auto Analysis Results (AARs) from post-pipeline processing of the Infrared Space Observatory's (ISO) Short Wavelength Spectrometer (SWS) and Long Wavelength Spectrometer (LWS) data. It can also be applied to PHOT-S and CAM-CVF data, and data from practically any spectrometer. The result of a typical ISAP session is expected to be a "simple spectrum" (a single-valued spectrum which may be resampled to a uniform wavelength separation if desired) that can be further analyzed and measured either with other ISAP functions, native IDL functions, or exported to other analysis packages (e.g., IRAF, MIDAS) if desired. ISAP provides many tools for further analysis, line-fitting, and continuum measurements, such as routines for unit conversions, conversions from wavelength space to frequency space, line and continuum fitting, flux measurement, synthetic photometry, and models such as a zodiacal light model to predict and subtract the dominant foreground at some wavelengths.
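The "simple spectrum" step above amounts to putting an irregularly sampled spectrum onto a uniform wavelength grid. A minimal numpy sketch of that idea follows; it uses plain linear interpolation on hypothetical wavelength/flux arrays, whereas ISAP itself also offers rebinning with weighted means or medians, which this does not reproduce.

```python
import numpy as np

def to_uniform_grid(wavelength, flux, dlam):
    """Resample an irregularly sampled spectrum onto a uniform wavelength grid
    by linear interpolation (illustrative; not ISAP's weighted rebinning)."""
    grid = np.arange(wavelength.min(), wavelength.max(), dlam)
    return grid, np.interp(grid, wavelength, flux)

# Hypothetical scan: non-uniform wavelengths (micron) and fluxes (Jy)
rng = np.random.default_rng(5)
lam = np.sort(rng.uniform(43.0, 45.0, 500))
flx = 1.0 + 0.2 * np.exp(-((lam - 44.0) / 0.05) ** 2)
grid, flux_uniform = to_uniform_grid(lam, flx, dlam=0.01)
```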
Polleux, Julien; Rasp, Matthias; Louban, Ilia; Plath, Nicole; Feldhoff, Armin; Spatz, Joachim P
2011-08-23
Simultaneous synthesis and assembly of nanoparticles that exhibit unique physicochemical properties are critically important for designing new functional devices at the macroscopic scale. In the present study, we report a simple version of block copolymer micellar lithography (BCML) to synthesize gold and titanium dioxide (TiO(2)) nanoarrays by using benzyl alcohol (BnOH) as a solvent. In contrast to toluene, BnOH can lead to the formation of various gold nanopatterns via salt-induced micellization of polystyrene-block-poly(vinylpyridine) (PS-b-P2VP). In the case of titania, the use of BCML with a nonaqueous sol-gel method, the "benzyl alcohol route", enables the fabrication of nanopatterns made of quasi-hexagonally organized particles or parallel wires upon aging a (BnOH-TiCl(4)-PS(846)-b-P2VP(171))-containing solution for four weeks to grow TiO(2) building blocks in situ. This approach was found to depend mainly on the relative lengths of the polymer blocks, which allows nanoparticle-induced micellization and self-assembly during solvent evaporation. Moreover, this versatile route enables the design of uniform and quasi-ordered gold-TiO(2) binary nanoarrays with a precise particle density due to the absence of graphoepitaxy during the deposition of TiO(2) onto gold nanopatterns. © 2011 American Chemical Society
Cummins, Cian; Mokarian-Tabari, Parvaneh; Andreazza, Pascal; Sinturel, Christophe; Morris, Michael A
2016-03-01
Solvothermal vapor annealing (STVA) was employed to induce microphase separation in a lamellar forming block copolymer (BCP) thin film containing a readily degradable block. Directed self-assembly of poly(styrene)-block-poly(d,l-lactide) (PS-b-PLA) BCP films using topographically patterned silicon nitride was demonstrated with alignment over macroscopic areas. Interestingly, we observed lamellar patterns aligned parallel as well as perpendicular (perpendicular microdomains to substrate in both cases) to the topography of the graphoepitaxial guiding patterns. PS-b-PLA BCP microphase separated with a high degree of order in an atmosphere of tetrahydrofuran (THF) at an elevated vapor pressure (at approximately 40-60 °C). Grazing incidence small-angle X-ray scattering (GISAXS) measurements of PS-b-PLA films reveal the through-film uniformity of perpendicular microdomains after STVA. Perpendicular lamellar orientation was observed on both hydrophilic and relatively hydrophobic surfaces with a domain spacing (L0) of ∼32.5 nm. The rapid removal of the PLA microdomains is demonstrated using a mild basic solution for the development of a well-defined PS mask template. GISAXS data reveal the through-film uniformity is retained following wet etching. The experimental results in this article demonstrate highly oriented PS-b-PLA microdomains after a short annealing period and facile PLA removal to form porous on-chip etch masks for nanolithography application.
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-08-12
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor, and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images (the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images captured at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps comparable with those obtained from UAV-images of real flights.
Shen, Chung-Wei; Chen, Yi-Hau
2018-03-13
We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called the Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, does not require actually resampling the data and hence is computationally convenient. Compared with existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
ION PRODUCING MECHANISM (ARC EXTERNAL TO BLOCK)
Brobeck, W.H.
1958-09-01
This patent pertains to an ion producing mechanism employed in a calutron which has the decided advantage of an increased amount of ionization effectuated by the arc, and a substantially uniform arc in point of time, in arc location, and along the arc length. The unique features of the disclosed ion source lie in the specific structural arrangement of the source block, gas ionizing passage, filament shield, and filament whereby the arc is established both within the ionizing passage and immediately outside the exit of the ionizing passage at the block face.
Brandon M. Collins; Richard G. Everett; Scott L. Stephens
2011-01-01
We re-sampled areas included in an unbiased 1911 timber inventory conducted by the U.S. Forest Service over a 4000 ha study area. Over half of the re-sampled area burned in relatively recent management- and lightning-ignited fires. This allowed for comparisons of both areas that have experienced recent fire and areas with no recent fire, to the same areas historically...
Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
Semi-global matching is a well-known stereo matching algorithm in the photogrammetric and computer vision communities, and epipolar images are assumed as its input. Unlike the case of frame cameras, the epipolar geometry of linear array scanners is not a straight line. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method that works without this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering the stereo pairs, and the original images are divided into small tiles. By omitting the need for extra information, the speed of the matching algorithm is increased and the memory requirement is decreased. Our experiments on a GeoEye-1 stereo pair captured over Qom city in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.
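For readers who want to experiment with the feature-based idea, below is a rough frame-camera analogue in Python/OpenCV: matched SIFT features alone (no RPCs, sensor model, or ground control points) drive an uncalibrated rectification. This is only a sketch under that assumption; the authors' method handles linear-array (pushbroom) geometry tile by tile, which a single pair of homographies cannot fully capture.

```python
import cv2
import numpy as np

def approx_epipolar_pair(img_l, img_r):
    """Rough epipolar resampling from matched features only (no RPCs, sensor
    model, or GCPs): estimate F from SIFT matches, then rectify with
    uncalibrated homographies. A frame-camera analogue, not the tile-wise
    pushbroom handling of the paper."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_l, None)
    k2, d2 = sift.detectAndCompute(img_r, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    good = inliers.ravel() == 1
    size = (img_l.shape[1], img_l.shape[0])
    _, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[good], pts2[good], F, size)
    return cv2.warpPerspective(img_l, H1, size), cv2.warpPerspective(img_r, H2, size)
```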
Janssen, Steve M J; Chessa, Antonio G; Murre, Jaap M J
2007-10-01
The reminiscence bump is the effect that people recall more personal events from early adulthood than from childhood or later adulthood. The bump has been examined extensively. However, the question of whether the bump is caused by differential encoding or re-sampling is still unanswered. To examine this issue, participants were asked to name their three favourite books, movies, and records. Furthermore, they were asked when they first encountered them. We compared the temporal distributions and found that they all showed recency effects and reminiscence bumps. The distribution of favourite books had the largest recency effect and the distribution of favourite records had the largest reminiscence bump. We can explain these results by the difference in rehearsal. Books are read two or three times, movies are watched more frequently, whereas records are listened to numerous times. The results suggest that differential encoding initially causes the reminiscence bump and that re-sampling increases the bump further.
Thermoreversible networks for moldable photo-responsive elastomers (Presentation Recording)
NASA Astrophysics Data System (ADS)
Kornfield, Julia A.; Kurji, Zuleikha
2015-10-01
Soft-solids that retain the responsive optical anisotropy of liquid crystals (LC) can be used as mechano-optical, electro-optical and electro-mechanical elements. We use self-assembly of block copolymers to create reversible LC gels and elastomers that flow at elevated temperatures and physically cross link upon cooling. In the melt, they can be spun, coated or molded. Segregation of the end-blocks forms uniform and uniformly spaced crosslinks. Matched sets of block copolymers are synthesized from a single "prepolymer." Specifically, we begin with polymers having polystyrene (PS) end blocks and a poly(1,2-butadiene) midblock. The pendant vinyl groups along the backbone of the midblock are used to graft mesogens, converting it to a side-group LC polymer (SGLCP). In the present case, cyanobiphenyl groups are used as the nonphotoresponsive mesogens and azobenzene groups are used as photoresponsive mesogens. Here we show that matched pairs of block copolymers, with and without photo-responsive mesogens, provide model systems in which the optical density can be adjusted while holding other properties fixed (cross-link density, modulus, birefringence, isotropic-nematic transition temperature). For example, a triblock in which the SGLCP block has 95% cyanobiphenyl and 5% azo side groups is miscible with one having 100% cyanobiphenyl side groups. Simply blending the two gives a series of LC elastomers that have from 0 to 5% azo, while having all other physical properties matched. Results will be presented that show the outcomes of this approach to systematic and largely independent control of optical density and photo-mechanical sensitivity.
Dense blocks of energetic ions driven by multi-petawatt lasers
Weng, S. M.; Liu, M.; Sheng, Z. M.; Murakami, M.; Chen, M.; Yu, L. L.; Zhang, J.
2016-01-01
Laser-driven ion accelerators have the advantages of compact size, high density, and short bunch duration over conventional accelerators. Nevertheless, it is still challenging to simultaneously enhance the yield and quality of laser-driven ion beams for practical applications. Here we propose a scheme to address this challenge via the use of emerging multi-petawatt lasers and a density-modulated target. The density-modulated target permits its ions to be uniformly accelerated as a dense block by laser radiation pressure. In addition, the beam quality of the accelerated ions is remarkably improved by embedding the target in a thick enough substrate, which suppresses hot electron refluxing and thus alleviates plasma heating. Particle-in-cell simulations demonstrate that almost all ions in a solid-density plasma of a few microns can be uniformly accelerated to about 25% of the speed of light by a laser pulse at an intensity around 10²² W/cm². The resulting dense block of energetic ions may drive fusion ignition and, more generally, create matter with unprecedentedly high energy density. PMID:26924793
NASA Astrophysics Data System (ADS)
Wicaksono, Pramaditya; Salivian Wisnu Kumara, Ignatius; Kamal, Muhammad; Afif Fauzan, Muhammad; Zhafarina, Zhafirah; Agus Nurswantoro, Dwi; Noviaris Yogyantoro, Rifka
2017-12-01
Although spectrally different, seagrass species may not be mappable from multispectral remote sensing images because of the limited spectral resolution of such images. Therefore, it is important to quantitatively assess the possibility of mapping seagrass species using multispectral images by resampling seagrass species spectra to multispectral bands. Seagrass species spectra were measured on harvested seagrass leaves. The spectral resolution of the multispectral images used in this research was adopted from WorldView-2, Quickbird, Sentinel-2A, ASTER VNIR, and Landsat 8 OLI. These images are widely available and can serve as good representatives and baselines for previous or future remote sensing images. The seagrass species considered in this research are Enhalus acoroides (Ea), Thalassodendron ciliatum (Tc), Thalassia hemprichii (Th), Cymodocea rotundata (Cr), Cymodocea serrulata (Cs), Halodule uninervis (Hu), Halodule pinifolia (Hp), Syringodium isoetifolium (Si), Halophila ovalis (Ho), and Halophila minor (Hm). The multispectral resampling analysis indicates that the resampled spectra exhibit a shape and pattern similar to the original spectra but are less precise, and they lose the unique absorption features of the seagrass species. Relying on spectral bands alone, multispectral images are not effective in mapping these seagrass species individually, as shown by the poor and inconsistent results of the Spectral Angle Mapper (SAM) classification technique when classifying seagrass species using the species spectra as pure endmembers. Only Sentinel-2A produced an acceptable classification result using SAM.
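The spectral-resampling step described above can be illustrated with a small sketch: averaging a field spectrum within each broad band interval (a boxcar response). The band intervals and the synthetic spectrum below are illustrative assumptions, not the sensor response functions or field data used in the study.

```python
import numpy as np

def resample_to_bands(wavelength_nm, reflectance, band_ranges):
    """Average reflectance within each band interval (a boxcar response);
    real sensors have non-rectangular relative spectral response functions."""
    return np.array([reflectance[(wavelength_nm >= lo) & (wavelength_nm <= hi)].mean()
                     for lo, hi in band_ranges])

# Illustrative visible/NIR intervals (nm) and a synthetic vegetation-like spectrum
bands = [(458, 523), (543, 578), (650, 680), (785, 900)]
wl = np.arange(400.0, 1000.0, 1.0)
spectrum = 0.05 + 0.30 / (1.0 + np.exp(-(wl - 710.0) / 15.0))   # red-edge shape
band_values = resample_to_bands(wl, spectrum, bands)
```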
NASA Astrophysics Data System (ADS)
Qi, Juanjuan; Chen, Ke; Zhang, Shuhao; Yang, Yun; Guo, Lin; Yang, Shihe
2017-03-01
The controllable self-assembly of nanosized building blocks into larger specific structures can provide an efficient method of synthesizing novel materials with excellent properties. The self-assembly of nanocrystals by assisted means is becoming an extremely active area of research, because it provides a method of producing large-scale advanced functional materials with potential applications in the areas of energy, electronics, optics, and biologics. In this study, we applied an efficient strategy, namely, the use of ‘pressure control’ to the assembly of silver sulfide (Ag2S) nanospheres with a diameter of approximately 33 nm into large-scale, uniform Ag2S sub-microspheres with a size of about 0.33 μm. More importantly, this strategy realizes the online control of the overall reaction system, including the pressure, reaction time, and temperature, and could also be used to easily fabricate other functional materials on an industrial scale. Moreover, the thermodynamics and kinetics parameters for the thermal decomposition of silver diethyldithiocarbamate (Ag(DDTC)) are also investigated to explore the formation mechanism of the Ag2S nanosized building blocks which can be assembled into uniform sub-micron scale architecture. As a method of producing sub-micron Ag2S particles by means of the pressure-controlled self-assembly of nanoparticles, we foresee this strategy being an efficient and universally applicable option for constructing other new building blocks and assembling novel and large functional micromaterials on an industrial scale.
NASA Astrophysics Data System (ADS)
Beckers, J.; Weerts, A.; Tijdeman, E.; Welles, E.; McManamon, A.
2013-12-01
To provide reliable and accurate seasonal streamflow forecasts for water resources management, several operational hydrologic agencies and hydropower companies around the world use the Extended Streamflow Prediction (ESP) procedure. The ESP in its original implementation does not accommodate any additional information that the forecaster may have about expected deviations from climatology in the near future. Several attempts have been made to improve the skill of the ESP forecast, especially for areas affected by teleconnections (e.g., ENSO, PDO), via selection (Hamlet and Lettenmaier, 1999) or weighting schemes (Werner et al., 2004; Wood and Lettenmaier, 2006; Najafi et al., 2012). A disadvantage of such schemes is that they lead to a reduction of the signal-to-noise ratio of the probabilistic forecast. To overcome this, we propose a resampling method conditional on climate indices to generate the meteorological time series used in the ESP. The method can be used to generate a large number of meteorological ensemble members in order to improve the statistical properties of the ensemble. The effectiveness of the method was demonstrated in a real-time operational hydrologic seasonal forecast system for the Columbia River basin operated by the Bonneville Power Administration. The forecast skill of the k-nn resampler was tested against the original ESP for three basins at the long-range seasonal time scale. The BSS and CRPSS were used to compare the results to those of the original ESP method. Positive forecast skill scores were found for the resampler method conditioned on different indices for the prediction of spring peak flows in the Dworshak and Hungry Horse basins. For the Libby Dam basin, however, no improvement in skill was found. The proposed resampling method is a promising practical approach that can add skill to ESP forecasts at the seasonal time scale. Further improvement is possible by fine-tuning the method and selecting the most informative climate indices for the region of interest.
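A minimal sketch of the k-nn resampling idea is given below: historical years whose climate-index value is closest to the current value are resampled with distance-based weights to build a large conditioned ensemble. The inverse-rank weighting and the synthetic index series are assumptions for illustration, not the configuration used for the Columbia River system.

```python
import numpy as np

def knn_resample_years(index_by_year, index_now, k, n_members, rng=None):
    """Pick the k historical years whose climate-index value is closest to the
    current value, then resample them with inverse-rank weights to build a
    large conditioned ensemble of candidate meteorological years."""
    rng = rng or np.random.default_rng()
    years = np.array(sorted(index_by_year))
    values = np.array([index_by_year[y] for y in years])
    nearest = np.argsort(np.abs(values - index_now))[:k]
    w = 1.0 / np.arange(1, k + 1)                # weight by closeness rank
    w /= w.sum()
    return rng.choice(years[nearest], size=n_members, p=w)

# Hypothetical ENSO-like index by water year
rng = np.random.default_rng(4)
history = {1985 + i: v for i, v in enumerate(rng.standard_normal(30))}
ensemble_years = knn_resample_years(history, index_now=1.2, k=8, n_members=500)
```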
Paleosecular Variation and Time-Averaged Field Behavior: Global and Regional Signatures
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Cromwell, G.; Tauxe, L.; Constable, C.
2012-12-01
We use an updated global dataset of directional and intensity data from lava flows to investigate time-averaged field (TAF) and paleosecular variation (PSV) signatures regionally and globally. The data set includes observations from the past 10 Ma, but we focus our investigations on the field structure over the past 5 Ma, in particular during the Brunhes and Matuyama. We restrict our analyses to sites with at least 5 samples (all of which have been stepwise demagnetized), and for which the estimate of the Fisher precision parameter, k, is at least 50. The data set comprises 1572 sites from the past 5 Ma that span latitudes 78°S to 71°N; of these ~40% are from the Brunhes chron and ~20% are from the Matuyama chron. Age control at the site level is variable because radiometric dates are available for only about one third of our sites. New TAF models for the Brunhes show longitudinal structure. In particular, high latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling and the limited age information available for many sites. Results from Hawaii indicate that resampling of the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors, leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but a PSV estimate (virtual geomagnetic pole dispersion) that is increased relative to that obtained from the unevenly sampled data. The global distribution of sites in our dataset allows us to investigate possible hemispheric asymmetries in field structure, in particular differences between north and south high latitude field behavior and low latitude differences between the Pacific and Atlantic hemispheres.
An Assessment of the Uniform Funding Policy of DoD Directive 3200.11.
1980-09-01
Fragments of the report documentation page: an unpublished master's thesis is cited (GSM/SM/73-10, AFIT/EN, Wright-Patterson AFB OH 45433, 7 January 1974), along with Horngren, Charles T., Cost Accounting: A Management... Keywords: Uniform Funding Policy; Test Facilities; Test and Evaluation; Cost Accounting; Accounting. Abstract fragment: ...segregated from overhead as a cost accounting device in both Government and industry. Historically, this distinction has merely aided distribution of total...
Controlled Synthesis of Millimeter-Long Silicon Nanowires with Uniform Electronic Properties
Park, Won Il; Zheng, Gengfeng; Jiang, Xiaocheng; Tian, Bozhi; Lieber, Charles M.
2009-01-01
We report the nanocluster-catalyzed growth of ultra-long and highly-uniform single-crystalline silicon nanowires (SiNWs) with millimeter-scale lengths and aspect ratios up to ca. 100,000. The average SiNW growth rate using disilane (Si2H6) at 400 °C was 31 µm/min, while the growth rate determined for silane (SiH4) reactant under similar growth conditions was 130 times lower. Transmission electron microscopy studies of millimeter-long SiNWs with diameters of 20–80 nm show that the nanowires grow preferentially along the <110> direction independent of diameter. In addition, ultra-long SiNWs were used as building blocks to fabricate one-dimensional arrays of field-effect transistors (FETs) consisting of ca. 100 independent devices per nanowire. Significantly, electrical transport measurements demonstrated that the millimeter-long SiNWs had uniform electrical properties along the entire length of wires, and each device can behave as a reliable FET with an on-state current, threshold voltage, and transconductance values (average ± 1 standard deviation) of 1.8 ± 0.3 µA, 6.0 ± 1.1 V, 210 ± 60 nS, respectively. Electronically-uniform millimeter-long SiNWs were also functionalized with monoclonal antibody receptors, and used to demonstrate multiplexed detection of cancer marker proteins with a single nanowire. The synthesis of structurally- and electronically-uniform ultra-long SiNWs may open up new opportunities for integrated nanoelectronics, and could serve as unique building blocks linking integrated structures from the nanometer through millimeter length scales. PMID:18710294
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J; Dossa, D; Gokhale, M
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only to GPU-accelerated processing. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation
2010-01-01
classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). The first two... NCV procedure that eliminates overlap between test sets altogether. The procedure samples k disjoint test sets that will be used for evaluation... [pseudocode fragment: sample (propLabeled * S) nodes from trainPool; inferenceSet = network − trainSet; F = F ∪ <trainSet, testSet, inferenceSet>; end for; output: F] NCV addresses...
NASA Astrophysics Data System (ADS)
Nigro, Fabrizio; Renda, Pietro; Favara, Rocco
2010-05-01
Drainage basins that develop in crustal sectors subjected to uplift and tilting show two distinct morphological evolutions, depending on whether the drainage is antecedent or subsequent to the tectonic process. If the tectonic process begins together with a geomorphic cycle, the main valleys of the drainage basins develop longitudinally, following the tilting direction to which the crustal block is subjected. If, instead, the non-uniform vertical movement affects a sector already crossed by a hydrographic network, the network pattern can be influenced in various ways. A crustal block simultaneously subjected to uplift and tilting will, at the end of the process, contain more elevated and less elevated sectors. Ground erosion responds to this non-uniform vertical movement, and because the movement develops gradually over time, landforms such as drainage-basin valleys undergo analogous variations. Where valleys pre-exist, their slopes are tilted as well, and one characteristic of relief evolution associated with the uplift and tilting of crustal blocks is the progressive asymmetry of the two slopes of a valley. Uplift and tilting of the block progressively produce a difference in inclination between the slopes of the incising valley: the progressive incision and migration of the valley axis generates slopes whose crests stand at different mean elevations on the right and left sides. The erosional process accompanying uplift and tilting of crustal blocks is characterized by a higher erosion rate in the headward sectors of the more strongly uplifted slope. Likewise, the migration of the river in the tilting direction causes a higher erosion rate along one of the banks. The overall morphometric result can be slopes that, in cross-section, form polylines approximating circular arcs with different radii of curvature. In map view, the evolution of the drainage network is also marked by a different development of the channels of different orders; in particular, on the more uplifted slope the drainage network is more branched, with more stream orders forming than on the opposite slope. If the crustal block undergoing uplift and tilting is eroded by several hydrographic networks forming separate drainage basins then, at the end of the process and in the absence of large-scale deformation such as folding and faulting, the slope asymmetry of each main valley is preserved; however, for a tilting direction roughly orthogonal to the river directions, the main rivers show different altimetric developments. If internal deformation such as folding and faulting accompanies the tilting and uplift of the crustal block, the slope asymmetry, as well as the altimetric development of the main valleys, may not always be clearly evident. Following these concepts, we recognized non-uniform uplift and large-scale recent faulting in northern Sicily (central Mediterranean) from drainage-network pattern analysis, slope geometry, and structural data. The data sets were compared with uplift-rate and seismicity distributions, allowing us to recognize the different crustal blocks into which the northern Sicily chain may be divided.
Each chain block reflects a characteristic morphometric pattern of its drainage basins. The morphostructural setting, the distribution of seismicity, and the orientation of the recent faults indicate that the main neotectonic narrow deformation zones bounding the crustal blocks trend NW-SE, NE-SW, and W-E.
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for implementing the Sequential Importance Resampling Filter (SIRF). The first architecture is a two-step sequential PF machine, where particle sampling, weighting, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function; this decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture, targeting a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
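To make the sample-weight-resample cycle concrete, here is a short software sketch of one SIR step in Python, including a piecewise-linear stand-in for the exponential likelihood of the kind a hardware weight stage might use. It is an illustration of the filter structure, not a model of the VHDL architectures themselves.

```python
import numpy as np

def piecewise_linear_exp(x, knots=(0.0, 1.0, 2.0, 4.0)):
    """Cheap piecewise-linear stand-in for exp(-x), the kind of approximation
    a hardware weight stage might use instead of a full exponential."""
    return np.interp(x, knots, np.exp(-np.asarray(knots)))

def sirf_step(particles, weights, z, motion_std, meas_std, rng):
    """One sample-weight-resample cycle of a 1-D SIR particle filter."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)   # sample
    err = 0.5 * ((z - particles) / meas_std) ** 2
    weights = weights * piecewise_linear_exp(err)                          # weight
    weights /= weights.sum()
    estimate = np.sum(weights * particles)                                 # output
    idx = rng.choice(len(particles), size=len(particles), p=weights)       # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles)), estimate

rng = np.random.default_rng(0)
p, w = rng.normal(0.0, 1.0, 1000), np.full(1000, 1e-3)
p, w, est = sirf_step(p, w, z=0.3, motion_std=0.1, meas_std=0.2, rng=rng)
```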
NASA Astrophysics Data System (ADS)
Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo
2018-02-01
An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interferences. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronous sampled vibration signal of the fault bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of fault bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
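The angular-domain resampling step (computed order tracking) can be sketched as follows: the vibration signal is re-interpolated at uniform shaft-angle increments using the rotating phase recovered from the current signal. The signal parameters below are hypothetical, and the adaptive stochastic resonance stage is not shown.

```python
import numpy as np

def resample_to_angle(t, vib, t_phase, phase_rad, samples_per_rev=256):
    """Computed order tracking: using the rotating phase recovered from the
    stator current, interpolate the time-sampled vibration signal at uniform
    shaft-angle increments so bearing orders stay fixed despite speed drift."""
    phase_at_vib = np.interp(t, t_phase, phase_rad)      # phase at vibration samples
    dphi = 2 * np.pi / samples_per_rev
    uniform_phase = np.arange(phase_at_vib[0], phase_at_vib[-1], dphi)
    return uniform_phase, np.interp(uniform_phase, phase_at_vib, vib)

# Hypothetical 20 kHz vibration record with a drifting shaft speed
fs = 20_000
t = np.arange(0.0, 2.0, 1.0 / fs)
phase = 2 * np.pi * (20.0 * t + 2.0 * np.sin(0.5 * t))   # fluctuating speed
rng = np.random.default_rng(3)
vib = np.sin(4.3 * phase) + 0.5 * rng.standard_normal(t.size)   # one bearing "order"
angle, vib_angular = resample_to_angle(t, vib, t, phase)
```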
Modified Polar-Format Software for Processing SAR Data
NASA Technical Reports Server (NTRS)
Chen, Curtis
2003-01-01
HMPF is a computer program that implements a modified polar-format algorithm for processing data from spaceborne synthetic-aperture radar (SAR) systems. Unlike prior polar-format processing algorithms, this algorithm is based on the assumption that the radar signal wavefronts are spherical rather than planar. The algorithm provides for resampling of SAR pulse data from slant range to radial distance from the center of a reference sphere that is nominally the local Earth surface. Then, invoking the projection-slice theorem, the resampled pulse data are Fourier-transformed over radial distance, arranged in the wavenumber domain according to the acquisition geometry, resampled to a Cartesian grid, and inverse-Fourier-transformed. The result of this process is the focused SAR image. HMPF, and perhaps other programs that implement variants of the algorithm, may give better accuracy than do prior algorithms for processing strip-map SAR data from high altitudes and may give better phase preservation relative to prior polar-format algorithms for processing spotlight-mode SAR data.
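A heavily simplified sketch of the final regridding-and-inversion step is shown below: pulse spectra on a polar wavenumber raster are interpolated onto a Cartesian (kx, ky) grid and inverse-FFT'd to form an image. The spherical-wavefront correction, motion compensation, and windowing that HMPF performs are all omitted, and the function and parameter names are hypothetical.

```python
import numpy as np
from scipy.interpolate import griddata

def polar_format_focus(pulse_spectra, k_radial, angles_rad, n_grid=256):
    """Toy polar-format step: complex pulse spectra of shape
    (len(angles_rad), len(k_radial)), sampled on a polar wavenumber raster
    (one radial line per pulse), are regridded onto a Cartesian (kx, ky)
    grid and inverse-FFT'd to form an image."""
    KR, TH = np.meshgrid(k_radial, angles_rad)            # (pulses, range bins)
    kx, ky = (KR * np.cos(TH)).ravel(), (KR * np.sin(TH)).ravel()
    gx = np.linspace(kx.min(), kx.max(), n_grid)
    gy = np.linspace(ky.min(), ky.max(), n_grid)
    GX, GY = np.meshgrid(gx, gy)
    cart = griddata((kx, ky), pulse_spectra.ravel(), (GX, GY),
                    method='linear', fill_value=0.0)
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(cart)))
```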
Residential water demand model under block rate pricing: A case study of Beijing, China
NASA Astrophysics Data System (ADS)
Chen, H.; Yang, Z. F.
2009-05-01
In many cities, the mismatch between water supply and water demand has become a critical problem because of worsening water shortages and increasing water demand. A uniform price for residential water cannot promote efficient water allocation. In China, block water pricing will be put into practice in the future, but the outcome of such a regulatory measure is unpredictable without theoretical support. In this paper, residential water is classified by the volume of water usage according to economic rules, and the water in different blocks is treated as different kinds of goods. A model based on the extended linear expenditure system (ELES) is constructed to simulate the relationship between block water prices and water demand, providing theoretical support for decision-makers. Finally, the proposed model is used to simulate residential water demand under block rate pricing in Beijing.
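For reference, the demand equation underlying an ELES-type model can be written as below. The symbols (subsistence quantity γ_i, marginal budget share β_i, block price p_i, income Y) are generic ELES notation and are not taken from the paper.

```latex
% Extended linear expenditure system (ELES), generic form:
% q_i demanded quantity in block i, p_i block price,
% \gamma_i subsistence quantity, \beta_i marginal budget share, Y income.
q_i \;=\; \gamma_i \;+\; \frac{\beta_i}{p_i}\Bigl(Y - \sum_{j} p_j\,\gamma_j\Bigr),
\qquad \sum_{j}\beta_j \le 1 .
```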
2010-11-04
In this image from NASA's Mars Odyssey, the majority of the surface appears uniform with a few small hills; the region of fractured blocks stands out as something different, perhaps remnants of crater ejecta or an area of a different type of rock.
Development of guidelines for the installation of marked crosswalks.
DOT National Transportation Integrated Search
2004-01-01
The Manual on Uniform Traffic Control Devices (MUTCD) provides little guidance on the installation of marked crosswalks, especially at locations other than intersections, i.e., mid-block locations. Crosswalks have typically been installed and designe...
Koh, Haeng-Deog; Kim, Mi-Jeong
2016-01-01
A photo-crosslinked polystyrene (PS) thin film is investigated as a potential guiding sub-layer for polystyrene-block-poly(methyl methacrylate) block copolymer (BCP) cylindrical nanopattern formation via topographic directed self-assembly (DSA). When compared to a non-crosslinked PS brush sub-layer, the photo-crosslinked PS sub-layer provided longer correlation lengths of the BCP nanostructure, resulting in a highly uniform DSA nanopattern with a low number of BCP dislocation defects. Depending on the thickness of the sub-layer used, parallel or orthogonal orientations of DSA nanopattern arrays were obtained that covered the entire surface of patterned Si substrates, including both trench and mesa regions. The design of DSA sub-layers and guide patterns, such as hardening the sub-layer by photo-crosslinking, nano-structuring on mesas, and the relation between trench/mesa width and the BCP equilibrium period, was explored with a view to developing defect-reduced DSA lithography technology. PMID:28773768
NASA Astrophysics Data System (ADS)
Wang, Fangzhou; Chen, Wanjun; Wang, Zeheng; Sun, Ruize; Wei, Jin; Li, Xuan; Shi, Yijun; Jin, Xiaosheng; Xu, Xiaorui; Chen, Nan; Zhou, Qi; Zhang, Bo
2017-05-01
To achieve a uniform low turn-on voltage (VT) and high reverse blocking capability, an AlGaN/GaN power field-effect rectifier with a trench heterojunction anode (THA-FER) is proposed and investigated; this work is based on simulation only and includes no experimental results. VT has a low saturation value when the trench height (HT) exceeds 300 nm, confirming that VT can be controlled accurately without precisely controlling HT in the THA-FER. Meanwhile, a tall anode trench reduces the reverse leakage current and yields a high breakdown voltage (VB). A superior Baliga figure of merit (BFOM = VB²/Ron,sp, where Ron,sp is the specific on-resistance) of 1228 MW/cm² shows that the THA-FER caters to the demands of high-efficiency GaN power applications.
A Downloadable Three-Dimensional Virtual Model of the Visible Ear
Wang, Haobing; Merchant, Saumil N.; Sorensen, Mads S.
2008-01-01
Purpose To develop a three-dimensional (3-D) virtual model of a human temporal bone and surrounding structures. Methods A fresh-frozen human temporal bone was serially sectioned and digital images of the surface of the tissue block were recorded (the ‘Visible Ear’). The image stack was resampled at a final resolution of 50 × 50 × 50/100 µm/voxel, registered in custom software and segmented in PhotoShop® 7.0. The segmented image layers were imported into Amira® 3.1 to generate smooth polygonal surface models. Results The 3-D virtual model presents the structures of the middle, inner and outer ears in their surgically relevant surroundings. It is packaged within a cross-platform freeware, which allows for full rotation, visibility and transparency control, as well as the ability to slice the 3-D model open at any section. The appropriate raw image can be superimposed on the cleavage plane. The model can be downloaded at https://research.meei.harvard.edu/Otopathology/3dmodels/ PMID:17124433
NASA Astrophysics Data System (ADS)
Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao
2018-02-01
Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.
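The core of the resampling step is re-interpolating the detector-frame series at uniform source-frame times, so that the binary and detector motion described by a delay model is removed. The sketch below illustrates this on a toy sinusoid with roughly Sco X-1-like orbital numbers; the real pipeline operates on short Fourier transforms rather than raw strain, so this is only a conceptual illustration.

```python
import numpy as np

def resample_to_source_frame(t_det, x_det, delay_fn, fs_out):
    """Re-interpolate a detector-frame time series onto uniform source-frame
    times, removing the phase modulation described by delay_fn, where
    t_source = t_det - delay_fn(t_det)."""
    t_source = t_det - delay_fn(t_det)
    t_uniform = np.arange(t_source[0], t_source[-1], 1.0 / fs_out)
    return t_uniform, np.interp(t_uniform, t_source, x_det)

# Toy example: roughly Sco X-1-like projected radius (light-s) and period (s)
a_p, P = 1.44, 68023.7
delay = lambda t: a_p * np.sin(2 * np.pi * t / P)
fs = 16.0
t = np.arange(0.0, 4096.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * (t - delay(t)))   # 5 Hz tone with orbit-modulated phase
t_src, x_src = resample_to_source_frame(t, x, delay, fs_out=fs)
```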
Recommended GIS Analysis Methods for Global Gridded Population Data
NASA Astrophysics Data System (ADS)
Frye, C. E.; Sorichetta, A.; Rose, A.
2017-12-01
When using geographic information systems (GIS) to analyze gridded, i.e., raster, population data, analysts need a detailed understanding of several factors that affect raster data processing and, thus, the accuracy of the results. Global raster data are most often provided in an unprojected state, usually in the WGS 1984 geographic coordinate system. Most GIS functions and tools evaluate data based on overlay relationships (area) or proximity (distance). Area and distance for global raster data can be calculated either directly on the various Earth ellipsoids or after transforming the data to equal-area or equidistant projected coordinate systems so that all locations are analyzed equally. However, unlike when projecting vector data, not all projected coordinate systems can support such analyses equally, and the process of transforming raster data from one coordinate space to another often results in an unmanaged loss of data through a process called resampling. Resampling determines which values to use in the result dataset given an imperfect locational match in the input dataset(s). Cell size or resolution, registration, resampling method, statistical type, and whether the raster represents continuous or discrete information potentially influence the quality of the result. Gridded population data represent estimates of population in each raster cell, and this presentation will provide guidelines for accurately transforming population rasters for analysis in GIS. Resampling impacts the display of high resolution global gridded population data, and we will discuss how to properly handle pyramid creation using the Aggregate tool with the sum option to create overviews for mosaic datasets.
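One practical consequence for population rasters is that any resolution change should conserve counts. A minimal sketch of a sum-preserving aggregation (in contrast to bilinear or cubic resampling, which would not conserve totals) is shown below on a hypothetical 1 km grid.

```python
import numpy as np

def aggregate_population(counts, factor):
    """Aggregate a population-count raster to a coarser grid by summing
    factor x factor blocks, so total population is preserved."""
    h = (counts.shape[0] // factor) * factor
    w = (counts.shape[1] // factor) * factor
    blocks = counts[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

# Hypothetical 1 km counts aggregated to a 10 km grid; totals match exactly
rng = np.random.default_rng(6)
pop_1km = rng.poisson(5.0, size=(400, 400)).astype(float)
pop_10km = aggregate_population(pop_1km, factor=10)
assert np.isclose(pop_10km.sum(), pop_1km.sum())
```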
A comparison of resampling schemes for estimating model observer performance with small ensembles
NASA Astrophysics Data System (ADS)
Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.
2017-09-01
In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.
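As an illustration of the kind of observer being resampled, the sketch below computes a channelized Hotelling observer template and an AUC estimate from two image ensembles. The random "channels" and the resubstitution scoring are simplifications introduced here; in the study the scoring is wrapped in the leave-one-out or half-train/half-test splits being compared.

```python
import numpy as np

def cho_auc(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer: project images onto channels, form the
    Hotelling template from the pooled within-class channel covariance, and
    estimate AUC as the fraction of signal scores exceeding noise scores."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels   # (N, n_channels)
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))   # template
    ts, tn = vs @ w, vn @ w
    return np.mean(ts[:, None] > tn[None, :])

# Illustration with random "channels" and synthetic 32x32 image ensembles
rng = np.random.default_rng(1)
channels = rng.standard_normal((32 * 32, 4))
noise = rng.standard_normal((200, 32, 32))
signal = rng.standard_normal((200, 32, 32)) + 0.05               # weak flat signal
print(cho_auc(signal, noise, channels))
```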
An Angular Method with Position Control for Block Mesh Squareness Improvement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, J.; Stillman, D.
We optimize a target function defined by angular properties, with a position-control term, for a basic stencil on a block-structured mesh to improve element squareness in 2D and 3D. Comparison with the condition number method shows that, besides achieving a mesh quality regarding orthogonality similar to that of the former method, the new method converges faster and provides more uniform global mesh spacing in our numerical tests.
Increasing circular synthetic aperture sonar resolution via adapted wave atoms deconvolution.
Pailhas, Yan; Petillot, Yvan; Mulgrew, Bernard
2017-04-01
Circular Synthetic Aperture Sonar (CSAS) processing coherently combines Synthetic Aperture Sonar (SAS) data acquired along a circular trajectory. This approach has a number of advantages; in particular, it maximises the aperture length of a SAS system, producing very high resolution sonar images. CSAS image reconstruction using back-projection algorithms, however, introduces an asymmetry in the impulse response as the imaged point moves away from the centre of the acquisition circle. This paper proposes a sampling scheme for CSAS image reconstruction that allows every point within the full field of view of the system to be considered as the centre of a virtual CSAS acquisition. As a direct consequence of using the proposed resampling scheme, the point spread function (PSF) is uniform across the full CSAS image. Closed-form solutions for the CSAS PSF are derived analytically, both in the image and the Fourier domain. The thorough knowledge of the PSF leads naturally to the proposed adapted wave-atom basis for CSAS image decomposition. The wave-atom deconvolution is successfully applied to simulated data, increasing the image resolution by reducing the PSF energy leakage.
A Robust Kalman Framework with Resampling and Optimal Smoothing
Kautz, Thomas; Eskofier, Bjoern M.
2015-01-01
The Kalman filter (KF) is an extremely powerful and versatile tool for signal processing that has been applied extensively in various fields. We introduce a novel Kalman-based analysis procedure that encompasses robustness towards outliers, Kalman smoothing and real-time conversion from non-uniformly sampled inputs to a constant output rate. These features have been mostly treated independently, so that not all of their benefits could be exploited at the same time. Here, we present a coherent analysis procedure that combines the aforementioned features and their benefits. To facilitate utilization of the proposed methodology and to ensure optimal performance, we also introduce a procedure to calculate all necessary parameters. Thereby, we substantially expand the versatility of one of the most widely-used filtering approaches, taking full advantage of its most prevalent extensions. The applicability and superior performance of the proposed methods are demonstrated using simulated and real data. The possible areas of applications for the presented analysis procedure range from movement analysis over medical imaging, brain-computer interfaces to robot navigation or meteorological studies. PMID:25734647
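A minimal sketch of the non-uniform-input, constant-output-rate aspect is given below: a constant-velocity Kalman filter processes measurement and output events in time order, predicting over whatever interval separates them. The outlier gating and optimal (RTS) smoothing of the full procedure are omitted, and all parameter values are illustrative assumptions.

```python
import numpy as np

def kf_resample(t_in, z_in, t_out, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter: ingest non-uniformly timed scalar
    measurements (t_in, z_in) and emit state predictions at the uniform
    output times t_out. Outlier gating and RTS smoothing are omitted."""
    x, P = np.zeros(2), np.eye(2)                      # state: [value, rate]
    H, R = np.array([[1.0, 0.0]]), np.array([[r]])
    events = sorted([(t, z, True) for t, z in zip(t_in, z_in)]
                    + [(t, None, False) for t in t_out], key=lambda e: e[0])
    out, t_prev = [], events[0][0]
    for t, z, is_meas in events:
        dt = t - t_prev
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x, P = F @ x, F @ P @ F.T + Q                  # predict to event time
        if is_meas:                                    # update on measurements
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + (K @ (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        else:                                          # uniform-rate output
            out.append(x[0])
        t_prev = t
    return np.array(out)
```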
Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.
Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis
2017-07-01
T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard against which to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method, is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are analyzed separately. Also, a nonparametric hypothesis test, based on Bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding sensitivity, using the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting, designed to analyze the performance, guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
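The bootstrap comparison mentioned above can be sketched as a simple nonparametric paired test: records are resampled with replacement and the difference in detection rate between the two pipelines is re-evaluated on each replicate. The per-record detection arrays below are synthetic placeholders, not the study's data.

```python
import numpy as np

def paired_bootstrap(hits_ref, hits_new, n_boot=10_000, rng=None):
    """Resample records with replacement and count how often the new method's
    detection rate fails to exceed the reference, giving an approximate
    one-sided p-value for 'the new method detects more'."""
    rng = rng or np.random.default_rng()
    hits_ref, hits_new = np.asarray(hits_ref), np.asarray(hits_new)
    n = len(hits_ref)
    worse_or_equal = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if hits_new[idx].mean() - hits_ref[idx].mean() <= 0:
            worse_or_equal += 1
    return hits_new.mean() - hits_ref.mean(), worse_or_equal / n_boot

# Synthetic per-record detections (1 = TWA episode detected) for SM vs SM+EMD
rng = np.random.default_rng(0)
sm_only = rng.binomial(1, 0.46, 300)
sm_emd = rng.binomial(1, 0.57, 300)
diff, p_value = paired_bootstrap(sm_only, sm_emd)
```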
NASA Astrophysics Data System (ADS)
Zhu, X.
2017-12-01
On 12 May 2008, Sichuan province in China suffered the catastrophic Wenchuan earthquake (MS 8). Prior to the event, a large number of small to moderate earthquakes that occurred in the area were recorded at stations of the Sichuan Seismic Network (SCSN). The waveform data were collected during 2006-2008, and the Fourier amplitude spectra of the Lg wave are used to determine attenuation and site responses. We analyze over 3300 seismograms of Lg-wave propagation from 291 local and regional earthquakes recorded at distances from 100 to 700 km; the earthquakes range between ML 2.0 and 5.7. A joint inversion method estimating attenuation and site responses from seismic spectral ratios is implemented in this study; modeling errors are determined using a delete-j jackknife resampling technique. Variations of the Lg attenuation in chronological order are studied. The Wenchuan earthquake occurred on the Longmen Shan Fault (LSF), which constitutes the boundary between the Bayan Har block and the Sichuan basin to the east. The data are divided into two subgroups based on whether the seismic ray paths are contained entirely within the Sichuan basin or the Bayan Har block. The waveforms were processed in a frequency range of 1-7 Hz with an interval of 0.2 Hz. On the vertical component, Lg attenuation in the Bayan Har block is fit by the frequency-dependent function Q(f) = (250.2 ± 13.7) f^(0.52 ± 0.03), while the Sichuan basin is characterized by Q(f) = (193 ± 23) f^(0.81 ± 0.05). The obtained attenuation curves indicate that the spectral amplitudes decay faster in the Sichuan basin than in the Bayan Har block. Site responses for the 48 stations are estimated; they vary among stations by more than a factor of 10 within the frequency range of interest. The results from regrouping the data in chronological order show that significant changes in attenuation occur as the Wenchuan earthquake approaches, but no corresponding changes in site responses are observed.
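The jackknife error estimate works by refitting with observations left out in turn. The sketch below shows the delete-one special case (j = 1) applied to a generic per-path estimator; the data and the estimator are hypothetical stand-ins, not the authors' joint spectral-ratio inversion.

```python
import numpy as np

def jackknife_se(data, estimator):
    """Delete-one jackknife: recompute the estimator with each observation
    (e.g., each source-station path) left out in turn; the spread of the
    leave-one-out estimates gives the standard error of the full estimate."""
    data = np.asarray(data)
    n = len(data)
    theta_full = estimator(data)
    theta_loo = np.array([estimator(np.delete(data, i, axis=0)) for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((theta_loo - theta_loo.mean(axis=0)) ** 2, axis=0))
    return theta_full, se

# Hypothetical per-path Q estimates at a single frequency; estimator = mean
rng = np.random.default_rng(2)
q_paths = 250.0 + 30.0 * rng.standard_normal(120)
q_hat, q_se = jackknife_se(q_paths, lambda d: d.mean())
```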
Dual-Hierarchy Graph Method for Object Indexing and Recognition
2014-07-01
from examples would be too late for the prey. Mythical monsters in movies or cartoons can look quite scary even though we have never seen their... uniform, at 25 blocks per parent, but depends on the number of SIFT features in the parent blocks. If we have more features we create more children for... method mentioned above to these descriptors to derive the 3D structure and pose of the object. In effect, we replace the previous "spatial verification"
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, which means the lesion is more likely to be malignant compared to common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. The present GGO recognition methods employ traditional low-level features and system performance improves slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models in this paper. Our hybrid resampling is performed on multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. The multi-CNN model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach to applying deep learning to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, S; Vedantham, S; Karellas, A
Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with an apothem of 30 microns, resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in a square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at the Nyquist frequencies alone resulted in 54 micron square pixels. For the photon-counting CdTe detector and after resampling to 53 micron square pixels using quadratic interpolation, the presampling MTF at the Nyquist frequency of 9.434 cycles/mm was 0.501 and 0.507 along the two directions. Conclusion: The hexagonal pixel array photon-counting CdTe detector, after resampling to square pixels, provides high-resolution imaging suitable for fluoroscopy.
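The second selection criterion above (minimizing the mean absolute difference between aperture functions up to the Nyquist limit) lends itself to a brief numerical sketch; here the hexagonal-pixel aperture curve is assumed to be supplied as arrays `f` (cycles/mm) and `hex_aperture`, and the candidate size range is an illustrative assumption.

```python
# Hedged sketch: pick the square pixel size whose sinc aperture function best
# matches a given hexagonal-pixel aperture curve over frequencies up to Nyquist.
import numpy as np

def best_square_pixel(f, hex_aperture, candidates_mm=np.arange(0.040, 0.070, 0.0005)):
    best, best_err = None, np.inf
    for a in candidates_mm:
        square_aperture = np.abs(np.sinc(a * f))   # |sin(pi*a*f)/(pi*a*f)| for an a-mm pixel
        err = np.mean(np.abs(square_aperture - hex_aperture))
        if err < best_err:
            best, best_err = a, err
    return best, best_err                          # optimal size (mm) and its mean abs. difference
```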
Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard
2010-03-31
Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstructions, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, the choice of which is left arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood-based models of sequence evolution, which opens the possibility of further improvements.
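To make the sliding-window Monte Carlo idea concrete, the sketch below scores pairwise windows of an alignment by comparing observed identity against identities of randomly permuted windows; the window length, number of resamples, and identity-based scoring are illustrative assumptions, not the ALISCORE scoring function itself.

```python
# Hedged sketch of sliding-window randomness scoring: a window gets a positive
# score only if its observed pairwise similarity exceeds what random shuffling
# of the same residues would typically produce.
import numpy as np

def window_scores(seq_a, seq_b, win=6, n_resamples=200, rng=np.random.default_rng(0)):
    a = np.frombuffer(seq_a.encode(), dtype='S1')
    b = np.frombuffer(seq_b.encode(), dtype='S1')
    scores = []
    for start in range(0, len(a) - win + 1):
        wa, wb = a[start:start + win], b[start:start + win]
        obs = np.mean(wa == wb)                               # observed identity
        null = [np.mean(rng.permutation(wa) == wb) for _ in range(n_resamples)]
        scores.append(obs - np.quantile(null, 0.95))          # >0 means non-random similarity
    return np.array(scores)
```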
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe
2017-01-01
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian
2017-07-18
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
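A condensed sketch of the registration-and-stacking pipeline described above is given below using OpenCV; it substitutes generic ORB descriptor matching (ORB keypoints are FAST corners) for the paper's IMU-aided template matching, and assumes `frames` is a list of grayscale 8-bit images, so it should be read as an approximation rather than the on-board SoC/FPGA implementation.

```python
# Hedged sketch: register each short-exposure frame to the first frame with a
# homography estimated from matched keypoints, warp, and average the stack.
import cv2
import numpy as np

def stack_frames(frames):
    ref = frames[0]
    orb = cv2.ORB_create(nfeatures=2000)                     # FAST keypoints + binary descriptors
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    acc = ref.astype(np.float32)
    for img in frames[1:]:
        kp, des = orb.detectAndCompute(img, None)
        matches = sorted(matcher.match(des, des_ref), key=lambda m: m.distance)[:500]
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # frame -> reference mapping
        acc += cv2.warpPerspective(img, H, ref.shape[::-1]).astype(np.float32)
    return acc / len(frames)                                  # stacked "long exposure" image
```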
Assessing uncertainties in superficial water provision by different bootstrap-based techniques
NASA Astrophysics Data System (ADS)
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo Mario
2014-05-01
An assessment of water security can incorporate several water-related concepts, characterizing the interactions between societal needs, ecosystem functioning, and hydro-climatic conditions. The superficial freshwater provision level depends on the methods chosen for 'Environmental Flow Requirement' estimations, which integrate the sources of uncertainty in the understanding of how water-related threats to aquatic ecosystem security arise. Here, we develop an uncertainty assessment of superficial freshwater provision based on different bootstrap techniques (non-parametric resampling with replacement). To illustrate this approach, we use an agricultural basin (291 km2) within the Cantareira water supply system in Brazil, monitored by one daily streamflow gage (24-year period). The original streamflow time series has been randomly resampled for different numbers of repetitions or sample sizes (N = 500, ..., 1000), then used with the conventional bootstrap approach and variations of this method, such as the 'nearest neighbor bootstrap' and the 'moving blocks bootstrap'. We have analyzed the impact of the sampling uncertainty on five Environmental Flow Requirement methods, based on: flow duration curves or probability of exceedance (Q90%, Q75% and Q50%); the 7-day 10-year low-flow statistic (Q7,10); and the presumptive standard (80% of the natural monthly mean flow). The bootstrap technique has also been used to compare those 'Environmental Flow Requirement' (EFR) methods among themselves, considering the difference between the bootstrap estimates and the "true" EFR characteristic, which has been computed by averaging the EFR values of the five methods and using the entire streamflow record at the monitoring station. This study evaluates the bootstrapping strategies, the representativeness of streamflow series for EFR estimates and their confidence intervals, in addition to an overview of the performance differences between the EFR methods. The uncertainties arising during the EFR methods assessment will be propagated through water security indicators referring to water scarcity and vulnerability, seeking to provide meaningful support to end-users and water managers facing the incorporation of uncertainties in the decision-making process.
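As one concrete instance of the resampling schemes compared above, here is a minimal moving-blocks bootstrap of a daily streamflow record for the Q90 flow-duration statistic; the block length, number of resamples, and confidence level are illustrative assumptions.

```python
# Hedged sketch: moving-blocks bootstrap confidence interval for Q90, the flow
# exceeded 90% of the time (i.e. the 10th percentile of daily flows).
import numpy as np

def q90_moving_blocks_ci(flow, block_len=30, n_boot=1000, rng=np.random.default_rng(1)):
    n = len(flow)
    # all overlapping blocks of the original series (preserves short-term dependence)
    blocks = np.array([flow[i:i + block_len] for i in range(n - block_len + 1)])
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(blocks), size=int(np.ceil(n / block_len)))
        resampled = np.concatenate(blocks[idx])[:n]
        estimates.append(np.percentile(resampled, 10))        # Q90 of the resampled series
    return np.percentile(estimates, [2.5, 97.5])              # 95% confidence interval
```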
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Holliger, Klaus
2015-08-01
We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR proposal is found to strongly influence MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive, as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
NASA Astrophysics Data System (ADS)
Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.
2012-12-01
The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
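A compact sketch of the resample-move idea referenced above follows: a standard importance-weighting and resampling step, then one random-walk Metropolis move per particle to restore diversity. The `likelihood` and `prior_pdf` callables, which must return per-particle densities, and the move scale are assumptions of this sketch rather than details of the study's rainfall-runoff implementation.

```python
# Hedged sketch of one resample-move particle filter update.
import numpy as np

def resample_move_step(particles, weights, obs, likelihood, prior_pdf,
                       move_scale=0.1, rng=np.random.default_rng(2)):
    # 1) importance weighting with the new observation
    w = weights * likelihood(obs, particles)
    w /= w.sum()
    # 2) multinomial resampling
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx].copy()
    # 3) MCMC move: random-walk Metropolis step targeting likelihood * prior
    proposal = particles + move_scale * rng.standard_normal(particles.shape)
    ratio = (likelihood(obs, proposal) * prior_pdf(proposal)) / \
            (likelihood(obs, particles) * prior_pdf(particles) + 1e-300)
    accept = rng.random(len(particles)) < np.minimum(1.0, ratio)
    particles[accept] = proposal[accept]
    return particles, np.full(len(particles), 1.0 / len(particles))
```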
NASA Astrophysics Data System (ADS)
Adjorlolo, Clement; Cho, Moses A.; Mutanga, Onisimo; Ismail, Riyad
2012-01-01
Hyperspectral remote-sensing approaches are suitable for detection of the differences in 3-carbon (C3) and 4-carbon (C4) grass species phenology and composition. However, the application of hyperspectral sensors to vegetation has been hampered by high dimensionality, spectral redundancy, and multicollinearity problems. In this experiment, resampling of hyperspectral data to wider wavelength intervals around a few band-centers sensitive to the biophysical and biochemical properties of C3 or C4 grass species is proposed. The approach accounts for an inherent property of vegetation spectral response: the asymmetrical nature of the inter-band correlations between a waveband and its shorter- and longer-wavelength neighbors. It involves constructing a curve of weighting threshold of correlation (Pearson's r) between a chosen band-center and its neighbors, as a function of wavelength. In addition, data were resampled to the band configurations of several multispectral sensors (ASTER, GeoEye-1, IKONOS, QuickBird, RapidEye, SPOT 5, and WorldView-2 satellites) for comparison with the proposed method. The resulting datasets were analyzed using the random forest algorithm. The proposed resampling method achieved improved classification accuracy (κ=0.82) compared to the resampled multispectral datasets (κ=0.78, 0.65, 0.62, 0.59, 0.65, 0.62, 0.76, respectively). Overall, results from this study demonstrated that spectral resolutions for C3 and C4 grasses can be optimized and controlled for high dimensionality and multicollinearity problems, yet yield high classification accuracies. The findings also provide a sound basis for programming wavebands for future sensors.
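The correlation-guided aggregation described above can be illustrated with a small sketch that grows a resampling interval outward from a chosen band-center until the correlation with the center drops below a threshold, then averages the included bands; the threshold value and the simple contiguous-growth rule are assumptions of this illustration.

```python
# Hedged sketch: correlation-thresholded resampling of hyperspectral bands
# around a band-center, reflecting the asymmetric inter-band correlations.
import numpy as np

def resample_around_center(spectra, center_idx, r_threshold=0.95):
    # spectra: (n_samples, n_bands) reflectance matrix
    r = np.array([np.corrcoef(spectra[:, center_idx], spectra[:, b])[0, 1]
                  for b in range(spectra.shape[1])])
    lo = center_idx
    while lo - 1 >= 0 and r[lo - 1] >= r_threshold:        # grow toward shorter wavelengths
        lo -= 1
    hi = center_idx
    while hi + 1 < spectra.shape[1] and r[hi + 1] >= r_threshold:  # and toward longer ones
        hi += 1
    return spectra[:, lo:hi + 1].mean(axis=1), (lo, hi)     # resampled band and its interval
```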
Adaptive topographic mass correction for satellite gravity and gravity gradient data
NASA Astrophysics Data System (ADS)
Holzrichter, Nils; Szwillus, Wolfgang; Götze, Hans-Jürgen
2014-05-01
Subsurface modelling with gravity data includes a reliable topographic mass correction. For decades, this mandatory step has been a standard procedure. However, the methods were originally developed for local terrestrial surveys. Therefore, these methods often include defaults such as a limited correction area of 167 km around an observation point, resampling topography depending on the distance to the station, or disregarding the curvature of the Earth. New satellite gravity data (e.g. GOCE) can be used for large-scale lithospheric modelling with gravity data. The investigation areas can span thousands of kilometres. In addition, measurements are located at the flight height of the satellite (e.g. ~250 km for GOCE). The standard definition of the correction area and the specific grid spacing around an observation point was not developed for stations located at these heights or for areas of these dimensions. This calls for a re-evaluation of the defaults used for topographic correction. We developed an algorithm which resamples the topography based on an adaptive approach. Instead of resampling topography depending on the distance to the station, the grids are resampled depending on their influence at the station. Therefore, the only value the user has to define is the desired accuracy of the topographic correction. It is not necessary to define the grid spacing and a limited correction area. Furthermore, the algorithm calculates the topographic mass response with a spherically shaped polyhedral body. We show examples for local and global gravity datasets and compare the results of the topographic mass correction to existing approaches. We provide suggestions on how satellite gravity and gradient data should be corrected.
NASA Astrophysics Data System (ADS)
Yuan, Shenfang; Chen, Jian; Yang, Weibo; Qiu, Lei
2017-08-01
Fatigue crack growth prognosis is important for prolonging service time, improving safety, and reducing maintenance cost in many safety-critical systems, such as aircraft, wind turbines, bridges, and nuclear plants. Combining fatigue crack growth models with the particle filter (PF) method has proved promising to deal with the uncertainties during fatigue crack growth and reach a more accurate prognosis. However, research on prognosis methods integrating on-line crack monitoring with the PF method is still lacking, as are experimental verifications. Besides, the PF methods adopted so far are almost all sequential importance resampling-based PFs, which usually encounter sample impoverishment problems and hence perform poorly. To solve these problems, in this paper, the piezoelectric transducer (PZT)-based active Lamb wave method is adopted for on-line crack monitoring. The deterministic resampling PF (DRPF) is proposed for use in fatigue crack growth prognosis, which can overcome the sample impoverishment problem. The proposed method is verified through fatigue tests of attachment lugs, which are an important kind of joint component in aerospace systems.
Homogeneous Atomic Fermi Gases
NASA Astrophysics Data System (ADS)
Mukherjee, Biswaroop; Yan, Zhenjie; Patel, Parth B.; Hadzibabic, Zoran; Yefsah, Tarik; Struck, Julian; Zwierlein, Martin W.
2017-03-01
We report on the creation of homogeneous Fermi gases of ultracold atoms in a uniform potential. In the momentum distribution of a spin-polarized gas, we observe the emergence of the Fermi surface and the saturated occupation of one particle per momentum state: the striking consequence of Pauli blocking in momentum space for a degenerate gas. Cooling a spin-balanced Fermi gas at unitarity, we create homogeneous superfluids and observe spatially uniform pair condensates. For thermodynamic measurements, we introduce a hybrid potential that is harmonic in one dimension and uniform in the other two. The spatially resolved compressibility reveals the superfluid transition in a spin-balanced Fermi gas, saturation in a fully polarized Fermi gas, and strong attraction in the polaronic regime of a partially polarized Fermi gas.
Lee, Yi-Huan; Chen, Wei-Chih; Yang, Yi-Lung; Chiang, Chi-Ju; Yokozawa, Tsutomu; Dai, Chi-An
2014-05-21
Driven by molecular affinity and balance in the crystallization kinetics, the ability to co-crystallize dissimilar yet self-crystallizable blocks of a block copolymer (BCP) into a uniform domain may strongly affect its phase diagram. In this study, we synthesize a new series of crystalline and monodisperse all-π-conjugated poly(2,5-dihexyloxy-p-phenylene)-b-poly(3-(2-ethylhexyl)thiophene) (PPP-P3EHT) BCPs and investigate this multi-crystallization effect. Despite vastly different side-chain and main-chain structures, PPP and P3EHT blocks are able to co-crystallize into a single uniform domain comprising PPP and P3EHT main-chains with mutually interdigitated side-chains spaced in-between. With increasing P3EHT fraction, PPP-P3EHTs undergo sequential phase transitions and form hierarchical superstructures including predominately PPP nanofibrils, co-crystalline nanofibrils, a bilayer co-crystalline/pure P3EHT lamellar structure, a microphase-separated bilayer PPP-P3EHT lamellar structure, and finally P3EHT nanofibrils. In particular, the presence of the new co-crystalline lamellar structure is the manifestation of the interaction balance between self-crystallization and co-crystallization of the dissimilar polymers on the resulting nanostructure of the BCP. The current study demonstrates the co-crystallization nature of all-conjugated BCPs with different main-chain moieties and may provide new guidelines for the organization of π-conjugated BCPs for future optoelectronic applications.
A CNN based Hybrid approach towards automatic image registration
NASA Astrophysics Data System (ADS)
Arun, Pattathal V.; Katiyar, Sunil K.
2013-06-01
Image registration is a key component of various image processing operations which involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as Vector Machines, Cellular Neural Network (CNN), SIFT, coreset, and Cellular Automata. CNN has been found to be effective in improving the feature matching as well as resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network based SIFT feature point optimisation, adaptive resampling and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information for representing contextual knowledge using a CNN-prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.
NASA Astrophysics Data System (ADS)
Kim, Do Hyung; Kim, Min-Dae; Choi, Cheol-Woong; Chung, Chung-Wook; Ha, Seung Hee; Kim, Cy Hyun; Shim, Yong-Ho; Jeong, Young-Il; Kang, Dae Hwan
2012-01-01
Sorafenib-incorporated nanoparticles were prepared using a block copolymer composed of dextran and poly(DL-lactide-co-glycolide) [Dex-b-LG] for antitumor drug delivery. Sorafenib-incorporated nanoparticles were prepared by a nanoprecipitation-dialysis method. Sorafenib-incorporated Dex-b-LG nanoparticles were uniformly distributed in an aqueous solution regardless of the content of sorafenib. Transmission electron microscopy of the sorafenib-incorporated Dex-b-LG nanoparticles revealed a spherical shape with a diameter < 300 nm. Sorafenib-incorporated Dex-b-LG nanoparticles at a polymer/drug weight ratio of 40:5 showed a relatively uniform size and morphology. Higher initial drug feeding was associated with increased drug content in the nanoparticles and with increased nanoparticle size. A drug release study revealed a decreased drug release rate with increasing drug content. In an in vitro anti-proliferation assay using human cholangiocarcinoma cells, sorafenib-incorporated Dex-b-LG nanoparticles showed antitumor activity similar to that of sorafenib. Sorafenib-incorporated Dex-b-LG nanoparticles are promising candidates as vehicles for antitumor drug targeting.
1982-02-08
is printed in any year-month block when the extreme value is based on an incomplete month (at least one day missing for the month). When a month has...means, standard deviations, and total number of valid observations for each month and annual (all months). An asterisk (*) is printed in each data block...becomes the extreme or monthly total in any of these tables it is printed as "TRACE." Continued on Reverse Side Values for means and standard
1979-02-20
for each month and annual (all months) and the total valid observation count. An asterisk (*) is printed in any year-month block when the extreme...annual (all months). An asterisk (*) is printed in each data block if one or more days are missing for the month. No occurrences for a month are indicated...in the same manner as in the extreme tables above. If a trace becomes the extreme or monthly total in any of these tables it is printed as... Continued on
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering the whole of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
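For reference, the Systematic Resampling scheme named above can be written in a few lines; this is a generic textbook formulation, not the study's W3RA-specific code.

```python
# Hedged sketch of systematic resampling for a particle filter: one uniform
# draw, then a low-variance sweep through the cumulative weights.
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng(3)):
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n          # evenly spaced pointers
    cumulative = np.cumsum(weights / weights.sum())
    return np.searchsorted(cumulative, positions)          # indices of surviving particles
```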
Gleeson, Helena K; Wiley, Veronica; Wilcken, Bridget; Elliott, Elizabeth; Cowell, Christopher; Thonsett, Michael; Byrne, Geoffrey; Ambler, Geoffrey
2008-10-01
To assess the benefits and practicalities of setting up a newborn screening (NBS) program in Australia for congenital adrenal hyperplasia (CAH) through a 2-year pilot screening in ACT/NSW and comparison with case surveillance in other states. The pilot newborn screening occurred between 1/10/95 and 30/9/97 in NSW/ACT. Concurrently, case reporting for all new CAH cases occurred through the Australian Paediatric Surveillance Unit (APSU) across Australia. Details of clinical presentation, re-sampling and laboratory performance were assessed. 185,854 newborn infants were screened for CAH in NSW/ACT. Concurrently, 30 cases of CAH were reported to the APSU, twelve of which were from NSW/ACT. CAH incidence was 1 in 15,488 (screened population) vs 1 in 18,034 births (unscreened) (difference not significant). Median age of initial notification was day 8, with confirmed diagnosis at 13 (5-23) days in the screened population vs 16 (7-37) days in the unscreened population (not significant). Of the 5 clinically unsuspected males in the screened population, one had mild salt-wasting by the time of notification, compared with salt-wasting crisis in all 6 males from the unscreened population. 96% of results were reported by day 10. Resampling was requested for 637 infants (0.4%), and the median re-sampling delay was 11 (0-28) days, with higher resample rates in males (p < 0.0001). The within-laboratory cost per clinically unsuspected case was A$42,717. There seems to be good justification for NBS for CAH based on clear prevention of salt-wasting crises and their potential long-term consequences. Also, prospects exist for enhancing screening performance.
NASA Astrophysics Data System (ADS)
Sargent, Steven D.; Greenman, Mark E.; Hansen, Scott M.
1998-11-01
The Spatial Infrared Imaging Telescope (SPIRIT III) is the primary sensor aboard the Midcourse Space Experiment (MSX), which was launched 24 April 1996. SPIRIT III included a Fourier transform spectrometer that collected terrestrial and celestial background phenomenology data for the Ballistic Missile Defense Organization (BMDO). This spectrometer used a helium-neon reference laser to measure the optical path difference (OPD) in the spectrometer and to command the analog-to-digital conversion of the infrared detector signals, thereby ensuring the data were sampled at precise increments of OPD. Spectrometer data must be sampled at accurate increments of OPD to optimize the spectral resolution and spectral position of the transformed spectra. Unfortunately, a failure in the power supply preregulator at the MSX spacecraft/SPIRIT III interface early in the mission forced the spectrometer to be operated without the reference laser until a failure investigation was completed. During this time data were collected in a backup mode that used an electronic clock to sample the data. These data were sampled evenly in time, and because the scan velocity varied, at nonuniform increments of OPD. The scan velocity profile depended on scan direction and scan length, and varied over time, greatly degrading the spectral resolution and spectral and radiometric accuracy of the measurements. The Convert software used to process the SPIRIT III data was modified to resample the clock-sampled data at even increments of OPD, using scan velocity profiles determined from ground and on-orbit data, greatly improving the quality of the clock-sampled data. This paper presents the resampling algorithm, the characterization of the scan velocity profiles, and the results of applying the resampling algorithm to on-orbit data.
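The correction described above amounts to mapping each time sample to its cumulative OPD and interpolating onto a uniform OPD grid; the sketch below assumes the scan-velocity profile is available as an array `velocity` sampled at the same times `t` as the interferogram, with linear interpolation being an illustrative simplification of the Convert software's resampling.

```python
# Hedged sketch: resample a clock-sampled interferogram to uniform OPD increments
# using a characterized scan-velocity profile.
import numpy as np

def resample_to_uniform_opd(signal, t, velocity, d_opd):
    # trapezoidal integration of velocity gives OPD as a function of time
    opd = np.concatenate(([0.0],
                          np.cumsum(0.5 * (velocity[1:] + velocity[:-1]) * np.diff(t))))
    opd_uniform = np.arange(opd[0], opd[-1], d_opd)          # target uniform OPD grid
    return opd_uniform, np.interp(opd_uniform, opd, signal)  # resampled interferogram
```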
NASA Astrophysics Data System (ADS)
Suvorova, S.; Clearwater, P.; Melatos, A.; Sun, L.; Moran, W.; Evans, R. J.
2017-11-01
A hidden Markov model (HMM) scheme for tracking continuous-wave gravitational radiation from neutron stars in low-mass x-ray binaries (LMXBs) with wandering spin is extended by introducing a frequency-domain matched filter, called the J-statistic, which sums the signal power in orbital sidebands coherently. The J-statistic is similar but not identical to the binary-modulated F-statistic computed by demodulation or resampling. By injecting synthetic LMXB signals into Gaussian noise characteristic of the Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO), it is shown that the J-statistic HMM tracker detects signals with characteristic wave strain h0 ≥ 2×10^-26 in 370 d of data from two interferometers, divided into 37 coherent blocks of equal length. When applied to data from Stage I of the Scorpius X-1 Mock Data Challenge organized by the LIGO Scientific Collaboration, the tracker detects all 50 closed injections (h0 ≥ 6.84×10^-26), recovering the frequency with a root-mean-square accuracy of ≤ 1.95×10^-5 Hz. Of the 50 injections, 43 (with h0 ≥ 1.09×10^-25) are detected in a single, coherent 10 d block of data. The tracker employs an efficient, recursive HMM solver based on the Viterbi algorithm, which requires ~10^5 CPU-hours for a typical broadband (0.5 kHz) LMXB search.
NASA Astrophysics Data System (ADS)
Hagstrum, J. T.; Wells, R. E.; Evarts, R. C.; Niem, A. R.; Sawlan, M. G.; Blakely, R. J.
2008-12-01
Identification of individual flows within the Columbia River Basalt Group (CRBG) has mostly relied on minor differences in geochemistry, but magnetic polarity has also proved useful in differentiating flows and establishing a temporal framework. Within the thick, rapidly erupted Grande Ronde Basalt four major polarity chrons (R1 to N2) have been identified. Because cooling times of CRBG flows are brief compared to rates of paleosecular variation (PSV), within-flow paleomagnetic directions are expected to be constant across the extensive east-west reaches of these flows. Vertical-axis rotations in OR and WA, driven by northward-oblique subduction of the Juan de Fuca plate, thus can be measured by comparing directions for western sampling localities to directions for the same flow units on the relatively stable Columbia Plateau. Clockwise rotations calculated for outcrop locations within the Coast Range (CR) block are uniformly about 30° (N=102 sites). East of the northwest-trending en échelon Mt. Angel-Gales Creek, Portland Hills, and northern unnamed fault zones, as well as north of the CR block's northern boundary (~Columbia River), clockwise rotations abruptly drop to about 15° (N=39 sites), with offsets in these bounding fault zones corresponding to the Portland and Willamette pull-apart basins. The general agreement of vertical- axis rotation rates estimated from CRBG magnetizations with those determined from modern GPS velocities indicates a relatively steady rate over the last 10 to 15 Myr. Unusual directions due to PSV, field excursions, or polarity transitions could provide useful stratigraphic markers. Individual flow directions, however, have not been routinely used to identify flows. One reason this has been difficult is that remagnetization is prevalent, particularly in the Coast Ranges, coupled with earlier demagnetization techniques that did not completely remove overprint components. Except for the Ginkgo and Pomona flows of the Wanapum and Saddle Mountains Basalts, reference Plateau directions for the CRBG are poorly known. Moreover, field and drill- core relations indicate that flows with different chemistries were erupted at the same time. Renewed sampling, therefore, has been undertaken eastward from the Portland area into the Columbia River Gorge and out onto the Plateau. Resampling of the Patrick Grade section (23 flows) in southeastern WA has shown that overprint magnetizations were not successfully removed in many flows at this locality in an earlier study [1]. This brings into question blanket demagnetization studies of the CRBG as well as polarity measurements routinely made in the field with hand-held fluxgate magnetometers. [1] Choiniere and Swanson, 1979, Am. J. Sci., 279, p. 755
Novel Self-Assembling Amino Acid-Derived Block Copolymer with Changeable Polymer Backbone Structure.
Koga, Tomoyuki; Aso, Eri; Higashi, Nobuyuki
2016-11-29
Block copolymers have attracted much attention as potentially interesting building blocks for the development of novel nanostructured materials in recent years. Herein, we report a new type of self-assembling block copolymer with changeable polymer backbone structure, poly(Fmoc-Ser) ester -b-PSt, which was synthesized by combining the polycondensation of 9-fluorenylmethoxycarbonyl-serine (Fmoc-Ser) with the reversible addition-fragmentation chain transfer (RAFT) polymerization of styrene (St). This block copolymer showed the direct conversion of the backbone structure from polyester to polypeptide through a multi O,N-acyl migration triggered by base-induced deprotection of Fmoc groups in organic solvent. Such polymer-to-polymer conversion was found to occur quantitatively without decrease in degree of polymerization and to cause a drastic change in self-assembling property of the block copolymer. On the basis of several morphological analyses using FTIR spectroscopy, atomic force, and transmission and scanning electron microscopies, the resulting peptide block copolymer was found to self-assemble into a vesicle-like hollow nanosphere with relatively uniform diameter of ca. 300 nm in toluene. In this case, the peptide block generated from polyester formed β-sheet structure, indicating the self-assembly via peptide-guided route. We believe the findings presented in this study offer a new concept for the development of self-assembling block copolymer system.
NASA Astrophysics Data System (ADS)
Gibanov, Nikita S.; Sheremet, Mikhail A.; Oztop, Hakan F.; Al-Salem, Khaled
2018-04-01
In this study, natural convection combined with entropy generation of Fe3O4-water nanofluid within a square open cavity filled with two different porous blocks under the influence of uniform horizontal magnetic field is numerically studied. Porous blocks of different thermal properties, permeability and porosity are located on the bottom wall. The bottom wall of the cavity is kept at hot temperature Th, while upper open boundary is at constant cold temperature Tc and other walls of the cavity are supposed to be adiabatic. Governing equations with corresponding boundary conditions formulated in dimensionless stream function and vorticity using Brinkman-extended Darcy model for porous blocks have been solved numerically using finite difference method. Numerical analysis has been carried out for wide ranges of Hartmann number, nanoparticles volume fraction and length of the porous blocks. It has been found that an addition of spherical ferric oxide nanoparticles can order the flow structures inside the cavity.
Stoykovich, Mark P; Kang, Huiman; Daoulas, Kostas Ch; Liu, Guoliang; Liu, Chi-Chun; de Pablo, Juan J; Müller, Marcus; Nealey, Paul F
2007-10-01
Self-assembling block copolymers are of interest for nanomanufacturing due to the ability to realize sub-100 nm dimensions, thermodynamic control over the size and uniformity and density of features, and inexpensive processing. The insertion point of these materials in the production of integrated circuits, however, is often conceptualized in the short term for niche applications using the dense periodic arrays of spots or lines that characterize bulk block copolymer morphologies, or in the long term for device layouts completely redesigned into periodic arrays. Here we show that the domain structure of block copolymers in thin films can be directed to assemble into nearly the complete set of essential dense and isolated patterns as currently defined by the semiconductor industry. These results suggest that block copolymer materials, with their intrinsically advantageous self-assembling properties, may be amenable for broad application in advanced lithography, including device layouts used in existing nanomanufacturing processes.
Multi-purpose wind tunnel reaction control model block
NASA Technical Reports Server (NTRS)
Dresser, H. S.; Daileda, J. J. (Inventor)
1978-01-01
A reaction control system nozzle block is provided for testing the response characteristics of space vehicles to a variety of reaction control thruster configurations. A pressurized air system is connected with the supply lines which lead to the individual jet nozzles. Each supply line terminates in a compact cylindrical plenum volume, axially perpendicular and adjacent to the throat of the jet nozzle. The volume of the cylindrical plenum is sized to provide uniform thrust characteristics from each jet nozzle irrespective of the angle of approach of the supply line to the plenum. Each supply line may be plugged or capped to stop the air supply to selected jet nozzles, thereby enabling a variety of nozzle configurations to be obtained from a single model nozzle block.
Table-driven image transformation engine algorithm
NASA Astrophysics Data System (ADS)
Shichman, Marc
1993-04-01
A high speed image transformation engine (ITE) was designed and a prototype built for use in a generic electronic light table and image perspective transformation application code. The ITE takes any linear transformation, breaks the transformation into two passes and resamples the image appropriately for each pass. The system performance is achieved by driving the engine with a set of look up tables computed at start up time for the calculation of pixel output contributions. Anti-aliasing is done automatically in the image resampling process. Operations such as multiplications and trigonometric functions are minimized. This algorithm can be used for texture mapping, image perspective transformation, electronic light table, and virtual reality.
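A toy version of the table-driven, two-pass idea is sketched below for the horizontal pass of a separable transform: source coordinates and interpolation weights are computed once per output column and then reused for every row. The linear-interpolation kernel and plain NumPy implementation are assumptions of this sketch, not the prototype ITE design.

```python
# Hedged sketch: precomputed lookup tables drive the horizontal pass of a
# two-pass (separable) image transformation; a vertical pass would reuse the
# same idea on columns.
import numpy as np

def build_row_tables(width_in, width_out, scale, offset):
    # source x-coordinate for each output column, plus linear-interp weight
    x_src = np.arange(width_out) * scale + offset
    i0 = np.floor(x_src).astype(int)
    frac = x_src - i0
    i0 = np.clip(i0, 0, width_in - 2)                      # keep i0 and i0+1 in range
    return i0, frac

def horizontal_pass(image, i0, frac):
    # the same lookup tables are applied to every row; no per-pixel trig or divides
    return image[:, i0] * (1.0 - frac) + image[:, i0 + 1] * frac
```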
An Algorithm Framework for Isolating Anomalous Signals in Electromagnetic Data
NASA Astrophysics Data System (ADS)
Kappler, K. N.; Schneider, D.; Bleier, T.; MacLean, L. S.
2016-12-01
QuakeFinder and its international collaborators have installed and currently maintain an array of 165 three-axis induction magnetometer instrument sites in California, Peru, Taiwan, Greece, Chile and Sumatra. Based on research by Bleier et al. (2009), Fraser-Smith et al. (1990), and Freund (2007), the electromagnetic data from these instruments are being analyzed for pre-earthquake signatures. This analysis consists of both private research by QuakeFinder, and institutional collaborators (PUCP in Peru, NCU in Taiwan, NOA in Greece, LASP at University of Colorado, Stanford, UCLA, NASA-ESI, NASA-AMES and USC-CSEP). QuakeFinder has developed an algorithm framework aimed at isolating anomalous signals (pulses) in the time series. Results are presented from an application of this framework to induction-coil magnetometer data. Our data driven approach starts with sliding windows applied to uniformly resampled array data with a variety of lengths and overlap. Data variance (a proxy for energy) is calculated on each window and a short-term average/ long-term average (STA/LTA) filter is applied to the variance time series. Pulse identification is done by flagging time intervals in the STA/LTA filtered time series which exceed a threshold. Flagged time intervals are subsequently fed into a feature extraction program which computes statistical properties of the resampled data. These features are then filtered using a Principal Component Analysis (PCA) based method to cluster similar pulses. We explore the extent to which this approach categorizes pulses with known sources (e.g. cars, lightning, etc.) and the remaining pulses of unknown origin can be analyzed with respect to their relationship with seismicity. We seek a correlation between these daily pulse-counts (with known sources removed) and subsequent (days to weeks) seismic events greater than M5 within 15km radius. Thus we explore functions which map daily pulse-counts to a time series representing the likelihood of a seismic event occurring at some future time. These "pseudo-probabilities" can in turn be represented as Molchan diagrams. The Molchan curve provides an effective cost function for optimization and allows for a rigorous statistical assessment of the validity of pre-earthquake signals in the electromagnetic data.
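The variance/STA-LTA flagging stage described above can be summarized in a short sketch; the window lengths, the non-overlapping variance windows, and the threshold value are illustrative assumptions rather than production settings, and the input is assumed to be much longer than the LTA window.

```python
# Hedged sketch: windowed variance as an energy proxy, an STA/LTA ratio on the
# variance series, and threshold-based flagging of candidate pulses.
import numpy as np

def flag_pulses(x, win=256, sta=8, lta=128, threshold=5.0):
    # variance of consecutive non-overlapping windows of the resampled time series
    var = np.array([x[i:i + win].var() for i in range(0, len(x) - win, win)])
    kernel = lambda n: np.ones(n) / n
    sta_series = np.convolve(var, kernel(sta), mode='same')   # short-term average
    lta_series = np.convolve(var, kernel(lta), mode='same') + 1e-12  # long-term average
    ratio = sta_series / lta_series
    return np.where(ratio > threshold)[0]                      # indices of flagged windows
```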
NASA Astrophysics Data System (ADS)
Carr, B. B.; Vaughan, R. G.
2017-12-01
The thermal areas in Yellowstone National Park (Wyoming, USA) are constantly changing. Persistent monitoring of these areas is necessary to better understand the behavior and potential hazards of both the thermal features and the deeper hydrothermal system driving the observed surface activity. As part of the Park's monitoring program, thousands of visual and thermal infrared (TIR) images have been acquired from a variety of airborne platforms over the past decade. We have used structure-from-motion (SfM) photogrammetry techniques to generate a variety of data products from these images, including orthomosaics, temperature maps, and digital elevation models (DEMs). Temperature maps were generated for Upper Geyser Basin and Norris Geyser Basin for the years 2009-2015, by applying SfM to nighttime TIR images collected from an aircraft-mounted forward-looking infrared (FLIR) camera. Temperature data were preserved through the SfM processing by applying a uniform linear stretch over the entire image set to convert between temperature and a 16-bit digital number. Mosaicked temperature maps were compared to the original FLIR image frames and to ground-based temperature data to constrain the accuracy of the method. Due to pixel averaging and resampling, among other issues, the derived temperature values are typically within 5-10 ° of the values of the un-resampled image frame. We also created sub-meter resolution DEMs from airborne daytime visual images of individual thermal areas. These DEMs can be used for resource and hazard management, and in cases where multiple DEMs exist from different times, for measuring topographic change, including change due to thermal activity. For example, we examined the sensitivity of the DEMs to topographic change by comparing DEMs of the travertine terraces at Mammoth Hot Springs, which can grow at > 1 m per year. These methods are generally applicable to images from airborne platforms, including planes, helicopters, and unmanned aerial systems, and can be used to monitor thermal areas on a variety of spatial and temporal scales.
On removing interpolation and resampling artifacts in rigid image registration.
Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce
2013-02-01
We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
On Removing Interpolation and Resampling Artifacts in Rigid Image Registration
Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R.; Fischl, Bruce
2013-01-01
We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration. PMID:23076044
Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.
Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O
2015-10-01
Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of inclusion and exclusion criteria in historical protocols. Both methods, discrete resampling and the multivariate distribution, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
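The two covariate-generation strategies compared above are contrasted in the short sketch below; the multivariate normal assumption and the `covariates` array (patients × covariates) are illustrative simplifications (in practice, skewed covariates would typically be transformed before fitting).

```python
# Hedged sketch: generate virtual-patient covariates either by discrete
# re-sampling of observed patients or by sampling a fitted multivariate normal.
import numpy as np

def discrete_resample(covariates, n_virtual, rng=np.random.default_rng(4)):
    idx = rng.integers(0, len(covariates), size=n_virtual)
    return covariates[idx]                                   # reuses observed combinations only

def multivariate_sample(covariates, n_virtual, rng=np.random.default_rng(4)):
    mu = covariates.mean(axis=0)
    cov = np.cov(covariates, rowvar=False)
    # can produce combinations beyond the observed inclusion/exclusion limits
    return rng.multivariate_normal(mu, cov, size=n_virtual)
```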
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
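As a minimal, concrete instance of the bootstrap methods mentioned above, the following sketch computes a percentile confidence interval for a scalar summary statistic; the statistic, number of resamples, and confidence level are placeholders.

```python
# Hedged sketch: percentile bootstrap confidence interval by resampling with
# replacement from the observed data.
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, rng=np.random.default_rng(5)):
    stats = np.array([stat(rng.choice(data, size=len(data), replace=True))
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])    # e.g. 95% interval by default
```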
Enabling complex nanoscale pattern customization using directed self-assembly.
Doerk, Gregory S; Cheng, Joy Y; Singh, Gurpreet; Rettner, Charles T; Pitera, Jed W; Balakrishnan, Srinivasan; Arellano, Noel; Sanders, Daniel P
2014-12-16
Block copolymer directed self-assembly is an attractive method to fabricate highly uniform nanoscale features for various technological applications, but the dense periodicity of block copolymer features limits the complexity of the resulting patterns and their potential utility. Therefore, customizability of nanoscale patterns has been a long-standing goal for using directed self-assembly in device fabrication. Here we show that a hybrid organic/inorganic chemical pattern serves as a guiding pattern for self-assembly as well as a self-aligned mask for pattern customization through cotransfer of aligned block copolymer features and an inorganic prepattern. As informed by a phenomenological model, deliberate process engineering is implemented to maintain global alignment of block copolymer features over arbitrarily shaped, 'masking' features incorporated into the chemical patterns. These hybrid chemical patterns with embedded customization information enable deterministic, complex two-dimensional nanoscale pattern customization through directed self-assembly.
Pressure activated diaphragm bonder
Evans, L.B.; Malba, V.
1997-05-27
A device is available for bonding one component to another, particularly for bonding electronic components of integrated circuits, such as chips, to a substrate. The bonder device in one embodiment includes a bottom metal block having a machined opening wherein a substrate is located, a template having machined openings which match solder patterns on the substrate, a thin diaphragm placed over the template after the chips have been positioned in the openings therein, and a top metal block positioned over the diaphragm and secured to the bottom block, with the diaphragm retained therebetween. The top block includes a countersink portion which extends over at least the area of the template and an opening through which a high pressure inert gas is supplied to exert uniform pressure distribution over the diaphragm to keep the chips in place during soldering. A heating means is provided to melt the solder patterns on the substrate and thereby solder the chips thereto. 4 figs.
Radio-tracer techniques for the study of flow in saturated porous materials
Skibitzke, H.E.; Chapman, H.T.; Robinson, G.M.; McCullough, Richard A.
1961-01-01
An experiment was conducted by the U.S. Geological Survey to determine the feasibility of using a radioactive substance as a tracer in the study of microscopic flow in a saturated porous solid. A radioactive tracer was chosen in preference to dye or other chemical in order to eliminate effects of the tracer itself on the flow system such as those relating to density, viscosity and surface tension. The porous solid was artificial "sandstone" composed of uniform fine grains of sand bonded together with an epoxy adhesive. The sides of the block thus made were sealed with an epoxy coating compound to insure water-tightness. Because of the chemical inertness of the block it was possible to use radioactive phosphorus (P32). Ion-exchange equilibrium was created between the block and nonradioactive phosphoric acid. Then a tracer tagged with P32 was injected into the block in the desired geometric configuration, in this case, a line source. After equilibrium in isotopic exchange was reached between the block and the line source, the block was rinsed, drained and sawn into slices. It was found that a quantitative analysis of the flow system may be made by assaying the dissected block. © 1961.
Kent, Robert; Belitz, Kenneth; Fram, Miranda S.
2014-01-01
The Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) Program was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The GAMA-PBP began sampling, primarily public supply wells in May 2004. By the end of February 2006, seven (of what would eventually be 35) study units had been sampled over a wide area of the State. Selected wells in these first seven study units were resampled for water quality from August 2007 to November 2008 as part of an assessment of temporal trends in water quality by the GAMA-PBP. The initial sampling was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within the seven study units. In the 7 study units, 462 wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study area. Wells selected this way are referred to as grid wells or status wells. Approximately 3 years after the initial sampling, 55 of these previously sampled status wells (approximately 10 percent in each study unit) were randomly selected for resampling. The seven resampled study units, the total number of status wells sampled for each study unit, and the number of these wells resampled for trends are as follows, in chronological order of sampling: San Diego Drainages (53 status wells, 7 trend wells), North San Francisco Bay (84, 10), Northern San Joaquin Basin (51, 5), Southern Sacramento Valley (67, 7), San Fernando–San Gabriel (35, 6), Monterey Bay and Salinas Valley Basins (91, 11), and Southeast San Joaquin Valley (83, 9). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], pesticides, and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and naturally-occurring inorganic constituents (nutrients, major and minor ions, and trace elements). Naturally-occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water) also were measured to help identify processes affecting groundwater quality and the sources and ages of the sampled groundwater. Nearly 300 constituents and water-quality indicators were investigated. Quality-control samples (blanks, replicates, and samples for matrix spikes) were collected at 24 percent of the 55 status wells resampled for trends, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the groundwater samples. Differences between replicate samples were mostly within acceptable ranges, indicating acceptably low variability in analytical results. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for 75 percent of the compounds for which matrix spikes were collected. This study did not attempt to evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and blended with other waters to maintain acceptable water quality. The benchmarks used in this report apply to treated water that is served to the consumer, not to untreated groundwater. 
To provide some context for the results, however, concentrations of constituents measured in these groundwater samples were compared with benchmarks established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH). Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most constituents that were detected in groundwater samples from the trend wells were found at concentrations less than drinking-water benchmarks. Four VOCs—trichloroethene, tetrachloroethene, 1,2-dibromo-3-chloropropane, and methyl tert-butyl ether—were detected in one or more wells at concentrations greater than their health-based benchmarks, and six VOCs were detected in at least 10 percent of the samples during initial sampling or resampling of the trend wells. No pesticides were detected at concentrations near or greater than their health-based benchmarks. Three pesticide constituents—atrazine, deethylatrazine, and simazine—were detected in more than 10 percent of the trend-well samples during both sampling periods. Perchlorate, a constituent of special interest, was detected more frequently, and at greater concentrations during resampling than during initial sampling, but this may be due to a change in analytical method between the sampling periods, rather than to a change in groundwater quality. Another constituent of special interest, 1,2,3-TCP, was also detected more frequently during resampling than during initial sampling, but this pattern also may not reflect a change in groundwater quality. Samples from several of the wells where 1,2,3-TCP was detected by low-concentration-level analysis during resampling were not analyzed for 1,2,3-TCP using a low-level method during initial sampling. Most detections of nutrients and trace elements in samples from trend wells were less than health-based benchmarks during both sampling periods. Exceptions include nitrate, arsenic, boron, and vanadium, all detected at concentrations greater than their health-based benchmarks in at least one well during both sampling periods, and molybdenum, detected at concentrations greater than its health-based benchmark during resampling only. The isotopic ratios of oxygen and hydrogen in water and tritium and carbon-14 activities generally changed little between sampling periods, suggesting that the predominant sources and ages of groundwater in most trend wells were consistent between the sampling periods.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
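A minimal sketch of the first registration method described above (matching fields of view and resolutions directly through image resampling) might look like the following. The sensor FOV values and image sizes are placeholders, not the EVS specifications, and scipy stands in for the authors' implementation.

```python
# Minimal sketch of specification-based registration: resample one sensor's image
# so its pixel footprint (FOV / resolution) matches a reference sensor's.
import numpy as np
from scipy.ndimage import zoom

def match_to_reference(img, fov_deg, shape_ref, fov_ref_deg):
    """Resample `img` so one pixel subtends the same angle as in the reference."""
    # Angular size of one pixel for each sensor (deg/pixel), per axis.
    scale = [(fov_deg[k] / img.shape[k]) / (fov_ref_deg[k] / shape_ref[k])
             for k in range(2)]
    return zoom(img, scale, order=1)        # bilinear resampling

lwir = np.random.rand(240, 320)              # stand-in LWIR frame
registered = match_to_reference(lwir, fov_deg=(18.0, 24.0),
                                shape_ref=(480, 640), fov_ref_deg=(30.0, 40.0))
```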
NASA Astrophysics Data System (ADS)
Huang, Limin; Chen, Zhuoying; Wilson, James D.; Banerjee, Sarbajit; Robinson, Richard D.; Herman, Irving P.; Laibowitz, Robert; O'Brien, Stephen
2006-08-01
Advanced applications for high k dielectric and ferroelectric materials in the electronics industry continue to demand an understanding of the underlying physics as dimensions decrease into the nanoscale. We report the synthesis, processing, and electrical characterization of nanostructured thin films (<100 nm thick) of barium titanate (BaTiO3) built from uniform nanoparticles (<20 nm in diameter). We introduce a form of processing as a step toward the ability to prepare textured films based on assembly of nanoparticles. Essential to this approach is an understanding of the nanoparticles as building blocks, combined with an ability to integrate them into thin films that have uniform and characteristic electrical properties. Our method offers a versatile means of preparing BaTiO3 nanocrystals, which can be used as a basis for micropatterned or continuous BaTiO3 nanocrystal thin films. We observe that the BaTiO3 nanocrystals crystallize with evidence of tetragonality. We investigated the preparation of well-isolated BaTiO3 nanocrystals smaller than 10 nm with control over aggregation and crystal densities on various substrates such as Si, Si/SiO2, Si3N4/Si, and Pt-coated Si substrates. BaTiO3 nanocrystal thin films were then prepared, resulting in films with a uniform nanocrystalline grain texture. Electric-field-dependent polarization measurements show spontaneous polarization and hysteresis, indicating ferroelectric behavior for the BaTiO3 nanocrystalline films with grain sizes in the range of 10-30 nm. Dielectric measurements of the films show dielectric constants in the range of 85-90 over 1 kHz-100 kHz, with low loss. We present nanocrystals as initial building blocks for the preparation of thin films which exhibit highly uniform nanostructured texture and grain sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yao; Liang, Meng; Fu, Jiajia
2015-03-15
In this work, novel double Electron Blocking Layers for InGaN/GaN multiple-quantum-well light-emitting diodes were proposed to mitigate the efficiency droop at high current density. The band diagram and carrier distributions were investigated numerically. The results indicate that, due to a newly formed hole stack in the p-GaN near the active region, the hole injection has been improved and a uniform carrier distribution can be achieved. As a result, in our new structure with double Electron Blocking Layers, the efficiency droop has been reduced to 15.5% in comparison with 57.3% for the LED with an AlGaN EBL at a current density of 100 A/cm².
Chavis, Michelle A.; Smilgies, Detlef-M.; Wiesner, Ulrich B.; Ober, Christopher K.
2015-01-01
Thin films of block copolymers are extremely attractive for nanofabrication because of their ability to form uniform and periodic nanoscale structures by microphase separation. One shortcoming of this approach is that to date the design of a desired equilibrium structure requires synthesis of a block copolymer de novo within the corresponding volume ratio of the blocks. In this work, we investigated solvent vapor annealing in supported thin films of poly(2-hydroxyethyl methacrylate)-block-poly(methyl methacrylate) [PHEMA-b-PMMA] by means of grazing incidence small angle X–ray scattering (GISAXS). A spin-coated thin film of lamellar block copolymer was solvent vapor annealed to induce microphase separation and improve the long-range order of the self-assembled pattern. Annealing in a mixture of solvent vapors using a controlled volume ratio of solvents (methanol, MeOH, and tetrahydrofuran, THF), which are chosen to be preferential for each block, enabled selective formation of ordered lamellae, gyroid, hexagonal or spherical morphologies from a single block copolymer with a fixed volume fraction. The selected microstructure was then kinetically trapped in the dry film by rapid drying. To our knowledge, this paper describes the first reported case where in-situ methods are used to study the transition of block copolymer films from one initial disordered morphology to four different ordered morphologies, covering much of the theoretical diblock copolymer phase diagram. PMID:26819574
32 CFR Table 1 to Part 855 - Purpose of Use/Verification/Approval Authority/Fees
Code of Federal Regulations, 2011 CFR
2011-07-01
... change of station, etc.) or for private, non revenue flights Social security number in block 1 on DD Form... of a uniformed service member Identification card (DD Form 1173) number or social security number... Form 1173) number or social security number, identification card expiration date, sponsor's retirement...
32 CFR Table 1 to Part 855 - Purpose of Use/Verification/Approval Authority/Fees
Code of Federal Regulations, 2013 CFR
2013-07-01
... change of station, etc.) or for private, non revenue flights Social security number in block 1 on DD Form... of a uniformed service member Identification card (DD Form 1173) number or social security number... Form 1173) number or social security number, identification card expiration date, sponsor's retirement...
Scaled up Fabrication of High-Throughout SWNT Nanoelectronics and Nanosensor Devices
2007-04-20
copolymer, polystyrene-block-ferrocenylethylmethylsilane (PS-b-PFEMS) as the polymer-based catalyst for optimized SWNT growth. Spin coating a dilute...solution of 1 wt% PS-b-PFEMS in toluene provides uniform ~100 nm thick films of the catalyst. In order to get well-separated individual SWNT or SWNT rope
Landscape ecology and forest management
Thomas R. Crow
1999-01-01
Almost all forest management activities affect landscape pattern to some extent. Among the most obvious impacts are those associated with forest harvesting and road building. These activities profoundly affect the size, shape, and configuration of patches in the landscape matrix. Even-age management such as clearcutting has been applied in blocks of uniform size, shape...
77 FR 16903 - National Day of Honor
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... and unwavering commitment to duty, our men and women in uniform served tour after tour, fighting block... their love of country, nearly 4,500 men and women are eternally bound; though we have laid them to rest... through trials, we will always emerge stronger than before. Now, our Nation reaffirms our commitment to...
Guiding out-migrating juvenile sea lamprey (Petromyzon marinus) with pulsed direct current
Johnson, Nicholas S.; Miehls, Scott M.
2014-01-01
Non-physical stimuli can deter or guide fish without affecting water flow or navigation and therefore have been investigated to improve fish passage at anthropogenic barriers and to control movement of invasive fish. Upstream fish migration can be blocked or guided without physical structure by electrifying the water, but directional downstream fish guidance with electricity has received little attention. We tested two non-uniform pulsed direct current electric systems, each having different electrode orientations (vertical versus horizontal), to determine their ability to guide out-migrating juvenile sea lamprey (Petromyzon marinus) and rainbow trout (Oncorhynchus mykiss). Both systems guided significantly more juvenile sea lamprey to a specific location in our experimental raceway when activated than when deactivated, but guidance efficiency decreased at the highest water velocities tested. At the electric field setting that effectively guided sea lamprey, rainbow trout were guided by the vertical electrode system, but most were blocked by the horizontal electrode system. Additional research should characterize the response of other species to non-uniform fields of pulsed DC and develop electrode configurations that guide fish over a range of water velocity.
Variational optimization algorithms for uniform matrix product states
NASA Astrophysics Data System (ADS)
Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.
2018-01-01
We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.
Methods of Soil Resampling to Monitor Changes in the Chemical Concentrations of Forest Soils.
Lawrence, Gregory B; Fernandez, Ivan J; Hazlett, Paul W; Bailey, Scott W; Ross, Donald S; Villars, Thomas R; Quintana, Angelica; Ouimet, Rock; McHale, Michael R; Johnson, Chris E; Briggs, Russell D; Colter, Robert A; Siemion, Jason; Bartlett, Olivia L; Vargas, Olga; Antidormi, Michael R; Koppers, Mary M
2016-11-25
Recent soils research has shown that important chemical soil characteristics can change in less than a decade, often the result of broad environmental changes. Repeated sampling to monitor these changes in forest soils is a relatively new practice that is not well documented in the literature and has only recently been broadly embraced by the scientific community. The objective of this protocol is therefore to synthesize the latest information on methods of soil resampling in a format that can be used to design and implement a soil monitoring program. Successful monitoring of forest soils requires that a study unit be defined within an area of forested land that can be characterized with replicate sampling locations. A resampling interval of 5 years is recommended, but if monitoring is done to evaluate a specific environmental driver, the rate of change expected in that driver should be taken into consideration. Here, we show that the sampling of the profile can be done by horizon where boundaries can be clearly identified and horizons are sufficiently thick to remove soil without contamination from horizons above or below. Otherwise, sampling can be done by depth interval. Archiving of sample for future reanalysis is a key step in avoiding analytical bias and providing the opportunity for additional analyses as new questions arise.
Kent, Robert; Landon, Matthew K.
2016-01-01
From 2004 to 2011, the U.S. Geological Survey collected samples from 1686 wells across the State of California as part of the California State Water Resources Control Board’s Groundwater Ambient Monitoring and Assessment (GAMA) Priority Basin Project (PBP). From 2007 to 2013, 224 of these wells were resampled to assess temporal trends in water quality. The samples were analyzed for 216 water-quality constituents, including inorganic and organic compounds as well as isotopic tracers. The resampled wells were grouped into five hydrogeologic zones. A nonparametric hypothesis test was used to test the differences between initial sampling and resampling results to evaluate possible step trends in water-quality, statewide, and within each hydrogeologic zone. The hypothesis tests were performed on the 79 constituents that were detected in more than 5 % of the samples collected during either sampling period in at least one hydrogeologic zone. Step trends were detected for 17 constituents. Increasing trends were detected for alkalinity, aluminum, beryllium, boron, lithium, orthophosphate, perchlorate, sodium, and specific conductance. Decreasing trends were detected for atrazine, cobalt, dissolved oxygen, lead, nickel, pH, simazine, and tritium. Tritium was expected to decrease due to decreasing values in precipitation, and the detection of decreases indicates that the method is capable of resolving temporal trends.
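The step-trend comparison described above can be illustrated with a paired nonparametric test. The sketch below uses the Wilcoxon signed-rank test as a stand-in; the specific test and the data used in the USGS analysis may differ, and the concentration values are invented.

```python
# Minimal sketch of a paired, nonparametric step-trend test on resampled wells.
# The Wilcoxon signed-rank test is used here as a stand-in for the report's test.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)
initial  = rng.lognormal(mean=0.0, sigma=0.5, size=30)              # first visit
resample = initial * rng.lognormal(mean=0.1, sigma=0.2, size=30)    # second visit

stat, p = wilcoxon(resample, initial)          # paired test on the differences
print(f"step trend detected: {p < 0.05} (p = {p:.3f})")
```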
2015-01-01
Understanding protein–surface interactions is crucial to solid-state biomedical applications whose functionality is directly correlated with the precise control of the adsorption configuration, surface packing, loading density, and bioactivity of protein molecules. Because of the small dimensions and highly amphiphilic nature of proteins, investigation of protein adsorption performed on nanoscale topology can shed light on subprotein-level interaction preferences. In this study, we examine the adsorption and assembly behavior of a highly elongated protein, fibrinogen, on both chemically uniform (as-is and buffered HF-treated SiO2/Si, and homopolymers of polystyrene and poly(methyl methacrylate)) and varying (polystyrene-block-poly(methyl methacrylate)) surfaces. By focusing on high-resolution imaging of individual protein molecules whose configurations are influenced by protein–surface rather than protein–protein interactions, fibrinogen conformations characteristic to each surface are identified and statistically analyzed for structural similarities/differences in key protein domains. By exploiting block copolymer nanodomains whose repeat distance is commensurate with the length of the individual protein, we determine that fibrinogen exhibits a more neutral tendency for interaction with both polystyrene and poly(methyl methacrylate) blocks relative to the case of common globular proteins. Factors affecting fibrinogen–polymer interactions are discussed in terms of hydrophobic and electrostatic interactions. In addition, assembly and packing attributes of fibrinogen are determined at different loading conditions. Primary orientations of fibrinogen and its rearrangements with respect to the underlying diblock nanodomains associated with different surface coverage are explained by pertinent protein interaction mechanisms. On the basis of two-dimensional stacking behavior, a protein assembly model is proposed for the formation of an extended fibrinogen network on the diblock copolymer. PMID:24708538
Zhao, Dan; Di Nicola, Matteo; Khani, Mohammad M; Jestin, Jacques; Benicewicz, Brian C; Kumar, Sanat K
2016-09-14
We compare the self-assembly of silica nanoparticles (NPs) with physically adsorbed polystyrene-block-poly(2-vinylpyridine) (PS-b-P2VP) copolymers (BCP) against NPs with grafted bimodal (BM) brushes comprised of long, sparsely grafted PS chains and a short dense carpet of P2VP chains. As with grafted NPs, the dispersion state of the BCP NPs can be facilely tuned in PS matrices by varying the PS coverage on the NP surface or by changes in the ratio of the PS graft to matrix chain lengths. Surprisingly, the BCP NPs are remarkably better dispersed than the NPs tethered with bimodal brushes at comparable PS grafting densities. We postulate that this difference arises because of two factors inherent in the synthesis of the NPs: In the case of the BCP NPs the adsorption process is analogous to the chains being "grafted to" the NP surface, while the BM case corresponds to "grafting from" the surface. We have shown that the "grafted from" protocol yields patchy NPs even if the graft points are uniformly placed on each particle. This phenomenon, which is caused by chain conformation fluctuations, is exacerbated by the distribution function associated with the (small) number of grafts per particle. In contrast, in the case of BCP adsorption, each NP is more uniformly coated by a P2VP monolayer driven by the strongly favorable P2VP-silica interactions. Since each P2VP block is connected to a PS chain we conjecture that these adsorbed systems are closer to the limit of spatially uniform sparse brush coverage than the chemically grafted case. We finally show that the better NP dispersion resulting from BCP adsorption leads to larger mechanical reinforcement than those achieved with BM particles. These results emphasize that physical adsorption of BCPs is a simple, effective and practically promising strategy to direct NP dispersion in a chemically unfavorable polymer matrix.
Maximum likelihood resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, J.; Jenkins, C.
2005-12-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and squared uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) sidescan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise.
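Because both pdfs in the likelihood product are normal, the maximum-likelihood resampled value reduces to an inverse-variance-weighted mean of the data value and the geostatistical prediction. The sketch below illustrates that core step only; a plain neighbour average stands in for the kriging predictor, and all values are invented.

```python
# Minimal sketch of the core idea: combine the data pdf with a pdf predicted
# from neighbouring values, and take the maximum-likelihood value. A plain
# neighbour average stands in for the kriging predictor used in the paper.
import numpy as np

def resample_point(z, var_z, z_nbr, var_nbr):
    """Maximum of the product of two Gaussian pdfs = inverse-variance-weighted mean."""
    w_data, w_geo = 1.0 / var_z, 1.0 / var_nbr
    z_best = (w_data * z + w_geo * z_nbr) / (w_data + w_geo)
    var_best = 1.0 / (w_data + w_geo)
    return z_best, var_best

# Noisy sounding with a large uncertainty, surrounded by consistent neighbours.
neighbours = np.array([102.3, 101.8, 102.9])
z_geo, var_geo = neighbours.mean(), neighbours.var(ddof=1)
print(resample_point(z=110.0, var_z=25.0, z_nbr=z_geo, var_nbr=var_geo))
```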
Signal-Conditioning Block of a 1 × 200 CMOS Detector Array for a Terahertz Real-Time Imaging System
Yang, Jong-Ryul; Lee, Woo-Jae; Han, Seong-Tae
2016-01-01
A signal conditioning block of a 1 × 200 Complementary Metal-Oxide-Semiconductor (CMOS) detector array is proposed to be employed with a real-time 0.2 THz imaging system for inspecting large areas. The plasmonic CMOS detector array whose pixel size including an integrated antenna is comparable to the wavelength of the THz wave for the imaging system, inevitably carries wide pixel-to-pixel variation. To make the variant outputs from the array uniform, the proposed signal conditioning block calibrates the responsivity of each pixel by controlling the gate bias of each detector and the voltage gain of the lock-in amplifiers in the block. The gate bias of each detector is modulated to 1 MHz to improve the signal-to-noise ratio of the imaging system via the electrical modulation by the conditioning block. In addition, direct current (DC) offsets of the detectors in the array are cancelled by initializing the output voltage level from the block. Real-time imaging using the proposed signal conditioning block is demonstrated by obtaining images at the rate of 19.2 frame-per-sec of an object moving on the conveyor belt with a scan width of 20 cm and a scan speed of 25 cm/s. PMID:26950128
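A rough software analogue of the per-pixel equalization performed by the conditioning block is a gain/offset correction derived from a dark line and a uniform-illumination line. This is only an illustrative sketch with invented numbers; the actual block applies the correction in hardware through gate-bias and lock-in-gain control.

```python
# Minimal software analogue of per-pixel calibration: remove each detector's DC
# offset and equalize its responsivity so all 200 pixels give a uniform response.
import numpy as np

rng = np.random.default_rng(4)
n_pix = 200
resp = rng.normal(1.0, 0.2, n_pix)      # pixel-to-pixel responsivity spread
dark = rng.normal(0.05, 0.01, n_pix)    # DC offset per pixel (no THz signal)
flat = dark + resp * 1.0                # response to a uniform THz flood
gain = (flat - dark).mean() / (flat - dark)   # per-pixel gain correction

def calibrate(raw_line):
    """Remove the per-pixel offset and equalize responsivity."""
    return (raw_line - dark) * gain

scan = dark + resp * 0.7                # one raw line viewing a uniform object
print(calibrate(scan).std())            # ~0: the calibrated line is flat
```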
A new dry hypothesis for the formation of Martian linear gullies
Diniega, Serina; Hansen, Candice J.; McElwaine, Jim N.; Hugenholtz, C.H.; Dundas, Colin M.; McEwen, Alfred S.; Bourke, Mary C.
2013-01-01
Long, narrow grooves found on the slopes of martian sand dunes have been cited as evidence of liquid water via the hypothesis that melt-water initiated debris flows eroded channels and deposited lateral levées. However, this theory has several short-comings for explaining the observed morphology and activity of these linear gullies. We present an alternative hypothesis that is consistent with the observed morphology, location, and current activity: that blocks of CO2 ice break from over-steepened cornices as sublimation processes destabilize the surface in the spring, and these blocks move downslope, carving out levéed grooves of relatively uniform width and forming terminal pits. To test this hypothesis, we describe experiments involving water and CO2 blocks on terrestrial dunes and then compare results with the martian features. Furthermore, we present a theoretical model of the initiation of block motion due to sublimation and use this to quantitatively compare the expected behavior of blocks on the Earth and Mars. The model demonstrates that CO2 blocks can be expected to move via our proposed mechanism on the Earth and Mars, and the experiments show that the motion of these blocks will naturally create the main morphological features of linear gullies seen on Mars.
Classifier performance prediction for computer-aided diagnosis using a limited dataset.
Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir
2008-04-01
In a practical classifier design problem, the true population is generally unknown and the available sample is finite-sized. A common approach is to use a resampling technique to estimate the performance of the classifier that will be trained with the available sample. We conducted a Monte Carlo simulation study to compare the ability of the different resampling techniques in training the classifier and predicting its performance under the constraint of a finite-sized sample. The true population for the two classes was assumed to be multivariate normal distributions with known covariance matrices. Finite sets of sample vectors were drawn from the population. The true performance of the classifier is defined as the area under the receiver operating characteristic curve (AUC) when the classifier designed with the specific sample is applied to the true population. We investigated methods based on the Fukunaga-Hayes and the leave-one-out techniques, as well as three different types of bootstrap methods, namely, the ordinary, 0.632, and 0.632+ bootstrap. The Fisher's linear discriminant analysis was used as the classifier. The dimensionality of the feature space was varied from 3 to 15. The sample size n2 from the positive class was varied between 25 and 60, while the number of cases from the negative class was either equal to n2 or 3n2. Each experiment was performed with an independent dataset randomly drawn from the true population. Using a total of 1000 experiments for each simulation condition, we compared the bias, the variance, and the root-mean-squared error (RMSE) of the AUC estimated using the different resampling techniques relative to the true AUC (obtained from training on a finite dataset and testing on the population). Our results indicated that, under the study conditions, there can be a large difference in the RMSE obtained using different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Under this type of conditions, the 0.632 and 0.632+ bootstrap methods have the lowest RMSE, indicating that the difference between the estimated and the true performances obtained using the 0.632 and 0.632+ bootstrap will be statistically smaller than those obtained using the other three resampling methods. Of the three bootstrap methods, the 0.632+ bootstrap provides the lowest bias. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited dataset.
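The 0.632 bootstrap mentioned above combines the apparent (resubstitution) performance with the performance measured on cases left out of each bootstrap sample. A minimal sketch for AUC with a linear discriminant follows; the simulation settings are placeholders and do not reproduce the study conditions.

```python
# Minimal sketch of the 0.632 bootstrap estimate of classifier AUC on a small
# sample, using Fisher's linear discriminant; settings are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n, d = 50, 5
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(0.7, 1.0, (n, d))])
y = np.r_[np.zeros(n), np.ones(n)]

apparent = roc_auc_score(y, LinearDiscriminantAnalysis().fit(X, y).decision_function(X))
boot_aucs = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))                 # bootstrap training sample
    oob = np.setdiff1d(np.arange(len(y)), idx)            # left-out ("out-of-bag") cases
    if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
        continue
    clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    boot_aucs.append(roc_auc_score(y[oob], clf.decision_function(X[oob])))

auc_632 = 0.368 * apparent + 0.632 * np.mean(boot_aucs)   # 0.632 bootstrap estimate
print(auc_632)
```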
Porto, Paolo; Walling, Des E; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus
2014-12-01
Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly (137)Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954-1998 with that for the period 1999-2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the (137)Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure for such calculations, key factors influencing the uncertainty of the estimates were identified and a procedure developed. The results of the study demonstrated that there had been no significant change in mean annual soil loss in recent years and this was consistent with the information provided by the estimates of sediment yield from the catchment for the same periods. The study demonstrates the potential for using a re-sampling technique to document recent changes in soil redistribution rates. Copyright © 2014. Published by Elsevier Ltd.
Significance of the impact of motion compensation on the variability of PET image features
NASA Astrophysics Data System (ADS)
Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.
2018-03-01
In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET-features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients. For our patient cohort, the compensation of tumor motion did not have a significant impact on the quantitative PET parameters. The variability of PET parameters due to voxel size in image reconstruction was more significant than variability due to voxel size in image post-resampling. In conclusion, most of the parameters (apart from the contrast of the neighborhood matrix) were robust to the motion compensation implied by 4D-PET/CT. The impact on parameter variability due to the voxel size in image reconstruction and in image post-resampling could not be assumed to be equivalent.
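The two basic SUV discretization schemes named above can be sketched as follows: method RW bins the SUVs with a constant bin width, while method RN uses a constant number of bins between the lesion minimum and maximum (EqRN would apply histogram equalization first). The bin width, bin count, and SUV values below are placeholders.

```python
# Minimal sketch of the SUV discretization schemes: fixed bin width (RW) versus
# fixed number of bins (RN). EqRN would equalize the SUV histogram before RN.
import numpy as np

def discretize_rw(suv, bin_width=0.5):
    """Constant SUV resolution: bin index = floor(SUV / width)."""
    return np.floor(suv / bin_width).astype(int)

def discretize_rn(suv, n_bins=64):
    """Constant number of bins between the lesion's min and max SUV."""
    edges = np.linspace(suv.min(), suv.max(), n_bins + 1)
    return np.clip(np.digitize(suv, edges[1:-1]), 0, n_bins - 1)

suv_voxels = np.random.default_rng(6).gamma(shape=2.0, scale=2.0, size=1000)
print(discretize_rw(suv_voxels).max(), discretize_rn(suv_voxels).max())
```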
Porto, Paolo; Walling, Desmond E; Cogliandro, Vanessa; Callegari, Giovanni
2016-11-01
In recent years, the fallout radionuclides caesium-137 (137Cs) and unsupported lead-210 (210Pbex) have been successfully used to document rates of soil erosion in many areas of the world, as an alternative to conventional measurements. By virtue of their different half-lives, these two radionuclides are capable of providing information related to different time windows. 137Cs measurements are commonly used to generate information on mean annual erosion rates over the past ca. 50-60 years, whereas 210Pbex measurements are able to provide information relating to a longer period of up to ca. 100 years. However, the time-integrated nature of the estimates of soil redistribution provided by 137Cs and 210Pbex measurements can be seen as a limitation, particularly when viewed in the context of global change and interest in the response of soil redistribution rates to contemporary climate change and land use change. Re-sampling techniques used with these two fallout radionuclides potentially provide a basis for providing information on recent changes in soil redistribution rates. By virtue of the effectively continuous fallout input of 210Pb, the response of the 210Pbex inventory of a soil profile to changing soil redistribution rates, and thus its potential for use with the re-sampling approach, differs from that of 137Cs. Its greater sensitivity to recent changes in soil redistribution rates suggests that 210Pbex may have advantages over 137Cs for use in the re-sampling approach. The potential for using 210Pbex measurements in re-sampling studies is explored further in this contribution. Attention focuses on a small (1.38 ha) forested catchment in southern Italy. The catchment was originally sampled for 210Pbex measurements in 2001 and equivalent samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimates of mean annual erosion related to two different time windows. This comparison suggests that mean annual rates of net soil loss had increased during the period between the two sampling campaigns and that this increase was associated with a shift to an increased sediment delivery ratio. This change was consistent with independent information on likely changes in the sediment response of the study catchment provided by the available records of annual sediment yield and changes in the annual rainfall documented for the local area. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mapping cerebrovascular reactivity using concurrent fMRI and near infrared spectroscopy
NASA Astrophysics Data System (ADS)
Tong, Yunjie; Bergethon, Peter R.; Frederick, Blaise d.
2011-02-01
Cerebrovascular reactivity (CVR) reflects the compensatory dilatory capacity of cerebral vasculature to a dilatory stimulus and is an important indicator of brain vascular reserve. fMRI has been proven to be an effective imaging technique to obtain the CVR map when the subjects perform CO2 inhalation or the breath holding task (BH). However, the traditional data analysis inaccurately models the BOLD response using a boxcar function with a fixed time delay. We propose a novel way to process the fMRI data obtained during a blocked BH by using the simultaneously collected near infrared spectroscopy (NIRS) data as a regressor [1]. In this concurrent NIRS and fMRI study, 6 healthy subjects performed a blocked BH (5 breath holds with 20 s durations intermitted by 40 s of regular breathing). A NIRS probe of two sources and two detectors separated by 3 cm was placed on the right side of the prefrontal area of the subjects. The time course of changes in oxy-hemoglobin (Δ[HbO]) was calculated from the NIRS data, shifted in time by various amounts, and resampled to the fMRI acquisition rate. Each shifted time course was used as a regressor in FEAT (the analysis tool in FSL). The resulting z-statistic maps were concatenated in time and the maximal value over time was taken for each voxel to generate a 3-D CVR map. The new method produces more accurate and thorough CVR maps; moreover, it enables us to produce a comparable baseline cerebral vascular map if applied to resting state (RS) data.
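The regressor construction described above can be sketched as follows: shift the Δ[HbO] trace over a range of lags, resample each shifted trace to the fMRI sampling rate, and keep the best-fitting lag. Plain correlation stands in here for the FEAT GLM z-statistics, and the sampling rates and signals are invented.

```python
# Minimal sketch: shift the NIRS Δ[HbO] trace by a range of lags, resample each
# shifted trace to the fMRI TR, and keep the lag that best fits a voxel time series.
import numpy as np

fs_nirs, tr = 12.5, 2.0                      # assumed NIRS rate (Hz) and fMRI TR (s)
t_nirs = np.arange(0, 300, 1 / fs_nirs)
t_fmri = np.arange(0, 300, tr)
hbo = np.sin(2 * np.pi * t_nirs / 60)        # stand-in Δ[HbO] during blocked breath-holds
voxel = np.sin(2 * np.pi * (t_fmri - 6) / 60) \
        + 0.3 * np.random.default_rng(7).normal(size=len(t_fmri))

best = (-np.inf, None)
for lag in np.arange(-10, 10.5, 0.5):        # seconds
    reg = np.interp(t_fmri, t_nirs + lag, hbo)          # shift, then resample to TR
    r = np.corrcoef(reg, voxel)[0, 1]
    if r > best[0]:
        best = (r, lag)
print(f"best lag {best[1]} s, r = {best[0]:.2f}")
```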
Low-head sea lamprey barrier effects on stream habitat and fish communities in the Great Lakes basin
Dodd, H.R.; Hayes, D.B.; Baylis, J.R.; Carl, L.M.; Goldstein, J.D.; McLaughlin, R.L.; Noakes, D.L.G.; Porto, L.M.; Jones, M.L.
2003-01-01
Low-head barriers are used to block adult sea lamprey (Petromyzon marinus) from upstream spawning habitat. However, these barriers may impact stream fish communities through restriction of fish movement and habitat alteration. During the summer of 1996, the fish community and habitat conditions in twenty-four stream pairs were sampled across the Great Lakes basin. Seven of these stream pairs were re-sampled in 1997. Each pair consisted of a barrier stream with a low-head barrier and a reference stream without a low-head barrier. On average, barrier streams were significantly deeper (df = 179, P = 0.0018) and wider (df = 179, P = 0.0236) than reference streams, but temperature and substrate were similar (df = 183, P = 0.9027; df = 179, P = 0.999). Barrier streams contained approximately four more fish species on average than reference streams. However, streams with low-head barriers showed a greater upstream decline in species richness compared to reference streams with a net loss of 2.4 species. Barrier streams also showed a peak in richness directly downstream of the barriers, indicating that these barriers block fish movement upstream. Using Sørenson's similarity index (based on presence/absence), a comparison of fish community assemblages above and below low-head barriers was not significantly different than upstream and downstream sites on reference streams (n = 96, P > 0.05), implying they have relatively little effect on overall fish assemblage composition. Differences in the frequency of occurrence and abundance between barrier and reference streams were apparent for some species, suggesting their sensitivity to barriers.
NASA Astrophysics Data System (ADS)
Pytlak, E.; McManamon, A.; Hughes, S. P.; Van Der Zweep, R. A.; Butcher, P.; Karafotias, C.; Beckers, J.; Welles, E.
2016-12-01
Numerous studies have documented the impacts that large-scale weather patterns and climate phenomena like the El Niño Southern Oscillation (ENSO), Pacific-North American (PNA) Pattern, and others can have on seasonal temperature and precipitation in the Columbia River Basin (CRB). While far from perfect in terms of seasonal predictability in specific locations, these intra-annual weather and climate signals do tilt the odds toward different temperature and precipitation outcomes, which in turn can have impacts on seasonal snowpacks, streamflows and water supply in large river basins like the CRB. We hypothesize that intraseasonal climate signals and long-wave jet stream patterns can be objectively incorporated into what is otherwise a climatology-based set of Ensemble Streamflow Forecasts, and can increase the predictive skill and utility of these forecasts used for mid-range hydropower planning. The Bonneville Power Administration (BPA) and Deltares have developed a subsampling-resampling method to incorporate climate mode information into the Ensemble Streamflow Prediction (ESP) forecasts (Beckers et al., 2016). Since 2015, BPA and Deltares USA have experimented with this method in pre-operational use, using five objective multivariate climate indices that appear to have the greatest predictive value for seasonal temperature and precipitation in the CRB. The indices are used to objectively select historical weather from about twenty analog years in the 66-year (1949-2015) historical ESP set. These twenty scenarios then serve as the starting point to generate monthly synthetic weather and streamflow time series to return a set of 66 streamflow traces. Our poster will share initial results from the 2015 and 2016 water years, which included large swings in the Quasi-Biennial Oscillation, persistent blocking jet stream patterns, and the development of a strong El Niño event. While the results are very preliminary and for only two seasons, there may be some value in incorporating objectively identified climate signals into ESP-based streamflow forecasts. Reference: Beckers, J. V. L., Weerts, A. H., Tijdeman, E., and Welles, E.: ENSO-Conditioned Weather Resampling Method for Seasonal Ensemble Streamflow Prediction, Hydrol. Earth Syst. Sci. Discuss., doi:10.5194/hess-2016-72, in review, 2016.
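The subsampling-resampling idea can be sketched as follows: rank the historical years by the similarity of their climate-index vector to the current year, keep roughly twenty analog years, and resample those back up to a full ensemble. The index values below are random placeholders, not the five indices used operationally by BPA/Deltares.

```python
# Minimal sketch of subsampling-resampling an ESP ensemble on climate indices.
import numpy as np

rng = np.random.default_rng(8)
years = np.arange(1949, 2015)
indices = rng.normal(size=(len(years), 5))          # e.g. ENSO, PNA, ... per year
current = rng.normal(size=5)                        # this year's climate-index vector

dist = np.linalg.norm(indices - current, axis=1)    # similarity to the current year
analogs = years[np.argsort(dist)[:20]]              # ~20 best analog years

# Resample the analog years (with replacement) to restore a 66-trace ensemble.
resampled_traces = rng.choice(analogs, size=len(years), replace=True)
print(sorted(set(resampled_traces)))
```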
Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs
NASA Astrophysics Data System (ADS)
Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.
2017-08-01
In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points on other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The resulting stacked image obtained on real surveys shows no visible impairment. Timing results demonstrate that our algorithm can be used in real time since its processing time is less than the writing time of an image to the storage device. An interesting by-product of this algorithm is the 3D rotation between poses estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time.
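A minimal sketch of the stacking approach described above, using OpenCV: FAST corners are detected in the first frame, relocated in later frames by template matching, and the estimated transform is used to resample and accumulate the frames. The IMU-aided search window, the camera distortion model, and the FPGA acceleration are omitted, and all thresholds are placeholders.

```python
# Minimal sketch of feature-based stacking: detect FAST corners in the first frame,
# relocate them in later frames by template matching, estimate the transform, then
# resample (warp) and accumulate. Frames are assumed to be uint8 grayscale arrays.
import cv2
import numpy as np

def stack(frames, patch=16, search=40):
    ref = frames[0]
    acc = ref.astype(np.float32)
    kps = cv2.FastFeatureDetector_create(threshold=40).detect(ref, None)
    pts_ref = np.array([k.pt for k in kps], dtype=np.float32)

    for frame in frames[1:]:
        src, dst = [], []
        for (x, y) in pts_ref:
            x, y = int(x), int(y)
            if x < search or y < search or x + search >= ref.shape[1] or y + search >= ref.shape[0]:
                continue
            tmpl = ref[y - patch:y + patch, x - patch:x + patch]
            win = frame[y - search:y + search, x - search:x + search]
            res = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            if score > 0.8:
                src.append([x, y])
                dst.append([x - search + loc[0] + patch, y - search + loc[1] + patch])
        if len(src) < 4:          # need at least 4 correspondences for a homography
            continue
        H, _ = cv2.findHomography(np.float32(dst), np.float32(src), cv2.RANSAC)
        acc += cv2.warpPerspective(frame, H, ref.shape[::-1]).astype(np.float32)
    return (acc / len(frames)).astype(np.uint8)
```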
Adaptive Resampling Particle Filters for GPS Carrier-Phase Navigation and Collision Avoidance System
NASA Astrophysics Data System (ADS)
Hwang, Soon Sik
This dissertation addresses three problems: 1) an adaptive resampling technique (ART) for particle filters, 2) precise relative positioning using Global Positioning System (GPS) Carrier-Phase (CP) measurements, applied to the nonlinear integer resolution problem for GPS CP navigation using particle filters, and 3) a collision detection system based on GPS CP broadcasts. First, Monte Carlo filters, called Particle Filters (PF), are widely used where the system is non-linear and non-Gaussian. In real-time applications, their estimation accuracies and efficiencies are significantly affected by the number of particles and by the scheduling of relocating weights and samples, the so-called resampling step. In this dissertation, the appropriate number of particles is estimated adaptively such that the errors of the sample mean and variance stay within bounds. These bounds are given by the confidence interval of a normal probability distribution for a multivariate state. Two required sample sizes, one keeping the mean error and one keeping the variance error within the bounds, are derived. The time of resampling is determined when the required sample number for the variance error crosses the required sample number for the mean error. Second, the PF using GPS CP measurements with adaptive resampling is applied to precise relative navigation between two GPS antennas. In order to make use of CP measurements for navigation, the unknown number of cycles between GPS antennas, the so-called integer ambiguity, must be resolved. The PF is applied to this integer ambiguity resolution problem, where the relative navigation state estimation involves nonlinear observations and nonlinear dynamics equations. Using the PF, the probability density function of the states is estimated by sampling from the position and velocity space, and the integer ambiguities are resolved without the usual hypothesis tests used to search for the integer ambiguity. The ART manages the number of position samples and the frequency of the resampling step for real-time kinematic GPS navigation. The experimental results demonstrate the performance of the ART and the insensitivity of the proposed approach to GPS CP cycle slips. Third, GPS has great potential for the development of new collision avoidance systems and is being considered for the next-generation Traffic alert and Collision Avoidance System (TCAS). The current TCAS equipment is capable of broadcasting GPS code information to nearby airplanes, and collision avoidance systems using navigation information based on GPS code have been studied by researchers. In this dissertation, an aircraft collision detection system using GPS CP information is addressed. The PF with position samples is employed for the CP-based relative position estimation problem, and the same algorithm can be used to determine the vehicle attitude if multiple GPS antennas are used. For a reliable and enhanced collision avoidance system, three-dimensional trajectories are projected using the estimates of the relative position, velocity, and attitude. It is shown that the performance of the GPS CP based collision detection algorithm meets the accuracy requirements for a precision approach for automatic landing, with significantly fewer unnecessary collision false alarms and no missed alarms.
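The sketch below shows one generic bootstrap particle-filter step with an adaptive resampling trigger. It is only an illustration: the trigger here is the common effective-sample-size criterion rather than the dissertation's confidence-interval-based bounds on the mean and variance errors, and the propagate/likelihood callables are placeholders supplied by the user.

```python
import numpy as np

def particle_filter_step(particles, weights, propagate, likelihood, z,
                         resample_frac=0.5, rng=None):
    """One bootstrap particle-filter step with an adaptive resampling trigger.

    propagate(particles, rng) samples from the dynamics; likelihood(z, particles)
    returns the observation likelihood of measurement z for each particle.
    """
    rng = np.random.default_rng(rng)
    particles = propagate(particles, rng)          # predict
    weights = weights * likelihood(z, particles)   # reweight by the observation
    weights = weights / weights.sum()

    ess = 1.0 / np.sum(weights ** 2)               # effective sample size
    if ess < resample_frac * len(weights):         # adaptive resampling trigger
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```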
NASA Astrophysics Data System (ADS)
A. AL-Salhi, Yahya E.; Lu, Songfeng
2016-08-01
Quantum steganography can solve some problems that are handled inefficiently by classical image information concealing, and quantum image information concealing has been widely exploited in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and uniform clustering of image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, the Con-Steg algorithm is used to conceal the clustered image blocks. Information concealing located in the Fourier domain of an image can achieve security of the image information, so we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
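For readers unfamiliar with the classical notion the abstract builds on, the toy sketch below shows plain (non-quantum) LSB embedding in a raster image. It illustrates only the classical least-significant-bit idea that the LSQu-block scheme generalizes; none of the NEQR encoding or quantum-circuit machinery is modeled, and the cover image and bit string are made up.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Embed a bit sequence into the least significant bits of a cover image."""
    flat = cover.flatten()                                  # copy of the pixels
    payload = np.asarray(bits, dtype=np.uint8)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | payload  # overwrite the LSBs
    return flat.reshape(cover.shape)

cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, secret)
print((stego.flatten()[:8] & 1).tolist())   # -> recovers the secret bits
```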
Evaluation of emerging factors blocking filtration of high-adjunct-ratio wort.
Ma, Ting; Zhu, Linjiang; Zheng, Feiyun; Li, Yongxian; Li, Qi
2014-08-20
Corn starch has become a common adjunct for beer brewing in Chinese breweries. However, with an increasing ratio of corn starch, problems like poor wort filtration performance arise, which decreases the production capacity of breweries. To solve this problem, factors affecting wort filtration were evaluated, such as corn starch particle size, the special yellow floats formed during liquefaction of corn starch, and the residual substance after liquefaction. The effects of different enzyme preparations, including β-amylase and β-glucanase, on filtration rate were also evaluated. The results indicate that the emerging yellow floats do not severely block filtration, while the fine, uniformly shaped corn starch particles and their incompletely hydrolyzed residue after liquefaction are responsible for filtration blocking. Application of a β-amylase preparation increased the filtration rate of liquefied corn starch. This study provides insight into the filtration-blocking problem arising in high-adjunct-ratio beer brewing and also offers a feasible solution using enzyme preparations.
APPARATUS FOR PRODUCING IONS OF VAPORIZABLE MATERIALS
Wright, B.T.
1958-01-28
a uniform and copious supply of ions. The source comprises a hollow arc-block and means for establishing a magnetic field through the arc-block. Vaporization of the material to be ionized is produced by an electrically heated filament. The arc-producing structure within the arc-block consists of a cathode disposed between a pair of collimating electrodes, along with an anode adjacent to each collimating electrode on the side opposite the cathode. A positive potential applied to the anodes and collimating electrodes, with respect to the cathode, and the magnetic field act to accelerate the electrons from the cathode through a slit in each collimating electrode towards the respective anode. In this manner a pair of collinear arc discharges is produced in the gas region, which can be tapped for an abundant supply of ions of the material being analyzed.
NASA Astrophysics Data System (ADS)
Farid, Sidra; Kuljic, Rade; Poduri, Shripriya; Dutta, Mitra; Darling, Seth B.
2018-06-01
High-density arrays of gold nanodots and nanoholes on indium tin oxide (ITO)-coated glass surfaces are fabricated using a nanoporous template formed by the self-assembly of poly(styrene-block-methyl methacrylate) (PS-b-PMMA) diblock copolymers. By balancing the interfacial interactions between the polymer blocks and the substrate using a random copolymer, cylindrical block copolymer microdomains oriented perpendicular to the plane of the substrate have been obtained. Nanoporous PS films are created by selectively etching the PMMA cylinders, a straightforward route to form highly ordered nanoscale porous films. Deposition of gold on the template followed by lift-off and sonication leaves a highly dense array of gold nanodots. These materials can serve as templates for the vapor-liquid-solid (VLS) growth of semiconductor nanorod arrays for next-generation hybrid optoelectronic applications.
General synthesis of inorganic single-walled nanotubes
Ni, Bing; Liu, Huiling; Wang, Peng-peng; He, Jie; Wang, Xun
2015-01-01
The single-walled nanotube (SWNT) is an interesting nanostructure for fundamental research and potential applications. However, very few inorganic SWNTs are available to date due to the lack of efficient fabrication methods. Here we synthesize four types of SWNT: sulfide; hydroxide; phosphate; and polyoxometalate. Each type of SWNT possesses essentially uniform diameters. Detailed studies illustrate that the formation of SWNTs is initiated by the self-coiling of the corresponding ultrathin nanostructure embryos/building blocks on the basis of weak interactions between them, which is not limited to specific compounds or crystal structures. The interactions between building blocks can be modulated by varying the solvents used; thus multi-walled tubes can also be obtained. Our results reveal that the generalized synthesis of inorganic SWNTs can be achieved by the self-coiling of ultrathin building blocks under the proper weak interactions. PMID:26510862
The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.
Rodgers, J L
1999-10-01
A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
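As a concrete companion to the taxonomy, the short sketch below contrasts the three schemes on a toy two-sample problem: bootstrap (whole-sample resampling with replacement), jackknife (leave-one-out subsets without replacement), and a randomization test (permuting group labels). The data are synthetic and only illustrate the sampling distinctions, not the article's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, size=30)
y = rng.normal(loc=1.4, size=30)

# Bootstrap: resample the whole sample WITH replacement
boot_means = [rng.choice(x, size=len(x), replace=True).mean() for _ in range(2000)]

# Jackknife: leave one observation out at a time (subsets, WITHOUT replacement)
jack_means = [np.delete(x, i).mean() for i in range(len(x))]

# Randomization test: permute group labels to build a null distribution
observed = y.mean() - x.mean()
pooled = np.concatenate([x, y])
null = []
for _ in range(2000):
    p = rng.permutation(pooled)
    null.append(p[len(x):].mean() - p[:len(x)].mean())
p_value = np.mean(np.abs(null) >= abs(observed))

print(np.std(boot_means), np.std(jack_means), p_value)
```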
49 CFR Appendix A to Part 23 - Uniform Report of ACDBE Participation
Code of Federal Regulations, 2012 CFR
2012-10-01
... participation only. In this block, provide the overall non-car rental percentage goal and the race-conscious (RC... concessionaires (prime and sub) and purchases of goods and services (ACDBE and non-ACDBE combined) at the airport... revenues listed in Column C into the portions that are attributable to race-conscious and race-neutral...
49 CFR Appendix A to Part 23 - Uniform Report of ACDBE Participation
Code of Federal Regulations, 2013 CFR
2013-10-01
... participation only. In this block, provide the overall non-car rental percentage goal and the race-conscious (RC... concessionaires (prime and sub) and purchases of goods and services (ACDBE and non-ACDBE combined) at the airport... revenues listed in Column C into the portions that are attributable to race-conscious and race-neutral...
49 CFR Appendix A to Part 23 - Uniform Report of ACDBE Participation
Code of Federal Regulations, 2014 CFR
2014-10-01
... participation only. In this block, provide the overall non-car rental percentage goal and the race-conscious (RC... concessionaires (prime and sub) and purchases of goods and services (ACDBE and non-ACDBE combined) at the airport... revenues listed in Column C into the portions that are attributable to race-conscious and race-neutral...
Formation of Enhanced Uniform Chiral Fields in Symmetric Dimer Nanostructures
Tian, Xiaorui; Fang, Yurui; Sun, Mengtao
2015-01-01
Chiral fields with large optical chirality are very important in chiral molecule analysis, sensing, and other measurements. Plasmonic nanostructures have been proposed to realize such super-chiral fields for enhancing weak chiral signals. However, most of them cannot provide uniform chiral near-fields close to the structures, which limits the efficiency of these nanostructures in applications. Plasmonic helical nanostructures and blocked squares have been shown to provide uniform chiral near-fields, but their fabrication is a challenge. In this paper, we show that very simple plasmonic dimer structures can provide uniform chiral fields in the gaps, with large enhancement of both the near electric fields and the chiral fields, under linearly polarized light illumination with the polarization off the dimer axis at the dipole resonance. An analytical dipole model is utilized to explain this behavior theoretically. A 30-fold volume-averaged chiral field enhancement is obtained across the whole gap. Chiral fields with opposite handedness can be obtained simply by changing the polarization to the other side of the dimer axis. This is especially useful in Raman optical activity measurements and chiral sensing of small quantities of chiral molecules. PMID:26621558
Modeling air concentration over macro roughness conditions by Artificial Intelligence techniques
NASA Astrophysics Data System (ADS)
Roshni, T.; Pagliara, S.
2018-05-01
Aeration is improved in rivers by the turbulence created in flows over macro and intermediate roughness conditions. Macro and intermediate roughness flow conditions are generated by flows over block ramps or rock chutes. The measurements are taken in the uniform flow region. Applications of soft computing methods to modeling hydraulic parameters are still uncommon. In this study, the modeling efficiencies of the MPMR and FFNN models are evaluated for estimating the air concentration over block ramps under macro roughness conditions. The experimental data are used for the training and testing phases. The potential of the MPMR and FFNN models for estimating air concentration is demonstrated through this study.
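As a rough illustration of the regression task the abstract describes (flow descriptors in, air concentration out), the sketch below fits a small feed-forward neural network with scikit-learn. The feature names and the synthetic data are hypothetical stand-ins; the study's actual block-ramp measurements and the MPMR model are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical features (e.g. Froude number, relative roughness, slope);
# synthetic data stands in for the block-ramp measurements used in the study.
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 - 0.2 * X[:, 2] \
    + 0.05 * rng.standard_normal(300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```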
Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.
Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K
2014-07-07
Recently, CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer-scale CMOS APSs being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer-scale sensor performance. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects, and process variations across the wafer. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer-scale stitched CMOS APS. For the first time, a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer-scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis allows readout noise and conversion gain to be evaluated at the pixel level, at the stitching-block level, and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing a CNR uniformity among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), a theoretical evaluation of the detective quantum efficiency (DQE) at zero frequency has been performed, resulting in a higher DQE for this detector compared to FPIs. Optical characterization, x-ray contrast measurements, and theoretical DQE evaluation suggest that a trade-off can be found between the need for a large imaging area and the requirement of uniform imaging performance, making the DynAMITe large-area CMOS APS suitable for a range of bio-medical applications.
An electrostatic Particle-In-Cell code on multi-block structured meshes
NASA Astrophysics Data System (ADS)
Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David
2017-12-01
We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
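To make the logical-space idea concrete, here is a minimal one-dimensional sketch of pushing particles whose positions live in a block's logical coordinate while their velocities stay physical. The mapping, its Jacobian, and the out-of-block hand-off are placeholder assumptions, and the mimetic field solve of CPIC is not modeled.

```python
import numpy as np

def push_logical(xi, v_phys, dt, dxdxi):
    """Advance particles stored as logical-space positions (xi in [0, 1])
    with physical-space velocities on one curvilinear 1D block.

    Minimal sketch of a hybrid (logical position / physical velocity) mover;
    particles leaving the unit interval are only flagged, not handed off.
    """
    xi_dot = v_phys / dxdxi(xi)                 # physical velocity -> logical rate
    xi_new = xi + dt * xi_dot
    leaving = (xi_new < 0.0) | (xi_new > 1.0)   # would go to a neighbor block
    return np.clip(xi_new, 0.0, 1.0), leaving

# Example: a stretched block with mapping x(xi) = xi**2, so dx/dxi = 2*xi
dxdxi = lambda xi: 2.0 * xi
xi = np.linspace(0.1, 0.9, 5)                   # logical positions
v = np.full(5, 0.2)                             # physical velocities
print(push_logical(xi, v, 0.1, dxdxi))
```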
An electrostatic Particle-In-Cell code on multi-block structured meshes
Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; ...
2017-09-14
We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
An electrostatic Particle-In-Cell code on multi-block structured meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca
We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
Atmospheric Science Data Center
2016-08-22
MISBR MISR Browse Data: Color browse image of the Ellipsoid product for each camera resampled to 2.2 km resolution.
NASA Astrophysics Data System (ADS)
Jannati, Mojtaba; Valadan Zoej, Mohammad Javad; Mokhtarzade, Mehdi
2018-03-01
This paper presents a novel approach to epipolar resampling of cross-track linear pushbroom imagery using the orbital parameters model (OPM). The backbone of the proposed method relies on modifying the attitude parameters of linear array stereo imagery in such a way as to parallelize the approximate conjugate epipolar lines (ACELs) with the instantaneous baseline (IBL) of the conjugate image points (CIPs). Afterward, a complementary rotation is applied in order to parallelize all the ACELs throughout the stereo imagery. The new estimated attitude parameters are evaluated based on the direction of the IBL and the ACELs. Due to the spatial and temporal variability of the IBL (changes in the column and row numbers of the CIPs, respectively) and the nonparallel nature of the epipolar lines in stereo linear images, polynomials in both the column and row numbers of the CIPs are used to model the new attitude parameters. As the instantaneous positions of the sensors remain fixed, the digital elevation model (DEM) of the area of interest is not required in the resampling process. According to the experimental results obtained from two pairs of SPOT and RapidEye stereo imagery with high elevation relief, the average absolute values of the remaining vertical parallaxes of the CIPs in the normalized images were 0.19 and 0.28 pixels, respectively, which confirms the high accuracy and applicability of the proposed method.
Digital audio watermarking using moment-preserving thresholding
NASA Astrophysics Data System (ADS)
Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong
2007-09-01
The Moment-Preserving Thresholding technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in that the binary values that the MPT produces as a result, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT together with the threshold value are obtained by solving the system of the preservation equations for the first, second, and third moment. Relying on this robustness of the representative values to various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problem of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and, DA/AD conversion.
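The embedding step the abstract outlines, quantizing a per-block scalar feature according to the watermark bit, is essentially quantization index modulation. A bare-bones sketch of that idea is shown below; the moment-preserving thresholding itself, the RSS computation, and the psycho-acoustic scaling are not reproduced, and the feature value and step size are made up for illustration.

```python
import numpy as np

def qim_embed(value, bit, step):
    """Quantize a scalar feature so that even quantizer cells encode bit 0
    and odd cells encode bit 1 (quantization index modulation)."""
    q = np.floor(value / step)
    if int(q) % 2 != bit:
        q += 1                      # move to the nearest cell of matching parity
    return q * step

def qim_detect(value, step):
    """Recover the embedded bit from the parity of the nearest quantizer cell."""
    return int(np.floor(value / step + 0.5)) % 2

feature = 3.73                      # hypothetical per-block RSS value
step = 0.25
marked = qim_embed(feature, 1, step)
print(marked, qim_detect(marked, step))   # -> 3.75 1
```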
Real-Time Data Streaming and Storing Structure for the LHD's Fusion Plasma Experiments
NASA Astrophysics Data System (ADS)
Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Imazu, Setsuo; Nonomura, Miki; Emoto, Masahiko; Yoshida, Masanobu; Iwata, Chie; Ida, Katsumi
2016-02-01
The LHD data acquisition and archiving system, i.e., the LABCOM system, has been fully equipped with high-speed real-time acquisition, streaming, and storage capabilities. To deal with more than 100 MB/s of continuously generated data at each data acquisition (DAQ) node, the DAQ tasks have been implemented as multitasking and multithreaded processes in which shared memory plays the central role in fast, high-volume inter-process data handling. By introducing a 10-second time chunk named a "subshot," endless data streams can be stored as a consecutive series of fixed-length data blocks so that they soon become readable by other processes even while the write process is continuing. Real-time device and environmental monitoring are also implemented in the same way, with further sparse resampling. The central data storage has been separated into two layers to be capable of receiving multiple 100 MB/s inflows in parallel. For the frontend layer, high-speed SSD arrays are used as a GlusterFS distributed filesystem, which can provide up to 2 GB/s throughput. These design optimizations should be informative for implementing next-generation data archiving systems in big physics, such as ITER.
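The subshot idea, cutting an endless stream into fixed-length chunks that become readable as soon as each chunk is closed, can be sketched in a few lines. The file naming, rate, and storage target below are hypothetical; none of LABCOM's shared-memory or GlusterFS machinery is modeled.

```python
import numpy as np

def write_subshots(sample_iter, rate_hz, subshot_sec=10,
                   path_fmt="subshot_{:05d}.npy"):
    """Cut an endless sample stream into fixed-length "subshot" chunks so
    that earlier chunks become readable while acquisition continues."""
    chunk_len = int(rate_hz * subshot_sec)
    buf, n = [], 0
    for sample in sample_iter:
        buf.append(sample)
        if len(buf) == chunk_len:
            np.save(path_fmt.format(n), np.asarray(buf))   # chunk is now readable
            buf, n = [], n + 1
    return n   # number of completed subshots
```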
USDA-ARS?s Scientific Manuscript database
The Hard Red Spring Wheat Uniform Regional Nursery (HRSWURN) was planted for the 86th year in 2016. The nursery contained 26 entries submitted by 8 different scientific or industry breeding programs, and 5 checks (Table 1). Trials were conducted as randomized complete blocks with three replicates ...
Lattice and compact family block designs in forest genetics
E. Bayne Snyder
1966-01-01
One of the principles of experimental design is that replicates be relatively homogeneous. Thus, in forest research a replicate is often assigned to a single crew for planting in a single day on a uniform site. When treatments are numerous, a large area is required per replication, and homogeneity of site is difficult to achieve. In this situation, crop scientists (...
Minyi Zhou; Thomas J. Dean
2004-01-01
As a part of the continuing studies of the Cooperative Research in Sustainable Silviculture and Soil Productivity (CRiSSSP), 24 experimental plots in a loblolly pine (Pinus taeda L.) stand have recently been installed near Natchitoches, LA. The plots were uniformly assigned to 3 blocks based on topography (i.e., up slope, midslope, and down slope)....
ION PRODUCING MECHANISM (CHARGE CUPS)
Brobeck, W.W.
1959-04-21
The problems of confining a charge material in a calutron and uniformly distributing heat to the charge are described. The charge is held in a cup of thermally conductive material removably disposed within the charge chamber of the ion source block. A central thermally conducting stem is incorporated within the cup for conducting heat to the central portion of the charge contained within the cup.
USDA-ARS?s Scientific Manuscript database
The Hard Red Spring Wheat Uniform Regional Nursery (HRSWURN) was planted for the 84th year in 2014. The nursery contained 26 entries submitted by 6 different scientific or industry breeding programs, and 5 checks (Table 1). Trials were conducted as randomized complete blocks with three replicates ex...
Optimizing transformations of stencil operations for parallel cache-based architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassetti, F.; Davis, K.
This paper describes a new technique for optimizing serial and parallel stencil- and stencil-like operations for cache-based architectures. This technique takes advantage of the semantic knowledge implicit in stencil-like computations. The technique is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case. However, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
Wear behavioral study of as cast and 7 hr homogenized Al25Mg2Si2Cu4Ni alloy at constant load
NASA Astrophysics Data System (ADS)
Harlapur, M. D.; Sondur, D. G.; Akkimardi, V. G.; Mallapur, D. G.
2018-04-01
In the current study, the wear behavior of as-cast and 7 hr homogenized Al25Mg2Si2Cu4Ni alloy has been investigated. Microstructure, SEM, and EDS results confirm the presence of different intermetallics and their effects on the wear properties of the Al25Mg2Si2Cu4Ni alloy in the as-cast as well as the aged condition. The main alloying elements, Si, Cu, Mg, and Ni, partly dissolve in the primary α-Al matrix and are partly present in the form of intermetallic phases. The SEM structure of the as-cast alloy shows blocks of Mg2Si distributed at random in the aluminium matrix. Precipitates of Al2Cu in the form of Chinese script are also observed, and the `Q' phase (Al-Si-Cu-Mg) is distributed uniformly in the aluminium matrix. A few coarsened platelets of Ni are seen. In the 7 hr homogenized samples, the blocks of Mg2Si become rounded at the corners, and the platelets of Ni fragment and distribute uniformly in the aluminium matrix. Results show improved volumetric wear resistance and a reduced coefficient of friction after the homogenizing heat treatment.
Wrong-site nerve blocks: A systematic literature review to guide principles for prevention.
Deutsch, Ellen S; Yonash, Robert A; Martin, Donald E; Atkins, Joshua H; Arnold, Theresa V; Hunt, Christina M
2018-05-01
Wrong-site nerve blocks (WSBs) are a significant, though rare, source of perioperative morbidity. WSBs constitute the most common type of perioperative wrong-site procedure reported to the Pennsylvania Patient Safety Authority. This systematic literature review aggregates information about the incidence, patient consequences, and conditions that contribute to WSBs, as well as evidence-based methods to prevent them. A systematic search of English-language publications was performed, using the PRISMA process. Seventy English-language publications were identified. Analysis of four publications reporting on at least 10,000 blocks provides a rate of 0.52 to 5.07 WSB per 10,000 blocks, unilateral blocks, or "at risk" procedures. The most commonly mentioned potential consequence was local anesthetic toxicity. The most commonly mentioned contributory factors were time pressure, personnel factors, and lack of site-mark visibility (including no site mark placed). Components of the block process that were addressed include preoperative nerve-block verification, nerve-block site marking, time-outs, and the healthcare facility's structure and culture of safety. A lack of uniform reporting criteria and divergence in the data and theories presented may reflect the variety of circumstances affecting when and how nerve blocks are performed, as well as the infrequency of a WSB. However, multiple authors suggest three procedural steps that may help to prevent WSBs: (1) verify the nerve-block procedure using multiple sources of information, including the patient; (2) identify the nerve-block site with a visible mark; and (3) perform time-outs immediately prior to injection or instillation of the anesthetic. Hospitals, ambulatory surgical centers, and anesthesiology practices should consider creating site-verification processes with clinician input and support to develop sustainable WSB-prevention practices. Copyright © 2017 Elsevier Inc. All rights reserved.
Paraffin tissue microarrays constructed with a cutting board and cutting board arrayer.
Vogel, Ulrich Felix
2010-05-01
Paraffin tissue microarrays (PTMAs) are blocks of paraffin containing up to 1300 paraffin tissue core biopsies (PTCBs). Normally, these PTCBs are punched from routine paraffin tissue blocks, which contain tissues of differing thicknesses. Therefore, the PTCBs are of different lengths. In consequence, the sections of the deeper portions of the PTMA do not contain all of the desired PTCBs. To overcome this drawback, cutting boards were constructed from panels of plastic with a thickness of 4 mm. Holes were drilled into the plastic and filled completely with at least one PTCB per hole. After being trimmed to a uniform length of 4 mm, these PTCBs were pushed from the cutting board into corresponding holes in a recipient block by means of a plate with steel pins. Up to 1000 sections per PTMA were cut without any significant loss of PTCBs, thereby increasing the efficacy of the PTMA technique.
Charged triblock copolymer self-assembly into charged micelles
NASA Astrophysics Data System (ADS)
Chen, Yingchao; Zhang, Ke; Zhu, Jiahua; Wooley, Karen; Pochan, Darrin; Department of Material Science; Engineering University of Delaware Team; Department of Chemistry Texas A&M University Collaboration
2011-03-01
Micelles were formed through the self-assembly of the amphiphilic block copolymer poly(acrylic acid)-block-poly(methyl acrylate)-block-polystyrene (PAA-PMA-PS). Importantly, the polymer is complexed with diamine molecules in pure THF solution prior to water-titration solvent processing, a critical aspect in the control of the final micelle geometry. The addition of diamine triggers acid-base complexation between the carboxylic acid PAA side chains and the amines. Remarkably uniform spheres were found to form close-packed patterns when forced into dried films and thin, solvated films when an excess of amine was used in the polymer assembly process. Surface properties and structural features of these hexagonally packed spherical micelles with charged coronas have been explored by various characterization methods, including transmission electron microscopy (TEM), cryogenic TEM, zeta-potential analysis, and dynamic light scattering. The formation mechanism of this pattern and the morphology changes in response to external stimuli such as salt will be discussed.
Yang, Bin; Guo, Chen; Chen, Shu; Ma, Junhe; Wang, Jing; Liang, Xiangfeng; Zheng, Lily; Liu, Huizhou
2006-11-23
The effect of acid on the aggregation of the poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide) block copolymer EO(20)PO(70)EO(20) has been investigated by transmission electron microscopy (TEM), a particle size analyzer (PSA), Fourier transform infrared spectroscopy, and fluorescence spectroscopy. The critical micellization temperature for Pluronic P123 in different HCl aqueous solutions increases with increasing acid concentration. Additionally, hydrolytic degradation of the PEO blocks is observed at high acid concentrations and higher temperatures. When the acid concentration is low, TEM and PSA show an increase of the mean micelle diameter and a decrease of the micelle polydispersity at room temperature, which demonstrates extension of the EO corona and a tendency toward uniform micelle size because of charge repulsion. Under strongly acidic conditions, aggregation of micelles through protonated water bridges was observed.
GPU and APU computations of Finite Time Lyapunov Exponent fields
NASA Astrophysics Data System (ADS)
Conti, Christian; Rossinelli, Diego; Koumoutsakos, Petros
2012-03-01
We present GPU and APU accelerated computations of Finite-Time Lyapunov Exponent (FTLE) fields. The calculation of FTLEs is a computationally intensive process, as in order to obtain the sharp ridges associated with the Lagrangian Coherent Structures an extensive resampling of the flow field is required. The computational performance of this resampling is limited by the memory bandwidth of the underlying computer architecture. The present technique harnesses data-parallel execution of many-core architectures and relies on fast and accurate evaluations of moment conserving functions for the mesh to particle interpolations. We demonstrate how the computation of FTLEs can be efficiently performed on a GPU and on an APU through OpenCL and we report over one order of magnitude improvements over multi-threaded executions in FTLE computations of bluff body flows.
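For reference, the quantity being computed can be stated compactly: once the flow map has been obtained by resampling and integrating the velocity field over a horizon T, the FTLE is the logarithm of the largest singular value of the flow-map gradient divided by |T|. The numpy sketch below computes that final step on a uniform seed grid; the GPU/APU kernels, the moment-conserving mesh-particle interpolation, and the integration itself are not reproduced.

```python
import numpy as np

def ftle(flow_map_x, flow_map_y, dx, dy, T):
    """FTLE field from a flow map sampled on a uniform seed grid.

    flow_map_x/flow_map_y hold the final x and y positions of particles
    seeded on a grid with spacings dx (columns) and dy (rows).
    """
    # Gradient of the flow map with respect to the seed coordinates
    dphix_dy, dphix_dx = np.gradient(flow_map_x, dy, dx)
    dphiy_dy, dphiy_dx = np.gradient(flow_map_y, dy, dx)
    # Largest eigenvalue of the Cauchy-Green tensor C = F^T F, per grid point
    a = dphix_dx**2 + dphiy_dx**2
    b = dphix_dx * dphix_dy + dphiy_dx * dphiy_dy
    c = dphix_dy**2 + dphiy_dy**2
    lam_max = 0.5 * (a + c) + np.sqrt(0.25 * (a - c)**2 + b**2)
    return np.log(np.sqrt(lam_max)) / abs(T)
```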
NASA Technical Reports Server (NTRS)
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction methods, and to design algorithms for the evaluation of IUE high dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
Fast Computation of the Two-Point Correlation Function in the Age of Big Data
NASA Astrophysics Data System (ADS)
Pellegrino, Andrew; Timlin, John
2018-01-01
We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
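A minimal pure-numpy version of the estimator such a code typically computes, the Landy-Szalay two-point correlation function from data and random catalogs, is sketched below. It is an O(N²) illustration only; the paper's C shared libraries, HEALPix-based subsampling, and jackknife/bootstrap error machinery are not reproduced.

```python
import numpy as np

def pair_counts(a, b, edges):
    """Brute-force pair counts between point sets a and b in separation bins.
    Builds the full N x M distance matrix, so it is only suitable for small sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.histogram(d.ravel(), bins=edges)[0]

def landy_szalay(data, randoms, edges):
    """Two-point correlation function xi(r) via the Landy-Szalay estimator.
    edges[0] must be > 0 so zero-separation self-pairs fall outside the bins."""
    dd = pair_counts(data, data, edges) / (len(data) * (len(data) - 1))
    rr = pair_counts(randoms, randoms, edges) / (len(randoms) * (len(randoms) - 1))
    dr = pair_counts(data, randoms, edges) / (len(data) * len(randoms))
    return (dd - 2 * dr + rr) / rr

rng = np.random.default_rng(0)
data = rng.uniform(size=(200, 3))
randoms = rng.uniform(size=(400, 3))
edges = np.linspace(0.05, 0.5, 10)
print(landy_szalay(data, randoms, edges))
```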
Spatial resampling of IDR frames for low bitrate video coding with HEVC
NASA Astrophysics Data System (ADS)
Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick
2015-03-01
As the demand for higher quality and higher resolution video increases, many applications fail to meet this demand due to low bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featuring in all video coding standards. Frequent coding of IDR frames is essential for error resilience in order to prevent the occurrence of error propagation. However, as each one consumes a huge portion of the available bitrate, the quality of future coded frames is hindered by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can increase the rate distortion performance by providing a higher and more consistent level of video quality at low bitrates.
More About the Phase-Synchronized Enhancement Method
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
2004-01-01
A report presents further details regarding the subject matter of "Phase-Synchronized Enhancement Method for Engine Diagnostics" (MFS-26435), NASA Tech Briefs, Vol. 22, No. 1 (January 1998), page 54. To recapitulate: The phase-synchronized enhancement method (PSEM) involves the digital resampling of a quasi-periodic signal in synchronism with the instantaneous phase of one of its spectral components. This resampling transforms the quasi-periodic signal into a periodic one more amenable to analysis. It is particularly useful for diagnosis of a rotating machine through analysis of vibration spectra that include components at the fundamental and harmonics of a slightly fluctuating rotation frequency. The report discusses the machinery-signal-analysis problem, outlines the PSEM algorithms, presents the mathematical basis of the PSEM, and presents examples of application of the PSEM in some computational simulations.
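A simplified stand-in for the resampling step PSEM performs is sketched below: the instantaneous phase of the signal is estimated from its analytic signal, and the waveform is interpolated onto uniform phase increments so that each nominal cycle contains the same number of samples. The full method tracks a selected spectral component rather than the raw signal, which this sketch does not do.

```python
import numpy as np
from scipy.signal import hilbert

def phase_synchronized_resample(x, samples_per_cycle=64):
    """Resample a quasi-periodic signal at uniform increments of its
    instantaneous phase (estimated from the analytic signal).

    Assumes the unwrapped phase is monotonically increasing, which holds
    for a reasonably narrowband, quasi-periodic input.
    """
    phase = np.unwrap(np.angle(hilbert(x)))        # instantaneous phase (rad)
    n_cycles = int(np.floor((phase[-1] - phase[0]) / (2 * np.pi)))
    uniform_phase = phase[0] + np.arange(n_cycles * samples_per_cycle) \
        * 2 * np.pi / samples_per_cycle
    # Interpolate the signal onto the uniform-phase grid
    return np.interp(uniform_phase, phase, x)

# Example: a tone with slowly fluctuating frequency becomes strictly periodic
t = np.linspace(0, 1, 8000)
x = np.sin(2 * np.pi * (50 + 3 * np.sin(2 * np.pi * 2 * t)) * t)
print(phase_synchronized_resample(x).shape)
```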
Anstey, A; Taylor, D; Chalmers, I; Ansari, E
1999-10-01
Nine brands of contact lens marketed as "UV protective" were tested for ultraviolet (UV) transmission in order to assess their potential suitability for psoralen-sensitised patients. The UV-transmission characteristics of the hydrated lenses were tested with a Bentham monochromator spectro-radiometer system. All lenses showed minimal transmission loss in the visible band. The performance of the nine lenses was uniform for ultraviolet B radiation, with negligible transmission, but showed variation in transmission for ultraviolet A radiation. None of the lenses complied with the UV-transmission criteria used previously to assess UV-blocking spectacles. Only two lenses had UV-blocking characteristics which came close to the arbitrary criteria used. The performance of ordinary soft and hard lenses was very similar, with negligible blocking of UV radiation. None of the nine contact lenses marketed as "UV protective" excluded sufficient UVA to comply with the criteria in current use to assess UV protection in spectacles for psoralen-sensitised patients. However, the improved UV-blocking characteristics of the contact lenses identified in this paper compared to previous studies suggest that such a contact lens will soon become available. Meanwhile, contact-lens-wearing systemically sensitised PUVA patients should continue to wear approved spectacles for eye protection whilst photosensitised with psoralen.
How Crossover Speeds up Building Block Assembly in Genetic Algorithms.
Sudholt, Dirk
2017-01-01
We reinvestigate a fundamental question: How effective is crossover in genetic algorithms at combining building blocks of good solutions? Although this has been discussed controversially for decades, we are still lacking a rigorous and intuitive answer. We provide such answers for royal road functions and OneMax, where every bit is a building block. For the latter, we show that using crossover makes every (μ+λ) genetic algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate μ and λ. Crossover is beneficial because it can capitalize on mutations that have both beneficial and disruptive effects on building blocks: crossover is able to repair the disruptive effects of mutation in later generations. Compared to mutation-based evolutionary algorithms, this makes multibit mutations more useful. Introducing crossover changes the optimal mutation rate on OneMax from [Formula: see text] to [Formula: see text]. This holds both for uniform crossover and k-point crossover. Experiments and statistical tests confirm that our findings apply to a broad class of building block functions.
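The uniform crossover operator the abstract refers to is easy to state in code: each offspring bit is copied from either parent with probability 1/2, so one-bits (the building blocks of OneMax) contributed by either parent can be combined. The sketch below shows only the operator on random bit strings, not the (μ+λ) GA or the runtime analysis.

```python
import numpy as np

def uniform_crossover(parent_a, parent_b, rng):
    """Uniform crossover: each bit is taken from either parent with
    probability 1/2, letting the offspring combine one-bits from both."""
    mask = rng.random(len(parent_a)) < 0.5
    return np.where(mask, parent_a, parent_b)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 20)
b = rng.integers(0, 2, 20)
child = uniform_crossover(a, b, rng)
print(a.sum(), b.sum(), child.sum())   # OneMax fitness of parents and child
```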
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanderlaan, Marie E.; Hillmyer, Marc A.
We report the facile synthesis of well-defined ABA poly(lactide)-block-poly(styrene)-block-poly(lactide) (LSL) triblock copolymers having a disperse poly(styrene) midblock (Ð = 1.27–2.24). The direct synthesis of telechelic α,ω-hydroxypoly(styrene) (HO-PS-OH) midblocks was achieved using a commercially available difunctional free-radical diazo initiator, 2,2'-azobis[2-methyl-N-(2-hydroxyethyl)propionamide]. Poly(lactide) (PLA) end blocks were subsequently grown from the HO-PS-OH macroinitiators via ring-opening transesterification polymerization of (±)-lactide using the most common and prevalent catalyst system available, tin(II) 2-ethylhexanoate. Fourteen LSL triblock copolymers with total molar masses Mn,total = 24–181 kg/mol and PLA volume fractions fPLA = 0.15–0.68 were synthesized and thoroughly characterized. The self-assembly of symmetric triblocks was analyzed in the bulk using small-angle X-ray scattering and in thin films using grazing-incidence small-angle X-ray scattering and atomic force microscopy. We demonstrate that both the bulk and thin-film self-assembly of the disperse LSL triblocks gave well-organized nanostructures with uniform domain sizes suitable for nanopatterning applications.
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
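The core correction underlying both proposed methods, resampling records with weights inversely proportional to their selection probabilities so that the case-control sample again resembles the population, can be sketched as below. This is a plain inverse-probability bootstrap for illustration only; the stochastic-oversampling noise term and the parametric-bagging variant from the paper are not implemented.

```python
import numpy as np

def inverse_probability_resample(X, y, sampling_prob, rng=None):
    """Resample a stratified (case-control) sample with weights proportional
    to the inverse of each record's selection probability, so the resampled
    set approximates the population distribution.

    X: feature array, y: labels, sampling_prob: per-record selection
    probability from the study design.
    """
    rng = np.random.default_rng(rng)
    w = 1.0 / np.asarray(sampling_prob, dtype=float)
    w = w / w.sum()
    idx = rng.choice(len(y), size=len(y), replace=True, p=w)
    return X[idx], y[idx]
```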
Rapamycin regulates autophagy and cell adhesion in induced pluripotent stem cells.
Sotthibundhu, Areechun; McDonagh, Katya; von Kriegsheim, Alexander; Garcia-Munoz, Amaya; Klawiter, Agnieszka; Thompson, Kerry; Chauhan, Kapil Dev; Krawczyk, Janusz; McInerney, Veronica; Dockery, Peter; Devine, Michael J; Kunath, Tilo; Barry, Frank; O'Brien, Timothy; Shen, Sanbing
2016-11-15
Cellular reprogramming is a stressful process, which requires cells to engulf somatic features and produce and maintain stemness machineries. Autophagy is a process to degrade unwanted proteins and is required for the derivation of induced pluripotent stem cells (iPSCs). However, the role of autophagy during iPSC maintenance remains undefined. Human iPSCs were investigated by microscopy, immunofluorescence, and immunoblotting to detect autophagy machinery. Cells were treated with rapamycin to activate autophagy and with bafilomycin to block autophagy during iPSC maintenance. High concentrations of rapamycin treatment unexpectedly resulted in spontaneous formation of round floating spheres of uniform size, which were analyzed for differentiation into three germ layers. Mass spectrometry was deployed to reveal altered protein expression and pathways associated with rapamycin treatment. We demonstrate that human iPSCs express high basal levels of autophagy, including key components of APMKα, ULK1/2, BECLIN-1, ATG13, ATG101, ATG12, ATG3, ATG5, and LC3B. Block of autophagy by bafilomycin induces iPSC death and rapamycin attenuates the bafilomycin effect. Rapamycin treatment upregulates autophagy in iPSCs in a dose/time-dependent manner. High concentration of rapamycin reduces NANOG expression and induces spontaneous formation of round and uniformly sized embryoid bodies (EBs) with accelerated differentiation into three germ layers. Mass spectrometry analysis identifies actin cytoskeleton and adherens junctions as the major targets of rapamycin in mediating iPSC detachment and differentiation. High levels of basal autophagy activity are present during iPSC derivation and maintenance. Rapamycin alters expression of actin cytoskeleton and adherens junctions, induces uniform EB formation, and accelerates differentiation. IPSCs are sensitive to enzyme dissociation and require a lengthy differentiation time. The shape and size of EBs also play a role in the heterogeneity of end cell products. This research therefore highlights the potential of rapamycin in producing uniform EBs and in shortening iPSC differentiation duration.
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors, as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuities. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristics of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains the mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
High quality tissue miniarray technique using a conventional TV/radio telescopic antenna.
Elkablawy, Mohamed A; Albasri, Abdulkader M
2015-01-01
The tissue microarray (TMA) is widely accepted as a fast and cost-effective research tool for in situ tissue analysis in modern pathology. However, the current automated and manual TMA techniques have some drawbacks restricting their productivity. Our study aimed to introduce an improved manual tissue miniarray (TmA) technique that is simple and readily applicable to a broad range of tissue samples. In this study, a conventional TV/radio telescopic antenna was used to punch tissue cores manually from donor paraffin-embedded tissue blocks which were pre-incubated at 40 °C. The cores were manually transferred, organized and attached to a standard block mould, and filled with liquid paraffin to construct TmA blocks without any use of recipient paraffin blocks. By using a conventional TV/radio antenna, it was possible to construct TmA paraffin blocks with variable formats of array size and number (2-mm x 42, 2.5-mm x 30, 3-mm x 24, 4-mm x 20 and 5-mm x 12 cores). Up to 2-mm x 84 cores could be mounted and stained on a standard microscopic slide by cutting two sections from two different blocks and mounting them beside each other. The technique was simple and caused minimal damage to the donor blocks. H and E and immunostained slides showed well-defined tissue morphology and array configuration. This technique is easy to reproduce, quick, inexpensive and creates uniform blocks with abundant tissues without specialized equipment. It was found to improve the stability of the cores within the paraffin block and resulted in no losses during cutting and immunostaining.
Rapid Ordering in "Wet Brush" Block Copolymer/Homopolymer Ternary Blends.
Doerk, Gregory S; Yager, Kevin G
2017-12-26
The ubiquitous presence of thermodynamically unfavored but kinetically trapped topological defects in nanopatterns formed via self-assembly of block copolymer thin films may prevent their use for many envisioned applications. Here, we demonstrate that lamellae patterns formed by symmetric polystyrene-block-poly(methyl methacrylate) diblock copolymers self-assemble and order extremely rapidly when the diblock copolymers are blended with low molecular weight homopolymers of the constituent blocks. Being in the "wet brush" regime, the homopolymers uniformly distribute within their respective self-assembled microdomains, preventing increases in domain widths. An order-of-magnitude increase in topological grain size in blends over the neat (unblended) diblock copolymer is achieved within minutes of thermal annealing as a result of the significantly higher power law exponent for ordering kinetics in the blends. Moreover, the blends are demonstrated to be capable of rapid and robust domain alignment within micrometer-scale trenches, in contrast to the corresponding neat diblock copolymer. These results can be attributed to the lowering of energy barriers associated with domain boundaries by bringing the system closer to an order-disorder transition through low molecular weight homopolymer blending.
NASA Astrophysics Data System (ADS)
Nigro, Fabrizio; Renda, Pietro; Favara, Rocco
2010-05-01
In young mountain chains undergoing emersion, the different crustal blocks that compose the belt may be subject to differential tilting during uplift. The tilting process may be revealed by the stratal pattern of the syn-uplift deposits or deduced from the altitude/area (hypsometric) function; whether the uplift rate prevails over the tilting rate, or vice versa, can be inferred from the shape of this function. In young mountains, hypsometric analysis is therefore a useful tool for deciphering how crustal blocks have been uplifted. An integrated analysis based on stratigraphic, structural and morphometric data is the appropriate approach for characterising landform evolution in regions subject to active tectonics. In order to evaluate the recent tectonic history from topography in actively deforming regions, by deducing the effect of tectonics on landforms, the definition of the boundary conditions regarding crustal deformation is fundamental for morphometric analysis. In fact, the morphologic style and the morphometric pattern in tectonically active settings are closely related to whether uplift of rock masses (or failure by subsidence) dominates over exogenous erosional processes. Collisional geodynamic processes induce crustal growth by faulting and folding. In these sectors of the Earth, uplift of crustal blocks is a very common effect of compressional deformation; it reflects, for example, fold amplification and thrusting, but it is also common in settings dominated by crustal thinning, where the viscoelastic properties of the lithosphere induce tilting and localised uplift of normal-faulted crustal blocks. The uplift rate is rarely uniform over wide areas within orogens or on passive margins; it changes between adjacent crustal blocks as an effect of spatial variation in kinematic conditions or density, and it may also change within a single block as an effect of tilting, which induces simultaneous mass elevation and subsidence. Setting aside sea-level fluctuations and climatic-lithologic parameters, the 2D distribution of uplift rate controls the evolution of the landmass in time. The tendency of rock masses toward equilibrium between concurrent tectonic building and denudation defines the geomorphic cycle. This evolution passes through different stages, each characterised by a well-recognisable morphometric pattern. The dominance of uplift or erosion, with concurrent block tilting, induces a characteristic landform evolution that may be evaluated with morphometric analysis. Many morphometric functions describe the equilibrium stage of landmasses, providing useful tools for deciphering how tectonics acts (e.g. inducing uplift uniformly or with crustal block tilting) and its effects on landforms (magnitude of uplift rate versus tilting rate). We aim to contribute to the description of landform evolution in Sicily (Central Mediterranean) under different morphoevolutive settings, in which uplift, tilting or erosion may prevail, each characterised by different morphometric trends. The present-day elevation of Pliocene to upper Pleistocene deposits suggests that Northern Sicily underwent neotectonic uplift. The recent non-uniform uplift of the Northern Sicily coastal sector is indicated by the different elevations of the Pliocene-Upper Pleistocene marine deposits; the maximum uplift rate characterises NE Sicily and the minimum NW Sicily.
The overall westward-decreasing trend of uplift is locally interrupted in sectors hosting numerous morphostructures. Localised uplift rates higher than those of the adjacent coastal plains are suggested by the present-day elevation of the beach-shore deposits of Tyrrhenian age. Northern Sicily may thus be divided into a number of crustal blocks that have undergone different tilting and uplift rates. Pronounced tilting and uplift result from transtensional active faulting of the already emplaced chain units, as also suggested by seismicity and by the focal-plane solutions of recent strong earthquakes.
Kou, Xiaoxi; Li, Rui; Hou, Lixia; Zhang, Lihui; Wang, Shaojin
2018-03-23
Radio frequency (RF) heating has been successfully used for inactivating microorganisms in agricultural and food products. Athermal (non-thermal) effects of RF energy on microorganisms have frequently been proposed in the literature, causing difficulties in developing effective thermal treatment protocols. The purpose of this study was to identify whether athermal inactivation of microorganisms occurs during RF treatments. Escherichia coli and Staphylococcus aureus in apple juice and mashed potato were exposed to both RF and conventional thermal energy to compare the resulting inactivation. A thermal death time (TDT) heating block system was used as the conventional thermal energy source to reproduce the heating conditions, including heating temperature, heating rate and uniformity, of an RF treatment at a frequency of 27.12 MHz. Results showed that a similar and uniform temperature distribution in the tested samples was achieved in both heating systems, so that the central sample temperature could be used as a representative value for evaluating thermal inactivation of microorganisms. The survival patterns of the two target microorganisms in the two food samples were similar for both RF and heating block treatments, since the absolute difference in survival populations was <1 log CFU/ml. The statistical analysis indicated no significant difference (P > 0.05) in bacterial inactivation between the RF and heating block treatments at each set temperature. The solid temperature and microbial inactivation data demonstrated that only a thermal effect of RF energy at 27.12 MHz was observed on inactivating microorganisms in foods. Copyright © 2018 Elsevier B.V. All rights reserved.
Transport toward a well in highly heterogeneous aquifer
NASA Astrophysics Data System (ADS)
Di Dato, Mariaines; de Barros, Felipe P. J.; Bellin, Alberto; Fiori, Aldo
2017-04-01
Solute transport toward a well is a challenging subject in subsurface hydrology, since the complexity of the mathematical model is greatly increased by the non-uniformity of the mean flow and the heterogeneity of the formation. To date, analytical solutions for such flow configurations are limited to weakly heterogeneous conditions. On the other hand, numerical simulations in 3D highly heterogeneous formations are computationally expensive and plagued by numerical errors. In this work we propose an analytical solution for the breakthrough curve (BTC) at the well for an instantaneous linear injection across the aquifer's thickness, for any degree of heterogeneity of the porous medium. Our solution makes use of the Multi Indicator Model-Self Consistent Approximation (MIMSCA), by which the aquifer is conceptualized as an ensemble of blocks of constant hydraulic conductivity K randomly drawn from a lognormal distribution. In order to apply MIMSCA, we assume the flow is locally uniform, given that K is uniform within each block. With this approximation, the travel time to the well equals the superposition of the times spent by the solute particle within each block. We emphasize that, despite the approximations introduced, the model is able to reproduce the laboratory experiment of [1] without the need to fit any transport parameters. In this work, we present results for two different injection modes: a resident injection (e.g., residual DNAPL) and a flux-proportional injection (e.g., leakage from a passive well). The proposed methodology allows quantification of the BTC at the well as a function of a few parameters, such as the injection mode and the statistical structure of the aquifer (geometric mean, variance and integral scale of the hydraulic conductivity field). Results illustrate that the release condition has a strong impact on the shape of the BTC. Furthermore, the difference between injection modes increases with the heterogeneity of the K-field. The importance of both the injection mode and the degree of heterogeneity is also elucidated for the early and late solute arrival times at the well. Finally, we show that travel times become ergodic only for very thick aquifers, even in the case of mild heterogeneity. We emphasize that the present framework has practical validity, giving an affordable, although approximate, first estimate of mass arrival at an extraction well. References [1] Fernàndez-Garcia, D., T. H. Illangasekare, and H. Rajaram (2004), Conservative and sorptive forced-gradient and uniform flow tracer tests in a three-dimensional laboratory test aquifer, Water Resour. Res., 40, W10103, doi:10.1029/2004WR003112.
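A crude way to see how the block-wise travel times build the breakthrough curve is sketched below. This is not the authors' MIMSCA solution; it is a toy Monte Carlo in which the number of blocks, block size, head gradient, porosity, and log-conductivity variance are assumed, illustrative values, and flow is taken as locally uniform within each block as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed, purely illustrative parameters (not taken from the paper)
n_particles = 20000      # solute particles released at the injection line
n_blocks    = 50         # constant-K blocks crossed on the way to the well
block_len   = 2.0        # block size along the trajectory [m]
sigma2_lnK  = 2.0        # variance of ln K (strongly heterogeneous)
K_g         = 1e-4       # geometric-mean conductivity [m/s]
gradient    = 0.01       # mean head gradient toward the well [-]
porosity    = 0.3

# travel time = sum over blocks of (block length) / (local seepage velocity),
# with K drawn independently per block from a lognormal distribution
lnK = rng.normal(np.log(K_g), np.sqrt(sigma2_lnK), size=(n_particles, n_blocks))
velocity = np.exp(lnK) * gradient / porosity          # locally uniform flow in each block
arrival = (block_len / velocity).sum(axis=1)          # arrival time of each particle [s]

# breakthrough curve at the well: cumulative mass recovered versus time
t_sorted = np.sort(arrival)
btc = np.arange(1, n_particles + 1) / n_particles
print("median arrival: %.2e s, 95th-percentile arrival: %.2e s"
      % (np.quantile(arrival, 0.5), np.quantile(arrival, 0.95)))
```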
46 CFR 160.010-7 - Methods of sampling, inspections and tests.
Code of Federal Regulations, 2014 CFR
2014-10-01
... must be no damage that would render the apparatus unserviceable. (2) Beam loading test. The buoyant... with its center on the center of the wood block. The loading beam must be hinged at one end and a load applied at the other end at a uniform rate of 225 kg (500 lb.) per minute until the load at the end of the...
46 CFR 160.010-7 - Methods of sampling, inspections and tests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... must be no damage that would render the apparatus unserviceable. (2) Beam loading test. The buoyant... with its center on the center of the wood block. The loading beam must be hinged at one end and a load applied at the other end at a uniform rate of 225 kg (500 lb.) per minute until the load at the end of the...
46 CFR 160.010-7 - Methods of sampling, inspections and tests.
Code of Federal Regulations, 2011 CFR
2011-10-01
... must be no damage that would render the apparatus unserviceable. (2) Beam loading test. The buoyant... with its center on the center of the wood block. The loading beam must be hinged at one end and a load applied at the other end at a uniform rate of 225 kg (500 lb.) per minute until the load at the end of the...
46 CFR 160.010-7 - Methods of sampling, inspections and tests.
Code of Federal Regulations, 2012 CFR
2012-10-01
... must be no damage that would render the apparatus unserviceable. (2) Beam loading test. The buoyant... with its center on the center of the wood block. The loading beam must be hinged at one end and a load applied at the other end at a uniform rate of 225 kg (500 lb.) per minute until the load at the end of the...
46 CFR 160.010-7 - Methods of sampling, inspections and tests.
Code of Federal Regulations, 2013 CFR
2013-10-01
... must be no damage that would render the apparatus unserviceable. (2) Beam loading test. The buoyant... with its center on the center of the wood block. The loading beam must be hinged at one end and a load applied at the other end at a uniform rate of 225 kg (500 lb.) per minute until the load at the end of the...
Final Data Usability Summary and Resampling Proposal for Fort Sheridan
1996-03-22
performed. The basic approach discussed here was determined in discussions between Fort Sheridan, the EPA, Illinois EPA, the Army Environmental Center, and its RI consultant, Environmental Science and Engineering, Inc.
Immersive volume rendering of blood vessels
NASA Astrophysics Data System (ADS)
Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.
2012-03-01
In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, wireframe surface to give structure, and utilize the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians, by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena, and can be a great help to medical experts for treatment planning.
Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season
NASA Technical Reports Server (NTRS)
Blonski, Slawomir
2006-01-01
This presentation focuses on spatial resolution characterization for QuickBird panchromatic images in 2003-2004 and presents data measurements and analysis of SSC edge target deployment and edge response extraction and modeling. The results of the characterization are shown as values of the Modulation Transfer Function (MTF) at the Nyquist spatial frequency and as the Relative Edge Response (RER) components. The results show that RER is much less sensitive to the accuracy of the curve fitting than the value of MTF at the Nyquist frequency. Therefore, the RER/edge response slope is a more robust estimator of digital image spatial resolution than the MTF. For the QuickBird panchromatic images, the RER is consistently equal to 0.5 for images processed with Cubic Convolution resampling and to 0.8 for MTF resampling.
Voice Conversion Using Pitch Shifting Algorithm by Time Stretching with PSOLA and Re-Sampling
NASA Astrophysics Data System (ADS)
Mousa, Allam
2010-01-01
Voice conversion has many applications in industry and commerce. This paper emphasizes voice conversion using a pitch-shifting method that depends on detecting the pitch of the signal (fundamental frequency) using Simplified Inverse Filter Tracking (SIFT), changing it according to the target pitch period using time stretching with the Pitch Synchronous Overlap-Add (PSOLA) algorithm, and then resampling the signal in order to have the same play rate. The same study was performed to see the effect of voice conversion when Arabic speech signals are considered. Treatment of certain Arabic voiced vowels and conversion between male and female speech showed some expansion or compression in the resulting speech. A comparison in terms of pitch shifting is presented here. Analysis was performed for a single frame and for a full segmentation of speech.
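The final re-sampling step, which restores the original play rate after the PSOLA time stretching, can be written in a few lines. The snippet below is a generic linear-interpolation resampler with an assumed stretch factor, not the authors' implementation; a real pitch shifter would apply it to the PSOLA-stretched frames.

```python
import numpy as np

def resample(signal, factor):
    """Resample a 1-D signal by `factor` using linear interpolation.
    factor > 1 shortens the signal, raising the pitch at a fixed play rate."""
    n_out = int(round(len(signal) / factor))
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_idx, old_idx, signal)

# toy example: a 200 Hz tone standing in for a PSOLA time-stretched frame,
# resampled by the same factor so the overall duration is unchanged
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
stretched = np.sin(2 * np.pi * 200 * t)
shifted = resample(stretched, 1.25)       # pitch raised by a factor of 1.25
print(len(stretched), "->", len(shifted), "samples")
```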
Warton, David I; Thibaut, Loïc; Wang, Yi Alice
2017-01-01
Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)-common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods.
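A minimal sketch of this idea for univariate Poisson regression is given below, assuming NumPy, SciPy, and statsmodels are available. It computes randomized PIT residuals from the fitted means, bootstraps them, and maps them back through the inverse Poisson CDF at the fitted means; the simulated data, the single covariate, and the univariate (rather than row-wise multivariate) resampling are assumptions for brevity, so this is an interpretation of the general recipe rather than the authors' reference implementation.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)

# simulated Poisson regression data (illustrative only)
n = 200
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 0.8 * x))

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.fittedvalues

# randomized PIT residuals: uniform on [F(y-1; mu_hat), F(y; mu_hat)]
lo = stats.poisson.cdf(y - 1, mu_hat)
hi = stats.poisson.cdf(y, mu_hat)
u = lo + rng.uniform(size=n) * (hi - lo)

def pit_trap_once():
    """One PIT-trap resample: bootstrap the PIT residuals, invert at the fitted means."""
    u_star = rng.choice(u, size=n, replace=True)
    y_star = stats.poisson.ppf(u_star, mu_hat)     # marginal distribution is preserved
    refit = sm.GLM(y_star, X, family=sm.families.Poisson()).fit()
    return refit.params[1]                         # bootstrapped slope

boot_slopes = np.array([pit_trap_once() for _ in range(500)])
print("slope %.3f, bootstrap SE %.3f" % (fit.params[1], boot_slopes.std(ddof=1)))
```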
Mir, Taskia; Dirks, Peter; Mason, Warren P; Bernstein, Mark
2014-10-01
This is a qualitative study designed to examine patient acceptability of re-sampling surgery for glioblastoma multiforme (GBM) electively post-therapy or at asymptomatic relapse. Thirty patients were selected using the convenience sampling method and interviewed. Patients were presented with hypothetical scenarios including a scenario in which the surgery was offered to them routinely and a scenario in which the surgery was in a clinical trial. The results of the study suggest that about two thirds of the patients offered the surgery on a routine basis would be interested, and half of the patients would agree to the surgery as part of a clinical trial. Several overarching themes emerged, some of which include: patients expressed ethical concerns about offering financial incentives or compensation to the patients or surgeons involved in the study; patients were concerned about appropriate communication and full disclosure about the procedures involved, the legalities of tumor ownership and the use of the tumor post-surgery; patients may feel alone or vulnerable when they are approached about the surgery; patients and their families expressed immense trust in their surgeon and indicated that this trust is a major determinant of their agreeing to surgery. The overall positive response to re-sampling surgery suggests that this procedure, if designed with all the ethical concerns attended to, would be welcomed by most patients. This approach of asking patients beforehand if a treatment innovation is acceptable would appear to be more practical and ethically desirable than previous practice.
NASA Astrophysics Data System (ADS)
Wang, Jinliang; Wu, Xuejiao
2010-11-01
Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly affects the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model used, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors exceeded one pixel, and in some cases several pixels, when the polynomial model was used; the correction accuracy was not stable when the Delaunay model was used; the correction errors were less than one pixel when the collinearity equation was used. (2) When 6, 9, 25 and 35 GCPs were selected randomly for geometric correction with the polynomial model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest-neighbor resampling gave the best contrast and the fastest resampling, but poor continuity of pixel gray values; cubic convolution gave the worst contrast and the longest computation time. According to the above results, bilinear resampling gave the best overall result.
Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K
2018-03-01
The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different values of x and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
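The resampling step can be illustrated with a generic sketch; the cohort below is made-up normally distributed sensitivity data, not the authors' HFA measurements. For a given set size x it draws bootstrap samples, averages the 5th/95th percentiles and SD across resamples, and compares them with the full-cohort ("ground truth") values.

```python
import numpy as np

rng = np.random.default_rng(2)

# stand-in for a full normative cohort of 500 sensitivities (dB); assumed values
cohort = rng.normal(30.0, 2.5, size=500)
truth = (np.percentile(cohort, 5), np.percentile(cohort, 95), cohort.std(ddof=1))

def bootstrap_limits(set_size, n_resamples=1000):
    """Average 5th/95th percentiles and SD over bootstrap samples of a given set size."""
    out = np.empty((n_resamples, 3))
    for i in range(n_resamples):
        sample = rng.choice(cohort, size=set_size, replace=True)
        out[i] = (np.percentile(sample, 5), np.percentile(sample, 95), sample.std(ddof=1))
    return out.mean(axis=0)

for x in (30, 60, 150, 250):
    p5, p95, sd = bootstrap_limits(x)
    print("x=%3d  5th %.2f (truth %.2f)  95th %.2f (truth %.2f)  SD %.2f (truth %.2f)"
          % (x, p5, truth[0], p95, truth[1], sd, truth[2]))
```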
Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.
Chung, SungWon; Lu, Ying; Henry, Roland G
2006-11-01
Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
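As an illustration of the residual-bootstrap idea in a diffusion setting (not the authors' DTI code), the sketch below fits a log-linear signal decay along a single diffusion direction by least squares, resamples the regression residuals, and re-estimates the apparent diffusion coefficient; the b-values, noise level, and single-direction simplification are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# assumed single-direction acquisition: b-values and a true diffusivity
bvals = np.array([0., 250., 500., 750., 1000.])       # s/mm^2
S0, D_true = 1000.0, 0.8e-3                            # signal at b=0, diffusivity [mm^2/s]
signal = S0 * np.exp(-bvals * D_true) * (1 + 0.02 * rng.normal(size=bvals.size))

# linear model of the log signal: log(S) = log(S0) - b * D
X = np.column_stack([np.ones_like(bvals), -bvals])
y = np.log(signal)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

def residual_bootstrap_once():
    """Model-based resampling: resample residuals with replacement and refit."""
    y_star = X @ beta + rng.choice(resid, size=resid.size, replace=True)
    b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return b_star[1]                                   # bootstrapped diffusivity

boot_D = np.array([residual_bootstrap_once() for _ in range(2000)])
print("D = %.2e mm^2/s, bootstrap SE = %.2e" % (beta[1], boot_D.std(ddof=1)))
```

In practice the residuals would be standardized and a weighted fit used to respect the non-constant variance of the log-transformed signal, but the resampling loop has the same shape.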
Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field
NASA Astrophysics Data System (ADS)
Constable, C.; Johnson, C. L.
2009-05-01
We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the first, temporal-sampling, question we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the TAF and its PSV. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and that resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full-vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, doi:10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
Present-day velocity field and block kinematics of Tibetan Plateau from GPS measurements
NASA Astrophysics Data System (ADS)
Wang, Wei; Qiao, Xuejun; Yang, Shaomin; Wang, Dijin
2017-02-01
In this study, we present a new synthesis of GPS velocities for tectonic deformation within the Tibetan Plateau and its surrounding areas, a combined data set of ~1854 GPS-derived horizontal velocity vectors. Assuming that crustal deformation is localized along major faults, a block modelling approach is employed to interpret the GPS velocity field. We construct a 30-element block model to describe present-day deformation in western China, with half of the blocks located within the Tibetan Plateau and the remainder in its surrounding areas. We model the GPS velocities simultaneously for the effects of block rotations and elastic strain induced by the bounding faults. Our model yields a good fit to the GPS data with a mean residual of 1.08 mm a⁻¹ compared to the mean uncertainty of 1.36 mm a⁻¹ for each velocity component, indicating a good agreement between the predicted and observed velocities. The major strike-slip faults such as the Altyn Tagh, Xianshuihe, Kunlun and Haiyuan faults have relatively uniform slip rates in a range of 5-12 mm a⁻¹ along most of their segments, and the estimated fault slip rates agree well with previous geologic and geodetic results. Blocks having significant residuals are located at the southern and southeastern Tibetan Plateau, suggesting complex tectonic settings and the need for further refinement of the block geometry definition in these regions.
Controlling the Pore Size of Mesoporous Carbon Thin Films through Thermal and Solvent Annealing.
Zhou, Zhengping; Liu, Guoliang
2017-04-01
Herein an approach to controlling the pore size of mesoporous carbon thin films from metal-free polyacrylonitrile-containing block copolymers is described. A high-molecular-weight poly(acrylonitrile-block-methyl methacrylate) (PAN-b-PMMA) is synthesized via reversible addition-fragmentation chain transfer (RAFT) polymerization. The authors systematically investigate the self-assembly behavior of PAN-b-PMMA thin films during thermal and solvent annealing, as well as the pore size of mesoporous carbon thin films after pyrolysis. The as-spin-coated PAN-b-PMMA is microphase-separated into uniformly spaced globular nanostructures, and these globular nanostructures evolve into various morphologies after thermal or solvent annealing. Surprisingly, through thermal annealing and subsequent pyrolysis of PAN-b-PMMA into mesoporous carbon thin films, the pore size and center-to-center spacing increase significantly with thermal annealing temperature, different from most block copolymers. In addition, the choice of solvent in solvent annealing strongly influences the block copolymer nanostructure and the pore size of mesoporous carbon thin films. The discoveries herein provide a simple strategy to control the pore size of mesoporous carbon thin films by tuning thermal or solvent annealing conditions, instead of synthesizing a series of block copolymers of various molecular weights and compositions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheuermann, James R., E-mail: James.Scheuermann@stonybrook.edu; Goldan, Amir H.; Zhao, Wei
Purpose: Active matrix flat panel imagers (AMFPI) have limited performance in low dose applications due to the electronic noise of the thin film transistor (TFT) array. A uniform layer of avalanche amorphous selenium (a-Se) called high gain avalanche rushing photoconductor (HARP) allows for signal amplification prior to readout from the TFT array, largely eliminating the effects of the electronic noise. The authors report preliminary avalanche gain measurements from the first HARP structure developed for direct deposition onto a TFT array. Methods: The HARP structure is fabricated on a glass substrate in the form of p-i-n, i.e., the electron blocking layer (p) followed by an intrinsic (i) a-Se layer and finally the hole blocking layer (n). All deposition procedures are scalable to large area detectors. Integrated charge is measured from pulsed optical excitation incident on the top electrode (as would in an indirect AMFPI) under continuous high voltage bias. Avalanche gain measurements were obtained from samples fabricated simultaneously at different locations in the evaporator to evaluate performance uniformity across large area. Results: An avalanche gain of up to 80 was obtained, which showed field dependence consistent with previous measurements from n-i-p HARP structures established for vacuum tubes. Measurements from multiple samples demonstrate the spatial uniformity of performance using large area deposition methods. Finally, the results were highly reproducible during the time course of the entire study. Conclusions: We present promising avalanche gain measurement results from a novel HARP structure that can be deposited onto a TFT array. This is a crucial step toward the practical feasibility of AMFPI with avalanche gain, enabling quantum noise limited performance down to a single x-ray photon per pixel.
Development of solid-state avalanche amorphous selenium for medical imaging.
Scheuermann, James R; Goldan, Amir H; Tousignant, Olivier; Léveillé, Sébastien; Zhao, Wei
2015-03-01
Active matrix flat panel imagers (AMFPI) have limited performance in low dose applications due to the electronic noise of the thin film transistor (TFT) array. A uniform layer of avalanche amorphous selenium (a-Se) called high gain avalanche rushing photoconductor (HARP) allows for signal amplification prior to readout from the TFT array, largely eliminating the effects of the electronic noise. The authors report preliminary avalanche gain measurements from the first HARP structure developed for direct deposition onto a TFT array. The HARP structure is fabricated on a glass substrate in the form of p-i-n, i.e., the electron blocking layer (p) followed by an intrinsic (i) a-Se layer and finally the hole blocking layer (n). All deposition procedures are scalable to large area detectors. Integrated charge is measured from pulsed optical excitation incident on the top electrode (as would in an indirect AMFPI) under continuous high voltage bias. Avalanche gain measurements were obtained from samples fabricated simultaneously at different locations in the evaporator to evaluate performance uniformity across large area. An avalanche gain of up to 80 was obtained, which showed field dependence consistent with previous measurements from n-i-p HARP structures established for vacuum tubes. Measurements from multiple samples demonstrate the spatial uniformity of performance using large area deposition methods. Finally, the results were highly reproducible during the time course of the entire study. We present promising avalanche gain measurement results from a novel HARP structure that can be deposited onto a TFT array. This is a crucial step toward the practical feasibility of AMFPI with avalanche gain, enabling quantum noise limited performance down to a single x-ray photon per pixel.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) (density of mercury equals 13.595 grams per cubic centimeter). 1.9 Thermocouple means a device consisting of... Cleaning Operations per Year T—Temperature t—Time V—Volume of Gas Consumed W—Weight of Test Block 2. Test.... 2.9.2 Gas Measurements. 2.9.2.1 Positive displacement meters. The gas meter to be used for measuring...
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
Methods to achieve accurate projection of regional and global raster databases
Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.
2002-01-01
This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
Speckle reduction in digital holography with resampling ring masks
NASA Astrophysics Data System (ADS)
Zhang, Wenhui; Cao, Liangcai; Jin, Guofan
2018-01-01
One-shot digital holographic imaging has the advantages of high stability and low temporal cost. However, the reconstruction is affected by speckle noise. A resampling ring-mask method in the spectrum domain is proposed for speckle reduction. The useful spectrum of one hologram is divided into several sub-spectra by ring masks. In the reconstruction, the angular spectrum transform, which involves no approximation, is applied to guarantee calculation accuracy. N reconstructed amplitude images are calculated from the corresponding sub-spectra. Thanks to the random distribution of speckle, superimposing these N uncorrelated amplitude images leads to a final reconstructed image with lower speckle noise. Normalized relative standard deviation values of the reconstructed image are used to evaluate the speckle reduction. The effect of the method on the spatial resolution of the reconstructed image is also quantitatively evaluated. Experimental and simulation results prove the feasibility and effectiveness of the proposed method.
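The core operation, splitting a reconstruction's spectrum into concentric ring masks and superimposing the amplitude images reconstructed from each ring, can be sketched as follows. This is a simplified illustration on a synthetic field: the number of rings and the test object are assumed, and the angular-spectrum propagation used in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def ring_mask_average(field, n_rings=4):
    """Split the spectrum into concentric rings, reconstruct each sub-spectrum,
    and superimpose the resulting amplitude images."""
    ny, nx = field.shape
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    fy, fx = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    radius = np.hypot(fx, fy)
    edges = np.linspace(0, radius.max() + 1, n_rings + 1)

    amplitudes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (radius >= lo) & (radius < hi)
        sub = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
        amplitudes.append(np.abs(sub))   # amplitude image from one sub-spectrum
    # speckle in the sub-reconstructions is mutually decorrelated, so their
    # incoherent superposition carries lower speckle noise (the paper's rationale)
    return np.mean(amplitudes, axis=0)

# synthetic object with multiplicative speckle-like noise
obj = np.ones((256, 256))
obj[96:160, 96:160] = 2.0
speckled = obj * np.abs(rng.normal(size=obj.shape) + 1j * rng.normal(size=obj.shape))
result = ring_mask_average(speckled, n_rings=4)
print("averaged amplitude image shape:", result.shape)
```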
Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image
Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.
2014-01-01
It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681
An Acoustic OFDM System with Symbol-by-Symbol Doppler Compensation for Underwater Communication
MinhHai, Tran; Rie, Saotome; Suzuki, Taisaku; Wada, Tomohisa
2016-01-01
We propose an acoustic OFDM system for underwater communication, specifically for vertical link communications such as between a robot on the sea bottom and a mother ship at the surface. The main contributions are (1) estimation of the time-varying Doppler shift using continual pilots in conjunction with monitoring the drift of the Power Delay Profile and (2) symbol-by-symbol Doppler compensation in the frequency domain by an ICI matrix representing nonuniform Doppler. In addition, we compare our proposal against a resampling method. Simulation and experimental results confirm that our system outperforms the resampling method when the velocity changes roughly over OFDM symbols. Overall, experimental results taken in Shizuoka, Japan, show that our system, using 16QAM and 64QAM, achieved a data throughput of 7.5 Kbit/sec with a transmitter moving at a maximum of 2 m/s, in a complicated trajectory, over 30 m vertically. PMID:27057558
Forecasting drought risks for a water supply storage system using bootstrap position analysis
Tasker, Gary; Dunne, Paul
1997-01-01
Forecasting the likelihood of drought conditions is an integral part of managing a water supply storage and delivery system. Position analysis uses a large number of possible flow sequences as inputs to a simulation of a water supply storage and delivery system. For a given set of operating rules and water use requirements, water managers can use such a model to forecast the likelihood of specified outcomes such as reservoir levels falling below a specified level or streamflows falling below statutory passing flows a few months ahead conditioned on the current reservoir levels and streamflows. The large number of possible flow sequences are generated using a stochastic streamflow model with a random resampling of innovations. The advantages of this resampling scheme, called bootstrap position analysis, are that it does not rely on the unverifiable assumption of normality and it allows incorporation of long-range weather forecasts into the analysis.
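A compact sketch of the bootstrap position-analysis idea is given below; every number in it (the AR(1) coefficients, reservoir capacity, demand, and trigger level) is an assumed, illustrative value, not part of the study. It generates many 12-month traces by resampling model residuals, runs each trace through a simple monthly reservoir mass balance, and reports the fraction of traces in which storage drops below the trigger.

```python
import numpy as np

rng = np.random.default_rng(5)

# assumed AR(1) model of monthly log flows and a pool of fitted residuals
mean_logq, phi = 4.0, 0.6
resid_pool = rng.normal(0, 0.35, size=480)     # stands in for historical model residuals

def generate_trace(last_logq, n_months=12):
    """One synthetic 12-month flow trace: AR(1) with bootstrapped residuals."""
    logq, flows = last_logq, []
    for _ in range(n_months):
        logq = mean_logq + phi * (logq - mean_logq) + rng.choice(resid_pool)
        flows.append(np.exp(logq))
    return flows

def falls_below(trace, start=800.0, demand=60.0, capacity=1200.0, trigger=300.0):
    """Simple monthly reservoir mass balance; True if storage drops below the trigger."""
    storage = start
    for inflow in trace:
        storage = min(capacity, storage + inflow - demand)
        if storage < trigger:
            return True
    return False

n_traces = 5000
current_logq = 3.6                              # conditioned on currently low flows
hits = sum(falls_below(generate_trace(current_logq)) for _ in range(n_traces))
print("P(storage < trigger within 12 months) = %.2f" % (hits / n_traces))
```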
Measures of precision for dissimilarity-based multivariate analysis of ecological communities
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
NASA Astrophysics Data System (ADS)
Lu, Siliang; Wang, Xiaoxian; He, Qingbo; Liu, Fang; Liu, Yongbin
2016-12-01
Transient signal analysis (TSA) has been proven an effective tool for motor bearing fault diagnosis, but has yet to be applied in processing bearing fault signals with variable rotating speed. In this study, a new TSA-based angular resampling (TSAAR) method is proposed for fault diagnosis under speed fluctuation condition via sound signal analysis. By applying the TSAAR method, the frequency smearing phenomenon is eliminated and the fault characteristic frequency is exposed in the envelope spectrum for bearing fault recognition. The TSAAR method can accurately estimate the phase information of the fault-induced impulses using neither complicated time-frequency analysis techniques nor external speed sensors, and hence it provides a simple, flexible, and data-driven approach that realizes variable-speed motor bearing fault diagnosis. The effectiveness and efficiency of the proposed TSAAR method are verified through a series of simulated and experimental case studies.
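Once the instantaneous shaft phase has been estimated, the angular-resampling step itself is short. The sketch below is generic order tracking with a simulated speed ramp and a synthetic order-3 component, not the TSAAR phase estimator described above: it interpolates the time-sampled signal onto uniform increments of shaft angle so that speed fluctuation no longer smears the fault-related frequency.

```python
import numpy as np

def angular_resample(signal, t, phase, samples_per_rev=64):
    """Resample a time-domain signal onto a uniform shaft-angle grid.
    phase: estimated instantaneous shaft angle [rad] at the time stamps t."""
    n_revs = int(phase[-1] // (2 * np.pi))
    angle_grid = np.arange(n_revs * samples_per_rev) * 2 * np.pi / samples_per_rev
    t_at_angle = np.interp(angle_grid, phase, t)   # time at which each angle is reached
    return np.interp(t_at_angle, t, signal)        # signal value at those times

# toy example: shaft speed ramps from 20 to 25 Hz, an order-3 component is present
fs = 10000
t = np.arange(0, 2.0, 1 / fs)
f_shaft = 20 + 2.5 * t                             # fluctuating speed [Hz]
phase = 2 * np.pi * np.cumsum(f_shaft) / fs        # integrated instantaneous phase
signal = np.sin(3 * phase) + 0.1 * np.random.randn(t.size)

resampled = angular_resample(signal, t, phase)
spectrum = np.abs(np.fft.rfft(resampled))
orders = np.fft.rfftfreq(resampled.size, d=1 / 64) # frequency axis in shaft orders
print("dominant order: %.2f" % orders[np.argmax(spectrum[1:]) + 1])
```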
Simulation and statistics: Like rhythm and song
NASA Astrophysics Data System (ADS)
Othman, Abdul Rahman
2013-04-01
Simulation has been introduced to solve problems in the form of systems. By using this technique, the following two problems can be overcome. First, a problem may have an analytical solution, but the cost of running an experiment to solve it is high in terms of money and lives. Second, a problem may exist that has no analytical solution. In the field of statistical inference the second problem is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutations to form a pseudo sampling distribution that leads to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, being used to verify analytical solutions in inference. This paper also discusses resampling techniques as simulation techniques. Misunderstandings about these two techniques are examined. The successful usages of both techniques are also explained.
Bishara, Anthony J; Hittner, James B
2012-09-01
It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
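The rank-based inverse normal (rankit) transformation singled out above is a one-liner; the sketch below, assuming SciPy is available, applies it to both variables of a skewed toy sample before computing Pearson's r.

```python
import numpy as np
from scipy import stats

def rankit(x):
    """Rank-based inverse normal transformation (rankit scores)."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.5) / len(x))

rng = np.random.default_rng(6)
x = rng.exponential(size=50)                 # skewed, nonnormal sample
y = 0.5 * x + rng.exponential(size=50)

r_raw, p_raw = stats.pearsonr(x, y)
r_tr, p_tr = stats.pearsonr(rankit(x), rankit(y))
print("Pearson on raw data:      r=%.3f p=%.4f" % (r_raw, p_raw))
print("Pearson on rankit scores: r=%.3f p=%.4f" % (r_tr, p_tr))
```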
A scale-invariant change detection method for land use/cover change research
NASA Astrophysics Data System (ADS)
Xing, Jin; Sieber, Renee; Caelli, Terrence
2018-07-01
Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms in computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale heterogeneous images. This method is composed of an entropy-based spatial decomposition, two scale-invariant feature extraction methods, Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transformation (SIFT) algorithms, a spatial regression voting method to integrate MSER and SIFT results, a Markov Random Field-based smoothing method, and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides similar accuracy compared with the resampling-based approach but this method avoids the LUCC distortion incurred by resampling.
Bootstrap position analysis for forecasting low flow frequency
Tasker, Gary D.; Dunne, P.
1997-01-01
A method of random resampling of residuals from stochastic models is used to generate a large number of 12-month-long traces of natural monthly runoff to be used in a position analysis model for a water-supply storage and delivery system. Position analysis uses the traces to forecast the likelihood of specified outcomes such as reservoir levels falling below a specified level or streamflows falling below statutory passing flows conditioned on the current reservoir levels and streamflows. The advantages of this resampling scheme, called bootstrap position analysis, are that it does not rely on the unverifiable assumption of normality, fewer parameters need to be estimated directly from the data, and accounting for parameter uncertainty is easily done. For a given set of operating rules and water-use requirements for a system, water managers can use such a model as a decision-making tool to evaluate different operating rules. © ASCE.
NASA Astrophysics Data System (ADS)
Chuan, Zun Liang; Ismail, Noriszura; Shinyie, Wendy Ling; Lit Ken, Tan; Fam, Soo-Fen; Senawi, Azlyna; Yusoff, Wan Nur Syahidah Wan
2018-04-01
Due to the limited availability of historical precipitation records, agglomerative hierarchical clustering algorithms are widely used to extrapolate information from gauged to ungauged precipitation catchments, yielding a more reliable projection of extreme hydro-meteorological events such as extreme precipitation events. However, accurately identifying the optimum number of homogeneous precipitation catchments based on the dendrogram produced by agglomerative hierarchical algorithms is very subjective. The main objective of this study is to propose an efficient regionalized algorithm to identify the homogeneous precipitation catchments for non-stationary precipitation time series. The homogeneous precipitation catchments are identified using an average-linkage hierarchical clustering algorithm associated with multi-scale bootstrap resampling, with the uncentered correlation coefficient as the similarity measure. The regionalized homogeneous precipitation catchments are consolidated using the K-sample Anderson-Darling non-parametric test. The analysis results show that the proposed regionalized algorithm performs better than the agglomerative hierarchical clustering algorithms proposed in previous studies.
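A simplified sketch of the bootstrap side of this procedure is shown below. It uses an ordinary single-scale bootstrap rather than the multi-scale version, made-up catchment series, and a fixed number of clusters, so all of those choices are assumptions; it reclusters resampled months with average linkage under an uncentered-correlation distance and counts how often a candidate group of catchments reappears intact.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)

# assumed monthly precipitation series: 120 months x 8 catchments, two latent groups
months, n_catch = 120, 8
signal = np.repeat(rng.normal(size=(months, 2)), 4, axis=1)   # catchments 0-3 vs 4-7
precip = signal + 0.5 * rng.normal(size=(months, n_catch))

def uncentered_corr_dist(data):
    """1 - uncentered correlation between catchment columns, as a condensed matrix."""
    unit = data / np.linalg.norm(data, axis=0)
    return squareform(1.0 - unit.T @ unit, checks=False)

def cluster_labels(data, k=2):
    return fcluster(linkage(uncentered_corr_dist(data), method="average"),
                    k, criterion="maxclust")

candidate = [0, 1, 2, 3]        # group of catchments whose stability we assess
n_boot, together = 500, 0
for _ in range(n_boot):
    resampled = precip[rng.integers(0, months, size=months), :]  # bootstrap the months
    labels = cluster_labels(resampled)
    together += len(set(labels[candidate])) == 1                 # group stays intact
print("bootstrap support for the candidate group: %.2f" % (together / n_boot))
```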
Efficient geometric rectification techniques for spectral analysis algorithm
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Pang, S. S.; Curlander, J. C.
1992-01-01
The spectral analysis algorithm is a viable technique for processing synthetic aperture radar (SAR) data at near-real-time throughput rates by trading off image resolution. One major challenge of the spectral analysis algorithm is that the output image, often referred to as the range-Doppler image, is represented along iso-range and iso-Doppler lines, a curved grid format; this phenomenon is known as the fan-shape effect. Therefore, resampling is required to convert the range-Doppler image into a rectangular grid format before the individual images can be overlaid to form seamless multi-look strip imagery. An efficient algorithm for geometric rectification of the range-Doppler image is presented. The proposed algorithm, realized in two one-dimensional resampling steps, takes into consideration the fan-shape phenomenon of the range-Doppler image as well as the high squint angle and updates of the cross-track and along-track Doppler parameters. No ground reference points are required.
Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers
NASA Technical Reports Server (NTRS)
Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj
1995-01-01
The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler, based on the PTD representation, allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
A triaxial supramolecular weave
NASA Astrophysics Data System (ADS)
Lewandowska, Urszula; Zajaczkowski, Wojciech; Corra, Stefano; Tanabe, Junki; Borrmann, Ruediger; Benetti, Edmondo M.; Stappert, Sebastian; Watanabe, Kohei; Ochs, Nellie A. K.; Schaeublin, Robin; Li, Chen; Yashima, Eiji; Pisula, Wojciech; Müllen, Klaus; Wennemers, Helma
2017-11-01
Despite recent advances in the synthesis of increasingly complex topologies at the molecular level, nano- and microscopic weaves have remained difficult to achieve. Only a few diaxial molecular weaves exist—these were achieved by templation with metals. Here, we present an extended triaxial supramolecular weave that consists of self-assembled organic threads. Each thread is formed by the self-assembly of a building block comprising a rigid oligoproline segment with two perylene-monoimide chromophores spaced at 18 Å. Upon π stacking of the chromophores, threads form that feature alternating up- and down-facing voids at regular distances. These voids accommodate incoming building blocks and establish crossing points through CH-π interactions on further assembly of the threads into a triaxial woven superstructure. The resulting micrometre-scale supramolecular weave proved to be more robust than non-woven self-assemblies of the same building block. The uniform hexagonal pores of the interwoven network were able to host iridium nanoparticles, which may be of interest for practical applications.
Chen, Zhenfeng; Ge, Shuzhi Sam; Zhang, Yun; Li, Yanan
2014-11-01
This paper presents adaptive neural tracking control for a class of uncertain multiinput-multioutput (MIMO) nonlinear systems in block-triangular form. All subsystems within these MIMO nonlinear systems are of completely nonaffine pure-feedback form and allowed to have different orders. To deal with the nonaffine appearance of the control variables, the mean value theorem is employed to transform the systems into a block-triangular strict-feedback form with control coefficients being couplings among various inputs and outputs. A systematic procedure is proposed for the design of a new singularity-free adaptive neural tracking control strategy. Such a design procedure can remove the couplings among subsystems and hence avoids the possible circular control construction problem. As a consequence, all the signals in the closed-loop system are guaranteed to be semiglobally uniformly ultimately bounded. Moreover, the outputs of the systems are ensured to converge to a small neighborhood of the desired trajectories. Simulation studies verify the theoretical findings revealed in this paper.
A discrete element modelling approach for block impacts on trees
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Olmedo, Ignatio; Berger, Frederic
2015-04-01
Over the past few years, rockfall models explicitly accounting for block shape, especially those using the Discrete Element Method (DEM), have shown a good ability to predict rockfall trajectories. Integrating forest effects into those models still remains challenging. This study aims at using a DEM approach to model impacts of blocks on trees and to identify the key parameters controlling the block kinematics after the impact on a tree. A DEM impact model of a block on a tree was developed and validated using laboratory experiments. Then, key parameters were assessed using a global sensitivity analysis. Modelling the impact of a block on a tree using DEM allows taking into account large displacements, material non-linearities and contacts between the block and the tree. Tree stems are represented by flexible cylinders modelled as plastic beams sustaining normal, shearing, bending, and twisting loading. Root-soil interactions are modelled using a rotational stiffness acting on the bending moment at the bottom of the tree and a limit bending moment to account for tree overturning. The crown is taken into account using an additional mass distributed uniformly over the upper part of the tree. The block is represented by a sphere. The contact model between the block and the stem consists of an elastic frictional model. The DEM model was validated using laboratory impact tests carried out on 41 fresh beech (Fagus sylvatica) stems. Each stem was 1.3 m long with a diameter between 3 and 7 cm. Wood stems were clamped on a rigid structure and impacted by a 149 kg Charpy pendulum. Finally, an intensive simulation campaign of blocks impacting trees was carried out to identify the input parameters controlling the block kinematics after the impact on a tree. 20 input parameters were considered in the DEM simulation model: 12 parameters related to the tree and 8 parameters to the block. The results highlight that the impact velocity, the stem diameter, and the block volume are the three input parameters that control the block kinematics after impact.
Self-cleaning threaded rod spinneret for high-efficiency needleless electrospinning
NASA Astrophysics Data System (ADS)
Zheng, Gaofeng; Jiang, Jiaxin; Wang, Xiang; Li, Wenwang; Zhong, Weizheng; Guo, Shumin
2018-07-01
High-efficiency production of nanofibers is the key to the application of electrospinning technology. This work focuses on multi-jet electrospinning, in which a threaded rod electrode is utilized as the needleless spinneret to achieve high-efficiency production of nanofibers. A slipper block, which fits the thread and moves along the rod, is designed to transfer polymer solution evenly to the surface of the rod spinneret. The relative motion between the slipper block and the threaded rod electrode promotes the unstable fluctuation of the solution surface; thus, the rotation of the threaded rod electrode decreases the critical voltage for the initial multi-jet ejection and the diameter of the nanofibers. The residual solution on the surface of the threaded rod is cleaned up by the moving slipper block, showing a great self-cleaning ability, which ensures stable multi-jet ejection and increases the productivity of nanofibers. Each thread of the threaded rod electrode serves as an independent spinneret, which enhances the electric field strength and constrains the position of the Taylor cone, resulting in high productivity of uniform nanofibers. The diameter of the nanofibers decreases with increasing threaded rod rotation speed, and the productivity increases with the solution flow rate. The rotation of the electrode provides an additional force for the ejection of charged jets, which also contributes to the high-efficiency production of nanofibers. The maximum productivity of nanofibers from the threaded rod spinneret is 5-6 g/h, about 250-300 times as high as that from the single-needle spinneret. The self-cleaning threaded rod spinneret is an effective way to realize continuous multi-jet electrospinning, which promotes industrial applications of uniform nanofibrous membranes.
Stratovolcano stability assessment methods and results from Citlaltepetl, Mexico
Zimbelman, D.R.; Watters, R.J.; Firth, I.R.; Breit, G.N.; Carrasco-Nunez, Gerardo
2004-01-01
Citlaltépetl volcano is the easternmost stratovolcano in the Trans-Mexican Volcanic Belt. Situated within 110 km of Veracruz, it has experienced two major collapse events and, subsequent to its last collapse, rebuilt a massive, symmetrical summit cone. To enhance hazard mitigation efforts we assess the stability of Citlaltépetl's summit cone, the area thought most likely to fail during a potential massive collapse event. Through geologic mapping, alteration mineralogy, geotechnical studies, and stability modeling we provide important constraints on the likelihood, location, and size of a potential collapse event. The volcano's summit cone is young, highly fractured, and hydrothermally altered. Fractures are most abundant within 5–20-m wide zones defined by multiple parallel to subparallel fractures. Alteration is most pervasive within the fracture systems and includes acid sulfate, advanced argillic, argillic, and silicification ranks. Fractured and altered rocks both have significantly reduced rock strengths, representing likely bounding surfaces for future collapse events. The fracture systems and altered rock masses occur non-uniformly, as an orthogonal set with N–S and E–W trends. Because these surfaces occur non-uniformly, hazards associated with collapse are unevenly distributed about the volcano. Depending on uncertainties in bounding surfaces, but constrained by detailed field studies, potential failure volumes are estimated to range between 0.04–0.5 km3. Stability modeling was used to assess potential edifice failure events. Modeled failure of the outer portion of the cone initially occurs as an "intact block" bounded by steeply dipping joints and outwardly dipping flow contacts. As collapse progresses, more of the inner cone fails and the outer "intact" block transforms into a collection of smaller blocks. Eventually, a steep face develops in the uppermost and central portion of the cone. This modeled failure morphology mimics collapse amphitheaters
NASA Astrophysics Data System (ADS)
Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul
2014-09-01
In this study, a genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using the batch filter based on particle filtering (PF). Two genetic operations, arithmetic crossover and residual mutation, are used for GRS of the batch filter based on PF (PF batch filter). For POD, the Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed one hundred times. The characteristics of the POD results by the PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, 3D error by external orbit comparison, and POD repeatability are analyzed for orbit quality assessments. The POD results are externally checked against NASA JPL's orbits produced using entirely different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results whose mean root mean square (RMS) values are at a level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at a level of 5 cm. For POD repeatability, represented as the standard deviations of post-fit residuals and 3D errors over repeated PODs, however, GRS yields 25% and 13% more robust estimation results than MRRS for the post-fit residual and 3D error, respectively. This study shows that the PF batch filter with the GRS approach using genetic operations is superior to the PF batch filter with MRRS in terms of robustness in POD with SLR observations.
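As a rough illustration of the genetic operations named above, the sketch below applies weighted parent selection, arithmetic crossover, and a small Gaussian "residual-style" mutation to a particle set. It is a generic Python sketch with hypothetical parameter names and values, not the GRS operators or state representation actually used for the LPODS/CHAMP processing.

```python
import numpy as np

def genetic_resample(particles, weights, mutate_frac=0.1, sigma=1e-3, rng=None):
    """Illustrative genetic-style resampling step for a particle (batch) filter.

    Parent pairs are selected with probability proportional to their weights,
    children are formed by arithmetic crossover (a convex combination of the
    parents), and a fraction of the children receive a small Gaussian
    'residual-style' mutation. Parameter names and values are hypothetical.
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = np.asarray(particles, dtype=float)
    n, dim = particles.shape
    idx_a = rng.choice(n, size=n, p=weights)            # weighted parent selection
    idx_b = rng.choice(n, size=n, p=weights)
    a = rng.uniform(0.0, 1.0, size=(n, 1))              # crossover coefficients
    children = a * particles[idx_a] + (1.0 - a) * particles[idx_b]
    n_mut = int(mutate_frac * n)
    if n_mut:
        mut_idx = rng.choice(n, size=n_mut, replace=False)
        children[mut_idx] += rng.normal(0.0, sigma, size=(n_mut, dim))
    return children, np.full(n, 1.0 / n)                # weights reset after resampling
```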
Jodice, Patrick G.R.; Garman, S.L.; Collopy, Michael W.
2001-01-01
Marbled Murrelets (Brachyramphus marmoratus) are threatened seabirds that nest in coastal old-growth coniferous forests throughout much of their breeding range. Currently, observer-based audio-visual surveys are conducted at inland forest sites during the breeding season primarily to determine nesting distribution and breeding status and are being used to estimate temporal or spatial trends in murrelet detections. Our goal was to assess the feasibility of using audio-visual survey data for such monitoring. We used an intensive field-based survey effort to record daily murrelet detections at seven survey stations in the Oregon Coast Range. We then used computer-aided resampling techniques to assess the effectiveness of twelve survey strategies with varying scheduling and a sampling intensity of 4-14 surveys per breeding season to estimate known means and SDs of murrelet detections. Most survey strategies we tested failed to provide estimates of detection means and SDs that were within ±20% of actual means and SDs. Estimates of daily detections were, however, frequently estimated to within ±50% of field data with sampling efforts of 14 days/breeding season. Additional resampling analyses with statistically generated detection data indicated that the temporal variability in detection data had a great effect on the reliability of the mean and SD estimates calculated from the twelve survey strategies, while the value of the mean had little effect. Effectiveness at estimating multi-year trends in detection data was similarly poor, indicating that audio-visual surveys might only be reliably used to estimate annual declines in murrelet detections on the order of 50% per year.
Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches
NASA Technical Reports Server (NTRS)
Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between averages based on occasional satellite observations and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
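A minimal sketch of the "resampling by shifts" idea as described here: the continuous (truth) average of a finely sampled rain-rate series is compared with the averages obtained by subsampling the series at the satellite revisit interval, once for every possible phase (shift) of the sampling times, and the rms of the differences estimates the sampling error. The revisit interval and the synthetic rain series below are illustrative assumptions.

```python
import numpy as np

def sampling_error_by_shifts(rain, dt_min=15, visit_hours=3):
    """Estimate the rms sampling error of the area-average rain rate by shifting.

    `rain` is a 1-D array of area-averaged rain rates on a regular grid of
    `dt_min` minutes (the continuous 'truth'). The series is subsampled at the
    revisit interval `visit_hours` for every possible shift of the sampling
    times, and the rms difference between subsampled means and the full mean
    is returned.
    """
    step = int(visit_hours * 60 / dt_min)      # samples between satellite visits
    truth = rain.mean()                        # continuous-time average
    shifted_means = np.array([rain[s::step].mean() for s in range(step)])
    return np.sqrt(np.mean((shifted_means - truth) ** 2))

# Example: 30 days of synthetic 15-min rain rates, 3-hourly sampling.
rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.2, scale=5.0, size=30 * 24 * 4)
print(sampling_error_by_shifts(rain, visit_hours=3))
```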
One-shot estimate of MRMC variance: AUC.
Gallas, Brandon D
2006-03-01
One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
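For reference, the single-reader nonparametric AUC that underlies both the one-shot and resampling estimators is the Mann-Whitney U-statistic over all negative-positive case pairs; a minimal sketch with illustrative scores is given below. The one-shot MRMC variance itself involves additional moment terms that are not reproduced here.

```python
import numpy as np

def auc_mann_whitney(scores_neg, scores_pos):
    """Nonparametric (U-statistic) estimate of AUC for one reader.

    AUC is the mean, over all (negative, positive) case pairs, of a kernel that
    scores 1 when the positive case is rated higher, 0.5 for ties, 0 otherwise.
    """
    neg = np.asarray(scores_neg, dtype=float)[:, None]
    pos = np.asarray(scores_pos, dtype=float)[None, :]
    kernel = (pos > neg).astype(float) + 0.5 * (pos == neg)
    return kernel.mean()

# Illustrative reader scores for 3 negative and 4 positive cases.
print(auc_mann_whitney([0.1, 0.4, 0.35], [0.8, 0.55, 0.9, 0.45]))
```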
V-type multicylinder internal combustion engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsuboi, M.
1986-05-20
A V-type multicylinder internal combustion engine is described for vehicles comprising front and rear cylinder blocks arrayed in V shape longitudinally of a vehicle body, front and rear cylinder heads fixed on each cylinder block, pistons sliding in the cylinder blocks, a crank and transmission case formed uniformly on the cylinder blocks, a crankshaft extending transversely of the vehicle body borne rotatably on both side walls of the crank and transmission case at journals on both sides, the crankshaft being coupled to the pistons at a crank pin through connecting rods and provided with front and rear driving sprockets, front and rear cam shafts mounted rotatably on the cylinder heads with driven sprockets fixed thereon, a front timing chain laid between the front driving sprocket and the front driven sprocket and constituting together with the front driving and driven sprockets a front cam driving means, a rear timing chain laid between the rear driving sprocket and the rear driven sprocket and constituting together with the rear driving and driven sprockets a rear cam driving means, and a speed change gear coupled to the crankshaft by way of a primary reduction gear and a clutch.
Rapid Ordering in “Wet Brush” Block Copolymer/Homopolymer Ternary Blends
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerk, Gregory S.; Yager, Kevin G.
2017-12-01
The ubiquitous presence of thermodynamically unfavored but kinetically trapped topological defects in nanopatterns formed via self-assembly of block copolymer thin films may prevent their use for many envisioned applications. Here, we demonstrate that lamellae patterns formed by symmetric polystyrene-block-poly(methyl methacrylate) diblock copolymers self-assemble and order extremely rapidly when the diblock copolymers are blended with low molecular weight homopolymers of the constituent blocks. Being in the “wet brush” regime, the homopolymers uniformly distribute within their respective self-assembled microdomains, preventing increases in domain widths. An order-of-magnitude increase in topological grain size in blends over the neat (unblended) diblock copolymer is achieved within minutes of thermal annealing as a result of the significantly higher power law exponent for ordering kinetics in the blends. Moreover, the blends are demonstrated to be capable of rapid and robust domain alignment within micrometer-scale trenches, in contrast to the corresponding neat diblock copolymer. Furthermore, these results can be attributed to the lowering of energy barriers associated with domain boundaries by bringing the system closer to an order–disorder transition through low molecular weight homopolymer blending.
NASA Astrophysics Data System (ADS)
Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander
2012-02-01
Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
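To make the pseudo-spectral step concrete, the following is a minimal 1-D sketch of the operator-splitting (Strang) update for the modified diffusion equation dq/ds = d²q/dx² − w(x)q on a periodic grid, with the segment-length prefactor absorbed into the units. Production SCFT codes such as PolySwift++ work in 3-D and iterate the fields to self-consistency; the field w, grid size, and contour discretization below are illustrative.

```python
import numpy as np

def propagate_chain(w, ns=100, L=10.0):
    """Pseudo-spectral (operator-splitting) solution of the modified diffusion
    equation dq/ds = d^2q/dx^2 - w(x) q on a periodic 1-D grid of length L,
    with the statistical segment-length prefactor set to unity."""
    nx = w.size
    k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)   # angular wavenumbers
    ds = 1.0 / ns                                    # contour step
    q = np.ones(nx)                                  # q(x, s=0) = 1
    half_potential = np.exp(-0.5 * ds * w)
    diffusion = np.exp(-ds * k**2)
    for _ in range(ns):
        q = half_potential * q
        q = np.fft.ifft(diffusion * np.fft.fft(q)).real
        q = half_potential * q
    return q

# Toy chemical potential field on a 256-point grid.
w = 0.5 * np.cos(2 * np.pi * np.arange(256) / 256)
q = propagate_chain(w)
```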
NASA Astrophysics Data System (ADS)
Huang, Huan; Baddour, Natalie; Liang, Ming
2018-02-01
Under normal operating conditions, bearings often run under time-varying rotational speed conditions. Under such circumstances, the bearing vibrational signal is non-stationary, which renders ineffective the techniques used for bearing fault diagnosis under constant running conditions. One of the conventional methods of bearing fault diagnosis under time-varying speed conditions is resampling the non-stationary signal to a stationary signal via order tracking with the measured variable speed. With the resampled signal, the methods available for constant condition cases are thus applicable. However, the accuracy of the order tracking is often inadequate and the time-varying speed is sometimes not measurable. Thus, resampling-free methods are of interest for bearing fault diagnosis under time-varying rotational speed for use without tachometers. With the development of time-frequency analysis, the time-varying fault character manifests as curves in the time-frequency domain. By extracting the Instantaneous Fault Characteristic Frequency (IFCF) from the Time-Frequency Representation (TFR) and converting the IFCF, its harmonics, and the Instantaneous Shaft Rotational Frequency (ISRF) into straight lines, the bearing fault can be detected and diagnosed without resampling. However, so far, the extraction of the IFCF for bearing fault diagnosis is mostly based on the assumption that at each moment the IFCF has the highest amplitude in the TFR, which is not always true. Hence, a more reliable T-F curve extraction approach should be investigated. Moreover, if the T-F curves including the IFCF, its harmonic, and the ISRF can be all extracted from the TFR directly, no extra processing is needed for fault diagnosis. Therefore, this paper proposes an algorithm for multiple T-F curve extraction from the TFR based on a fast path optimization which is more reliable for T-F curve extraction. Then, a new procedure for bearing fault diagnosis under unknown time-varying speed conditions is developed based on the proposed algorithm and a new fault diagnosis strategy. The average curve-to-curve ratios are utilized to describe the relationship of the extracted curves and fault diagnosis can then be achieved by comparing the ratios to the fault characteristic coefficients. The effectiveness of the proposed method is validated by simulated and experimental signals.
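For contrast with the resampling-free strategy proposed here, the conventional order-tracking route mentioned above can be sketched as follows: integrate the measured instantaneous speed to obtain the shaft phase, then interpolate the vibration signal at uniform phase increments so that it becomes stationary in the angular domain. The sampling rate, speed profile, and samples-per-revolution in the sketch are illustrative.

```python
import numpy as np

def order_track_resample(signal, speed_hz, fs, samples_per_rev=64):
    """Resample a vibration signal from the time domain to the angular domain
    (computed order tracking) using a measured instantaneous shaft speed.

    `signal` and `speed_hz` are sampled at `fs` Hz; the shaft phase (in
    revolutions) is obtained by integrating the speed, and the signal is
    interpolated at uniform phase increments.
    """
    phase_rev = np.cumsum(speed_hz) / fs            # cumulative revolutions
    n_rev = int(np.floor(phase_rev[-1]))
    uniform_phase = np.arange(n_rev * samples_per_rev) / samples_per_rev
    return np.interp(uniform_phase, phase_rev, signal)

# Example: a tone locked to a shaft ramping from 20 Hz to 30 Hz over 2 s.
fs = 10000.0
t = np.arange(0, 2.0, 1 / fs)
speed = 20.0 + 5.0 * t
signal = np.sin(2 * np.pi * np.cumsum(speed) / fs)
resampled = order_track_resample(signal, speed, fs)
```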
NASA Astrophysics Data System (ADS)
Zhang, Yu-Ying; Reiprich, Thomas H.; Schneider, Peter; Clerc, Nicolas; Merloni, Andrea; Schwope, Axel; Borm, Katharina; Andernach, Heinz; Caretta, César A.; Wu, Xiang-Ping
2017-03-01
We present the relation of X-ray luminosity versus dynamical mass for 63 nearby clusters of galaxies in a flux-limited sample, the HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS, consisting of 64 clusters). The luminosity measurements are obtained based on 1.3 Ms of clean XMM-Newton data and ROSAT pointed observations. The masses are estimated using optical spectroscopic redshifts of 13,647 cluster galaxies in total. We classify clusters into disturbed and undisturbed based on a combination of the X-ray luminosity concentration and the offset between the brightest cluster galaxy and the X-ray flux-weighted center. Given sufficient numbers (i.e., ≥45) of member galaxies when the dynamical masses are computed, the luminosity versus mass relations agree between the disturbed and undisturbed clusters. The cool-core clusters still dominate the scatter in the luminosity versus mass relation even when a core-corrected X-ray luminosity is used, which indicates that the scatter of this scaling relation mainly reflects the structure formation history of the clusters. As shown by the clusters with only a few spectroscopically confirmed members, the dynamical masses can be underestimated and thus lead to a biased scaling relation. To investigate the potential of spectroscopic surveys to follow up high-redshift galaxy clusters or groups observed in X-ray surveys for identification and mass calibration, we carried out Monte Carlo resampling of the cluster galaxy redshifts and calibrated the uncertainties of the redshift and dynamical mass estimates when only reduced numbers of galaxy redshifts per cluster are available. The resampling considers the SPIDERS and 4MOST configurations, designed for the follow-up of the eROSITA clusters, and was carried out for each cluster in the sample at the actual cluster redshift as well as at assigned input cluster redshifts of 0.2, 0.4, 0.6, and 0.8. To follow up very distant clusters or groups, we also carried out the mass calibration based on the resampling with only ten redshifts per cluster, and the redshift calibration based on the resampling with only five and ten redshifts per cluster, respectively. Our results demonstrate the power of combining upcoming X-ray and optical spectroscopic surveys for mass calibration of clusters. The scatter in the dynamical mass estimates for the clusters with at least ten members is within 50%.
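A minimal sketch of the Monte Carlo resampling step described above: member redshifts are repeatedly drawn in reduced numbers per cluster, and a line-of-sight velocity dispersion is computed for each draw, whose spread can then be propagated to a dynamical-mass uncertainty. The simple dispersion estimator, trial count, and member number below are illustrative assumptions, not the exact estimator used for HIFLUGCS.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def resample_velocity_dispersion(member_z, n_members=10, n_trials=100, rng=None):
    """Monte Carlo resampling of cluster member redshifts.

    Draws `n_members` galaxies (without replacement) per trial from the full
    spectroscopic member list and returns the distribution of line-of-sight
    velocity dispersions over the trials.
    """
    rng = np.random.default_rng() if rng is None else rng
    member_z = np.asarray(member_z, dtype=float)
    sigmas = np.empty(n_trials)
    for i in range(n_trials):
        z = rng.choice(member_z, size=n_members, replace=False)
        z_cl = np.mean(z)
        v = C_KM_S * (z - z_cl) / (1.0 + z_cl)   # peculiar velocities
        sigmas[i] = np.std(v, ddof=1)
    return sigmas
```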
NASA Astrophysics Data System (ADS)
Brown, James D.; Wu, Limin; He, Minxue; Regonda, Satish; Lee, Haksu; Seo, Dong-Jun
2014-11-01
Retrospective forecasts of precipitation, temperature, and streamflow were generated with the Hydrologic Ensemble Forecast Service (HEFS) of the U.S. National Weather Service (NWS) for a 20-year period between 1979 and 1999. The hindcasts were produced for two basins in each of four River Forecast Centers (RFCs), namely the Arkansas-Red Basin RFC, the Colorado Basin RFC, the California-Nevada RFC, and the Middle Atlantic RFC. Precipitation and temperature forecasts were produced with the HEFS Meteorological Ensemble Forecast Processor (MEFP). Inputs to the MEFP comprised "raw" precipitation and temperature forecasts from the frozen (circa 1997) version of the NWS Global Forecast System (GFS) and a climatological ensemble, which involved resampling historical observations in a moving window around the forecast valid date ("resampled climatology"). In both cases, the forecast horizon was 1-14 days. This paper outlines the hindcasting and verification strategy, and then focuses on the quality of the temperature and precipitation forecasts from the MEFP. A companion paper focuses on the quality of the streamflow forecasts from the HEFS. In general, the precipitation forecasts are more skillful than resampled climatology during the first week, but have little or no skill during the second week. In contrast, the temperature forecasts improve upon resampled climatology at all forecast lead times. However, there are notable differences among RFCs and for different seasons, aggregation periods, and magnitudes of the observed and forecast variables, both for precipitation and temperature. For example, the MEFP-GFS precipitation forecasts show the highest correlations and greatest skill in the California-Nevada RFC, particularly during the wet season (November-April). While generally reliable, the MEFP forecasts typically underestimate the largest observed precipitation amounts (a Type-II conditional bias). As a statistical technique, the MEFP cannot detect, and thus appropriately correct for, conditions that are undetected by the GFS. The calibration of the MEFP to provide reliable and skillful forecasts of a range of precipitation amounts (not only large amounts) is a secondary factor responsible for these Type-II conditional biases. Interpretation of the verification results leads to guidance on the expected performance and limitations of the MEFP, together with recommendations on future enhancements.
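The "resampled climatology" benchmark can be sketched as follows: for a given forecast valid date, historical observations falling within a moving calendar-day window around that date (in any year) are pooled and resampled to form an ensemble. The window width, member count, and pandas-based data layout below are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def resampled_climatology(obs, valid_date, window_days=7, n_members=30, rng=None):
    """Build a climatological ensemble for one forecast valid date by resampling
    historical observations that fall within +/- `window_days` of that calendar
    date in any year. `obs` is a pandas Series indexed by date (e.g. daily
    precipitation)."""
    rng = np.random.default_rng() if rng is None else rng
    valid = pd.Timestamp(valid_date)
    doy = obs.index.dayofyear
    target = valid.dayofyear
    # Circular day-of-year distance so the window wraps around the year end.
    dist = np.minimum(np.abs(doy - target), 365 - np.abs(doy - target))
    pool = obs[dist <= window_days].to_numpy()
    return rng.choice(pool, size=n_members, replace=True)
```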
Performance evaluation for 120 four-layer DOI block detectors of the jPET-D4.
Inadama, Naoko; Murayama, Hideo; Ono, Yusuke; Tsuda, Tomoaki; Hamamoto, Manabu; Yamaya, Taiga; Yoshida, Eiji; Shibuya, Kengo; Nishikido, Fumihiko; Takahashi, Kei; Kawai, Hideyuki
2008-01-01
The jPET-D4 is a brain positron emission tomography (PET) scanner that we have developed to meet user demands for high sensitivity and high spatial resolution. For this scanner, we developed a four-layer depth-of-interaction (DOI) detector. The four-layer DOI detector is a key component of the jPET-D4, and its performance has a great influence on the overall system performance. Previously, we reported the original technique for encoding four-layer DOI information. Here, we introduce the final design of the jPET-D4 detector and present the results of an investigation of the uniformity of detector performance. The performance evaluation was done over the 120 DOI crystal blocks for the detectors, which are to be assembled into the jPET-D4 scanner. We also introduce the crystal assembly method, which is simple even though each DOI crystal block is composed of 1,024 crystal elements. The jPET-D4 detector consists of four layers of 16 x 16 Gd(2)SiO(5) (GSO) crystals and a 256-channel flat-panel position-sensitive photomultiplier tube (256ch FP-PMT). To identify scintillated crystals in the four-layer DOI detector, we use pulse shape discrimination and position discrimination on the two-dimensional (2D) position histogram. For pulse shape discrimination, two kinds of GSO crystals that show different scintillation decay time constants are used in the upper two and lower two layers, respectively. Proper reflector arrangement in the crystal block then allows the scintillated crystals to be identified in these two-layer groupings with two 2D position histograms. We produced the 120 DOI crystal blocks for the jPET-D4 system, and measured their characteristics such as the accuracy of pulse shape discrimination, energy resolution, and the pulse height of the full energy peak. The results show a satisfactory and uniform performance of the four-layer DOI crystal blocks; for example, the misidentification rate in each GSO layer is <5% based on pulse shape discrimination, the averaged energy resolutions for the central four crystals of the first (farthest from the FP-PMT), second, third, and fourth layers are 15.7 +/- 1.0, 15.8 +/- 0.6, 17.7 +/- 1.2, and 17.3 +/- 1.4%, respectively, and the variation in pulse height of the full energy peak among the four layers is <5% on average.
Wind Measurements from Arc Scans with Doppler Wind Lidar
Wang, H.; Barthelmie, R. J.; Clifton, Andy; ...
2015-11-25
Defining optimal scanning geometries for scanning lidars in wind energy applications is still an active field of research. Our paper evaluates uncertainties associated with arc scan geometries and presents recommendations regarding optimal configurations in the atmospheric boundary layer. The analysis is based on arc scan data from a Doppler wind lidar with one elevation angle and seven azimuth angles spanning 30° and focuses on an estimation of 10-min mean wind speed and direction. When flow is horizontally uniform, this approach can provide accurate wind measurements required for wind resource assessments, in part because of its high resampling rate. Retrieved wind velocities at a single range gate exhibit good correlation to data from a sonic anemometer on a nearby meteorological tower, and vertical profiles of horizontal wind speed, though derived from range gates located on a conical surface, match those measured by mast-mounted cup anemometers. Uncertainties in the retrieved wind velocity are related to high turbulent wind fluctuation and an inhomogeneous horizontal wind field. Moreover, the radial velocity variance is found to be a robust measure of the uncertainty of the retrieved wind speed because of its relationship to turbulence properties. It is further shown that the standard error of wind speed estimates can be minimized by increasing the azimuthal range beyond 30° and using five to seven azimuth angles.
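A minimal sketch of retrieving the mean horizontal wind from arc-scan radial velocities at a single range gate is given below, assuming horizontally uniform flow and negligible vertical velocity; it is the standard least-squares sine-fit idea rather than the paper's specific retrieval, and the azimuth set and synthetic radial velocities are illustrative.

```python
import numpy as np

def fit_wind_from_arc(radial_vel, azimuth_deg, elevation_deg):
    """Least-squares retrieval of the mean horizontal wind from arc-scan radial
    velocities at one range gate, assuming v_r ~= u*sin(az)*cos(el) + v*cos(az)*cos(el).
    Returns wind speed, meteorological wind direction, and the (u, v) components."""
    az = np.radians(np.asarray(azimuth_deg, dtype=float))
    el = np.radians(elevation_deg)
    A = np.column_stack([np.sin(az) * np.cos(el), np.cos(az) * np.cos(el)])
    (u, v), *_ = np.linalg.lstsq(A, np.asarray(radial_vel, dtype=float), rcond=None)
    speed = float(np.hypot(u, v))
    direction = float(np.degrees(np.arctan2(-u, -v)) % 360.0)  # direction wind blows from
    return speed, direction, (u, v)

# Example: seven azimuths spanning 30 degrees at a 3-degree elevation angle.
az = np.linspace(75.0, 105.0, 7)
vr = 8.0 * np.sin(np.radians(az)) * np.cos(np.radians(3.0))   # synthetic westerly flow
print(fit_wind_from_arc(vr, az, 3.0))
```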
Magnetic Field Synthesis for Microwave Magnetics.
1982-04-01
Keywords: uniform fields, ferrimagnetic films, yttrium iron garnet (YIG), magnetic fields.
Code of Federal Regulations, 2010 CFR
2010-01-01
... pressure of 30 inches of mercury (101.6 kPa) (density of mercury equals 13.595 grams per cubic centimeter...—Volume of Gas Consumed W—Weight of Test Block 2. Test Conditions 2.1 Installation. A free standing kitchen.... 2.9.2 Gas Measurements. 2.9.2.1 Positive displacement meters. The gas meter to be used for measuring...
Dynamic mask for producing uniform or graded-thickness thin films
Folta, James A. [Livermore, CA]
2006-06-13
A method for producing single layer or multilayer films with high thickness uniformity or thickness gradients. The method utilizes a moving mask which blocks some of the flux from a sputter target or evaporation source before it deposits on a substrate. The velocity and position of the mask are computer controlled to precisely tailor the film thickness distribution. The method is applicable to any type of vapor deposition system, but is particularly useful for ion beam sputter deposition and evaporation deposition; it enables a high degree of uniformity for ion beam deposition, even for near-normal incidence of deposition species, which may be critical for producing low-defect multilayer coatings, such as those required for masks for extreme ultraviolet lithography (EUVL). The mask can have a variety of shapes, from a simple solid paddle shape to a larger mask with a shaped hole through which the flux passes. The motion of the mask can be linear or rotational, and the mask can be moved to make single or multiple passes in front of the substrate per layer, and can pass completely or partially across the substrate.
An Upgrade Pinning Block: A Mechanical Practical Aid for Fast Labelling of the Insect Specimens.
Ghafouri Moghaddam, Mohammad Hossein; Ghafouri Moghaddam, Mostafa; Rakhshani, Ehsan; Mokhtari, Azizollah
2017-01-01
A new mechanical innovation is described to deal with standard labelling of dried specimens on triangular cards and/or pinned specimens in personal and public collections. It works quickly, precisely, and easily and is very useful for maintaining label uniformity in collections. The tool accurately sets the position of labels in the shortest possible time. This tool has advantages including rapid processing, cost effectiveness, light weight, and high accuracy, compared to conventional methods. It is fully customisable, compact, and does not require specialist equipment to assemble. Conventional methods generally require locating holes on the pinning block surface when labelling, with a resulting risk of damage to the specimens. Insects of different orders can be labelled by this simple and effective tool.
Block copolymer hollow fiber membranes with catalytic activity and pH-response.
Hilke, Roland; Pradeep, Neelakanda; Madhavan, Poornima; Vainio, Ulla; Behzad, Ali Reza; Sougrat, Rachid; Nunes, Suzana P; Peinemann, Klaus-Viktor
2013-08-14
We fabricated block copolymer hollow fiber membranes with self-assembled, shell-side, uniform pore structures. The fibers in these membranes combined pH-responsive pores, which act as chemical gates that open above pH 4, with catalytic activity achieved by the incorporation of gold nanoparticles. We used a dry/wet spinning process to produce the asymmetric hollow fibers and determined the conditions under which the hollow fibers were optimized to create the desired pore morphology and the necessary mechanical stability. To induce ordered micelle assembly in the doped solution, we identified an ideal solvent mixture as confirmed by small-angle X-ray scattering. We then reduced p-nitrophenol with a gold-loaded fiber to confirm the catalytic performance of the membranes.
The Anatomy of AP1000 Mono-Block Low Pressure Rotor Forging
NASA Astrophysics Data System (ADS)
Jin, Jia-yu; Rui, Shou-tai; Wang, Qun
AP1000 mono-block low-pressure (LP) rotor forgings for nuclear power stations have the largest ingot weight, the largest diameter, and the most demanding technical requirements. Their manufacture confronts many technical problems, such as composition segregation and inclusion control in the large ingot, core compaction during forging, and control of grain size and mechanical performance. The rotor forging was anatomized to evaluate the manufacturing level of CFHI. This article introduces the anatomical results of this forging. The contents include chemical composition, mechanical properties, inclusions, grain size, and other aspects from the full length and full cross-section of the forging. The fluctuation of mechanical properties, uniformity of microstructure, and purity of chemical composition are emphasized. The results show that the overall performance of this rotor forging is highly satisfactory.
Recovering of images degraded by atmosphere
NASA Astrophysics Data System (ADS)
Lin, Guang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2017-08-01
Remote sensing images are seriously degraded by multiple scattering and bad weather. Through an analysis of the radiative transfer process in the atmosphere, an atmospheric image degradation model is proposed in this paper that considers the influence of atmospheric absorption, multiple scattering, and non-uniform distribution. Based on the proposed model, a novel recovery method is presented to eliminate atmospheric degradation. Mean-shift image segmentation and block-wise deconvolution are used to reduce the time cost while retaining good results. The recovery results indicate that the proposed method can significantly remove atmospheric degradation and effectively improve contrast compared with other removal methods. The results also illustrate that our method is suitable for various degraded remote sensing images, including images with a large field of view (FOV), images taken in side-glance situations, images degraded by non-uniform atmospheric distributions, and images with various forms of cloud.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J. (Principal Investigator)
1984-01-01
Thematic mapper radiometric characteristics, snow/cloud reflectance, and atmospheric correction are discussed with application to determining the spectral albedo of snow. The geometric characteristics of TM and digital elevation data are examined. The geometric transformations and resampling required to coregister these data are discussed.
A Simulation Model of Issue Processing at Naval Supply Depot Yokosuka, Japan.
1986-03-01
[Fragment of the GPSS simulation listing: VARIABLE statements computing the number of demands received during the workday (AM and PM) and SPLIT/ADVANCE/TRANSFER blocks that spread the requisition flow uniformly throughout the workday.]
Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A
2017-06-30
Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitations. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except for Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
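A generic sketch of the pooled-resampling bootstrap t-test idea follows: both samples are pooled to enforce the null hypothesis of equal means, bootstrap samples of the original sizes are drawn with replacement from the pool, and the p-value is the fraction of bootstrap |t| values at least as large as the observed one. The statistic choice and bootstrap count are illustrative; the published procedure may differ in its details.

```python
import numpy as np
from scipy import stats

def pooled_bootstrap_t_test(x, y, n_boot=10000, rng=None):
    """Two-sample bootstrap t-test with pooled resampling (unequal-variance t)."""
    rng = np.random.default_rng() if rng is None else rng
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    t_obs = abs(stats.ttest_ind(x, y, equal_var=False).statistic)
    pool = np.concatenate([x, y])        # pooling enforces the null of equal means
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(pool, size=x.size, replace=True)
        yb = rng.choice(pool, size=y.size, replace=True)
        count += abs(stats.ttest_ind(xb, yb, equal_var=False).statistic) >= t_obs
    return (count + 1) / (n_boot + 1)    # add-one correction for the p-value
```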
Motion vector field phase-to-amplitude resampling for 4D motion-compensated cone-beam CT
NASA Astrophysics Data System (ADS)
Sauppe, Sebastian; Kuhm, Julian; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc
2018-02-01
We propose a phase-to-amplitude resampling (PTAR) method to reduce motion blurring in motion-compensated (MoCo) 4D cone-beam CT (CBCT) image reconstruction, without increasing the computational complexity of the motion vector field (MVF) estimation approach. PTAR is able to improve the image quality in reconstructed 4D volumes, including both regular and irregular respiration patterns. The PTAR approach starts with a robust phase-gating procedure for the initial MVF estimation and then switches to a phase-adapted amplitude gating method. The switch implies an MVF-resampling, which makes them amplitude-specific. PTAR ensures that the MVFs, which have been estimated on phase-gated reconstructions, are still valid for all amplitude-gated reconstructions. To validate the method, we use an artificially deformed clinical CT scan with a realistic breathing pattern and several patient data sets acquired with a TrueBeamTM integrated imaging system (Varian Medical Systems, Palo Alto, CA, USA). Motion blurring, which still occurs around the area of the diaphragm or at small vessels above the diaphragm in artifact-specific cyclic motion compensation (acMoCo) images based on phase-gating, is significantly reduced by PTAR. Also, small lung structures appear sharper in the images. This is demonstrated both for simulated and real patient data. A quantification of the sharpness of the diaphragm confirms these findings. PTAR improves the image quality of 4D MoCo reconstructions compared to conventional phase-gated MoCo images, in particular for irregular breathing patterns. Thus, PTAR increases the robustness of MoCo reconstructions for CBCT. Because PTAR does not require any additional steps for the MVF estimation, it is computationally efficient. Our method is not restricted to CBCT but could rather be applied to other image modalities.
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
NASA Astrophysics Data System (ADS)
Bou-Fakhreddine, Bassam; Mougharbel, Imad; Faye, Alain; Abou Chakra, Sara; Pollet, Yann
2018-03-01
Accurate daily river flow forecasts are essential in many applications of water resources such as hydropower operation, agricultural planning, and flood control. This paper presents a forecasting approach to deal with a newly addressed situation where hydrological data exist for a period longer than that of meteorological data (measurement asymmetry). One of the potential solutions to the measurement asymmetry issue is data re-sampling: either only the hydrological data, or only the balanced part of the hydro-meteorological data set, is considered during the forecasting process. However, the main disadvantage is that we may lose potentially relevant information from the left-out data. In this research, the key output is a Two-Phase Constructive Fuzzy inference hybrid model that is implemented over the non-re-sampled data. The introduced modeling approach must be capable of exploiting the available data efficiently, with higher prediction efficiency relative to a Constructive Fuzzy model trained over the re-sampled data set. The study was applied to the Litani River in the Bekaa Valley, Lebanon, using 4 years of daily rainfall and 24 years of daily river flow measurements. A Constructive Fuzzy System Model (C-FSM) and a Two-Phase Constructive Fuzzy System Model (TPC-FSM) are trained. Upon validation, the second model showed competitive performance and accuracy, with the ability to preserve higher day-to-day variability for 1, 3, and 6 days ahead. For the longest lead period, the C-FSM and TPC-FSM were able to explain 84.6% and 86.5%, respectively, of the actual river flow variation. Overall, the results indicate that the TPC-FSM model provides a better tool for capturing extreme flows in streamflow prediction.
Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network
NASA Astrophysics Data System (ADS)
Zhang, J.; Zhang, J.; Zhao, Z.
2018-04-01
The imaging principle of Polarimetric Synthetic Aperture Radar (POLSAR) means that image quality is affected by speckle noise, so the recognition accuracy of traditional image classification methods is reduced by this interference. Since their introduction, deep convolutional neural networks have had a strong impact on traditional image processing methods and have brought the field of computer vision to a new stage, owing to their strong ability to learn deep features and to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface cover types using deep learning. We fused fully polarimetric SAR features at different scales into RGB images, iteratively trained a GoogLeNet convolutional neural network model on them, and then used the trained model to classify a validation dataset. First, referring to optical imagery, we labelled the surface cover types of a GF-3 POLSAR image with 8 m resolution and then collected samples for the different categories. To meet the GoogLeNet requirement of 256 × 256 pixel input images, and taking into account the limited resolution of the SAR data, the original image is pre-processed by resampling. POLSAR image slice samples at different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained with the polarimetric SAR images resampled to 2 m is 94.89%, and that with the images resampled to 1 m is 92.65%.
Jollymore, Ashlee; Johnson, Mark S; Hawthorne, Iain
2012-01-01
Organic material, including total and dissolved organic carbon (DOC), is ubiquitous within aquatic ecosystems, playing a variety of important and diverse biogeochemical and ecological roles. Determining how land-use changes affect DOC concentrations and bioavailability within aquatic ecosystems is an important means of evaluating the effects on ecological productivity and biogeochemical cycling. This paper presents a methodology case study looking at the deployment of a submersible UV-Vis absorbance spectrophotometer (UV-Vis spectro::lyzer model, s::can, Vienna, Austria) to determine stream organic carbon dynamics within a headwater catchment located near Campbell River (British Columbia, Canada). Field-based absorbance measurements of DOC were made before and after forest harvest, highlighting the advantages of high temporal resolution compared to traditional grab sampling and laboratory measurements. Details of remote deployment are described. High-frequency DOC data is explored by resampling the 30 min time series with a range of resampling time intervals (from daily to weekly time steps). DOC export was calculated for three months from the post-harvest data and resampled time series, showing that sampling frequency has a profound effect on total DOC export. DOC exports derived from weekly measurements were found to underestimate export by as much as 30% compared to DOC export calculated from high-frequency data. Additionally, the importance of the ability to remotely monitor the system through a recently deployed wireless connection is emphasized by examining causes of prior data losses, and how such losses may be prevented through the ability to react when environmental or power disturbances cause system interruption and data loss.
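To illustrate the effect of sampling frequency on computed export, the sketch below integrates a DOC flux (concentration times discharge) over a regular high-frequency record and over the same record degraded to weekly grab samples held constant between visits; all series, units, and the three-month window are synthetic and illustrative.

```python
import numpy as np
import pandas as pd

def doc_export_kg(doc_mg_l, discharge_m3_s):
    """Cumulative DOC export (kg) from paired concentration and discharge series
    sharing the same regular DatetimeIndex. 1 mg/L = 1 g/m3, so mg/L * m3/s
    gives g/s; integrate over each time step and convert to kg."""
    dt_s = (doc_mg_l.index[1] - doc_mg_l.index[0]).total_seconds()
    flux_g_s = doc_mg_l * discharge_m3_s
    return float((flux_g_s * dt_s).sum() / 1000.0)

# Three months of synthetic 30-min DOC and discharge data.
idx = pd.date_range("2010-06-01", periods=3 * 30 * 48, freq="30min")
doc = pd.Series(3 + np.random.default_rng(1).gamma(1.0, 1.0, idx.size), index=idx)
q = pd.Series(0.05 + 0.02 * np.sin(np.arange(idx.size) / 200.0) ** 2, index=idx)

full_export = doc_export_kg(doc, q)
# Weekly grab samples, held constant until the next sample (forward fill).
weekly_export = doc_export_kg(doc.resample("7D").first().reindex(idx, method="ffill"), q)
print(full_export, weekly_export)
```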
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.
Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai
2014-12-18
A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
Ozçift, Akin
2011-05-01
Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification performance point of view. In general, multiclass datasets with an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. The cardiac arrhythmia dataset has multiple classes with small sample sizes and is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmia, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is a very high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography, and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
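A minimal sketch of a resampling-based training strategy of this general kind follows: classes are balanced by simple random resampling with replacement before fitting a Random Forest (scikit-learn is assumed). It is a generic illustration, not necessarily the exact sampling scheme used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def balance_by_random_oversampling(X, y, rng=None):
    """Randomly resample each class with replacement up to the size of the
    largest class, returning a balanced training set."""
    rng = np.random.default_rng() if rng is None else rng
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_max, replace=True) for c in classes
    ])
    return X[idx], y[idx]

# Usage sketch (X, y assumed to be the feature matrix and labels after
# correlation-based feature selection):
# Xb, yb = balance_by_random_oversampling(X, y)
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xb, yb)
```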
NASA Astrophysics Data System (ADS)
He, Chuansong; Dong, Shuwen; Chen, Xuanhua; Santosh, M.; Li, Qiusheng
2014-01-01
The Qinling-Tongbai-Hong'an-Dabie-Sulu orogenic belt records the tectonic history of Paleozoic convergence between the South China and North China Blocks. In this study, the distribution of crustal thickness and the P- to S-wave velocity ratio (Vp/Vs) is obtained by using the H-k stacking technique for the Dabie-Sulu belt in central China. Our results show marked differences in crustal structure between the Dabie and Sulu segments of the ultrahigh-pressure (UHP) orogen. The lower crust in the Dabie orogenic belt is dominantly of felsic-intermediate composition, whereas the crust beneath the Sulu segment is largely intermediate-mafic. The crust of the Dabie orogenic belt is thicker by ca. 3-5 km than that of the surrounding region, with the presence of an 'orogenic root'. The crustal thickness is nearly uniform in the Dabie orogenic belt with a generally smooth crust-mantle boundary. A symmetrically thickened crust, in the absence of any deep structural features similar to those of the Yangtze block, provides no supporting evidence for the proposed northward subduction of the Yangtze continental block beneath the North China Block. We propose that the collision between the Yangtze and North China Blocks and extrusion caused crustal shortening and thickening, as well as delamination of the lower crust, resulting in asthenospheric upwelling and lower crustal UHP metamorphism along the Dabie Orogen. Our results also reveal the presence of a SE- to NW-dipping Moho in the North China Block (beneath the Trans-North China Orogen and Eastern Block), suggesting the fossil architecture of the northwestward subduction of the Kula plate.
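The H-k stacking technique referred to above can be sketched as a grid search over crustal thickness H and Vp/Vs ratio k, stacking receiver-function amplitudes at the predicted delay times of the Ps conversion and its crustal multiples (Zhu & Kanamori-style). The default grids, phase weights, average crustal Vp, and single-receiver-function input below are illustrative assumptions.

```python
import numpy as np

def hk_stack(rf, t0, dt, p, vp=6.3,
             H=np.arange(25.0, 55.0, 0.25), k=np.arange(1.6, 2.0, 0.01),
             w=(0.6, 0.3, 0.1)):
    """H-kappa stacking of a single radial receiver function.

    `rf` is sampled every `dt` seconds starting at time `t0`, `p` is the ray
    parameter (s/km), and `vp` an assumed average crustal P velocity (km/s).
    For each trial thickness H (km) and Vp/Vs ratio k, the amplitudes at the
    predicted Ps, PpPs, and PpSs+PsPs delay times are stacked with weights `w`.
    Returns the best-fitting H, k, and the full stack grid."""
    eta_p = np.sqrt(1.0 / vp**2 - p**2)
    stack = np.zeros((H.size, k.size))

    def amp(t):  # nearest-sample amplitude at delay time t (array-valued)
        i = np.clip(np.round((t - t0) / dt).astype(int), 0, rf.size - 1)
        return rf[i]

    for j, kk in enumerate(k):
        eta_s = np.sqrt((kk / vp)**2 - p**2)      # Vs = vp / kk
        t_ps = H * (eta_s - eta_p)
        t_ppps = H * (eta_s + eta_p)
        t_ppss = 2.0 * H * eta_s
        stack[:, j] = w[0] * amp(t_ps) + w[1] * amp(t_ppps) - w[2] * amp(t_ppss)

    i_best = np.unravel_index(np.argmax(stack), stack.shape)
    return H[i_best[0]], k[i_best[1]], stack
```

In practice the stacks from many receiver functions (different events and ray parameters) are summed before picking the maximum.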
Yoshida, Eiji; Tashima, Hideaki; Inadama, Naoko; Nishikido, Fumihiko; Moriya, Takahiro; Omura, Tomohide; Watanabe, Mitsuo; Murayama, Hideo; Yamaya, Taiga
2013-01-01
The X'tal cube is a depth-of-interaction (DOI)-PET detector which is aimed at obtaining isotropic resolution by effective readout of scintillation photons from the six sides of a crystal block. The X'tal cube is composed of the 3D crystal block with isotropic resolution and arrays of multi-pixel photon counters (MPPCs). In this study, to fabricate the 3D crystal block efficiently and precisely, we applied a sub-surface laser engraving (SSLE) technique to a monolithic crystal block instead of gluing segmented small crystals. The SSLE technique provided micro-crack walls which carve a groove into a monolithic scintillator block. Using the fabricated X'tal cube, we evaluated its intrinsic spatial resolution to show a proof of concept of isotropic resolution. The 3D grids of 2 mm pitch were fabricated into an 18 × 18 × 18 mm(3) monolithic lutetium yttrium orthosilicate (LYSO) crystal by the SSLE technique. 4 × 4 MPPCs were optically coupled to each surface of the crystal block. The X'tal cube was uniformly irradiated by (22)Na gamma rays, and all of the 3D grids on the 3D position histogram were separated clearly by an Anger-type calculation from the 96-channel MPPC signals. Response functions of the X'tal cube were measured by scanning with a (22)Na point source. The gamma-ray beam with a 1.0 mm slit was scanned in 0.25 mm steps by positioning of the X'tal cube at vertical and 45° incident angles. The average FWHM resolution at both incident angles was 2.1 mm. Therefore, we confirmed the isotropic spatial resolution performance of the X'tal cube.
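The "Anger-type calculation" used to build the 3D position histogram is, in essence, a signal-weighted centroid of the photosensor positions; a minimal sketch is given below, with the sensor-geometry array left as an assumed input.

```python
import numpy as np

def anger_position(signals, sensor_xyz):
    """Anger-type (centroid) interaction-position estimate from photosensor
    signals. `signals` holds the per-event amplitudes (e.g. 96 MPPC channels)
    and `sensor_xyz` the corresponding sensor centre coordinates (N x 3),
    here spread over the six faces of the crystal cube. Real calibrations map
    these centroids onto the 3D grid of engraved segments via a position
    histogram."""
    signals = np.asarray(signals, dtype=float)
    sensor_xyz = np.asarray(sensor_xyz, dtype=float)
    return (signals[:, None] * sensor_xyz).sum(axis=0) / signals.sum()
```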
NASA Astrophysics Data System (ADS)
Römer, Wolfgang
2008-08-01
In southern São Paulo the Serra do Mar is characterized by three distinct terrain types: 1) highly dissected areas with closely spaced ridges and accordant summit heights; 2) multiconvex hills; and 3) terrains with highly elevated watershed areas, irregular summit heights, and locally subdued relief. The development of this landscape is considered to be the result of the Cenozoic block-faulting and of the influences that are exerted by the differing lithological and structural setting of block-faulted compartments on weathering and erosion processes. In areas characterized by pronounced accordant summits the close coincidence between hillslope angle and the angle of limiting stability against landsliding points to a close adjustment of hillslope gradients and the mechanical properties of the regolith. The relative height of the hillslopes is functionally related to the spacing of the valleys and the gradient of the hillslopes. In areas with a regular spacing of v-shaped valleys and uniform rocks, this leads to the intersection of valley-side slopes in summits and ridges at a certain elevation. This elevation is determined by the length and steepness of the valley-side slopes. Therefore, the heights of the summits are geometrically constrained and are likely to indicate the upper limit of summit heights or an "upper denudation level" that is adjusted by hillslope processes to the incising streams. Accordant summit heights of this type are poor indicators of formerly more extensive denudation surfaces as it is also likely that they are a result of the long-term adjustment of hillslopes to river incision. The steep mountain flanks of block-faulted compartments on the other hand, comprise regolith-covered hillslopes that are closely adjusted to the maximum stable gradient as well as rock-slopes that are controlled by the rock-mass strength. Their summits are usually not accommodated into uniform summit levels. Highly elevated watershed areas exhibiting a subdued relief are detached from the base level response. On granitoid rocks these areas are often characterized by the rocky hills and domal rock outcrops. However, differences in the elevation of interfluves and summits between rocks of differing resistance and in the elevation of lithologically distinct individual fault-blocks imply that long-term weathering and erosion has transformed and lowered these landscapes. Therefore, these areas cannot be interpreted as a remnant of a pre-uplift topography and it appears to be unlikely that the height of the summits correlates with formerly more widespread planation surfaces in the far hinterland. The studies indicate that concepts such as the parallel retreat of hillslopes cannot account for the observed differences in the landscape. It is suggested that the Serra do Mar is consumed from the Atlantic and the inland side by spatially non-uniform developmental states. These states are determined by local differences in the coupling and distance to the regional base level and sea-level or are due to lithological and structural controls between and within the block-faulted compartments.
SAMPLE SIZE FOR SEASONAL MEAN CONCENTRATION, DEPOSITION VELOCITY AND DEPOSITION: A RESAMPLING STUDY
Methodologies are described to assign confidence statements to seasonal means of concentration (C), deposition velocity (V J, and deposition categorized by species/parameters, sites, and seasons in the presence of missing data. Estimators of seasonal means with missing weekly dat...
MISR Level 1 Near Real Time Products
Atmospheric Science Data Center
2016-10-31
Level 1 Near Real Time The MISR Near Real Time Level 1 data products ... km MISR swath and projected onto a Space-Oblique Mercator (SOM) map grid. The Ellipsoid-projected and Terrain-projected top-of-atmosphere (TOA) radiance products provide measurements respectively resampled onto the ...
Using and Evaluating Resampling Simulations in SPSS and Excel.
ERIC Educational Resources Information Center
Smith, Brad
2003-01-01
Describes and evaluates three computer-assisted simulations used with Statistical Package for the Social Sciences (SPSS) and Microsoft Excel. Designed the simulations to reinforce and enhance student understanding of sampling distributions, confidence intervals, and significance tests. Reports evaluations revealed improved student comprehension of…
Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian
2015-07-01
In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.
Resampling approach for anomalous change detection
NASA Astrophysics Data System (ADS)
Theiler, James; Perkins, Simon
2007-04-01
We investigate the problem of identifying pixels in pairs of co-registered images that correspond to real changes on the ground. Changes that are due to environmental differences (illumination, atmospheric distortion, etc.) or sensor differences (focus, contrast, etc.) will be widespread throughout the image, and the aim is to avoid these changes in favor of changes that occur in only one or a few pixels. Formal outlier detection schemes (such as the one-class support vector machine) can identify rare occurrences, but will be confounded by pixels that are "equally rare" in both images: they may be anomalous, but they are not changes. We describe a resampling scheme we have developed that formally addresses both of these issues, and reduces the problem to a binary classification, a problem for which a large variety of machine learning tools have been developed. In principle, the effects of misregistration will manifest themselves as pervasive changes, and our method will be robust against them - but in practice, misregistration remains a serious issue.
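One plausible reading of the resampling reduction to binary classification is sketched below: actual co-registered pixel pairs (samples from the joint distribution) are contrasted with pairs in which one image's pixels are randomly permuted (samples from the product of marginals), and pixels whose real pair resembles the permuted class are flagged as anomalous. The feature construction, classifier, and data here are illustrative assumptions, not the authors' exact scheme.

```python
# A hedged sketch of the resampling reduction to binary classification:
# real co-registered pixel pairs (joint distribution) vs pairs in which one
# image's pixels are randomly permuted (product of marginals). Pixels whose
# feature pair looks unlike the joint class are candidate anomalous changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10000                          # number of pixels (flattened images)
img_x = rng.normal(size=(n, 3))    # bands of image 1 (simulated)
img_y = img_x + 0.1 * rng.normal(size=(n, 3))  # image 2, mostly unchanged

pairs_joint = np.hstack([img_x, img_y])                         # class 0: real pairs
pairs_scrambled = np.hstack([img_x, img_y[rng.permutation(n)]]) # class 1: resampled

X = np.vstack([pairs_joint, pairs_scrambled])
lab = np.r_[np.zeros(n), np.ones(n)]
clf = LogisticRegression(max_iter=1000).fit(X, lab)

# anomalousness score per pixel: how strongly the real pair resembles the
# "scrambled" (independent) class
score = clf.predict_proba(pairs_joint)[:, 1]
print("most anomalous pixels:", np.argsort(score)[-5:])
```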
Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A
2018-01-01
Research shows grab sampling is inadequate for evaluating military ranges contaminated with energetics because of their highly heterogeneous distribution. Similar studies assessing the heterogeneous distribution of metals at small-arms ranges (SAR) are lacking. To address this, we evaluated whether grab sampling provides appropriate data for performing risk analysis at metal-contaminated SARs characterized with 30-48 grab samples. We evaluated the extractable metal content of Cu, Pb, Sb, and Zn in the field data using a Monte Carlo random resampling with replacement (bootstrapping) simulation approach. Results indicate the 95% confidence interval of the mean for Pb (432 mg/kg) at one site was 200-700 mg/kg, with a data range of 5-4500 mg/kg. Considering that the U.S. Environmental Protection Agency screening level for lead is 400 mg/kg, the necessity of cleanup at this site is unclear. Resampling based on populations of 7 and 15 samples, sample sizes more realistic for the area, yielded high false-negative rates.
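A minimal sketch of the resampling-with-replacement (bootstrap) estimate of a 95% confidence interval for a site mean, and of the false-negative rate under a small sample size, is given below; the simulated Pb concentrations are stand-ins for the field data.

```python
# Minimal bootstrap (resampling with replacement) sketch for the 95% CI of a
# mean concentration and for a small-sample false-negative check; the lognormal
# values are simulated stand-ins, not the field data.
import numpy as np

rng = np.random.default_rng(42)
pb = rng.lognormal(mean=5.5, sigma=1.2, size=40)   # stand-in Pb grab samples (mg/kg)

def boot_ci_mean(x, n_boot=10000, alpha=0.05, rng=rng):
    means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                      for _ in range(n_boot)])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = boot_ci_mean(pb)
print(f"mean = {pb.mean():.0f} mg/kg, 95% CI = ({lo:.0f}, {hi:.0f})")

# fraction of 7-sample means falling below a 400 mg/kg screening level,
# even though the simulated site mean sits above it
small = np.array([rng.choice(pb, size=7, replace=True).mean() < 400
                  for _ in range(10000)])
print("fraction of 7-sample means below 400 mg/kg:", small.mean())
```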
Measures of precision for dissimilarity-based multivariate analysis of ecological communities.
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
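A hedged computational reading of the MultSE idea is sketched below: a pseudo multivariate variance formed from sums of squared dissimilarities, its square root scaled by sample size, and a simple bootstrap over increasing n. The formulas and the Bray-Curtis example data are assumptions for illustration; the paper's exact definitions, double-resampling scheme, and provided R functions should be consulted for actual use.

```python
# A hedged sketch of a MultSE-style quantity: a pseudo multivariate variance
# from sums of squared dissimilarities, and its square root scaled by sample
# size, bootstrapped over increasing n. Formulas are a plausible reading of
# "based on sums of squared dissimilarities", not the paper's exact R code.
import numpy as np
from scipy.spatial.distance import pdist

def mult_se(data, metric="braycurtis"):
    n = data.shape[0]
    ss = (pdist(data, metric=metric) ** 2).sum() / n   # total sum of squares
    v = ss / (n - 1)                                   # pseudo multivariate variance
    return np.sqrt(v / n)

rng = np.random.default_rng(0)
community = rng.poisson(3.0, size=(60, 20)).astype(float)  # sites x species counts

for n in (5, 10, 20, 40):
    vals = [mult_se(community[rng.choice(60, n, replace=True)]) for _ in range(500)]
    print(n, round(np.mean(vals), 3), round(np.std(vals), 3))
```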
Uncertainties in the cluster-cluster correlation function
NASA Astrophysics Data System (ADS)
Ling, E. N.; Frenk, C. S.; Barrow, J. D.
1986-12-01
The bootstrap resampling technique is applied to estimate sampling errors and significance levels of the two-point correlation functions determined for a subset of the CfA redshift survey of galaxies and a redshift sample of 104 Abell clusters. The angular correlation function for a sample of 1664 Abell clusters is also calculated. The standard errors in xi(r) for the Abell data are found to be considerably larger than quoted 'Poisson errors'. The best estimate for the ratio of the correlation length of Abell clusters (richness class R greater than or equal to 1, distance class D less than or equal to 4) to that of CfA galaxies is 4.2 +1.4/-1.0 (68th-percentile error). The enhancement of cluster clustering over galaxy clustering is statistically significant in the presence of resampling errors. The uncertainties found do not include the effects of possible systematic biases in the galaxy and cluster catalogs and could be regarded as lower bounds on the true uncertainty range.
Survival estimation and the effects of dependency among animals
Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.
1995-01-01
Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
Confidence limit calculation for antidotal potency ratio derived from lethal dose 50
Manage, Ananda; Petrikovics, Ilona
2013-01-01
AIM: To describe confidence interval calculation for antidotal potency ratios using the bootstrap method. METHODS: We can easily adapt the nonparametric bootstrap method, which was invented by Efron, to construct confidence intervals in situations like this. The bootstrap method is a resampling method in which the bootstrap samples are obtained by resampling from the original sample. RESULTS: The described confidence interval calculation using the bootstrap method does not require the sampling distribution of the antidotal potency ratio. This can serve as a substantial help for toxicologists, who are directed to employ the Dixon up-and-down method with a lower number of animals to determine lethal dose 50 values for characterizing the investigated toxic molecules and, eventually, the antidotal protection offered by the test antidotal systems. CONCLUSION: The described method can serve as a useful tool in various other applications. The simplicity of the method makes it easy to do the calculation using most programming software packages. PMID:25237618
A program for handling map projections of small-scale geospatial raster data
Finn, Michael P.; Steinwand, Daniel R.; Trent, Jason R.; Buehler, Robert A.; Mattli, David M.; Yamamoto, Kristina H.
2012-01-01
Scientists routinely accomplish small-scale geospatial modeling using raster datasets of global extent. Such use often requires the projection of global raster datasets onto a map or the reprojection from a given map projection associated with a dataset. The distortion characteristics of these projection transformations can have significant effects on modeling results. Distortions associated with the reprojection of global data are generally greater than distortions associated with reprojections of larger-scale, localized areas. The accuracy of areas in projected raster datasets of global extent is dependent on spatial resolution. To address these problems of projection and the associated resampling that accompanies it, methods for framing the transformation space, direct point-to-point transformations rather than gridded transformation spaces, a solution to the wrap-around problem, and an approach to alternative resampling methods are presented. The implementations of these methods are provided in an open-source software package called MapImage (or mapIMG, for short), which is designed to function on a variety of computer architectures.
Dislocation model for aseismic fault slip in the transverse ranges of Southern California
NASA Technical Reports Server (NTRS)
Cheng, A.; Jackson, D. D.; Matsuura, M.
1985-01-01
Geodetic data at a plate boundary can reveal the pattern of subsurface displacements that accompany plate motion. These displacements are modelled as the sum of rigid block motion and the elastic effects of frictional interaction between blocks. The frictional interactions are represented by uniform dislocation on each of several rectangular fault patches. The block velocities and fault parameters are then estimated from geodetic data. Bayesian inversion procedure employs prior estimates based on geological and seismological data. The method is applied to the Transverse Ranges, using prior geological and seismological data and geodetic data from the USGS trilateration networks. Geodetic data imply a displacement rate of about 20 mm/yr across the San Andreas Fault, while the geologic estimates exceed 30 mm/yr. The prior model and the final estimates both imply about 10 mm/yr crustal shortening normal to the trend of the San Andreas Fault. Aseismic fault motion is a major contributor to plate motion. The geodetic data can help to identify faults that are suffering rapid stress accumulation; in the Transverse Ranges those faults are the San Andreas and the Santa Susana.
Block and Gradient Copoly(2-oxazoline) Micelles: Strikingly Different on the Inside.
Filippov, Sergey K; Verbraeken, Bart; Konarev, Petr V; Svergun, Dmitri I; Angelov, Borislav; Vishnevetskaya, Natalya S; Papadakis, Christine M; Rogers, Sarah; Radulescu, Aurel; Courtin, Tim; Martins, José C; Starovoytova, Larisa; Hruby, Martin; Stepanek, Petr; Kravchenko, Vitaly S; Potemkin, Igor I; Hoogenboom, Richard
2017-08-17
Herein, we provide a direct proof for differences in the micellar structure of amphiphilic diblock and gradient copolymers, thereby unambiguously demonstrating the influence of monomer distribution along the polymer chains on the micellization behavior. The internal structure of amphiphilic block and gradient co poly(2-oxazolines) based on the hydrophilic poly(2-methyl-2-oxazoline) (PMeOx) and the hydrophobic poly(2-phenyl-2-oxazoline) (PPhOx) was studied in water and water-ethanol mixtures by small-angle X-ray scattering (SAXS), small-angle neutron scattering (SANS), static and dynamic light scattering (SLS/DLS), and 1 H NMR spectroscopy. Contrast matching SANS experiments revealed that block copolymers form micelles with a uniform density profile of the core. In contrast to popular assumption, the outer part of the core of the gradient copolymer micelles has a distinctly higher density than the middle of the core. We attribute the latter finding to back-folding of chains resulting from hydrophilic-hydrophobic interactions, leading to a new type of micelles that we refer to as micelles with a "bitterball-core" structure.
Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal
Modern hardware contains parallel execution resources that are well suited for data parallelism (vector units) and task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task- and data-parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently on multicores. Our framework allows us to define schedulers that can dynamically select between executing task blocks on vector units or on multicores. We show that these schedulers are asymptotically optimal, and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel programs into task-block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.
Accuracy Validation of Large-scale Block Adjustment without Control of ZY3 Images over China
NASA Astrophysics Data System (ADS)
Yang, Bo
2016-06-01
Mapping from optical satellite images without ground control is one of the goals of photogrammetry. Using 8802 three-linear-array stereo scenes (26406 images in total) of ZY3 over China, we propose a large-scale block adjustment method for optical satellite images without ground control, based on the RPC model, in which a single image is treated as the adjustment unit to be organized. To overcome the block distortion caused by unstable adjustment without ground control and the excessive accumulation of errors, we use virtual control points created from the initial RPC model of the images as weighted observations and add them to the adjustment model to refine the adjustment. We use 8000 uniformly distributed high-precision check points to evaluate the geometric accuracy of the DOM (Digital Ortho Model) and DSM (Digital Surface Model) products, for which the standard deviations in planimetry and elevation are 3.6 m and 4.2 m, respectively. The geometric accuracy is consistent across the whole block and the mosaic accuracy of neighboring DOMs is within a pixel, so seamless mosaicking is possible. This method achieves the goal of mapping without ground control at an accuracy better than 5 m for the whole of China from ZY3 satellite images.
3D Monte-Carlo study of toroidally discontinuous limiter SOL configurations of Aditya tokamak
NASA Astrophysics Data System (ADS)
Sahoo, Bibhu Prasad; Sharma, Devendra; Jha, Ratneshwar; Feng, Yühe
2017-08-01
The plasma-neutral transport in the scrape-off layer (SOL) region formed by toroidally discontinuous limiters deviates from the usual uniform-SOL approximations when 3D effects caused by limiter discreteness begin to dominate. In an upgraded version of the Aditya tokamak, originally having a toroidally localized poloidal ring-like limiter, the newer outboard block and inboard belt limiters are expected to have smaller connection lengths and a multiple-fold toroidal periodicity. The characteristics of plasma discharges may accordingly vary from the original observations of large diffusivity, and a net improvement and stability of the discharges are desired. The estimates related to 3D effects in the ring-limiter plasma transport are also expected to be modified and are updated by predictive simulations of transport in the new block-limiter configuration. A comparison between the ring-limiter results and those from new simulations with the block-limiter SOL shows that, for grids produced using the same core plasma equilibrium, the modified SOL plasma flows and flux components have enhanced poloidal periodicity in the block-limiter case. These SOL modifications result in a reduced net recycling for equivalent edge density values. Predictions are also made about the relative level of the diffusive transport and its impact on the factors limiting the operational regime.
A Self-organized MIMO-OFDM-based Cellular Network
NASA Astrophysics Data System (ADS)
Grünheid, Rainer; Fellenberg, Christian
2012-05-01
This paper presents a system proposal for a self-organized cellular network, which is based on the MIMO-OFDM transmission technique. Multicarrier transmission, combined with appropriate beamforming concepts, yields high bandwidth-efficiency and shows a robust behavior in multipath radio channels. Moreover, it provides a fine and tuneable granularity of space-time-frequency resources. Using a TDD approach and interference measurements in each cell, the Base Stations (BSs) decide autonomously which of the space-time-frequency resource blocks are allocated to the Mobile Terminals (MTs) in the cell, in order to fulfil certain Quality of Service (QoS) parameters. Since a synchronized Single Frequency Network (SFN), i.e., a re-use factor of one is applied, the resource blocks can be shared adaptively and flexibly among the cells, which is very advantageous in the case of a non-uniform MT distribution.
(E,Z)-3-(3',5'-Dimethoxy-4'-hydroxy-benzylidene)-2-indolinone blocks mast cell degranulation.
Kiefer, S; Mertz, A C; Koryakina, A; Hamburger, M; Küenzi, P
2010-05-12
(E,Z)-3-(3',5'-Dimethoxy-4'-hydroxy-benzylidene)-2-indolinone (indolinone) is an alkaloid that has been identified as a pharmacologically active compound in extracts of the traditional anti-inflammatory herb Isatis tinctoria. Indolinone has been shown to inhibit compound 48/80-induced mast cell degranulation in vitro. Application of indolinone to bone marrow-derived mast cells showed that it was uniformly distributed in the cytoplasm and that cellular uptake was terminated within minutes. Pre-treatment of IgE-sensitized mast cells with 100 nM indolinone rendered them insensitive to FcεRI-receptor-dependent degranulation. However, upstream signalling induced by antigen, such as activation of PI3-K and MAPK, remained unaffected. We conclude that indolinone blocks mast cell degranulation at the level of granule exocytosis with an IC50 of 54 nM.
1984-12-01
"Miscellaneous" Account Category within the DoD Instruction 7220.29-H Depot Level Maintenance Cost Accounting System, by Steven Eugene Lehr, December 1984. [Remaining text is report-documentation-page residue; recoverable keywords: Uniform Cost Accounting System, DoD.]
SPRUCE S1 Bog Vegetation Survey and Peat Depth Data: 2009
Hanson, P. J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A
2009-12-31
This data set reports the results of a field survey of the S1 Bog to characterize the vegetation and to determine peat depth. The survey was conducted on September 21 and 22, 2009. The initial survey of vegetation and peat depth characteristics of the target bog was conducted to evaluate the logical locations for installing replicated experimental blocks for SPRUCE. The goal was to identify multiple locations of uniform aboveground vegetation and belowground peat depth for positioning experimental units within the bog.
Parallel Readout of Optical Disks
1992-08-01
r(x,y) is the apparent reflectance function of the disk surface including the phase error. The illuminating optics should be chosen so that Er(x,y...of the light uniformly illuminating the chip, Ap = 474 μm² is the area of the photodiode, and rs is the time required to switch the synapses. Figure...reference beam that is incident from the right. Once the hologram is recorded the input is blocked and the disk is illuminated. Lens LI takes the
Perceptual Optimization of DCT Color Quantization Matrices
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Statler, Irving C. (Technical Monitor)
1994-01-01
Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
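As a concrete illustration of block-DCT coding with uniform quantization by a quantization matrix, a minimal sketch follows; the flat quantization matrix used here is illustrative and is not one of the perceptually optimized matrices the paper derives.

```python
# Minimal sketch of 8x8 block DCT with uniform quantization by a quantization
# matrix; the constant matrix here is illustrative, not a perceptually
# optimized one.
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, Q):
    coeffs = dctn(block, type=2, norm="ortho")
    return np.round(coeffs / Q)            # uniform quantization

def decode_block(q, Q):
    return idctn(q * Q, type=2, norm="ortho")

rng = np.random.default_rng(0)
image_block = rng.integers(0, 256, size=(8, 8)).astype(float)
Q = np.full((8, 8), 16.0)                  # flat quantization matrix (illustrative)

recon = decode_block(encode_block(image_block, Q), Q)
print("max abs reconstruction error:", np.abs(recon - image_block).max())
```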
Enzymatically triggered rupture of polymersomes.
Jang, Woo-Sik; Park, Seung Chul; Reed, Ellen H; Dooley, Kevin P; Wheeler, Samuel F; Lee, Daeyeon; Hammer, Daniel A
2016-01-28
Polymersomes are robust vesicles made from amphiphilic block co-polymers. Large populations of uniform giant polymersomes with defined, entrapped species can be made by templating of double-emulsions using microfluidics. In the present study, a series of two enzymatic reactions, one inside and the other outside of the polymersome, were designed to induce rupture of polymersomes. We measured how the kinetics of rupture were affected by altering enzyme concentration. These results suggest that protocells with entrapped enzymes can be engineered to secrete contents on cue.
Reducing the Bias in Blocked Particle Filtering for High Dimensional Systems
2014-07-01
studied in theory and in countless practical applications [8, 13, 7]. In [6] the authors prove that the error can be controlled uniformly in time, thus...phenomenon for a particular case can be found in [17]. In [5, 14] the authors give a precise relation between the dimension of the system and the...Recent studies [3, 4, 15, 16] however suggest that high-dimensional particle filtering may be feasible in particular applications and/or if one is
Superparamagnetic properties of carbon nanotubes filled with NiFe2O4 nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stojak Repa, K.; Israel, D.; Phan, M. H., E-mail: phanm@usf.edu, E-mail: sharihar@usf.edu
2015-05-07
Multi-walled carbon nanotubes (MWCNTs) were successfully synthesized using custom-made 80 nm pore-size alumina templates, and were uniformly filled with nickel ferrite (NFO) nanoparticles of 7.4 ± 1.7 nm diameter using a novel magnetically assisted capillary action method. X-ray diffraction confirmed the inverse spinel phase for the synthesized NFO. Transmission electron microscopy confirms spherical NFO nanoparticles with an average diameter of 7.4 nm inside the MWCNTs. Magnetometry indicates that both NFO and NFO-filled MWCNTs present a blocking temperature around 52 K, with similar superparamagnetic-like behavior and weak dipolar interactions, giving rise to a super-spin-glass-like behavior at low temperatures. These properties, along with the uniformity of the sub-100 nm structures and the possibility of a tunable magnetic response in variable-diameter carbon nanotubes, make them ideal for advanced biomedical and microwave applications.
Liu, Jia; Jiang, Guiyuan; Liu, Ying; Di, Jiancheng; Wang, Yajun; Zhao, Zhen; Sun, Qianyao; Xu, Chunming; Gao, Jinsen; Duan, Aijun; Liu, Jian; Wei, Yuechang; Zhao, Yong; Jiang, Lei
2014-01-01
Zeolite fibers have attracted growing interest for a range of new applications because of their structural particularity while maintaining the intrinsic performances of the building blocks of zeolites. The fabrication of uniform zeolite fibers with tunable hierarchical porosity and further exploration of their catalytic potential are of great importance. Here, we present a versatile and facile method for the fabrication of hierarchical ZSM-5 zeolite fibers with macro-meso-microporosity by coaxial electrospinning. Due to the synergistic integration of the suitable acidity and the hierarchical porosity, high yield of propylene and excellent anti-coking stability were demonstrated on the as-prepared ZSM-5 hollow fibers in the catalytic cracking reaction of iso-butane. This work may also provide good model catalysts with uniform wall thickness and tunable porosity for studying a series of important catalytic reactions. PMID:25450726
Building Intuitions about Statistical Inference Based on Resampling
ERIC Educational Resources Information Center
Watson, Jane; Chance, Beth
2012-01-01
Formal inference, which makes theoretical assumptions about distributions and applies hypothesis testing procedures with null and alternative hypotheses, is notoriously difficult for tertiary students to master. The debate about whether this content should appear in Years 11 and 12 of the "Australian Curriculum: Mathematics" has gone on…
ERIC Educational Resources Information Center
Peterson, Ivars
1991-01-01
A method that enables people to obtain the benefits of statistics and probability theory without the shortcomings of conventional methods because it is free of mathematical formulas and is easy to understand and use is described. A resampling technique called the "bootstrap" is discussed in terms of application and development. (KR)
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
ERIC Educational Resources Information Center
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…
Testing variance components by two jackknife methods
USDA-ARS?s Scientific Manuscript database
The jackknife method, a resampling technique, has been widely used for statistical tests for years. The pseudo-value-based jackknife method (defined as the pseudo jackknife method) is commonly used to reduce the bias of an estimate; however, sometimes it can result in large variation for an estimate a...
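For readers unfamiliar with the pseudo-value formulation mentioned above, a minimal sketch follows: pseudo-values n·θ̂ − (n−1)·θ̂(−i) give a bias-corrected estimate (their mean) and a variance estimate; the statistic used here (a plain sample variance) is only a stand-in for the manuscript's variance components.

```python
# Minimal pseudo-value jackknife sketch: bias-corrected estimate and its
# standard error for a generic statistic (here, the sample variance as a
# stand-in for a variance component).
import numpy as np

def jackknife_pseudo(x, stat):
    n = len(x)
    theta_hat = stat(x)
    theta_loo = np.array([stat(np.delete(x, i)) for i in range(n)])  # leave-one-out
    pseudo = n * theta_hat - (n - 1) * theta_loo                     # pseudo-values
    est = pseudo.mean()                                              # bias-corrected estimate
    se = np.sqrt(pseudo.var(ddof=1) / n)                             # jackknife standard error
    return est, se

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=30)
print(jackknife_pseudo(x, lambda v: v.var(ddof=0)))
```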
Chen, Jun Song; Liang, Yen Nan; Li, Yongmei; Yan, Qingyu; Hu, Xiao
2013-10-23
A facile green method to synthesize uniform nanostructured urchin-like rutile TiO2 is demonstrated. Titanium trichloride was selected as the TiO2 precursor, and a mixed solvent containing H2O and ethylene glycol was used. By using this binary medium, the nucleation and crystal growth of rutile TiO2 can be regulated, giving rise to very uniform urchin-like structures with tailorable sizes. As confirmed by the SEM and TEM analysis, large particles with dense aggregation of needle-like building blocks or small ones with loosely packed subunits could be obtained at different reaction conditions. The as-prepared samples were applied as the anode material for lithium-ion batteries, and they were shown to have superior properties with a high reversible capacity of 140 mA h g⁻¹ at a high current rate of 10 C for up to 300 cycles, which is almost unmatched by other rutile TiO2-based electrodes. A stable capacity of 88 mA h g⁻¹ can also be delivered at an extremely high rate of 50 C, suggesting the great potential of the as-prepared product for high-rate lithium-ion batteries.
Semiclassical evaluation of quantum fidelity
NASA Astrophysics Data System (ADS)
Vanicek, Jiri
2004-03-01
We present a numerically feasible semiclassical method to evaluate quantum fidelity (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform semiclassical expression not only is tractable but gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows a Monte-Carlo evaluation, this uniform expression is accurate at times where there are 10^70 semiclassical contributions. Remarkably, the method also explicitly contains the "building blocks" of analytical theories of recent literature, and thus permits a direct test of approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and thus provide a "defense" of the linear response theory from the famous Van Kampen objection. We point out the potential use of our uniform expression in other areas because it gives a most direct link between the quantum Feynman propagator based on the path integral and the semiclassical Van Vleck propagator based on the sum over classical trajectories. Finally, we test the applicability of our method in integrable and mixed systems.
Zhang, Hui-Ming; Imtiaz, Mohammad S; Laver, Derek R; McCurdy, David W; Offler, Christina E; van Helden, Dirk F; Patrick, John W
2015-03-01
Transfer cell morphology is characterized by a polarized ingrowth wall comprising a uniform wall upon which wall ingrowth papillae develop at right angles into the cytoplasm. The hypothesis that positional information directing construction of wall ingrowth papillae is mediated by Ca²⁺ signals generated by spatiotemporal alterations in cytosolic Ca²⁺ ([Ca²⁺]cyt) of cells trans-differentiating to a transfer cell morphology was tested. This hypothesis was examined using Vicia faba cotyledons. On transferring cotyledons to culture, their adaxial epidermal cells synchronously trans-differentiate to epidermal transfer cells. A polarized and persistent Ca²⁺ signal, generated during epidermal cell trans-differentiation, was found to co-localize with the site of ingrowth wall formation. Dampening Ca²⁺ signal intensity, by withdrawing extracellular Ca²⁺ or blocking Ca²⁺ channel activity, inhibited formation of wall ingrowth papillae. Maintenance of Ca²⁺ signal polarity and persistence depended upon a rapid turnover (minutes) of cytosolic Ca²⁺ by co-operative functioning of plasma membrane Ca²⁺-permeable channels and Ca²⁺-ATPases. Viewed paradermally, and proximal to the cytosol-plasma membrane interface, the Ca²⁺ signal was organized into discrete patches that aligned spatially with clusters of Ca²⁺-permeable channels. Mathematical modelling demonstrated that these patches of cytosolic Ca²⁺ were consistent with inward-directed plumes of elevated [Ca²⁺]cyt. Plume formation depended upon an alternating distribution of Ca²⁺-permeable channels and Ca²⁺-ATPase clusters. On further inward diffusion, the Ca²⁺ plumes coalesced into a uniform Ca²⁺ signal. Blocking or dispersing the Ca²⁺ plumes inhibited deposition of wall ingrowth papillae, while uniform wall formation remained unaltered. A working model envisages that cytosolic Ca²⁺ plumes define the loci at which wall ingrowth papillae are deposited. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology.
Incorporation of ice sheet models into an Earth system model: Focus on methodology of coupling
NASA Astrophysics Data System (ADS)
Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Nevecherja, Artiom
2018-03-01
Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling of an ice sheet model (ICM) to an AOGCM is complicated by essential differences in spatial and temporal scales of cryospheric, atmospheric and oceanic components. To overcome this difficulty, we apply two different approaches for the incorporation of ice sheets into an ESM. Coupling of the Antarctic ice sheet model (AISM) to the AOGCM is accomplished via using procedures of resampling, interpolation and assigning to the AISM grid points annually averaged meanings of air surface temperature and precipitation fields generated by the AOGCM. Surface melting, which takes place mainly on the margins of the Antarctic peninsula and on ice shelves fringing the continent, is currently ignored. AISM returns anomalies of surface topography back to the AOGCM. To couple the Greenland ice sheet model (GrISM) to the AOGCM, we use a simple buffer energy- and water-balance model (EWBM-G) to account for orographically-driven precipitation and other sub-grid AOGCM-generated quantities. The output of the EWBM-G consists of surface mass balance and air surface temperature to force the GrISM, and freshwater run-off to force thermohaline circulation in the oceanic block of the AOGCM. Because of a rather complex coupling procedure of GrIS compared to AIS, the paper mostly focuses on Greenland.
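As an illustration of the resampling/interpolation step used to pass annually averaged AOGCM fields to the ice sheet model grid, the sketch below bilinearly interpolates a coarse temperature field onto a finer grid; the grid sizes, extents, and field names are assumptions for illustration, and the EWBM-G downscaling step is not represented.

```python
# Minimal sketch of the resampling/interpolation step: an annually averaged
# AOGCM field assigned to a finer ice-sheet-model grid by bilinear
# interpolation. Grid sizes, extents, and field names are illustrative only.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# coarse AOGCM grid (illustrative ~2 degree resolution over a region)
lat_c = np.linspace(60.0, 84.0, 13)
lon_c = np.linspace(-75.0, -10.0, 33)
rng = np.random.default_rng(0)
t_surf_annual = rng.normal(-20.0, 5.0, size=(lat_c.size, lon_c.size))  # degC

interp = RegularGridInterpolator((lat_c, lon_c), t_surf_annual,
                                 bounds_error=False, fill_value=None)

# fine ice-sheet-model grid (illustrative)
lat_f = np.linspace(60.0, 84.0, 241)
lon_f = np.linspace(-75.0, -10.0, 651)
LAT, LON = np.meshgrid(lat_f, lon_f, indexing="ij")
t_surf_ism = interp(np.column_stack([LAT.ravel(), LON.ravel()])).reshape(LAT.shape)
print(t_surf_ism.shape)
```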
Three-dimensional image analysis as a tool for embryology
NASA Astrophysics Data System (ADS)
Verweij, Andre
1992-06-01
In the study of cell fate, cell lineage, and morphogenetic transformation it is necessary to obtain 3-D data. Serial sections of glutaraldehyde fixed and glycol methacrylate embedded material provide high resolution data. Clonal spread during germ layer formation in the mouse embryo has been followed by labeling a progenitor epiblast cell with horseradish peroxidase and staining its descendants one or two days later, followed by histological processing. Reconstruction of a 3-D image from histological sections must provide a solution for the alignment problem. As we want to study images at different magnification levels, we have chosen a method in which the sections are aligned under the microscope. Positioning is possible through a translation and a rotation stage. The first step for reconstruction is a coarse alignment on the basis of the moments in a binary, low magnification image of the embedding block. Thereafter, images of higher magnification levels are aligned by optimizing a similarity measure between the images. To analyze, first a global 3-D second order surface is fitted on the image to obtain the orientation of the embryo. The coefficients of this fit are used to normalize the size of the different embryos. Thereafter, the image is resampled with respect to the surface to create a 2-D mapping of the embryo and to guide the segmentation of the different cell layers which make up the embryo.
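As an illustration of the moment-based first step of the alignment, the sketch below estimates a coarse pose (centroid translation and principal-axis rotation) from the second-order moments of a binary image of the embedding block; the implementation and the test mask are illustrative, not the system described above.

```python
# Minimal sketch of moment-based coarse alignment: centroid and principal-axis
# orientation of a binary mask give a translation and rotation estimate.
import numpy as np

def moments_pose(mask):
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)   # principal-axis angle
    return (cx, cy), theta

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 20:80] = True            # elongated blob as a stand-in section outline
centroid, angle = moments_pose(mask)
print(centroid, np.degrees(angle))
```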
NASA Astrophysics Data System (ADS)
Pandey, Gavendra; Sharan, Maithili
2018-01-01
Application of atmospheric dispersion models in air quality analysis requires a proper representation of the vertical and horizontal growth of the plume. For this purpose, various schemes for the parameterization of the dispersion parameters (σ's) are described for both stable and unstable conditions. These schemes differ in (i) the extent of availability of on-site measurements, (ii) formulations developed for other sites, and (iii) empirical relations. The performance of these schemes is evaluated in an earlier developed IIT (Indian Institute of Technology) dispersion model with the data set from single and multiple releases conducted during the Fusion Field Trials, Dugway Proving Ground, Utah, 2007. Qualitative and quantitative evaluation of the relative performance of all the schemes is carried out for both stable and unstable conditions in the light of (i) peak/maximum concentrations and (ii) the overall concentration distribution. The blocked bootstrap resampling technique is adopted to investigate the statistical significance of the differences in performance of the schemes by computing 95% confidence limits on the parameters FB and NMSE. The various analyses, based on selected statistical measures, indicate consistency between the qualitative and quantitative performances of the σ schemes. The scheme based on the standard deviation of wind velocity fluctuations and Lagrangian time scales exhibits a relatively better performance in predicting the peak as well as the lateral spread.
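To make the confidence-limit step concrete, the sketch below applies a block bootstrap to paired observed/predicted concentrations and computes 95% percentile limits for FB and NMSE in their usual air-quality-evaluation forms; the simulated data, block length, and blocking scheme are illustrative assumptions, not the study's actual trial structure.

```python
# Minimal sketch of a blocked bootstrap for 95% confidence limits on FB and
# NMSE from paired observed/predicted concentrations. Block length and the
# simulated data are illustrative; the study's exact blocking is not reproduced.
import numpy as np

def fb(obs, prd):    # fractional bias
    return 2.0 * (obs.mean() - prd.mean()) / (obs.mean() + prd.mean())

def nmse(obs, prd):  # normalized mean square error
    return np.mean((obs - prd) ** 2) / (obs.mean() * prd.mean())

rng = np.random.default_rng(0)
obs = rng.lognormal(0.0, 0.8, size=200)
prd = obs * rng.lognormal(0.0, 0.4, size=200)

block_len = 10
starts = np.arange(0, len(obs) - block_len + 1)
stats = []
for _ in range(2000):
    # resample contiguous blocks of paired values to preserve local correlation
    idx = np.concatenate([np.arange(s, s + block_len)
                          for s in rng.choice(starts, size=len(obs) // block_len)])
    stats.append((fb(obs[idx], prd[idx]), nmse(obs[idx], prd[idx])))
stats = np.array(stats)
print("FB 95% CI:  ", np.percentile(stats[:, 0], [2.5, 97.5]))
print("NMSE 95% CI:", np.percentile(stats[:, 1], [2.5, 97.5]))
```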
Publications - GMC 366 | Alaska Division of Geological & Geophysical
DGGS GMC 366 Publication Details. Title: Makushin Geothermal Project ST-1R Core 2009 re-sampling and analysis: Analytical results for anomalous precious and base metals associated with geothermal systems
Tracking the Gender Pay Gap: A Case Study
ERIC Educational Resources Information Center
Travis, Cheryl B.; Gross, Louis J.; Johnson, Bruce A.
2009-01-01
This article provides a short introduction to standard considerations in the formal study of wages and illustrates the use of multiple regression and resampling simulation approaches in a case study of faculty salaries at one university. Multiple regression is especially beneficial where it provides information on strength of association, specific…
Testing the Difference of Correlated Agreement Coefficients for Statistical Significance
ERIC Educational Resources Information Center
Gwet, Kilem L.
2016-01-01
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
Performance evaluation of DNA copy number segmentation methods.
Pierre-Jean, Morgane; Rigaill, Guillem; Neuvial, Pierre
2015-07-01
A number of bioinformatic or biostatistical methods are available for analyzing DNA copy number profiles measured from microarray or sequencing technologies. In the absence of sufficiently rich gold-standard data sets, the performance of these methods is generally assessed using unrealistic simulation studies, or based on small real data analyses. To make an objective and reproducible performance assessment, we have designed and implemented a framework to generate realistic DNA copy number profiles of cancer samples with known truth. These profiles are generated by resampling publicly available SNP microarray data from genomic regions with known copy-number state. The original data have been extracted from dilution series of tumor cell lines with matched blood samples at several concentrations. Therefore, the signal-to-noise ratio of the generated profiles can be controlled through the (known) percentage of tumor cells in the sample. This article describes this framework and its application to a comparison study between methods for segmenting DNA copy number profiles from SNP microarrays. This study indicates that no single method is uniformly better than all others. It also helps identify the pros and cons of the compared methods as a function of biologically informative parameters, such as the fraction of tumor cells in the sample and the proportion of heterozygous markers. This comparison study may be reproduced using the open source and cross-platform R package jointseg, which implements the proposed data generation and evaluation framework: http://r-forge.r-project.org/R/?group_id=1562. © The Author 2014. Published by Oxford University Press.
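To illustrate the profile-generation idea described above, the sketch below builds a synthetic copy-number profile with known truth by drawing probe-level signals, with replacement, from pools annotated with a known copy-number state; the pools, states, and segment lengths are simulated stand-ins rather than the dilution-series data or the jointseg implementation.

```python
# Minimal sketch of the profile-generation idea: draw probe-level signals
# (with replacement) from pools annotated with a known copy-number state and
# concatenate them, so breakpoints and states of the synthetic profile are
# known. The pools here are simulated stand-ins for the real annotated data.
import numpy as np

rng = np.random.default_rng(0)
# stand-in pools of total copy-number signal for known states
pools = {"normal": rng.normal(2.0, 0.3, 5000),
         "gain":   rng.normal(3.0, 0.3, 5000),
         "loss":   rng.normal(1.0, 0.3, 5000)}

truth = [("normal", 800), ("gain", 400), ("normal", 600), ("loss", 300)]
profile = np.concatenate([rng.choice(pools[state], size=length, replace=True)
                          for state, length in truth])
breakpoints = np.cumsum([length for _, length in truth])[:-1]
print(profile.shape, breakpoints)   # known truth for benchmarking segmenters
```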
A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer
Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie
2014-01-01
Sensor simulators can be used in forecasting the imaging quality of a new hyperspectral imaging spectrometer, and in generating simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. In order to enhance the simulation accuracy, spatial interpolation-resampling, which is implemented before the spatial degradation, is developed to balance the direction error against the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex grating optical system are accurately modeled by its configuration parameters. The non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and band width of the monochromator for the spectral calibration, and the integrating sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric responses of the sensor simulator, respectively. The experimental results indicate that the sensor simulator is valid. PMID:25615727
Hibbard, J.P.; van Staal, C.R.; Rankin, D.W.
2007-01-01
The New York promontory serves as the divide between the northern and southern segments of the Appalachian orogen. Antiquated subdivisions, distinct for each segment, implied that the two segments had lithotectonic histories independent of each other. Using new lithotectonic subdivisions, we compare first-order features of the pre-Silurian orogenic 'building blocks' in order to test the validity of this implication. Three lithotectonic divisions, termed here the Laurentian, Iapetan, and peri-Gondwanan realms, characterize the entire orogen. The Laurentian realm, composed of native North American rocks, is remarkably uniform for the length of the orogen. It records the multistage Neoproterozoic-early Paleozoic rift-drift history of the Appalachian passive margin, formation of a Taconic Seaway, and the ultimate demise of both in the Middle Ordovician. The Iapetan realm encompasses mainly oceanic and magmatic arc tracts that once lay within the Iapetus Ocean, between Laurentia and Gondwana. In the northern segment, the realm is divisible on the basis of stratigraphy and faunal provinciality into peri-Laurentian and peri-Gondwanan tracts that were amalgamated in the Late Ordovician. South of New York, stratigraphic and faunal controls decrease markedly; rock associations are not inconsistent with those of the northern Appalachians, although second-order differences exist. Exposed exotic crustal blocks of the peri-Gondwanan realm include Ganderia, Avalonia, and Meguma in the north, and Carolinia in the south. Carolinia most closely resembles Ganderia, both in its early evolution and in its Late Ordovician-Silurian docking to Laurentia. Our comparison indicates that, to a first order, the pre-Silurian Appalachian orogen developed uniformly, starting with complex rifting and a subsequent drift phase to form the Appalachian margin, followed by the consolidation of Iapetan components and ending with accretion of the peri-Gondwanan Ganderia and Carolinia. This deduction implies that any first-order differences between the northern and southern segments post-date Late Ordovician consolidation of a large portion of the orogen.
Elastic instabilities in rubber
NASA Astrophysics Data System (ADS)
Gent, Alan
2009-03-01
Materials that undergo large elastic deformations can exhibit novel instabilities. Several examples are described: development of an aneurysm on inflating a rubber tube; non-uniform stretching on inflating a spherical balloon; formation of internal cracks in rubber blocks at a critical level of triaxial tension or when supersaturated with a dissolved gas; surface wrinkling of a block at a critical amount of compression; debonding or fracture of constrained films on swelling; and formation of "knots" on twisting stretched cylindrical rods. These various deformations are analyzed in terms of a simple strain energy function, using Rivlin's theory of large elastic deformations, and the results are compared with experimental measurements of the onset of unstable states. Such comparisons provide new tests of Rivlin's theory and, at least in principle, critical tests of proposed strain energy functions for rubber. Moreover, the onset of highly non-uniform deformations has serious implications for the fatigue life and fracture resistance of rubber components. References: R. S. Rivlin, Philos. Trans. Roy. Soc. Lond. Ser. A 241 (1948) 379-397; A. Mallock, Proc. Roy. Soc. Lond. 49 (1890-1891) 458-463; M. A. Biot, "Mechanics of Incremental Deformations", Wiley, New York, 1965; A. N. Gent and P. B. Lindley, Proc. Roy. Soc. Lond. A 249 (1958) 195-205; A. N. Gent, W. J. Hung and M. F. Tse, Rubb. Chem. Technol. 74 (2001) 89-99; A. N. Gent, Internatl. J. Non-Linear Mech. 40 (2005) 165-175.
NASA Astrophysics Data System (ADS)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
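To make the two complementary series concrete, the standard textbook forms for a 1-D slab (fractional uptake F at dimensionless time t_d = Dt/l², with l the half-thickness, as in Crank's plane-sheet solution) are sketched below; these are assumed illustrative analogues of the early-time error-function series and late-time exponential series discussed above, and the paper's exact normalization, leading coefficients, and switchover value t_d0 may differ.

```latex
% Standard textbook forms for a 1-D slab (fractional uptake F at dimensionless
% time t_d); shown only to illustrate the complementary early/late-time series
% and the switchover idea. The paper's exact normalization may differ.
\begin{align}
  F(t_d) &= 2\sqrt{\tfrac{t_d}{\pi}}
            + 4\sqrt{t_d}\sum_{n=1}^{\infty}(-1)^n\,
              \mathrm{ierfc}\!\left(\frac{n}{\sqrt{t_d}}\right)
         && \text{(early-time, error-function series)}\\
  F(t_d) &= 1 - \sum_{n=0}^{\infty}\frac{8}{(2n+1)^2\pi^2}
               \exp\!\left[-\tfrac{(2n+1)^2\pi^2}{4}\,t_d\right]
         && \text{(late-time, exponential series)}
\end{align}
\[
  F_{\mathrm{approx}}(t_d) =
  \begin{cases}
    F_{\mathrm{early}}(t_d), & t_d \le t_{d0},\\[2pt]
    F_{\mathrm{late}}(t_d),  & t_d > t_{d0},
  \end{cases}
\]
```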
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
Shear-induced Long Range Order in Diblock Copolymer Thin Films
NASA Astrophysics Data System (ADS)
Ding, Xuan; Russell, Thomas
2007-03-01
Shear is a well-established means of aligning block copolymer microdomains in bulk; cylinder-forming block copolymers respond by orienting their cylinder axes parallel to the flow direction, and macroscopic specimens with near-single-crystal texture can be obtained. A stepper motor is a brushless, synchronous electric motor that can divide a full rotation into a large number of steps. With the combination of a stepper motor and several gear boxes in our experiment, we can control the rotation resolution to be as small as 1×10⁻⁴ degree/step. Also, with the help of a customized computer program we can control the motor speed in a very systematic way. By changing parameters such as the weight (or the uniform pressure) and the lateral force, we can carry out experiments to examine the effect of lateral shear on different polymer systems such as PS-b-PEO (large χ) and PS-b-P2VP (small χ).
The effects of cooling systems on CO2-lased human enamel.
Lian, H J; Lan, W H; Lin, C P
1996-12-01
The thermal effects on dentin during CO2 laser irradiation of human enamel were investigated. To simulate clinical practice, two cooling methods (air and water spray) were applied immediately after laser exposure, whereas one group without cooling served as the control. Three hundred and sixty uniform tooth blocks were obtained from freshly extracted human third molars. Temperature change measurements were made via an electrical thermocouple implanted within the tooth block 2 mm away from the enamel surface. Experimental treatments consisted of lasing without cooling, lasing with 0.5-ml/sec water cooling, and lasing with 15-psi air cooling. Our results indicated that (1) both the air- and water-cooling groups reduced temperature elevation significantly; and (2) higher laser energy resulted in greater temperature elevation. In conclusion, for CO2 laser irradiation of human enamel, both water- and air-cooling methods may be effective in preventing thermal damage to the pulp.
Lu, Xiaobin; Yan, Qin; Ma, Yinzhou; Guo, Xin; Xiao, Shou-Jun
2016-01-01
Block copolymer nanolithography has attracted enormous interest in chip technologies, such as integrated silicon chips and biochips, due to its large-scale and mass production of uniform patterns. We further modified this technology to grow embossed nanodots, nanorods, and nanofingerprints of polymer brushes on silicon from their corresponding wet-etched nanostructures covered with pendent SiHx (X = 1–3) species. Atomic force microscopy (AFM) was used to image the topomorphologies, and multiple transmission-reflection infrared spectroscopy (MTR-IR) was used to monitor the surface molecular films in each step for the sequential stepwise reactions. In addition, two layers of polymethacrylic acid (PMAA) brush nanodots were observed, which were attributed to the circumferential convergence growth and the diffusion-limited growth of the polymer brushes. The pH response of PMAA nanodots in the same region was investigated by AFM from pH 3.0 to 9.0. PMID:26841692
NASA Astrophysics Data System (ADS)
Lu, Xiaobin; Yan, Qin; Ma, Yinzhou; Guo, Xin; Xiao, Shou-Jun
2016-02-01
Block copolymer nanolithography has attracted enormous interest in chip technologies, such as integrated silicon chips and biochips, due to its large-scale and mass production of uniform patterns. We further modified this technology to grow embossed nanodots, nanorods, and nanofingerprints of polymer brushes on silicon from their corresponding wet-etched nanostructures covered with pendent SiHx (X = 1-3) species. Atomic force microscopy (AFM) was used to image the topomorphologies, and multiple transmission-reflection infrared spectroscopy (MTR-IR) was used to monitor the surface molecular films in each step for the sequential stepwise reactions. In addition, two layers of polymethacrylic acid (PMAA) brush nanodots were observed, which were attributed to the circumferential convergence growth and the diffusion-limited growth of the polymer brushes. The pH response of PMAA nanodots in the same region was investigated by AFM from pH 3.0 to 9.0.
Orcutt, Kelly D; Adams, Gregory P; Wu, Anna M; Silva, Matthew D; Harwell, Catey; Hoppin, Jack; Matsumura, Manabu; Kotsuma, Masakatsu; Greenberg, Jonathan; Scott, Andrew M; Beckman, Robert A
2017-10-01
Competitive radiolabeled antibody imaging can determine the unlabeled intact antibody dose that fully blocks target binding but may be confounded by heterogeneous tumor penetration. We evaluated the hypothesis that smaller radiolabeled constructs can be used to more accurately evaluate tumor expressed receptors. The Krogh cylinder distributed model, including bivalent binding and variable intervessel distances, simulated distribution of smaller constructs in the presence of increasing doses of labeled antibody forms. Smaller constructs <25 kDa accessed binding sites more uniformly at large distances from blood vessels compared with larger constructs and intact antibody. These observations were consistent for different affinity and internalization characteristics of constructs. As predicted, a higher dose of unlabeled intact antibody was required to block binding to these distant receptor sites. Small radiolabeled constructs provide more accurate information on total receptor expression in tumors and reveal the need for higher antibody doses for target receptor blockade.
Li, Yanan; Yang, Chenguang; Ge, Shuzhi Sam; Lee, Tong Heng
2011-04-01
In this paper, adaptive neural network (NN) control is investigated for a class of block-triangular multi-input multi-output nonlinear discrete-time systems with each subsystem in pure-feedback form with unknown control directions. These systems have couplings in every equation of each subsystem, and different subsystems may have different orders. To avoid the noncausal problem in the control design, the system is transformed into a predictor form by rigorous derivation. By exploiting the properties of the block-triangular form, implicit controls are developed for each subsystem such that the couplings of inputs and states among subsystems are completely decoupled. The radial basis function NN is employed to approximate the unknown control. Each subsystem achieves semiglobal uniformly ultimately bounded stability with the proposed control, and simulation results are presented to demonstrate its efficiency.
Monodisperse Block Copolymer Particles with Controllable Size, Shape, and Nanostructure
NASA Astrophysics Data System (ADS)
Shin, Jae Man; Kim, Yongjoo; Kim, Bumjoon; PNEL Team
Shape-anisotropic particles are an important class of novel colloidal building blocks because their functionality is more strongly governed by shape, size, and nanostructure than that of conventional spherical particles. Recently, facile strategies for producing non-spherical polymeric particles by interfacial engineering have received significant attention. However, uniform particle size distribution together with controlled shape and nanostructure has not yet been achieved. Here, we introduce a versatile system for producing monodisperse BCP particles with controlled size, shape, and morphology. Polystyrene-b-polybutadiene (PS-b-PB) self-assembled into either onion-like or striped ellipsoid particles, where the final structure is governed by the amount of adsorbed sodium dodecyl sulfate (SDS) surfactant at the particle/surrounding interface. Further control of molecular weight and particle size enabled fine-tuning of the aspect ratio of the ellipsoid particles. The underlying free-energy physics of morphology formation and the entropic penalty associated with bending BCP chains strongly affect the particle structure.
Quantifying nonhomogeneous colors in agricultural materials part I: method development.
Balaban, M O
2008-11-01
Measuring the color of food and agricultural materials using machine vision (MV) has advantages not available with other measurement methods such as subjective tests or color meters. The perception of consumers may be affected by the nonuniformity of colors. For relatively uniform colors, average color values similar to those given by color meters can be obtained by MV. For nonuniform colors, various image analysis methods (color blocks, contours, and the "color change index" [CCI]) can be applied to images obtained by MV. The degree of nonuniformity can be quantified, depending on the level of detail desired. In this article, the development of the CCI concept is presented. For images with a wide range of hue values, the color blocks method quantifies the nonhomogeneity of colors well. For images with a narrow hue range, the CCI method is a better indicator of color nonhomogeneity.
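As a rough illustration of the "color blocks" idea described above (not the article's algorithm), the hue channel of an image can be quantized into a small number of bins and the spread of pixels across bins used as a simple nonuniformity score; the bin count and occupancy threshold below are arbitrary assumptions.

```python
import numpy as np

def color_block_spread(hue, n_bins=16, min_frac=0.01):
    """Quantize hue values (0-360 degrees) into n_bins 'color blocks' and
    report how many blocks hold at least min_frac of the pixels -- a crude
    nonhomogeneity indicator (more occupied blocks = less uniform color)."""
    hue = np.asarray(hue, dtype=float).ravel()
    counts, _ = np.histogram(hue, bins=n_bins, range=(0.0, 360.0))
    frac = counts / counts.sum()
    return int(np.sum(frac >= min_frac)), frac

# Toy example: a nearly uniform orange region vs. a two-colored one
uniform = np.random.default_rng(0).normal(30, 3, 10_000) % 360
mixed = np.concatenate([uniform, (uniform + 180) % 360])
print(color_block_spread(uniform)[0], color_block_spread(mixed)[0])
```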
NASA Technical Reports Server (NTRS)
Ting, David Z. (Inventor); Khoshakhlagh, Arezou (Inventor); Soibel, Alexander (Inventor); Hill, Cory J. (Inventor); Gunapala, Sarath D. (Inventor)
2012-01-01
A superlattice-based infrared absorber and the matching electron-blocking and hole-blocking unipolar barriers, absorbers and barriers with graded band gaps, high-performance infrared detectors, and methods of manufacturing such devices are provided herein. The infrared absorber material is made from a superlattice (periodic structure) where each period consists of two or more layers of InAs, InSb, InSbAs, or InGaAs. The layer widths and alloy compositions are chosen to yield the desired energy band gap, absorption strength, and strain balance for the particular application. Furthermore, the periodicity of the superlattice can be "chirped" (varied) to create a material with a graded or varying energy band gap. The superlattice-based barrier infrared detectors described and demonstrated herein have spectral ranges covering the entire 3-5 micron atmospheric transmission window, excellent dark current characteristics when operating at 150 K or above, high yield, and the potential for high-operability, high-uniformity focal plane arrays.
NASA Astrophysics Data System (ADS)
Hudson, Zachary M.; Boott, Charlotte E.; Robinson, Matthew E.; Rupar, Paul A.; Winnik, Mitchell A.; Manners, Ian
2014-10-01
Recent advances in the self-assembly of block copolymers have enabled the precise fabrication of hierarchical nanostructures using low-cost solution-phase protocols. However, the preparation of well-defined and complex planar nanostructures in which the size is controlled in two dimensions (2D) has remained a challenge. Using a series of platelet-forming block copolymers, we have demonstrated through quantitative experiments that the living crystallization-driven self-assembly (CDSA) approach can be extended to growth in 2D. We used 2D CDSA to prepare uniform lenticular platelet micelles of controlled size and to construct precisely concentric lenticular micelles composed of spatially distinct functional regions, as well as complex structures analogous to nanoscale single- and double-headed arrows and spears. These methods represent a route to hierarchical nanostructures that can be tailored in 2D, with potential applications as diverse as liquid crystals, diagnostic technology and composite reinforcement.
Cai, Zuansi; Merly, Corrine; Thomson, Neil R; Wilson, Ryan D; Lerner, David N
2007-08-15
Technical developments have now made it possible to emplace granular zero-valent iron (Fe(0)) in fractured media to create a Fe(0) fracture reactive barrier (Fe(0) FRB) for the treatment of contaminated groundwater. To evaluate this concept, we conducted a laboratory experiment in which trichloroethylene (TCE) contaminated water was flushed through a single uniform fracture created between two sandstone blocks. This fracture was partly filled with what was intended to be a uniform thickness of iron. Partial treatment of TCE by iron demonstrated that the concept of a Fe(0) FRB is practical, but was less than anticipated for an iron layer of uniform thickness. When the experiment was disassembled, evidence of discrete channelised flow was noted and attributed to imperfect placement of the iron. To evaluate the effect of the channel flow, an explicit Channel Model was developed that simplifies this complex flow regime into a conceptualised set of uniform and parallel channels. The mathematical representation of this conceptualisation directly accounts for (i) flow channels and immobile fluid arising from the non-uniform iron placement, (ii) mass transfer from the open fracture to iron and immobile fluid regions, and (iii) degradation in the iron regions. A favourable comparison between laboratory data and the results from the developed mathematical model suggests that the model is capable of representing TCE degradation in fractures with non-uniform iron placement. In order to apply this Channel Model concept to a Fe(0) FRB system, a simplified, or implicit, Lumped Channel Model was developed where the physical and chemical processes in the iron layer and immobile fluid regions are captured by a first-order lumped rate parameter. The performance of this Lumped Channel Model was compared to laboratory data, and benchmarked against the Channel Model. The advantages of the Lumped Channel Model are that the degradation of TCE in the system is represented by a first-order parameter that can be used directly in readily available numerical simulators.
NASA Astrophysics Data System (ADS)
Cai, Zuansi; Merly, Corrine; Thomson, Neil R.; Wilson, Ryan D.; Lerner, David N.
2007-08-01
Technical developments have now made it possible to emplace granular zero-valent iron (Fe 0) in fractured media to create a Fe 0 fracture reactive barrier (Fe 0 FRB) for the treatment of contaminated groundwater. To evaluate this concept, we conducted a laboratory experiment in which trichloroethylene (TCE) contaminated water was flushed through a single uniform fracture created between two sandstone blocks. This fracture was partly filled with what was intended to be a uniform thickness of iron. Partial treatment of TCE by iron demonstrated that the concept of a Fe 0 FRB is practical, but was less than anticipated for an iron layer of uniform thickness. When the experiment was disassembled, evidence of discrete channelised flow was noted and attributed to imperfect placement of the iron. To evaluate the effect of the channel flow, an explicit Channel Model was developed that simplifies this complex flow regime into a conceptualised set of uniform and parallel channels. The mathematical representation of this conceptualisation directly accounts for (i) flow channels and immobile fluid arising from the non-uniform iron placement, (ii) mass transfer from the open fracture to iron and immobile fluid regions, and (iii) degradation in the iron regions. A favourable comparison between laboratory data and the results from the developed mathematical model suggests that the model is capable of representing TCE degradation in fractures with non-uniform iron placement. In order to apply this Channel Model concept to a Fe 0 FRB system, a simplified, or implicit, Lumped Channel Model was developed where the physical and chemical processes in the iron layer and immobile fluid regions are captured by a first-order lumped rate parameter. The performance of this Lumped Channel Model was compared to laboratory data, and benchmarked against the Channel Model. The advantages of the Lumped Channel Model are that the degradation of TCE in the system is represented by a first-order parameter that can be used directly in readily available numerical simulators.
Morphological Expressions of Crater Infill Collapse: Model Simulations of Chaotic Terrains on Mars
NASA Astrophysics Data System (ADS)
Roda, Manuel; Marketos, George; Westerweel, Jan; Govers, Rob
2017-10-01
Martian chaotic terrains are characterized by deeply depressed intensively fractured areas that contain a large number of low-strain tilted blocks. Stronger deformation (e.g., higher number of fractures) is generally observed in the rims when compared to the middle regions of the terrains. The distribution and number of fractures and tilted blocks are correlated with the size of the chaotic terrains. Smaller chaotic terrains are characterized by few fractures between undeformed blocks. Larger terrains show an elevated number of fractures uniformly distributed with single blocks. We investigate whether this surface morphology may be a consequence of the collapse of the infill of a crater. We perform numerical simulations with the Discrete Element Method and we evaluate the distribution of fractures within the crater and the influence of the crater size, infill thickness, and collapsing depth on the final morphology. The comparison between model predictions and the morphology of the Martian chaotic terrains shows strong statistical similarities in terms of both number of fractures and correlation between fractures and crater diameters. No or very weak correlation is observed between fractures and the infill thickness or collapsing depth. The strong correspondence between model results and observations suggests that the collapse of an infill layer within a crater is a viable mechanism for the peculiar morphology of the Martian chaotic terrains.
Municipal household solid waste fee based on an increasing block pricing model in Beijing, China.
Chu, Zhujie; Wu, Yunga; Zhuang, Jun
2017-03-01
This article aims to design an increasing block pricing model to estimate the waste fee with the consideration of the goals and principles of municipal household solid waste pricing. The increasing block pricing model is based on the main consideration of the per capita disposable income of urban residents, household consumption expenditure, production rate of waste disposal industry, and inflation rate. The empirical analysis is based on survey data of 5000 households in Beijing, China. The results indicate that the current uniform price of waste disposal is set too high for low-income people, and waste fees to the household disposable income or total household spending ratio are too low for the medium- and high-income families. An increasing block pricing model can prevent this kind of situation, and not only solve the problem of lack of funds, but also enhance the residents' awareness of environmental protection. A comparative study based on the grey system model is made by having a preliminary forecast for the waste emissions reduction effect of the pay-as-you-throw programme in the next 5 years of Beijing, China. The results show that the effect of the pay-as-you-throw programme is not only to promote the energy conservation and emissions reduction, but also giving a further improvement of the environmental quality.
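A minimal sketch of how an increasing block price turns a household's waste quantity into a fee (the tier thresholds and rates are entirely hypothetical, not the values estimated in the article):

```python
def block_fee(quantity, thresholds=(10.0, 20.0), rates=(0.5, 0.8, 1.2)):
    """Increasing block pricing: the first thresholds[0] units are charged
    at rates[0], the next block up to thresholds[1] at rates[1], and any
    remainder at rates[2].  Units and numbers here are illustrative only."""
    fee, lower = 0.0, 0.0
    for upper, rate in zip(thresholds, rates):
        if quantity <= lower:
            break
        fee += rate * (min(quantity, upper) - lower)
        lower = upper
    if quantity > thresholds[-1]:
        fee += rates[-1] * (quantity - thresholds[-1])
    return fee

print(block_fee(8.0), block_fee(25.0))   # 4.0 and 19.0 under the toy schedule
```

The marginal price rises with the quantity discarded, which is the mechanism the authors argue protects low-income households while charging heavy dischargers more.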
2011-03-01
[Report front-matter fragments from the list of figures and acronyms: "resampling a second time"; "Plot of RSA bitgroup exponentiation with DAILMOM after a..."; DVFS, Dynamic Voltage and Frequency Switching; MDPL, Masked Dual-Rail...; "...algorithms to prevent whole-sale discovery of PINs and other simple methods to prevent employee tampering [5]. In time, cryptographic systems have..."]
On the estimation of spread rate for a biological population
Jim Clark; Lajos Horváth; Mark Lewis
2001-01-01
We propose a nonparametric estimator for the rate of spread of an introduced population. We prove that the limit distribution of the estimator is normal or stable, depending on the behavior of the moment generating function. We show that resampling methods can also be used to approximate the distribution of the estimators.
ERIC Educational Resources Information Center
Du, Yunfei
This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R
ERIC Educational Resources Information Center
Dogan, C. Deha
2017-01-01
Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
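A minimal sketch of the kind of bootstrap confidence interval the study discusses, here a percentile interval for Cohen's d between two groups (written in Python rather than R, purely for illustration; the sample sizes and settings are assumptions):

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                      (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

def bootstrap_ci(x, y, stat=cohens_d, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample each group with replacement,
    recompute the statistic, and take the empirical alpha/2 quantiles."""
    rng = np.random.default_rng(seed)
    boots = [stat(rng.choice(x, len(x), replace=True),
                  rng.choice(y, len(y), replace=True)) for _ in range(n_boot)]
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
a, b = rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)
print(cohens_d(a, b), bootstrap_ci(a, b))
```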
Exploring the Replicability of a Study's Results: Bootstrap Statistics for the Multivariate Case.
ERIC Educational Resources Information Center
Thompson, Bruce
Conventional statistical significance tests do not inform the researcher regarding the likelihood that results will replicate. One strategy for evaluating result replication is to use a "bootstrap" resampling of a study's data so that the stability of results across numerous configurations of the subjects can be explored. This paper…
Statistical process control for residential treated wood
Patricia K. Lebow; Timothy M. Young; Stan Lebow
2017-01-01
This paper is the first stage of a study that attempts to improve the process of manufacturing treated lumber through the use of statistical process control (SPC). Analysis of industrial and auditing agency data sets revealed there are differences between the industry and agency probability density functions (pdf) for normalized retention data. Resampling of batches of...
Explanation of Two Anomalous Results in Statistical Mediation Analysis
ERIC Educational Resources Information Center
Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
DOT National Transportation Integrated Search
2004-02-01
Researchers and practitioners are commonly faced with the problem of limited data in the evaluation of ITS systems. Due to high data collection costs and limited resources, they are often forced to make decisions about the efficacy of a system or tec...
USDA-ARS?s Scientific Manuscript database
Satellite-based passive microwave remote sensing typically involves a scanning antenna that makes measurements at irregularly spaced locations. These locations can change on a day to day basis. Soil moisture products derived from satellite-based passive microwave remote sensing are usually resampled...
Resampling-Based Gap Analysis for Detecting Nodes with High Centrality on Large Social Network
2015-05-22
[Record fragments: author affiliations — "...University, Shiga, Japan" (kimura@rins.ryukoku.ac.jp); Institute of Scientific and Industrial Research, Osaka University, Osaka, Japan; "School of..."; snippet — "...second one is a network extracted from a Japanese word-of-mouth communication site for cosmetics, "@cosme", consisting of 45,024 nodes"]
Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling
ERIC Educational Resources Information Center
Banjanovic, Erin S.; Osborne, Jason W.
2016-01-01
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
From climate-change spaghetti to climate-change distributions for 21st Century California
Dettinger, M.D.
2005-01-01
The uncertainties associated with climate-change projections for California are unlikely to disappear any time soon, and yet important long-term decisions will be needed to accommodate those potential changes. Projection uncertainties have typically been addressed by analysis of a few scenarios, chosen based on availability or to capture the extreme cases among available projections. However, by focusing on more common projections rather than the most extreme projections (using a new resampling method), new insights into current projections emerge: (1) uncertainties associated with future greenhouse-gas emissions are comparable with the differences among climate models, so that neither source of uncertainties should be neglected or underrepresented; (2) twenty-first century temperature projections spread more, overall, than do precipitation scenarios; (3) projections of extremely wet futures for California are true outliers among current projections; and (4) current projections that are warmest tend, overall, to yield a moderately drier California, while the cooler projections yield a somewhat wetter future. The resampling approach applied in this paper also provides a natural opportunity to objectively incorporate measures of model skill and the likelihoods of various emission scenarios into future assessments.
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
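The cluster-bootstrap idea in this abstract can be sketched generically (this is not the authors' Cox-model code; it simply resamples whole clusters with replacement and reports the spread of a statistic recomputed on each resample, using a plain mean as a stand-in statistic):

```python
import numpy as np

def cluster_bootstrap_se(values, cluster_ids, stat=np.mean,
                         n_boot=2000, seed=0):
    """Standard error of `stat` under cluster resampling: draw whole
    clusters with replacement, pool their observations, recompute stat."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    clusters = [values[cluster_ids == c] for c in np.unique(cluster_ids)]
    boots = []
    for _ in range(n_boot):
        picked = rng.integers(0, len(clusters), size=len(clusters))
        boots.append(stat(np.concatenate([clusters[i] for i in picked])))
    return np.std(boots, ddof=1)

# Toy data: 20 clusters of 5 correlated observations each
rng = np.random.default_rng(2)
ids = np.repeat(np.arange(20), 5)
vals = rng.normal(0, 1, 20).repeat(5) + rng.normal(0, 0.3, 100)
print(cluster_bootstrap_se(vals, ids))
```

Because whole clusters are kept together, the within-cluster correlation is preserved in every resample, which is why this SE is larger (and more honest) than one that treats the 100 observations as independent.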
Bersanelli, Matteo; Mosca, Ettore; Remondini, Daniel; Castellani, Gastone; Milanesi, Luciano
2016-01-01
A relation exists between network proximity of molecular entities in interaction networks, functional similarity and association with diseases. The identification of network regions associated with biological functions and pathologies is a major goal in systems biology. We describe a network diffusion-based pipeline for the interpretation of different types of omics in the context of molecular interaction networks. We introduce the network smoothing index, a network-based quantity that allows to jointly quantify the amount of omics information in genes and in their network neighbourhood, using network diffusion to define network proximity. The approach is applicable to both descriptive and inferential statistics calculated on omics data. We also show that network resampling, applied to gene lists ranked by quantities derived from the network smoothing index, indicates the presence of significantly connected genes. As a proof of principle, we identified gene modules enriched in somatic mutations and transcriptional variations observed in samples of prostate adenocarcinoma (PRAD). In line with the local hypothesis, network smoothing index and network resampling underlined the existence of a connected component of genes harbouring molecular alterations in PRAD. PMID:27731320
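A minimal sketch of the network-diffusion step underlying a smoothing index of this kind (a random-walk-with-restart style propagation on a small adjacency matrix; the restart parameter and toy graph are assumptions, not the pipeline's actual settings):

```python
import numpy as np

def network_smooth(adj, x0, alpha=0.7, n_iter=100):
    """Diffuse node scores x0 over a graph: x <- alpha * W x + (1-alpha) * x0,
    with W the column-normalized adjacency matrix.  The converged vector
    mixes each gene's own score with that of its network neighbourhood."""
    adj = np.asarray(adj, dtype=float)
    W = adj / np.maximum(adj.sum(axis=0, keepdims=True), 1e-12)
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for _ in range(n_iter):
        x = alpha * W @ x + (1 - alpha) * x0
    return x

# Toy 4-node path graph with a mutation score only on node 0
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
print(network_smooth(adj, [1.0, 0.0, 0.0, 0.0]))
```

The diffused scores decay with network distance from the altered node, which is what lets neighbourhood information be combined with per-gene statistics before resampling-based significance checks.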
NASA Astrophysics Data System (ADS)
Wechsung, Frank; Wechsung, Maximilian
2016-11-01
The STatistical Analogue Resampling Scheme (STARS) statistical approach was recently used to project changes of climate variables in Germany corresponding to a supposed degree of warming. We show by theoretical and empirical analysis that STARS simply transforms interannual gradients between warmer and cooler seasons into climate trends. According to STARS projections, summers in Germany will inevitably become drier and winters wetter under global warming. Due to the dominance of negative interannual correlations between precipitation and temperature during the year, STARS has a tendency to generate a net annual decrease in precipitation under mean German conditions. Furthermore, according to STARS, the annual level of global radiation would increase in Germany. STARS can still be used, e.g., for generating scenarios in vulnerability and uncertainty studies. However, it is not suitable as a climate downscaling tool for assessing risks arising from a changing climate at a spatial scale finer than that of a general circulation model (GCM).
McRoy, Susan; Jones, Sean; Kurmally, Adam
2016-09-01
This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer-health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions, and typical refinement approaches of feature selection and merging categories achieved only small improvements in classifier accuracy. Best performance was achieved using weighted classification and resampling methods, the latter yielding an accuracy of F1 = 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or for assisting clinical professionals in answering questions. © The Author(s) 2015.
Vehicle Fault Diagnose Based on Smart Sensor
NASA Astrophysics Data System (ADS)
Zhining, Li; Peng, Wang; Jianmin, Mei; Jianwei, Li; Fei, Teng
In a vehicle's traditional fault diagnosis system, we usually use a computer with an A/D card and many sensors connected to it. The disadvantage of this system is that these sensors can hardly be shared with the control system and other systems, there are too many connecting lines, and electromagnetic compatibility (EMC) suffers. In this paper, a smart speed sensor, smart acoustic pressure sensor, smart oil pressure sensor, smart acceleration sensor, and smart order-tracking sensor were designed to solve this problem. With the CAN bus, these smart sensors, the fault diagnosis computer, and other computers can be connected to establish a network system that can monitor and control the vehicle's diesel engine and other systems without any duplicate sensors. The hardware and software of the smart sensor system are introduced; the oil pressure, vibration, and acoustic signals are resampled at constant angle increments to eliminate the influence of rotational speed. After resampling, the signal in every working cycle can be averaged in the angle domain and subjected to other analyses such as the order spectrum.
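The constant-angle-increment resampling mentioned above can be sketched as a simple interpolation from the time axis onto the shaft-angle axis (a generic order-tracking illustration, not the authors' implementation; the speed ramp used to build the cumulative angle below is an assumed stand-in for a tachometer signal):

```python
import numpy as np

def angular_resample(t, signal, shaft_angle, samples_per_rev=360):
    """Resample a time-domain signal onto a uniform shaft-angle grid.
    `shaft_angle` is the cumulative rotation (radians) at each time stamp,
    e.g. integrated from a tachometer; the output is uniform in angle, so
    speed fluctuations no longer smear the order spectrum."""
    total_revs = shaft_angle[-1] / (2 * np.pi)
    n_out = int(total_revs * samples_per_rev)
    angle_grid = np.linspace(0.0, shaft_angle[-1], n_out, endpoint=False)
    return angle_grid, np.interp(angle_grid, shaft_angle, signal)

# Toy example: vibration locked to a shaft whose speed ramps from 20 to 30 Hz
t = np.linspace(0, 1, 5000)
speed = 2 * np.pi * (20 + 10 * t)            # instantaneous speed, rad/s
angle = np.cumsum(speed) * (t[1] - t[0])     # cumulative shaft angle, rad
x = np.sin(3 * angle)                        # a pure 3rd-order component
angle_grid, x_ang = angular_resample(t, x, angle)
```

After this step the third-order component occupies a single bin of an FFT taken over the angle axis, even though its frequency in the time domain was sweeping.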
Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang
2014-01-01
A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197
Vyhnalkova, Renata; Eisenberg, Adi; van de Ven, Theo G M
2008-07-24
The kinetics of loading of polystyrene197-block-poly(acrylic acid)47 (PS197-b-PAA47) micelles, suspended in water, with thiocyanomethylthiobenzothiazole biocide and its subsequent release were investigated. Loading of the micelles was found to be a two-step process. First, the surface of the PS core of the micelles is saturated with biocide, with a rate determined by the transfer of solid biocide to micelles during transient micelle-biocide contacts. Next, the biocide penetrates as a front into the micelles, lowering the Tg in the process (non-Fickian case II diffusion). The slow rate of release is governed by the height of the energy barrier that a biocide molecule must overcome to pass from PS into water, resulting in a uniform biocide concentration within the micelle, until Tg is increased to the point that diffusion inside the micelles becomes very slow. Maximum loading of biocide into micelles is approximately 30% (w/w) and is achieved in 1 h. From partition experiments, it can be concluded that the biocide has a similar preference for polystyrene as for ethylbenzene over water, implying that the maximum loading is governed by thermodynamics.
Akashi, Masaya; Hiraoka, Yujiro; Hasegawa, Takumi; Komori, Takahide
2016-01-01
This retrospective study aimed to report the incidence of neurosensory complications after third molar extraction and also to identify current problems and discuss appropriate management of these complications. Patients who underwent extraction of deeply impacted mandibular third molars under general anesthesia were included. The following epidemiological data were retrospectively gathered from medical charts: type of neurosensory complication, treatment for complication, and outcome. A total 369 mandibular third molars were extracted in 210 patients under general anesthesia during this study period. Thirty-one of the 369 teeth (8.4%) in 31 patients had neurosensory complications during the first postoperative week resulting from inferior alveolar nerve damage. Neurosensory complications lasting from 1 to 3 months postoperatively included 17 cases of hypoesthesia and 8 of dysesthesia in 19 patients. Five cases of hypoesthesia and 4 of dysesthesia in 5 patients persisted over 1 year postoperatively. Sixteen of 369 teeth (4.3%) in 16 patients had persistent neurosensory complications after third molar extraction under general anesthesia. Stellate ganglion block was performed in 4 patients. Early initiation of stellate ganglion block (within 2 weeks postoperatively) produced better outcomes than late stellate ganglion block (over 6 months postoperatively). Refractory neurosensory complications after third molar extraction often combine both hypoesthesia and dysesthesia. Current problems in diagnosis and treatment included delayed detection of dysesthesia and the lack of uniform timing of stellate ganglion block. In the future, routinely inquiring about dysesthesia and promptly providing affected patients with information about stellate ganglion block might produce better outcomes.
Akashi, Masaya; Hiraoka, Yujiro; Hasegawa, Takumi; Komori, Takahide
2016-01-01
Objective: This retrospective study aimed to report the incidence of neurosensory complications after third molar extraction and also to identify current problems and discuss appropriate management of these complications. Method: Patients who underwent extraction of deeply impacted mandibular third molars under general anesthesia were included. The following epidemiological data were retrospectively gathered from medical charts: type of neurosensory complication, treatment for complication, and outcome. Results: A total 369 mandibular third molars were extracted in 210 patients under general anesthesia during this study period. Thirty-one of the 369 teeth (8.4%) in 31 patients had neurosensory complications during the first postoperative week resulting from inferior alveolar nerve damage. Neurosensory complications lasting from 1 to 3 months postoperatively included 17 cases of hypoesthesia and 8 of dysesthesia in 19 patients. Five cases of hypoesthesia and 4 of dysesthesia in 5 patients persisted over 1 year postoperatively. Sixteen of 369 teeth (4.3%) in 16 patients had persistent neurosensory complications after third molar extraction under general anesthesia. Stellate ganglion block was performed in 4 patients. Early initiation of stellate ganglion block (within 2 weeks postoperatively) produced better outcomes than late stellate ganglion block (over 6 months postoperatively). Conclusion: Refractory neurosensory complications after third molar extraction often combine both hypoesthesia and dysesthesia. Current problems in diagnosis and treatment included delayed detection of dysesthesia and the lack of uniform timing of stellate ganglion block. In the future, routinely inquiring about dysesthesia and promptly providing affected patients with information about stellate ganglion block might produce better outcomes. PMID:28217188
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
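The core idea of treating codeword bits as independent can be illustrated without a full arithmetic coder (the sketch below only estimates the per-bit probabilities of fixed-length codewords from a uniformly quantized Gaussian source and the ideal rate they imply; the actual arithmetic-coding step is omitted, and the bit width and step size are arbitrary assumptions):

```python
import numpy as np

def bitwise_rate(samples, step=0.5, n_bits=6):
    """Uniformly quantize samples, map each level to a fixed-length n_bits
    codeword, estimate each bit position's probability of being 1, and
    return the ideal bits/sample if every bit were coded independently."""
    levels = np.clip(np.round(samples / step).astype(int) + 2**(n_bits - 1),
                     0, 2**n_bits - 1)                 # offset-binary codewords
    bits = (levels[:, None] >> np.arange(n_bits)) & 1  # bit planes
    p = np.clip(bits.mean(axis=0), 1e-12, 1 - 1e-12)
    return np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

x = np.random.default_rng(0).normal(0.0, 1.0, 100_000)
print(bitwise_rate(x), "ideal bits/sample vs", 6, "raw bits")
```

The gap between the per-bit entropy sum and the fixed codeword length is the compression an independent-bit entropy coder could capture; the residual dependence between bits is the overhead the article analyzes.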
Electrophoretic deposition of fluorescent Cu and Au sheets for light-emitting diodes
NASA Astrophysics Data System (ADS)
Liu, Jiale; Wu, Zhennan; Li, Tingting; Zhou, Ding; Zhang, Kai; Sheng, Yu; Cui, Jianli; Zhang, Hao; Yang, Bai
2015-12-01
Electrophoretic deposition (EPD) is a conventional method for fabricating film materials from nanometer-sized building blocks, and exhibits the advantages of low-cost, high-efficiency, wide-range thickness adjustment, and uniform deposition. Inspired by the interest in the application of two-dimensional (2D) nanomaterials, the EPD technique has been recently extended to building blocks with 2D features. However, the studies are mainly focused on simplex building blocks. The utilization of multiplex building blocks is rarely reported. In this work, we demonstrate a controlled EPD of Cu and Au sheets, which are 2D assemblies of luminescent Cu and Au nanoclusters. Systematic investigations reveal that both the deposition efficiency and the thickness are determined by the lateral size of the sheets. For Cu sheets with a large lateral size, a high ζ-potential and strong face-to-face van der Waals interactions facilitate the deposition with high efficiency. However, for Au sheets, the small lateral size and ζ-potential limit the formation of a thick film. To solve this problem, the deposition dynamics are controlled by increasing the concentration of the Au sheets and adding acetone. This understanding permits the fabrication of a binary EPD film by the stepwise deposition of Cu and Au sheets, thus producing a luminescent film with both Cu green emission and Au red emission. A white light-emitting diode prototype with color coordinates (x, y) = (0.31, 0.36) is fabricated by employing the EPD film as a color conversion layer on a 365 nm GaN clip and further tuning the amount of deposited Cu and Au sheets.
NASA Astrophysics Data System (ADS)
Moreno, Claudia E.; Guevara, Roger; Sánchez-Rojas, Gerardo; Téllez, Dianeis; Verdú, José R.
2008-01-01
Environmental assessment at the community level in highly diverse ecosystems is limited by taxonomic constraints and statistical methods requiring true replicates. Our objective was to show how diverse systems can be studied at the community level using higher taxa as biodiversity surrogates, and re-sampling methods to allow comparisons. To illustrate this we compared the abundance, richness, evenness and diversity of the litter fauna in a pine-oak forest in central Mexico among seasons, sites and collecting methods. We also assessed changes in the abundance of trophic guilds and evaluated the relationships between community parameters and litter attributes. With the direct search method we observed differences in the rate of taxa accumulation between sites. Bootstrap analysis showed that abundance varied significantly between seasons and sampling methods, but not between sites. In contrast, diversity and evenness were significantly higher at the managed than at the non-managed site. Tree regression models show that abundance varied mainly between seasons, whereas taxa richness was affected by litter attributes (composition and moisture content). The abundance of trophic guilds varied among methods and seasons, but overall we found that parasitoids, predators and detrivores decreased under management. Therefore, although our results suggest that management has positive effects on the richness and diversity of litter fauna, the analysis of trophic guilds revealed a contrasting story. Our results indicate that functional groups and re-sampling methods may be used as tools for describing community patterns in highly diverse systems. Also, the higher taxa surrogacy could be seen as a preliminary approach when it is not possible to identify the specimens at a low taxonomic level in a reasonable period of time and in a context of limited financial resources, but further studies are needed to test whether the results are specific to a system or whether they are general with regards to land management.
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
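The bootstrap location-density idea can be sketched generically (here a simple intensity-weighted centroid stands in for the actual Bakun-Wentworth, Boxer, and MEEP estimators, purely to show the resample-and-contour logic; the toy coordinates are fabricated for illustration only):

```python
import numpy as np

def bootstrap_locations(lon, lat, intensity, n_boot=1000, seed=0):
    """Resample intensity observations with replacement and recompute a
    location estimate for each resample; the scatter of these estimates
    approximates the location uncertainty.  The intensity-weighted centroid
    used here is only a placeholder estimator."""
    rng = np.random.default_rng(seed)
    lon, lat, w = map(np.asarray, (lon, lat, intensity))
    locs = np.empty((n_boot, 2))
    for b in range(n_boot):
        idx = rng.integers(0, len(lon), size=len(lon))
        locs[b] = [np.average(lon[idx], weights=w[idx]),
                   np.average(lat[idx], weights=w[idx])]
    return locs  # e.g. contour a 2-D histogram of locs for 68%/95% regions

rng = np.random.default_rng(4)
locs = bootstrap_locations(rng.uniform(10, 12, 50), rng.uniform(44, 46, 50),
                           rng.integers(3, 9, 50))
print(locs.mean(axis=0), locs.std(axis=0))
```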
NASA Astrophysics Data System (ADS)
Hammond, W. C.; Bormann, J.; Blewitt, G.; Kreemer, C.
2013-12-01
The Walker Lane in the western Great Basin of the western United States is an 800 km long and 100 km wide zone of active intracontinental transtension that absorbs ~10 mm/yr, about 20% of the Pacific/North America plate boundary relative motion. Lying west of the Sierra Nevada/Great Valley microplate (SNGV) and adjoining the Basin and Range Province to the east, deformation is predominantly shear strain overprinted with a minor component of extension. The Walker Lane responds with faulting, block rotations, structural step-overs, and has distinct and varying partitioned domains of shear and extension. Resolving these complex deformation patterns requires a long term observation strategy with a dense network of GPS stations (spacing ~20 km). The University of Nevada, Reno operates the 373 station Mobile Array of GPS for Nevada transtension (MAGNET) semi-continuous network that supplements coverage by other networks such as EarthScope's Plate Boundary Observatory, which alone has insufficient density to resolve the deformation patterns. Uniform processing of data from these GPS mega-networks provides a synoptic view and new insights into the kinematics and mechanics of Walker Lane tectonics. We present velocities for thousands of stations with time series between 3 and 17 years in duration aligned to our new GPS-based North America fixed reference frame NA12. The velocity field shows a rate budget across the southern Walker Lane of ~10 mm/yr, decreasing northward to ~7 mm/yr at the latitude of the Mohawk Valley and Pyramid Lake. We model the data with a new block model that estimates rotations and slip rates of known active faults between the Mojave Desert and northern Nevada and northeast California. The density of active faults in the region requires including a relatively large number of blocks in the model to accurately estimate deformation patterns. With 49 blocks, the model captures structural detail not represented in previous province-scale models, and improves our ability to compare results to geologic fault slip rates. Modeling the kinematics on this scale has the advantages of 1) reducing the impact of poorly constrained boundaries on small geographically limited models, 2) consistent modeling of rotations across major structural step-overs near the Mina deflection and Carson domain, 3) tracking the kinematics of the south-to-north varying budget of Walker Lane deformation by solving for extension in the Basin and Range to the east, and 4) using a contiguous SNGV as a uniform western kinematic boundary condition. We compare contemporary deformation to geologic slip rates and longer term rotation rates estimated from rock paleomagnetism. GPS-derived block rotation rates are somewhat dependent on model regularization, but are generally within 1° per million years, and tend to be slower than published paleomagnetic rotation rates. GPS data, together with neotectonic and rock paleomagnetism studies provide evidence that the relative importance of Walker Lane block rotations and fault slip continues to evolve, giving way to a more through-going system with slower rotation rates and higher slip rates on individual faults.
Optimal occlusion uniformly partitions red blood cells fluxes within a microvascular network
Tu, Shenyinying; Liu, Yu-Hsiu; Savage, Van M.; Hsiai, Tzung K.; Roper, Marcus
2017-01-01
In animals, gas exchange between blood and tissues occurs in narrow vessels, whose diameter is comparable to that of a red blood cell. Red blood cells must deform to squeeze through these narrow vessels, transiently blocking or occluding the vessels they pass through. Although the dynamics of vessel occlusion have been studied extensively, it remains an open question why microvessels need to be so narrow. We study occlusive dynamics within a model microvascular network: the embryonic zebrafish trunk. We show that pressure feedbacks created when red blood cells enter the finest vessels of the trunk act together to uniformly partition red blood cells through the microvasculature. Using mathematical models as well as direct observation, we show that these occlusive feedbacks are tuned throughout the trunk network to prevent the vessels closest to the heart from short-circuiting the network. Thus occlusion is linked with another open question of microvascular function: how are red blood cells delivered at the same rate to each micro-vessel? Our analysis shows that tuning of occlusive feedbacks increases the total dissipation within the network by a factor of 11, showing that uniformity of flows rather than minimization of transport costs may be prioritized by the microvascular network. PMID:29244812
NASA Technical Reports Server (NTRS)
Davis, M. F.; Wosik, J.; Forster, K.; Deshmukh, S. C.; Rampersad, H. R.
1991-01-01
The paper describes thin films deposited in a system where substrates are scanned over areas up to 3.5 x 3.5 cm through the stationary plume of an ablated material defined by an aperture. These YBCO films are deposited on LaAlO3 and SrTiO3 substrates with the thickness of 90 and 160 nm. Attention is focused on the main features of the deposition system: line focusing of the laser beam on the target; an aperture defining the area of the plume; computerized stepper motor-driven X-Y stage translating the heated sampler holder behind the plume-defining aperture in programmed patterns; and substrate mounting block with uniform heating at high temperatures over large areas. It is noted that the high degree of uniformity of the properties in each film batch illustrates that the technique of pulsed laser deposition can be applied to produce large YBCO films of high quality.
NASA Astrophysics Data System (ADS)
Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.
2015-03-01
Crude oil is becoming more corrosive with higher sulfur concentration, chloride concentration, and acidity. The increasing presence of naphthenic acids in oils with various environmental conditions at temperatures between 150°C and 400°C can lead to different internal degradation morphologies in refineries that are uniform, non-uniform, or localized pitting. Improved corrosion measurement technology is needed to better quantify the integrity risk associated with refining crude oils of higher acid concentration. This paper first reports a consolidated review of corrosion inspection technology to establish the foundation for structural health monitoring of localized internal corrosion in high temperature piping. An approach under investigation is to employ flexible ultrasonic thin-film piezoelectric transducer arrays fabricated by the sol-gel manufacturing process for monitoring localized internal corrosion at temperatures up to 400°C. A statistical analysis of sol-gel transducer measurement accuracy using various time of flight thickness calculation algorithms on a flat calibration block is demonstrated.
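One of the time-of-flight thickness calculations referred to above can be sketched as a generic pulse-echo estimate using the autocorrelation lag between successive backwall echoes (this is an illustrative algorithm, not the paper's specific method; the sampling rate, sound velocity, and synthetic A-scan are assumed example values):

```python
import numpy as np

def thickness_from_echoes(waveform, fs, velocity):
    """Pulse-echo thickness: find the lag between successive backwall echoes
    via autocorrelation of the A-scan, then thickness = velocity * lag / 2."""
    w = np.asarray(waveform, dtype=float) - np.mean(waveform)
    ac = np.correlate(w, w, mode="full")[len(w) - 1:]   # non-negative lags
    skip = int(0.2e-6 * fs)                  # skip the zero-lag lobe (~0.2 us)
    lag = skip + np.argmax(ac[skip:])
    return velocity * (lag / fs) / 2.0

# Toy A-scan: two echoes ~3.4 microseconds apart (10 mm of steel, ~5900 m/s)
fs, v = 100e6, 5900.0
t = np.arange(0, 8e-6, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) / 0.1e-6) ** 2) * np.sin(2e7 * np.pi * (t - t0))
a_scan = pulse(1.0e-6) + 0.6 * pulse(1.0e-6 + 2 * 0.010 / v)
print(thickness_from_echoes(a_scan, fs, v))   # ~0.010 m
```

Localized corrosion shows up as wall-thickness loss at individual array elements, so an estimate like this would be repeated per element and tracked over time.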
High Order Schemes in Bats-R-US for Faster and More Accurate Predictions
NASA Astrophysics Data System (ADS)
Chen, Y.; Toth, G.; Gombosi, T. I.
2014-12-01
BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In the last years we have implemented fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. Now the high order schemes have been extended to generalized coordinates, including spherical grids and also to the non-uniform AMR grids including dynamic regridding. We present numerical tests that verify the preservation of free-stream solution and high-order accuracy as well as robust oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that it is robust and can achieve the same accuracy as the second order scheme with much less computational resources. This is especially important for space weather prediction that requires faster than real time code execution.
Radial solute transport in highly heterogeneous aquifers: Modeling and experimental comparison
NASA Astrophysics Data System (ADS)
Di Dato, Mariaines; Fiori, Aldo; de Barros, Felipe P. J.; Bellin, Alberto
2017-07-01
We analyze solute transport in a radially converging 3-D flow field in a porous medium with spatially heterogeneous hydraulic conductivity (K). The aim of the paper is to analyze the impact of heterogeneity and the mode of injection on BreakThrough Curves (BTCs) detected at a well pumping a contaminated aquifer. The aquifer is conceptualized as an ensemble of blocks of uniform but contrasting K and the analysis makes use of the travel time approach. Despite the approximations introduced, the model reproduces a laboratory experiment without calibration of transport parameters. Our results also show excellent agreement with numerical simulations for different levels of heterogeneity. We focus on the impact on the BTC of both heterogeneity in K and solute release conditions. It is shown that the injection mode matters, and the differences in the BTCs between uniform and flux-proportional injection increase with the heterogeneity of the K-field. Furthermore, we study the effect of heterogeneity and mode of injection on early and late arrivals at the well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afzal, Muhammad U., E-mail: muhammad.afzal@mq.edu.au; Esselle, Karu P.
This paper presents a quasi-analytical technique to design continuous, all-dielectric phase correcting structures (PCSs) for circularly polarized Fabry-Perot resonator antennas (FPRAs). The PCS has been realized by varying the thickness of a rotationally symmetric dielectric block placed above the antenna. A global analytical expression is derived for the PCS thickness profile, which is required to achieve a nearly uniform phase distribution at the output of the PCS, despite the non-uniform phase distribution at its input. An alternative piecewise technique based on spline interpolation is also explored to design a PCS. It is shown from both far- and near-field results that a PCS tremendously improves the radiation performance of the FPRA. These improvements include an increase in peak directivity from 22 to 120 (from 13.4 dBic to 20.8 dBic) and a decrease of the 3 dB beamwidth from 41.5° to 15°. The phase-corrected antenna also has a good directivity bandwidth of 1.3 GHz, which is 11% of the center frequency.
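A crude sketch of the kind of thickness-profile calculation involved (a simple single-pass optical-path argument in which extra dielectric adds phase delay where the incident field leads; the frequency, permittivity, minimum thickness, and input phase below are all assumptions, and this is not the paper's derived expression):

```python
import numpy as np

def pcs_thickness(phase_in, freq_hz=11e9, eps_r=2.5, t_min=0.002):
    """Thickness (m) of a dielectric layer intended to flatten a non-uniform
    aperture phase.  Replacing air with thickness t of dielectric of index
    n = sqrt(eps_r) adds a phase delay of 2*pi*(n - 1)*t/lambda0, so points
    where the field leads in phase receive proportionally more dielectric."""
    lam0 = 3e8 / freq_hz
    n = np.sqrt(eps_r)
    phase_in = np.asarray(phase_in, dtype=float)
    lead = phase_in - phase_in.min()        # radians by which each point leads
    return t_min + lead * lam0 / (2 * np.pi * (n - 1))

r = np.linspace(0.0, 0.06, 7)               # radial sample points (m)
phase = -2.5 * (r / 0.06) ** 2               # assumed quadratic input phase (rad)
print(pcs_thickness(phase))                  # thickest at the centre for this toy case
```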
Jiang, Wei; Xu, Chao-Zhen; Jiang, Si-Zhi; Zhang, Tang-Duo; Wang, Shi-Zhen; Fang, Bai-Shan
2017-04-01
L-tert-Leucine (L-Tle) and its derivatives are extensively used as crucial building blocks for chiral auxiliaries, pharmaceutically active ingredients, and ligands. In combination with formate dehydrogenase (FDH) for regenerating the expensive coenzyme NADH, leucine dehydrogenase (LeuDH) is continually used for synthesizing L-Tle from the α-keto acid. A multilevel factorial experimental design was carried out to study this system. In this work, an efficient optimization method for improving the productivity of L-Tle was developed, and the mathematical model relating the fermentation conditions to L-Tle yield was determined in the form of an equation by using uniform design and regression analysis. The multivariate regression equation was conveniently implemented in water, with a space-time yield of 505.9 g L^-1 day^-1 and an enantiomeric excess value of >99%. These results demonstrated that this method might become an ideal protocol for the industrial production of chiral compounds and unnatural amino acids such as chiral drug intermediates.
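A minimal sketch of the uniform-design-plus-regression step described above: a small set of designed runs is fitted with an ordinary least-squares model relating conditions to yield. The design matrix and responses below are synthetic toy numbers generated solely to show the fitting step, not the study's data, and the factor names are assumptions.

```python
import numpy as np

# Hypothetical design: columns = temperature, pH, substrate conc. (coded units)
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(12, 3))          # 12 runs from a uniform-design-style spread
yield_ = 50 + 8 * X[:, 0] - 5 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 1, 12)

# Ordinary least squares for: yield = b0 + b1*T + b2*pH + b3*S
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, yield_, rcond=None)
print(coef)        # fitted intercept and condition effects
```

The fitted equation can then be maximized over the feasible condition ranges to pick the operating point, which is the role the regression model plays in the abstract.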
CH₃NH₃PbI₃-based planar solar cells with magnetron-sputtered nickel oxide.
Cui, Jin; Meng, Fanping; Zhang, Hua; Cao, Kun; Yuan, Huailiang; Cheng, Yibing; Huang, Feng; Wang, Mingkui
2014-12-24
Herein we report an investigation of a CH3NH3PbI3 planar solar cell, showing significant power conversion efficiency (PCE) improvement from 4.88% to 6.13% by introducing a homogeneous and uniform NiO blocking interlayer fabricated with the reactive magnetron sputtering method. The sputtered NiO layer exhibits enhanced crystallization, high transmittance, and uniform surface morphology as well as a preferred in-plane orientation of the (200) plane. The PCE of the sputtered-NiO-based perovskite p-i-n planar solar cell can be further promoted to 9.83% when a homogeneous and dense perovskite layer is formed with solvent-engineering technology, showing an impressive open circuit voltage of 1.10 V. This is about 33% higher than that of devices using the conventional spray pyrolysis of NiO onto a transparent conducting glass. These results highlight the importance of a morphology- and crystallization-compatible interlayer toward a high-performance inverted perovskite planar solar cell.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Huang, Yingqiu; Liu, Xiangyu; Yang, Lei; Shi, Changdong; Wu, Yucheng; Tang, Wenming
2018-03-01
Composites of 40Cu/Ag(Invar) were prepared via pressureless sintering and subsequent thermo-mechanical treatment from raw materials of electroless Ag-plated Invar alloy powder and electrolytic Cu powder. Microstructures and properties of the prepared composites were studied to evaluate the effect of the Ag layer on blocking Cu/Invar interfacial diffusion in the composites. The electroless-plated Ag layer was dense, uniform, continuous, and bonded tightly with the Invar alloy substrate. During sintering of the composites, the Ag layer effectively prevented Cu/Invar interfacial diffusion. During cold-rolling, the Ag layer was deformed uniformly with the Invar alloy particles. The composites exhibited bi-continuous network structure and considerably improved properties. After sintering at 775 °C and subsequent thermo-mechanical treatment, the 40Cu/Ag(Invar) composites showed satisfactory comprehensive properties: relative density of 99.0 pct, hardness of HV 253, thermal conductivity of 55.7 W/(m K), and coefficient of thermal expansion of 11.2 × 10-6/K.
Flexible single-layer ionic organic-inorganic frameworks towards precise nano-size separation
NASA Astrophysics Data System (ADS)
Yue, Liang; Wang, Shan; Zhou, Ding; Zhang, Hao; Li, Bao; Wu, Lixin
2016-02-01
Consecutive two-dimensional frameworks comprised of molecular or cluster building blocks over large areas represent ideal candidates for membranes sieving molecules and nano-objects, but challenges remain in both methodology and practical preparation. Here we exploit a new strategy to build soft single-layer ionic organic-inorganic frameworks via electrostatic interaction without preferential binding direction in water. Upon consideration of steric effects and additional interactions, polyanionic clusters acting as connection nodes and cationic pseudorotaxanes acting as bridging monomers connect with each other to form a single-layer ionic self-assembled framework with a 1.4 nm layer thickness. Such soft supramolecular polymer frameworks possess a uniform and adjustable ortho-tetragonal nanoporous structure with pore sizes of 3.4-4.1 nm and exhibit highly convenient solution processability. The stable membranes, maintaining a uniform porous structure, demonstrate precise size-selective separation of semiconductor quantum dots to within 0.1 nm accuracy and may hold promise for practical applications in selective transport, molecular separation and dialysis systems.
A Data Augmentation Approach to Short Text Classification
ERIC Educational Resources Information Center
Rosario, Ryan Robert
2017-01-01
Text classification typically performs best with large training sets, but short texts are very common on the World Wide Web. Can we use resampling and data augmentation to construct larger texts using similar terms? Several current methods exist for working with short text that rely on using external data and contexts, or workarounds. Our focus is…
Mist net effort required to inventory a forest bat species assemblage.
Theodore J. Weller; Danny C. Lee
2007-01-01
Little quantitative information exists about the survey effort necessary to inventory temperate bat species assemblages. We used a bootstrap resampling algorithm to estimate the number of mist net surveys required to capture individuals from 9 species at both study area and site levels using data collected in a forested watershed in northwestern California, USA, during...
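The bootstrap approach described here — repeatedly resampling survey nights with replacement and recording how many surveys are needed before every species in the assemblage has been captured — can be sketched in a few lines. The capture records, species codes, and target assemblage below are illustrative assumptions, not the authors' data.

```python
import random

# Hypothetical capture records: each survey night lists the species caught (illustrative only).
surveys = [
    {"MYLU", "MYEV"}, {"MYLU"}, {"LANO", "MYLU", "COTO"}, {"MYYU"},
    {"EPFU", "MYLU"}, {"MYCA", "MYEV"}, {"LACI"}, {"MYTH", "MYLU"},
    {"ANPA", "LANO"}, {"MYLU", "EPFU"}, {"MYEV", "MYYU"}, {"COTO"},
]
target = set().union(*surveys)   # assume the full assemblage is what was ever caught

def surveys_needed(surveys, target, rng):
    """Resample surveys with replacement until every target species is seen."""
    seen, n = set(), 0
    while seen != target:
        seen |= rng.choice(surveys)
        n += 1
    return n

rng = random.Random(42)
draws = sorted(surveys_needed(surveys, target, rng) for _ in range(2000))
print("median effort:", draws[len(draws) // 2],
      "95th percentile:", draws[int(0.95 * len(draws))])
```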
Long-Term Soil Chemistry Changes in Aggrading Forest Ecosystems
Jennifer D. Knoepp; Wayne T. Swank
1994-01-01
Assessing potential long-term forest productivity requires identification of the processes regulating chemical changes in forest soils. We resampled the litter layer and upper two mineral soil horizons, A and AB/BA, in two aggrading southern Appalachian watersheds 20 yr after an earlier sampling. Soils from a mixed-hardwood watershed exhibited a small but significant...
ERIC Educational Resources Information Center
Nevitt, Johnathan; Hancock, Gregory R.
Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…
Jeffrey T. Walton
2008-01-01
Three machine learning subpixel estimation methods (Cubist, Random Forests, and support vector regression) were applied to estimate urban cover. Urban forest canopy cover and impervious surface cover were estimated from Landsat-7 ETM+ imagery using a higher resolution cover map resampled to 30 m as training and reference data. Three different band combinations (...
Fourier Descriptor Analysis and Unification of Voice Range Profile Contours: Method and Applications
ERIC Educational Resources Information Center
Pabon, Peter; Ternstrom, Sten; Lamarche, Anick
2011-01-01
Purpose: To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. Method: A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the…
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
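A minimal sketch of the sampling importance resampling (SIR) 'predictor - corrector' cycle the abstract refers to, applied to a toy state-space model; the growth and observation equations below are placeholders, not the stand-yield model developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                  # number of particles
particles = rng.normal(10.0, 2.0, N)      # prior draw of the state (e.g., a stand variable)
weights = np.full(N, 1.0 / N)

def predict(x):
    # Placeholder process model: deterministic growth plus noise.
    return x * 1.05 + rng.normal(0.0, 0.5, x.shape)

def likelihood(y, x, sigma=1.0):
    # Placeholder observation model: y observed around the true state.
    return np.exp(-0.5 * ((y - x) / sigma) ** 2)

for y in [11.2, 12.0, 12.9]:              # short synthetic observation series
    particles = predict(particles)        # predictor step
    weights *= likelihood(y, particles)   # corrector step: importance weighting
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)   # SIR resampling proportional to weight
    particles, weights = particles[idx], np.full(N, 1.0 / N)
    print(f"y={y:5.2f}  posterior mean={particles.mean():.2f}")
```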
The Relationship of Cohabitation and Mental Health: A Study of a Young Adult Cohort.
ERIC Educational Resources Information Center
Horwitz, Allan V.; White, Helene Raskin
1998-01-01
Uses a cohort of unmarried young adults who were sampled when they were 18, 21, or 24 years old and resampled seven years later. Results indicate no differences between cohabitators and married couples in levels of depression. Cohabitating men report more alcohol problems than married and single men; cohabitating women reported more alcohol…
Techniques for Down-Sampling a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.
Rangeland exclosures of northeastern Oregon: stories they tell (1936–2004).
Charles Grier Johnson
2007-01-01
Rangeland exclosures installed primarily in the 1960s, but with some from the 1940s, were resampled for changes in plant community structure and composition periodically from 1977 to 2004 on the Malheur, Umatilla, and Wallowa-Whitman National Forests in northeastern Oregon. They allow one to compare vegetation with all-ungulate exclusion (known historically as game...
Collateral Information for Equating in Small Samples: A Preliminary Investigation
ERIC Educational Resources Information Center
Kim, Sooyeon; Livingston, Samuel A.; Lewis, Charles
2011-01-01
This article describes a preliminary investigation of an empirical Bayes (EB) procedure for using collateral information to improve equating of scores on test forms taken by small numbers of examinees. Resampling studies were done on two different forms of the same test. In each study, EB and non-EB versions of two equating methods--chained linear…
Simulation of an active cooling system for photovoltaic modules
NASA Astrophysics Data System (ADS)
Abdelhakim, Lotfi
2016-06-01
Photovoltaic cells are devices that convert solar radiation directly into electricity. However, solar radiation also increases the photovoltaic cell temperature [1] [2], and elevated temperature degrades the cell efficiency and shortens the lifetime of a PV cell. This work reports on a water cooling technique for photovoltaic panels, whereby the cooling system was placed at the front surface of the cells to dissipate excess heat and to block unwanted radiation. By using water as a cooling medium for the photovoltaic solar cells, the overheating of the closed panel is greatly reduced without prejudicing luminosity. The water also acts as a filter, removing a portion of the solar spectrum in the infrared band while transmitting the visible spectrum most useful for PV operation. To improve the cooling system efficiency and the electrical efficiency, a uniform flow rate through the cooling system is required to ensure a uniform distribution of the operating temperature of the PV cells. The aims of this study are to develop a 3D thermal model to simulate the cooling and heat transfer in a photovoltaic panel and to recommend a cooling technique for the PV panel. The velocity, pressure and temperature distributions of the three-dimensional flow across the cooling block were determined using the commercial package Fluent. The second objective of this work is to study the influence of the geometrical dimensions of the panel, the water mass flow rate and the water inlet temperature on the flow distribution and the solar panel temperature. The results obtained by the model are compared with experimental results from testing the prototype of the cooling device.
Fonteyne, Margot; Vercruysse, Jurgen; De Leersnyder, Fien; Besseling, Rut; Gerich, Ad; Oostra, Wim; Remon, Jean Paul; Vervaet, Chris; De Beer, Thomas
2016-09-07
This study focuses on the twin screw granulator of a continuous from-powder-to-tablet production line. Whereas powder dosing into the granulation unit is possible from a container of preblended material, a truly continuous process uses several feeders (each one dosing an individual ingredient) and relies on a continuous blending step prior to granulation. The aim of the current study was to investigate the in-line blending capacity of this twin screw granulator, equipped with conveying elements only. The feasibility of in-line NIR (SentroPAT, Sentronic GmbH, Dresden, Germany) spectroscopy for evaluating the blend uniformity of powders after the granulator was tested. Anhydrous theophylline was used as a tracer molecule and was blended with lactose monohydrate. Theophylline and lactose were each fed from a different feeder into the twin screw granulator barrel. Both homogeneous mixtures and mixing experiments with induced errors were investigated. The in-line spectroscopic analyses showed that the twin screw granulator is a useful tool for in-line blending under different conditions. The blend homogeneity was evaluated by means of a novel statistical method, the moving F-test, in which the variance between two blocks of collected NIR spectra is evaluated. The α- and β-errors of the moving F-test are controlled by using an appropriate block size of spectra. The moving F-test proved to be an appropriate calibration- and maintenance-free method for blend homogeneity evaluation during continuous mixing. Copyright © 2016 Elsevier B.V. All rights reserved.
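The moving F-test idea — comparing the variance of two consecutive blocks of spectra against an F critical value as the blocks slide along the run — can be sketched as below. Reducing each spectrum to a single predicted tracer concentration, the block size, the significance level, and the injected disturbance are all assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic in-line signal: one predicted tracer concentration per NIR spectrum,
# with a dosing disturbance injected around spectrum 300.
signal = rng.normal(10.0, 0.15, 600)
signal[300:330] += 1.0

block = 20          # spectra per block (assumed)
alpha = 0.01
f_crit = stats.f.ppf(1 - alpha / 2, block - 1, block - 1)

for start in range(0, len(signal) - 2 * block, block):
    s1 = np.var(signal[start:start + block], ddof=1)
    s2 = np.var(signal[start + block:start + 2 * block], ddof=1)
    F = max(s1, s2) / min(s1, s2)       # two-sided comparison of block variances
    if F > f_crit:
        print(f"blocks starting at spectrum {start}: F={F:.1f} > {f_crit:.1f} -> inhomogeneity flagged")
```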
NASA Astrophysics Data System (ADS)
Matsumoto, Naoya; Okazaki, Shigetoshi; Takamoto, Hisayoshi; Inoue, Takashi; Terakawa, Susumu
2014-02-01
We propose a method for high precision modulation of the pupil function of a microscope objective lens to improve the performance of multifocal multi-photon microscopy (MMM). To modulate the pupil function, we adopt a spatial light modulator (SLM) and place it at the conjugate position of the objective lens. The SLM can generate an arbitrary number of spots to excite the multiple fluorescence spots (MFS) at the desired positions and intensities by applying an appropriate computer-generated hologram (CGH). This flexibility allows us to control the MFS according to the photobleaching level of a fluorescent protein and phototoxicity of a specimen. However, when a large number of excitation spots are generated, the intensity distribution of the MFS is significantly different from the one originally designed due to misalignment of the optical setup and characteristics of the SLM. As a result, the image of a specimen obtained using laser scanning for the MFS has block noise segments because the SLM could not generate a uniform MFS. To improve the intensity distribution of the MFS, we adaptively redesigned the CGH based on the observed MFS. We experimentally demonstrate an improvement in the uniformity of a 10 × 10 MFS grid using a dye solution. The simplicity of the proposed method will allow it to be applied for calibration of MMM before observing living tissue. After the MMM calibration, we performed laser scanning with two-photon excitation to observe a real specimen without detecting block noise segments.
Tumor gene expression and prognosis in breast cancer patients with 10 or more positive lymph nodes.
Cobleigh, Melody A; Tabesh, Bita; Bitterman, Pincas; Baker, Joffre; Cronin, Maureen; Liu, Mei-Lan; Borchik, Russell; Mosquera, Juan-Miguel; Walker, Michael G; Shak, Steven
2005-12-15
This study, along with two others, was done to develop the 21-gene Recurrence Score assay (Oncotype DX) that was validated in a subsequent independent study and is used to aid decision making about chemotherapy in estrogen receptor (ER)-positive, node-negative breast cancer patients. Patients with ≥10 nodes diagnosed from 1979 to 1999 were identified. RNA was extracted from paraffin blocks, and expression of 203 candidate genes was quantified using reverse transcription-PCR (RT-PCR). Seventy-eight patients were studied. As of August 2002, 77% of patients had distant recurrence or breast cancer death. Univariate Cox analysis of clinical and immunohistochemistry variables indicated that HER2/immunohistochemistry, number of involved nodes, progesterone receptor (PR)/immunohistochemistry (% cells), and ER/immunohistochemistry (% cells) were significantly associated with distant recurrence-free survival (DRFS). Univariate Cox analysis identified 22 genes associated with DRFS. Higher expression correlated with shorter DRFS for the HER2 adaptor GRB7 and the macrophage marker CD68. Higher expression correlated with longer DRFS for tumor protein p53-binding protein 2 (TP53BP2) and the ER axis genes PR and Bcl2. Multivariate methods, including stepwise variable selection and bootstrap resampling of the Cox proportional hazards regression model, identified several genes, including TP53BP2 and Bcl2, as significant predictors of DRFS. Tumor gene expression profiles of archival tissues, some more than 20 years old, provide significant information about risk of distant recurrence even among patients with 10 or more nodes.
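The bootstrap part of the variable-selection strategy — refit the survival model on resampled patients and count how often each gene is retained as significant — can be sketched as follows. The expression matrix, follow-up times, and `fit_cox_pvalue` helper are synthetic, hypothetical stand-ins (a crude correlation-based statistic rather than a real Cox fit), used only so the sketch runs without a survival-analysis library.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n_patients, genes = 78, ["GRB7", "CD68", "TP53BP2", "BCL2"]
expr = rng.normal(size=(n_patients, len(genes)))      # synthetic expression matrix
time = rng.exponential(5.0, n_patients)               # synthetic follow-up times (years)
event = rng.random(n_patients) < 0.77                 # synthetic distant-recurrence indicator

def fit_cox_pvalue(x, t, e):
    # Hypothetical stand-in for a univariate Cox fit: a crude correlation-based
    # z statistic, not a proportional-hazards regression.
    z = np.corrcoef(x, np.where(e, -t, t))[0, 1] * sqrt(len(t))
    return 1.0 - erf(abs(z) / sqrt(2.0))              # two-sided normal p-value

B, counts = 200, np.zeros(len(genes))
for _ in range(B):
    idx = rng.integers(0, n_patients, n_patients)     # bootstrap resample of patients
    for j in range(len(genes)):
        if fit_cox_pvalue(expr[idx, j], time[idx], event[idx]) < 0.05:
            counts[j] += 1

for g, c in zip(genes, counts):
    print(f"{g}: selected in {100 * c / B:.0f}% of bootstrap samples")
```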
Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures
NASA Astrophysics Data System (ADS)
Kang, Kibum; Lee, Kan-Heng; Han, Yimo; Gao, Hui; Xie, Saien; Muller, David A.; Park, Jiwoong
2017-10-01
High-performance semiconductor films with vertical compositions that are designed to atomic-scale precision provide the foundation for modern integrated circuitry and novel materials discovery. One approach to realizing such films is sequential layer-by-layer assembly, whereby atomically thin two-dimensional building blocks are vertically stacked, and held together by van der Waals interactions. With this approach, graphene and transition-metal dichalcogenides--which represent one- and three-atom-thick two-dimensional building blocks, respectively--have been used to realize previously inaccessible heterostructures with interesting physical properties. However, no large-scale assembly method exists at present that maintains the intrinsic properties of these two-dimensional building blocks while producing pristine interlayer interfaces, thus limiting the layer-by-layer assembly method to small-scale proof-of-concept demonstrations. Here we report the generation of wafer-scale semiconductor films with a very high level of spatial uniformity and pristine interfaces. The vertical composition and properties of these films are designed at the atomic scale using layer-by-layer assembly of two-dimensional building blocks under vacuum. We fabricate several large-scale, high-quality heterostructure films and devices, including superlattice films with vertical compositions designed layer-by-layer, batch-fabricated tunnel device arrays with resistances that can be tuned over four orders of magnitude, band-engineered heterostructure tunnel diodes, and millimetre-scale ultrathin membranes and windows. The stacked films are detachable, suspendable and compatible with water or plastic surfaces, which will enable their integration with advanced optical and mechanical systems.
Liu, Spencer S; John, Raymond S
2010-01-01
Ultrasound guidance for regional anesthesia has increased in popularity. However, the cost of ultrasound versus nerve stimulator guidance is controversial, as multiple and varying cost inputs are involved. Sensitivity analysis allows modeling of different scenarios and determination of the relative importance of each cost input for a given scenario. We modeled cost per patient of ultrasound versus nerve stimulator using single-factor sensitivity analysis for 4 different clinical scenarios designed to span the expected financial impact of ultrasound guidance. The primary cost factors for ultrasound were revenue from billing for ultrasound (85% of variation in final cost), number of patients examined per ultrasound machine (10%), and block success rate (2.6%). In contrast, the most important input factors for nerve stimulator were the success rate of the nerve stimulator block (89%) and the amount of liability payout for failed airway due to rescue general anesthesia (9%). Depending on clinical scenario, ultrasound was either a profit or cost center. If revenue is generated, then ultrasound-guided blocks consistently become a profit center regardless of clinical scenario in our model. Without revenue, the clinical scenario dictates the cost of ultrasound. In an ambulatory setting, ultrasound is highly competitive with nerve stimulator and requires at least a 96% success rate with nerve stimulator before becoming more expensive. In a hospitalized scenario, ultrasound is consistently more expensive as the uniform use of general anesthesia and hospitalization negate any positive cost effects from greater efficiency with ultrasound.
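The single-factor sensitivity analysis described here — hold all cost inputs at baseline and sweep one input at a time across its plausible range to see how far the cost per patient moves — can be sketched as below. The cost function, input names, and ranges are illustrative assumptions, not the authors' model.

```python
# Illustrative single-factor sensitivity analysis for cost per ultrasound-guided block.
baseline = {
    "revenue_per_block": 30.0,     # billing revenue per block (assumed, $)
    "machine_cost": 40000.0,       # purchase price of the ultrasound unit (assumed, $)
    "patients_per_machine": 4000,  # blocks performed over the machine's life (assumed)
    "failure_cost": 300.0,         # cost of rescue general anesthesia (assumed, $)
    "success_rate": 0.97,          # block success rate (assumed)
}

def cost_per_patient(p):
    equipment = p["machine_cost"] / p["patients_per_machine"]
    failures = (1.0 - p["success_rate"]) * p["failure_cost"]
    return equipment + failures - p["revenue_per_block"]

ranges = {
    "revenue_per_block": (0.0, 60.0),
    "patients_per_machine": (1000, 8000),
    "success_rate": (0.90, 0.995),
}

base = cost_per_patient(baseline)
for name, (lo, hi) in ranges.items():
    swings = [cost_per_patient(dict(baseline, **{name: v})) - base for v in (lo, hi)]
    print(f"{name:22s} swing: {min(swings):+8.2f} to {max(swings):+8.2f} $ per patient")
```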
Mapping from Space - Ontology Based Map Production Using Satellite Imageries
NASA Astrophysics Data System (ADS)
Asefpour Vakilian, A.; Momeni, M.
2013-09-01
Determining the maximum feature-extraction capability of satellite imagery, based on an ontology procedure using cartographic feature definitions, is the main objective of this research. Therefore, a special ontology has been developed to extract the maximum volume of information available in different high resolution satellite imageries and compare it to the map information layers required at each specific scale according to the unified specification for surveying and mapping. Ontology seeks to provide an explicit and comprehensive classification of entities in all spheres of being. This study proposes a new method for automatic maximum map feature extraction and reconstruction from high resolution satellite images. For example, in order to extract building blocks to produce 1 : 5000 scale and smaller maps, the road networks located around the building blocks should be determined. Thus, a new building index has been developed based on concepts obtained from the ontology. Building blocks have been extracted with a completeness of about 83%. Then, road networks have been extracted and reconstructed to create a uniform network with less discontinuity. In this case, building blocks have been extracted with proper performance and the false positive value from the confusion matrix was reduced by about 7%. Results showed that vegetation cover and water features have been extracted completely (100%) and about 71% of limits have been extracted. Also, the proposed method in this article is able to produce a map at the largest scale possible, equal to or smaller than 1 : 5000, from any multispectral high resolution satellite imagery.
Yoon, Kyong Sup; Previte, Domenic J.; Hodgdon, Hilliary E.; Poole, Bryan C.; Kwon, Deok Ho; El-Ghar, Gamal E. Abo; Lee, Si Hyeock; Clark, J. Marshall
2014-01-01
The study examines the extent and frequency of a knockdown-type resistance allele (kdr type) in North American populations of human head lice. Lice were collected from 32 locations in Canada and the United States. DNA was extracted from individual lice and used to determine their zygosity using the serial invasive signal amplification technique to detect the kdr-type T917I (TI) mutation, which is most responsible for nerve insensitivity that results in the kdr phenotype and permethrin resistance. Previously sampled sites were resampled to determine if the frequency of the TI mutation was changing. The TI frequency was also reevaluated using a quantitative sequencing method on pooled DNA samples from selected sites to validate this population genotyping method. Genotyping substantiated that TI occurs at high levels in North American lice (88.4%). Overall, the TI frequency in U.S. lice was 84.4% from 1999 to 2009, increased to 99.6% from 2007 to 2009, and was 97.1% in Canadian lice in 2008. Genotyping results using the serial invasive signal amplification reaction (99.54%) and quantitative sequencing (99.45%) techniques were highly correlated. Thus, the frequencies of TI in North American head louse populations were found to be uniformly high, which may be due to the high selection pressure from the intensive and widespread use of the pyrethrins- or pyrethroid-based pediculicides over many years, and is likely a main cause of increased pediculosis and failure of pyrethrins- or permethrin-based products in Canada and the United States. Alternative approaches to treatment of head lice infestations are critically needed. PMID:24724296
Super-resolution mapping using multi-viewing CHRIS/PROBA data
NASA Astrophysics Data System (ADS)
Dwivedi, Manish; Kumar, Vinay
2016-04-01
High-spatial-resolution Remote Sensing (RS) data provide detailed information which enables high-definition visual image analysis of earth surface features. These data sets also support improved information extraction capabilities at a fine scale. In order to improve the spatial resolution of coarser-resolution RS data, the Super Resolution Reconstruction (SRR) technique, which operates on multi-angular image sequences, has become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m in the hope of obtaining a better land cover classification. Various SR approaches, namely Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC), were chosen for this study. Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures including peak signal-to-noise ratio and structural similarity are used for the evaluation of the image quality. It was observed that the SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify both the SRR data and data resampled to 6 m spatial resolution using bi-cubic interpolation. A comparative analysis between the classified data of the bicubic-interpolated and SR-derived images of CHRIS/PROBA shows that the SR-derived classified data achieve a significant improvement of 10-12% in overall accuracy. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.
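The quantitative comparison mentioned here rests on standard image-quality metrics; a minimal sketch of the PSNR computation, together with a cubic-spline reference upsampling, is shown below. The images and scale factor are placeholders rather than the CHRIS/PROBA data, and `ndimage.zoom` with `order=3` is used as a cubic-interpolation stand-in.

```python
import numpy as np
from scipy import ndimage

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two images of equal shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
high_res = rng.integers(0, 256, (90, 90)).astype(float)   # stand-in for a reference scene
low_res = high_res[::3, ::3]                               # simulated coarse (3x) grid

# Cubic-spline interpolation back to the fine grid as a bicubic-style reference.
cubic_up = ndimage.zoom(low_res, 3, order=3)

print("PSNR of cubic upsampling vs reference:", round(psnr(high_res, cubic_up), 2), "dB")
```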
New constraints on Lyman-α opacity with a sample of 62 quasars at z > 5.7
NASA Astrophysics Data System (ADS)
Bosman, Sarah E. I.; Fan, Xiaohui; Jiang, Linhua; Reed, Sophie; Matsuoka, Yoshiki; Becker, George; Haehnelt, Martin
2018-05-01
We present measurements of the mean and scatter of the IGM Lyman-α opacity at 4.9 < z < 6.1 along the lines of sight of 62 quasars at zsource > 5.7, the largest sample assembled at these redshifts to date by a factor of two. The sample size enables us to sample cosmic variance at these redshifts more robustly than ever before. The spectra used here were obtained by the SDSS, DES-VHS and SHELLQs collaborations, drawn from the ESI and X-Shooter archives, reused from previous studies or observed specifically for this work. We measure the effective optical depth of Lyman-α in bins of 10, 30, 50 and 70 cMpc h-1, construct cumulative distribution functions under two treatments of upper limits on flux and explore an empirical analytic fit to residual Lyman-α transmission. We verify the consistency of our results with those of previous studies via bootstrap re-sampling and confirm the existence of tails towards high values in the opacity distributions, which may persist down to z ˜ 5.2. Comparing our results with predictions from cosmological simulations, we find further strong evidence against models that include a spatially uniform ionizing background and temperature-density relation. We also compare to IGM models that include either a fluctuating UVB dominated by rare quasars or temperature fluctuations due to patchy reionization. Although both models produce better agreement with the observations, neither fully captures the observed scatter in IGM opacity. Our sample of 62 z > 5.7 quasar spectra opens many avenues for future study of the reionisation epoch.
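The core quantity here, the effective optical depth in a bin, is τ_eff = −ln⟨F⟩, with the sightline-to-sightline scatter assessed by bootstrap re-sampling; a minimal sketch under those standard definitions follows, using synthetic mean transmitted fluxes in place of the quasar spectra.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic mean transmitted fluxes <F> in one redshift bin, one value per sightline.
mean_flux = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=62)
mean_flux = np.clip(mean_flux, 1e-4, 1.0)     # crude stand-in for upper limits on flux

tau_eff = -np.log(mean_flux)                  # effective optical depth per sightline

# Bootstrap re-sampling over sightlines to estimate the uncertainty of the median tau_eff.
boot = np.array([
    np.median(rng.choice(tau_eff, size=tau_eff.size, replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [16, 84])
med = np.median(tau_eff)
print(f"median tau_eff = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
```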
Alsabery, A I; Sheremet, M A; Chamkha, A J; Hashim, I
2018-05-09
The problem of steady, laminar natural convection in a discretely heated and cooled square cavity filled by an alumina/water nanofluid with a centered heat-conducting solid block, under the effects of an inclined uniform magnetic field, Brownian diffusion and thermophoresis, is studied numerically by using the finite difference method. Isothermal heaters and coolers are placed along the vertical walls and the bottom horizontal wall, while the upper horizontal wall is kept adiabatic. Water-based nanofluids with alumina nanoparticles are chosen for investigation. The governing parameters of this study are the Rayleigh number (10³ ≤ Ra ≤ 10⁶), the Hartmann number (0 ≤ Ha ≤ 50), the thermal conductivity ratio (0.28 ≤ k_w ≤ 16), the centered solid block size (0.1 ≤ D ≤ 0.7) and the nanoparticle volume fraction (0 ≤ ϕ ≤ 0.04). The developed computational code is validated comprehensively using a grid independency test and the numerical and experimental data of other authors. The obtained results reveal that the effects of the thermal conductivity ratio, the centered solid block size and the nanoparticle volume fraction on the heat transfer rate are non-linear. Therefore, it is possible to find optimal parameters for heat transfer enhancement depending on the considered system. Moreover, high values of the Rayleigh number and nanoparticle volume fraction lead to homogeneous distributions of nanoparticles inside the cavity. A high concentration of nanoparticles can be found near the centered solid block where thermal plumes from the local heaters interact.
Niyama, Kouhei; Ide, Naoto; Onoue, Kaori; Okabe, Takahiro; Wakitani, Shigeyuki; Takagi, Mutsumi
2011-09-01
The combination of a β-tricalcium phosphate (βTCP) block with a scaffold-free chondrocyte sheet formed by the centrifugation of chondrocytes in a well was investigated with the aim of constructing an osteochondral-like structure. Human and porcine articular cartilage chondrocytes were respectively centrifuged in a 96-well plate or cell culture insert (0.32 cm²) that was set in a 24-well plate, cultivated in the respective vessel for 3 weeks, and the cell sheets were harvested. In some cases, a cylindrical βTCP block (diameter 5 mm, height 3 mm) was placed on the sheet on days 1-7. The sheet size, cell number, and sulfated glycosaminoglycan accumulation were determined. The use of a 96-well plate for adhesion rather than suspension culture and the initial centrifugation of a well containing cells were crucial to obtaining a uniformly thick cell sheet. The glycosaminoglycan density of the harvested cell sheet was comparable to that of the pellet culture. An inoculum cell number of more than 31 × 10⁵ cells tended to result in a curved cell sheet. Culture involving 18.6 × 10⁵ cells and the 96-well plate for adhesion culture showed no curving of the cell sheet (thickness of 0.85 mm), and these were found to be the best of the culture conditions tested. The timing of the addition of a βTCP block to the cell sheet (1-7 days) markedly affected the balance between the thickness of cell sheet parts on and in the βTCP block. Centrifugation and subsequent cultivation of chondrocytes (18.6 × 10⁵ cells) in a 96-well plate for adhesion culture led to the production of a scaffold-free cartilage-like cell sheet with a thickness of 0.85 mm. A combined osteochondral-like structure was produced by putting a βTCP block on the cell sheet. The thickness of the cell sheet on the βTCP block and the binding strength between the cell sheet and the βTCP block could be optimized by adjusting the inoculum cell number and timing of βTCP block addition.
Contrasting natural regeneration and tree planting in fourteen North American cities
David J. Nowak
2012-01-01
Field data from randomly located plots in 12 cities in the United States and Canada were used to estimate the proportion of the existing tree population that was planted or occurred via natural regeneration. In addition, two cities (Baltimore and Syracuse) were recently re-sampled to estimate the proportion of newly established trees that were planted. Results for the...
Grain Size and Parameter Recovery with TIMSS and the General Diagnostic Model
ERIC Educational Resources Information Center
Skaggs, Gary; Wilkins, Jesse L. M.; Hein, Serge F.
2016-01-01
The purpose of this study was to explore the degree of grain size of the attributes and the sample sizes that can support accurate parameter recovery with the General Diagnostic Model (GDM) for a large-scale international assessment. In this resampling study, bootstrap samples were obtained from the 2003 Grade 8 TIMSS in Mathematics at varying…
NASA Astrophysics Data System (ADS)
Coupon, Jean; Leauthaud, Alexie; Kilbinger, Martin; Medezinski, Elinor
2017-07-01
SWOT (Super W Of Theta) computes two-point statistics for very large data sets, based on "divide and conquer" algorithms: mainly, but not limited to, data storage in binary trees, approximation at large scales, parallelization (Open MPI), and bootstrap and jackknife resampling methods "on the fly". It currently supports projected and 3D galaxy auto- and cross-correlations, galaxy-galaxy lensing, and weighted histograms.
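The "on the fly" jackknife mentioned here amounts to dropping one spatial sub-region at a time and re-evaluating the statistic; a minimal delete-one jackknife sketch for a scalar estimate is shown below, with per-region values standing in for a correlation amplitude rather than anything computed by SWOT itself.

```python
import numpy as np

rng = np.random.default_rng(5)
# Per-region estimates of some two-point statistic (synthetic stand-ins).
regions = rng.normal(1.0, 0.2, size=32)

full = regions.mean()
# Delete-one jackknife: recompute the estimate leaving one region out each time.
loo = np.array([np.delete(regions, i).mean() for i in range(regions.size)])
n = regions.size
jack_var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

print(f"estimate = {full:.3f} +/- {np.sqrt(jack_var):.3f} (jackknife)")
```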
Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods
ERIC Educational Resources Information Center
MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason
2004-01-01
The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…
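A minimal sketch of both routes described here — the normal-theory z test that divides the indirect effect ab by its (Sobel) standard error, and percentile bootstrap confidence limits built from resampled estimates of ab — on synthetic mediation data. The OLS helper, the data, and the simplified second regression (y on m only) are assumptions for illustration, not the authors' materials.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)              # mediator
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # outcome

def slope_and_se(pred, resp):
    """OLS slope of resp on pred (with intercept) and its standard error."""
    X = np.column_stack([np.ones_like(pred), pred])
    beta, res, *_ = np.linalg.lstsq(X, resp, rcond=None)
    resid = resp - X @ beta
    s2 = resid @ resid / (len(resp) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

a, se_a = slope_and_se(x, m)
b, se_b = slope_and_se(m, y)                  # simplification: ignores x in this regression
sobel_se = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"z = {a * b / sobel_se:.2f}  (compare with 1.96)")

# Percentile bootstrap confidence limits for the indirect effect ab.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(slope_and_se(x[idx], m[idx])[0] * slope_and_se(m[idx], y[idx])[0])
print("95% bootstrap CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
```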
Air-to-Air Missile Vector Scoring
2012-03-22
Excerpts from the acronym list and text: SIR (sampling-importance resampling), EPF (extended particle filter), UPF (unscented particle filter). ...an extended particle filter (EPF) or an unscented particle filter (UPF) [20]. The basic concept is to apply a bank of N EKF or UKF filters to move particles from... Merwe, Doucet, Freitas and Wan provide a comprehensive discussion on the EPF and UPF, including algorithms for implementation [20].
Synchronizing data from irregularly sampled sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uluyol, Onder
A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
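In the spirit of the claim here — re-sample each sensor's irregular samples onto a common, faster uniform grid and thereby synchronize the streams — a minimal sketch using linear interpolation is shown below; the sensor streams and target rate are invented for illustration and are not part of the patented method.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two sensors sampled at irregular instants and different average rates.
t1 = np.sort(rng.uniform(0.0, 10.0, 40)); v1 = np.sin(t1)
t2 = np.sort(rng.uniform(0.0, 10.0, 25)); v2 = np.cos(t2)

# Common uniform grid, faster than either sensor's average rate.
rate = 20.0                                   # Hz (assumed)
grid = np.arange(0.0, 10.0, 1.0 / rate)

v1_sync = np.interp(grid, t1, v1)             # re-sample sensor 1 onto the grid
v2_sync = np.interp(grid, t2, v2)             # re-sample sensor 2 onto the grid

# The two streams are now synchronized sample-for-sample on the same time base.
print(np.round(v1_sync[:5], 3), np.round(v2_sync[:5], 3))
```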
J. Travis Swaim; Daniel C. Dey; Michael R. Saunders; Dale R. Weigel; Christopher D. Thornton; John M. Kabrick; Michael A. Jenkins
2016-01-01
We resampled plots from a repeated measures study implemented on the Hoosier National Forest (HNF) in southern Indiana in 1988 to investigate the influence of site and seedling physical attributes on height growth and establishment success of oak species (Quercus spp.) reproduction in stands regenerated by the clearcut method. Before harvest, an...
USDA-ARS?s Scientific Manuscript database
Better understanding agriculture’s effect on shallow groundwater quality is needed on the southern Idaho, Twin Falls irrigation tract. In 1999 and 2002-2007 we resampled 10 of the 15 tunnel drains monitored in a late-1960s study to determine the influence of time on NO3-N, dissolved reactive P (DRP)...
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
Analysis of automobile engine cylinder pressure and rotation speed from engine body vibration signal
NASA Astrophysics Data System (ADS)
Wang, Yuhua; Cheng, Xiang; Tan, Haishu
2016-01-01
In order to improve the vibration-signal processing method used in instruments that measure engine cylinder pressure and engine rotation speed, the engine cylinder pressure varying over the engine working cycle is regarded as the main exciting force for the forced vibration of the engine block. The forced vibration caused by the engine cylinder pressure appears as a low-frequency waveform that varies with the cylinder pressure synchronously and steadily in the time domain, and as low-frequency, high-energy discrete harmonic spectral lines in the frequency domain. The engine cylinder pressure and the rotation speed can be extracted from the measured engine block vibration signal by low-pass filtering analysis in the time domain or by FFT analysis in the frequency domain; the low-pass filtering analysis in the time domain is suitable not only for an engine in uniform revolution but also for an engine in uneven revolution. This provides a practical and convenient way to design engine rotation speed and cylinder pressure measurement instruments.
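The frequency-domain route described here — take the FFT of the block vibration signal, pick out the dominant low-frequency harmonic line, and read the engine speed off the firing frequency — can be sketched on a synthetic signal as below. The firing frequency, sampling rate, band limit, and the 4-stroke/4-cylinder conversion are illustrative assumptions.

```python
import numpy as np

fs = 2000.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
f_fire = 60.0                     # firing (cylinder-pressure) frequency, Hz (assumed)

# Synthetic block vibration: low-frequency pressure-driven component plus
# a high-frequency structural resonance and noise.
signal = (1.0 * np.sin(2 * np.pi * f_fire * t)
          + 0.4 * np.sin(2 * np.pi * 2 * f_fire * t)
          + 0.3 * np.sin(2 * np.pi * 800.0 * t)
          + 0.2 * np.random.default_rng(8).normal(size=t.size))

spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

low = freqs < 200.0                               # keep the low-frequency band only
f_est = freqs[low][np.argmax(spec[low])]          # dominant harmonic line

# Assuming a 4-stroke, 4-cylinder engine (2 firings per crankshaft revolution),
# crankshaft speed in rpm is firing frequency * 60 / 2.
print(f"estimated firing frequency: {f_est:.1f} Hz -> {f_est * 60 / 2:.0f} rpm")
```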
Araldite as an Embedding Medium for Electron Microscopy
Glauert, Audrey M.; Glauert, R. H.
1958-01-01
Epoxy resins are suitable media for embedding for electron microscopy, as they set uniformly with virtually no shrinkage. A mixture of araldite epoxy resins has been developed which is soluble in ethanol, and which yields a block of the required hardness for thin sectioning. The critical modifications to the conventional mixtures are the choice of a plasticized resin in conjunction with an aliphatic anhydride as the hardener. The hardness of the final block can be varied by incorporating additional plasticizer, and the rate of setting can be controlled by the use of an amine accelerator. The properties of the araldite mixture can be varied quite widely by adjusting the proportions of the various constituents. The procedure for embedding biological specimens is similar to that employed with methacrylates, although longer soaking times are recommended to ensure the complete penetration of the more viscous epoxy resin. An improvement in the preservation of the fine structure of a variety of specimens has already been reported, and a typical electron micrograph illustrates the present paper. PMID:13525433
Deposition of Nanostructured Thin Film from Size-Classified Nanoparticles
NASA Technical Reports Server (NTRS)
Camata, Renato P.; Cunningham, Nicholas C.; Seol, Kwang Soo; Okada, Yoshiki; Takeuchi, Kazuo
2003-01-01
Materials comprising nanometer-sized grains (approximately 1-50 nm) exhibit properties dramatically different from those of their homogeneous and uniform counterparts. These properties vary with size, shape, and composition of nanoscale grains. Thus, nanoparticles may be used as building blocks to engineer tailor-made artificial materials with desired properties, such as non-linear optical absorption, tunable light emission, charge-storage behavior, selective catalytic activity, and countless other characteristics. This bottom-up engineering approach requires exquisite control over nanoparticle size, shape, and composition. We describe the design and characterization of an aerosol system conceived for the deposition of size-classified nanoparticles whose performance is consistent with these strict demands. A nanoparticle aerosol is generated by laser ablation and sorted according to size using a differential mobility analyzer. Nanoparticles within a chosen window of sizes (e.g., (8.0 ± 0.6) nm) are deposited electrostatically on a surface forming a film of the desired material. The system allows the assembly and engineering of thin films using size-classified nanoparticles as building blocks.
Driven translocation of Polymer through a nanopore: effect of heterogeneous flexibility
NASA Astrophysics Data System (ADS)
Adhikari, Ramesh; Bhattacharya, Aniket
2014-03-01
We have studied translocation of a model bead-spring polymer through a nanopore whose building blocks consist of alternate stiff and flexible segments and variable elastic bond potentials. For the case of uniform spring potential translocation of a symmetric periodic stiff-flexible chain of contour length N and segment length m (mod(N,2m)=0), we find that the end-to-end distance and the mean first passage time (MFPT) have weak dependence on the length m. The characteristic periodic pattern of the waiting time distribution captures the stiff and flexible segments of the chain with stiff segments taking longer time to translocate. But when we vary both the elastic bond energy, and the bending energy, as well as the length of stiff/flexible segments, we discover novel patterns in the waiting time distribution which brings out structural information of the building blocks of the translocating chain. Partially supported by UCF Office of Research and Commercialization & College of Science SEED grant.
Multistage switching hardware and software implementations for student experiment purpose
NASA Astrophysics Data System (ADS)
Sani, A.; Suherman
2018-02-01
Current communication and internet networks are underpinned by the switching technologies that interconnect one network to the others. Students' understanding of networks relies on how well they grasp the underlying theories; however, studying the theories without touching real hardware may leave gaps in their overall knowledge. This paper reports the progress of a multistage switching design and implementation for student laboratory activities. The hardware and software designs are based on a three-stage Clos switching architecture built from modular 2x2 switches, controlled by an Arduino microcontroller. The designed modules can also be extended to Batcher and banyan switches, and can operate as circuit- and packet-switching systems. The circuit analysis and simulation show that the blocking probability for each switch combination can be obtained by generating random or patterned traffic. The mathematical model and the simulation analysis differ by 16.4% in blocking probability when the traffic generation is uniform. The circuit design components and interfacing solutions have been identified to allow the next implementation step.
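For readers who want a quick blocking-probability figure for a three-stage Clos fabric like the one described, a standard back-of-the-envelope route is Lee's graph approximation (a textbook estimate, not necessarily the mathematical model used in this paper); a minimal sketch for small 2x2 first-stage switches follows, with the offered load per input as an assumed parameter.

```python
# Lee's graph approximation for the blocking probability of a 3-stage Clos network.
# A standard textbook estimate, offered here only for comparison purposes.

def clos_blocking(p_input, n_inputs_per_switch, n_middle_switches):
    """B = [1 - (1 - p')^2]^m with interstage link occupancy p' = p * n / m."""
    p_link = min(p_input * n_inputs_per_switch / n_middle_switches, 1.0)
    return (1.0 - (1.0 - p_link) ** 2) ** n_middle_switches

# 2x2 first-stage switches (n = 2), assumed per-input offered load of 0.5,
# sweeping the number of middle-stage switches.
for m in (2, 3, 4):
    print(f"m={m} middle switches: B ≈ {clos_blocking(0.5, 2, m):.3f}")
```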