Smoothing and gap-filling of high resolution multi-spectral time series: Example of Landsat data
NASA Astrophysics Data System (ADS)
Vuolo, Francesco; Ng, Wai-Tim; Atzberger, Clement
2017-05-01
This paper introduces a novel methodology for generating 15-day, smoothed and gap-filled time series of high spatial resolution data. The approach is based on templates from high quality observations to fill data gaps that are subsequently filtered. We tested our method for one large contiguous area (Bavaria, Germany) and for nine smaller test sites in different ecoregions of Europe using Landsat data. Overall, our results match the validation dataset to a high degree of accuracy with a mean absolute error (MAE) of 0.01 for visible bands, 0.03 for near-infrared and 0.02 for short-wave-infrared. Occasionally, the reconstructed time series are affected by artefacts due to undetected clouds. Less frequently, larger uncertainties occur as a result of extended periods of missing data. Reliable cloud masks are highly warranted for making full use of time series.
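The template-based details are specific to the paper, but the generic two-step pattern it describes (fill data gaps, then filter the series) can be sketched as follows. The interpolation scheme, window size, and reflectance values here are illustrative stand-ins, not the authors' method:

```python
import numpy as np

def fill_and_smooth(t, y, window=3):
    """Fill NaN gaps by linear interpolation, then smooth with a moving average.

    A generic stand-in for the template-based gap filling and subsequent
    filtering described in the abstract; not the authors' algorithm.
    """
    y = np.asarray(y, dtype=float)
    t = np.asarray(t, dtype=float)
    good = ~np.isnan(y)
    filled = np.interp(t, t[good], y[good])        # bridge cloud gaps
    kernel = np.ones(window) / window
    # pad at the edges so the moving average preserves series length
    padded = np.pad(filled, window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# hypothetical 15-day composite of a NIR reflectance series with two cloud gaps
t = np.arange(0, 150, 15)
y = np.array([0.30, 0.32, np.nan, 0.38, 0.41, np.nan, 0.37, 0.34, 0.31, 0.30])
smoothed = fill_and_smooth(t, y)
```

In practice the hard part is upstream of this sketch: as the abstract notes, undetected clouds leave artefacts that no interpolation can repair, which is why reliable cloud masks matter.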
A study on suppressing transmittance fluctuations for air-gapped Glan-type polarizing prisms
NASA Astrophysics Data System (ADS)
Zhang, Chuanfa; Li, Dailin; Zhu, Huafeng; Li, Chuanzhi; Jiao, Zhiyong; Wang, Ning; Xu, Zhaopeng; Wang, Xiumin; Song, Lianke
2018-05-01
Light intensity transmittance is a key parameter in the design of polarizing prisms, yet its measured curves as a function of spatial incident angle sometimes present periodic fluctuations. Here, we propose a novel method for completely suppressing these fluctuations by setting a glued error angle in the air gap of Glan-Taylor prisms. The proposal consists of an accurate formula for the intensity transmittance of Glan-Taylor prisms, a numerical simulation and a contrast experiment on Glan-Taylor prisms for analyzing the causes of the fluctuations, and a simple method for accurately measuring the glued error angle. The results indicate that when the set glued error angle is larger than the critical angle for a given polarizing prism, the fluctuations can be completely suppressed and a smooth intensity transmittance curve obtained. In addition, the critical angle in the air gap for suppressing the fluctuations decreases as the beam spot size increases. This method has the advantage of placing fewer demands on prism positioning in optical systems.
Distinct eye movement patterns enhance dynamic visual acuity.
Palidis, Dimitrios J; Wyder-Hodge, Pearson A; Fooken, Jolande; Spering, Miriam
2017-01-01
Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics-eye latency, acceleration, velocity gain, position error-and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns-minimizing eye position error, tracking smoothly, and inhibiting reverse saccades-were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA.
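The pursuit measures named above (velocity gain, position error) can be computed from recorded traces roughly as follows. The sampling rate, target speed, latency, and analysis window are hypothetical values, not those of the study:

```python
import numpy as np

# Hypothetical illustration of two pursuit kinematics measures:
# velocity gain (eye speed / target speed) and RMS position error.
fs = 1000.0                                  # sampling rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
target = 15.0 * t                            # target moving at 15 deg/s
eye = 15.0 * 0.9 * np.clip(t - 0.12, 0.0, None)  # 120 ms latency, gain 0.9

vel_target = np.gradient(target, 1.0 / fs)
vel_eye = np.gradient(eye, 1.0 / fs)
steady = t > 0.4                             # steady-state pursuit only
velocity_gain = vel_eye[steady].mean() / vel_target[steady].mean()
rms_position_error = np.sqrt(np.mean((eye[steady] - target[steady]) ** 2))
```

With these synthetic traces the gain comes out near 0.9 and the position error grows over time, which is exactly the kind of lag that a corrective catch-up saccade would normally cancel.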
Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS
NASA Astrophysics Data System (ADS)
Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin
2015-08-01
Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency for finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect in correcting mid-spatial-frequency errors. Some experiments used the same pitch tool at different spinning speeds; others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency for different spinning speeds and different tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out after processing in spin motion, and that the number of smoothing passes can be estimated by the model before processing. This method was also applied to smooth an aspherical component that exhibited an obvious mid-spatial-frequency error after magnetorheological finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.
Direct evidence for a position input to the smooth pursuit system.
Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe
2005-07-01
When objects move in our environment, the orientation of the visual axis in space requires the coordination of two types of eye movements: saccades and smooth pursuit. The principal input to the saccadic system is position error, whereas it is velocity error for the smooth pursuit system. Recently, it has been shown that catch-up saccades to moving targets are triggered and programmed by using velocity error in addition to position error. Here, we show that, when a visual target is flashed during ongoing smooth pursuit, it evokes a smooth eye movement toward the flash. The velocity of this evoked smooth movement is proportional to the position error of the flash; it is neither influenced by the velocity of the ongoing smooth pursuit eye movement nor by the occurrence of a saccade, but the effect is absent if the flash is ignored by the subject. Furthermore, the response started around 85 ms after the flash presentation and decayed with an average time constant of 276 ms. Thus this is the first direct evidence of a position input to the smooth pursuit system. This study shows further evidence for a coupling between saccadic and smooth pursuit systems. It also suggests that there is an interaction between position and velocity error signals in the control of more complex movements.
NASA Astrophysics Data System (ADS)
Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew
2017-08-01
Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.
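A minimal sketch of the smoothness-based TES idea the abstract describes, under fully synthetic conditions (made-up downwelling spectrum, emissivity, and temperature; simplified radiative transfer with no atmospheric path terms): the candidate temperature whose implied emissivity spectrum is smoothest is selected.

```python
import numpy as np

C1, C2 = 1.191042e8, 1.4387752e4             # W um^4 m^-2 sr^-1, um K

def planck(wl_um, T):
    """Planck spectral radiance, wavelength in micrometers."""
    return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * T)) - 1.0))

wl = np.linspace(8.0, 12.0, 200)
# synthetic downwelling radiance with sharp atmospheric lines
L_down = (2.0 + 1.5 * np.exp(-((wl - 9.6) / 0.05) ** 2)
              + 1.0 * np.exp(-((wl - 11.2) / 0.04) ** 2))
T_true, eps_true = 300.0, 0.96               # scene truth (assumed)
L_obs = eps_true * planck(wl, T_true) + (1.0 - eps_true) * L_down

def roughness(T):
    # emissivity implied by candidate temperature T; wrong T leaves
    # residual atmospheric line structure in the spectrum
    eps = (L_obs - L_down) / (planck(wl, T) - L_down)
    return np.sum(np.diff(eps, n=2) ** 2)

T_grid = np.linspace(295.0, 305.0, 101)
T_est = T_grid[np.argmin([roughness(T) for T in T_grid])]
```

At the true temperature the implied emissivity is flat and the roughness metric collapses, which is the smoothness assumption at work; sensor noise and imperfect atmospheric compensation are exactly the error terms the paper's model adds on top of this idealization.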
Chen, Xin-Yan; Si, Jun-Qiang; Li, Li; Zhao, Lei; Wei, Li-Li; Jiang, Xue-Wei; Ma, Ke-Tao
2013-05-01
This study compared Wistar rats with spontaneously hypertensive rats (SHR) with respect to the electrophysiology and coupling strength of smooth muscle cells in cerebral arteriolar segments, and examined the influence of 18beta-glycyrrhetinic acid (18beta-GA) on the gap junctions between arterial smooth muscle cells. The connective tissue of the outer layer of the cerebral arteriolar segments was removed. Whole-cell patch clamp recordings were used to observe the effect of 18beta-GA on the input capacitance (C(input)), input conductance (G(input)) and input resistance (R(input)) of the smooth muscle cell membrane in arteriolar segments. (1) The C(input) and G(input) of SHR arteriolar smooth muscle cells were significantly higher than those of Wistar rats (P < 0.05). (2) 18beta-GA concentration-dependently reduced C(input) and G(input) (i.e., increased R(input)) in arteriolar smooth muscle cells. The IC50 values of 18beta-GA for suppressing G(input) in Wistar rats and SHR were 1.7 and 2.0 micromol/L, respectively, with no significant difference (P > 0.05). At 18beta-GA concentrations >= 100 micromol/L, the C(input), G(input) and R(input) of single smooth muscle cells were very similar. In conclusion, gap junctional coupling is enhanced in SHR cerebral arterial smooth muscle cells. 18beta-GA concentration-dependently inhibits the gap junctions between cerebral arteriolar smooth muscle cells in both Wistar rats and SHR, with similar inhibitory potency in the two strains, and at concentrations >= 100 micromol/L it completely blocks the gap junctions between arteriolar smooth muscle cells.
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1981-01-01
A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.
Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation
NASA Technical Reports Server (NTRS)
Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.
2013-01-01
The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6-year record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20°S-20°N.
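The smoothing error defined in optimal estimation can be illustrated numerically as S_s = (A - I) S_a (A - I)^T, where A is the averaging kernel and S_a the variability covariance. The kernel and covariance below are synthetic stand-ins, not the SBUV kernel or the MLS/ozonesonde statistics:

```python
import numpy as np

n = 21
z = np.arange(n)
# inter-annual variability covariance: 10% variance, exponential correlation
S_a = 0.10**2 * np.exp(-np.abs(z[:, None] - z[None, :]) / 3.0)
# broad averaging kernel: rows are normalized Gaussians (coarse resolution)
A = np.exp(-0.5 * ((z[:, None] - z[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

I = np.eye(n)
S_s = (A - I) @ S_a @ (A - I).T              # smoothing-error covariance
smoothing_error_pct = 100.0 * np.sqrt(np.diag(S_s))
```

With a perfectly sharp kernel (A = I) the smoothing error vanishes; the broader the kernel relative to the scale of the real variability, the closer the smoothing error approaches the full natural variability.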
Shin, Joon-Ho; Park, Gyulee; Cho, Duk Youn
2017-04-01
To explore motor performance under 2 different cognitive tasks during robotic rehabilitation, with motor performance assessed longitudinally. Prospective study. Rehabilitation hospital. Patients (N=22) with chronic stroke and upper extremity impairment. A total of 640 repetitions of robot-assisted planar reaching, 5 times a week for 4 weeks. Longitudinal robotic evaluations of motor performance included smoothness, mean velocity, path error, and reach error by type of cognitive task. Dual-task effects (DTEs) of motor performance were computed to analyze the effect of the cognitive task on dual-task interference. Cognitive task type influenced smoothness (P=.006), the DTEs of smoothness (P=.002), and the DTEs of reach error (P=.052). Robotic rehabilitation improved smoothness (P=.007) and reach error (P=.078), while stroke severity affected smoothness (P=.01), reach error (P<.001), and path error (P=.01). Neither robotic rehabilitation nor stroke severity affected the DTEs of motor performance. The results provide evidence for the effect of cognitive-motor interference on upper extremity performance among participants with stroke using a robot-guided rehabilitation system. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency performance is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation
NASA Technical Reports Server (NTRS)
Tessler, A.; Riggs, H. R.; Dambach, M.
1998-01-01
A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from enforcing explicitly a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
Spline-Based Smoothing of Airfoil Curvatures
NASA Technical Reports Server (NTRS)
Li, W.; Krist, S.
2008-01-01
Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance.
CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps (see figure).
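The third-derivative jump measure described above can be sketched with a standard cubic-spline routine: on each interval the third derivative of a cubic spline is constant, so the measure is the sum of squared jumps across interior knots. The data below are illustrative, not an actual airfoil:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def third_derivative_jump_measure(x, y):
    """Sum of squared jumps in the spline's third derivative at interior knots."""
    cs = CubicSpline(x, y)
    d3 = 6.0 * cs.c[0]          # third derivative is constant on each interval
    return np.sum(np.diff(d3) ** 2)

x = np.linspace(0.0, 1.0, 20)
smooth_y = x**3 - 0.5 * x**2    # a single cubic: the spline reproduces it exactly
noisy_y = smooth_y + 0.001 * np.sin(37.0 * x)

m_smooth = third_derivative_jump_measure(x, smooth_y)
m_noisy = third_derivative_jump_measure(x, noisy_y)
```

For data lying on one cubic the measure is essentially zero, while even a tiny high-frequency perturbation inflates it sharply, which is why it is a sensitive objective for curvature-oscillation removal.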
Direction-dependent regularization for improved estimation of liver and lung motion in 4D image data
NASA Astrophysics Data System (ADS)
Schmidt-Richberg, Alexander; Ehrhardt, Jan; Werner, René; Handels, Heinz
2010-03-01
The estimation of respiratory motion is a fundamental requisite for many applications in the field of 4D medical imaging, for example for radiotherapy of thoracic and abdominal tumors. It is usually done using non-linear registration of time frames of the sequence without further modelling of physiological motion properties. In this context, the accurate calculation of liver and lung motion is especially challenging because the organs are slipping along the surrounding tissue (i.e. the rib cage) during the respiratory cycle, which leads to discontinuities in the motion field. Without incorporating this specific physiological characteristic, common smoothing mechanisms cause an incorrect estimation along the object borders. In this paper, we present an extended diffusion-based model for incorporating physiological knowledge in image registration. By decoupling normal- and tangential-directed smoothing, we are able to estimate slipping motion at the organ borders while preventing gaps and ensuring smooth motion fields inside. We evaluate our model for the estimation of lung and liver motion on the basis of publicly accessible 4D CT and 4D MRI data. The results show a considerable increase of registration accuracy with respect to the target registration error and a more plausible motion estimation.
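The normal/tangential decoupling idea can be illustrated at the level of a single boundary vector: the displacement at an organ border is split relative to the boundary normal, and only the tangential part is smoothed so that slipping along the rib cage is preserved. The vectors and normal here are made up:

```python
import numpy as np

def split_normal_tangential(u, n):
    """Split displacement u into components normal and tangential to unit normal n."""
    n = n / np.linalg.norm(n)
    u_normal = np.dot(u, n) * n
    u_tangential = u - u_normal
    return u_normal, u_tangential

u = np.array([2.0, 3.0])   # displacement at a boundary voxel (illustrative)
n = np.array([0.0, 1.0])   # boundary normal, e.g. pointing toward the rib cage
u_n, u_t = split_normal_tangential(u, n)
# a direction-dependent regularizer would now smooth u_t along the boundary
# while leaving u_n constrained, preventing gaps at the organ surface
```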
NASA Technical Reports Server (NTRS)
Knox, C. E.
1978-01-01
Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
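Method (1), grid coarsening, can be sketched with a simple aggregation matrix that averages adjacent native-resolution elements; the structure lost when the coarse values are mapped back with a uniform sub-grid pattern is the source of aggregation error. The state vector here is a synthetic 1-D field, not a real inversion:

```python
import numpy as np

n, k = 12, 6                         # native and reduced state dimensions
W = np.zeros((k, n))
for j in range(k):
    W[j, 2 * j : 2 * j + 2] = 0.5    # merge adjacent pairs by averaging

# synthetic native-resolution state vector (e.g., emission scaling factors)
x = np.sin(np.linspace(0.0, np.pi, n)) + np.linspace(0.0, 0.5, n)
x_reduced = W @ x                    # reduced state vector
x_back = np.repeat(x_reduced, 2)     # impose the prior (uniform) sub-grid pattern
aggregation_residual = x - x_back    # structure the coarse grid cannot represent
```

Shrinking k drives this residual up while making each element better constrained by the observations; the paper's point is to choose the dimension where the combined aggregation-plus-smoothing error is minimized.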
Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George
2013-01-01
The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101
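A common normalized (dimensionless) jerk computation, consistent with the smoothness measure the abstract uses, can be sketched as follows; the specific formula variant and trajectories are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def normalized_jerk(x, dt):
    """Dimensionless jerk: sqrt(0.5 * integral(jerk^2) * duration^5 / amplitude^2)."""
    jerk = np.gradient(np.gradient(np.gradient(x, dt), dt), dt)
    duration = dt * (len(x) - 1)
    amplitude = abs(x[-1] - x[0])
    return np.sqrt(0.5 * np.sum(jerk**2) * dt * duration**5 / amplitude**2)

dt = 0.001
t = np.linspace(0.0, 1.0, 1001)
min_jerk = 10 * t**3 - 15 * t**4 + 6 * t**5       # smooth reach from 0 to 1
jittery = min_jerk + 0.002 * np.sin(60 * np.pi * t)

nj_smooth = normalized_jerk(min_jerk, dt)
nj_jittery = normalized_jerk(jittery, dt)
```

Because the measure is normalized by movement duration and amplitude, it allows smoothness comparisons across trials of different speed and extent, which is why it is a standard choice in reaching studies.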
NASA Astrophysics Data System (ADS)
Kim, Jin-Hong; Lee, Jun-Seok; Lim, Jungshik; Seo, Jung-Kyo
2009-03-01
Narrow gap distance in the cover-layer incident near-field recording (NFR) configuration causes a collision problem at the interface between the solid immersion lens and the disk surface. A polymer cover-layer with a smooth surface results in a stable gap servo, while a nanocomposite cover-layer with a high refractive index shows a collision problem during the gap servo test. Even though a dielectric cover-layer, whose surface is rougher than the polymer's, supplements the mechanical properties, an unclear eye pattern due to an unstable gap servo can be obtained even after chemical mechanical polishing. Both a smooth surface and good mechanical properties of the cover-layer are required for a stable gap servo in NFR.
The Re-Analysis of Ozone Profile Data from a 41-Year Series of SBUV Instruments
NASA Technical Reports Server (NTRS)
Kramarova, Natalya; Frith, Stacey; Bhartia, Pawan K.; McPeters, Richard; Labow, Gordon; Taylor, Steven; Fisher, Bradford
2012-01-01
In this study we present the validation of ozone profiles from a number of Solar Back Scattered Ultra Violet (SBUV) and SBUV/2 instruments that were recently reprocessed using an updated (Version 8.6) algorithm. The SBUV dataset provides the longest available record of global ozone profiles, spanning a 41-year period from 1970 to 2011 (except for a 5-year gap in the 1970s) and includes ozone profile records obtained from the Nimbus-4 BUV and Nimbus-7 SBUV instruments, and a series of SBUV(/2) instruments launched on NOAA operational satellites (NOAA 09, 11, 14, 16, 17, 18, 19). Although modifications in instrument design were made in the evolution from the BUV instrument to the modern SBUV(/2) model, the basic principles of the measurement technique and retrieval algorithm remain the same. The long term SBUV data record allows us to create a consistent, calibrated dataset of ozone profiles that can be used for climate studies and trend analyses. In particular, we focus on estimating the various sources of error in the SBUV profile ozone retrievals using independent observations and analysis of the algorithm itself. For the first time we include in the metadata a quantitative estimate of the smoothing error, defined as the error due to profile variability that the SBUV observing system cannot inherently measure. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. Between 10 and 1 hPa the smoothing errors for the SBUV monthly zonal mean retrievals are of the order of 1%, but start to increase above and below this layer. The largest smoothing errors, as large as 15-20%, were detected in the troposphere. The SBUV averaging kernels, provided with the ozone profiles in version 8.6, help to eliminate the smoothing effect when comparing the SBUV profiles with high vertical resolution measurements, and make it convenient to use the SBUV ozone profiles for data assimilation and model validation purposes.
The smoothing error can also be minimized by combining layers of data, and we will discuss recommendations for this approach as well. The SBUV ozone profiles have been intensively validated against satellite profile measurements obtained from the Microwave Limb Sounders (MLS) on board the UARS and Aura satellites, the Stratospheric Aerosol and Gas Experiment (SAGE) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). Also, we compare coincident and collocated SBUV ozone retrievals with observations made by ground-based instruments, such as microwave spectrometers, lidars, Umkehr instruments and balloon-borne ozonesondes. Finally, we compare the SBUV ozone profiles with output from the NASA GSFC GEOS-CCM model. In the stratosphere between 25 and 1 hPa the mean biases and standard deviations are within 5% for monthly mean ozone profiles. Above and below this layer the vertical resolution of the SBUV algorithm decreases and the effects of vertical smoothing should be taken into account. Though the SBUV algorithm has a coarser vertical resolution in the lower stratosphere and troposphere, it is capable of precisely estimating the integrated ozone column between the surface and 25 hPa. The time series of the tropospheric and lower-stratospheric ozone column derived from SBUV agrees within 5% with the corresponding values observed by an ensemble of ozonesonde stations in the Northern Hemisphere. Drifts of the ozone time series obtained from each SBUV(/2) instrument relative to ground-based and satellite measurements are evaluated, and some features of individual SBUV(/2) instruments are discussed. In addition to evaluating individual instruments against independent observations, we also focus on the instrument-to-instrument consistency in the series. Overall, Version 8.6 ozone profiles obtained from two different SBUV(/2) instruments compare within a couple of percent during overlap periods and vary consistently in time, with some exceptions.
Some of the noted discrepancies might be associated with ozone diurnal variations, since the difference in the local time of the observations for a pair of SBUV(/2) instruments could be several hours. Other issues include the potential short-term drift in measurements as the instrument orbit drifts and measurements are obtained at high solar zenith angles (>85°). Based on the results of the validation, a consistent, calibrated dataset of SBUV ozone profiles has been created based on internal calibration only.
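The averaging-kernel comparison described above is usually written x_s = x_a + A(x_h - x_a): the high-resolution profile x_h is degraded to the SBUV vertical resolution before differencing. A minimal sketch of this standard convolution, with purely illustrative layer values and kernel matrix:

```python
import numpy as np

# Hypothetical 5-layer example: x_a is the retrieval a priori, A the
# averaging-kernel matrix, x_h a high-resolution profile interpolated
# onto the same layers. All numbers are illustrative, not SBUV values.
x_a = np.array([10.0, 30.0, 60.0, 40.0, 15.0])   # a priori (DU per layer)
x_h = np.array([12.0, 28.0, 65.0, 38.0, 14.0])   # high-resolution profile
A = 0.7 * np.eye(5) + 0.1 * np.eye(5, k=1) + 0.1 * np.eye(5, k=-1)

# Smooth the high-resolution profile with the averaging kernels so both
# datasets share the coarse vertical resolution before comparison.
x_smoothed = x_a + A @ (x_h - x_a)
```

Differences between x_smoothed and the retrieved profile are then free of the smoothing-error contribution.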
Evaluate error correction ability of magnetorheological finishing by smoothing spectral function
NASA Astrophysics Data System (ADS)
Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin
2014-08-01
Power Spectral Density (PSD) is well entrenched in optics design and manufacturing as a characterization of mid-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer-controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. Here, the SSF is employed to study the ability of the MRF process to correct errors at different spatial frequencies. The surface figures and PSD curves of workpieces machined by MRF are presented. From the SSF curve, the correction ability of MRF at each spatial frequency is expressed as a normalized numerical value.
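A sketch of the underlying idea: compare the PSD of a surface error profile before and after a finishing pass and form a normalized, per-frequency correction value. The ratio-based definition below is an assumption for illustration; the paper's exact SSF definition may differ.

```python
import numpy as np

# Synthetic 1-D surface profile: a low-frequency figure error (corrected by
# the pass) plus a mid-frequency ripple (left uncorrected), plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1024, endpoint=False)        # surface coordinate
before = (np.sin(2*np.pi*8*x) + 0.3*np.sin(2*np.pi*64*x)
          + 0.05*rng.standard_normal(x.size))
after = 0.2*np.sin(2*np.pi*8*x) + 0.3*np.sin(2*np.pi*64*x)

psd_before = np.abs(np.fft.rfft(before))**2
psd_after = np.abs(np.fft.rfft(after))**2
# Assumed normalization: 1 = fully corrected at that frequency, 0 = untouched.
ssf = 1.0 - psd_after / (psd_before + 1e-12)
```

With these inputs, the value near bin 8 (corrected figure error) is close to 1, while near bin 64 (untouched ripple) it is close to 0.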
NASA Astrophysics Data System (ADS)
Nie, Xuqing; Li, Shengyi; Song, Ci; Hu, Hao
2014-08-01
Because its curvature varies across the surface, an aspheric surface is difficult to polish to high accuracy with traditional processes; controlling mid-spatial-frequency errors (MSFR), in particular, is especially hard. In this paper, a combined fabrication process based on smoothing polishing (SP) and magnetorheological finishing (MRF) is proposed. The pressure distributions of a rigid polishing lap and a semi-flexible polishing lap are calculated, and their shape-preserving capacity and smoothing effect are compared. The feasibility of smoothing an aspheric surface with the semi-flexible polishing lap is verified, and the key technologies in the SP process are discussed. Then, a K4 parabolic surface with a diameter of 500 mm is fabricated using the combined process. A Φ150 mm semi-flexible lap is used in the SP step to control the MSFR, and the deterministic MRF process is applied to figure the surface error. The root mean square (RMS) error of the aspheric surface converges from 0.083λ (λ = 632.8 nm) to 0.008λ. The power spectral density (PSD) result shows that the MSFR is well restrained while the surface error converges strongly.
Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K
2008-01-01
Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.
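This is not the paper's smoothness-enhancing accuracy-conserving filter, but a toy illustration of the underlying point: adaptive ODE error control struggles when the right-hand side jumps at an element boundary, whereas a pre-smoothed field (here a tanh blend standing in for a smoothing filter) behaves well. Function names and the blending width are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def v_discont(t, y):
    # Velocity field with an inter-element-style jump at y = 0.5
    return [1.0 if y[0] < 0.5 else 2.0]

def v_smooth(t, y):
    # The same field after an (assumed) smoothing filter of width ~1e-2
    return [1.5 + 0.5*np.tanh((y[0] - 0.5)/1e-2)]

sol_d = solve_ivp(v_discont, (0, 1), [0.0], rtol=1e-8, atol=1e-10)
sol_s = solve_ivp(v_smooth, (0, 1), [0.0], rtol=1e-8, atol=1e-10)
```

Comparing `sol_d.nfev` with `sol_s.nfev` exposes the extra right-hand-side evaluations the adaptive integrator typically spends resolving the discontinuity.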
NASA Technical Reports Server (NTRS)
Xing, G. C.; Bachmann, K. J.; Posthill, J. B.; Timmons, M. L.
1991-01-01
Epitaxial ZnGeP2-Ge films have been grown on (111) GaP substrates using MOCVD. The films grown with a dimethylzinc-to-germane flow rate ratio R greater than 10 show mirror-smooth surface morphology. Films grown with R less than 10 show a high density of twinning, including both double-position and growth twins. Compared to films grown on (001) GaP substrates, the layers on (111) GaP generally show a higher density of microstructural defects. TEM electron diffraction patterns show that the films grown on (111) GaP substrates are more disordered than films grown on (001) GaP under comparable conditions. The growth rate on (111) GaP substrates is about 2.5 times lower than that on (001) GaP, and films grown on Si substrates show extensive twinning. Both TEM and SEM examinations indicate that smooth epitaxial overgrowth may be easier on (111) Si substrates than on (001) Si.
Adaptation of catch-up saccades during the initiation of smooth pursuit eye movements.
Schütz, Alexander C; Souto, David
2011-04-01
Reduction of retinal speed and alignment of the line of sight are believed to be the respective primary functions of smooth pursuit and saccadic eye movements. As eye muscle strength can change in the short term, continuous adjustments of motor signals are required to achieve constant accuracy. While adaptation of saccade amplitude to systematic position errors has been extensively studied, we know less about the adaptive response to position errors during smooth pursuit initiation, when target motion has to be taken into account to program saccades, and when position errors at the saccade endpoint could also be corrected by increasing pursuit velocity. To study short-term adaptation (250 adaptation trials) of tracking eye movements, we introduced a position error during the first catch-up saccade made during the initiation of smooth pursuit, in a ramp-step-ramp paradigm. The target position was either shifted in the direction of the horizontally moving target (forward step), against it (backward step) or orthogonally to it (vertical step). Results indicate adaptation of catch-up saccade amplitude to backward and forward steps. With vertical steps, saccades became oblique, by an inflexion of the early or late saccade trajectory. With a similar time course, post-saccadic pursuit velocity was increased in the step direction, adding further evidence that under some conditions pursuit and saccades can act synergistically to reduce position errors.
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power-law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values.
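A minimal single-metabolite sketch of the power-law idea: with an efflux term of the form dX/dt = -β X^h, slopes estimated from the time series recover β and h through a log-log linear fit. The symbols, values, and noise-free data are illustrative; the paper's full method couples many equations through mass balances.

```python
import numpy as np

# Synthetic time series from dX/dt = -beta * X**h with X(0) = 1.
# Closed form for h != 1:  X(t) = (1 + (h-1)*beta*t) ** (1/(1-h))
t = np.linspace(0.0, 2.0, 21)
beta_true, h_true = 0.8, 1.3
X = (1.0 + (h_true - 1.0)*beta_true*t) ** (1.0/(1.0 - h_true))

# Estimate slopes by finite differences, then fit
# log(-dX/dt) = log(beta) + h * log(X)  (a straight line in log-log space)
dXdt = np.gradient(X, t)
coef = np.polyfit(np.log(X[1:-1]), np.log(-dXdt[1:-1]), 1)
h_est, log_beta_est = coef[0], coef[1]
```

The fitted slope and intercept recover h and log β to within the finite-difference error.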
NASA Astrophysics Data System (ADS)
Shestakov, V. A.; Korshunov, M. M.; Togushova, Yu N.; Efremov, D. V.; Dolgov, O. V.
2018-07-01
Irradiation of superconductors with different particles is one of many ways to investigate the effects of disorder. Here we study the disorder-induced transition between s± and s++ states in the two-band model for Fe-based superconductors with nonmagnetic impurities. Specifically, we investigate the important question of whether the superconducting gaps during the transition change smoothly or abruptly. We show that the behavior can be of either type and is controlled by the ratio of intraband to interband impurity scattering potentials, and by a parameter σ that represents the scattering strength and ranges from zero (Born approximation) to one (unitary limit). For the pure interband scattering potential and scattering strength σ ≲ 0.11, the s± → s++ transition is accompanied by steep changes in the gaps, while for larger values of σ the gaps change smoothly. The steep changes in the gaps occur at low temperatures, T < 0.1Tc0, with Tc0 being the critical temperature in the clean limit; otherwise the gaps change gradually. The critical temperature Tc is always a smooth function of the scattering rate in spite of the steep changes in the behavior of the gaps.
Vegetation Phenology Metrics Derived from Temporally Smoothed and Gap-filled MODIS Data
NASA Technical Reports Server (NTRS)
Tan, Bin; Morisette, Jeff; Wolfe, Robert; Esaias, Wayne; Gao, Feng; Ederer, Greg; Nightingale, Joanne; Nickeson, Jamie E.; Ma, Pete; Pedely, Jeff
2012-01-01
Smoothed and gap-filled vegetation index (VI) time series provide a good basis for estimating vegetation phenology metrics. The TIMESAT software was improved by incorporating ancillary information from MODIS products. A simple assessment of the association between retrieved greenup dates and ground observations indicates satisfactory results from the improved TIMESAT software. One application example shows that mapping nectar flow phenology is tractable on a continental scale using hive weight and satellite vegetation data. The phenology data product is supporting further research in ecology and climate change.
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter updates and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data.
New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction
NASA Astrophysics Data System (ADS)
Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.
2017-12-01
Mass change estimates from GRACE spherical harmonic (SH) solutions have north-south stripes and east-west banded errors due to random noise and modeling errors. Low-pass filters such as decorrelation and Gaussian smoothing are typically applied to reduce this noise, but these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (the JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the filtered GRACE SH results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both the JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any postprocessing, the noise and errors in the SH solutions introduce very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtain better mass estimates with minimal leakage errors. The new global mass change estimates capture all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes that are in good agreement with the leakage-corrected (forward-modeled) SH results.
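A minimal 1-D sketch of the empirical-mode-decomposition idea behind this approach: one sifting pass (extrema envelopes built with cubic splines) isolates the highest-frequency, stripe-like component, and subtracting it leaves the smooth signal. The real processing operates on 2-D fields and iterates the sift; this toy example only illustrates the principle.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

# Synthetic signal: a smooth "mass" trend plus a high-frequency "stripe" term.
t = np.linspace(0, 1, 500)
signal = np.sin(2*np.pi*3*t) + 0.3*np.sin(2*np.pi*40*t)

# One EMD sifting pass: spline envelopes through local maxima and minima,
# whose mean tracks the low-frequency content.
maxima = argrelextrema(signal, np.greater)[0]
minima = argrelextrema(signal, np.less)[0]
upper = CubicSpline(t[maxima], signal[maxima])(t)
lower = CubicSpline(t[minima], signal[minima])(t)
imf = signal - 0.5*(upper + lower)   # high-frequency component (first IMF)
smoothed = signal - imf              # remaining smooth signal
```

Away from the endpoints, `smoothed` closely follows the 3-cycle trend while the 40-cycle oscillation is captured in `imf`.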
Efficiently characterizing the total error in quantum circuits
NASA Astrophysics Data System (ADS)
Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph
A promising technological advancement meant to enlarge our computational means is the quantum computer. Such a device would harvest the quantum complexity of the physical world in order to solve concrete mathematical problems more efficiently. However, the errors emerging from the implementation of quantum operations are likewise quantum, and hence share a similar level of intricacy. Fortunately, randomized benchmarking protocols provide an efficient way to characterize the operational noise within quantum devices. The resulting figures of merit, such as the fidelity and the unitarity, are typically attached to a set of circuit components. While important, this does not fulfill the main goal: determining whether the error rate of the total circuit is small enough to trust its outcome. In this work, we fill the gap by providing an optimal bound on the total fidelity of a circuit in terms of component-wise figures of merit. Our bound smoothly interpolates between the classical regime, in which the error rate grows linearly in the circuit's length, and the quantum regime, which can naturally allow quadratic growth. Conversely, our analysis substantially improves the bounds on single circuit element fidelities obtained through techniques such as interleaved randomized benchmarking. This research was supported by the U.S. Army Research Office through Grant W911NF-14-1-0103, CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.
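The linear-versus-quadratic growth mentioned above can be illustrated with a back-of-the-envelope model (not the paper's actual bound): stochastic errors compound multiplicatively in probability, while coherent errors can add in amplitude.

```python
# Illustrative scaling model: m gates, each with infidelity r.
def incoherent_total(m, r):
    # Stochastic (incoherent) errors: success probabilities multiply,
    # so the total error rate grows roughly linearly in m for small r.
    return 1 - (1 - r)**m

def coherent_total(m, r):
    # Fully coherent errors: the rotation-angle error per gate scales like
    # sqrt(r), amplitudes add, and the total infidelity can grow like m^2 * r.
    theta = r**0.5
    return min(1.0, (m*theta)**2)

lin = incoherent_total(50, 1e-4)   # ~ m*r, about 5e-3
quad = coherent_total(50, 1e-4)    # ~ m**2 * r = 0.25
```

Fifty gates at infidelity 1e-4 thus give a total error near 0.5% in the incoherent regime but up to 25% in the fully coherent worst case, which is the gap the paper's interpolating bound captures.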
NASA Technical Reports Server (NTRS)
Xing, G. C.; Bachmann, Klaus J.
1993-01-01
The growth of ZnGeP2/GaP double and multiple heterostructures on GaP substrates by organometallic chemical vapor deposition is reported. These epitaxial films were deposited at a temperature of 580 C using dimethylzinc, trimethylgallium, germane, and phosphine as source gases. With appropriate deposition conditions, mirror-smooth epitaxial GaP/ZnGeP2 multiple heterostructures were obtained on (001) GaP substrates. Transmission electron microscopy (TEM) and secondary ion mass spectrometry (SIMS) studies of the films showed that the interfaces are sharp and smooth. An etching study of the films showed a dislocation density on the order of 5×10^4 cm^-2. The growth rates of the GaP layers depend linearly on the flow rates of trimethylgallium. While the GaP layers crystallize in the zinc-blende structure, the ZnGeP2 layers crystallize in the chalcopyrite structure, as determined by (010) electron diffraction patterns. This is the first time that multiple heterostructures combining these two crystal structures have been made.
Aerodynamics of a translating comb-like plate inspired by a fairyfly wing
NASA Astrophysics Data System (ADS)
Lee, Seung Hun; Kim, Daegyoum
2017-08-01
Unlike the smooth wings of common insects or birds, micro-scale insects such as the fairyfly have a distinctive wing geometry, comprising a frame with several bristles. Motivated by this peculiar wing geometry, we experimentally investigated the flow structure of a translating comb-like wing for a wide range of gap size, angle of attack, and Reynolds number, Re = O(10)-O(10^3), and the correlation of these parameters with aerodynamic performance. The flow structures of a smooth plate without a gap and a comb-like plate are significantly different at high Reynolds number, while little difference was observed at the low Reynolds number of O(10). At low Reynolds number, shear layers that were generated at the edges of the tooth of the comb-like plate strongly diffuse and eventually block a gap. This gap blockage increases the effective surface area of the plate and alters the formation of leading-edge and trailing-edge vortices. As a result, the comb-like plate generates larger aerodynamic force per unit area than the smooth plate. In addition to a quasi-steady phase after the comb-like plate travels several chords, we also studied a starting phase of the shear layer development when the comb-like plate begins to translate from rest. While a plate with small gap size can generate aerodynamic force at the starting phase as effectively as at the quasi-steady phase, the aerodynamic force drops noticeably for a plate with a large gap because the diffusion of the developing shear layers is not enough to block the gap.
NASA Astrophysics Data System (ADS)
Žáček, K.
The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness. This was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase in the difference between the smoothed and original models. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we found the estimated error of travel times to be on the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we found the Gaussian beams and Gaussian packets to be on the verge of applicability even in models sufficiently smoothed for ray tracing.
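A minimal 1-D sketch of this kind of smoothing: minimizing a data misfit plus a penalty on second spatial derivatives (the discrete analogue of a Sobolev-norm term) reduces to a single linear solve. The slowness profile and the weight λ are illustrative; the paper works on 2-D grids with a full Sobolev norm.

```python
import numpy as np

# Rough slowness profile (smooth background plus small-scale perturbations).
n = 200
x = np.linspace(0, 1, n)
rng = np.random.default_rng(1)
slowness = 1.0/(2.0 + np.sin(3*x)) + 0.02*rng.standard_normal(n)

# Second-difference operator D2, shape (n-2, n).
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i+3] = [1.0, -2.0, 1.0]

# Minimize ||s - s0||^2 + lam * ||D2 s||^2  =>  (I + lam D2'D2) s = s0.
# Larger lam gives a smoother model but a larger departure from the data,
# which is exactly the trade-off discussed in the abstract.
lam = 10.0
smooth = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, slowness)
```

By construction the solution has strictly smaller second-derivative energy than the input, at the cost of a nonzero misfit.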
Stress Recovery and Error Estimation for Shell Structures
NASA Technical Reports Server (NTRS)
Yazdani, A. A.; Riggs, H. R.; Tessler, A.
2000-01-01
The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two-dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry; smoothing is then carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.
Registration of organs with sliding interfaces and changing topologies
NASA Astrophysics Data System (ADS)
Berendsen, Floris F.; Kotte, Alexis N. T. J.; Viergever, Max A.; Pluim, Josien P. W.
2014-03-01
Smoothness and continuity assumptions on the deformation field in deformable image registration do not hold for applications where the imaged objects have sliding interfaces. Recent extensions to deformable image registration that accommodate sliding motion of organs are limited to sliding along approximately planar surfaces, or cannot model sliding that changes the topological configuration in the case of multiple organs. We propose a new extension to free-form image registration that is not limited in this way. Our method uses a transformation model that consists of uniform B-spline transformations for each organ region separately, based on a segmentation of one image. Since this model can create overlapping regions or gaps between regions, we introduce a penalty term that minimizes this undesired effect. The penalty term acts on the surfaces of the organ regions and is optimized simultaneously with the image similarity. To evaluate our method, registrations were performed on publicly available inhale-exhale CT scans for which the performance of other methods is known. Target registration errors are computed on dense landmark sets that are available with these datasets. On these data our method outperforms the other methods in terms of target registration error and, where applicable, also in terms of overlap and gap volumes. The other methods' approximation of sliding motion along planar surfaces is reasonably well suited for the motion present in the lung data. The ability of our method to handle sliding along curved boundaries and changing region topology was demonstrated on synthetic images.
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
2014-01-01
We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of the convergence speed and the steady-state error via the combination of a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases.
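A sketch of the kind of update described above, assuming a standard affine projection step followed by a smoothed-l0 zero attractor that pulls small taps toward zero. The step sizes (mu, rho), projection order K, and the channel are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, mu, delta = 16, 4, 0.5, 1e-3     # taps, projection order, step, reg.
rho, sigma = 5e-4, 0.05                # zero-attractor strength and width

h = np.zeros(N); h[3] = 1.0; h[9] = -0.5   # sparse channel to identify
w = np.zeros(N)                             # adaptive estimate
x = rng.standard_normal(4000)               # white input signal

for n in range(N + K, x.size):
    # Data matrix of the K most recent tap-input vectors (N x K).
    X = np.column_stack([x[n - k - N + 1 : n - k + 1][::-1] for k in range(K)])
    d = X.T @ h + 1e-3*rng.standard_normal(K)   # noisy desired outputs
    e = d - X.T @ w
    # Standard APA update (regularized projection onto the K constraints).
    w += mu * X @ np.linalg.solve(X.T @ X + delta*np.eye(K), e)
    # SL0 zero attractor: gradient of sum(1 - exp(-|w|/sigma)), which
    # shrinks near-zero taps strongly and leaves large taps almost untouched.
    w -= rho * np.sign(w) * np.exp(-np.abs(w)/sigma)
```

After the run, `w` is close to the sparse channel `h`, with the true zero taps held near zero by the attractor.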
NASA Astrophysics Data System (ADS)
Bilici, Mihai A.; Haase, John R.; Boyle, Calvin R.; Go, David B.; Sankaran, R. Mohan
2016-06-01
We report on the existence of a smooth transition from field emission to a self-sustained plasma in microscale electrode geometries at atmospheric pressure. This behavior, which is not found at macroscopic scales or low pressures, arises from the unique combination of the large electric fields that are created in microscale dimensions to produce field-emitted electrons and the high pressures that lead to collisional ionization of the gas. Using a tip-to-plane electrode geometry, currents less than 10 μA are measured at onset voltages of ˜200 V for gaps less than 5 μm, and analysis of the current-voltage (I-V) relationship is found to follow Fowler-Nordheim behavior, confirming field emission. As the applied voltage is increased, gas breakdown occurs smoothly, initially resulting in the formation of a weak, partial glow and then a self-sustained glow discharge. Remarkably, this transition is essentially reversible, as no significant hysteresis is observed during forward and reverse voltage sweeps. In contrast, at larger electrode gaps, no field emission current is measured and gas breakdown occurs abruptly at higher voltages of ˜400 V, without any smooth transition from the pre-breakdown condition, and is characterized only by glow discharge formation.
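Fowler-Nordheim behavior is typically confirmed the way the abstract describes: ln(I/V^2) plotted against 1/V falls on a straight line. A synthetic check with illustrative constants (a, b are not fitted to this experiment):

```python
import numpy as np

# Fowler-Nordheim form: I = a * V^2 * exp(-b / V), so
# ln(I / V^2) = ln(a) - b / V is linear in 1/V.
a, b = 1e-3, 800.0                  # illustrative prefactor and slope (V)
V = np.linspace(120, 220, 30)       # applied voltage sweep (V)
I = a * V**2 * np.exp(-b / V)       # synthetic emission current

# A linear fit of ln(I/V^2) against 1/V recovers -b and ln(a).
slope, intercept = np.polyfit(1.0/V, np.log(I / V**2), 1)
```

A measured I-V curve that yields a straight line on these axes, as reported above, is the standard signature of field emission.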
Kansui, Yasuo; Garland, Christopher J; Dora, Kim A
2008-08-01
Increases in global Ca2+ in the endothelium are a crucial step in releasing relaxing factors to modulate arterial tone. In the present study we investigated spontaneous Ca2+ events in endothelial cells, and the contribution of smooth muscle cells to these Ca2+ events, in pressurized rat mesenteric resistance arteries. Spontaneous Ca2+ events were observed under resting conditions in 34% of cells. These Ca2+ events were absent in arteries preincubated with either cyclopiazonic acid or U-73122, but were unaffected by ryanodine or nicotinamide. Stimulation of smooth muscle cell depolarization and contraction with either phenylephrine or high concentrations of KCl significantly increased the frequency of endothelial cell Ca2+ events. The putative gap junction uncouplers carbenoxolone and 18α-glycyrrhetinic acid each inhibited spontaneous and evoked Ca2+ events, and the movement of calcein from endothelial to smooth muscle cells. In addition, spontaneous Ca2+ events were diminished by nifedipine, by lowering extracellular Ca2+ levels, or by blockers of non-selective Ca2+ influx pathways. These findings suggest that in pressurized rat mesenteric arteries, spontaneous Ca2+ events in the endothelial cells appear to originate from endoplasmic reticulum IP3 receptors, and are subject to regulation by surrounding smooth muscle cells via myoendothelial gap junctions, even under basal conditions.
Non-linear dynamic compensation system
NASA Technical Reports Server (NTRS)
Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)
1992-01-01
A non-linear dynamic compensation subsystem is added in the feedback loop of a high-precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide bandwidth, optimized for speed of response, to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the Inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced-bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and smoothly varied in between as the error signal approaches the preselected limits.
Lovastatin inhibits gap junctional communication in cultured aortic smooth muscle cells.
Shen, Jing; Wang, Li-Hong; Zheng, Liang-Rong; Zhu, Jian-Hua; Hu, Shen-Jiang
2010-09-01
Gap junctions, which serve as intercellular channels that allow the passage of ions and other small molecules between neighboring cells, play an important role in vital functions, including the regulation of cell growth, differentiation, and development. Statins, the 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors, have been shown to inhibit the migration and proliferation of smooth muscle cells (SMCs), leading to an antiproliferative effect. Recent studies have shown that statins can reduce expression of the gap junction protein connexin43 (Cx43) both in vivo and in vitro. However, little work has been done on the effects of statins on gap junctional intercellular communication (GJIC). We hypothesized in this study that lovastatin inhibits vascular smooth muscle cell (VSMC) migration through inhibition of GJIC. Rat aortic SMCs (RASMCs) were exposed to lovastatin, and migration was then assessed with a Transwell migration assay. GJIC was determined by fluorescence recovery after photobleaching (FRAP) analysis, performed with a laser-scanning confocal microscope. Cell migration was dose-dependently inhibited by lovastatin. Compared with the control (110 ± 26 cells per field), the number of migrated SMCs was significantly reduced to 72 ± 24 (P < .05), 62 ± 18 (P < .01), and 58 ± 19 (P < .01) at concentrations of 0.4, 2, and 10 μmol/L, respectively. The rate of fluorescence recovery (R) at 5 minutes after photobleaching was adopted as the functional index of GJIC. The R-value of cells exposed to 10 μmol/L lovastatin for 48 hours was 24.38% ± 4.84%, whereas cells in the control group had an R-value of 36.11% ± 10.53%, demonstrating that the GJIC of RASMCs was significantly inhibited by lovastatin (P < .01).
A smaller lovastatin concentration (0.08 μmol/L) did not change gap junction coupling (P > .05). These results suggest that lovastatin inhibits migration in a dose-dependent manner by attenuating GJIC. Suppression of gap junction function could provide another explanation for the statin-induced antiproliferative effect.
ERIC Educational Resources Information Center
Zheng, Yinggan; Gierl, Mark J.; Cui, Ying
2010-01-01
This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…
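The kernel-smoothing step described above can be sketched as a Nadaraya-Watson estimate of an item response function. This is a minimal illustration only: the bandwidth, the simulated two-parameter item, and all variable names are assumptions of this sketch, and the Cochran's Z comparison itself is not reproduced.

```python
import numpy as np

def kernel_smoothed_irf(theta, responses, grid, h=0.3):
    """Nadaraya-Watson kernel estimate of an item response function:
    at each grid ability, a Gaussian-weighted mean of 0/1 responses."""
    w = np.exp(-0.5 * ((grid[:, None] - theta[None, :]) / h) ** 2)
    return (w * responses[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
theta = rng.normal(size=2000)                        # examinee abilities
p_true = 1.0 / (1.0 + np.exp(-1.5 * (theta - 0.5)))  # a 2PL-style item
x = (rng.random(2000) < p_true).astype(float)        # simulated 0/1 responses
grid = np.linspace(-2.0, 2.0, 9)
p_hat = kernel_smoothed_irf(theta, x, grid)          # smoothed IRF on the grid
```

A DIF analysis in the spirit of the study would compute such smoothed curves separately for reference and focal groups and test their difference pointwise.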
Ultra-Smooth ZnS Films Grown on Silicon via Pulsed Laser Deposition
NASA Astrophysics Data System (ADS)
Reidy, Christopher; Tate, Janet
2011-10-01
Ultra-smooth, high quality ZnS films were grown on (100) and (111) oriented Si wafers via pulsed laser deposition with a KrF excimer laser in UHV (10⁻⁹ Torr). The resultant films were examined with optical spectroscopy, electron diffraction, and electron probe microanalysis. The films have an rms roughness of ~1.5 nm, and the film stoichiometry is approximately Zn:S :: 1:0.87. Additionally, each film exhibits an optical interference pattern which is not a function of probing location on the sample, indicating excellent film thickness uniformity. Motivation for high-quality ZnS films comes from a proposed experiment to measure carrier amplification via impact ionization at the boundary between a wide-gap and a narrow-gap semiconductor. If excited charge carriers in a sufficiently wide-gap harvester can be extracted into a narrow-gap host material, impact ionization may occur. We seek near-perfect interfaces between ZnS, with a direct gap between 3.3 and 3.7 eV, and Si, with an indirect gap of 1.1 eV.
NASA Astrophysics Data System (ADS)
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In bi-response longitudinal data, correlation arises both between measurements on the same observed subject and between the two responses. This induces auto-correlated errors, which can be handled through a covariance matrix. In this article, we estimate the covariance matrix based on a penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the weighted penalized spline estimate that incorporates the covariance matrix gives a smaller error value than the model without the covariance matrix.
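As an illustration of the penalized spline machinery the abstract refers to, here is a minimal sketch of a penalized linear spline fit with explicit knots and a single smoothing parameter. The basis choice, knot placement, and penalty form are illustrative assumptions, not the authors' exact bi-response estimator.

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam):
    """Penalized linear spline: basis [1, x, (x - k)_+ per knot] with a
    ridge penalty lam applied only to the truncated-power coefficients."""
    B = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0.0, None) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize knot terms only
    coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy curve
yhat = penalized_spline_fit(x, y, knots=np.linspace(0.1, 0.9, 9), lam=1e-3)
```

Raising `lam` stiffens the fit; the paper's point is that the knots and the smoothing parameter jointly control that stiffness.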
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
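A minimal sketch of the thin-plate spline smoothing step, using SciPy's `RBFInterpolator` (requires SciPy 1.7 or later). The smoothing value here is an illustrative stand-in for the bootstrap-selected parameter, the data are synthetic, and the paper's k-nearest-neighbour search is omitted.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator  # SciPy >= 1.7

rng = np.random.default_rng(2)
xy = rng.uniform(-1.0, 1.0, size=(400, 2))            # scattered sample sites
z_true = np.cos(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])
z_noisy = z_true + rng.normal(scale=0.1, size=400)    # simulated scan noise

# Thin-plate spline fit; `smoothing` stands in for the parameter the
# paper selects by bootstrap test-error estimation.
tps = RBFInterpolator(xy, z_noisy, kernel='thin_plate_spline', smoothing=1.0)
z_denoised = tps(xy)   # project the noisy points onto the smoothed surface
```

Setting `smoothing=0` would interpolate the noise exactly, which is why a data-driven choice of the smoothing parameter matters.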
On The Calculation Of Derivatives From Digital Information
NASA Astrophysics Data System (ADS)
Pettett, Christopher G.; Budney, David R.
1982-02-01
Biomechanics analysis frequently requires cinematographic studies as a first step toward understanding the essential mechanics of a sport or exercise. In order to understand the exertion by the athlete, cinematography is used to establish the kinematics from which the energy exchanges can be considered and the equilibrium equations can be studied. Errors in the raw digital information necessitate smoothing of the data before derivatives can be obtained. Researchers employ a variety of curve-smoothing techniques, including filtering and polynomial spline methods. It is essential that the researcher understand the accuracy which can be expected in velocities and accelerations obtained from smoothed digital information. This paper considers particular types of data inherent in athletic motion and the expected accuracy of calculated velocities and accelerations using typical error distributions in the raw digital information. The types of data considered include high-acceleration, impact, and smooth motion.
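A common modern instance of the smooth-then-differentiate step discussed above is the Savitzky-Golay filter, which fits local polynomials to the raw digitized coordinates and returns their derivatives. The sampling rate, noise level, and window settings below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 0.01                                   # assumed 100 Hz digitizing rate
t = np.arange(0.0, 2.0, dt)
pos_true = np.sin(2 * np.pi * t)            # idealized smooth motion
pos_raw = pos_true + np.random.default_rng(3).normal(scale=0.005, size=t.size)

# Smooth and differentiate in one pass: local cubic fits over 21 samples.
vel = savgol_filter(pos_raw, window_length=21, polyorder=3, deriv=1, delta=dt)
acc = savgol_filter(pos_raw, window_length=21, polyorder=3, deriv=2, delta=dt)
```

Note how quickly noise is amplified: digitizing noise of 0.005 units is barely visible in position but grows with each differentiation, which is the accuracy trade-off the paper quantifies.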
Smoothness of In vivo Spectral Baseline Determined by Mean Squared Error
Zhang, Yan; Shen, Jun
2013-01-01
Purpose: A nonparametric smooth line is usually added to the spectral model to account for background signals in in vivo magnetic resonance spectroscopy (MRS). The assumed smoothness of the baseline significantly influences quantitative spectral fitting. In this paper, a method is proposed to minimize baseline influences on estimated spectral parameters. Methods: The non-parametric baseline function with a given smoothness was treated as a function of spectral parameters. Its uncertainty was measured by root-mean-squared error (RMSE). The proposed method was demonstrated with a simulated spectrum and in vivo spectra of both short echo time (TE) and averaged echo times. The estimated in vivo baselines were compared with the metabolite-nulled spectra and the LCModel-estimated baselines. The accuracies of the estimated baseline and metabolite concentrations were further verified by cross-validation. Results: An optimal smoothness condition was found that led to the minimal baseline RMSE. In this condition, the best fit was balanced against minimal baseline influences on metabolite concentration estimates. Conclusion: Baseline RMSE can be used to indicate estimated baseline uncertainties and serve as the criterion for determining the baseline smoothness of in vivo MRS. PMID:24259436
DOE Office of Scientific and Technical Information (OSTI.GOV)
McVicker, A; Oldham, M; Yin, F
2014-06-15
Purpose: To test the ability of the TG-119 commissioning process and RPC credentialing to detect errors in the commissioning process for a commercial Treatment Planning System (TPS). Methods: We introduced commissioning errors into the commissioning process for the Anisotropic Analytical Algorithm (AAA) within the Eclipse TPS. We included errors in Dosimetric Leaf Gap (DLG), electron contamination, flattening filter material, and beam profile measurement with an inappropriately large Farmer chamber (simulated using sliding window smoothing of profiles). We then evaluated the clinical impact of these errors on clinical intensity modulated radiation therapy (IMRT) plans (head and neck, low and intermediate risk prostate, mesothelioma, and scalp) by looking at PTV D99, and mean and max OAR dose. Finally, for errors with substantial clinical impact we determined sensitivity of the RPC IMRT film analysis at the midpoint between PTV and OAR using a 4 mm distance-to-agreement metric, and of a 7% TLD dose comparison. We also determined sensitivity of the 3 dose planes of the TG-119 C-shape IMRT phantom using gamma criteria of 3%/3 mm. Results: The largest clinical impact came from large changes in the DLG, with a change of 1 mm resulting in up to a 5% change in the primary PTV D99. This resulted in a discrepancy in the RPC TLDs in the PTVs and OARs of 7.1% and 13.6% respectively, which would have resulted in detection. While use of an incorrect flattening filter caused only subtle errors (<1%) in clinical plans, the effect was most pronounced for the RPC TLDs in the OARs (>6%). Conclusion: The AAA commissioning process within the Eclipse TPS is surprisingly robust to user error. When errors do occur, the RPC and TG-119 commissioning credentialing criteria are effective at detecting them; however, OAR TLDs are the most sensitive despite the RPC currently excluding them from analysis.
Pressure gradient effects on heat transfer to reusable surface insulation tile-array gaps
NASA Technical Reports Server (NTRS)
Throckmorton, D. A.
1975-01-01
An experimental investigation was performed to determine the effect of pressure gradient on the heat transfer within space shuttle reusable surface insulation (RSI) tile-array gaps under thick, turbulent boundary-layer conditions. Heat-transfer and pressure measurements were obtained on a curved array of full-scale simulated RSI tiles in a tunnel-wall boundary layer at a nominal free-stream Mach number and free-stream Reynolds numbers. Transverse pressure gradients of varying degree were induced over the model surface by rotating the curved array with respect to the flow. Definition of the tunnel-wall boundary-layer flow was obtained by measurement of boundary-layer pitot pressure profiles, wall pressure, and heat transfer. Flat-plate heat-transfer data were correlated and a method was derived for prediction of heat transfer to a smooth curved surface in the highly three-dimensional tunnel-wall boundary-layer flow. Pressure on the floor of the RSI tile-array gap followed the trends of the external surface pressure. Heat transfer to the surface immediately downstream of a transverse gap is higher than that for a smooth surface at the same location. Heating to the wall of a transverse gap, and immediately downstream of it, at its intersection with a longitudinal gap is significantly greater than that for the simple transverse gap.
Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy.
Liu, Yan-De; Ying, Yi-Bin; Fu, Xia-Ping
2005-03-01
To develop nondestructive acidity prediction for intact Fuji apples, the potential of the Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples, harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques, partial least squares (PLS) and principal component regression (PCR). A total of 120 Fuji apples were tested, and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectra treatments were also quantified. Calibration models based on smoothed spectra were slightly worse than those based on derivative spectra, and the best result was obtained when the segment length was 5 nm and the gap size was 10 points. Depending on data preprocessing and the PLS method, the best prediction model yielded a coefficient of determination (r²) of 0.759, a low root mean square error of prediction (RMSEP) of 0.0677, and a low root mean square error of calibration (RMSEC) of 0.0562. The results indicated the feasibility of FT-NIR spectral analysis for predicting apple valid acidity in a nondestructive way.
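The gap-and-segment derivative mentioned above (segment means separated by a gap) can be sketched as follows. This is one common reading of the Norris gap-segment convention, with all indices in points; the exact convention of the authors' software is an assumption of this sketch.

```python
import numpy as np

def gap_segment_derivative(spectrum, gap=10, segment=5):
    """First derivative as the difference between the means of two
    `segment`-point windows whose inner edges are `gap` points apart,
    divided by the center-to-center distance (gap + segment points)."""
    n = spectrum.size
    half = gap // 2
    out = np.full(n, np.nan)           # edges remain undefined
    for i in range(half + segment, n - half - segment + 1):
        left = spectrum[i - half - segment:i - half].mean()
        right = spectrum[i + half:i + half + segment].mean()
        out[i] = (right - left) / (gap + segment)
    return out

# On a perfectly linear "spectrum" the derivative recovers the slope.
wavelength_index = np.arange(100)
linear_spectrum = 2.0 * wavelength_index
d1 = gap_segment_derivative(linear_spectrum)
```

Averaging over a segment before differencing is what makes this derivative robust to the measurement noise that plain point differences would amplify.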
Puri, Rajinder N; Fan, Ya-Ping; Rattan, Satish
2002-08-01
We examined the role of mitogen-activated protein kinase (p(44/42) MAPK) in ANG II-induced contraction of lower esophageal sphincter (LES) and internal anal sphincter (IAS) smooth muscles. Studies were performed in the isolated smooth muscles and cells (SMC). ANG II-induced changes in the levels of phosphorylation of different signal transduction and effector proteins were determined before and after selective inhibitors. ANG II-induced contraction of the rat LES and IAS SMC was inhibited by genistein, PD-98059 [a specific inhibitor of MAPK kinases (MEK 1/2)], herbimycin A (a pp60(c-src) inhibitor), and antibodies to pp60(c-src) and p(120) ras GTPase-activating protein (p(120) rasGAP). ANG II-induced contraction of the tonic smooth muscles was accompanied by an increase in tyrosine phosphorylation of p(120) rasGAP. These were attenuated by genistein but not by PD-98059. ANG II-induced increase in phosphorylations of p(44/42) MAPKs and caldesmon was attenuated by both genistein and PD-98059. We conclude that pp60(c-src) and p(44/42) MAPKs play an important role in ANG II-induced contraction of LES and IAS smooth muscles.
A new polishing process for large-aperture and high-precision aspheric surface
NASA Astrophysics Data System (ADS)
Nie, Xuqing; Li, Shengyi; Dai, Yifan; Song, Ci
2013-07-01
High-precision aspheric surfaces are difficult to achieve because of the mid-spatial frequency (MSF) error introduced in the finishing step. The influence of MSF error is studied through simulations and experiments. In this paper, a new polishing process based on magnetorheological finishing (MRF), smooth polishing (SP), and ion beam figuring (IBF) is proposed. A 400 mm aperture parabolic surface was polished with this new process. SP is applied after rough machining to control the MSF error. In the middle finishing step, most of the low-spatial frequency error is removed rapidly by MRF, and the MSF error is then restricted by SP; finally, IBF is used to finish the surface. The surface accuracy was improved from an initial 37.691 nm (rms, 95% aperture) to a final 4.195 nm. The results show that the new polishing process is effective for manufacturing large-aperture, high-precision aspheric surfaces.
Optimal interpolation analysis of leaf area index using MODIS data
Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve
2006-01-01
A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data appropriate for numerical weather prediction. A series of techniques and procedures is applied, including data quality control, time-series data smoothing, and simple data analysis. The LAI analysis is an optimal combination of the MODIS observations and a derived climatology, weighted by their associated errors σo and σc. The “best estimate” LAI is derived from a simple three-point smoothing technique combined with a selection of maximum LAI (after data quality control) values to ensure a higher quality. The LAI climatology is a time-smoothed mean value of the “best estimate” LAI during the years 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the “best estimate” LAI, and the climatological error is obtained by comparing the “best estimate” LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. Demonstration of the method described in this paper is presented for the 15-km grid of the Meteorological Service of Canada (MSC)'s regional version of the numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observation data. They are also more realistic than the LAI data currently used operationally at the MSC, which is based on land-cover databases.
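The error-weighted combination described above amounts to inverse-variance weighting of the observation against the climatology. A minimal sketch, with illustrative numeric values:

```python
def oi_lai(lai_obs, lai_clim, sigma_o, sigma_c):
    """Inverse-variance ("optimal interpolation") blend of a MODIS LAI
    observation and the climatology, weighted by the observation error
    sigma_o and the climatological error sigma_c."""
    w_obs = sigma_c**2 / (sigma_o**2 + sigma_c**2)
    return w_obs * lai_obs + (1.0 - w_obs) * lai_clim

# Equal errors -> the analysis falls halfway between the two inputs.
lai_analysis = oi_lai(lai_obs=3.0, lai_clim=2.0, sigma_o=0.5, sigma_c=0.5)
```

When the observation error dominates, the analysis relaxes toward the climatology, which is exactly the behavior that gives the final time series its smooth temporal evolution.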
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
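The feathering approach, and why it corrupts known vertex values, can be sketched as a per-vertex linear blend. The weight function and feather width are illustrative assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def feather_blend(known, estimate, dist_to_boundary, feather_width=5.0):
    """Linear feathering between the known patient surface and the shape
    model estimate: the weight ramps from 0 at the boundary to 1 at
    feather_width (units and width are illustrative). Any vertex with
    weight < 1 has its known value altered, which is the drawback
    reported for feathering."""
    w = np.clip(dist_to_boundary / feather_width, 0.0, 1.0)
    return w * known + (1.0 - w) * estimate
```

The Thin Plate Spline alternative avoids this by warping the estimate to match the known vertices exactly, leaving the true surface untouched.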
NASA Technical Reports Server (NTRS)
Thibodeaux, J. J.
1977-01-01
The results of a simulation study performed to determine the effects of gyro verticality error on lateral autoland tracking and landing performance are presented. A first order vertical gyro error model was used to generate the measurement of the roll attitude feedback signal normally supplied by an inertial navigation system. The lateral autoland law used was an inertially smoothed control design. The effects of initial angular gyro tilt errors (2 deg, 3 deg, 4 deg, and 5 deg), introduced prior to localizer capture, were investigated by use of a small perturbation aircraft simulation. These errors represent the deviations which could occur in the conventional attitude sensor as a result of maneuver-induced spin-axis misalignment and drift. Results showed that for a 1.05 deg per minute erection rate and a 5 deg initial tilt error, ON COURSE autoland control logic was not satisfied. Failure to attain the ON COURSE mode precluded high control loop gains and localizer beam path integration and resulted in unacceptable beam standoff at touchdown.
Past observable dynamics of a continuously monitored qubit
NASA Astrophysics Data System (ADS)
García-Pintos, Luis Pedro; Dressel, Justin
2017-12-01
Monitoring a quantum observable continuously in time produces a stochastic measurement record that noisily tracks the observable. For a classical process, such noise may be reduced to recover an average signal by minimizing the mean squared error between the noisy record and a smooth dynamical estimate. We show that for a monitored qubit, this usual procedure returns unusual results. While the record seems centered on the expectation value of the observable during causal generation, examining the collected past record reveals that it better approximates a moving-mean Gaussian stochastic process centered at a distinct (smoothed) observable estimate. We show that this shifted mean converges to the real part of a generalized weak value in the time-continuous limit without additional postselection. We verify that this smoothed estimate minimizes the mean squared error even for individual measurement realizations. We go on to show that if a second observable is weakly monitored concurrently, then that second record is consistent with the smoothed estimate of the second observable based solely on the information contained in the first observable record. Moreover, we show that such a smoothed estimate made from incomplete information can still outperform estimates made using full knowledge of the causal quantum state.
Celik, Ozkan; O’Malley, Marcia K.; Boake, Corwin; Levin, Harvey S.; Yozbatiran, Nuray; Reistetter, Timothy A.
2016-01-01
In this paper, we analyze the correlations between four clinical measures (Fugl–Meyer upper extremity scale, Motor Activity Log, Action Research Arm Test, and Jebsen-Taylor Hand Function Test) and four robotic measures (smoothness of movement, trajectory error, average number of target hits per minute, and mean tangential speed), used to assess motor recovery. Data were gathered as part of a hybrid robotic and traditional upper extremity rehabilitation program for nine stroke patients. Smoothness of movement and trajectory error, temporally and spatially normalized measures of movement quality defined for point-to-point movements, were found to have significant moderate to strong correlations with all four of the clinical measures. The strong correlations suggest that smoothness of movement and trajectory error may be used to compare outcomes of different rehabilitation protocols and devices effectively, provide improved resolution for tracking patient progress compared to only pre- and post-treatment measurements, enable accurate adaptation of therapy based on patient progress, and deliver immediate and useful feedback to the patient and therapist. PMID:20388607
What triggers catch-up saccades during visual tracking?
de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe
2002-03-01
When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet, the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could bring a significant contribution to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T_XE). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T_XE between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T_XE becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
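The triggering rule can be sketched directly from the abstract's numbers. Signs and units are simplified here: treating T_XE as position error divided by the rate at which that error is being closed is an assumption about the exact definition.

```python
def eye_crossing_time(position_error, velocity_error):
    """Predicted time (in seconds) for the eye trajectory to cross the
    target: position error divided by the rate at which it is closed."""
    return position_error / velocity_error

def triggers_saccade(position_error, velocity_error, lo=0.040, hi=0.180):
    """A catch-up saccade is predicted whenever the eye crossing time
    falls outside the 40-180 ms smooth-tracking window."""
    t_xe = eye_crossing_time(position_error, velocity_error)
    return not (lo <= t_xe <= hi)

# 2 deg position error closed at 20 deg/s -> T_XE = 100 ms: stay smooth.
stays_smooth = not triggers_saccade(2.0, 20.0)
```

Intuitively, a very short T_XE means the eye is about to overshoot, and a very long one means smooth pursuit alone will never catch the target; either way a saccade is needed.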
NUMERICAL CONVERGENCE IN SMOOTHED PARTICLE HYDRODYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Qirong; Li, Yuexing; Hernquist, Lars
2015-02-10
We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
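The cost scaling stated above can be made concrete with a small helper. The reference neighbor count and particle number used for normalization are illustrative assumptions, not numbers from the paper.

```python
def sph_cost_scaling(n_particles, nb_ref=64.0, n_ref=1e6, delta=0.5):
    """Neighbor count under the N_nb ∝ N^delta scaling, normalized so
    that N = n_ref uses nb_ref neighbors (both reference values are
    illustrative), and the resulting total cost ~ N * N_nb, i.e.
    O(N^(1 + delta))."""
    n_nb = nb_ref * (n_particles / n_ref) ** delta
    cost = n_particles * n_nb
    return n_nb, cost

# Quadrupling N doubles N_nb and multiplies the total cost by 8.
nb1, cost1 = sph_cost_scaling(1e6)
nb2, cost2 = sph_cost_scaling(4e6)
```

The factor-of-8 cost growth for a factor-of-4 particle increase is precisely the O(N^1.5) behavior the paper derives for δ ≈ 0.5.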
Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut
2014-05-01
Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating that a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Error detection and data smoothing based on local procedures
NASA Technical Reports Server (NTRS)
Guerra, V. M.
1974-01-01
An algorithm is presented which is able to locate isolated bad points and correct them without contaminating the rest of the good data. This work has been greatly influenced and motivated by what is currently done in the manual loft. It is not within the scope of this work to handle small random errors characteristic of a noisy system, and it is therefore assumed that the bad points are isolated and relatively few when compared with the total number of points. Motivated by the desire to imitate the loftsman, a visual experiment was conducted to determine what is considered smooth data. This criterion is used to determine how much the data should be smoothed and to prove that this method produces such data. The method ultimately converges to a set of points that lies on the polynomial that interpolates the first and last points; however, convergence to such a set is definitely not the purpose of our algorithm. The proof of convergence is necessary to demonstrate that oscillation does not take place and that in a finite number of steps the method produces a set as smooth as desired.
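In the spirit of the algorithm described (locating isolated bad points and correcting them without contaminating the good data), here is a minimal sketch using a neighbor-interpolation residual test. The threshold rule and robust scale below are assumptions of this sketch, not the paper's loft-derived smoothness criterion.

```python
import numpy as np

def repair_isolated_errors(y, threshold=5.0, max_iter=10):
    """Iteratively find the worst point (largest deviation from the mean
    of its two neighbors); if it exceeds `threshold` times the robust
    residual scale, replace it by that neighbor mean. Stops once the
    data meet the criterion, so only isolated spikes are touched and
    the surrounding good points are left exactly as they were."""
    y = np.asarray(y, dtype=float).copy()
    for _ in range(max_iter):
        mid = 0.5 * (y[:-2] + y[2:])             # neighbor-based prediction
        resid = y[1:-1] - mid
        scale = np.median(np.abs(resid)) + 1e-12  # robust noise scale
        j = np.argmax(np.abs(resid))
        if np.abs(resid[j]) <= threshold * scale:
            break                                 # data already "smooth"
        y[j + 1] = mid[j]                         # correct the bad point
    return y

# A smooth quadratic trace with one isolated digitizing blunder.
y = (np.arange(20) * 0.1) ** 2
y[10] += 5.0
repaired = repair_isolated_errors(y)
```

Correcting one worst point per pass mirrors the paper's assumption that bad points are isolated and few; a global smoother would instead smear the spike into its neighbors.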
Shichinohe, Natsuko; Akao, Teppei; Kurkin, Sergei; Fukushima, Junko; Kaneko, Chris R S; Fukushima, Kikuro
2009-06-11
Cortical motor areas are thought to contribute "higher-order processing," but what that processing might include is unknown. Previous studies of the smooth pursuit-related discharge of supplementary eye field (SEF) neurons have not distinguished activity associated with the preparation for pursuit from discharge related to processing or memory of the target motion signals. Using a memory-based task designed to separate these components, we show that the SEF contains signals coding retinal image-slip-velocity, memory, and assessment of visual motion direction, the decision of whether to pursue, and the preparation for pursuit eye movements. Bilateral muscimol injection into SEF resulted in directional errors in smooth pursuit, errors of whether to pursue, and impairment of initial correct eye movements. These results suggest an important role for the SEF in memory and assessment of visual motion direction and the programming of appropriate pursuit eye movements.
Wisps in the outer edge of the Keeler Gap
NASA Astrophysics Data System (ADS)
Tiscareno, Matthew S.; Arnault, Ethan G.
2015-11-01
Superposed upon the relatively smooth outer edge of the Keeler Gap is a system of "wisps," which appear to be ring material protruding inward into the gap, usually with a sharp trailing edge and a smooth gradation back to the background edge location on the leading side (Porco et al. 2005, Science). The radial amplitude of wisps is usually 0.5 to 1 km, and their azimuthal extent is approximately a degree of longitude (~2400 km). Wisps are likely caused by an interplay between Daphnis (and perhaps other moons) and embedded moonlets within the ring, though the details remain unclear. Aside from the wisps, the Keeler Gap outer edge is the only one of the five sharp edges in the outer part of Saturn's A ring that is reasonably smooth in appearance (Tiscareno et al. 2005, DPS), with occultations indicating residuals less than 1 km upon a possibly non-zero eccentricity (R.G. French, personal communication, 2014). The other four (the inner and outer edges of the Encke Gap, the inner edge of the Keeler Gap, and the outer edge of the A ring itself) are characterized by wavy structure at moderate to high spatial frequencies, with amplitudes ranging from 2 to 30 km (Tiscareno et al. 2005, DPS). We will present a catalogue of wisp detections in Cassini images. We carry out repeated Gaussian fits of the radial edge location in order to characterize edge structure and visually scan those fitted edges in order to detect wisps. With extensive coverage in longitude and in time, we will report on how wisps evolve and move, both within an orbit period and on longer timescales. We will also report on the frequency and interpretation of wisps that deviate from the standard morphology. We will discuss the implications of our results for the origin and nature of wisps, and for the larger picture of how masses interact within Saturn's rings.
Wisps in the outer edge of the Keeler Gap
NASA Astrophysics Data System (ADS)
Tiscareno, M. S.; Arnault, E. G.
2014-12-01
The outer part of Saturn's A ring contains five sharp edges: the inner and outer edges of the Encke Gap and of the Keeler Gap (which contain the moons Pan and Daphnis, respectively), and the outer edge of the A ring itself. Four of these five edges are characterized by structure at moderate to high spatial frequencies, with amplitudes ranging from 2 to 30 km (Tiscareno et al. 2005, DPS). Only the outer edge of the Keeler Gap is reasonably smooth in appearance (Tiscareno et al. 2005, DPS), with occultations indicating residuals of less than 1 km upon a possibly non-zero eccentricity (R.G. French, personal communication, 2014). Superposed upon the relatively smooth outer edge of the Keeler Gap is a system of "wisps," which appear to be ring material protruding inward into the gap, usually with a sharp trailing edge and a smooth gradation back to the background edge location on the leading side (Porco et al. 2005, Science). The radial amplitude of wisps is usually 0.5 to 1 km, and their azimuthal extent is approximately a degree of longitude (~2400 km). Wisps are likely caused by an interplay between Daphnis (and perhaps other moons) and embedded moonlets within the ring, though the details remain unclear. We will present a catalogue of wisp detections in Cassini images. We carry out repeated Gaussian fits of the radial edge location in order to characterize edge structure (see Figure, which compares our fitted edge to the figure presented by Porco et al. 2005) and visually scan those fitted edges in order to detect wisps. With extensive coverage in longitude and in time, we will report on how wisps evolve and move, both within an orbit period and on longer timescales. We will also report on the frequency and interpretation of wisps that deviate from the standard morphology. We will discuss the implications of our results for the origin and nature of wisps, and for the larger picture of how masses interact within Saturn's rings.
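The repeated Gaussian fits of the radial edge location described above can be sketched as follows. This is a minimal illustration on synthetic data; the function names and the synthetic edge profile are our own inventions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(r, amp, r0, sigma, offset):
    """Gaussian model of the radial brightness signature near a ring edge."""
    return amp * np.exp(-0.5 * ((r - r0) / sigma) ** 2) + offset

def fit_edge_location(r, profile):
    """Fit a Gaussian to one radial profile and return the fitted centre r0 (km)."""
    p0 = [profile.max() - profile.min(),   # amplitude guess
          r[np.argmax(profile)],           # centre guess from the peak sample
          1.0,                             # width guess (km)
          profile.min()]                   # background guess
    popt, _ = curve_fit(gaussian, r, profile, p0=p0)
    return popt[1]

# Synthetic example: an edge signature centred at 136,505 km plus noise.
rng = np.random.default_rng(0)
r = np.linspace(136500.0, 136510.0, 200)
profile = gaussian(r, 1.0, 136505.0, 0.8, 0.1) + 0.02 * rng.standard_normal(r.size)
print(round(fit_edge_location(r, profile), 2))  # close to 136505.0
```

Repeating such a fit at each observed longitude yields the fitted edge that can then be scanned for wisp-like deviations.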
Application of Holt exponential smoothing and ARIMA method for data population in West Java
NASA Astrophysics Data System (ADS)
Supriatna, A.; Susanti, D.; Hertini, E.
2017-01-01
One time series method that is often used to predict data containing a trend is Holt's method. Holt's method applies separate smoothing parameters to the original data, with the aim of smoothing the trend value. In addition to Holt's method, the ARIMA method can be used on a wide variety of data, including data containing a trend pattern. The actual population data from 1998-2015 contain a trend, so both Holt's and the ARIMA method can be applied to obtain predictions for several periods. The best method is identified by the smallest MAPE and MAE errors. Holt's method yields 47,205,749 people in 2016; 47,535,324 in 2017; and 48,041,672 in 2018, with a MAPE of 0.469744 and an MAE of 189,731. The ARIMA method yields 46,964,682 people in 2016; 47,342,189 in 2017; and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
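Holt's method maintains a level and a trend estimate, each updated with its own smoothing parameter. A minimal sketch (the parameter values here are illustrative, not the ones fitted in the study):

```python
def holt_forecast(y, alpha=0.8, beta=0.2, horizon=3):
    """Holt's linear exponential smoothing.

    alpha smooths the level, beta smooths the trend; returns
    forecasts for the next `horizon` periods.
    """
    level, trend = y[0], y[1] - y[0]       # simple initialisation
    for x in y[1:]:
        last_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# Perfectly linear data is extrapolated exactly (up to float rounding).
print(holt_forecast([10, 12, 14, 16, 18]))  # -> approximately [20.0, 22.0, 24.0]
```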
Modernization of dump truck onboard system
NASA Astrophysics Data System (ADS)
Semenov, M. A.; Bolshunova, O. M.; Korzhev, A. A.; Kamyshyan, A. M.
2017-10-01
A review was made of the only automated dispatch system for mining dump trucks available on the domestic market. A method for upgrading the load-control system and the technological weighing process of the mining dump truck is proposed. The cargo weight during loading is determined from the gas pressure in the suspension cylinders at the moment the oscillations end and the vibration-damping process begins; a correction for the damping rate is applied. The error of weighing the cargo during loading is 2.5-3%, and that of the technological weighing process during driving is 1%, which corresponds to the error level of steady-state weighing equipment.
Stress Recovery and Error Estimation for 3-D Shell Structures
NASA Technical Reports Server (NTRS)
Riggs, H. R.
2000-01-01
The C-1-continuous (i.e., interelement-discontinuous) stress fields obtained from finite element analyses are in general of lower-order accuracy than the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially for error estimation. A previous project developed a penalized, discrete least-squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field from the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachments: Stress recovery and error estimation for shell structures (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).
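The core idea of such stress recovery, namely projecting discontinuous element stresses onto a continuous nodal field by a global least-squares fit, can be illustrated in one dimension. This is a generic L2-projection sketch under simplifying assumptions (linear elements, no penalty terms), not the SEA/PDLS formulation itself.

```python
import numpy as np

def l2_project_element_stresses(nodes, elem_stress):
    """Least-squares (L2) projection of piecewise-constant element stresses
    onto a continuous, piecewise-linear nodal field on a 1-D mesh."""
    n = len(nodes)
    M = np.zeros((n, n))   # global "mass" matrix
    f = np.zeros(n)        # right-hand side
    for e, sigma in enumerate(elem_stress):
        h = nodes[e + 1] - nodes[e]
        # Element mass matrix for linear shape functions, and its load vector.
        Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        fe = sigma * (h / 2.0) * np.ones(2)
        M[e:e + 2, e:e + 2] += Me
        f[e:e + 2] += fe
    return np.linalg.solve(M, f)

# A uniform stress field is reproduced exactly by the projection.
nodes = np.linspace(0.0, 1.0, 5)
print(l2_project_element_stresses(nodes, [3.0, 3.0, 3.0, 3.0]))  # ~[3 3 3 3 3]
```

The recovered nodal values are continuous by construction, which is what makes them usable inside Zienkiewicz-Zhu-type error estimators.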
Disclosure of Medical Errors in Oman
Norrish, Mark I. K.
2015-01-01
Objectives: This study aimed to provide insight into the preferences for and perceptions of medical error disclosure (MED) by members of the public in Oman. Methods: Between January and June 2012, an online survey was used to collect responses from 205 members of the public across five governorates of Oman. Results: A disclosure gap was revealed between the respondents’ preferences for MED and perceived current MED practices in Oman. This disclosure gap extended to both the type of error and the person most likely to disclose the error. Errors resulting in patient harm were found to have a strong influence on individuals’ perceived quality of care. In addition, full disclosure was found to be highly valued by respondents and able to mitigate for a perceived lack of care in cases where medical errors led to damages. Conclusion: The perceived disclosure gap between respondents’ MED preferences and perceptions of current MED practices in Oman needs to be addressed in order to increase public confidence in the national health care system. PMID:26052463
Kaminsky, Jan; Rodt, Thomas; Gharabaghi, Alireza; Forster, Jan; Brand, Gerd; Samii, Madjid
2005-06-01
The FE modeling of complex anatomical structures has not been solved satisfactorily so far. Voxel-based, as opposed to contour-based, algorithms allow automated mesh generation based on the image data; nonetheless, their geometric precision is limited. We developed an automated mesh generator that combines the advantages of voxel-based generation with an improved representation of the geometry obtained by displacing nodes onto the object surface. Models of an artificial 3D pipe section and a skull base were generated at different mesh densities using the newly developed geometric, unsmoothed and smoothed voxel generators. Compared to the analytic calculation for the 3D pipe section model, the normalized RMS error of the surface stress was 0.173-0.647 for the unsmoothed voxel models, 0.111-0.616 for the smoothed voxel models with small volume error, and 0.126-0.273 for the geometric models. The highest element-energy error, used as a criterion of mesh quality, was 2.61×10⁻² N·mm, 2.46×10⁻² N·mm and 1.81×10⁻² N·mm for the unsmoothed, smoothed and geometric voxel models, respectively. The geometric model of the 3D skull base resulted in the lowest element-energy error and volume error; this algorithm also allowed the best representation of anatomical details. The presented geometric mesh generator is universally applicable and allows automated and accurate modeling by combining the advantages of the voxel technique with improved surface modeling.
Flow structure and aerodynamic performance of a hovering bristled wing in low Re
NASA Astrophysics Data System (ADS)
Lee, Seunghun; Lahooti, Mohsen; Kim, Daegyoum
2017-11-01
Previous studies of bristled wings have mainly focused on simple wing kinematics such as translation or rotation. The aerodynamic performance of a bristled wing in the quasi-steady phase is known to be comparable to that of a smooth wing without gaps, because the shear layers in the gaps of the bristled wing are sufficiently developed to block the gaps. However, we point out that in the starting transient phase, where the shear layers are not yet fully developed, the force generation of a bristled wing is not as efficient as in the quasi-steady state. The performance in the transient phase is important for understanding the aerodynamics of a bristled wing in unsteady motion. In hovering, due to repeated stroke reversals, the formation and development of shear layers inside the gaps is repeated in each stroke. In this study, a bristled wing in hovering is numerically investigated at a low Reynolds number of O(10). We especially focus on the development of shear layers during a stroke reversal and its effect on the overall propulsive performance. Although the aerodynamic force generation is slightly reduced by the gap vortices, the asymmetric behavior of the vortices in a gap between bristles during a stroke reversal gives the bristled wing a higher lift-to-drag ratio than a smooth wing.
Limb Retrievals of TES solarband/IR data (and MCS solarband data)
NASA Astrophysics Data System (ADS)
Wolff, M. J.; Pankine, A.
2016-12-01
Vertical variations in aerosol distributions (and their microphysical properties) can have a dramatic impact on the state and evolution of the Martian atmosphere. This has been clearly delineated by recent work using retrieval products produced by the Mars Climate Sounder (MCS) team from limb observations by the MCS IR bolometers. However, similar products for Thermal Emission Spectrometer (TES) limb observations have not been as widely disseminated. In addition, the solar band channels of both datasets have been essentially unanalyzed. Our overarching goal has been to fill these gaps in order to address particle size studies, as well as to generate products that can be used by the wider community. Our presentation will include: 1) a summary of our limb radiative transfer algorithms and retrieval scheme; 2) the limitations imposed by "Smoothing Error" and by systematic radiometric error on retrievals in the lower and upper atmosphere, respectively; 3) vertical profiles of opacity and particle size associated with the evolution of the 2001 TES dust storm; 4) the use of limb retrievals to estimate integrated-column optical depths (validated against Mars Exploration Rover and TES emission phase function measurements); and 5) plans for an ongoing archive to be used for the distribution of the derived profiles and associated retrieval metadata. This work has been supported by NASA with a Mars Data Analysis Program award (grant NNX10AO23G).
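The "Smoothing Error" mentioned in item 2 is conventionally quantified in the optimal-estimation framework through the averaging kernel A and the a priori covariance S_a. A minimal numerical sketch of that generic formula (not the authors' specific retrieval code):

```python
import numpy as np

def smoothing_error_covariance(A, S_a):
    """Smoothing-error covariance in optimal estimation:
    S_s = (A - I) S_a (A - I)^T, where A is the averaging kernel
    and S_a the a priori covariance of the profile."""
    I = np.eye(A.shape[0])
    return (A - I) @ S_a @ (A - I).T

# Toy check: a retrieval that perfectly resolves the profile (A = I)
# has zero smoothing error, whatever the a priori covariance.
A = np.eye(3)
S_a = np.diag([1.0, 2.0, 3.0])
print(smoothing_error_covariance(A, S_a))  # all zeros
```

The farther A departs from the identity (i.e., the coarser the vertical resolution), the more of the a priori variance leaks into the retrieved profile as smoothing error.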
Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna
2013-05-01
Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors namely: (1) design of medication charts which complicates order processing and record keeping (2) lack of coordination mechanisms between participants which results in misalignment of local practices (3) reliance on restricted communication bandwidth channels mainly telephone and fax which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. 
Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Gairhe, Salina; Bauer, Natalie N; Gebb, Sarah A; McMurtry, Ivan F
2012-11-01
Myoendothelial gap junctional signaling mediates pulmonary arterial endothelial cell (PAEC)-induced activation of latent TGF-β and differentiation of cocultured pulmonary arterial smooth muscle cells (PASMCs), but the nature of the signal passing from PAECs to PASMCs through the gap junctions is unknown. Because PAECs but not PASMCs synthesize serotonin, and serotonin can pass through gap junctions, we hypothesized that the monoamine is the intercellular signal. We aimed to determine whether PAEC-derived serotonin mediates PAEC-induced myoendothelial gap junction-dependent activation of TGF-β signaling and differentiation of PASMCs. Rat PAECs and PASMCs were monocultured or cocultured with (touch) or without (no-touch) direct cell-cell contact. In all cases, tryptophan hydroxylase 1 (Tph1) transcripts were expressed predominantly in PAECs. Serotonin was detected by immunostaining in both PAECs and PASMCs in PAEC/PASMC touch coculture but was not found in PASMCs in either PAEC/PASMC no-touch coculture or in PASMC/PASMC touch coculture. Furthermore, inhibition of gap junctions but not of the serotonin transporter in PAEC/PASMC touch coculture prevented serotonin transfer from PAECs to PASMCs. Inhibition of serotonin synthesis pharmacologically or by small interfering RNAs to Tph1 in PAECs inhibited the PAEC-induced activation of TGF-β signaling and differentiation of PASMCs. We concluded that serotonin synthesized by PAECs is transferred through myoendothelial gap junctions into PASMCs, where it activates TGF-β signaling and induces a more differentiated phenotype. This finding suggests a novel role of gap junction-mediated intercellular serotonin signaling in regulation of PASMC phenotype.
Critical older driver errors in a national sample of serious U.S. crashes.
Cicchino, Jessica B; McCartt, Anne T
2015-07-01
Older drivers are at increased risk of crash involvement per mile traveled. The purpose of this study was to examine older driver errors in serious crashes to determine which errors are most prevalent. The National Highway Traffic Safety Administration's National Motor Vehicle Crash Causation Survey collected in-depth, on-scene data for a nationally representative sample of 5470 U.S. police-reported passenger vehicle crashes during 2005-2007 for which emergency medical services were dispatched. There were 620 crashes involving 647 drivers aged 70 and older, representing 250,504 crash-involved older drivers. The proportion of various critical errors made by drivers aged 70 and older were compared with those made by drivers aged 35-54. Driver error was the critical reason for 97% of crashes involving older drivers. Among older drivers who made critical errors, the most common were inadequate surveillance (33%) and misjudgment of the length of a gap between vehicles or of another vehicle's speed, illegal maneuvers, medical events, and daydreaming (6% each). Inadequate surveillance (33% vs. 22%) and gap or speed misjudgment errors (6% vs. 3%) were more prevalent among older drivers than middle-aged drivers. Seventy-one percent of older drivers' inadequate surveillance errors were due to looking and not seeing another vehicle or failing to see a traffic control rather than failing to look, compared with 40% of inadequate surveillance errors among middle-aged drivers. About two-thirds (66%) of older drivers' inadequate surveillance errors and 77% of their gap or speed misjudgment errors were made when turning left at intersections. When older drivers traveled off the edge of the road or traveled over the lane line, this was most commonly due to non-performance errors such as medical events (51% and 44%, respectively), whereas middle-aged drivers were involved in these crash types for other reasons. 
Gap or speed misjudgment errors and inadequate surveillance errors were significantly more prevalent among female older drivers than among female middle-aged drivers, but the prevalence of these errors did not differ significantly between older and middle-aged male drivers. These errors comprised 51% of errors among older female drivers but only 31% among older male drivers. Efforts to reduce older driver crash involvements should focus on diminishing the likelihood of the most common driver errors. Countermeasures that simplify or remove the need to make left turns across traffic such as roundabouts, protected left turn signals, and diverging diamond intersection designs could decrease the frequency of inadequate surveillance and gap or speed misjudgment errors. In the future, vehicle-to-vehicle and vehicle-to-infrastructure communications may also help protect older drivers from these errors. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effect of data gaps on correlation dimension computed from light curves of variable stars
NASA Astrophysics Data System (ADS)
George, Sandip V.; Ambika, G.; Misra, R.
2015-11-01
Observational data, especially astrophysical data, are often limited by gaps that arise from a lack of observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, these smoothing techniques can introduce artificial effects, especially when non-linear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of a system, without using any interpolation. For this we introduce gaps artificially into synthetic data derived from standard chaotic systems, such as the Rössler and Lorenz systems, with the frequency of occurrence and the size of the missing data drawn from two Gaussian distributions. We then study the changes in correlation dimension with changes in the distributions of gap position and size. We find that for a considerable range of mean gap frequency and size, the value of the correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Our study thus introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to size and position. This is illustrated with data from the light curves of three variable stars: R Scuti, U Monocerotis and SU Tauri. We also demonstrate how cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as being chaotic in origin. This is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated D2 value when interpolated using a cubic spline. In addition, we find that a careful choice of binning, besides reducing noise, can help shift the gap distribution into the range of reliable D2 values.
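The artificial-gap procedure described above, with gap spacings and sizes drawn from Gaussian distributions, can be sketched as follows. The distribution parameters here are illustrative, not those used in the paper.

```python
import numpy as np

def inject_gaps(series, mean_gap=5.0, sd_gap=1.0,
                mean_spacing=50.0, sd_spacing=10.0, seed=0):
    """Mask out gaps whose sizes and spacings are drawn from Gaussian
    distributions, mimicking irregular observational coverage.
    Returns the gappy series and the boolean keep-mask."""
    rng = np.random.default_rng(seed)
    mask = np.ones(len(series), dtype=bool)
    i = 0
    while i < len(series):
        i += max(1, int(rng.normal(mean_spacing, sd_spacing)))  # observed stretch
        gap = max(1, int(rng.normal(mean_gap, sd_gap)))          # missing stretch
        mask[i:i + gap] = False
        i += gap
    return series[mask], mask

# Example: a sinusoid with ~10% of samples removed in Gaussian-distributed gaps.
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))
kept, mask = inject_gaps(x)
print(len(kept), mask.mean())  # number kept, fraction of points retained
```

The correlation dimension would then be computed from `kept` directly, with no interpolation, and compared against the value from the complete series.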
Torque scaling in small-gap Taylor-Couette flow with smooth or grooved wall
NASA Astrophysics Data System (ADS)
Zhu, Bihai; Ji, Zengqi; Lou, Zhengkun; Qian, Pengcheng
2018-03-01
The torque in Taylor-Couette flow for radius ratios η ≥ 0.97, with smooth or grooved static outer cylinders, is studied experimentally, with the Reynolds number of the inner cylinder reaching up to Re_i = 2×10^5, corresponding to a Taylor number of up to Ta = 5×10^10. The grooves are perpendicular to the mean flow and similar to the structure of a submersible motor stator. It is found that the dimensionless torque G, at a given Re_i and η, is significantly greater for the grooved cases than for the smooth cases. We compare our experimental torques for the smooth cases to the fit proposed by Wendt [F. Wendt, Ing.-Arch. 4, 577 (1933), 10.1007/BF02084936] and the fit proposed by Bilgen and Boulos [E. Bilgen and R. Boulos, J. Fluids Eng. 95, 122 (1973), 10.1115/1.3446944], which shows that both fits are outside their range of validity for small gaps. Furthermore, an additional dimensionless torque (the angular velocity flux) Nu_ω in the smooth cases exhibits an effective scaling of Nu_ω ~ Ta^0.39 in the ultimate regime, which occurs at a lower Taylor number, Ta ≈ 3.5×10^7, than in the well-explored η = 0.714 case (at Ta ≈ 3×10^8). The same effective scaling exponent, 0.39, is also evident in the grooved cases, but for η = 0.97 and 0.985 there is a peak before this exponent appears.
Smooth interface effects on the confinement properties of GaSb/AlxGa1-xSb quantum wells
NASA Astrophysics Data System (ADS)
Adib, Artur B.; de Sousa, Jeanlex S.; Farias, Gil A.; Freire, Valder N.
2000-10-01
A theoretical investigation of the confinement properties of GaSb/AlxGa1-xSb single quantum wells (QWs) with smooth interfaces is performed. Error-function (erf)-like interfacial aluminum molar fraction variations in the QWs are assumed, from which the carriers' effective masses and confinement potential profiles can be obtained. It is shown that the existence of smooth interfaces considerably blueshifts the confined carrier and exciton energies, an effect which is stronger in thin QWs.
Holt-Winters Forecasting: A Study of Practical Applications for Healthcare Managers
2006-05-25
[Front-matter excerpt] Table 1: Holt-Winters smoothing parameters and Mean Absolute Percentage Errors for pseudoephedrine prescriptions; Table 2: confidence intervals. Figure 1: line plot of pseudoephedrine prescriptions forecast using the smoothing parameters. The first series studied represents monthly prescriptions of pseudoephedrine, a drug commonly prescribed to relieve nasal congestion and other symptoms.
Calkins, Monica E; Iacono, William G; Ones, Deniz S
2008-12-01
Several forms of eye movement dysfunction (EMD) are regarded as promising candidate endophenotypes of schizophrenia. Discrepancies in individual study results have led to inconsistent conclusions regarding particular aspects of EMD in relatives of schizophrenia patients. To quantitatively evaluate and compare the candidacy of smooth pursuit, saccade and fixation deficits in first-degree biological relatives, we conducted a set of meta-analytic investigations. Among 18 measures of EMD, memory-guided saccade accuracy and error rate, global smooth pursuit dysfunction, intrusive saccades during fixation, antisaccade error rate and smooth pursuit closed-loop gain emerged as best differentiating relatives from controls (standardized mean differences ranged from .46 to .66), with no significant differences among these measures. Anticipatory saccades, but no other smooth pursuit component measures were also increased in relatives. Visually-guided reflexive saccades were largely normal. Moderator analyses examining design characteristics revealed few variables affecting the magnitude of the meta-analytically observed effects. Moderate effect sizes of relatives v. controls in selective aspects of EMD supports their endophenotype potential. Future work should focus on facilitating endophenotype utility through attention to heterogeneity of EMD performance, relationships among forms of EMD, and application in molecular genetics studies.
NASA Technical Reports Server (NTRS)
Beutter, Brent R.; Stone, Leland S.
1997-01-01
Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.
Finke, K; Tilgner, A
2012-07-01
We study numerically the dynamo transition of an incompressible electrically conducting fluid filling the gap between two concentric spheres. In a first series of simulations, the fluid is driven by the rotation of a smooth inner sphere through no-slip boundary conditions, whereas the outer sphere is stationary. In a second series a volume force intended to simulate a rough surface drives the fluid next to the inner sphere within a layer of thickness one-tenth of the gap width. We investigate the effect of the boundary layer thickness on the dynamo threshold in the turbulent regime. The simulations show that the boundary forcing simulating the rough surface lowers the necessary rotation rate, which may help to improve spherical dynamo experiments.
Verification of micro-scale photogrammetry for smooth three-dimensional object measurement
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard
2017-05-01
By using sub-millimetre laser speckle pattern projection, we show that photogrammetry systems are able to measure smooth three-dimensional objects with surface height deviations of less than 1 μm. The projection of laser speckle patterns allows correspondences on the surface of smooth spheres to be found, and as a result, verification artefacts with low surface height deviations were measured. A combination of VDI/VDE and ISO standards was also utilised to provide a complete verification method and to determine the quality parameters for the system under test. Using the proposed method applied to a photogrammetry system, a 5 mm radius sphere was measured with an expanded uncertainty of 8.5 μm for sizing errors and 16.6 μm for form errors, with a 95% confidence interval. Sphere-spacing lengths between 6 mm and 10 mm were also measured by the photogrammetry system and were found to have expanded uncertainties of around 20 μm, with a 95% confidence interval.
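The expanded uncertainties quoted above follow the usual convention U = k·u_c, where the combined standard uncertainty u_c is expanded by a coverage factor k ≈ 2 for a 95% confidence interval. A minimal sketch; the individual uncertainty components below are invented for illustration, not taken from the paper's budget:

```python
def expanded_uncertainty(std_uncertainties, k=2.0):
    """Combine independent standard uncertainties in quadrature and
    expand by the coverage factor k (k ~ 2 gives ~95% coverage)."""
    u_c = sum(u ** 2 for u in std_uncertainties) ** 0.5
    return k * u_c

# Hypothetical sizing-error budget in micrometres: combined, then expanded.
print(round(expanded_uncertainty([3.0, 2.5, 1.5]), 2))  # -> 8.37
```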
Jiang, Qingan; Wu, Wenqi; Jiang, Mingming; Li, Yun
2017-01-01
High-accuracy railway track surveying is essential for railway construction and maintenance. Traditional approaches based on total station equipment are not efficient enough, since high-precision surveying frequently requires static measurements. This paper proposes a new filtering and smoothing algorithm based on the integration of an IMU/odometer and landmarks for railway track surveying. To overcome the difficulty of estimating too many error parameters from too few landmark observations, a new model with completely observable error states is established by combining error terms of the system. Based on covariance analysis, the analytical relationship between the railway track surveying accuracy requirements and the equivalent gyro drifts, including bias instability and random walk noise, is established. Experimental results show that the accuracy of the new filtering and smoothing algorithm for railway track surveying can reach 1 mm (1σ) when using a Ring Laser Gyroscope (RLG)-based Inertial Measurement Unit (IMU) with a gyro bias instability of 0.03°/h and random walk noise of 0.005°/√h, while control points of the track control network (CPIII) position observations are provided by an optical total station at roughly 60 m intervals. The proposed approach simultaneously satisfies the demands of high accuracy and work efficiency for railway track surveying. PMID:28629191
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data for the price of crude palm oil (RM/tonne), the exchange rate of the Malaysian Ringgit (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg), i.e. three different time series, are used in the comparison. The forecasting accuracy of each model is then measured by examining the prediction errors produced, using the Mean Squared Error (MSE), the Mean Absolute Percentage Error (MAPE) and the Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce better long-term forecasts with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to the next, as in the exchange-rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the exchange rate, whose time series has a narrow range from one point to the next, but cannot produce a better prediction over a longer forecasting period.
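The three accuracy measures used in the comparison have simple closed forms and can be computed directly; a minimal sketch with invented numbers:

```python
def forecast_errors(actual, predicted):
    """Return (MSE, MAPE in percent, MAD) for paired series."""
    errors = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e ** 2 for e in errors) / len(errors)
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errors, actual)) / len(errors)
    mad = sum(abs(e) for e in errors) / len(errors)
    return mse, mape, mad

# Toy example: two forecasts off by 10 units, one exact.
mse, mape, mad = forecast_errors([100, 200, 400], [110, 190, 400])
print(mse, mape, mad)  # -> ~66.67, 5.0, ~6.67
```

Whichever model minimises these measures on a held-out period is declared the better forecaster, which is exactly the selection rule used in the study.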
Yang, Guangming; Peng, Xiaoyong; Wu, Yue; Li, Tao; Liu, Liangming
2017-10-01
We examined the roles played by gap junctions (GJs) and the GJ channel protein connexin 43 (Cx43) in arginine vasopressin (AVP)-induced vasoconstriction after hemorrhagic shock and their relationship to Rho kinase (ROCK) and protein kinase C (PKC). The results showed that AVP induced an endothelium-independent contraction in rat superior mesenteric arteries (SMAs). Blocking the GJs significantly decreased the contractile response of SMAs and vascular smooth muscle cells (VSMCs) to AVP after shock and hypoxia. The selective Cx43-mimetic peptide inhibited the vascular contractile effect of AVP after shock and hypoxia. AVP restored hypoxia-induced decrease of Cx43 phosphorylation at Ser 262 and gap junctional communication in VSMCs. Activation of RhoA with U-46619 increased the contractile effect of AVP. This effect was antagonized by the ROCK inhibitor Y27632 and the Cx43-mimetic peptide. In contrast, neither an agonist nor an inhibitor of PKC had significant effects on AVP-induced contraction after hemorrhagic shock. In addition, silencing of Cx43 with siRNA blocked the AVP-induced increase of ROCK activity in hypoxic VSMCs. In conclusion, AVP-mediated vascular contractile effects are endothelium and myoendothelial gap junction independent. Gap junctions between VSMCs, gap junctional communication, and Cx43 phosphorylation at Ser 262 play important roles in the vascular effects of AVP. RhoA/ROCK, but not PKC, is involved in this process. Copyright © 2017 the American Physiological Society.
1980-03-01
interpreting/smoothing data containing a significant percentage of gross errors, and thus is ideally suited for applications in automated image ... analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of the paper describes the application of
NASA Astrophysics Data System (ADS)
Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei
2014-01-01
A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in the residual correction. The modified Cressman weight performs better than the original Cressman weight. A method for estimating the error covariance matrix of the gridded field is provided.
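The classical Cressman weight underlying the residual-correction step can be sketched as follows; the paper's modified weight is not reproduced here, and all function names are illustrative:

```python
import numpy as np

def cressman_weight(r, R):
    """Classical Cressman weight: w = (R^2 - r^2) / (R^2 + r^2)
    for distances r within the influence radius R, zero outside."""
    r = np.asarray(r, float)
    w = (R**2 - r**2) / (R**2 + r**2)
    return np.where(r <= R, w, 0.0)

def residual_correction(grid_xy, obs_xy, obs_residual, R):
    """Spread observation residuals onto grid points as a
    Cressman-weighted average (sketch of the correction step)."""
    corrections = np.zeros(len(grid_xy))
    for i, g in enumerate(grid_xy):
        d = np.linalg.norm(obs_xy - g, axis=1)   # distance to each obs
        w = cressman_weight(d, R)
        if w.sum() > 0:
            corrections[i] = np.sum(w * obs_residual) / w.sum()
    return corrections
```

The corrected field is then the spline trend surface plus these weighted residuals at each grid point.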
Systematic study of error sources in supersonic skin-friction balance measurements
NASA Technical Reports Server (NTRS)
Allen, J. M.
1976-01-01
An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.
Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
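The design-order versus observed-order comparison discussed here is typically made by computing an observed convergence order from errors on two grids. A minimal sketch, with an illustrative function name:

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed convergence order p from discretization errors on two
    grids related by the given refinement ratio: e ~ C h^p implies
    p = log(e_coarse / e_fine) / log(refinement)."""
    return math.log(e_coarse / e_fine) / math.log(refinement)
```

On irregular grids, applying this to truncation errors and to discretization errors can give different orders, which is the point of the study.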
Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance
2013-11-01
Simulations reproduce the experiments described in reference ARL-TR-2219, 2000. The tile gap is found to increase the DoP as compared to one-tile targets. The next step will be to run simulations on narrower and wider gap sizes. Smoothed-particle hydrodynamics (SPH) was used for all parts, with an SPH size of 0.40 mm, totaling 278k particles.
Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation
Barbero, Sergio; Thibos, Larry N.
2007-01-01
Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
Bridging the gap between high and low acceleration for planetary escape
NASA Astrophysics Data System (ADS)
Indrikis, Janis; Preble, Jeffrey C.
With the exception of the often time-consuming analysis by numerical optimization, no single orbit-transfer analysis technique exists that can be applied over a wide range of accelerations. Using the simple planetary escape (parabolic trajectory) mission, some of the more common techniques are considered as the limiting bastions at the high and the extremely low acceleration regimes. The brachistochrone, the minimum-time-of-flight path, is proposed as the technique to bridge the gap between the high and low acceleration regions, providing a smooth bridge over the entire acceleration spectrum. A smooth and continuous velocity requirement is established for the planetary escape mission. By using these results, it becomes possible to determine the effect of finite accelerations on mission performance and target propulsion and power system designs which are consistent with a desired mission objective.
NASA Technical Reports Server (NTRS)
Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.
2013-01-01
Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods-the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT-revealed that CACAO achieved better performances for smoothing AVHRR time series characterized by high level of noise and frequent missing observations. The resulting smoothed time series captures well the vegetation dynamics and shows no gaps as compared to the 50-60% of still missing data after AG or SG reconstructions. Results of simulation experiments as well as confrontation with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than AG and SG methods for phenology extraction.
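The core CACAO idea of shifting and scaling the seasonal climatology to match actual observations can be sketched as a grid search over the temporal shift with a least-squares scale at each candidate shift. This is a simplified illustration of the concept, not the published algorithm; all names are hypothetical:

```python
import numpy as np

def fit_shift_scale(clim, obs, max_shift=30):
    """Fit a temporal shift (samples) and multiplicative scale so that
    scale * clim(t - shift) best matches the valid observations in a
    least-squares sense. `clim` and `obs` are same-length daily arrays
    (the climatology is treated as periodic); NaN marks missing obs."""
    valid = ~np.isnan(obs)
    best = (0, 1.0, np.inf)                      # (shift, scale, sse)
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(clim, s)
        x, y = shifted[valid], obs[valid]
        scale = np.dot(x, y) / np.dot(x, x)      # closed-form LS scale
        sse = np.sum((y - scale * x) ** 2)
        if sse < best[2]:
            best = (s, scale, sse)
    return best[:2]
```

The fitted shift quantifies the phenological timing anomaly and the scale the inter-annual amplitude anomaly, relative to the average climatology.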
Effects of anchoring and arc structure on the control authority of a rail plasma actuator
NASA Astrophysics Data System (ADS)
Choi, Young-Joon; Gray, Miles; Sirohi, Jayant; Raja, Laxminarayan L.
2017-09-01
Experiments were conducted on a rail plasma actuator (RailPAc) with different electrode cross sections (rails or rods) to assess methods to improve the actuation authority, defined as the impulse generated for a given electrical input. The arc was characterized with electrical measurements and high-speed images, while impulse measurements quantified the actuation authority. A RailPAc power supply capable of delivering ∼1 kA of current at ∼100 V was connected to rod electrodes (free-floating with circular cross-section) and rail electrodes (flush-mounted in a flat plate with rectangular cross-section). High-speed images show that the rail electrodes cause the arc to anchor itself to the anode electrode and transit in discrete jumps, while rod electrodes permit the arc to transit smoothly without anchoring. The impulse measurements reveal that the anchoring reduces the actuation authority by ∼21% compared to a smooth transit, and the effect of anchoring can be suppressed by reducing the gap between the rails to 2 mm. The study further demonstrates that if a smooth transit is achieved, the control authority can be increased with a larger gap and larger arc current. In conclusion, the actuation authority of a RailPAc can be maximized by carefully choosing a gap width that prevents anchoring. Further study is warranted to increase the RailPAc actuation authority by introducing multiple turns of wires beneath the RailPAc to augment the induced magnetic field.
ERIC Educational Resources Information Center
Ferrando, Pere J.
2004-01-01
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
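The common-gain idea can be sketched as follows, with a simple moving average standing in for the paper's cubic-spline smoothing filter (all names are illustrative):

```python
import numpy as np

def common_gain(spectra, window=7):
    """Derive a common gain curve from a set of retrieved reflectance
    spectra: smooth each spectrum (moving average here, a stand-in for
    the cubic-spline smoother), then average the ratio smoothed/raw
    over all spectra. `spectra` has shape (n_pixels, n_bands)."""
    kernel = np.ones(window) / window
    ratios = []
    for s in spectra:
        smooth = np.convolve(s, kernel, mode='same')
        ratios.append(smooth / s)                # per-band gain for this pixel
    return np.mean(ratios, axis=0)

def apply_gain(spectra, gain):
    """Apply the common gain curve to every spectrum in the scene."""
    return spectra * gain
```

Because the gain is derived once and applied scene-wide, the systematic band-to-band artifacts are removed while pixel-to-pixel reflectance differences are preserved.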
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
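A slip inversion with the classic Laplacian smoothness constraint (LSC) that the proposed ASC builds on minimizes ||Gm - d||^2 + lambda^2 ||Rm||^2. A minimal 1-D numpy sketch of this regularization (the adaptive weighting of the ASC itself is not reproduced):

```python
import numpy as np

def laplacian_1d(n):
    """Second-difference (discrete Laplacian) regularization matrix R."""
    R = np.zeros((n - 2, n))
    for i in range(n - 2):
        R[i, i:i + 3] = [1.0, -2.0, 1.0]
    return R

def regularized_inversion(G, d, lam, R):
    """Solve min ||G m - d||^2 + lam^2 ||R m||^2 via the normal
    equations (G^T G + lam^2 R^T R) m = G^T d."""
    A = G.T @ G + lam**2 * (R.T @ R)
    return np.linalg.solve(A, G.T @ d)
```

The regularization parameter `lam` is what the paper selects via Helmert variance component estimation rather than generalized cross-validation or the mean squared error criterion.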
Effect of electrical coupling on ionic current and synaptic potential measurements.
Rabbah, Pascale; Golowasch, Jorge; Nadim, Farzan
2005-07-01
Recent studies have found electrical coupling to be more ubiquitous than previously thought, and coupling through gap junctions is known to play a crucial role in neuronal function and network output. In particular, current spread through gap junctions may affect the activation of voltage-dependent conductances as well as chemical synaptic release. Using voltage-clamp recordings of two strongly electrically coupled neurons of the lobster stomatogastric ganglion and conductance-based models of these neurons, we identified effects of electrical coupling on the measurement of leak and voltage-gated outward currents, as well as synaptic potentials. Experimental measurements showed that both leak and voltage-gated outward currents are recruited by gap junctions from neurons coupled to the clamped cell. Nevertheless, in spite of the strong coupling between these neurons, the errors made in estimating voltage-gated conductance parameters were relatively minor (<10%). Thus in many cases isolation of coupled neurons may not be required if a small degree of measurement error of the voltage-gated currents or the synaptic potentials is acceptable. Modeling results show, however, that such errors may be as high as 20% if the gap-junction position is near the recording site or as high as 90% when measuring smaller voltage-gated ionic currents. Paradoxically, improved space clamp increases the errors arising from electrical coupling because voltage control across gap junctions is poor for even the highest realistic coupling conductances. Furthermore, the common procedure of leak subtraction can add an extra error to the conductance measurement, the sign of which depends on the maximal conductance.
Enhanced p122RhoGAP/DLC-1 Expression Can Be a Cause of Coronary Spasm
Kinjo, Takahiko; Tanaka, Makoto; Osanai, Tomohiro; Shibutani, Shuji; Narita, Ikuyo; Tanno, Tomohiro; Nishizaki, Kimitaka; Ichikawa, Hiroaki; Kimura, Yoshihiro; Ishida, Yuji; Yokota, Takashi; Shimada, Michiko; Homma, Yoshimi; Tomita, Hirofumi; Okumura, Ken
2015-01-01
Background We previously showed that phospholipase C (PLC)-δ1 activity was enhanced by 3-fold in patients with coronary spastic angina (CSA). We also reported that p122Rho GTPase-activating protein/deleted in liver cancer-1 (p122RhoGAP/DLC-1) protein, which was discovered as a PLC-δ1 stimulator, was upregulated in CSA patients. We tested the hypothesis that p122RhoGAP/DLC-1 overexpression causes coronary spasm. Methods and Results We generated transgenic (TG) mice with vascular smooth muscle (VSM)-specific overexpression of p122RhoGAP/DLC-1. The gene and protein expressions of p122RhoGAP/DLC-1 were markedly increased in the aorta of homozygous TG mice. Staining with anti-p122RhoGAP/DLC-1 in the coronary artery was stronger in TG than in WT mice. PLC activities in the plasma membrane fraction and the whole cell were enhanced by 1.43 and 2.38 times, respectively, in cultured aortic vascular smooth muscle cells from homozygous TG compared with those from WT mice. Immediately after ergometrine injection, ST-segment elevation was observed in 1 of 7 WT (14%), 6 of 7 heterozygous TG (84%), and 7 of 7 homozygous TG mice (100%) (p<0.05, WT versus TGs). In the isolated Langendorff hearts, coronary perfusion pressure was increased after ergometrine in TG, but not in WT mice, despite the similar response to prostaglandin F2α between TG and WT mice (n = 5). Focal narrowing of the coronary artery after ergometrine was documented only in TG mice. Conclusions VSM-specific overexpression of p122RhoGAP/DLC-1 enhanced coronary vasomotility after ergometrine injection in mice, which is relevant to human CSA. PMID:26624289
Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data
NASA Astrophysics Data System (ADS)
Dias, Nelson Luís
2018-01-01
A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
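The basic smoothed-spectrum estimate, block-averaging the periodogram over windows of contiguous frequencies, can be sketched as follows; this is a simplified illustration without the bias corrections and Parseval constraint developed in the paper:

```python
import numpy as np

def periodogram(x, dt=1.0):
    """One-sided periodogram of a real series (sketch, no tapering)."""
    n = len(x)
    X = np.fft.rfft(x - np.mean(x))
    return (dt / n) * np.abs(X) ** 2

def smoothed_spectrum(S, width=8):
    """Smoothed spectral estimate: block-average the periodogram over
    windows of `width` contiguous frequencies. The random error of
    each block drops roughly as 1/sqrt(width)."""
    m = (len(S) // width) * width                # drop the ragged tail
    return S[:m].reshape(-1, width).mean(axis=1)
```

Using constant-width windows trades frequency resolution for variance uniformly; the paper's results concern how this trade-off and the associated bias behave across the spectrum.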
Development of an adaptive hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.
Tuning support vector machines for minimax and Neyman-Pearson classification.
Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D
2010-10-01
This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
Antisaccade and smooth pursuit eye movements in healthy subjects receiving sertraline and lorazepam.
Green, J F; King, D J; Trimble, K M
2000-03-01
Patients suffering from some psychiatric and neurological disorders demonstrate abnormally high levels of saccadic distractibility when carrying out the antisaccade task. This has been particularly thoroughly demonstrated in patients with schizophrenia. A large body of evidence has been accumulated from studies of patients which suggests that such eye movement abnormalities may arise from frontal lobe dysfunction. The psychopharmacology of saccadic distractibility is less well understood, but is relevant both to interpreting patient studies and to establishing the neurological basis of their findings. Twenty healthy subjects received lorazepam 0.5 mg, 1 mg and 2 mg, sertraline 50 mg and placebo in a balanced, repeated measures study design. Antisaccade, no-saccade, visually guided saccade and smooth pursuit tasks were carried out and the effects of practice and drugs measured. Lorazepam increased direction errors in the antisaccade and no-saccade tasks in a dose-dependent manner. Sertraline had no effect on these measures. Correlation showed a statistically significant, but rather weak, association between direction errors and smooth pursuit measures. Practice was shown to have a powerful effect on antisaccade direction errors. This study supports our previous work by confirming that lorazepam reliably worsens saccadic distractibility, in contrast to other psychotropic drugs such as sertraline and chlorpromazine. Our results also suggest that other studies in this field, particularly those using parallel groups design, should take account of practice effects.
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2016-01-01
Two new soft computing models, namely genetic programming (GP) and genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program determined as the best model and five equations obtained in prior research. The GP model with the lowest error values (root mean square error (RMSE) of 0.0515) had the best function compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement.
Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon
2017-02-24
The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error of 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of the low-cost SF receivers comparable to that of DF receivers.
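The classical Hatch filter recursion that the proposed filter extends can be sketched as follows; the divergence-free variant additionally removes the ionospheric term using SBAS corrections, which is not shown here:

```python
def hatch_filter(pseudoranges, carrier_phases, window=100):
    """Classical Hatch filter: smooth noisy code measurements with
    carrier-phase deltas.
        R_s(k) = (1/N) R(k) + ((N-1)/N) (R_s(k-1) + phi(k) - phi(k-1))
    with N growing epoch by epoch up to `window` (the smoothing
    constant). Pseudoranges and carrier phases are in metres."""
    smoothed = [pseudoranges[0]]
    for k in range(1, len(pseudoranges)):
        N = min(k + 1, window)
        # Propagate the previous smoothed range with the carrier delta,
        # then blend in the new (noisy) code measurement.
        predicted = smoothed[-1] + carrier_phases[k] - carrier_phases[k - 1]
        smoothed.append(pseudoranges[k] / N + predicted * (N - 1) / N)
    return smoothed
```

The ionosphere affects code and carrier with opposite signs, so as `window` grows this recursion accumulates an ionospheric divergence for low-lying satellites, which motivates the divergence-free design and the optimal smoothing constant studied in the paper.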
Near-Stall Modal Disturbances Within a Transonic Compressor Rotor
2011-12-01
[Garbled MATLAB fragment: Kulite sensor positions are interpolated (kulite.position.interp, abbreviated kpi), smoothed with a 'rloess' filter over a 0.05 span (kulite.position.smooth), and the position vector is corrected by the number of blade passings (kulite.position.correct = kpi * blade.number).]
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
Human Research Program Space Human Factors Engineering (SHFE) Standing Review Panel (SRP)
NASA Technical Reports Server (NTRS)
Wichansky, Anna; Badler, Norman; Butler, Keith; Cummings, Mary; DeLucia, Patricia; Endsley, Mica; Scholtz, Jean
2009-01-01
The Space Human Factors Engineering (SHFE) Standing Review Panel (SRP) evaluated 22 gaps and 39 tasks in the three risk areas assigned to the SHFE Project. The area where tasks were best designed to close the gaps and the fewest gaps were left out was the Risk of Reduced Safety and Efficiency due to Inadequate Design of Vehicle, Environment, Tools or Equipment. The areas where there were more issues with gaps and tasks, including poor or inadequate fit of tasks to gaps and missing gaps, were Risk of Errors due to Poor Task Design and Risk of Error due to Inadequate Information. One risk, the Risk of Errors due to Inappropriate Levels of Trust in Automation, should be added. If astronauts trust automation too much in areas where it should not be trusted, but rather tempered with human judgment and decision making, they will incur errors. Conversely, if they do not trust automation when it should be trusted, as in cases where it can sense aspects of the environment such as radiation levels or distances in space, they will also incur errors. This will be a larger risk when astronauts are less able to rely on human mission control experts and are out of touch, far away, and on their own. The SRP also identified 11 new gaps and five new tasks. Although the SRP had an extremely large quantity of reading material prior to and during the meeting, we still did not feel we had an overview of the activities and tasks the astronauts would be performing in exploration missions. Without a detailed task analysis and taxonomy of activities the humans would be engaged in, we felt it was impossible to know whether the gaps and tasks were really sufficient to ensure human safety, performance, and comfort in the exploration missions. The SRP had difficulty evaluating many of the gaps and tasks that were not as quantitative as those related to concrete physical danger such as excessive noise and vibration.
Often the research tasks for cognitive risks that accompany poor task or information design addressed only part, but not all, of the gaps they were programmed to fill. In fact the tasks outlined will not close the gap but only scratch the surface in many cases. In other cases, the gap was written too broadly, and really should be restated in a more constrained way that can be addressed by a well-organized and complementary set of tasks. In many cases, the research results should be turned into guidelines for design. However, it was not clear whether the researchers or another group would construct and deliver these guidelines.
NASA Technical Reports Server (NTRS)
Long, E. R., Jr.
1986-01-01
Effects of specimen preparation on measured values of an acrylic's electromagnetic properties at X-band microwave frequencies, TE sub 1,0 mode, utilizing an automatic network analyzer have been studied. For 1 percent or less error, a gap between the specimen edge and the 0.901-in. wall of the specimen holder was the most significant parameter. The gap had to be less than 0.002 in. The thickness variation and alignment errors in the direction parallel to the 0.901-in. wall were equally second most significant and had to be less than 1 degree. Errors in the measurement of the thickness were third most significant. They had to be less than 3 percent. The following parameters caused errors of 1 percent or less: ratios of specimen-holder thicknesses of more than 15 percent, gaps between the specimen edge and the 0.401-in. wall less than 0.045 in., position errors less than 15 percent, surface roughness, thickness variation in the direction parallel to the 0.401-in. wall less than 35 percent, and specimen alignment in the direction parallel to the 0.401-in. wall less than 5 degrees.
Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method
NASA Astrophysics Data System (ADS)
Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2009-10-01
Spin contamination errors of a broken-symmetry (BS) method in optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For this purpose, the optimized geometry by the BS method is compared with that of an approximate spin projection (AP) method. The difference between the BS and the AP methods is about 10-20° in the HCH angle. In order to examine the basis-set dependency of the spin contamination error, calculated results by STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis sets, but the tendencies of each method are classified into two types. Calculated energy splitting values between the triplet and the singlet states (ST gap) indicate that the contamination of the stable triplet state makes the BS singlet solution stable and the ST gap becomes small. The order of the spin contamination error in the ST gap is estimated to be 10^-1 eV.
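The approximate spin projection used for comparison removes the high-spin contaminant from the broken-symmetry energy using the computed <S^2> expectation values. A minimal numerical sketch of this standard correction (function names are illustrative; units are whatever the input energies use):

```python
def ap_singlet_energy(e_bs, e_hs, s2_bs, s2_hs):
    """Approximate spin projection (AP) of a broken-symmetry (BS)
    singlet energy, using the high-spin (HS, here triplet) state:
        E_AP = E_BS + a * (E_BS - E_HS),
        a    = <S^2>_BS / (<S^2>_HS - <S^2>_BS).
    An uncontaminated singlet (<S^2>_BS = 0) is left unchanged."""
    alpha = s2_bs / (s2_hs - s2_bs)
    return e_bs + alpha * (e_bs - e_hs)

def ap_st_gap(e_bs, e_hs, s2_bs, s2_hs):
    """Singlet-triplet gap after spin projection: E_HS - E_AP(singlet)."""
    return e_hs - ap_singlet_energy(e_bs, e_hs, s2_bs, s2_hs)
```

Since the contaminated BS singlet is artificially stabilized toward the triplet, projection lowers the singlet energy further and widens the apparent ST gap, consistent with the ~10^-1 eV error quoted above.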
NASA Astrophysics Data System (ADS)
Zeng, Lu-Chuan; Yao, Jen-Chih
2006-09-01
Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.
Computational provenance in hydrologic science: a snow mapping example.
Dozier, Jeff; Frew, James
2009-03-13
Computational provenance--a record of the antecedents and processing history of digital information--is key to properly documenting computer-based scientific research. To support investigations in hydrologic science, we produce the daily fractional snow-covered area from NASA's moderate-resolution imaging spectroradiometer (MODIS). From the MODIS reflectance data in seven wavelengths, we estimate the fraction of each 500 m pixel that snow covers. The daily products have data gaps and errors because of cloud cover and sensor viewing geometry, so we interpolate and smooth to produce our best estimate of the daily snow cover. To manage the data, we have developed the Earth System Science Server (ES3), a software environment for data-intensive Earth science, with unique capabilities for automatically and transparently capturing and managing the provenance of arbitrary computations. Transparent acquisition avoids the scientists having to express their computations in specific languages or schemas in order for provenance to be acquired and maintained. ES3 models provenance as relationships between processes and their input and output files. It is particularly suited to capturing the provenance of an evolving algorithm whose components span multiple languages and execution environments.
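The ES3 idea of modeling provenance as relationships between processes and their input/output files can be sketched in a few lines. All class and method names below are illustrative, not the actual ES3 API; the file names echo the MODIS snow-mapping pipeline described above.

```python
# Minimal sketch of file/process provenance tracking: each process run is
# recorded with its input and output files, and lineage queries walk the
# resulting graph upstream.

class ProvenanceStore:
    def __init__(self):
        self.edges = []  # (process, role, filename), role in {"input", "output"}

    def record(self, process, inputs, outputs):
        for f in inputs:
            self.edges.append((process, "input", f))
        for f in outputs:
            self.edges.append((process, "output", f))

    def lineage(self, filename):
        """Return all upstream files that `filename` transitively depends on."""
        producers = [p for (p, role, f) in self.edges
                     if role == "output" and f == filename]
        ancestors = set()
        for p in producers:
            for (q, role, f) in self.edges:
                if q == p and role == "input":
                    ancestors.add(f)
                    ancestors |= self.lineage(f)
        return ancestors

store = ProvenanceStore()
store.record("swath_to_grid", ["mod09.hdf"], ["reflectance.tif"])
store.record("snow_fraction", ["reflectance.tif"], ["fsca_daily.tif"])
store.record("interp_smooth", ["fsca_daily.tif"], ["fsca_best.tif"])
```

Transparent capture, as described above, would populate such a store automatically from observed process executions rather than explicit `record` calls.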
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Foot Structure in Japanese Speech Errors: Normal vs. Pathological
ERIC Educational Resources Information Center
Miyakoda, Haruko
2008-01-01
Although many studies of speech errors have been presented in the literature, most have focused on errors occurring at either the segmental or feature level. Few, if any, studies have dealt with the prosodic structure of errors. This paper aims to fill this gap by taking up the issue of prosodic structure in Japanese speech errors, with a focus on…
Investigation of the influence of a step change in surface roughness on turbulent heat transfer
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Coleman, Hugh W.; Taylor, J. Keith; Hosni, M. H.
1991-01-01
The use of smooth heat flux gages on the otherwise very rough SSME fuel pump turbine blades is studied. To gain insight into the behavior of such installations, fluid mechanics and heat transfer data were collected and are reported for a turbulent boundary layer over a surface with a step change from a rough surface to a smooth surface. The first 0.9 m length of the flat plate test surface was roughened with 1.27 mm hemispheres in a staggered, uniform array spaced 2 base diameters apart. The remaining 1.5 m length was smooth. The effect of the alignment of the smooth surface with respect to the rough surface was also studied by conducting experiments with the smooth surface aligned with the bases or alternatively with the crests of the roughness elements. Stanton number distributions, skin friction distributions, and boundary layer profiles of temperature and velocity are reported and are compared to previous data for both all-rough and all-smooth wall cases. The experiments show that the step change from rough to smooth has a dramatic effect on the convective heat transfer. It is concluded that use of smooth heat flux gages on otherwise rough surfaces could cause large errors.
Airway mechanics and methods used to visualize smooth muscle dynamics in vitro.
Cooper, P R; McParland, B E; Mitchell, H W; Noble, P B; Politi, A Z; Ressmeyer, A R; West, A R
2009-10-01
Contraction of airway smooth muscle (ASM) is regulated by the physiological, structural and mechanical environment in the lung. We review two in vitro techniques, lung slices and airway segment preparations, that enable in situ ASM contraction and airway narrowing to be visualized. Lung slices and airway segment approaches bridge a gap between cell culture and isolated ASM, and whole animal studies. Imaging techniques enable key upstream events involved in airway narrowing, such as ASM cell signalling and structural and mechanical events impinging on ASM, to be investigated.
Puzzles in modern biology. V. Why are genomes overwired?
Frank, Steven A
2017-01-01
Many factors affect eukaryotic gene expression. Transcription factors, histone codes, DNA folding, and noncoding RNA modulate expression. Those factors interact in large, broadly connected regulatory control networks. An engineer following classical principles of control theory would design a simpler regulatory network. Why are genomes overwired? Neutrality or enhanced robustness may lead to the accumulation of additional factors that complicate network architecture. Dynamics progresses like a ratchet. New factors get added. Genomes adapt to the additional complexity. The newly added factors can no longer be removed without significant loss of fitness. Alternatively, highly wired genomes may be more malleable. In large networks, most genomic variants tend to have a relatively small effect on gene expression and trait values. Many small effects lead to a smooth gradient, in which traits may change steadily with respect to underlying regulatory changes. A smooth gradient may provide a continuous path from a starting point up to the highest peak of performance. A potential path of increasing performance promotes adaptability and learning. Genomes gain by the inductive process of natural selection, a trial and error learning algorithm that discovers general solutions for adapting to environmental challenge. Similarly, deeply and densely connected computational networks gain by various inductive trial and error learning procedures, in which the networks learn to reduce the errors in sequential trials. Overwiring alters the geometry of induction by smoothing the gradient along the inductive pathways of improving performance. Those overwiring benefits for induction apply to both natural biological networks and artificial deep learning networks.
NASA Astrophysics Data System (ADS)
Özcan, Abdullah; Rivière-Lorphèvre, Edouard; Ducobu, François
2018-05-01
In part manufacturing, an efficient process should minimize the cycle time needed to reach the prescribed quality on the part. In order to optimize it, the machining time needs to be as low as possible while the quality meets the requirements. For a 2D milling toolpath defined by sharp corners, the programmed feedrate differs from the reachable feedrate due to kinematic limits of the motor drives. This phenomenon leads to a loss of productivity. Smoothing the toolpath significantly reduces the machining time, but the dimensional accuracy should not be neglected. Therefore, a way to address the problem of optimizing a toolpath in part manufacturing is to take into account both the manufacturing time and the part quality. On one hand, maximizing the feedrate will minimize the manufacturing time; on the other hand, the maximum contour error needs to be kept under a threshold to meet the quality requirements. This paper presents a method to optimize sharp corner smoothing using B-spline curves by adjusting the control points defining the curve. The objective function used in the optimization process is based on the contour error and the difference between the programmed feedrate and an estimate of the reachable feedrate. The estimate of the reachable feedrate is based on geometrical information. Some simulation results are presented in the paper and the machining times are compared in each case.
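The trade-off between contour error and corner smoothing can be illustrated with a quadratic Bezier curve, a simple stand-in for the B-spline construction in the paper: the sharp corner P1 is replaced by a curve from P0 to P2, and the contour error is the distance from the corner to the curve.

```python
# Hypothetical illustration of corner smoothing: the corner P1 is cut by a
# quadratic Bezier curve, whose maximum deviation from the corner occurs at
# the midpoint parameter t = 0.5 (by symmetry of the control polygon).

def bezier2(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve via de Casteljau's algorithm."""
    a = [(1 - t) * p0[i] + t * p1[i] for i in range(2)]
    b = [(1 - t) * p1[i] + t * p2[i] for i in range(2)]
    return [(1 - t) * a[i] + t * b[i] for i in range(2)]

def contour_error(p0, p1, p2):
    """Distance from the corner P1 to the smoothed curve at t = 0.5."""
    m = bezier2(p0, p1, p2, 0.5)
    return ((m[0] - p1[0]) ** 2 + (m[1] - p1[1]) ** 2) ** 0.5

# 90-degree corner at the origin, transition points 2 units along each leg:
err = contour_error([-2.0, 0.0], [0.0, 0.0], [0.0, 2.0])
```

Moving the transition points closer to the corner shrinks the contour error but forces a sharper curvature, and hence a lower reachable feedrate; that is exactly the tension the paper's objective function balances.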
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electrical agents, giving a picture of electricity demand in the future. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
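The grey model GM(1,1) mentioned above is well suited to short series because it fits only two parameters. The sketch below is the textbook formulation (accumulated generation, mean sequence, least squares for the development coefficient), not code from the paper, and the demand figures are illustrative.

```python
# GM(1,1) grey model: fit on a short series, then forecast ahead.
import numpy as np

def gm11_fit_predict(x0, steps_ahead=1):
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                    # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])         # mean generating sequence
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development / grey input coefficients

    def x1_hat(k):                        # time-response function
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    k = np.arange(n + steps_ahead)
    x1h = x1_hat(k)
    # Restore the original-scale series by differencing the accumulated fit:
    return np.concatenate([[x1h[0]], np.diff(x1h)])

demand = [100.0, 110.0, 121.0, 133.1]     # illustrative data, ~10% annual growth
fitted = gm11_fit_predict(demand, steps_ahead=1)  # 4 fitted values + 1 forecast
```

On near-exponential data like this, the fitted values track the observations to well under 1%, which is why GM(1,1) can outperform moving-average and Holt models when history is very short.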
Combined fabrication technique for high-precision aspheric optical windows
NASA Astrophysics Data System (ADS)
Hu, Hao; Song, Ci; Xie, Xuhui
2016-07-01
Specifications for optical components are becoming more and more stringent as the performance of modern optical systems improves. These strict requirements involve not only low-spatial-frequency surface accuracy and mid- and high-spatial-frequency surface errors, but also surface smoothness and so on. This presentation mainly focuses on the fabrication process for a square aspheric window, which combines accurate grinding, magnetorheological finishing (MRF) and smoothing polishing (SP). In order to remove the low-spatial-frequency surface errors and subsurface defects after accurate grinding, the deterministic polishing method MRF, with its highly convergent and stable material removal rate, is applied. Then the SP technology with a pseudo-random path is adopted to eliminate the mid- and high-spatial-frequency surface ripples and high slope errors that are a weakness of MRF. Additionally, the coordinate measurement method and interferometry are combined in different phases. An acid-etching method and ion beam figuring (IBF) are also investigated for observing and reducing the subsurface defects. Actual fabrication results indicate that the combined fabrication technique can lead to high machining efficiency in manufacturing high-precision, high-quality optical aspheric windows.
Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi
2017-08-01
The aim of this paper was to study the correlation between crude palm oil (CPO) price, selected vegetable oil prices (such as soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), crude oil and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and soybean oil price, and also between CPO price and crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and direction accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
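The four assessment criteria named above have standard definitions, sketched here in plain Python; DA is taken as the fraction of periods in which the predicted direction of change matches the actual direction.

```python
# Standard forecast-accuracy metrics: RMSE, MAE, MAPE, and direction accuracy.
import math

def rmse(actual, pred):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    return sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def direction_accuracy(actual, pred):
    # Count how often the sign of the period-to-period change agrees.
    hits = sum(1 for i in range(1, len(actual))
               if (actual[i] - actual[i - 1]) * (pred[i] - pred[i - 1]) > 0)
    return hits / (len(actual) - 1)

actual = [100.0, 102.0, 101.0, 105.0]   # illustrative price series
pred   = [101.0, 103.0, 102.0, 104.0]
```

Note that MAPE is undefined when an actual value is zero, and DA deliberately ignores the size of errors; a model can have the best DA and the worst RMSE, which is why the paper reports all four.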
Bellman's GAP--a language and compiler for dynamic programming in sequence analysis.
Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert
2013-03-01
Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error-prone and tedious. Bellman's GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. In Bellman's GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive with carefully hand-crafted implementations. This article introduces the Bellman's GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman's GAP as an implementation platform for 'real-world' bioinformatics tools. Bellman's GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics.
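The separation of grammar and evaluation algebra can be illustrated in Python rather than GAP-L: below, the same alignment recurrence is run under two algebras, one computing minimum edit distance, one counting all alignments, without touching the recurrence itself. This is a toy sketch of the idea, not Bellman's GAP code.

```python
# One recurrence (the "grammar"), two evaluation algebras (step, choice, empty).
from functools import lru_cache

def align(x, y, step, choice, empty):
    @lru_cache(maxsize=None)
    def go(i, j):
        if i == 0 and j == 0:
            return empty
        cands = []
        if i > 0 and j > 0:                      # match / mismatch
            cands.append(step(go(i - 1, j - 1), 0 if x[i - 1] == y[j - 1] else 1))
        if i > 0:                                # deletion
            cands.append(step(go(i - 1, j), 1))
        if j > 0:                                # insertion
            cands.append(step(go(i, j - 1), 1))
        return choice(cands)
    return go(len(x), len(y))

# Scoring algebra: sum edit costs, choose the minimum -> edit distance.
dist = align("kitten", "sitting", step=lambda v, c: v + c, choice=min, empty=0)

# Counting algebra: every step is one way, choices add up -> number of alignments.
count = align("ab", "cd", step=lambda v, c: v, choice=sum, empty=1)
```

Products of algebras (e.g. score paired with count) compose the same way, which is the re-use the abstract highlights.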
1985-04-01
AFHRL-TR-84-64, Air Force: Equipercentile Test Equating: The Effects of Presmoothing and Postsmoothing. The methods examined include a combined or compound presmoother and a presmoothing method based on a particular model of test scores. Of the seven methods of presmoothing the score distributions, one takes the differences between the smoothed and unsmoothed distributions, smooths that sequence of differences by the same compound method, and finally adds the smoothed differences back.
Epinephrine Auto-Injector Versus Drawn Up Epinephrine for Anaphylaxis Management: A Scoping Review.
Chime, Nnenna O; Riese, Victoria G; Scherzer, Daniel J; Perretta, Julianne S; McNamara, LeAnn; Rosen, Michael A; Hunt, Elizabeth A
2017-08-01
Anaphylaxis is a life-threatening event. Most clinical symptoms of anaphylaxis can be reversed by prompt intramuscular administration of epinephrine using an auto-injector or epinephrine drawn up in a syringe; delays and errors may be fatal. The aim of this scoping review is to identify and compare errors associated with use of epinephrine drawn up in a syringe versus epinephrine auto-injectors in order to assist hospitals as they choose which approach minimizes risk of adverse events for their patients. PubMed, Embase, CINAHL, Web of Science, and the Cochrane Library were searched using terms agreed a priori. We reviewed human and simulation studies reporting errors associated with the use of epinephrine in anaphylaxis. There were multiple screening stages with evolving feedback. Each study was independently assessed by two reviewers for eligibility. Data were extracted using an instrument modeled on the Zaza et al. instrument and grouped into themes. Three main themes were noted: 1) ergonomics, 2) dosing errors, and 3) errors due to route of administration. Significant knowledge gaps in the operation of epinephrine auto-injectors among healthcare providers, patients, and caregivers were identified. For epinephrine in a syringe, there were more frequent reports of incorrect dosing and erroneous IV administration with associated adverse cardiac events. For the epinephrine auto-injector, unintentional administration to the digit was an error reported on multiple occasions. This scoping review highlights knowledge gaps and a diverse set of errors regardless of the approach to epinephrine preparation during management of anaphylaxis. There are more potentially life-threatening errors reported for epinephrine drawn up in a syringe than with the auto-injectors. The impact of these knowledge gaps and potentially fatal errors on patient outcomes, cost, and quality of care is worthy of further investigation.
Puleo, J.A.; Mouraenko, O.; Hanes, D.M.
2004-01-01
Six one-dimensional-vertical wave bottom boundary layer models are analyzed, based on different methods for estimating the turbulent eddy viscosity: laminar, linear, parabolic, k (one-equation turbulence closure), k−ε (two-equation turbulence closure), and k−ω (two-equation turbulence closure). Resultant velocity profiles, bed shear stresses, and turbulent kinetic energy are compared to laboratory data of oscillatory flow over smooth and rough beds. Bed shear stress estimates for the smooth bed case were most closely predicted by the k−ω model. Normalized errors between model predictions and measurements of velocity profiles over the entire computational domain, collected at 15° intervals for one-half a wave cycle, show that overall the linear model was most accurate. The least accurate were the laminar and k−ε models. Normalized errors between model predictions and turbulence kinetic energy profiles showed that the k−ω model was most accurate. Based on these findings, when the smallest overall velocity profile prediction error is required, the processing requirements and error analysis suggest that the linear eddy viscosity model is adequate. However, if accurate estimates of bed shear stress and TKE are required, then, of the models tested, the k−ω model should be used.
Does a better model yield a better argument? An info-gap analysis
NASA Astrophysics Data System (ADS)
Ben-Haim, Yakov
2017-04-01
Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.
Preparing and Analyzing Iced Airfoils
NASA Technical Reports Server (NTRS)
Vickerman, Mary B.; Baez, Marivell; Braun, Donald C.; Cotton, Barbara J.; Choo, Yung K.; Coroneos, Rula M.; Pennline, James A.; Hackenberg, Anthony W.; Schilling, Herbert W.; Slater, John W.;
2004-01-01
SmaggIce version 1.2 is a computer program for preparing and analyzing iced airfoils. It includes interactive tools for (1) measuring ice-shape characteristics, (2) controlled smoothing of ice shapes, (3) curve discretization, (4) generation of artificial ice shapes, and (5) detection and correction of input errors. Measurements of ice shapes are essential for establishing relationships between characteristics of ice and effects of ice on airfoil performance. The shape-smoothing tool helps prepare ice shapes for use with already available grid-generation and computational-fluid-dynamics software for studying the aerodynamic effects of smoothed ice on airfoils. The artificial ice-shape generation tool supports parametric studies, since ice-shape parameters can easily be controlled with the artificial ice. In such studies, artificial shapes generated by this program can supplement simulated ice obtained from icing research tunnels and real ice obtained from flight tests under icing weather conditions. SmaggIce also automatically detects geometry errors, such as tangles or duplicate points in the boundary, which may be introduced by digitization, and provides tools to correct these. By use of interactive tools included in SmaggIce version 1.2, one can easily characterize ice shapes and prepare iced airfoils for grid generation and flow simulations.
Enhancement of flow measurements using fluid-dynamic constraints
NASA Astrophysics Data System (ADS)
Egger, H.; Seitz, T.; Tropea, C.
2017-09-01
Novel experimental modalities acquire spatially resolved velocity measurements for steady-state and transient flows which are of interest for engineering and biological applications. One of the drawbacks of such high resolution velocity data is their susceptibility to measurement errors. In this paper, we propose a novel filtering strategy that enhances the noisy measurements to reconstruct smooth, divergence-free velocity and corresponding pressure fields which together approximately comply with a prescribed flow model. The main step in our approach consists of the appropriate use of the velocity measurements in the design of a linearized flow model which can be shown to be well-posed and consistent with the true velocity and pressure fields up to measurement and modeling errors. The reconstruction procedure is then formulated as an optimal control problem for this linearized flow model. The resulting filter has analyzable smoothing and approximation properties. We briefly discuss the discretization of the approach by finite element methods and comment on the efficient solution by iterative methods. The capability of the proposed filter to significantly reduce data noise is demonstrated by numerical tests, including the application to experimental data. In addition, we compare with other methods such as smoothing and solenoidal filtering.
Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine
NASA Astrophysics Data System (ADS)
White, John
Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared to other leading methodologies for the same problem. It is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension can represent a number of different things, such as different energy spectra, another spatial index, or possibly a temporal dimension. Empirically, simulation studies show that this method is promising in reducing error. A further development removes background noise in the image. This removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real-world problems are given throughout. Photon-based images are common in astronomical imaging due to the collection of different types of energy such as X-rays. Applications to real astronomical images are given; these consist of X-ray images from the Chandra X-ray Observatory satellite. Diagnostic medicine uses many types of imaging, such as magnetic resonance imaging and computed tomography, that can also benefit from smoothing techniques. Reducing the amount of radiation a patient receives makes images noisier, but this can be mitigated through the use of image smoothing techniques. Both types of images represent potential real-world uses for these methods.
The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review
NASA Astrophysics Data System (ADS)
Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei
2013-09-01
The continuous, over two-decade data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not impacted by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure and the data collection and pretreatment of the MFRSR are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods, namely the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method, are outlined. The reason that none of these methods fits all situations is that each assumes that some properties, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), effective size of aerosol particles, or the Ångström coefficient, are invariant over time. These assumptions are not universally valid, and some rarely hold. In practice, daily calibration factors derived from these methods should be smoothed to restrain error.
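The standard Langley method named above rests on the Beer-Lambert law: on a stable day, ln(V) is linear in airmass m, and extrapolating the fit to m = 0 recovers the top-of-atmosphere calibration value V0. A minimal sketch with synthetic data (the tau and V0 values are illustrative):

```python
# Langley regression: fit ln(V) = ln(V0) - tau * m and extrapolate to m = 0.
import numpy as np

def langley_v0(airmass, voltage):
    slope, intercept = np.polyfit(airmass, np.log(voltage), 1)
    return np.exp(intercept), -slope   # calibration value V0, total optical depth

# Synthetic clear morning: tau = 0.2, true V0 = 1.5 (instrument units).
m = np.linspace(2.0, 6.0, 20)          # airmass sweep as the sun rises
V = 1.5 * np.exp(-0.2 * m)             # Beer-Lambert signal
v0, tau = langley_v0(m, V)
```

The assumption that tau stays constant during the sweep is precisely the invariance condition the abstract says rarely holds, which is why the derived daily calibration factors are smoothed over time.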
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. The field is characterized by hundreds of densely packed wells, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for developing and testing deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm, a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints.
A set of linear equations is formulated for each spatial point as functions of the deformation velocities during the time intervals spanned by the interferograms and a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (>10) that yield an essentially linear time-series solution. The factor can be tuned to take into account a priori knowledge of the deformation non-linearity. The difference between the constrained time-series solution and the unconstrained time series can be interpreted as a combination of tropospheric path delay and baseline error. Spatial smoothing of the residual phase leads to an improved atmospheric model that can be fed back into the model and iterated. Our analysis shows non-linear deformation related to changes in oil extraction, as well as local height corrections improving on the low-resolution 3 arc-sec SRTM DEM.
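The core of such an inversion can be written compactly: interval velocities are solved from interferogram phases by least squares, with weighted finite-difference rows implementing the acceleration penalty controlled by γ. This toy version omits the DEM height-correction column and uses unit time intervals, so it only illustrates the smoothing-constraint mechanics, not the full algorithm above.

```python
# One-pixel time-series inversion: phases constrain sums of interval
# velocities; extra rows scaled by gamma penalize velocity changes.
import numpy as np

def invert_velocities(pairs, phases, n_intervals, gamma):
    A = np.zeros((len(pairs), n_intervals))
    for row, (i, j) in enumerate(pairs):   # phase(i,j) = sum of v over intervals i..j-1
        A[row, i:j] = 1.0
    D = np.zeros((n_intervals - 1, n_intervals))
    for r in range(n_intervals - 1):       # finite-difference (acceleration) constraint
        D[r, r], D[r, r + 1] = -gamma, gamma
    lhs = np.vstack([A, D])
    rhs = np.concatenate([phases, np.zeros(n_intervals - 1)])
    return np.linalg.lstsq(lhs, rhs, rcond=None)[0]

true_v = np.array([1.0, 2.0, 3.0, 2.0])    # non-linear deformation history
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3)]  # acquisition index pairs
phases = np.array([true_v[i:j].sum() for i, j in pairs])
v = invert_velocities(pairs, phases, n_intervals=4, gamma=0.01)
```

With a small γ the data dominate and the non-linear history is recovered; pushing γ large would flatten `v` toward a constant (linear-deformation) solution, which is the tuning trade-off described in the abstract.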
Gap filling strategies and error in estimating annual soil respiration
USDA-ARS?s Scientific Manuscript database
Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...
NASA Astrophysics Data System (ADS)
Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em
2017-12-01
Starting with the asymptotic expansion of the error equation of the shifted Grünwald-Letnikov formula, we derive a new modified weighted shifted Grünwald-Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove the linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions up to certain accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations with similar resolution.
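For context, a common form of the weighted shifted Grünwald-Letnikov approximation that such correction terms modify is the following (standard background, written from the usual formulation rather than this paper's notation):

```latex
% Shifted Gruenwald-Letnikov operator with shift p, and its second-order
% weighted combination over shifts p and q with weights lambda_1, lambda_2.
A_{h,p}^{\alpha} u(x) = h^{-\alpha} \sum_{k=0}^{\infty}
    g_k^{(\alpha)}\, u\bigl(x - (k - p)h\bigr),
\qquad g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k},
\\[4pt]
D^{\alpha} u(x) \approx \lambda_1 A_{h,p}^{\alpha} u(x)
                      + \lambda_2 A_{h,q}^{\alpha} u(x),
\qquad \lambda_1 = \frac{\alpha - 2q}{2(p - q)},
\quad  \lambda_2 = \frac{2p - \alpha}{2(p - q)}.
```

The second-order accuracy of this combination holds for sufficiently smooth u; the paper's correction terms are what restore the convergence order for non-smooth solutions.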
Feedback attitude sliding mode regulation control of spacecraft using arm motion
NASA Astrophysics Data System (ADS)
Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu
2013-09-01
The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention from both engineering and academic fields. Most solutions of the manipulator's motion-tracking problem achieve only asymptotic stabilization, so these controllers cannot realize precise attitude regulation in the presence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with a zero transient process. Due to the switching effects of the variable-structure controller, once the tracking error reaches the designed hyperplane, it is restricted to this plane permanently even in the presence of external disturbances, so precise attitude regulation can be achieved. Furthermore, taking non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used in place of sign functions to smooth the control torques. The relations between the upper bounds of the tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of a free-floating space manipulator are established and simulations are conducted. The results show that the spacecraft's attitude can be regulated to the desired position with a steady-state error of 0.0002 rad. In addition, the joint tracking trajectory is smooth, and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation using arm motion and improves the precision of spacecraft attitude regulation.
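The boundary-layer substitution mentioned above can be sketched in a few lines (illustrative gains, not the paper's spacecraft model): inside |s| ≤ φ the control is linear in the sliding variable, outside it saturates, which smooths the torque and suppresses chattering.

```python
import numpy as np

def sat(s, phi):
    """Saturation function: linear for |s| <= phi, clipped to +/-1 outside --
    the smooth stand-in for sign(s) in the sliding-mode law."""
    return np.clip(s / phi, -1.0, 1.0)

def smc_torque(s, k=2.0, phi=0.05):
    """Toy switching term of a sliding-mode controller: -k * sat(s / phi).
    k and phi are illustrative; phi sets the boundary-layer thickness."""
    return -k * sat(s, phi)
```

Shrinking φ recovers the discontinuous sign function (and its chattering); enlarging it trades tracking precision for a smoother control input, which is exactly the compromise the abstract describes.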
Effects of Piecewise Spatial Smoothing in 4-D SPECT Reconstruction
NASA Astrophysics Data System (ADS)
Qi, Wenyuan; Yang, Yongyi; King, Michael A.
2014-02-01
In nuclear medicine, cardiac gated SPECT images are known to suffer from significantly increased noise owing to limited data counts. Consequently, spatial (and temporal) smoothing has been indispensable for suppressing the noise artifacts in SPECT reconstruction. However, we recently demonstrated that the benefit of spatial processing in motion-compensated reconstruction of gated SPECT (aka 4-D) could be outweighed by its adverse effects on the myocardium, which included degraded wall motion and perfusion defect detectability. In this work, we investigate whether we can alleviate these adverse effects by exploiting an alternative spatial smoothing prior in 4-D based on image total variation (TV). A TV-based prior is known to induce piecewise smoothing which can preserve edge features (such as boundaries of the heart wall) in reconstruction. However, it is not clear whether such a property would necessarily be beneficial for improving the accuracy of the myocardium in 4-D reconstruction. In particular, it is unknown whether it would adversely affect the detectability of perfusion defects that are small in size or low in contrast. In our evaluation study, we first use Monte Carlo simulated imaging with the 4-D NURBS-based cardiac-torso (NCAT) phantom, wherein the ground truth is known for quantitative comparison. We evaluated the accuracy of the reconstructed myocardium using a number of metrics, including regional and overall accuracy of the myocardium, accuracy of the phase activity curve (PAC) of the LV wall for wall motion, uniformity and spatial resolution of the LV wall, and detectability of perfusion defects using a channelized Hotelling observer (CHO). For lesion detection, we simulated perfusion defects with different sizes and contrast levels, with the focus being on perfusion defects that are subtle. As a preliminary demonstration, we also tested on three sets of clinical acquisitions.
From the quantitative results, it was demonstrated that TV smoothing could further reduce the error level in the myocardium in 4-D reconstruction along with motion-compensated temporal smoothing. In contrast to quadratic spatial smoothing, TV smoothing could reduce the noise level in the LV at a faster pace than the increase in the bias level, thereby achieving a net decrease in the error level. In particular, at the same noise level, TV smoothing could reduce the bias by about 30% compared to quadratic smoothing. Moreover, the CHO results indicate that TV could also improve the lesion detectability even when the lesion is small. The PAC results show that, at the same noise level, TV smoothing achieved lower temporal bias, which is also consistent with the improved spatial resolution of the LV in reconstruction. The improvement in blurring effects by TV was also observed in the clinical images.
Optimization of a shrink process with X-Y CD bias on hole patterns
NASA Astrophysics Data System (ADS)
Koike, Kyohei; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Oyama, Kenichi; Yaegashi, Hidetami
2017-03-01
Gridded design rules [1] are a major approach for configuring logic circuits with 193-nm immersion lithography. In grid-based scaling, 10-nm-order line-and-space patterns can be made using multiple patterning techniques such as self-aligned multiple patterning (SAMP) and litho-etch-litho-etch (LELE) [2][3][4]. On the other hand, the line-cut process suffers from several error parameters, such as pattern defects, placement error, roughness, and X-Y CD bias, as the scale decreases. We attempted to cure hole-pattern roughness using additional processes such as line smoothing [5]. Each smoothing process showed a different effect; as a result, the CDx shrink amount was smaller than CDy without an additional process. In this paper, we report a comparison of pattern controllability between EUV and 193-nm immersion lithography, and we discuss an optimal method for handling CD bias on hole patterns.
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
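The two comparison criteria named above can be computed as follows (standard definitions assumed; the paper's exact percentage conventions may differ slightly):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((a - f) / a))

def rmspe(actual, forecast):
    """Root mean square percent error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.sqrt(np.mean(((a - f) / a) ** 2))
```

MAPE penalizes all relative errors linearly, while RMSPE weights large relative errors more heavily, so reporting both (as the paper does) guards against a model that is accurate on average but occasionally far off.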
The Higgs transverse momentum distribution at NNLL and its theoretical errors
Neill, Duff; Rothstein, Ira Z.; Vaidya, Varun
2015-12-15
In this letter, we present the NNLL-NNLO transverse momentum Higgs distribution arising from gluon fusion. In the regime p⊥ ≪ m_h we include the resummation of the large logs at next-to-next-to-leading order and then match onto the α_s² fixed-order result near p⊥ ~ m_h. By utilizing the rapidity renormalization group (RRG) we are able to smoothly match between the resummed, small-p⊥ regime and the fixed-order regime. We give a detailed discussion of the scale dependence of the result, including an analysis of the rapidity scale dependence. Our central value differs from previous results, in the transition region as well as the tail, by an amount which is outside the error band. Lastly, this difference is due to the fact that the RRG profile allows us to smoothly turn off the resummation.
``Particle traps'' at planet gap edges in disks: effects of grain growth and fragmentation
NASA Astrophysics Data System (ADS)
Gonzalez, J.-F.; Laibe, G.; Maddison, S. T.; Pinte, C.; Ménard, F.
2014-12-01
We model the dust evolution in protoplanetary disks (PPD) with 3D, Smoothed Particle Hydrodynamics (SPH), two-phase (gas+dust) hydrodynamical simulations. The gas+dust dynamics, in which aerodynamic drag leads to the vertical settling and radial migration of grains, is treated self-consistently. In a previous work, we characterized the spatial distribution of non-growing dust grains of different sizes in a disk containing a gap-opening planet and investigated the gap's detectability with ALMA. Here we take into account the effects of grain growth and fragmentation and study their impact on the distribution of solids in the disk. We show that rapid grain growth in the "particle traps" at the edges of planet gaps is strongly affected by fragmentation. We discuss the consequences for ALMA and NOEMA observations.
Mechanism of Contact between a Droplet and an Atomically Smooth Substrate
NASA Astrophysics Data System (ADS)
Lo, Hau Yung; Liu, Yuan; Xu, Lei
2017-04-01
When a droplet gently lands on an atomically smooth substrate, it will most likely contact the underlying surface in about 0.1 s. However, theoretical estimation from fluid mechanics predicts a contact time of 10-100 s. What causes this large discrepancy, and how does nature speed up contact by 2 orders of magnitude? To probe this fundamental question, we prepare atomically smooth substrates by either coating a liquid film on glass or using a freshly cleaved mica surface, and visualize the droplet contact dynamics with 30-nm resolution. Interestingly, we discover two distinct speed-up approaches: (1) droplet skidding due to even minute perturbations breaks rotational symmetry and produces early contact at the thinnest gap location, and (2) for the unperturbed situation with rotational symmetry, a previously unnoticed boundary flow around only 0.1 mm /s expedites air drainage by over 1 order of magnitude. Together, these two mechanisms universally explain general contact phenomena on smooth substrates. The fundamental discoveries shed new light on contact and drainage research.
Does rat granulation tissue maturation involve gap junction communications?
Au, Katherine; Ehrlich, H Paul
2007-07-01
Wound healing, a coordinated process, proceeds by sequential changes in cell differentiation and terminates with the deposition of a new connective tissue matrix, a scar. Initially, there is the migratory fibroblast, followed by the proliferative fibroblast, then the synthetic fibroblast, which transforms into the myofibroblast, and finally the apoptotic fibroblast. Gap junction intercellular communications are proposed to coordinate the stringent control of fibroblast phenotypic changes. Does added oleamide, a natural fatty acid that blocks gap junction intercellular communications, alter the phenotypic progression of wound fibroblasts? Pairs of polyvinyl alcohol sponges attached to Alzet pumps, which constantly pumped either oleamide or vehicle solvent, were implanted subcutaneously into three rats. On day 8, implants were harvested and evaluated histologically and biochemically. The capsule of oleamide-treated sponge contained closely packed fibroblasts with little connective tissue between them. The birefringence intensity of that connective tissue was reduced, indicating a reduced density of collagen fiber bundles. Myofibroblasts, identified immunohistologically by alpha-smooth muscle actin-stained stress fibers, were reduced in oleamide-treated implants. Western blot analysis showing less alpha-smooth muscle actin confirmed the reduced density of myofibroblasts. It appears that oleamide retards the progression of wound repair, where less connective tissue is deposited, the collagen is less organized, and the appearance of myofibroblasts is impaired. These findings support the hypothesis that gap junction intercellular communications between wound fibroblasts in granulation tissue play a role in the progression of repair and the maturation of granulation tissue into scar.
Closing the Achievement Gap as Addressed in Student Support Programs
ERIC Educational Resources Information Center
Gordon, Vincent Hoover Adams, Jr.
2012-01-01
This research will focus on three components: (1) factors contributing to the achievement gap, (2) common errors made by policy makers with regard to school reform, and (3) recommendations to educators, policy makers, and parents on closing the achievement gap through results-based student support programs. Examples of each of the three components…
NASA Astrophysics Data System (ADS)
Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan
2017-10-01
An optimized method to calculate error correction capability of tool influence function (TIF) in certain polishing conditions will be proposed based on smoothing spectral function. The basic mathematical model for this method will be established in theory. A set of polishing experimental data with rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate error correction capability of TIF for different spatial frequency errors in certain polishing conditions. The comparative analysis with previous method shows that the optimized method is simpler in form and can get the same accuracy results with less calculating time in contrast to previous method.
Mapping GRACE Accelerometer Error
NASA Astrophysics Data System (ADS)
Sakumura, C.; Harvey, N.; McCullough, C. M.; Bandikova, T.; Kruizinga, G. L. H.
2017-12-01
After more than fifteen years in orbit, instrument noise, and accelerometer noise in particular, remains one of the limiting error sources for the NASA/DLR Gravity Recovery and Climate Experiment mission. The recent V03 Level-1 reprocessing campaign used a Kalman filter approach to produce a high fidelity, smooth attitude solution fusing star camera and angular acceleration data. This process provided an unprecedented method for analysis and error estimation of each instrument. The accelerometer exhibited signal aliasing, differential scale factors between electrode plates, and magnetic effects. By applying the noise model developed for the angular acceleration data to the linear measurements, we explore the magnitude and geophysical pattern of gravity field error due to the electrostatic accelerometer.
An opening criterion for dust gaps in protoplanetary discs
NASA Astrophysics Data System (ADS)
Dipierro, Giovanni; Laibe, Guillaume
2017-08-01
We aim to understand under which conditions a low-mass planet can open a gap in viscous dusty protoplanetary discs. For this purpose, we extend the theory of dust radial drift to include the contribution from the tides of an embedded planet and from the gas viscous forces. From this formalism, we derive (I) a grain-size-dependent criterion for dust gap opening in discs, (II) an estimate of the location of the outer edge of the dust gap and (III) an estimate of the minimum Stokes number above which low-mass planets are able to carve gaps that appear only in the dust disc. These analytical estimates are particularly helpful to appraise the minimum mass of a hypothetical planet carving gaps in discs observed at long wavelengths and high resolution. We validate the theory against 3D smoothed particle hydrodynamics simulations of planet-disc interaction in a broad range of dusty protoplanetary discs. We find a remarkable agreement between the theoretical model and the numerical experiments.
Convergence Rates for Multivariate Smoothing Spline Functions.
1982-10-01
Cox
Technical Summary Report #2437
Given data z_i = g(t_i) + ε_i, 1 ≤ i ≤ n, where g is the unknown function, the t_i are known d-dimensional variables in a domain Ω, and the ε_i are i.i.d. random errors, the smoothing spline estimate g_n is defined to be the …
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1977-01-01
The frequently used rule specifying the relationship between a mean gravity anomaly in a block whose side length is θ degrees and a spherical harmonic representation of these data to degree l̄ is examined in light of the smoothing parameter used by Pellinen (1966). It is found that if the smoothing parameter is not considered, mean anomalies computed from potential coefficients can be in error by about 30% of the rms anomaly value. It is suggested that the above-mentioned rule be considered only a crude approximation.
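For reference, Pellinen's smoothing parameter is usually written as a degree-dependent damping factor applied to the anomaly degree components; one standard form for a spherical cap of radius $\psi_0$ is sketched below (conventions differ slightly across the geodesy literature, so treat this as an assumption rather than the abstract's exact expression):

```latex
% Pellinen smoothing factor beta_n for averaging over a cap of radius psi_0:
\overline{\Delta g} = \sum_{n} \beta_n \, \Delta g_n ,
\qquad
\beta_n = \frac{P_{n-1}(\cos\psi_0) - P_{n+1}(\cos\psi_0)}
               {(2n+1)\,\bigl(1 - \cos\psi_0\bigr)} ,
```

where $P_n$ are Legendre polynomials and $\Delta g_n$ is the degree-$n$ part of the anomaly. Setting $\beta_n = 1$ (i.e., ignoring the smoothing) when relating block means to a truncated harmonic series is the omission whose effect the abstract quantifies at roughly 30% of the rms anomaly.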
Proximal-distal differences in movement smoothness reflect differences in biomechanics.
Salmond, Layne H; Davidson, Andrew D; Charles, Steven K
2017-03-01
Smoothness is a hallmark of healthy movement. Past research indicates that smoothness may be a side product of a control strategy that minimizes error. However, this is not the only reason for smooth movements. Our musculoskeletal system itself contributes to movement smoothness: the mechanical impedance (inertia, damping, and stiffness) of our limbs and joints resists sudden change, resulting in a natural smoothing effect. How the biomechanics and neural control interact to result in an observed level of smoothness is not clear. The purpose of this study is to (1) characterize the smoothness of wrist rotations, (2) compare it with the smoothness of planar shoulder-elbow (reaching) movements, and (3) determine the cause of observed differences in smoothness. Ten healthy subjects performed wrist and reaching movements involving different targets, directions, and speeds. We found wrist movements to be significantly less smooth than reaching movements and to vary in smoothness with movement direction. To identify the causes underlying these observations, we tested a number of hypotheses involving differences in bandwidth, signal-dependent noise, speed, impedance anisotropy, and movement duration. Our simulations revealed that proximal-distal differences in smoothness reflect proximal-distal differences in biomechanics: the greater impedance of the shoulder-elbow filters neural noise more than the wrist. In contrast, differences in signal-dependent noise and speed were not sufficiently large to recreate the observed differences in smoothness. We also found that the variation in wrist movement smoothness with direction appears to be caused by, or at least correlated with, differences in movement duration, not impedance anisotropy. NEW & NOTEWORTHY This article presents the first thorough characterization of the smoothness of wrist rotations (flexion-extension and radial-ulnar deviation) and comparison with the smoothness of reaching (shoulder-elbow) movements.
We found wrist rotations to be significantly less smooth than reaching movements and determined that this difference reflects proximal-distal differences in biomechanics: the greater impedance (inertia, damping, stiffness) of the shoulder-elbow filters noise in the command signal more than the impedance of the wrist. Copyright © 2017 the American Physiological Society.
Recognizing and managing errors of cognitive underspecification.
Duthie, Elizabeth A
2014-03-01
James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch occurs in bridging that gap with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management is dependent upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.
Mind the gap: The impact of missing data on the calculation of phytoplankton phenology metrics
NASA Astrophysics Data System (ADS)
Cole, Harriet; Henson, Stephanie; Martin, Adrian; Yool, Andrew
2012-08-01
Annual phytoplankton blooms are key events in marine ecosystems and interannual variability in bloom timing has important implications for carbon export and the marine food web. The degree of match or mismatch between the timing of phytoplankton and zooplankton annual cycles may impact larval survival with knock-on effects at higher trophic levels. Interannual variability in phytoplankton bloom timing may also be used to monitor changes in the pelagic ecosystem that are either naturally or anthropogenically forced. Seasonality metrics that use satellite ocean color data have been developed to quantify the timing of phenological events which allow for objective comparisons between different regions and over long periods of time. However, satellite data sets are subject to frequent gaps due to clouds and atmospheric aerosols, or persistent data gaps in winter due to low sun angle. Here we quantify the impact of these gaps on determining the start and peak timing of phytoplankton blooms. We use the NASA Ocean Biogeochemical Model that assimilates SeaWiFS data as a gap-free time series and derive an empirical relationship between the percentage of missing data and error in the phenology metric. Applied globally, we find that the majority of subpolar regions have typical errors of 30 days for the bloom initiation date and 15 days for the peak date. The errors introduced by intermittent data must be taken into account in phenological studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, X; Yang, F
Purpose: Knowing the MLC leaf positioning error over the course of treatment would be valuable for treatment planning, QA design, and patient safety. The objective of the current study was to quantify the MLC positioning accuracy for VMAT delivery of head and neck treatment plans. Methods: A total of 837 MLC log files were collected from 14 head and neck cancer patients undergoing full-arc VMAT treatment on one Varian Trilogy machine. The actual and planned leaf gaps were extracted from the retrieved MLC log files. For a given patient, the leaf gap error percentage (LGEP), defined as the ratio of the actual leaf gap over the planned, was evaluated for each leaf pair at all the gantry angles recorded over the course of the treatment. Statistics describing the distribution of the largest LGEP (LLGEP) of the 60 leaf pairs, including the maximum, minimum, mean, kurtosis, and skewness, were evaluated. Results: For the 14 studied patients, the PTVs were located at the tonsil, base of tongue, larynx, supraglottis, nasal cavity, and thyroid gland, with volumes ranging from 72.0 cm³ to 602.0 cm³. The identified LLGEP differed between patients, ranging from 183.9% to 457.7% with a mean of 368.6%. For the majority of the patients, the LLGEP distributions peaked at non-zero positions and showed no obvious dependence on gantry rotation. Kurtosis and skewness, with minima/maxima of 66.6/217.9 and 6.5/12.6, respectively, suggested a relatively peaked and right-skewed leaf error distribution pattern. Conclusion: The results indicate that the pattern of MLC leaf gap error differs between patients with lesions located at similar anatomic sites. Understanding the systemic mechanisms underlying these observed error patterns necessitates examining more patient-specific plan parameters in a large patient cohort.
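A sketch of the leaf-gap statistics described above, using hypothetical arrays in place of parsed log files; the moment-based kurtosis here is non-excess (a normal distribution gives 3), which is one of several conventions and only an assumption about the study's definition.

```python
import numpy as np

def lgep_percent(actual_gap, planned_gap):
    """Leaf gap error percentage: actual gap over planned gap, in percent."""
    a = np.asarray(actual_gap, float)
    p = np.asarray(planned_gap, float)
    return 100.0 * a / p

def skewness(x):
    """Third standardized moment (population form)."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return np.mean(((x - m) / s) ** 3)

def kurtosis(x):
    """Fourth standardized moment, non-excess (normal -> 3)."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return np.mean(((x - m) / s) ** 4)
```

Applied per leaf pair across all recorded gantry angles, the largest LGEP per pair gives a 60-element sample per patient whose mean, extremes, skewness, and kurtosis reproduce the distribution-shape summary used in the abstract.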
Senadheera, Sevvandi; Bertrand, Paul P; Grayson, T Hilton; Leader, Leo; Murphy, Timothy V; Sandow, Shaun L
2013-01-01
In pregnancy, the vasculature of the uterus undergoes rapid remodelling to increase blood flow and maintain perfusion to the fetus. The present study determines the distribution and density of caveolae, transient receptor potential vanilloid type 4 channels (TRPV4) and myoendothelial gap junctions, and the relative contribution of related endothelium-dependent vasodilator components in uterine radial arteries of control virgin non-pregnant and 20-day late-pregnant rats. The hypothesis examined is that specific components of endothelium-dependent vasodilator mechanisms are altered in pregnancy-related uterine radial artery remodelling. Conventional and serial section electron microscopy were used to determine the morphological characteristics of uterine radial arteries from control and pregnant rats. TRPV4 distribution and expression was examined using conventional confocal immunohistochemistry, and the contribution of endothelial TRPV4, nitric oxide (NO) and endothelium-derived hyperpolarization (EDH)-type activity determined using pressure myography with pharmacological intervention. Data show outward hypertrophic remodelling occurs in uterine radial arteries in pregnancy. Further, caveolae density in radial artery endothelium and smooth muscle from pregnant rats was significantly increased by ∼94% and ∼31%, respectively, compared with control, whereas caveolae density did not differ in endothelium compared with smooth muscle from control. Caveolae density was significantly higher by ∼59% on the abluminal compared with the luminal surface of the endothelium in uterine radial artery of pregnant rats but did not differ at those surfaces in control. TRPV4 was present in endothelium and smooth muscle, but not associated with internal elastic lamina hole sites in radial arteries. TRPV4 fluorescence intensity was significantly increased in the endothelium and smooth muscle of radial artery of pregnant compared with control rats by ∼2.6- and 5.5-fold, respectively. 
The TRPV4 signal was significantly higher in the endothelium compared with the smooth muscle in radial artery of both control and pregnant rats, by ∼5.7- and 2.7-fold, respectively. Myoendothelial gap junction density was significantly decreased by ∼37% in radial artery from pregnant compared with control rats. Pressure myography with pharmacological intervention showed that NO contributes ∼80% and ∼30%, and the EDH-type component ∼20% and ∼70% of the total endothelium-dependent vasodilator response in radial arteries of control and pregnant rats, respectively. TRPV4 plays a functional role in radial arteries, with a greater contribution in those from pregnant rats. The correlative association of increased TRPV4 and caveolae density and role of EDH-type activity in uterine radial artery of pregnant rats is suggestive of their causal relationship. The decreased myoendothelial gap junction density and lack of TRPV4 density at such sites is consistent with their having an integral, albeit complex, interactive role in uterine vascular signalling and remodelling in pregnancy. PMID:24128141
Neuhaus, Jochen; Heinrich, Marco; Schwalenberg, Thilo; Stolzenburg, Jens-Uwe
2009-02-01
Human detrusor smooth muscle cells (hBSMCs) are coupled by connexin 43 (Cx43)-positive gap junctions to form functional syncytia. Gap junctional communication is likely necessary for synchronised detrusor contractions and is thought to be altered in voiding disturbances. Other authors have shown that the pleiotropic cytokine TGF-beta1 upregulates Cx43 expression in human aortic smooth muscle cells. In this study, we examined the effects of TGF-beta1 on Cx43 expression in cultured hBSMCs. hBSMC cultures, established from patients undergoing cystectomy, were treated with recombinant human TGF-beta1. Cx43 expression was then examined by Western blotting, real-time PCR, and immunocytochemistry. Dye-injection experiments were used to study the size of functional syncytia. Dye-coupling experiments revealed stable formation of functional syncytia in passaged cell cultures (P1-P4). Stimulation with TGF-beta1 led to a significant reduction of Cx43 immunoreactivity and coupling. Cx43 protein expression was significantly downregulated and Cx43 mRNA was only 30% of the control level. Interestingly, low-phosphorylation species of Cx43 were particularly affected. Our experiments demonstrated a significant downregulation of connexin 43 by TGF-beta1 in cultured hBSMCs. These findings support the view that TGF-beta1 is involved in the pathophysiology of urinary bladder dysfunction.
NASA Technical Reports Server (NTRS)
Ivey, Margaret F
1945-01-01
Flat-plate flaps with no wing cutouts and flaps having Clark Y sections with corresponding cutouts made in wing were tested for various flap deflections, chord-wise locations, and gaps between flaps and airfoil contour. The drag was slightly lower for wing with airfoil section flaps. Satisfactory aileron effectiveness was obtained with flap gap of 20% wing chord and flap-nose location of 80 percent wing chord behind leading edge. Airflow was smooth and buffeting negligible.
Smoluchowski Equation for Networks: Merger Induced Intermittent Giant Node Formation and Degree Gap
NASA Astrophysics Data System (ADS)
Goto, Hayato; Viegas, Eduardo; Jensen, Henrik Jeldtoft; Takayasu, Hideki; Takayasu, Misako
2018-06-01
The dynamical phase diagram of a network undergoing annihilation, creation, and coagulation of nodes is found to exhibit two regimes controlled by the combined effect of preferential attachment for initiator and target nodes during coagulation and for link assignment to new nodes. The first regime exhibits smooth dynamics and power law degree distributions. In the second regime, giant degree nodes and gaps in the degree distribution are formed intermittently. Data for the Japanese firm network in 1994 and 2014 suggests that this network is moving towards the intermittent switching region.
Leblanc, Fabien; Delaney, Conor P; Ellis, Clyde N; Neary, Paul C; Champagne, Bradley J; Senagore, Anthony J
2010-12-01
We hypothesized that simulator-generated metrics and intraoperative errors may be able to differentiate the technical differences between hand-assisted laparoscopic (HAL) and straight laparoscopic (SL) approaches. Thirty-eight trainees performed two laparoscopic sigmoid colectomies on an augmented reality simulator, randomly starting by a SL (n = 19) or HAL (n = 19) approach. Both approaches were compared according to simulator-generated metrics, and intraoperative errors were collected by faculty. Sixty-four percent of surgeons were experienced (>50 procedures) with open colon surgery. Fifty-five percent and 69% of surgeons were inexperienced (<10 procedures) with SL and HAL colon surgery, respectively. Time (P < 0.001), path length (P < 0.001), and smoothness (P < 0.001) were lower with the HAL approach. Operative times for sigmoid and splenic flexure mobilization and for the colorectal anastomosis were significantly shorter with the HAL approach. Time to control the vascular pedicle was similar between both approaches. Error rates were similar between both approaches. Operative time, path length, and smoothness correlated directly with the error rate for the HAL approach. In contrast, error rate inversely correlated with the operative time for the SL approach. A HAL approach for sigmoid colectomy accelerated colonic mobilization and anastomosis. The difference in correlation between both laparoscopic approaches and error rates suggests the need for different skills to perform the HAL and the SL sigmoid colectomy. These findings may explain the preference of some surgeons for a HAL approach early in the learning of laparoscopic colorectal surgery.
Characterizing Accuracy and Precision of Glucose Sensors and Meters
2014-01-01
There is a need for a method to describe the precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a “Glucose Precision Profile” showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test − comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve-smoothing procedures to minimize the effects of random sampling variability and to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD and %CV are subject to relatively large errors in the hypoglycemic range, due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in that range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
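The per-pair metrics described above are straightforward to compute; the sketch below illustrates them with synthetic meter data and a simple moving-average smooth. Both the data and the smoother are assumptions for demonstration, not the paper's exact procedure.

```python
import numpy as np

def precision_profile(test, comparator, window=20):
    """Per-pair deviation metrics and a smoothed ARD profile (illustrative)."""
    test = np.asarray(test, float)
    comp = np.asarray(comparator, float)
    deviation = test - comp                 # signed deviation per pair
    ad = np.abs(deviation)                  # absolute deviation
    ard = 100.0 * ad / comp                 # absolute relative deviation, %
    # Sort by comparator glucose and apply a moving-average smooth so ARD
    # can be read as a continuous function of glucose level.
    order = np.argsort(comp)
    kernel = np.ones(window) / window
    ard_smooth = np.convolve(ard[order], kernel, mode="same")
    return comp[order], ard_smooth, ard.mean()   # last value is MARD

# Hypothetical meter with ~5% CV against a comparator method.
rng = np.random.default_rng(0)
glucose = rng.uniform(50, 300, 500)                    # comparator, mg/dL
meter = glucose * (1 + rng.normal(0, 0.05, 500))       # test method
x, profile, mard = precision_profile(meter, glucose)
```

With a 5% CV error model, MARD lands near 4% (the mean of a folded normal), and the smoothed profile lets the error be read off at any glucose level rather than within a few fixed bins.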
The effect of bathymetric filtering on nearshore process model results
Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.
2009-01-01
Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that model-predicted wave height and flow had different sensitivities to variations in bathymetric resolution. Wave height predictions were most sensitive to the resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate-scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest-resolution simulation. The damage done by oversmoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
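The sensitivity test described above, progressively filtering the bathymetry and comparing against the full-resolution result, can be sketched in one dimension; the profile shape, sandbar, and filter scales below are illustrative assumptions rather than the study's grids.

```python
import numpy as np

def gaussian_smooth(z, sigma):
    """1-D Gaussian smoothing of a bathymetry profile (illustrative)."""
    n = int(4 * sigma) * 2 + 1
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    return np.convolve(z, k, mode="same")

# Synthetic barred profile: planar slope plus a sandbar bump.
x = np.linspace(0, 500, 1000)          # cross-shore distance, m
depth = 0.02 * x + 1.5 * np.exp(-((x - 200) / 30) ** 2)

# Progressively filtered versions lose the bar; error relative to the
# full-resolution profile grows with filter scale, mimicking the paper's test.
errors = [np.sqrt(np.mean((gaussian_smooth(depth, s) - depth) ** 2))
          for s in (2, 10, 50)]
```

The RMS error against the unfiltered profile rises monotonically with filter width, which is the signal the authors exploit to estimate the damage done by oversmoothing.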
Kizub, Igor V; Lakhkar, Anand; Dhagia, Vidhi; Joshi, Sachindra R; Jiang, Houli; Wolin, Michael S; Falck, John R; Koduru, Sreenivasulu Reddy; Errabelli, Ramu; Jacobs, Elizabeth R; Schwartzman, Michal L; Gupte, Sachin A
2016-04-15
In response to hypoxia, the pulmonary artery normally constricts to maintain optimal ventilation-perfusion matching in the lung, but chronic hypoxia leads to the development of pulmonary hypertension. The mechanisms of sustained hypoxic pulmonary vasoconstriction (HPV) remain unclear. The aim of this study was to determine the role of gap junctions (GJs) between smooth muscle cells (SMCs) in the sustained HPV development and involvement of arachidonic acid (AA) metabolites in GJ-mediated signaling. Vascular tone was measured in bovine intrapulmonary arteries (BIPAs) using isometric force measurement technique. Expression of contractile proteins was determined by Western blot. AA metabolites in the bath fluid were analyzed by mass spectrometry. Prolonged hypoxia elicited endothelium-independent sustained HPV in BIPAs. Inhibition of GJs by 18β-glycyrrhetinic acid (18β-GA) and heptanol, nonspecific blockers, and Gap-27, a specific blocker, decreased HPV in deendothelized BIPAs. The sustained HPV was not dependent on Ca(2+) entry but decreased by removal of Ca(2+) and by Rho-kinase inhibition with Y-27632. Furthermore, inhibition of GJs decreased smooth muscle myosin heavy chain (SM-MHC) expression and myosin light chain phosphorylation in BIPAs. Interestingly, inhibition of 15- and 20-hydroxyeicosatetraenoic acid (HETE) synthesis decreased HPV in deendothelized BIPAs. 15-HETE- and 20-HETE-stimulated constriction of BIPAs was inhibited by 18β-GA and Gap-27. Application of 15-HETE and 20-HETE to BIPAs increased SM-MHC expression, which was also suppressed by 18β-GA and by inhibitors of lipoxygenase and cytochrome P450 monooxygenases. More interestingly, 15,20-dihydroxyeicosatetraenoic acid and 20-OH-prostaglandin E2, novel derivatives of 20-HETE, were detected in tissue bath fluid and synthesis of these derivatives was almost completely abolished by 18β-GA. 
Taken together, our novel findings show that GJs between SMCs are involved in the sustained HPV in BIPAs, and 15-HETE and 20-HETE, through GJs, appear to mediate SM-MHC expression and contribute to the sustained HPV development. Copyright © 2016 the American Physiological Society.
On the alleged collisional origin of the Kirkwood Gaps [in asteroid belt]
NASA Technical Reports Server (NTRS)
Heppenheimer, T. A.
1975-01-01
This paper examines two proposed mechanisms whereby asteroidal collisions and close approaches may have given rise to the Kirkwood Gaps. The first hypothesis is that asteroids in near-resonant orbits have markedly increased collision probabilities and so are preferentially destroyed, or suffer decay in population density, within the resonance zones. A simple order-of-magnitude analysis shows that this hypothesis is untenable since it leads to conclusions which are either unrealistic or not in accord with present understanding of asteroidal physics. The second hypothesis is the Brouwer-Jefferys theory that collisions would smooth an asteroidal distribution function, as a function of Jacobi constant, thus forming resonance gaps. This hypothesis is examined by direct numerical integration of 50 asteroid orbits near the 2:1 resonance, with collisions simulated by random variables. No tendency to form a gap was observed.
Existence of the Stark-Wannier quantum resonances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacchetti, Andrea, E-mail: andrea.sacchetti@unimore.it
2014-12-15
In this paper, we prove the existence of the Stark-Wannier quantum resonances for one-dimensional Schrödinger operators with smooth periodic potential and small external homogeneous electric field. Such a result extends the existence result previously obtained in the case of periodic potentials with a finite number of open gaps.
A four-component model of the action potential in mouse detrusor smooth muscle cell
Padmakumar, Mithun; Brain, Keith L.; Young, John S.; Manchanda, Rohit
2018-01-01
Background and hypothesis: Detrusor smooth muscle cells (DSMCs) of the urinary bladder are electrically connected to one another via gap junctions and form a three-dimensional syncytium. DSMCs exhibit spontaneous electrical activity, including passive depolarizations and action potentials. The shapes of spontaneous action potentials (sAPs) observed from a single DSM cell can vary widely. The biophysical origins of this variability, and the precise components which contribute to the complex shapes observed, are not known. To address these questions, the basic components which constitute the sAPs were investigated. We hypothesized that linear combinations of scaled versions of these basic components can produce the sAP shapes observed in the syncytium. Methods and results: The basic components were identified as spontaneous evoked junction potentials (sEJP), native AP (nAP), slow after-hyperpolarization (sAHP) and very slow after-hyperpolarization (vsAHP). The experimental recordings were grouped into two sets: a training data set and a testing data set. The training set was used to estimate the components, and the test set to evaluate the efficiency of the estimated components. We found that a linear combination of the identified components, when appropriately amplified and time shifted, replicated various AP shapes to a high degree of similarity, as quantified by the root mean square error (RMSE) measure. Conclusions: We conclude that the four basic components—sEJP, nAP, sAHP, and vsAHP—identified and isolated in this work are necessary and sufficient to replicate all varieties of the sAPs recorded experimentally in DSMCs. This model has the potential to generate testable hypotheses that can help identify the physiological processes underlying various features of the sAPs. Further, this model also provides a means to classify the sAPs into various shape classes. PMID:29351282
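A minimal sketch of the fitting step follows, using hypothetical Gaussian-shaped stand-ins for the four components; the paper estimates the real waveforms from training recordings, so the shapes and weights here are assumptions.

```python
import numpy as np

# Hypothetical stand-ins for the four components (sEJP, nAP, sAHP, vsAHP).
t = np.linspace(0, 2, 400)  # seconds
sejp  = np.exp(-((t - 0.30) / 0.05) ** 2)
nap   = np.exp(-((t - 0.35) / 0.02) ** 2) - 0.4 * np.exp(-((t - 0.45) / 0.05) ** 2)
sahp  = -0.3 * np.exp(-((t - 0.60) / 0.15) ** 2)
vsahp = -0.1 * np.exp(-((t - 1.20) / 0.40) ** 2)
components = np.stack([sejp, nap, sahp, vsahp], axis=1)   # design matrix

# A "recorded" sAP built from a known linear combination plus noise.
true_w = np.array([0.8, 1.2, 0.9, 0.5])
rng = np.random.default_rng(1)
sap = components @ true_w + rng.normal(0, 0.01, t.size)

# Estimate the scaling weights by ordinary least squares; score with RMSE.
w_hat, *_ = np.linalg.lstsq(components, sap, rcond=None)
rmse = np.sqrt(np.mean((components @ w_hat - sap) ** 2))
```

With low measurement noise the least-squares weights recover the generating combination, and the RMSE settles near the noise floor, which is the sense in which the four components suffice to replicate a given sAP shape.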
Compensating for estimation smoothing in kriging
Olea, R.A.; Pawlowsky, Vera
1996-01-01
Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator: compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft2 slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well - better than ordinary kriging, but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. © 1996 International Association for Mathematical Geology.
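The compensation idea, modeling the smoothing from cross-validation pairs and then inverting it, can be sketched with a simple linear attenuation model. The synthetic data and the linear form are assumptions for illustration; the paper builds its model from actual cross-validation of the kriging estimator.

```python
import numpy as np

# Cross-validation pairs (estimate, truth) reveal the estimator's smoothing
# as an attenuation of variability around the mean, which we then invert.
rng = np.random.default_rng(2)
truth = rng.normal(100, 20, 300)                              # e.g. permeability
estimate = 100 + 0.6 * (truth - 100) + rng.normal(0, 4, 300)  # smoothed estimator

# Model the smoothing from cross-validation: truth ≈ a + b * estimate.
b, a = np.polyfit(estimate, truth, 1)

# Compensated estimates re-expand the attenuated variability.
compensated = a + b * estimate
```

The compensated surface's spread sits much closer to the true variability than the raw estimates do, at the cost of some point-wise accuracy, matching the intermediate behavior the abstract describes.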
Andrew D. Richardson; David Y. Hollinger
2007-01-01
Missing values in any data set create problems for researchers. The process by which missing values are replaced, and the data set is made complete, is generally referred to as imputation. Within the eddy flux community, the term "gap filling" is more commonly applied. A major challenge is that random errors in measured data result in uncertainty in the gap-...
Improving travel information products via robust estimation techniques : final report, March 2009.
DOT National Transportation Integrated Search
2009-03-01
Traffic-monitoring systems, such as those using loop detectors, are prone to coverage gaps arising from sensor noise, processing errors and transmission problems. Such gaps adversely affect the accuracy of Advanced Traveler Information Systems. Th...
Sherrer, Shanen M.; Taggart, David J.; Pack, Lindsey R.; Malik, Chanchal K.; Basu, Ashis K.; Suo, Zucai
2012-01-01
N-(deoxyguanosin-8-yl)-1-aminopyrene (dGAP) is the predominant nitro polyaromatic hydrocarbon product generated from the air pollutant 1-nitropyrene reacting with DNA. Previous studies have shown that dGAP induces genetic mutations in bacterial and mammalian cells. One potential source of these mutations is the error-prone bypass of dGAP lesions catalyzed by the low-fidelity Y-family DNA polymerases. To provide a comparative analysis of the mutagenic potential of the translesion DNA synthesis (TLS) of dGAP, we employed short oligonucleotide sequencing assays (SOSAs) with the model Y-family DNA polymerase from Sulfolobus solfataricus, DNA Polymerase IV (Dpo4), and the human Y-family DNA polymerases eta (hPolη), kappa (hPolκ), and iota (hPolι). Relative to undamaged DNA, all four enzymes generated far more mutations (base deletions, insertions, and substitutions) with a DNA template containing a site-specifically placed dGAP. Opposite dGAP and at an immediate downstream template position, the most frequent mutations made by the three human enzymes were base deletions, and the most frequent base substitutions were dAs for all enzymes. Based on the SOSA data, Dpo4 was the least error-prone Y-family DNA polymerase among the four enzymes during the TLS of dGAP. Among the three human Y-family enzymes, hPolκ made the fewest mutations at all template positions except opposite the lesion site. hPolκ was significantly less error-prone than hPolι and hPolη during the extension of dGAP bypass products. Interestingly, the most frequent mutations created by hPolι at all template positions were base deletions. Although hRev1, the fourth human Y-family enzyme, could not extend dGAP bypass products in our standing start assays, it preferentially incorporated dCTP opposite the bulky lesion. Collectively, these mutagenic profiles suggest that hPolκ and hRev1 are the most suitable human Y-family DNA polymerases to perform TLS of dGAP in humans. PMID:22917544
Porous plug for reducing orifice induced pressure error in airfoils
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B. (Inventor); Gloss, Blair B. (Inventor); Eves, John W. (Inventor); Stack, John P. (Inventor)
1988-01-01
A porous plug is provided for the reduction or elimination of positive error caused by the orifice during static pressure measurements of airfoils. The porous plug is press fitted into the orifice, thereby preventing the error caused either by fluid flow turning into the exposed orifice or by the fluid flow stagnating at the downstream edge of the orifice. In addition, the porous plug is made flush with the outer surface of the airfoil, by filing and polishing, to provide a smooth surface which alleviates the error caused by imperfections in the orifice. The porous plug is preferably made of sintered metal, which allows air to pass through the pores, so that the static pressure measurements can be made by remote transducers.
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, as it has the lowest pricing error.
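A minimal sketch of the LOOCV pricing-error criterion follows, assuming synthetic option prices and comparing only the two polynomial interpolants (the smoothing spline is omitted); the price curve is a made-up example, not market data.

```python
import numpy as np

def loocv_error(x, y, degree):
    """Leave-one-out CV pricing error for polynomial interpolation."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coeffs = np.polyfit(x[mask], y[mask], degree)   # fit without point i
        pred = np.polyval(coeffs, x[i])                  # price the held-out point
        errors.append((pred - y[i]) ** 2)
    return np.sqrt(np.mean(errors))

# Synthetic smile-like prices over centered moneyness (hypothetical data).
strikes = np.linspace(80, 120, 15)
m = strikes - 100.0
prices = 5 + 0.002 * m**2 + 1e-6 * m**4

err2 = loocv_error(m, prices, 2)   # second-order polynomial
err4 = loocv_error(m, prices, 4)   # fourth-order polynomial
```

Because the synthetic curve has genuine fourth-order curvature, the quartic interpolant achieves the lower LOOCV pricing error, which is the selection logic the study applies to real option chains.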
Role of retinal slip in the prediction of target motion during smooth and saccadic pursuit.
de Brouwer, S; Missal, M; Lefèvre, P
2001-08-01
Visual tracking of moving targets requires the combination of smooth pursuit eye movements with catch-up saccades. In primates, catch-up saccades usually take place only during pursuit initiation because pursuit gain is close to unity. This contrasts with the lower and more variable gain of smooth pursuit in cats, where smooth eye movements are intermingled with catch-up saccades during steady-state pursuit. In this paper, we studied in detail the role of retinal slip in the prediction of target motion during smooth and saccadic pursuit in the cat. We found that the typical pattern of pursuit in the cat was a combination of smooth eye movements with saccades. During smooth pursuit initiation, there was a correlation between peak eye acceleration and target velocity. During pursuit maintenance, eye velocity oscillated at approximately 3 Hz around a steady-state value. The average gain of smooth pursuit was approximately 0.5. Trained cats were able to continue pursuing in the absence of a visible target, suggesting a role of the prediction of future target motion in this species. The analysis of catch-up saccades showed that the smooth-pursuit motor command is added to the saccadic command during catch-up saccades and that both position error and retinal slip are taken into account in their programming. The influence of retinal slip on catch-up saccades showed that prediction about future target motion is used in the programming of catch-up saccades. Altogether, these results suggest that pursuit systems in primates and cats are qualitatively similar, with a lower average gain in the cat and that prediction affects both saccades and smooth eye movements during pursuit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juberg, D.R.; Loch-Caruso, R.
Elevated levels of DDT and other organochlorine pesticides have been associated with spontaneous abortion and preterm birth in several species, including humans. Despite the prevalence of organochlorine pesticides in the environment, a mechanistic basis for this association has not been explored. Furthermore, while DDT has been associated with inhibition of calcium ATPases, altered gap junctional communication and electrophysiological changes, all of which could affect the excitation-contraction process characteristic of smooth muscle, direct effects of DDT on uterine smooth muscle have not been reported. This study was initiated to assess the direct effects of o,p′-DDT (an estrogenic isomer present in the technical grade preparation) on pregnant rat uterine tissue.
An inverse method using toroidal mode data
Willis, C.
1986-01-01
The author presents a numerical method for calculating the density and S-wave velocity in the upper mantle of a spherically symmetric, non-rotating Earth which consists of a perfect elastic, isotropic material. The data comes from the periods of the toroidal oscillations. She tests the method on a smoothed version of model A. The error in the reconstruction is less than 1%. The effects of perturbations in the eigenvalues are studied and she finds that the final model is sensitive to errors in the data.
MISSING BLACK HOLES UNVEIL THE SUPERNOVA EXPLOSION MECHANISM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belczynski, Krzysztof; Wiktorowicz, Grzegorz; Fryer, Chris L.
2012-09-20
It is firmly established that the stellar mass distribution is smooth, covering the range 0.1-100 solar masses. It is to be expected that the masses of the ensuing compact remnants correlate with the masses of their progenitor stars, and thus it is generally thought that the remnant masses should be smoothly distributed from the lightest white dwarfs to the heaviest black holes (BHs). However, this intuitive prediction is not borne out by observed data. In the rapidly growing population of remnants with observationally determined masses, a striking mass gap has emerged at the boundary between neutron stars (NSs) and BHs. The heaviest NSs reach a maximum of two solar masses, while the lightest BHs are at least five solar masses. Over a decade after the discovery, the gap has become a significant challenge to our understanding of compact object formation. We offer new insights into the physical processes that bifurcate the formation of remnants into lower-mass NSs and heavier BHs. Combining the results of stellar modeling with hydrodynamic simulations of supernovae, we both explain the existence of the gap and also put stringent constraints on the inner workings of the supernova explosion mechanism. In particular, we show that core-collapse supernovae are launched within 100-200 ms of the initial stellar collapse, implying that the explosions are driven by instabilities with a rapid (10-20 ms) growth time. Alternatively, if future observations fill in the gap, this will be an indication that these instabilities develop over a longer (>200 ms) timescale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Carl A., E-mail: bauerca@colorado.ed; Werner, Gregory R.; Cary, John R.
A new frequency-domain electromagnetics algorithm is developed for simulating curved interfaces between anisotropic dielectrics embedded in a Yee mesh with second-order error in resonant frequencies. The algorithm is systematically derived using the finite integration formulation of Maxwell's equations on the Yee mesh. Second-order convergence of the error in resonant frequencies is achieved by guaranteeing first-order error on dielectric boundaries and second-order error in bulk (possibly anisotropic) regions. Convergence studies, conducted for an analytically solvable problem and for a photonic crystal of ellipsoids with anisotropic dielectric constant, both show second-order convergence of frequency error; the convergence is sufficiently smooth that Richardson extrapolation yields roughly third-order convergence. The convergence of electric fields near the dielectric interface for the analytic problem is also presented.
Trends in stratospheric ozone profiles using functional mixed models
NASA Astrophysics Data System (ADS)
Park, A.; Guillas, S.; Petropavlovskikh, I.
2013-11-01
This paper is devoted to the modeling of altitude-dependent patterns of ozone variations over time. Umkehr ozone profiles (quarter of Umkehr layer) from 1978 to 2011 are investigated at two locations: Boulder (USA) and Arosa (Switzerland). The study consists of two statistical stages. First we approximate ozone profiles employing an appropriate basis. To capture primary modes of ozone variations without losing essential information, a functional principal component analysis is performed. It penalizes roughness of the function and smooths excessive variations in the shape of the ozone profiles. As a result, data-driven basis functions (empirical basis functions) are obtained. The coefficients (principal component scores) corresponding to the empirical basis functions represent dominant temporal evolution in the shape of ozone profiles. We use those time series coefficients in the second statistical step to reveal the important sources of the patterns and variations in the profiles. We estimate the effects of covariates - month, year (trend), quasi-biennial oscillation, the solar cycle, the Arctic oscillation, the El Niño/Southern Oscillation cycle and the Eliassen-Palm flux - on the principal component scores of ozone profiles using additive mixed effects models. The effects are represented as smooth functions and the smooth functions are estimated by penalized regression splines. We also impose a heteroscedastic error structure that reflects the observed seasonality in the errors. The more complex error structure enables us to provide more accurate estimates of influences and trends, together with enhanced uncertainty quantification. Also, we are able to capture fine variations in the time evolution of the profiles, such as the semi-annual oscillation. We conclude by showing the trends by altitude over Boulder and Arosa, as well as for total column ozone. 
There are great variations in the trends across altitudes, which highlights the benefits of modeling ozone profiles.
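The first statistical stage above, extracting dominant modes of profile variation and the time series of scores, can be sketched with a plain principal component decomposition. The synthetic seasonal profiles below are assumptions, and the paper's roughness-penalized functional PCA is approximated here by an ordinary SVD.

```python
import numpy as np

# Toy ozone-profile matrix: months x altitude levels, driven by one
# seasonal vertical mode plus noise (entirely synthetic).
rng = np.random.default_rng(3)
n_months, n_levels = 120, 40
mode = np.sin(np.linspace(0, np.pi, n_levels))              # one vertical mode
scores_true = np.sin(2 * np.pi * np.arange(n_months) / 12)  # seasonal scores
profiles = np.outer(scores_true, mode) + rng.normal(0, 0.05, (n_months, n_levels))

# Unpenalized stand-in for FPCA: SVD of the centered profile matrix.
centered = profiles - profiles.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = u[:, 0] * s[0]                  # PC1 scores: temporal evolution of shape
explained = s[0] ** 2 / np.sum(s ** 2)   # fraction of variance in mode 1
```

The leading score series recovers the seasonal driver (up to sign), and it is time series of this kind that the paper's second stage models with additive mixed effects and penalized regression splines.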
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze
2016-06-01
The main contribution of this study is to improve the GRACE gravity field solution by taking errors of non-conservative acceleration and attitude observations into account. Unlike previous studies, the errors of the attitude and non-conservative acceleration data, and gravity field parameters, as well as accelerometer biases are estimated by means of weighted least squares adjustment. Then we compute a new time series of monthly gravity field models complete to degree and order 60 covering the period Jan. 2003 to Dec. 2012 from the twin GRACE satellites' data. The derived GRACE solution (called Tongji-GRACE02) is compared in terms of geoid degree variances and temporal mass changes with the other GRACE solutions, namely CSR RL05, GFZ RL05a, and JPL RL05. The results show that (1) the global mass signals of Tongji-GRACE02 are generally consistent with those of CSR RL05, GFZ RL05a, and JPL RL05; (2) compared to CSR RL05, the noise of Tongji-GRACE02 is reduced by about 21 % over ocean when only using 300 km Gaussian smoothing, and 60 % or more over deserts (Australia, Kalahari, Karakum and Thar) without using Gaussian smoothing and decorrelation filtering; and (3) for all examples, the noise reductions are more significant than signal reductions, no matter whether smoothing and filtering are applied or not. The comparison with GLDAS data supports that the signals of Tongji-GRACE02 over St. Lawrence River basin are close to those from CSR RL05, GFZ RL05a and JPL RL05, while the GLDAS result shows the best agreement with the Tongji-GRACE02 result.
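The weighted least-squares adjustment underlying this approach can be sketched on a toy problem; the dimensions, design matrix, and noise model below are assumptions for illustration, not the GRACE processing setup.

```python
import numpy as np

# Observations with known, unequal error variances get weights 1/sigma^2,
# analogous to down-weighting noisy accelerometer and attitude data.
rng = np.random.default_rng(4)
A = rng.normal(size=(200, 5))              # toy design matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
sigma = rng.uniform(0.1, 1.0, 200)         # per-observation error std
y = A @ x_true + rng.normal(0, sigma)

# Weighted normal equations: (A^T W A) x = A^T W y with W = diag(1/sigma^2).
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

Because accurate observations carry larger weights, the weighted solution recovers the parameters more sharply than an unweighted fit would when the noise is this heterogeneous.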
cBathy: A robust algorithm for estimating nearshore bathymetry
Plant, Nathaniel G.; Holman, Rob; Holland, K. Todd
2013-01-01
A three-part algorithm is described and tested to provide robust bathymetry maps based solely on long time series observations of surface wave motions. The first phase consists of frequency-dependent characterization of the wave field, in which dominant frequencies are estimated by Fourier transform while corresponding wave numbers are derived from spatial gradients in cross-spectral phase over analysis tiles that can be small, allowing high spatial resolution. Coherent spatial structures at each frequency are extracted by frequency-dependent empirical orthogonal function (EOF) analysis. In phase two, depths are found that best fit weighted sets of frequency-wave number pairs. These are subsequently smoothed in time in phase three using a Kalman filter that fills gaps in coverage and objectively averages new estimates of variable quality with prior estimates. Objective confidence intervals are returned. Tests at Duck, NC, using 16 surveys collected over 2 years showed a bias and root-mean-square (RMS) error of 0.19 and 0.51 m, respectively, but errors were largest near the offshore limits of analysis (roughly 500 m from the camera) and near the steep shoreline, where analysis tiles mix information from waves, swash and static dry sand. Performance was excellent for small waves but degraded somewhat with increasing wave height. Sand bars and their small-scale alongshore variability were well resolved. A single ground-truth survey from a dissipative, low-sloping beach (Agate Beach, OR) showed similar errors over a region that extended several kilometers from the camera and reached depths of 14 m. Vector wave number estimates can also be incorporated into data assimilation models of nearshore dynamics.
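The phase-three Kalman update, averaging a new depth estimate with a prior according to their error variances, can be sketched per grid cell as follows; the process-noise value is an assumption for illustration, not cBathy's tuned parameter.

```python
def kalman_update(depth_prior, var_prior, depth_obs, var_obs, q=0.01):
    """One scalar Kalman step per grid cell: blend a new depth estimate with
    the prior, weighting each by its error variance. `q` is process noise
    (assumed), letting the prior lose confidence between observations."""
    var_pred = var_prior + q                  # inflate prior uncertainty
    gain = var_pred / (var_pred + var_obs)    # trust placed in the observation
    depth = depth_prior + gain * (depth_obs - depth_prior)
    var = (1 - gain) * var_pred               # posterior uncertainty shrinks
    return depth, var

# A confident prior pulled gently toward a noisy new estimate:
d, v = kalman_update(depth_prior=3.0, var_prior=0.05, depth_obs=3.6, var_obs=0.5)
```

A low-quality observation moves the estimate only slightly, while a string of consistent observations steadily shrinks the variance, which is how the filter both fills coverage gaps and averages estimates of variable quality.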
Optical induction of muscle contraction at the tissue scale through intrinsic cellular amplifiers.
Yoon, Jonghee; Choi, Myunghwan; Ku, Taeyun; Choi, Won Jong; Choi, Chulhee
2014-08-01
The smooth muscle cell is the principal component responsible for involuntary control of visceral organs, including vascular tonicity, secretion, and sphincter regulation. It is known that the neurotransmitters released from nerve endings increase the intracellular Ca(2+) level in smooth muscle cells followed by muscle contraction. We herein report that femtosecond laser pulses focused on the diffraction-limited volume can induce intracellular Ca(2+) increases in the irradiated smooth muscle cell without neurotransmitters, and locally increased intracellular Ca(2+) levels are amplified by calcium-induced calcium-releasing mechanisms through the ryanodine receptor, a Ca(2+) channel of the endoplasmic reticulum. The laser-induced Ca(2+) increases propagate to adjacent cells through gap junctions. Thus, ultrashort-pulsed lasers can induce smooth muscle contraction by controlling Ca(2+), even with optical stimulation of the diffraction-limited volume. This optical method, which leads to reversible and reproducible muscle contraction, can be used in research into muscle dynamics, neuromuscular disease treatment, and nanorobot control. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fabrication of a high-precision spherical micromirror by bending a silicon plate with a metal pad.
Wu, Tong; Hane, Kazuhiro
2011-09-20
We demonstrate here the fabrication of a smooth mirror surface by bending a thin silicon plate. A spherical surface is achieved by the bending moment generated in the circumference of the micromirror. Both convex and concave spherical micromirrors are realized through the anodic bonding of silicon and Pyrex glass. Since the mirror surface originates from the polished silicon surface and no additional etching is introduced during manufacturing, the surface roughness is limited only by the polishing error. This novel approach opens possibilities for fabricating smooth surfaces for micromirror and microlens applications.
Method and apparatus for controlling electrode gap during vacuum consumable arc remelting
Fisher, R.W.; Maroone, J.P.; Tipping, D.W.; Zanner, F.J.
During vacuum consumable arc remelting, the electrode gap between a consumable electrode and a pool of molten metal is difficult to control. The present invention monitors drop shorts by detecting a decrease in the voltage between the consumable electrode and molten pool. The drop shorts and their associated voltage reductions occur as repetitive pulses which are closely correlated to the electrode gap. Thus, the method and apparatus of the present invention control electrode gap based upon drop shorts detected from the monitored anode-cathode voltage. The number of drop shorts is accumulated, and each time the number of drop shorts reaches a predetermined number, the average period between drop shorts is calculated from this predetermined number and the time in which this number is accumulated. This average drop-short period is used in a drop-short-period electrode gap model which determines the actual electrode gap from the drop shorts. The actual electrode gap is then compared with a desired electrode gap which is selected to produce optimum operating conditions, and the velocity of the consumable electrode is varied based upon the gap error. The consumable electrode is driven according to any prior art system at this velocity. In the preferred embodiment, a microprocessor system is utilized to perform the necessary calculations and further to monitor the duration of each drop short. If any drop short exceeds a preset duration period, the consumable electrode is rapidly retracted a predetermined distance to prevent bonding of the consumable electrode to the molten remelt.
Drop short control of electrode gap
Fisher, Robert W.; Maroone, James P.; Tipping, Donald W.; Zanner, Frank J.
1986-01-01
During vacuum consumable arc remelting the electrode gap between a consumable electrode and a pool of molten metal is difficult to control. The present invention monitors drop shorts by detecting a decrease in the voltage between the consumable electrode and molten pool. The drop shorts and their associated voltage reductions occur as repetitive pulses which are closely correlated with the electrode gap. Thus, the method and apparatus of the present invention control electrode gap based upon drop shorts detected from the monitored anode-cathode voltage. The number of drop shorts is accumulated, and each time the number of drop shorts reaches a predetermined number, the average period between drop shorts is calculated from this predetermined number and the time in which this number is accumulated. This average drop short period is used in a drop short period electrode gap model which determines the actual electrode gap from the drop short period. The actual electrode gap is then compared with a desired electrode gap which is selected to produce optimum operating conditions, and the velocity of the consumable electrode is varied based upon the gap error. The consumable electrode is driven according to any prior art system at this velocity. In the preferred embodiment, a microprocessor system is utilized to perform the necessary calculations and further to monitor the duration of each drop short. If any drop short exceeds a preset duration period, the consumable electrode is rapidly retracted a predetermined distance to prevent bonding of the consumable electrode to the molten remelt.
A new smooth robust control design for uncertain nonlinear systems with non-vanishing disturbances
NASA Astrophysics Data System (ADS)
Xian, Bin; Zhang, Yao
2016-06-01
In this paper, we consider the control problem for a general class of nonlinear systems subject to uncertain dynamics and non-vanishing disturbances. A smooth nonlinear control algorithm is presented to tackle these uncertainties and disturbances. The proposed control design employs the integral of a nonlinear sigmoid function to compensate for the uncertain dynamics and achieves uniformly semi-globally practically asymptotically stable tracking of the system outputs. A novel Lyapunov-based stability analysis is employed to prove the convergence of the tracking errors and the stability of the closed-loop system. Numerical simulation results on a two-link robot manipulator are presented to illustrate the performance of the proposed control algorithm compared with a boundary-layer sliding mode controller and the robust integral of the sign of the error (RISE) control design. Furthermore, real-time experimental results for the attitude control of a quadrotor helicopter are also included to confirm the effectiveness of the proposed algorithm.
Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.
Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra
2014-04-01
To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data on the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in-sample and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68, indicating that, on average, the forecast was better than the in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance.
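The forecasting step can be illustrated with a minimal sketch, assuming simple exponential smoothing with a fixed smoothing constant `alpha`; the study's automated algorithm also selects trend and seasonal components, which this omits:

```python
import numpy as np

def ses_forecast(y, alpha=0.3):
    """One-step-ahead forecasts by simple exponential smoothing.

    Returns forecasts aligned with y[1:]; `alpha` is an assumed constant."""
    level = y[0]
    preds = []
    for obs in y[1:]:
        preds.append(level)                      # forecast of this observation
        level = alpha * obs + (1 - alpha) * level  # update the smoothed level
    return np.array(preds)

def mape(actual, forecast):
    """Mean absolute percentage error, as reported in the study."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))
```

A constant series is forecast perfectly (MAPE of 0%), which is a quick sanity check on the alignment of forecasts and observations.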
Types of diagnostic errors in neurological emergencies in the emergency department.
Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V
2015-02-01
Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.
Zhang, Chen; Li, Geng; Gao, Ming; Zeng, XiaoYan
2017-01-26
Both laser-arc hybrid welding and narrow gap welding have potential for the fabrication of thick sections, but their combination has been seldom studied. In this research, 40 mm thick mild steel was welded by narrow gap laser-arc hybrid welding. A weld with smooth layer transition, free of visible defects, was obtained by nine passes at a 6 mm wide narrow gap. The lower part of the weld has the lowest mechanical properties because of the lowest amount of acicular ferrite, but its ultimate tensile strength and impact absorbing energy are still 49% and 60% higher than those of the base metal, respectively. The microhardness deviation of all filler layers along the weld thickness direction is no more than 15 HV0.2, indicating that no temper softening appeared during multiple heat cycles. The results provide an alternative technique for improving the efficiency and quality of welding thick sections.
NASA Astrophysics Data System (ADS)
Zhu, Bing; Chen, Hongxun; Wei, Qun
2014-06-01
This paper studies the cavitation characteristics of a low specific speed centrifugal pump with a gap structure impeller experimentally and numerically. A scalable DES (SDES) numerical method is proposed and developed by introducing the von Karman scale instead of the local grid scale, which allows a smooth and physically reasonable switch at the RANS-LES region interface. The SDES method can detect and capture unsteady flow structures, as demonstrated by the flow around a triangular prism and the cavitation flow in a centrifugal pump. Through numerical and experimental research, it is shown that the simulated results match the measured cavitation performance and visualization patterns qualitatively, and we can conclude that the gap structure impeller has a superior cavitation suppression feature. Its mechanism may be the flow-guiding feature of the small vice blade and the pressure auto-balance effect of the gap tunnel.
ERIC Educational Resources Information Center
Tajeddin, Zia; Alemi, Minoo; Pashmforoosh, Roya
2017-01-01
Unlike linguistic fossilization, pragmatic fossilization has received scant attention in fossilization research. To bridge this gap, the present study adopted a typical-error method of fossilization research to identify the most frequent errors in pragmatic routines committed by Persian-speaking learners of L2 English and explore the sources of…
Stereotype susceptibility narrows the gender gap in imagined self-rotation performance.
Wraga, Maryjane; Duncan, Lauren; Jacobs, Emily C; Helt, Molly; Church, Jessica
2006-10-01
Three studies examined the impact of stereotype messages on men's and women's performance of a mental rotation task involving imagined self-rotations. Experiment 1 established baseline differences between men and women; women made 12% more errors than did men. Experiment 2 found that exposure to a positive stereotype message enhanced women's performance in comparison with that of another group of women who received neutral information. In Experiment 3, men who were exposed to the same stereotype message emphasizing a female advantage made more errors than did male controls, and the magnitude of error was similar to that for women from Experiment 1. The results suggest that the gender gap in mental rotation performance is partially caused by experiential factors, particularly those induced by sociocultural stereotypes.
NASA Technical Reports Server (NTRS)
Borgia, Andrea; Spera, Frank J.
1990-01-01
This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-squares regression of stress on angular velocity data onto a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) (the 'power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series (the 'infinite radius' approximation) and the power-law approximation may recover the shear rate as accurately as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates its rheology reasonably well. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
[Cellular mechanism of the generation of spontaneous activity in gastric muscle].
Nakamura, Eri; Kito, Yoshihiko; Fukuta, Hiroyasu; Yanai, Yoshimasa; Hashitani, Hikaru; Yamamoto, Yoshimichi; Suzuki, Hikaru
2004-03-01
In gastric smooth muscles, interstitial cells of Cajal (ICC) might be the pacemaker cells of spontaneous activities since ICC are rich in mitochondria and are connected with smooth muscle cells via gap junctions. Several types of ICC are distributed widely in the stomach wall. A group of ICC distributed in the myenteric layer (ICC-MY) were the pacemaker cells of gastrointestinal smooth muscles. Pacemaker potentials were generated in ICC-MY, and the potentials were conducted to circular smooth muscles to trigger slow waves and also conducted to longitudinal muscles to form follower potentials. In circular muscle preparations, interstitial cells distributed within muscle bundles (ICC-IM) produced unitary potentials, which were conducted to circular muscles to form slow potentials by summation. In mutant mice lacking inositol trisphosphate (IP(3)) receptor, slow waves were absent in gastric smooth muscles. The generation of spontaneous activity was impaired by the inhibition of Ca(2+)-release from internal stores through IP(3) receptors, inhibition of mitochondrial Ca(2+)-handling with proton pump inhibitors, and inhibition of ATP-sensitive K(+)-channels at the mitochondrial inner membrane. These results suggested that mitochondrial Ca(2+)-handling causes the generation of spontaneous activity in pacemaker cells. Possible involvement of protein kinase C (PKC) in the Ca(2+) signaling system was also suggested.
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
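The inverse-filter construction described above (inverse DFT of the reciprocal of the transform of the system response) can be sketched as follows; Morrison's iteration itself and the filter-length truncation are omitted, and circular convolution is assumed:

```python
import numpy as np

def inverse_filter(response, n):
    """Inverse filter: inverse DFT of the reciprocal of the response's DFT."""
    H = np.fft.fft(response, n)       # transform of the system response
    return np.real(np.fft.ifft(1.0 / H))

def deconvolve(data, response):
    """Recover the input by (circular) convolution with the inverse filter."""
    n = len(data)
    f = inverse_filter(response, n)
    return np.real(np.fft.ifft(np.fft.fft(data) * np.fft.fft(f, n)))
```

For noise-free data generated by circular convolution with a response whose transform has no zeros, this recovers the input exactly, which is the baseline case the paper uses before optimizing against noise.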
A plasmid-based lacZα gene assay for DNA polymerase fidelity measurement
Keith, Brian J.; Jozwiakowski, Stanislaw K.; Connolly, Bernard A.
2013-01-01
A significantly improved DNA polymerase fidelity assay, based on a gapped plasmid containing the lacZα reporter gene in a single-stranded region, is described. Nicking at two sites flanking lacZα, and removing the excised strand by thermocycling in the presence of complementary competitor DNA, is used to generate the gap. Simple methods are presented for preparing the single-stranded competitor. The gapped plasmid can be purified, in high amounts and in a very pure state, using benzoylated–naphthoylated DEAE–cellulose, resulting in a low background mutation frequency (∼1 × 10−4). Two key parameters, the number of detectable sites and the expression frequency, necessary for measuring polymerase error rates have been determined. DNA polymerase fidelity is measured by gap filling in vitro, followed by transformation into Escherichia coli and scoring of blue/white colonies and converting the ratio to error rate. Several DNA polymerases have been used to fully validate this straightforward and highly sensitive system. PMID:23098700
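Converting blue/white colony ratios into a per-base error rate normalizes the observed mutant frequency by the two parameters the abstract names, the number of detectable sites and the expression frequency. A hedged sketch follows; the exact normalization and background subtraction used in the paper may differ, and all argument values below are illustrative:

```python
def error_rate(white, total, background_freq, detectable_sites, expression_freq):
    """Per-base polymerase error rate from blue/white colony counts.

    mutant frequency = white / total colonies; the background mutation
    frequency is subtracted, then the result is normalized by the number
    of detectable sites and the expression frequency (sketch only)."""
    mutant_freq = white / total
    return (mutant_freq - background_freq) / (detectable_sites * expression_freq)
```

For example, 30 white colonies out of 10,000, a background of 1e-4 (the figure quoted in the abstract), 100 detectable sites, and an expression frequency of 0.5 give an error rate of 5.8e-5; the site and expression numbers here are hypothetical.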
Reconstruction of Arctic surface temperature in past 100 years using DINEOF
NASA Astrophysics Data System (ADS)
Zhang, Qiyi; Huang, Jianbin; Luo, Yong
2015-04-01
Global annual mean surface temperature has not risen appreciably since 1998, a phenomenon described in recent years as the global warming hiatus. However, measuring temperature variability in the Arctic is difficult because most observed gridded datasets have large gaps in their coverage of the Arctic region. Since the Arctic has experienced rapid temperature change in recent years, known as polar amplification, and Arctic temperature has risen faster than the global mean, the unobserved temperature in the central Arctic produces a cold bias in both global and Arctic temperature estimates compared with model simulations and reanalysis datasets. Moreover, datasets with complete Arctic coverage but a short temporal span cannot show Arctic temperature variability over long periods. The Data Interpolating Empirical Orthogonal Functions (DINEOF) method was applied to fill the coverage gap of NASA's Goddard Institute for Space Studies Surface Temperature Analysis (GISTEMP, 250 km smoothing) product in the Arctic using the IABP dataset, which covers the entire Arctic region between 1979 and 1998, and to reconstruct Arctic temperature for 1900-2012. This method provides a temperature reconstruction for the central Arctic and a precise estimate of both global and Arctic temperature variability over a long temporal span. The results have been verified against extra independent station records in the Arctic by statistical analysis of, for example, variance and standard deviation. The reconstruction shows a significant warming trend in the Arctic over the last 30 years: the temperature trend in the Arctic since 1997 is 0.76°C per decade, compared with 0.48°C and 0.67°C per decade from the 250 km and 1200 km smoothed versions of GISTEMP, and the global temperature trend is two times greater after using DINEOF. These discrepancies stress the importance of fully accounting for temperature variance in the Arctic, because coverage gaps there cause an apparent cold bias in temperature estimates.
The result for global surface temperature also indicates that global warming in recent years is not as slow as previously thought.
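DINEOF-style gap filling can be sketched as an iterated truncated-SVD (EOF) reconstruction of the data matrix: missing cells are initialized with the mean, the matrix is approximated with a few EOF modes, and only the missing cells are updated until convergence. This minimal version omits the cross-validated choice of the number of modes used in practice:

```python
import numpy as np

def dineof(X, rank=2, n_iter=200):
    """Fill NaN gaps in a 2-D data matrix by iterated truncated SVD."""
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X), X)   # initialize gaps with the mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # low-rank (EOF) fit
        filled[mask] = recon[mask]               # update only the gaps
    return filled
```

On exactly low-rank data the iteration recovers a missing entry to high precision, which illustrates why EOF structure lets spatially coherent fields like surface temperature be reconstructed in unobserved regions.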
Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.
O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E
2018-04-26
Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as <0.56 mV (acutely) and <0.62 mV (chronically). Taking the macroscopic gap size as the gold standard, errors in gap measurements were determined for voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). The error in ablation contact force map gap measurements was significantly less than that for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gaps in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.
Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui
2015-05-01
Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) for quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing, mean centering, and standard normal variate transforms. The 319-338 nm wavelength section containing characteristic spectral lines of Cu was then selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV), and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model after 13-point smoothing and mean centering reached 0.9928, 3.43, and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, this model gave the best calibration and prediction quality. The results show that by selecting the appropriate data pre-processing method, the prediction accuracy of PLS quantitative models for fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
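The pretreatment that performed best, 13-point smoothing followed by mean centering, can be sketched in a few lines; the PLS regression step itself is omitted, and the shrinking-window edge handling is an assumption (the paper does not specify it):

```python
import numpy as np

def smooth13(spectrum):
    """13-point moving-average smoothing of one spectrum.

    Edges are handled by shrinking the window (an assumed choice)."""
    n = len(spectrum)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - 6), min(n, i + 7)   # up to 6 points on each side
        out[i] = spectrum[lo:hi].mean()
    return out

def mean_center(spectra):
    """Mean centering: subtract the column-wise mean across samples."""
    return spectra - spectra.mean(axis=0)
```

Each row of `spectra` would be one sample's spectrum over the 319-338 nm section; the centered matrix then feeds a PLS regression such as scikit-learn's `PLSRegression`.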
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2014-01-20
As a further application investigation of fixed abrasive diamond pellets (FADPs), this work exhibits their potential capability for diminishing mid-spatial frequency errors (MSFEs, i.e., periodic small structures) of optical surfaces. Benefitting from its high surface rigidity, the FADP tool has a natural smoothing effect on periodic small errors. Compared with the previous design, the proposed new tool conforms better to aspherical surfaces because the pellets are mutually separated and bonded on a steel plate with an elastic backing of silicone rubber adhesive. Moreover, a unicursal Peano-like path is presented for improving MSFEs, which can enhance the multidirectionality and uniformity of the tool's motion. Experiments were conducted to validate the effectiveness of FADPs for diminishing MSFEs. In the lapping of a Φ=420 mm Zerodur paraboloid workpiece, the grinding ripples were quickly diminished (210 min), as confirmed by visual inspection, profile metrology, and power spectral density (PSD) analysis; the RMS was reduced from 4.35 to 0.55 μm. In the smoothing of a Φ=101 mm fused silica workpiece, MSFEs were clearly improved, as seen in the surface form maps, interferometric fringe patterns, and PSD analysis. The mid-spatial frequency RMS was diminished from 0.017λ to 0.014λ (λ=632.8 nm).
Using satellite radiotelemetry data to delineate and manage wildlife populations
Amstrup, Steven C.; McDonald, T.L.; Durner, George M.
2004-01-01
The greatest promise of radiotelemetry always has been a better understanding of animal movements. Telemetry has helped us know when animals are active, how active they are, how far and how fast they move, the geographic areas they occupy, and whether individuals vary in these traits. Unfortunately, the inability to estimate the error in animals' utilization distributions (UDs) has prevented probabilistic linkage of movement data, which are always retrospective, with future management actions. We used the example of the harvested population of polar bears (Ursus maritimus) in the Southern Beaufort Sea to illustrate a method that provides that linkage. We employed a 2-dimensional Gaussian kernel density estimator to smooth and scale frequencies of polar bear radio locations within cells of a grid overlying our study area. True 2-dimensional smoothing allowed us to create accurate descriptions of the UDs of individuals and groups of bears. We used a new method of clustering, based upon the relative use collared bears made of each cell in our grid, to assign individual animals to populations. We applied the fast Fourier transform to make bootstrapped estimates of the error in UDs computationally feasible. Clustering and kernel smoothing identified 3 populations of polar bears in the region between Wrangel Island, Russia, and Banks Island, Canada. The relative probability of occurrence of animals from each population varied significantly among grid cells distributed across the study area. We displayed occurrence probabilities as contour maps wherein each contour line corresponded with a change in relative probability. Only at the edges of our study area and in some offshore regions were bootstrapped estimates of error in occurrence probabilities too high to allow prediction. Error estimates, which also were displayed as contours, allowed us to show that occurrence probabilities did not vary by season.
Near Barrow, Alaska, 50% of bears observed are predicted to be from the Chukchi Sea population and 50% from the Southern Beaufort Sea population. At Tuktoyaktuk, Northwest Territories, Canada, 50% are from the Southern Beaufort Sea and 50% from the Northern Beaufort Sea population. The methods described here will aid managers of all wildlife that can be studied by telemetry to allocate harvests and other human perturbations to the appropriate populations, make risk assessments, and predict impacts of human activities. They will aid researchers by providing the refined descriptions of study populations that are necessary for population estimation and other investigative tasks. Keywords: Arctic, Beaufort Sea, boundaries, clustering, Fourier transform, kernel, management, polar bears, population delineation, radiotelemetry, satellite, smoothing, Ursus maritimus.
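The core step, smoothing gridded location frequencies with a 2-D Gaussian kernel, can be sketched with an FFT-based convolution (in keeping with the authors' use of the fast Fourier transform for efficiency); `sigma` is an assumed bandwidth in grid cells, and the FFT implies periodic wraparound at the grid edges, which this sketch does not correct:

```python
import numpy as np

def kde_grid(counts, sigma):
    """Smooth a 2-D grid of radio-location counts with a Gaussian kernel.

    Returns a normalized utilization distribution (UD) over the grid."""
    ny, nx = counts.shape
    y = np.fft.fftfreq(ny) * ny            # wrap-around offsets from cell 0
    x = np.fft.fftfreq(nx) * nx
    Y, X = np.meshgrid(y, x, indexing="ij")
    kernel = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    kernel /= kernel.sum()
    ud = np.real(np.fft.ifft2(np.fft.fft2(counts) * np.fft.fft2(kernel)))
    return ud / ud.sum()                   # normalize to a probability surface
```

Bootstrapping the error in the UD then amounts to resampling locations and repeating this smoothing, which is cheap because each pass is two FFTs.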
Chang, C H; Hwang, C S; Fan, T C; Chen, K H; Pan, K T; Lin, F Y; Wang, C; Chang, L H; Chen, H H; Lin, M C; Yeh, S
1998-05-01
In this work, a 1 m long Sasaki-type elliptically polarizing undulator (EPU) prototype with 5.6 cm period length is used to examine the mechanical design feasibility as well as magnetic field performance. The magnetic field characteristics of the EPU5.6 prototype at various phase shifts and gap motion are described. The field errors from mechanical tolerances, magnet block errors, end field effects and phase/gap motion effects are analysed. The procedures related to correcting the field with the block position tuning, iron shimming and the trim blocks at both ends are outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Yan, H; Jia, X
2014-06-01
Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and imaging object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by CT number error compared with a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy is achieved when the blocker lead strip width is 8 mm and the gap is 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
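The scatter-estimation step can be sketched in one dimension across the detector: scatter sampled under the blocked strips is interpolated into the unblocked region and scored by relative RMSE. Note the paper uses cubic B-spline interpolation; this sketch substitutes plain linear interpolation (`np.interp`) for brevity:

```python
import numpy as np

def estimate_scatter(detector_u, blocked_u, blocked_scatter):
    """Estimate scatter at all detector positions from blocked-strip samples.

    (Linear interpolation here; the paper uses cubic B-splines.)"""
    return np.interp(detector_u, blocked_u, blocked_scatter)

def rrmse(estimated, truth):
    """Relative root mean squared error used to score the estimate."""
    return np.sqrt(np.mean((estimated - truth) ** 2)) / np.mean(truth)
```

Because scatter is spatially smooth, even coarse strip spacing recovers it well, which is why the paper finds accurate estimates over a wide range of strip widths and gaps.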
A User Guide for Smoothing Air Traffic Radar Data
NASA Technical Reports Server (NTRS)
Bach, Ralph E.; Paielli, Russell A.
2014-01-01
Matlab software was written to provide smoothing of radar tracking data to simulate ADS-B (Automatic Dependent Surveillance-Broadcast) data in order to test a tactical conflict probe. The probe, called TSAFE (Tactical Separation-Assured Flight Environment), is designed to handle air-traffic conflicts left undetected or unresolved when loss-of-separation is predicted to occur within approximately two minutes. The data stream that is down-linked from an aircraft equipped with an ADS-B system would include accurate GPS-derived position and velocity information at sample rates of 1 Hz. Nation-wide ADS-B equipage (mandated by 2020) should improve surveillance accuracy and TSAFE performance. Currently, position data are provided by Center radar (nominal 12-sec samples) and Terminal radar (nominal 4.8-sec samples). Aircraft ground speed and ground track are estimated using real-time filtering, causing lags up to 60 sec, compromising performance of a tactical resolution tool. Offline smoothing of radar data reduces wild-point errors, provides a sample rate as high as 1 Hz, and yields more accurate and lag-free estimates of ground speed, ground track, and climb rate. Until full ADS-B implementation is available, smoothed radar data should provide reasonable track estimates for testing TSAFE in an ADS-B-like environment. An example illustrates the smoothing of radar data and shows a comparison of smoothed-radar and ADS-B tracking. This document is intended to serve as a guide for using the smoothing software.
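A minimal sketch of the offline idea, assuming a centered moving-average smoother (zero phase lag, unlike a real-time causal filter) and central differences for the speed estimate; the actual Matlab software's filter design is not reproduced here, and the document's code is Matlab while this illustration uses Python:

```python
import numpy as np

def smooth_track(x, window=5):
    """Offline zero-phase smoothing: centered moving average of positions.

    Edge samples are padded with the end values (an assumed choice)."""
    k = np.ones(window) / window
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, k, mode="valid")

def ground_speed(t, x):
    """Lag-free speed estimate via central differences of position."""
    return np.gradient(x, t)
```

Because the average is centered rather than causal, the estimate has no lag, which is the key advantage of offline smoothing over the real-time filtering that causes the 60-second lags described above.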
Validation of High-Resolution CFD Method for Slosh Damping Extraction of Baffled Tanks
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2016-01-01
Determination of slosh damping is a very challenging task as there is no analytical solution. The damping physics involve the vorticity dissipation which requires the full solution of the nonlinear Navier-Stokes equations. As a result, previous investigations and knowledge were mainly carried out by extensive experimental studies. A Volume-Of-Fluid (VOF) based CFD program developed at NASA MSFC was applied to extract slosh damping in a baffled tank from the first principle. First, experimental data using water with subscale smooth wall tank were used as the baseline validation. CFD simulation was demonstrated to be capable of accurately predicting natural frequency and very low damping value from the smooth wall tank at different fill levels. The damping due to a ring baffle at different liquid fill levels from barrel section and into the upper dome was then investigated to understand the slosh damping physics due to the presence of a ring baffle. Based on this study, the Root-Mean-Square error of our CFD simulation in estimating slosh damping was less than 4.8%, and the maximum error was less than 8.5%. Scalability of subscale baffled tank test using water was investigated using the validated CFD tool, and it was found that unlike the smooth wall case, slosh damping with baffle is almost independent of the working fluid and it is reasonable to apply water test data to the full scale LOX tank when the damping from baffle is dominant. On the other hand, for the smooth wall, the damping value must be scaled according to the Reynolds number. Comparison of experimental data, CFD, with the classical and modified Miles equations for upper dome was made, and the limitations of these semi-empirical equations were identified.
Self-recovery reversible image watermarking algorithm
Sun, He; Gao, Shangbing; Jin, Shenghua
2018-01-01
The integrity of image content is essential; most watermarking algorithms can achieve image authentication but cannot automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover tampered areas effectively. First, the original image is divided into homogeneous and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks, classified into smooth blocks and texture blocks according to image texture. Finally, the recovery watermark generated from homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated from non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. Correlation attacks are detected by invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
Aliased tidal errors in TOPEX/POSEIDON sea surface height data
NASA Technical Reports Server (NTRS)
Schlax, Michael G.; Chelton, Dudley B.
1994-01-01
Alias periods and wavelengths for the M(sub 2), S(sub 2), N(sub 2), K(sub 1), O(sub 1), and P(sub 1) tidal constituents are calculated for TOPEX/POSEIDON. Alias wavelengths calculated in previous studies are shown to be in error, and a correct method is presented. With the exception of the K(sub 1) constituent, all of these tidal aliases for TOPEX/POSEIDON have periods shorter than 90 days and are likely to be confounded with long-period sea surface height signals associated with real ocean processes. In particular, the correspondence between the periods and wavelengths of the M(sub 2) alias and annual baroclinic Rossby waves that plagued Geosat sea surface height data is avoided. The potential for aliasing residual tidal errors in smoothed estimates of sea surface height is calculated for the six tidal constituents. The potential for aliasing the lunar tidal constituents M(sub 2), N(sub 2) and O(sub 1) fluctuates with latitude and is different for estimates made at the crossovers of ascending and descending ground tracks than for estimates at points midway between crossovers. The potential for aliasing the solar tidal constituents S(sub 2), K(sub 1) and P(sub 1) varies smoothly with latitude. S(sub 2) is strongly aliased for latitudes within 50 degrees of the equator, while K(sub 1) and P(sub 1) are only weakly aliased in that range. A weighted least squares method for estimating and removing residual tidal errors from TOPEX/POSEIDON sea surface height data is presented. A clear understanding of the nature of aliased tidal error in TOPEX/POSEIDON data aids the unambiguous identification of real propagating sea surface height signals. Unequivocal evidence of annual period, westward propagating waves in the North Atlantic is presented.
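The aliasing arithmetic behind the periods quoted above can be sketched as follows (the constituent periods and the 9.9156-day TOPEX/POSEIDON exact-repeat interval are standard values; the function is an illustrative reconstruction, not the paper's code):

```python
def alias_period(tide_period_days, sample_interval_days=9.9156):
    """Alias period (days) of a tidal constituent under regular
    sampling. The observed (aliased) frequency is the distance from
    the true frequency to the nearest integer number of cycles per
    sample; 9.9156 days is the TOPEX/POSEIDON exact-repeat period."""
    cycles_per_sample = sample_interval_days / tide_period_days
    frac = abs(cycles_per_sample - round(cycles_per_sample))
    return sample_interval_days / frac

M2 = 12.4206012 / 24.0   # principal lunar semidiurnal period, days
K1 = 23.9344697 / 24.0   # lunisolar diurnal period, days
print(round(alias_period(M2), 1))  # -> 62.1 (shorter than 90 days)
print(round(alias_period(K1), 1))  # -> 173.2 (the K1 exception)
```

The two printed values reproduce the behavior described in the abstract: the M(sub 2) alias falls below 90 days, while K(sub 1) is the exception with an alias period of roughly half a year.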
How good are the Garvey-Kelson predictions of nuclear masses?
NASA Astrophysics Data System (ADS)
Morales, Irving O.; López Vieyra, J. C.; Hirsch, J. G.; Frank, A.
2009-09-01
The Garvey-Kelson relations are used in an iterative process to predict nuclear masses in the neighborhood of nuclei with measured masses. Average errors in the predicted masses for the first three iteration shells are smaller than those obtained with the best nuclear mass models. Their quality is comparable with that of the Audi-Wapstra extrapolations, offering a simple and reproducible procedure for short-range mass predictions. A systematic study of the way the error grows as a function of the iteration and of the distance to the region of known masses shows that a correlation exists between the error and the residual neutron-proton interaction, produced mainly by the implicit assumption that V varies smoothly along the nuclear landscape.
Bellman’s GAP—a language and compiler for dynamic programming in sequence analysis
Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert
2013-01-01
Motivation: Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman’s GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. Results: In Bellman’s GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive with carefully hand-crafted implementations. This article introduces the Bellman’s GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman’s GAP as an implementation platform for ‘real-world’ bioinformatics tools. Availability: Bellman’s GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics. Contact: robert@techfak.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23355290
Fractional-dimensional Child-Langmuir law for a rough cathode
NASA Astrophysics Data System (ADS)
Zubair, M.; Ang, L. K.
2016-07-01
This work presents a self-consistent model of space-charge-limited current transport in a gap composed of free space and fractional-dimensional space (Fα), where α is the fractional dimension in the range 0 < α ≤ 1. In this approach, a closed-form fractional-dimensional generalization of the Child-Langmuir (CL) law is derived in the classical regime, which is then used to model the effect of cathode surface roughness in a vacuum diode by replacing the rough cathode with a smooth cathode placed in a layer of effective fractional-dimensional space. A smooth transition of the CL law from fractional-dimensional to integer-dimensional space is also demonstrated. The model has been validated by comparing results with an experiment.
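For reference, the α → 1 (integer-dimensional) limit of the generalization is the classical Child-Langmuir law, which can be evaluated directly (a sketch of the classical limit only; the fractional-dimensional form itself is not reproduced here):

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_current_density(voltage, gap):
    """Classical (integer-dimensional) Child-Langmuir limit in A/m^2:
    J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2, i.e. the alpha -> 1
    limit of the fractional-dimensional generalization in the paper."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2

# 1 kV across a 1 mm vacuum gap: roughly 7.4e4 A/m^2 (about 7.4 A/cm^2)
print(child_langmuir_current_density(1000.0, 1e-3))
```

The familiar V^(3/2)/d^2 scaling is what the fractional exponent generalizes when the rough cathode is replaced by a smooth one in an effective F-alpha layer.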
Voltage-Clamp Studies on Uterine Smooth Muscle
Anderson, Nels C.
1969-01-01
These studies have developed and tested an experimental approach to the study of membrane ionic conductance mechanisms in strips of uterine smooth muscle. The experimental and theoretical basis for applying the double sucrose-gap technique is described along with the limitations of this system. Nonpropagating membrane action potentials were produced in response to depolarizing current pulses under current-clamp conditions. The stepwise change of membrane potential under voltage-clamp conditions resulted in a family of ionic currents with voltage- and time-dependent characteristics. In sodium-free solution the peak transient current decreased and its equilibrium potential shifted along the voltage axis toward a more negative internal potential. These studies indicate a sodium-dependent, regenerative excitation mechanism. PMID:5796366
NASA Astrophysics Data System (ADS)
Baker, S.; Berryman, E.; Hawbaker, T. J.; Ewers, B. E.
2015-12-01
While much attention has been focused on large-scale forest disturbances such as fire, harvesting, drought and insect attacks, small-scale forest disturbances that create gaps in forest canopies and below-ground root and mycorrhizal networks may accumulate to impact regional-scale carbon budgets. In a lodgepole pine (Pinus contorta) forest near Fox Park, WY, clusters of 15 and 30 trees were removed in 1988 to assess the effect of tree gap disturbance on fine root density and nitrogen transformation. Twenty-seven years later the gaps remain, with limited regeneration present only in the center of the 30-tree plots, beyond the influence of roots from adjacent intact trees. Soil respiration was measured in the summer of 2015 to assess the influence of these disturbances on carbon cycling in Pinus contorta forests. Positions at the centers of experimental disturbances were found to have the lowest respiration rates (mean 2.45 μmol C/m2/s, standard error 0.17 μmol C/m2/s), control plots in the undisturbed forest were highest (mean 4.15 μmol C/m2/s, standard error 0.63 μmol C/m2/s), and positions near the margin of the disturbance were intermediate (mean 3.7 μmol C/m2/s, standard error 0.34 μmol C/m2/s). Fine root densities, soil nitrogen, and microclimate changes were also measured and played an important role in the respiration rates of disturbed plots. This demonstrates that a long-term effect on carbon cycling occurs when gaps are created in the canopy and root network of lodgepole forests.
Automated, on-board terrain analysis for precision landings
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Advances in space robotics technology hinge to a large extent upon the development and deployment of sophisticated new vision-based methods for automated in-space mission operations and scientific survey. To this end, we have developed a new concept for automated terrain analysis that is based upon a generic image enhancement platform: multi-scale retinex (MSR) and visual servo (VS) processing. This pre-conditioning with the MSR and the VS produces a "canonical" visual representation that is largely independent of lighting variations and exposure errors. Enhanced imagery is then processed with a biologically inspired two-channel edge detection process, followed by a smoothness-based criterion for image segmentation. Landing sites can be automatically determined by examining the results of the smoothness-based segmentation, which shows those areas in the image that surpass a minimum degree of smoothness. Though the MSR has proven to be a very strong enhancement engine, the other elements of the approach (the VS, terrain map generation, and smoothness-based segmentation) are in early stages of development. Experimental results on data from the Mars Global Surveyor show that the imagery can be processed to automatically obtain smooth landing sites. In this paper, we describe the method used to obtain these landing sites, and also examine the smoothness criteria in terms of the imager and scene characteristics. Several examples of applying this method to simulated and real imagery are shown.
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. 
The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
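The detection step described above can be sketched as follows (a simplified stand-in: a moving average replaces the paper's nonparametric smoother, and all names and defaults are illustrative):

```python
# Illustrative sketch of the dilution-detection idea: fit each
# metabolite trend, then flag timepoints where the median percent
# deviation across ALL metabolites exceeds a threshold (e.g. 2.5%),
# since a dilution error shifts every trend in the same direction.

def smooth(trend, window=3):
    """Centered moving average standing in for a nonparametric fit."""
    half = window // 2
    return [sum(trend[max(0, i - half):min(len(trend), i + half + 1)])
            / (min(len(trend), i + half + 1) - max(0, i - half))
            for i in range(len(trend))]

def flag_systematic_errors(trends, threshold=0.025):
    """Return timepoint indices whose median percent deviation from
    the smoothed trends exceeds the threshold."""
    n_time = len(trends[0])
    deviations = []
    for trend in trends:
        fit = smooth(trend)
        deviations.append([(obs - est) / est
                           for obs, est in zip(trend, fit)])
    flagged = []
    for t in range(n_time):
        column = sorted(dev[t] for dev in deviations)
        median = column[len(column) // 2]
        if abs(median) > threshold:
            flagged.append(t)
    return flagged

# Five flat trends with a common 10% dip at timepoint 5 (a simulated
# dilution error) are flagged; clean trends are not.
trends = [[100.0] * 10 for _ in range(5)]
for tr in trends:
    tr[5] = 90.0
print(flag_systematic_errors(trends, threshold=0.05))  # -> [5]
```

Because random measurement noise is uncorrelated across metabolites, the median deviation at a timepoint isolates the systematic (shared) component, which is the core of the published approach.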
Hsueh, Ya-seng Arthur; Brando, Alex; Dunt, David; Anjou, Mitchell D; Boudville, Andrea; Taylor, Hugh
2013-12-01
Objective: To estimate the costs of the extra resources required to close the gap of vision between Indigenous and non-Indigenous Australians. Design: Constructing comprehensive eye care pathways for Indigenous Australians with their related probabilities, to capture full eye care usage compared with the current usage rate for cataract surgery, refractive error and diabetic retinopathy, using the best available data. Setting: Urban and remote regions of Australia. Intervention: The provision of eye care for cataract surgery, refractive error and diabetic retinopathy. Main outcome measures: Estimated cost needed for full access, estimated current spending and estimated extra cost required to close the gaps of cataract surgery, refractive error and diabetic retinopathy for Indigenous Australians. Results: Total cost needed for full coverage of all three major eye conditions is $45.5 million per year in 2011 Australian dollars. Current annual spending is $17.4 million. The additional yearly cost required to close the gap of vision is $28 million. This includes extra capped funds of $3 million from the Commonwealth Government and $2 million from the State and Territory Governments. Additional coordination costs per year are $13.3 million. Conclusions: Although available data are limited, this study has produced the first estimates that are indicative of the need for planning and providing equity in eye care. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.
Poster - 53: Improving inter-linac DMLC IMRT dose precision by fine tuning of MLC leaf calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakonechny, Keith; Tran, Muoi; Sasaki, David
Purpose: To develop a method to improve the inter-linac precision of DMLC IMRT dosimetry. Methods: The distance between opposing MLC leaf banks (“gap size”) can be finely tuned on Varian linacs. The dosimetric effect due to small deviations from the nominal gap size (“gap error”) was studied by introducing known errors for several DMLC sliding gap sizes, and for clinical plans based on the TG119 test cases. The plans were delivered on a single Varian linac and the relationship between gap error and the corresponding change in dose was measured. The plans were also delivered on eight Varian 2100 series linacs (at two institutions) in order to quantify the inter-linac variation in dose before and after fine tuning the MLC calibration. Results: The measured dose differences for each field agreed well with the predictions of LoSasso et al. Using the default MLC calibration, the variation in the physical MLC gap size was determined to be less than 0.4 mm between all linacs studied. The dose difference between the linacs with the largest and smallest physical gap was up to 5.4% (spinal cord region of the head and neck TG119 test case). This difference was reduced to 2.5% after fine tuning the MLC gap calibration. Conclusions: The inter-linac dose precision for DMLC IMRT on Varian linacs can be improved using a simple modification of the MLC calibration procedure that involves fine adjustment of the nominal gap size.
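The first-order relationship between gap error and dose, in the spirit of the LoSasso et al. analysis cited above, can be sketched as a simple proportionality (an assumption for illustration; the 10 mm example gap is hypothetical, and real sliding gaps vary across a field):

```python
def relative_dose_change(gap_error_mm, nominal_gap_mm):
    """First-order estimate: the fractional change in delivered dose of
    a DMLC sliding-gap field scales as gap error over nominal gap, so
    small gaps are the most sensitive to calibration offsets."""
    return gap_error_mm / nominal_gap_mm

def gap_error_from_dose_change(dose_change_fraction, nominal_gap_mm):
    """Invert the relation: the physical gap offset that would explain
    an observed inter-linac dose difference."""
    return dose_change_fraction * nominal_gap_mm

# A 0.4 mm gap deviation across a hypothetical 10 mm sliding gap gives
# a ~4% dose change, the same order as the 5.4% spread reported above.
print(round(relative_dose_change(0.4, 10.0), 3))  # -> 0.04
```

This inverse dependence on gap width is why the largest inter-linac differences appeared in the spinal cord region, where sliding gaps are narrow.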
Benchmark radar targets for the validation of computational electromagnetics programs
NASA Technical Reports Server (NTRS)
Woo, Alex C.; Wang, Helen T. G.; Schuh, Michael J.; Sanders, Michael L.
1993-01-01
Results are presented of a set of computational electromagnetics validation measurements referring to three-dimensional perfectly conducting smooth targets, performed for the Electromagnetic Code Consortium. Plots are presented for both the low- and high-frequency measurements of the NASA almond, an ogive, a double ogive, a cone-sphere, and a cone-sphere with a gap.
Language Policy and Literacy Practices in the Family: The Case of Ethiopian Parental Narrative Input
ERIC Educational Resources Information Center
Stavans, Anat
2012-01-01
The present study analyses Family Language Policy (FLP) with regard to the language literacy development of children in Ethiopian immigrant families. The gap between linguistic literacy at home and at school hinders a smooth societal integration and a normative literacy development. This study describes the home literacy patterns shaped by…
Examining Middle School Students' Awareness of Their Career Paths
ERIC Educational Resources Information Center
Alsuwaidi, Sultan A.
2012-01-01
The United Arab Emirates (UAE) education system is missing an educational plan that can provide students the necessary information to learn about themselves and the world of work and help them make a smooth transition from primary school to secondary schools and the workplace. To address this gap, this study examined 9th graders' career awareness…
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process and results in a grayscale image. This paper discusses modifications to the AS method for application to multi-band data, which result in a color segmented image. The process was used to visually enhance the three most distinct abundance-fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which is important for subsequent operations involving data classification.
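Anisotropic diffusion, the process that adaptive smoothing implements, can be sketched in its classic Perona-Malik form (a generic single-band version with illustrative parameter values, not the authors' multi-band modification):

```python
import math

def perona_malik_step(img, kappa=10.0, lam=0.2):
    """One explicit step of Perona-Malik anisotropic diffusion: smooth
    within uniform regions while preserving strong edges, using the
    conductance g = exp(-(grad/kappa)^2) on each 4-neighbor gradient."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(rows):
        for j in range(cols):
            total = 0.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    grad = img[ni][nj] - img[i][j]
                    # large gradients get near-zero conductance,
                    # so contrast edges barely diffuse
                    total += math.exp(-(grad / kappa) ** 2) * grad
            out[i][j] += lam * total
    return out

# A strong edge (contrast 100 vs kappa=10) is preserved almost
# exactly, while a weak gradient (contrast 1) is smoothed.
strong = perona_malik_step([[0.0, 100.0], [0.0, 100.0]])
weak = perona_malik_step([[0.0, 1.0], [0.0, 1.0]])
```

Iterating this step gives the region-flattening, edge-preserving behavior that the modified AS method exploits to produce homogeneous segments with sharp boundaries.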
Understanding Visible Perception
NASA Technical Reports Server (NTRS)
2003-01-01
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.
Nonequilibrium flows with smooth particle applied mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kum, Oyeon
1995-07-01
Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expression for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal-gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fail.
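Lucy's weighting function, one of the three kernels compared above, has a simple closed form; the one-dimensional, normalized version below is a sketch for illustration (the thesis works in higher dimensions, where the normalization constant differs):

```python
def lucy_weight_1d(x, h):
    """Lucy's smooth-particle weighting function in one dimension,
    w(x) = (5/4h) * (1 + 3u) * (1 - u)^3 with u = |x|/h, normalized
    so it integrates to 1 over the compact support [-h, h]."""
    u = abs(x) / h
    if u >= 1.0:
        return 0.0
    return (5.0 / (4.0 * h)) * (1.0 + 3.0 * u) * (1.0 - u) ** 3

def density_estimate(x, particle_positions, mass, h):
    """SPH density at x: kernel-weighted sum over particle masses."""
    return sum(mass * lucy_weight_1d(x - xp, h)
               for xp in particle_positions)

print(lucy_weight_1d(0.0, 1.0))  # -> 1.25, the peak value for h = 1
```

The kernel is twice continuously differentiable at its endpoints, which is what makes second-derivative quantities such as heat flux and viscous stress well behaved in the gradient algorithm described above.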
Disclosing harmful medical errors to patients: tackling three tough cases.
Gallagher, Thomas H; Bell, Sigall K; Smith, Kelly M; Mello, Michelle M; McDonald, Timothy B
2009-09-01
A gap exists between recommendations to disclose errors to patients and current practice. This gap may reflect important, yet unanswered questions about implementing disclosure principles. We explore some of these unanswered questions by presenting three real cases that pose challenging disclosure dilemmas. The first case involves a pancreas transplant that failed due to the pancreas graft being discarded, an error that was not disclosed partly because the family did not ask clarifying questions. Relying on patient or family questions to determine the content of disclosure is problematic. We propose a standard of materiality that can help clinicians to decide what information to disclose. The second case involves a fatal diagnostic error that the patient's widower was unaware had happened. The error was not disclosed out of concern that disclosure would cause the widower more harm than good. This case highlights how institutions can overlook patients' and families' needs following errors and emphasizes that benevolent deception has little role in disclosure. Institutions should consider whether involving neutral third parties could make disclosures more patient centered. The third case presents an intraoperative cardiac arrest due to a large air embolism where uncertainty around the clinical event was high and complicated the disclosure. Uncertainty is common to many medical errors but should not deter open conversations with patients and families about what is and is not known about the event. Continued discussion within the medical profession about applying disclosure principles to real-world cases can help to better meet patients' and families' needs following medical errors.
Nurses' role in medication safety.
Choo, Janet; Hutchinson, Alison; Bucknall, Tracey
2010-10-01
To explore the nurse's role in the process of medication management and identify the challenges associated with safe medication management in contemporary clinical practice. Medication errors have been a long-standing factor affecting consumer safety. The nursing profession has been identified as essential to the promotion of patient safety. A review of literature on medication errors and the use of electronic prescribing in medication errors. Medication management requires a multidisciplinary approach, and interdisciplinary communication is essential to reduce medication errors. Information technologies can help to reduce some medication errors through the eradication of transcription and dosing errors. Nurses must play a major role in the design of computerized medication systems to ensure a smooth transition to such a system. The nurses' role in medication management cannot be over-emphasized. This is particularly true when designing a computerized medication system. The adoption of safety measures during decision making that parallel the aviation industry's safety procedures can provide some strategies to prevent medication error. Innovations in information technology offer potential mechanisms to avert adverse events in medication management for nurses. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Hamazaki, Junichi; Furusawa, Kentaro; Sekine, Norihiko; Kasamatsu, Akifumi; Hosako, Iwao
2016-11-01
The effects of the chirp of the pump pulse in broadband terahertz (THz) pulse generation by optical rectification (OR) in GaP were systematically investigated. It was found that the pre-compensation for the dispersion of GaP is important for obtaining smooth and single-peaked THz spectra as well as high power-conversion efficiency. It was also found that an excessive amount of chirp leads to distortions in THz spectra, which can be quantitatively analyzed by using a simple model. Our results highlight the importance of accurate control over the chirp of the pump pulse for generating broadband THz pulses by OR.
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the main forming process used to produce seamless rings, which are applied in various industries such as the energy sector, aerospace technology and the automotive industry. Because forming takes place simultaneously in two opposite rolling gaps, and because ring rolling is a bulk forming process, various errors can occur during the rolling process. Ring climbing is one of the most frequent process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. Therefore, a common strategy is to roll a slightly bigger ring, so that randomly occurring process errors can be corrected afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of their ring rolling machine to enable the recognition and measurement of climbing rings and thereby reduce the additional material. This paper presents the algorithm that enables the image processing system to detect the error of a climbing ring and ensures comparably reliable results for the measurement of the climbing height of the rings.
Reference respiratory waveforms by minimum jerk model analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka
Purpose: CyberKnife{sup ®} robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony{sup ®} mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimummore » jerk model, which mathematically derives smoothness by means of jerk, or the third derivative of position and the derivative of acceleration with respect to time that is proportional to the time rate of force changed was introduced to model a patient-specific respiratory motion wave to provide smooth motion tracking using CyberKnife{sup ®}. To verify that patient-specific minimum jerk respiratory waves were being tracked smoothly by Synchrony{sup ®} mode, a tracking laser projection from CyberKnife{sup ®} was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion was in accordance with three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior–inferior directions from six volunteers assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum-peak amplitude of radial tracking discrepancy compared with that of the waveforms modeled by cosine and typical free-breathing model by 22% and 35%, respectively, and provided smooth tracking for radial direction. 
Motion tracking constancy, as indicated by the radial tracking discrepancy affected by respiratory phase, was improved in the minimum jerk theoretical model by 7.0% and 13% compared with the cosine and free-breathing waveforms, respectively. Conclusions: The minimum jerk theoretical respiratory wave can achieve smooth tracking by CyberKnife® and may provide patient-specific respiratory modeling, which may be useful for respiratory training and coaching, as well as quality assurance of the mechanical CyberKnife® robotic trajectory.
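The minimum jerk position profile referenced in this record has a standard closed form, the fifth-order polynomial that minimizes integrated squared jerk. A minimal sketch (the 10 mm excursion and 2 s half-cycle below are hypothetical illustration values, not taken from the study):

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position profile from x0 to xf over duration T.

    The polynomial 10*tau^3 - 15*tau^4 + 6*tau^5 minimizes the integral of
    squared jerk (third derivative of position) with zero velocity and
    acceleration at both endpoints.
    """
    tau = np.clip(t / T, 0.0, 1.0)  # normalized time in [0, 1]
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# One smooth breathing half-cycle: 10 mm excursion over 2 s.
t = np.linspace(0.0, 2.0, 21)
x = minimum_jerk(0.0, 10.0, 2.0, t)
```

By symmetry the profile passes through the midpoint amplitude at the half-period, which makes it easy to sanity-check numerically.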
Darrouzet-Nardi, Amelia F; Masters, William A
2017-01-01
A large literature links early-life environmental shocks to later outcomes. This paper uses seasonal variation across the Democratic Republic of the Congo to test for nutrition smoothing, defined here as attaining similar height, weight and mortality outcomes despite different agroclimatic conditions at birth. We find that gaps between siblings and neighbors born at different times of year are larger in more remote rural areas, farther from the equator where there are greater seasonal differences in rainfall and temperature. For those born at adverse times in places with pronounced seasonality, the gains associated with above-median proximity to nearby towns are similar to rising one quintile in the national distribution of household wealth for mortality, and two quintiles for attained height. Smoothing of outcomes could involve a variety of mechanisms to be addressed in future work, including access to food markets, health services, public assistance and temporary migration to achieve more uniform dietary intake, or less exposure and improved recovery from seasonal diseases.
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 ± 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 ± 0.04 (SEM) at 0.1 mm and 1.82 ± 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by humans and the automated methods.
Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath
2010-03-01
This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms, enabling interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normal. The final step estimates vessel centerlines using a ray-casting and vote-accumulation algorithm, enabling topological analysis and efficient validation. Our algorithm lends itself to parallel processing, and yielded an 8× speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 to 28 dB. Separately, when the mesh was decimated to less than 1% of its original size, the EPF was less than 1 voxel/face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively.
Quantifying tidal stream disruption in a simulated Milky Way
NASA Astrophysics Data System (ADS)
Sandford, Emily; Küpper, Andreas H. W.; Johnston, Kathryn V.; Diemand, Jürg
2017-09-01
Simulations of tidal streams show that close encounters with dark matter subhaloes induce density gaps and distortions in the on-sky paths of the streams. Accordingly, observing disrupted streams in the Galactic halo would substantiate the hypothesis that dark matter substructure exists there, while in contrast, observing collimated streams with smoothly varying density profiles would place strong upper limits on the number density and mass spectrum of subhaloes. Here, we examine several measures of stellar stream 'disruption' and their power to distinguish between halo potentials with and without substructure and with different global shapes. We create and evolve a population of 1280 streams on a range of orbits in the Via Lactea II simulation of a Milky Way-like halo, replete with a full mass range of Λcold dark matter subhaloes, and compare it to two control stream populations evolved in smooth spherical and smooth triaxial potentials, respectively. We find that the number of gaps observed in a stellar stream is a poor indicator of the halo potential, but that (i) the thinness of the stream on-sky, (ii) the symmetry of the leading and trailing tails, and (iii) the deviation of the tails from a low-order polynomial path on-sky ('path regularity') distinguish between the three potentials more effectively. We furthermore find that globular cluster streams on low-eccentricity orbits far from the galactic centre (apocentric radius ˜30-80 kpc) are most powerful in distinguishing between the three potentials. If they exist, such streams will shortly be discoverable and mapped in high dimensions with near-future photometric and spectroscopic surveys.
Azuma, Yasu-Taka; Samezawa, Nanako; Nishiyama, Kazuhiro; Nakajima, Hidemitsu; Takeuchi, Tadayoshi
2016-01-01
The muscular layer in the gastrointestinal (GI) tract consists of an inner circular muscular layer and an outer longitudinal muscular layer. Acetylcholine (ACh) is the representative neurotransmitter that causes contractions in the gastrointestinal tracts of most animal species. There are many reports of muscarinic receptor-mediated contraction of longitudinal muscles, but few studies discuss circular muscles. The present study detailed the contractile response in the circular smooth muscles of the mouse ileum. We used small muscle strips (0.2 mm × 1 mm) and large muscle strips (4 × 4 mm) isolated from the circular and longitudinal muscle layers of the mouse ileum to compare contraction responses in circular and longitudinal smooth muscles. The time to peak contractile response to carbamylcholine (CCh) was longer in the small muscle strips (0.2 mm × 1 mm) of circular muscle (5.7 min) than in longitudinal muscle (0.4 min). The time to peak contractile response to CCh in the large muscle strips (4 × 4 mm) was also longer in the circular muscle (3.1 min) than in the longitudinal muscle (1.4 min). Furthermore, a muscarinic M2 receptor antagonist and a gap junction inhibitor significantly delayed the time to peak contraction of the large muscle strips (4 × 4 mm) from the circular muscular layer. Our findings indicate that muscarinic M2 receptors in the circular muscular layer of the mouse ileum exert a previously undocumented function in gut motility via the regulation of gap junctions.
Data Visualization of Item-Total Correlation by Median Smoothing
ERIC Educational Resources Information Center
Yu, Chong Ho; Douglas, Samantha; Lee, Anna; An, Min
2016-01-01
This paper aims to illustrate how data visualization could be utilized to identify errors prior to modeling, using an example with multi-dimensional item response theory (MIRT). MIRT combines item response theory and factor analysis to identify a psychometric model that investigates two or more latent traits. While it may seem convenient to…
The relationship of the concentration of air pollutants to wind direction has been determined by nonparametric regression using a Gaussian kernel. The results are smooth curves with error bars that allow for the accurate determination of the wind direction where the concentrat...
FBEYE: Analyzing Kepler light curves and validating flares
NASA Astrophysics Data System (ADS)
Johnson, Emily; Davenport, James R. A.; Hawley, Suzanne L.
2017-12-01
FBEYE, the "Flares By-Eye" detection suite, is written in IDL and analyzes Kepler light curves and validates flares. It works on any 3-column light curve that contains time, flux, and error. The success of flare identification is highly dependent on the smoothing routine, which may not be suitable for all sources.
Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy
2013-03-01
This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.
Molavi, Ali; Jalali, Aliakbar; Ghasemi Naraghi, Mahdi
2017-07-01
In this paper, based on the passivity theorem, an adaptive fuzzy controller is designed for a class of unknown nonaffine nonlinear systems with arbitrary relative degree and saturation input nonlinearity to track the desired trajectory. The system equations are in normal form and the unforced dynamics may be unstable. As relative degree higher than one is a structural obstacle to the system passivation approach, the backstepping method is used in this paper to circumvent this obstacle and passivate the system step by step. Because of the existence of uncertainty and disturbance in the system, exact passivation and reference tracking cannot be achieved, so approximate passivation (passivation with respect to a set) is obtained to hold the tracking error in a neighborhood around zero. Furthermore, in order to overcome the non-smoothness of the saturation input nonlinearity, a parametric smooth nonlinear function with arbitrary approximation error is used to approximate the input saturation. Finally, simulation results for theoretical and practical examples are given to validate the proposed controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
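A common parametric smooth surrogate for a hard saturation is the scaled tanh; this is one standard choice for illustration, not necessarily the specific function used in the paper:

```python
import math

def hard_sat(u, u_max):
    """Ideal (non-smooth) saturation: clips u to [-u_max, u_max]."""
    return max(-u_max, min(u_max, u))

def smooth_sat(u, u_max):
    """Smooth, everywhere-differentiable surrogate u_max * tanh(u / u_max).

    The worst-case gap to the hard saturation occurs at |u| = u_max and
    equals u_max * (1 - tanh(1)) ~= 0.238 * u_max; sharpening the argument
    (e.g. u_max * tanh(k*u/u_max) with large k) shrinks this error, which
    is how an arbitrary approximation error can be achieved.
    """
    return u_max * math.tanh(u / u_max)
```

The surrogate matches the hard saturation exactly at zero and asymptotically for large inputs, with a bounded gap near the knee.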
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.
Guo, Shengwen; Fei, Baowei
2009-03-27
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A minimal path searching approach for active shape model (ASM)-based segmentation of the lung
NASA Astrophysics Data System (ADS)
Guo, Shengwen; Fei, Baowei
2009-02-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung
Guo, Shengwen; Fei, Baowei
2013-01-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531
High accurate interpolation of NURBS tool path for CNC machine tools
NASA Astrophysics Data System (ADS)
Liu, Qiang; Liu, Huan; Yuan, Songmei
2016-09-01
Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but few methods can at present efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate feedrate fluctuation for any 2nd-degree NURBS curve and can interpolate 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion, considering multiple constraints and scheduling errors via an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
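The feedrate fluctuation being targeted can be illustrated on a simple analytic curve: a first-order parameter increment du = F·T/|C'(u)| produces a chord whose length deviates from the commanded feed per period wherever |C'| varies. An ellipse stands in here for a NURBS path, and all numbers are illustrative:

```python
import numpy as np

def ellipse(u, a=2.0, b=1.0):
    """Parametric curve C(u); a stand-in for a NURBS tool path."""
    return np.array([a * np.cos(u), b * np.sin(u)])

def d_ellipse(u, a=2.0, b=1.0):
    """First derivative C'(u)."""
    return np.array([-a * np.sin(u), b * np.cos(u)])

F, T = 0.05, 1.0   # commanded feedrate (units/s) and interpolation period (s)
u = 0.3

du = F * T / np.linalg.norm(d_ellipse(u))             # first-order increment
chord = np.linalg.norm(ellipse(u + du) - ellipse(u))  # actual step length
fluctuation = (chord - F * T) / (F * T)               # relative feedrate error
```

The nonzero `fluctuation` is exactly the quantity the quartic-equation method is designed to drive toward zero by solving for a better parameter increment.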
ERIC Educational Resources Information Center
Holloway, Susan D.; Domínguez-Pareto, Irenka; Cohen, Shana R.; Kuppermann, Miriam
2014-01-01
Previous studies indicate that families construct daily routines that enable the household to function smoothly and promote family quality of life. However, we know little about how activities are distributed between parents caring for a child with an intellectual disability (ID), particularly in Latino families. To address this gap, we…
Advanced UVOIR Mirror Technology Development for Very Large Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2011-01-01
The objective of this work is to define and initiate a long-term program to mature six inter-linked critical technologies for future UVOIR space telescope mirrors to TRL6 by 2018 so that a viable flight mission can be proposed to the 2020 Decadal Review. (1) Large-Aperture, Low Areal Density, High Stiffness Mirrors: 4 to 8 m monolithic and 8 to 16 m segmented primary mirrors require larger, thicker, stiffer substrates. (2) Support System: Large-aperture mirrors require large support systems to ensure that they survive launch and deploy on orbit in a stress-free and undistorted shape. (3) Mid/High Spatial Frequency Figure Error: A very smooth mirror is critical for producing a high-quality point spread function (PSF) for high-contrast imaging. (4) Segment Edges: Edges impact the PSF for high-contrast imaging applications, contribute to stray light noise, and affect the total collecting aperture. (5) Segment-to-Segment Gap Phasing: Segment phasing is critical for producing a high-quality, temporally stable PSF. (6) Integrated Model Validation: On-orbit performance is determined by mechanical and thermal stability, and future systems require validated performance models. We are pursuing multiple design paths to give the science community the option to enable either a future monolithic or segmented space telescope.
NASA Astrophysics Data System (ADS)
Wu, Wei; Xu, An-Ding; Liu, Hong-Bin
2015-01-01
Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38% for maximum temperature; 0.826 °C, 0.58 °C, and 6.41% for minimum temperature; and 3.44, 2.28, and 3.21% for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in temperature variables were investigated across the study area. Future study might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
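The three agreement metrics used in the record above are simple to compute. A sketch; the observed and interpolated temperatures below are made-up illustrative values:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(obs))))

def mape(obs, pred):
    """Mean absolute percentage error (in %); assumes obs != 0."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs((pred - obs) / obs)) * 100.0)

# Hypothetical station comparison: observed vs. interpolated max temperature (°C).
obs  = np.array([30.1, 28.4, 26.9, 31.2])
pred = np.array([29.5, 28.9, 27.4, 30.6])
```

RMSE penalizes large deviations more strongly than MAE, which is why reporting both (plus the scale-free MAPE) gives a fuller picture of interpolation skill.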
Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A
2018-01-01
Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent at removing motion-induced sharp spikes, baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operating characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
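The two-stage idea (baseline-shift correction followed by spike smoothing) can be sketched on a synthetic trace. The median-filter-based step detector below is a crude stand-in for the spline/tPCA stage described in the paper, and all signal parameters are hypothetical:

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

# Synthetic NIRS-like trace: slow hemodynamic oscillation plus one
# motion-induced spike and one baseline step (all values hypothetical).
t = np.linspace(0.0, 10.0, 501)
clean = np.sin(2 * np.pi * 0.1 * t)
noisy = clean.copy()
noisy[250] += 5.0        # sharp motion spike
noisy[350:] += 2.0       # baseline shift

# Stage 1: locate and remove the baseline step on a spike-resistant
# (median-filtered) view of the signal.
base = medfilt(noisy, kernel_size=11)
jump = int(np.argmax(np.abs(np.diff(base))))
corrected = noisy.copy()
corrected[jump + 1:] -= base[jump + 1] - base[jump]

# Stage 2: attenuate the residual spike with a Savitzky-Golay filter,
# which fits a local cubic polynomial in a sliding window.
recovered = savgol_filter(corrected, window_length=31, polyorder=3)
```

Neither stage alone fixes both defects: the SG filter leaves the 2-unit step untouched, and the step correction leaves the spike, which mirrors the paper's motivation for a hybrid.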
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Money currency availability in Bank Indonesia can be examined through the inflow and outflow of money currency. The objective of this research is to forecast the inflow and outflow of money currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing, based on the state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first examines the hybrid model on simulation data containing trend, seasonal, and calendar-variation patterns. The second applies the hybrid model to forecasting the inflow and outflow of money currency in each RO of BI in East Java. The first study's results indicate that the exponential smoothing model cannot capture the calendar-variation pattern, yielding RMSE values ten times the standard deviation of the error. The second study's results indicate that the hybrid model captures the trend, seasonal, and calendar-variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of money currency in Surabaya, Malang, and Jember, and the outflow of money currency in Surabaya and Kediri. Conversely, the time series regression model yields better results for three variables: the outflow of money currency in Malang and Jember, and the inflow of money currency in Kediri.
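Simple exponential smoothing, the base case of the state-space (ETS) family referenced above, can be sketched in a few lines; the smoothing constant α and the series below are illustrative only:

```python
def ses_forecast(y, alpha):
    """Simple exponential smoothing.

    Level update: l_t = alpha * y_t + (1 - alpha) * l_{t-1}.
    Returns the one-step-ahead forecast made before each observation,
    plus the final level, which is the forecast for the next (unseen) period.
    """
    level = y[0]
    forecasts = [y[0]]            # no history yet: forecast the first value
    for obs in y[1:]:
        forecasts.append(level)   # forecast issued before seeing obs
        level = alpha * obs + (1 - alpha) * level
    return forecasts, level

# Hypothetical monthly inflow series (arbitrary units).
forecasts, next_forecast = ses_forecast([10.0, 12.0, 11.0, 13.0], alpha=0.5)
```

A model of this form has no deterministic regressors, which is why, as the record notes, it cannot capture calendar-variation effects on its own and must be hybridized.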
NASA Astrophysics Data System (ADS)
Andrews, Bartholomew; Möller, Gunnar
2018-01-01
We study the stability of composite fermion fractional quantum Hall states in Harper-Hofstadter bands with Chern number |C| > 1. From composite fermion theory, states are predicted to be found at filling factors ν = r/(kr|C| + 1), r ∈ ℤ, with k = 1 for bosons and k = 2 for fermions. Here, we closely analyze these series in both cases, with contact interactions for bosons and nearest-neighbor interactions for (spinless) fermions. In particular, we analyze how the many-body gap scales as the bands are tuned to the effective continuum limit of Chern number |C| bands, realized near flux density n_φ = 1/|C|. Near these points, the Hofstadter model requires large magnetic unit cells that yield bands with perfectly flat dispersion and Berry curvature. We exploit the known scaling of energies in the effective continuum limit in order to maintain a fixed square aspect ratio in finite-size calculations. Based on exact diagonalization calculations of the band-projected Hamiltonian for these lattice geometries, we show that for both bosons and fermions, the vast majority of finite-size spectra yield the ground-state degeneracy predicted by composite fermion theory. For the chosen interactions, we confirm that states with filling factor ν = 1/(k|C| + 1) are the most robust and yield a clear gap in the thermodynamic limit. For bosons with contact interactions in |C| = 2 and |C| = 3 bands, our data for the composite fermion states are compatible with a finite gap in the thermodynamic limit. We also report new evidence for gapped incompressible states stabilized for fermions with nearest-neighbor interactions in |C| > 1 bands. For cases with a clear gap, we confirm that the thermodynamic limit commutes with the effective continuum limit within finite-size error bounds.
We analyze the nature of the correlation functions for the Abelian composite fermion states and find that the correlation functions for |C| > 1 states are smooth functions for positions separated by |C| sites along both axes, giving rise to |C|² sheets, some of which can be related by inversion symmetry. We also comment on two cases associated with a bosonic integer quantum Hall effect (BIQHE): for ν = 2 in |C| = 1 bands, we find a strong competing state with a higher ground-state degeneracy, so no clear BIQHE is found in the band-projected Hofstadter model; for ν = 1 in |C| = 2 bands, we present additional data confirming the existence of a BIQHE state.
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1996-01-01
We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.
NASA Technical Reports Server (NTRS)
Russell, J. M., III; Drayson, S. R.
1972-01-01
Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.
Towards fault tolerant adiabatic quantum computation.
Lidar, Daniel A
2008-04-25
I show how to protect adiabatic quantum computation (AQC) against decoherence and certain control errors, using a hybrid methodology involving dynamical decoupling, subsystem and stabilizer codes, and energy gaps. Corresponding error bounds are derived. As an example, I show how to perform decoherence-protected AQC against local noise using at most two-body interactions.
ERIC Educational Resources Information Center
Mirandola, C.; Paparella, G.; Re, A. M.; Ghetti, S.; Cornoldi, C.
2012-01-01
Enhanced semantic processing is associated with increased false recognition of items consistent with studied material, suggesting that children with poor semantic skills could produce fewer false memories. We examined whether memory errors differed in children with Attention Deficit/Hyperactivity Disorder (ADHD) and controls. Children viewed 18…
NASA Astrophysics Data System (ADS)
Park, Jisang
In this dissertation, we investigate MIMO stability margin inference of a large number of controllers using pre-established stability margins of a small number of nu-gap-wise adjacent controllers. The generalized stability margin and the nu-gap metric are inherently able to handle MIMO system analysis without the necessity of repeating multiple channel-by-channel SISO analyses. This research consists of three parts: (i) development of a decision support tool for inference of the stability margin, (ii) computational considerations for yielding the maximal stability margin with the minimal nu-gap metric in a less conservative manner, and (iii) experiment design for estimating the generalized stability margin with an assured error bound. A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability and a certain level of margin of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for an infinite set. In doing this, the nu-gap metric will be the main tool. We provide a theorem restricting a radius of a ball in the parameter space so that the controller can guarantee a prescribed level of stability and performance if parameters of the controllers are contained in the ball. Computational examples are given, including one of certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations. 
We consider methods for the computation of maximal MIMO stability margins b_(P̂,C), minimal nu-gap metrics delta_nu, and the maximal difference between these two values, through the use of scaling and weighting functions. We propose simultaneous scaling selections that attempt to maximize the generalized stability margin and minimize the nu-gap. The minimization of the nu-gap by scaling involves a non-convex optimization; we modify the XY-centering algorithm to handle this non-convexity. This is done for applications in controller certification. Estimating the generalized stability margin with an accurate error bound has a significant impact on controller certification. We analyze an error bound of the generalized stability margin as the infinity norm of the MIMO empirical transfer function estimate (ETFE). Input signal design to reduce the error of the estimate is also studied. We suggest running the system for a certain amount of time prior to recording each output data set; the assured upper bound of the estimation error can then be tuned by the length of this pre-experiment.
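As a concrete illustration of the nu-gap machinery the abstract relies on, the sketch below approximates the SISO nu-gap as the supremum of the pointwise chordal distance over a frequency grid. This is a minimal numerical sketch, assuming the winding-number condition of the nu-gap definition holds (it is not checked); the plants P1, P2 and the grid are illustrative, not from the dissertation.

```python
import numpy as np

def chordal(p1, p2):
    """Pointwise chordal distance between two frequency responses."""
    return np.abs(p1 - p2) / (np.sqrt(1.0 + np.abs(p1) ** 2)
                              * np.sqrt(1.0 + np.abs(p2) ** 2))

def nu_gap_siso(P1, P2, w):
    """Grid approximation of the SISO nu-gap: the sup of the chordal
    distance, valid only when the winding-number condition is met
    (assumed here, not verified)."""
    s = 1j * w
    return float(np.max(chordal(P1(s), P2(s))))

# two nearby first-order plants (illustrative)
w = np.logspace(-3, 3, 2000)
P1 = lambda s: 1.0 / (s + 1.0)
P2 = lambda s: 1.0 / (s + 1.2)
d = nu_gap_siso(P1, P2, w)
```

A closed loop stabilized by C with generalized margin b_(P1,C) remains stable for any plant P2 with nu-gap below that margin, which is the inference principle used for certifying sets of controllers.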
Applicability of AgMERRA Forcing Dataset to Fill Gaps in Historical in-situ Meteorological Data
NASA Astrophysics Data System (ADS)
Bannayan, M.; Lashkari, A.; Zare, H.; Asadi, S.; Salehnia, N.
2015-12-01
Integrated assessment studies of food production systems use crop models to simulate the effects of climate and socio-economic changes on food security, and climate forcing data is one of the key inputs to these models. This study evaluated the performance of the AgMERRA climate forcing dataset in filling gaps in historical in-situ meteorological data for different climatic regions of Iran. The AgMERRA dataset was intercompared with in-situ observations of daily maximum and minimum temperature and precipitation over the 1980-2010 period, using root mean square error (RMSE), mean absolute error (MAE) and mean bias error (MBE), for 17 stations in four climatic regions: humid and moderate, cold, dry and arid, and hot and humid. Moreover, the probability distribution function and cumulative distribution function were compared between model and observed data. The measures of agreement between AgMERRA and observed data demonstrated small errors in the model data for all stations. Except at stations located in cold regions, the model under-predicted daily maximum temperature and precipitation, although not significantly. In addition, the probability distribution function and cumulative distribution function showed the same trend for all stations. Therefore, the AgMERRA dataset is sufficiently reliable to fill gaps in historical observations across the climatic regions of Iran, and it could also serve as a basis for future climate scenarios.
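The agreement measures used in the study (RMSE, MAE, MBE) are standard and easy to reproduce; a minimal NumPy sketch with illustrative station-vs-model data follows.

```python
import numpy as np

def mae(obs, mod):
    """Mean absolute error."""
    return float(np.mean(np.abs(mod - obs)))

def rmse(obs, mod):
    """Root mean square error."""
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def mbe(obs, mod):
    """Mean bias error; negative values indicate model under-prediction."""
    return float(np.mean(mod - obs))

# illustrative daily maximum temperatures (deg C): station vs. reanalysis
obs = np.array([31.0, 32.5, 30.2, 29.8])
mod = np.array([30.5, 32.0, 30.0, 30.1])
```

Note that MBE keeps the sign of the error, which is how the study detects systematic under-prediction, while MAE and RMSE only measure its magnitude.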
NASA Technical Reports Server (NTRS)
Gubarev, Mikhail V.; Kilaru, Kirenmayee; Ramsey, Brian D.
2009-01-01
We are investigating differential deposition as a way of correcting small figure errors inside full-shell grazing-incidence x-ray optics. The optics in our study are fabricated using the electroformed-nickel-replication technique, and the figure errors arise from fabrication errors in the mandrel, from which the shells are replicated, as well as errors induced during the electroforming process. Combined, these give sub-micron-scale figure deviations which limit the angular resolution of the optics to approx. 10 arcsec. Sub-micron figure errors can be corrected by selectively depositing (physical vapor deposition) material inside the shell. The requirements for this filler material are that it must not degrade the ultra-smooth surface finish necessary for efficient x-ray reflection (approx. 5 A rms), and must not be highly stressed. In addition, a technique must be found to produce well controlled and defined beams within highly constrained geometries, as some of our mirror shells are less than 3 cm in diameter.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. But in practice a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors into the reconstructed object wavefront. Usually the least-squares method with iterations, which is time-consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. The method uses only simple mathematical operations, avoiding the least-squares equations required by most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by two orders of magnitude.
Estimating Uncertainties in the Multi-Instrument SBUV Profile Ozone Merged Data Set
NASA Technical Reports Server (NTRS)
Frith, Stacey; Stolarski, Richard
2015-01-01
The MOD data set is uniquely qualified for use in long-term ozone analysis because of its long record, high spatial coverage, and consistent instrument design and algorithm. The estimated MOD uncertainty term significantly increases the uncertainty over the statistical error alone. Trends in the post-2000 period are generally positive in the upper stratosphere, but only significant at 1-1.6 hPa. Remaining uncertainties not yet included in the Monte Carlo model are: smoothing error (1 from 10 to 1 hPa); relative calibration uncertainty between N11 and N17; and seasonal cycle differences between SBUV records.
Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry
NASA Astrophysics Data System (ADS)
Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua
2018-04-01
Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurements. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even though a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove the artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.
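The N-step phase-shifting algorithm underlying PSP can be stated compactly. The snippet below implements the standard least-squares N-step phase retrieval for fringes of the form I_n = a + b*cos(phi + 2*pi*n/N); it is the generic textbook formulation, not the paper's motion-compensated variant, and the synthetic fringe values are illustrative.

```python
import numpy as np

def n_step_phase(frames):
    """Wrapped phase from N >= 3 phase-shifted fringe intensities
    I_n = a + b*cos(phi + 2*pi*n/N) (standard N-step algorithm)."""
    N = len(frames)
    delta = 2.0 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(frames, delta))
    den = sum(I * np.cos(d) for I, d in zip(frames, delta))
    return np.arctan2(-num, den)   # phi wrapped to (-pi, pi]

# synthetic four-step fringes with ground-truth phi = 0.7 rad
phi = 0.7
frames = [1.0 + 0.5 * np.cos(phi + 2.0 * np.pi * n / 4) for n in range(4)]
recovered = float(n_step_phase(frames))
```

The motion ripples analyzed in the paper arise precisely because object motion perturbs the assumed phase shifts 2*pi*n/N between frames, biasing the arctangent above.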
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1975-01-01
Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.
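The core square-root information mechanics described above can be illustrated with a single measurement update. The sketch below triangularizes the stacked prior/measurement array with a QR factorization, the orthogonal-transformation step at the heart of SRIF; the toy prior and measurement values are illustrative, not from the Mariner or Viking missions.

```python
import numpy as np

def srif_update(R, z, A, y):
    """Square-root information measurement update: the prior is the
    information pair (R, z) with R x ~= z, and A x ~= y are whitened
    measurements.  A QR factorization (an orthogonal transformation)
    triangularizes the stacked system, as in Bierman's formulation."""
    n = R.shape[1]
    stacked = np.vstack([np.column_stack([R, z]),
                         np.column_stack([A, y])])
    _, T = np.linalg.qr(stacked)
    return T[:n, :n], T[:n, n]   # updated (R, z)

# toy prior (identity information, zero estimate) and two measurements
R0 = np.eye(2)
z0 = np.zeros(2)
A = np.array([[1.0, 0.0], [1.0, 1.0]])
y = np.array([1.0, 2.0])
Rn, zn = srif_update(R0, z0, A, y)
x_hat = np.linalg.solve(Rn, zn)
```

Because the update works on a square root R of the information matrix, it preserves symmetry and positive-definiteness by construction, which is the numerical advantage over covariance-form filters.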
Multi-PON access network using a coarse AWG for smooth migration from TDM to WDM PON
NASA Astrophysics Data System (ADS)
Shachaf, Y.; Chang, C.-H.; Kourtessis, P.; Senior, J. M.
2007-06-01
An interoperable access network architecture based on a coarse array waveguide grating (AWG) is described, displaying dynamic wavelength assignment to manage the network load across multiple PONs. The multi-PON architecture utilizes the coarse Gaussian channels of an AWG to facilitate scalability and a smooth migration path between TDM and WDM PONs. Network simulations of a cross-operational protocol platform confirmed successful routing of individual PON clusters through the 7 nm-wide passband windows of the AWG. Furthermore, the polarization-dependent wavelength shift and phase errors of the device proved not to impair the routing performance. Optical transmission tests at 2.5 Gbit/s for distances up to 20 km are demonstrated.
Sources of Error in Substance Use Prevalence Surveys
Johnson, Timothy P.
2014-01-01
Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511
Does Menstruation Explain Gender Gaps in Work Absenteeism?
ERIC Educational Resources Information Center
Herrmann, Mariesa A.; Rockoff, Jonah E.
2012-01-01
Ichino and Moretti (2009) find that menstruation may contribute to gender gaps in absenteeism and earnings, based on evidence that absences of young female Italian bank employees follow a 28-day cycle. We find this evidence is not robust to the correction of coding errors or small changes in specification, and we find no evidence of increased…
Tatsumi, Daisaku; Nakada, Ryosei; Ienaga, Akinori; Yomoda, Akane; Inoue, Makoto; Ichida, Takao; Hosono, Masako
2012-01-01
The tolerance of the backup diaphragm (Backup JAW) setting in Elekta linacs is specified as 2 mm in the AAPM TG-142 report. However, no tolerance or quality assurance procedure was provided for volumetric modulated arc therapy (VMAT). This paper describes the positional accuracy and quality assurance procedure of the Backup JAWs required for VMAT. It was found that the gap-width error of the Backup JAW in a sliding-window test needed to be less than 1.5 mm for prostate VMAT delivery. It was also confirmed that the gap widths had been maintained with an error of 0.2 mm over the past year.
Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.
Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C
2015-03-01
The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well-known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved smooth deformation fields, it cannot properly deal with the discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and the smooth part of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons with a bilateral filter. In contrast to the Gaussian, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to classical Gaussian filtering. By proper adjustment of two tunable parameters, one can obtain more realistic deformations in the presence of discontinuities. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the target registration error (TRE) for the well-known POPI dataset. Despite the increased computational complexity, the improved registration result is justified in particular in abdominal data sets, where discontinuities often appear due to sliding organ motion. Copyright © 2014 Elsevier Ltd. All rights reserved.
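The difference between the two regularizers is easy to demonstrate in one dimension. The didactic sketch below applies a Gaussian filter and a bilateral filter to a step-shaped displacement field (mimicking a sliding-organ boundary); it is not the authors' registration code, and the two tunable parameters appear as sigma_s (spatial) and sigma_r (range).

```python
import numpy as np

def gaussian_filter1d(u, sigma):
    """Plain Gaussian smoothing (the classical Demons regularizer)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    w = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return np.convolve(u, w / w.sum(), mode="same")

def bilateral_filter1d(u, sigma_s, sigma_r):
    """Bilateral smoothing: spatial weight times a range weight on
    displacement differences, so large jumps are preserved."""
    r = int(3 * sigma_s)
    out = np.empty_like(u)
    for i in range(len(u)):
        lo, hi = max(0, i - r), min(len(u), i + r + 1)
        x = np.arange(lo, hi) - i
        ws = np.exp(-x ** 2 / (2.0 * sigma_s ** 2))
        wr = np.exp(-(u[lo:hi] - u[i]) ** 2 / (2.0 * sigma_r ** 2))
        w = ws * wr
        out[i] = np.sum(w * u[lo:hi]) / np.sum(w)
    return out

# step-shaped displacement field across a sliding boundary
u = np.concatenate([np.zeros(20), np.ones(20)])
g = gaussian_filter1d(u, sigma=2.0)
b = bilateral_filter1d(u, sigma_s=2.0, sigma_r=0.1)
```

The Gaussian blurs the step, while the bilateral range weight suppresses averaging across the jump, which is exactly the discontinuity-preserving behavior the paper seeks.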
The Feasibility of Using Computer and Internet in Teaching Family Education for the 8th Grade Class
ERIC Educational Resources Information Center
Alluhaydan, Nuwayyir Saleh F.
2016-01-01
This paper is just a sample template for the prospective authors of IISTE. Over the decades, the concepts of holons and holonic systems have been adopted in many research fields, but they are scarcely attempted on labour planning. A literature gap exists, thus motivating the author to come up with a holonic model that uses exponential smoothing to…
ERIC Educational Resources Information Center
Carjuzaa, Jioanna; Baldwin, Anna E; Munson, Michael
2015-01-01
The espoused foundation of U.S. society, "E pluribus unum" (out of many, one), is based on the belief that this nation should simultaneously support pluralism and promote unity. The road to making this ideal a reality, however, has not always been smooth. The ever-widening achievement gap highlights how this discordance plays out in our…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-12
... impact of eliminating the correction window from the electronic grant application submission process on... process a temporary error correction window to ensure a smooth and successful transition for applicants. This window provides applicants a period of time beyond the grant application due date to correct any...
Thermoplastic Ribbon-Ply Bonding Model
NASA Technical Reports Server (NTRS)
Hinkley, Jeffrey A.; Marchello, Joseph M.; Messier, Bernadette C.
1996-01-01
The aim of the present work was to identify key variables in rapid weldbonding of thermoplastic tow (ribbon) and their relationship to matrix polymer properties and to ribbon microstructure. Theoretical models for viscosity, establishment of ply-ply contact, instantaneous (Velcro) bonding, molecular interdiffusion (healing), void growth suppression, and gap filling were reviewed and synthesized. Consideration of the theoretical bonding mechanisms and length scales and of the experimental weld/peel data allows the prediction of such quantities as the time and pressure required to achieve good contact between a ribbon and a flat substrate, the time dependence of bond strength, pressures needed to prevent void growth from dissolved moisture, and conditions for filling gaps and smoothing overlaps.
Respiration rate detection based on intensity modulation using plastic optical fiber
NASA Astrophysics Data System (ADS)
Anwar, Zawawi Mohd; Ziran Nurul Sufia, Nor; Hadi, Manap
2017-11-01
This paper presents the implementation of respiration rate measurement via a simple intensity-based optical fiber sensor. The breathing rate is measured from the light intensity variation caused by longitudinal gap changes between two separated fibers. In order to monitor the breathing rate continuously, the output from the photodetector conditioning circuit is connected to a low-cost Arduino kit. At the sensing point, two optical fiber cables are positioned in series with a small gap and fitted inside a transparent plastic tube. To ensure smooth movement of the fiber during inhale and exhale processes, as well as to maintain the gap during idle conditions, the fiber is attached firmly to a stretchable bandage. This study shows that this simple fiber arrangement can detect respiration activity, which can be critical for patient monitoring.
NASA Astrophysics Data System (ADS)
Pikus, F. G.; Efros, A. L.
1993-06-01
A two-dimensional electron liquid (TDEL), subjected to a smooth random potential, is studied in the regime of the fractional quantum Hall effect. An analytical theory of the nonlinear screening is presented for the case when the fractional gap is much less than the magnitude of the unscreened random potential. In this "narrow-gap approximation" (NGA), we calculate the electron density distribution function, the fraction of the TDEL which is in the incompressible state, and the thermodynamic density of states. The magnetocapacitance is calculated to compare with the recent experiments. The NGA is found to be not accurate enough to describe the data. The results for larger fractional gaps are obtained by computer modeling. To fit the recent experimental data we have also taken into account the anyon-anyon interaction in the vicinity of a fractional singularity.
Transport across nanogaps using self-consistent boundary conditions
NASA Astrophysics Data System (ADS)
Biswas, D.; Kumar, R.
2012-06-01
Charge particle transport across nanogaps is studied theoretically within the Schrodinger-Poisson mean field framework. The determination of self-consistent boundary conditions across the gap forms the central theme, in order to allow for realistic interface potentials (such as metal-vacuum) which are smooth at the boundary and do not abruptly assume a constant value at the interface. It is shown that a semiclassical expansion of the transmitted wavefunction leads to approximate but self-consistent boundary conditions without assuming any specific form of the potential beyond the gap. Neglecting the exchange and correlation potentials, the quantum Child-Langmuir law is investigated. It is shown that at zero injection energy, the quantum limiting current density (J_c) obeys the local scaling law J_c ~ V_g^alpha / D^(5-2alpha) with the gap separation D and voltage V_g. The exponent alpha > 1.1, with alpha -> 3/2 in the classical regime of small de Broglie wavelengths.
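As a consistency check on the reported scaling, substituting the classical exponent alpha = 3/2 recovers the familiar gap-separation dependence:

```latex
% Local scaling law from the abstract, with its classical limit:
\[
  J_c \;\sim\; \frac{V_g^{\alpha}}{D^{\,5-2\alpha}},
  \qquad
  \alpha \to \tfrac{3}{2}
  \;\Longrightarrow\;
  J_c \;\sim\; \frac{V_g^{3/2}}{D^{2}},
\]
% i.e. the V^{3/2}/D^2 dependence of the classical Child--Langmuir law.
```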
System and method for smoothing a salient rotor in electrical machines
Raminosoa, Tsarafidy; Alexander, James Pellegrino; El-Refaie, Ayman Mohamed Fawzi; Torrey, David A.
2016-12-13
An electrical machine exhibiting reduced friction and windage losses is disclosed. The electrical machine includes a stator and a rotor assembly configured to rotate relative to the stator, wherein the rotor assembly comprises a rotor core including a plurality of salient rotor poles that are spaced apart from one another around an inner hub such that an interpolar gap is formed between each adjacent pair of salient rotor poles, with an opening being defined by the rotor core in each interpolar gap. Electrically non-conductive and non-magnetic inserts are positioned in the gaps formed between the salient rotor poles, with each of the inserts including a mating feature formed an axially inner edge thereof that is configured to mate with a respective opening being defined by the rotor core, so as to secure the insert to the rotor core against centrifugal force experienced during rotation of the rotor assembly.
Crainiceanu, Ciprian M.; Caffo, Brian S.; Di, Chong-Zhi; Punjabi, Naresh M.
2009-01-01
We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS. PMID:20057925
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
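The density-aggregation idea behind the first contribution can be conveyed with a fixed-bandwidth Gaussian KDE; the paper's "super kernel density estimation" additionally adapts the kernel per point to resolve overlap, which this sketch omits. All data and parameters here are illustrative.

```python
import numpy as np

def kde2d(points, grid_x, grid_y, bandwidth):
    """Fixed-bandwidth 2-D Gaussian kernel density estimate on a grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx)
    for px, py in points:
        density += np.exp(-((gx - px) ** 2 + (gy - py) ** 2)
                          / (2.0 * bandwidth ** 2))
    # normalize so the density integrates to one
    return density / (2.0 * np.pi * bandwidth ** 2 * len(points))

# a small burst of streaming points near the origin
points = [(0.0, 0.0), (0.1, 0.2), (-0.1, -0.1)]
grid = np.linspace(-5.0, 5.0, 201)
density = kde2d(points, grid, grid, bandwidth=0.5)
cell = (grid[1] - grid[0]) ** 2
total = float(density.sum() * cell)
```

Interpolating between two such density grids frame by frame is the starting point for the morphing algorithm of the second contribution.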
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation over a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
Rhesus Monkeys Behave As If They Perceive the Duncker Illusion
Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.
2008-01-01
The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is an increase in the variance of demand fluctuations from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, simulations can be developed; previous studies have simulated the bullwhip effect with mathematical equation models, information control models, computer programs, and more. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip-effect ratio arising from differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving-average period, smoothing parameter, signalling factor, and safety stock factor. The simulations showed that decreasing the moving-average period, increasing the smoothing parameter, or increasing the signalling factor can create a bigger bullwhip-effect ratio, whereas the safety stock factor had no impact on the bullwhip effect.
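The direction of the smoothing-parameter effect can be reproduced in a few lines. The sketch below simulates an order-up-to policy driven by exponential-smoothing forecasts and returns the bullwhip ratio Var(orders)/Var(demand); it is an illustrative model, not the Bullwhip Explorer spreadsheet, and the demand parameters are arbitrary.

```python
import numpy as np

def bullwhip_ratio(alpha, lead_time, n=20000, seed=0):
    """Var(orders)/Var(demand) for an order-up-to policy whose demand
    forecast is exponential smoothing with parameter alpha."""
    rng = np.random.default_rng(seed)
    demand = rng.normal(100.0, 10.0, n)
    forecast = np.empty(n)
    forecast[0] = demand[0]
    for t in range(1, n):
        forecast[t] = alpha * demand[t] + (1.0 - alpha) * forecast[t - 1]
    base_stock = lead_time * forecast           # order-up-to level
    orders = demand[1:] + np.diff(base_stock)   # replenishment orders
    return float(np.var(orders) / np.var(demand))
```

With i.i.d. demand, a larger alpha makes the forecast chase noise more aggressively, so the ratio rises above one, the amplification the study reports for the smoothing parameter.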
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, additive decomposition, multiplicative decomposition, Loess decomposition and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained with the Box-Jenkins method were the autoregressive integrated moving average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods on the basis of root mean squared error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. The Loess decomposition model was therefore used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
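For readers who want to reproduce this kind of RMSE-based model comparison, the sketch below implements one-step-ahead forecasts for simple exponential smoothing and Holt's linear trend method on a synthetic trending series; the data and smoothing constants are illustrative, and the decomposition and ARIMA models from the paper are not reproduced here.

```python
import numpy as np

def ses(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1.0 - alpha) * f[t - 1]
    return f

def holt(y, alpha, beta):
    """One-step-ahead forecasts from Holt's linear (trend) method."""
    f = np.empty(len(y))
    f[0] = y[0]
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        f[t] = level + trend                                   # forecast
        new_level = alpha * y[t] + (1.0 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1.0 - beta) * trend
        level = new_level
    return f

def rmse(y, f):
    return float(np.sqrt(np.mean((y - f) ** 2)))

# synthetic trending hotspot-like series (illustrative)
rng = np.random.default_rng(1)
t = np.arange(100.0)
y = 10.0 + 2.0 * t + rng.normal(0.0, 0.5, t.size)
```

On a trending series, simple smoothing lags behind by roughly one trend step per unit of (1-alpha)/alpha, so Holt's method should win the RMSE comparison, mirroring the paper's method-selection logic.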
Rigidity controllable polishing tool based on magnetorheological effect
NASA Astrophysics Data System (ADS)
Wang, Jia; Wan, Yongjian; Shi, Chunyan
2012-10-01
A stable and predictable material removal function (MRF) plays a crucial role in computer controlled optical surfacing (CCOS). For physical contact polishing, the stability of the MRF depends on intimate contact between the polishing interface and the workpiece. Rigid laps maintain this function when polishing spherical surfaces, whose curvature does not vary with position on the surface. Such rigid laps provide a smoothing effect for mid-spatial-frequency errors, but cannot be used on aspherical surfaces, as they would destroy the surface figure. Flexible tools such as magnetorheological fluid or air bonnets conform to the surface [1], but they lack rigidity and provide little natural smoothing effect. We present a rigidity-controllable polishing tool based on a magnetorheological elastomer (MRE) medium [2]. It can both conform to an aspheric surface and maintain a natural smoothing effect; moreover, its rigidity can be controlled by the magnetic field. This paper presents the design, analysis, and stiffness-variation mechanism model of this polishing tool [3].
Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David
2012-08-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5-21% and for strand elimination by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates.
Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.
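The separation of total error into a deterministic (discretization) part and a statistical (sampling) part can be illustrated with a toy Monte Carlo estimator. This is a sketch under our own assumptions, not the NASA package's interface: `disc_bound` stands in for an a posteriori bound on the per-realization numerical error, and the sampling term is a standard normal-approximation confidence half-width.

```python
import numpy as np

def mc_mean_with_bounds(q_h, n_samples, disc_bound, seed=0, z=1.96):
    """Estimate E[q] over a random parameter, reporting a combined bound:
    |E[q] - mean| <~ disc_bound + z * stderr.
    q_h: callable mapping a parameter sample to a numerically computed output.
    disc_bound: assumed a posteriori bound on per-realization numerical error."""
    rng = np.random.default_rng(seed)
    xs = rng.standard_normal(n_samples)       # toy parameter distribution
    vals = np.array([q_h(x) for x in xs])
    mean = vals.mean()
    stderr = vals.std(ddof=1) / np.sqrt(n_samples)
    return mean, disc_bound + z * stderr
```

For q(x) = x² with x standard normal, the true statistic is E[q] = 1, so the reported interval should bracket 1 for reasonable sample sizes.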
Effect of gap detection threshold on consistency of speech in children with speech sound disorder.
Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz
2017-02-01
The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test was used for this study, which is a valid test comprising six syllables with inter-stimulus intervals of 20-300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech is a representation of inconsistency in auditory perception, which is caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sheldon, Rachel E.; Mashayamombe, Chipo; Shi, Shao-Qing; Garfield, Robert E.; Shmygol, Anatoly; Blanks, Andrew M.; van den Berg, Hugo A.
2014-01-01
The smooth muscle cells of the uterus contract in unison during delivery. These cells achieve coordinated activity via electrical connections called gap junctions which consist of aggregated connexin proteins such as connexin43 and connexin45. The density of gap junctions governs the excitability of the myometrium (among other factors). An increase in gap junction density occurs immediately prior to parturition. We extend a mathematical model of the myometrium by incorporating the voltage-dependence of gap junctions that has been demonstrated in the experimental literature. Two functional subtypes exist, corresponding to systems with predominantly connexin43 and predominantly connexin45, respectively. Our simulation results indicate that the gap junction protein connexin45 acts as a negative modulator of uterine excitability, and hence, activity. A network with a higher proportion of connexin45 relative to connexin43 is unable to excite every cell. Connexin45 has much more rapid gating kinetics than connexin43 which we show limits the maximum duration of a local burst of activity. We propose that this effect regulates the degree of synchronous excitation attained during a contraction. Our results support the hypothesis that as labour approaches, connexin45 is downregulated to allow action potentials to spread more readily through the myometrium. PMID:25401181
Directionally solidified Al2O3/GAP eutectic ceramics by micro-pulling-down method
NASA Astrophysics Data System (ADS)
Cao, Xue; Su, Haijun; Guo, Fengwei; Tan, Xi; Cao, Lamei
2016-11-01
We report a novel route to prepare directionally solidified (DS) Al2O3/GAP eutectic ceramics by the micro-pulling-down (μ-PD) method. The eutectic crystallization, microstructural characteristics and evolution, and mechanical properties were investigated in detail. The results showed that Al2O3/GAP eutectic composites can be successfully fabricated by the μ-PD method, possessing a smooth surface, full density and large crystal size (maximum size: φ90 mm × 20 mm). The as-solidified Al2O3/GAP eutectic presented a combination of "Chinese script" and elongated colony microstructures with a complex regular structure. Inside the colonies, rod-type or lamellar-type eutectic microstructures with ultra-fine GAP surrounded by the Al2O3 matrix were observed. At an appropriate solidification rate, the binary eutectic exhibited a typical DS irregular eutectic structure of "Chinese script" consisting of an interpenetrating network of α-Al2O3 and GAP phases without any other phases. The interphase spacing was refined to 1-2 µm, and the irregular microstructure led to an outstanding Vickers hardness of 17.04 GPa and a fracture toughness of 6.3 MPa·m^1/2 at room temperature.
Koch, Sven H; Weir, Charlene; Haar, Maral; Staggers, Nancy; Agutter, Jim; Görges, Matthias; Westenskow, Dwayne
2012-01-01
Fatal errors can occur in intensive care units (ICUs). Researchers claim that information integration at the bedside may improve nurses' situation awareness (SA) of patients and decrease errors. However, it is unclear which information should be integrated and in what form. Our research uses the theory of SA to analyze the type of tasks, and their associated information gaps. We aimed to provide recommendations for integrated, consolidated information displays to improve nurses' SA. Systematic observations methods were used to follow 19 ICU nurses for 38 hours in 3 clinical practice settings. Storyboard methods and concept mapping helped to categorize the observed tasks, the associated information needs, and the information gaps of the most frequent tasks by SA level. Consensus and discussion of the research team was used to propose recommendations to improve information displays at the bedside based on information deficits. Nurses performed 46 different tasks at a rate of 23.4 tasks per hour. The information needed to perform the most common tasks was often inaccessible, difficult to see at a distance or located on multiple monitoring devices. Current devices at the ICU bedside do not adequately support a nurse's information-gathering activities. Medication management was the most frequent category of tasks. Information gaps were present at all levels of SA and across most of the tasks. Using a theoretical model to understand information gaps can aid in designing functional requirements. Integrated information that enhances nurses' Situation Awareness may decrease errors and improve patient safety in the future.
Fractional-dimensional Child-Langmuir law for a rough cathode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zubair, M., E-mail: muhammad-zubair@sutd.edu.sg; Ang, L. K., E-mail: ricky-ang@sutd.edu.sg
This work presents a self-consistent model of space charge limited current transport in a gap comprising free space and fractional-dimensional space (F^α), where α is the fractional dimension in the range 0 < α ≤ 1. In this approach, a closed-form fractional-dimensional generalization of the Child-Langmuir (CL) law is derived in the classical regime, which is then used to model the effect of cathode surface roughness in a vacuum diode by replacing the rough cathode with a smooth cathode placed in a layer of effective fractional-dimensional space. A smooth transition of the CL law from fractional-dimensional to integer-dimensional space is also demonstrated. The model has been validated by comparing results with an experiment.
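The integer-dimensional limit (α → 1) that the fractional generalization must recover is the classical Child-Langmuir law, J = (4ε₀/9)·√(2e/m)·V^(3/2)/d². A sketch of that classical limit only (the fractional-dimensional form itself is not reproduced here); the constants are CODATA values:

```python
import math

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19 # elementary charge, C
M_E = 9.1093837015e-31     # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Classical (integer-dimensional) space-charge-limited current density
    J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2, in A/m^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2
```

The characteristic V^(3/2) scaling (quadrupling the voltage multiplies J by 8) is the signature the fractional model perturbs via the effective dimension α.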
NASA Astrophysics Data System (ADS)
Yanju, Wei; Jingyu, Wang; Chongwei, An; Hequn, Li; Xiaomu, Wen; Binshuo, Yu
2017-01-01
With ε-2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (CL-20) and glycidyl azide polymer (GAP) as the solid filler and binder, respectively, GAP/CL-20-based compound explosives were designed and prepared. Using micro injection charge technology, the compound explosives were packed into small grooves to explore their application in a small-sized initiation network. The detonation reliability, detonation velocity, mechanical sensitivity, shock sensitivity, and brisance of the explosive were measured and analyzed. The results show that when the solid content of CL-20 is 82 wt%, the explosive charged in the groove has a smooth surface from a macroscopic view. From a microscopic view, a coarse surface is observed in which many CL-20 particles are bonded by the GAP binder. The GAP/CL-20-based explosive charge successfully generates detonation waves in a groove larger than 0.6 mm × 0.6 mm. When the charge density in the groove is 1.68 g·cm^-3 (90% of the theoretical maximum density), the detonation velocity reaches 7290 m·s^-1. Moreover, this kind of explosive is characterized by low impact and shock sensitivity.
Thin film GaP for solar cell application
NASA Astrophysics Data System (ADS)
Morozov, I. A.; Gudovskikh, A. S.; Kudryashov, D. A.; Nikitina, E. V.; Kleider, J.-P.; Myasoedov, A. V.; Levitskiy, V.
2016-08-01
A new approach to silicon-based heterostructure technology, consisting of the growth of III-V compounds (GaP) on a silicon substrate by low-temperature plasma-enhanced atomic layer deposition (PE-ALD), is proposed. The basic idea of the method is to use a time modulation of the growth process, i.e. time-separated stages of atom or precursor transport to the growing surface, migration over the surface, and crystal lattice relaxation for each monolayer. The GaP layers were grown on Si substrates by PE-ALD at 350°C with trimethylgallium (TMG) and phosphine (PH3) as sources of group III and group V atoms, respectively. Scanning and transmission electron microscopy demonstrate that the grown GaP films have a homogeneous amorphous structure, a smooth surface and a sharp GaP/Si interface. The GaP/Si heterostructures obtained by PE-ALD compare favourably to those conventionally grown by molecular beam epitaxy (MBE). Indeed, spectroscopic ellipsometry measurements indicate similar interband optical absorption, while photoluminescence measurements indicate a higher charge carrier effective lifetime. The better passivation properties of GaP layers grown by PE-ALD demonstrate the potential of this technology for new silicon-based photovoltaic heterostructures.
Reverse engineering the gap gene network of Drosophila melanogaster.
Perkins, Theodore J; Jaeger, Johannes; Reinitz, John; Glass, Leon
2006-05-01
A fundamental problem in functional genomics is to determine the structure and dynamics of genetic networks based on expression data. We describe a new strategy for solving this problem and apply it to recently published data on early Drosophila melanogaster development. Our method is orders of magnitude faster than current fitting methods and allows us to fit different types of rules for expressing regulatory relationships. Specifically, we use our approach to fit models using a smooth nonlinear formalism for modeling gene regulation (gene circuits) as well as models using logical rules based on activation and repression thresholds for transcription factors. Our technique also allows us to infer regulatory relationships de novo or to test network structures suggested by the literature. We fit a series of models to test several outstanding questions about gap gene regulation, including regulation of and by hunchback and the role of autoactivation. Based on our modeling results and validation against the experimental literature, we propose a revised network structure for the gap gene system. Interestingly, some relationships in standard textbook models of gap gene regulation appear to be unnecessary for or even inconsistent with the details of gap gene expression during wild-type development.
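The "gene circuit" formalism mentioned above models each gene's expression rate through a smooth, saturating function of a weighted sum of regulator concentrations. A toy forward-Euler sketch of such a circuit, purely illustrative: the sigmoid form, the parameter names (`T`, `h`, `R`, `lam`), and the two-gene mutual-repression test are our assumptions, not the fitted Drosophila model.

```python
import numpy as np

def sigmoid(u):
    # smooth regulation-to-expression function g(u), mapping R -> (0, 1)
    return 0.5 * (u / np.sqrt(u ** 2 + 1.0) + 1.0)

def simulate_circuit(T, h, R, lam, a0, dt=0.01, steps=2000):
    """Forward-Euler integration of da_i/dt = R_i * g((T a)_i + h_i) - lam_i * a_i.
    T: regulatory weight matrix (T[i, j] = effect of gene j on gene i);
    h: thresholds; R: maximal synthesis rates; lam: decay rates."""
    a = a0.copy()
    for _ in range(steps):
        a = a + dt * (R * sigmoid(T @ a + h) - lam * a)
    return a
```

With mutually repressive weights, the circuit settles into a state where one gene stays high and represses the other, the qualitative behavior such fits must reproduce for neighboring gap-gene domains.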
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
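The one-step-ahead smoothing idea (use the current observation to re-correct the previous state) can be sketched in the linear-Gaussian case, where it reduces to a lag-1 Rauch-Tung-Striebel correction. This is an illustrative linear toy, not the sCSKF algorithm itself (which adds covariance compression and handles nonlinear models); all variable names are our own.

```python
import numpy as np

def kf_one_step_ahead_smoother(A, H, Q, Rn, x0, P0, ys):
    """Linear Kalman filter that, after each measurement update, also
    re-corrects the previous state with the current observation (lag-1 RTS)."""
    x, P = x0, P0
    filtered, smoothed_prev = [], []
    for y in ys:
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        S = H @ P_pred @ H.T + Rn
        K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred)      # filtered state at k
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        G = P @ A.T @ np.linalg.inv(P_pred)        # smoother gain
        smoothed_prev.append(x + G @ (x_new - x_pred))  # state at k-1 given y_k
        x, P = x_new, P_new
        filtered.append(x)
    return np.array(filtered), np.array(smoothed_prev)
```

Tracking a constant signal through noisy observations shows the filter converging; in the nonlinear hyperbolic settings of the paper, the backward correction is what suppresses overshooting at sharp fronts.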
Binary-disk interaction. II. Gap-opening criteria for unequal-mass binaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Valle, Luciano; Escala, Andrés, E-mail: ldelvalleb@gmail.com
We study the interaction of an unequal-mass binary with an isothermal circumbinary disk, motivated by the theoretical and observational evidence that after a major merger of gas-rich galaxies, a massive gaseous disk with a supermassive black hole binary will be formed in the nuclear region. We focus on the gravitational torques that the binary exerts on the disk and how these torques can drive the formation of a gap in the disk. This exchange of angular momentum between the binary and the disk is mainly driven by the gravitational interaction between the binary and a strong nonaxisymmetric density perturbation that is produced in the disk, in response to the presence of the binary. Using smoothed particle hydrodynamics numerical simulations, we test two gap-opening criteria, one that assumes the geometry of the density perturbation is an ellipsoid/thick spiral and another that assumes a flat spiral geometry for the density perturbation. We find that the flat spiral gap-opening criterion successfully predicts which simulations will have a gap in the disk and which will not. We also study the limiting cases predicted by the gap-opening criteria. Since the viscosity in our simulations is considerably smaller than the expected value in the nuclear regions of gas-rich merging galaxies, we conclude that in such environments the formation of a circumbinary gap is unlikely.
Data preparation for functional data analysis of PM10 in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Shaadan, Norshahida; Jemain, Abdul Aziz; Deni, Sayang Mohd
2014-07-01
The use of curves or functional data in study analysis is increasingly gaining momentum in various fields of research. The statistical method to analyze such data is known as functional data analysis (FDA). The first step in FDA is to convert the observed data points, which are repeatedly recorded over a period of time or space, into either a rough (raw) or smooth curve. In the case of the smooth curve, basis function expansion is one of the methods used for the data conversion. The data can be converted into a smooth curve by using either the regression smoothing or the roughness penalty smoothing approach. With the regression smoothing approach, the degree of the curve's smoothness depends on the number k of basis functions; for the roughness penalty approach, the smoothness depends on a roughness coefficient given by a parameter λ. Based on previous studies, researchers often used the rather time-consuming trial-and-error or cross-validation method to estimate the appropriate number of basis functions. Thus, this paper proposes a statistical procedure to construct functional data or curves for hourly and daily recorded data. The Bayesian Information Criterion is used to determine the number of basis functions, while the Generalized Cross Validation criterion is used to identify the parameter λ. The proposed procedure is then applied to a ten-year (2001-2010) period of PM10 data from 30 air quality monitoring stations located in Peninsular Malaysia. It was found that the number of basis functions required for the construction of the PM10 daily curve in Peninsular Malaysia was in the interval between 14 and 20 with an average value of 17; the first percentile is 15 and the third percentile is 19. Meanwhile, the initial value of the roughness coefficient was in the interval between 10^-5 and 10^-7 and the mode was 10^-6. An example of the functional descriptive analysis is also shown.
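Roughness-penalty smoothing with a GCV-selected λ can be sketched in its simplest discrete form, the Whittaker-style smoother (a stand-in for the B-spline basis expansion used in the paper, under our own assumptions): minimize ||y − f||² + λ||D₂f||², where D₂ is the second-difference operator, and score each λ by generalized cross-validation.

```python
import numpy as np

def roughness_penalty_smooth(y, lam):
    """Discrete roughness-penalty smoother: argmin ||y - f||^2 + lam*||D2 f||^2."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)              # second-difference operator
    Hat = np.linalg.inv(np.eye(n) + lam * D.T @ D) # hat matrix mapping y -> f
    return Hat @ y, Hat

def gcv_score(y, lam):
    """GCV(lam) = n * RSS / (n - trace(Hat))^2."""
    f, Hat = roughness_penalty_smooth(y, lam)
    n = len(y)
    return n * np.sum((y - f) ** 2) / (n - np.trace(Hat)) ** 2

def pick_lambda(y, grid):
    return min(grid, key=lambda lam: gcv_score(y, lam))
```

The trace of the hat matrix plays the role of effective degrees of freedom, so GCV automatically penalizes both under- and over-smoothing without trial and error.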
von Kármán swirling flow between a rotating and a stationary smooth disk: Experiment
NASA Astrophysics Data System (ADS)
Mukherjee, Aryesh; Steinberg, Victor
2018-01-01
Precise measurements of the torque in a von Kármán swirling flow between a rotating and a stationary smooth disk in three Newtonian fluids with different dynamic viscosities are reported. From these measurements the dependence of the normalized torque, called the friction coefficient, on Re is found to be of the form C_f = 1.17(±0.03) Re^(-0.46±0.003), where the scaling exponent and coefficient are close to those predicted theoretically for an infinite, unshrouded, and smooth rotating disk, which follows from an exact similarity solution of the Navier-Stokes equations obtained by von Kármán. An error analysis shows that deviations from the theory can be partially caused by background errors. Measurements of the azimuthal (V_θ) and axial velocity profiles along the radial and axial directions reveal that the flow core rotates at V_θ/(rΩ) ≃ 0.22 (up to z ≈ 4 cm from the rotating disk and up to r_0/R ≃ 0.25 in the radial direction) in spite of the small aspect ratio of the vessel. Thus the friction coefficient shows scaling close to that obtained from the von Kármán exact similarity solution, but the observed rotating core provides evidence of the Batchelor-like solution [Q. J. Mech. Appl. Math. 4, 29 (1951), 10.1093/qjmam/4.1.29], different from the von Kármán [Z. Angew. Math. Mech. 1, 233 (1921), 10.1002/zamm.19210010401] or Stewartson [Proc. Camb. Philos. Soc. 49, 333 (1953), 10.1017/S0305004100028437] one.
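A power law such as C_f = A·Re^b is recovered from torque measurements by linear least squares in log-log coordinates. A minimal sketch with synthetic data generated from the reported fit (illustrative only; the function name and the Re range are our assumptions):

```python
import numpy as np

def fit_power_law(re, cf):
    """Fit Cf = A * Re^b via least squares on log10(Cf) = log10(A) + b*log10(Re).
    Returns (A, b); np.polyfit gives [slope, intercept] for degree 1."""
    b, log_a = np.polyfit(np.log10(re), np.log10(cf), 1)
    return 10.0 ** log_a, b
```

On noise-free data the fit reproduces the generating exponent to machine precision; with real torque data the scatter of the log-log residuals gives the uncertainties quoted on A and b.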
A New and Fast Method for Smoothing Spectral Imaging Data
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Liu, Ming; Davis, Curtiss O.
1998-01-01
The Airborne Visible Infrared Imaging Spectrometer (AVIRIS) acquires spectral imaging data covering the 0.4 - 2.5 micron wavelength range in 224 10-nm-wide channels from a NASA ER-2 aircraft at 20 km. More than half of the spectral region is affected by atmospheric gaseous absorption. Over the past decade, several techniques have been used to remove atmospheric effects from AVIRIS data for the derivation of surface reflectance spectra. An operational atmosphere removal algorithm (ATREM), which is based on theoretical modeling of atmospheric absorption and scattering effects, has been developed and updated for deriving surface reflectance spectra from AVIRIS data. Due to small errors in assumed wavelengths and errors in line parameters compiled on the HITRAN database, small spikes (particularly near the centers of the 0.94- and 1.14-micron water vapor bands) are present in this spectrum. Similar small spikes are systematically present in entire ATREM output cubes. These spikes have distracted geologists who are interested in studying surface mineral features. A method based on the "global" fitting of spectra with low order polynomials or other functions for removing these weak spikes has recently been developed by Boardman (this volume). In this paper, we describe another technique, which fits spectra "locally" based on cubic spline smoothing, for quick post processing of ATREM apparent reflectance spectra derived from AVIRIS data. Results from our analysis of AVIRIS data acquired over Cuprite mining district in Nevada in June of 1995 are given. Comparisons between our smoothed spectra and those derived with the empirical line method are presented.
Triple-frequency radar retrievals of snowfall properties from the OLYMPEX field campaign
NASA Astrophysics Data System (ADS)
Leinonen, J. S.; Lebsock, M. D.; Sy, O. O.; Tanelli, S.
2017-12-01
Retrieval of snowfall properties with radar is subject to significant errors arising from the uncertainties in the size and structure of snowflakes. Recent modeling and theoretical studies have shown that multi-frequency radars can potentially constrain the microphysical properties and thus reduce the uncertainties in the retrieved snow water content. So far, there have only been limited efforts to leverage the theoretical advances in actual snowfall retrievals. In this study, we have implemented an algorithm that retrieves the snowfall properties from triple-frequency radar data using the radar scattering properties from a combination of snowflake scattering databases, which were derived using numerical scattering methods. Snowflake number concentration, characteristic size and density are derived using a combination of optimal estimation and Kalman smoothing; the snow water content and other bulk properties are then derived from these. The retrieval framework is probabilistic and thus naturally provides error estimates for the retrieved quantities. We tested the retrieval algorithm using data from the APR3 airborne radar flown onboard the NASA DC-8 aircraft during the Olympic Mountain Experiment (OLYMPEX) in late 2015. We demonstrated consistent retrieval of snow properties and smooth transition from single- and dual-frequency retrievals to using all three frequencies simultaneously. The error analysis shows that the retrieval accuracy is improved when additional frequencies are introduced. We also compare the findings to in situ measurements of snow properties as well as measurements by polarimetric ground-based radar.
Li, Bin; Sang, Jizhang; Zhang, Zhongping
2016-01-01
A critical requirement to achieve high efficiency of debris laser tracking is to have sufficiently accurate orbit predictions (OP) in both the pointing direction (better than 20 arc seconds) and distance from the tracking station to the debris objects, with the former more important than the latter because of the narrow laser beam. When the two-line element (TLE) is used to provide the orbit predictions, the resultant pointing errors are usually on the order of tens to hundreds of arc seconds. In practice, therefore, angular observations of debris objects are first collected using an optical tracking sensor, and then used to guide the laser beam to point at the objects. The manual guidance may cause interrupts to the laser tracking, and consequently loss of valuable laser tracking data. This paper presents a real-time orbit determination (OD) and prediction method to realize smooth and efficient debris laser tracking. The method uses TLE-computed positions and angles over a short arc of less than 2 min as observations in an OD process where simplified force models are considered. After the OD convergence, the OP is performed from the last observation epoch to the end of the tracking pass. Simulation and real tracking data processing results show that the pointing prediction errors are usually less than 10″, and the distance errors less than 100 m; therefore, the prediction accuracy is sufficient for blind laser tracking. PMID:27347958
Matchkov, Vladimir V; Rahman, Awahan; Peng, Hongli; Nilsson, Holger; Aalkjær, Christian
2004-01-01
Heptanol, 18α-glycyrrhetinic acid (18αGA) and 18β-glycyrrhetinic acid (18βGA) are known blockers of gap junctions, and are often used in vascular studies. However, actions unrelated to gap junction block have been repeatedly suggested in the literature for these compounds. We report here the findings from a comprehensive study of these compounds in the arterial wall. Rat isolated mesenteric small arteries were studied with respect to isometric tension (myography), [Ca2+]i (Ca2+-sensitive dyes), membrane potential and – as a measure of intercellular coupling – input resistance (sharp intracellular glass electrodes). Also, membrane currents (patch-clamp) were measured in isolated smooth muscle cells (SMCs). Confocal imaging was used for visualisation of [Ca2+]i events in single SMCs in the arterial wall. Heptanol (150 μM) activated potassium currents, hyperpolarised the membrane, inhibited the Ca2+ current, and reduced [Ca2+]i and tension, but had little effect on input resistance. Only at concentrations above 200 μM did heptanol elevate input resistance, desynchronise SMCs and abolish vasomotion. 18βGA (30 μM) not only increased input resistance and desynchronised SMCs but also had nonjunctional effects on membrane currents. 18αGA (100 μM) had no significant effects on tension, [Ca2+]i, total membrane current and synchronisation in vascular smooth muscle. We conclude that in mesenteric small arteries, heptanol and 18βGA have important nonjunctional effects at concentrations where they have little or no effect on intercellular communication. Thus, the effects of heptanol and 18βGA on vascular function cannot be interpreted as being caused only by effects on gap junctions. 18αGA apparently does not block communication between SMCs in these arteries, although an effect on myoendothelial gap junctions cannot be excluded. PMID:15210581
Dynamics of binary-disk interaction. 1: Resonances and disk gap sizes
NASA Technical Reports Server (NTRS)
Artymowicz, Pawel; Lubow, Stephen H.
1994-01-01
We investigate the gravitational interaction of a generally eccentric binary star system with circumbinary and circumstellar gaseous disks. The disks are assumed to be coplanar with the binary, geometrically thin, and primarily governed by gas pressure and (turbulent) viscosity but not self-gravity. Both ordinary and eccentric Lindblad resonances are primarily responsible for truncating the disks in binaries with arbitrary eccentricity and nonextreme mass ratio. Starting from a smooth disk configuration, after the gravitational field of the binary truncates the disk on the dynamical timescale, a quasi-equilibrium is achieved, in which the resonant and viscous torques balance each other and any changes in the structure of the disk (e.g., due to global viscous evolution) occur slowly, preserving the average size of the gap. We analytically compute the approximate sizes of disks (or disk gaps) as a function of binary mass ratio and eccentricity in this quasi-equilibrium. Comparing the gap sizes with results of direct simulations using smoothed particle hydrodynamics (SPH), we obtain good agreement. As a by-product of the computations, we verify that standard SPH codes can adequately represent the dynamics of disks with moderate viscosity, Reynolds number R ≈ 10^3. For typical viscous disk parameters, and with a denoting the binary semimajor axis, the inner edge location of a circumbinary disk varies from 1.8a to 2.6a with binary eccentricity increasing from 0 to 0.25. For eccentricities 0 < e < 0.75, the minimum separation between a component star and the circumbinary disk inner edge is greater than a. Our calculations are relevant, among others, to protobinary stars and the recently discovered T Tau pre-main-sequence binaries.
We briefly examine the case of the pre-main-sequence spectroscopic binary GW Ori and conclude that circumbinary disk truncation to the size required by one proposed spectroscopic model cannot be due to Lindblad resonances, even if the disk is nonviscous.
Altman, Carmit; Goldstein, Tamara; Armon-Lotem, Sharon
2017-01-01
While bilingual children follow the same milestones of language acquisition as monolingual children do in learning the syntactic patterns of their second language (L2), their vocabulary size in L2 often lags behind that of monolinguals. The present study explores the comprehension and production of nouns and verbs in Hebrew by two groups of 5- to 6-year-olds with typical language development: monolingual Hebrew speakers (N = 26) and Russian-Hebrew bilinguals (N = 27). Analyses not only show quantitative gaps between comprehension and production and between nouns and verbs, with a bilingual effect in both, but also a qualitative difference between monolinguals and bilinguals in their production errors: monolinguals' errors reveal knowledge of the language rules despite temporary access difficulties, while bilinguals' errors reflect gaps in their knowledge of Hebrew (L2). The nature of Hebrew as a Semitic language allows one to explore this qualitative difference at the semantic and morphological levels.
A variational technique for smoothing flight-test and accident data
NASA Technical Reports Server (NTRS)
Bach, R. E., Jr.
1980-01-01
The problem of determining aircraft motions along a trajectory is solved using a variational algorithm that generates unmeasured states and forcing functions, and estimates instrument bias and scale-factor errors. The problem is formulated as a nonlinear fixed-interval smoothing problem, and is solved as a sequence of linear two-point boundary value problems, using a sweep method. The algorithm has been implemented for use in flight-test and accident analysis. Aircraft motions are assumed to be governed by a six-degree-of-freedom kinematic model; forcing functions consist of body accelerations and winds, and the measurement model includes aerodynamic and radar data. Examples of the determination of aircraft motions from typical flight-test and accident data are presented.
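Fixed-interval smoothing of the kind described above is commonly implemented today as a forward Kalman filter followed by a backward Rauch-Tung-Striebel sweep. The sketch below is not the paper's variational algorithm: it assumes a 1D constant-velocity kinematic model (position measured, velocity unmeasured) purely to illustrate the two-pass structure.

```python
import numpy as np

def rts_smoother(zs, q=1e-3, r=0.25, dt=1.0):
    """Fixed-interval smoothing sketch: Kalman forward pass followed by a
    Rauch-Tung-Striebel backward sweep, for a 1D constant-velocity model.
    State is [position, velocity]; only position is measured."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # state transition
    H = np.array([[1.0, 0.0]])                            # measurement model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                   # process noise
    R = np.array([[r]])                                   # measurement noise
    n = len(zs)
    xs, Ps, xps, Pps = [], [], [], []
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    for z in zs:                                          # forward (filter) pass
        xp = F @ x
        Pp = F @ P @ F.T + Q
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    sm = [None] * n
    sm[-1] = xs[-1]
    for k in range(n - 2, -1, -1):                        # backward sweep
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        sm[k] = xs[k] + C @ (sm[k + 1] - xps[k + 1])
    return np.array(sm)
```

Because the backward sweep uses future as well as past measurements, the smoothed trajectory is typically closer to the truth than either the raw measurements or the forward filter alone.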
Smooth conditional distribution function and quantiles under random censorship.
Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine
2002-09-01
We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
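For readers unfamiliar with the generalized Kaplan-Meier benchmark mentioned above, the following is a minimal sketch of the Beran (1981) estimator. It smooths only in the covariate direction via Nadaraya-Watson kernel weights (the paper's estimators additionally smooth in the response direction); the function name and the Gaussian kernel are illustrative choices.

```python
import numpy as np

def beran_survival(x0, t_grid, x, y, delta, h=0.5):
    """Generalized Kaplan-Meier (Beran, 1981) estimate of the conditional
    survival function S(t | x0) under right censoring, with Nadaraya-
    Watson (Gaussian kernel) weights in the covariate only.
    x: covariates; y: observed times; delta: 1 = event, 0 = censored."""
    k = np.exp(-0.5 * ((np.asarray(x) - x0) / h) ** 2)
    w = k / k.sum()                                  # kernel weights at x0
    order = np.argsort(y)
    y = np.asarray(y, dtype=float)[order]
    delta = np.asarray(delta)[order]
    w = w[order]
    cum_w = np.concatenate(([0.0], np.cumsum(w)))    # kernel mass already failed
    s, factors = 1.0, np.empty(len(y))
    for i in range(len(y)):                          # product-limit recursion
        at_risk = 1.0 - cum_w[i]
        if delta[i] == 1 and at_risk > 0.0:
            s *= 1.0 - w[i] / at_risk
        factors[i] = s
    idx = np.searchsorted(y, t_grid, side="right")   # step function in t
    out = np.ones(len(t_grid))
    out[idx > 0] = factors[idx[idx > 0] - 1]
    return out
```

With a constant covariate and no censoring, the weights are uniform and the estimator reduces to the ordinary empirical survival function, which is a useful sanity check.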
Effects of errors and gaps in spatial data sets on assessment of conservation progress.
Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C
2013-10-01
Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.
Prediction of discretization error using the error transport equation
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
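The core idea, fitting a smooth representation of the numerical solution, using its residual in the governing equation as the error source term, and then transporting the error with the same solver, can be shown on a much smaller problem than the Navier-Stokes equations. The sketch below is a toy analogue for a scalar ODE u' = f(u), with np.gradient standing in for the paper's weighted-spline fit; it is an illustration of the concept, not the paper's method.

```python
import numpy as np

def ete_demo(f, u0, T, n):
    """Toy analogue of the error-transport-equation idea for u' = f(u):
    (1) solve with forward Euler, (2) recover a smooth derivative of the
    discrete solution, (3) evaluate the ODE residual as the error source
    term, (4) transport the error e' = f'(u) e - source with the same
    scheme. f'(u) is approximated by central differences."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.empty(n + 1)
    u[0] = u0
    for i in range(n):                                  # baseline Euler solve
        u[i + 1] = u[i] + h * f(u[i])
    source = np.gradient(u, t, edge_order=2) - f(u)     # residual of u' = f(u)
    e, eps = np.zeros(n + 1), 1e-6
    for i in range(n):                                  # error transport equation
        dfdu = (f(u[i] + eps) - f(u[i] - eps)) / (2.0 * eps)
        e[i + 1] = e[i] + h * (dfdu * e[i] - source[i])
    return t, u, e
```

For u' = -u with u(0) = 1, the transported error estimate closely tracks the true discretization error exp(-t) - u, using only the single-grid solution, which is the appeal of the ETE approach over Richardson extrapolation.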
Generalized Ordinary Differential Equation Models
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-01-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches to nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
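The smoothed estimator itself is not reproduced in the abstract, but its classical nonsmooth forerunner, Robbins' empirical Bayes estimator for the Poisson mean, is simple enough to sketch and shows why smoothing helps: the raw frequency ratio is erratic at sparsely observed counts.

```python
import numpy as np

def robbins_poisson_eb(counts):
    """Robbins' classical empirical Bayes estimate of the Poisson mean for
    each observed count x: lambda_hat(x) = (x + 1) * f(x + 1) / f(x), where
    f is the empirical frequency of the counts. This is the nonsmooth
    forerunner of the smoothed estimator studied in the paper; smoothing f
    (e.g., via a fitted marginal) tames its behavior at rare counts."""
    counts = np.asarray(counts)
    freq = np.bincount(counts, minlength=counts.max() + 2).astype(float)
    return (counts + 1) * freq[counts + 1] / freq[counts]
```

For example, with observed counts [0, 0, 1, 1, 1, 2], the estimate for the units with x = 0 is 1 · f(1)/f(0) = 3/2, while the unit with x = 2 gets estimate 0 because no count of 3 was observed, exactly the kind of artifact a smooth version avoids.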
Analytical skin friction and heat transfer formula for compressible internal flows
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.; Tattar, Marc J.
1994-01-01
An analytic, closed-form friction formula for turbulent, internal, compressible, fully developed flow was derived by extending the incompressible law-of-the-wall relation to compressible cases. The model is capable of analyzing heat transfer as a function of constant surface temperatures and surface roughness as well as analyzing adiabatic conditions. The formula reduces to Prandtl's law of friction for adiabatic, smooth, axisymmetric flow. In addition, the formula reduces to the Colebrook equation for incompressible, adiabatic, axisymmetric flow with various roughnesses. Comparisons with available experiments show that the model averages roughly 12.5 percent error for adiabatic flow and 18.5 percent error for flow involving heat transfer.
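Since the derived formula reduces to the Colebrook equation in the incompressible, adiabatic, rough-wall limit, it is worth recalling that Colebrook's relation is implicit in the friction factor and is usually solved iteratively. The sketch below (function name and initial guess are illustrative) uses fixed-point iteration on x = 1/sqrt(f).

```python
import math

def colebrook_friction(re, rel_rough, tol=1e-12, max_iter=100):
    """Darcy friction factor from the Colebrook equation,
        1/sqrt(f) = -2 log10( eps/(3.7 D) + 2.51 / (Re sqrt(f)) ),
    solved by fixed-point iteration on x = 1/sqrt(f).
    re: Reynolds number; rel_rough: relative roughness eps/D."""
    x = 8.0                       # rough initial guess for 1/sqrt(f)
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new ** 2
```

Setting rel_rough = 0 recovers the smooth-pipe (Prandtl-type) friction law the abstract mentions; at Re = 10^5 this gives f ≈ 0.018.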
NASA Technical Reports Server (NTRS)
Mccall, D. L.
1984-01-01
The results of a simulation study to define the functional characteristics of an airborne and ground reference GPS receiver for use in a differential GPS system are documented. The operation of a variety of receiver types (sequential single-channel, continuous multi-channel, etc.) is evaluated for a typical civil helicopter mission scenario. The math model of each receiver type incorporated representative system errors, including intentional degradation. The results include a discussion of the receivers' relative performance, the spatial correlative properties of individual range error sources, and the navigation algorithm used to smooth the position data.
Partitioning degrees of freedom in hierarchical and other richly-parameterized models.
Cui, Yue; Hodges, James S; Kong, Xiaoxiao; Carlin, Bradley P
2010-02-01
Hodges & Sargent (2001) developed a measure of a hierarchical model's complexity, degrees of freedom (DF), that is consistent with definitions for scatterplot smoothers, interpretable in terms of simple models, and that enables control of a fit's complexity by means of a prior distribution on complexity. DF describes complexity of the whole fitted model but in general it is unclear how to allocate DF to individual effects. We give a new definition of DF for arbitrary normal-error linear hierarchical models, consistent with Hodges & Sargent's, that naturally partitions the n observations into DF for individual effects and for error. The new conception of an effect's DF is the ratio of the effect's modeled variance matrix to the total variance matrix. This gives a way to describe the sizes of different parts of a model (e.g., spatial clustering vs. heterogeneity), to place DF-based priors on smoothing parameters, and to describe how a smoothed effect competes with other effects. It also avoids difficulties with the most common definition of DF for residuals. We conclude by comparing DF to the effective number of parameters p(D) of Spiegelhalter et al (2002). Technical appendices and a dataset are available online as supplemental materials.
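The ratio-of-variance-matrices idea can be illustrated on the simplest smoothed effect, a ridge-penalized linear term, where an effect's DF is the trace of its hat matrix. This is a minimal instance of the trace-based definition, not the paper's full hierarchical partition.

```python
import numpy as np

def effect_degrees_of_freedom(X, lam):
    """DF of a ridge-penalized effect as the trace of the hat matrix
    X (X'X + lam I)^{-1} X'. As lam -> 0 this recovers the full rank(X)
    degrees of freedom; as lam -> infinity the effect is smoothed away
    and its DF shrinks toward 0, with the remainder going to error."""
    p = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return np.trace(H)
```

The smoothing parameter thus trades DF between the effect and the error term, which is what makes DF-based priors on smoothing parameters interpretable.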
Motion compensated shape error concealment.
Schuster, Guido M; Katsaggelos, Aggelos K
2006-02-01
The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method are presented.
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-01-01
We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698
Daily values flow comparison and estimates using program HYDCOMP, version 1.0
Sanders, Curtis L.
2002-01-01
A method used by the U.S. Geological Survey for quality control in computing daily-value flow records is to compare hydrographs of computed flows at a station under review to hydrographs of computed flows at a selected index station. The hydrographs are placed on top of each other (as hydrograph overlays) on a light table, compared, and missing daily flow data estimated. This method, however, is subjective and can produce inconsistent results, because hydrographers can differ when setting acceptable limits of deviation between observed and estimated flows. Selection of appropriate index stations also is judgmental, giving no consideration to the mathematical correlation between the review station and the index station(s). To address the limitations of the hydrograph overlay method, a set of software programs, written in the SAS macro language, was developed and designated Program HYDCOMP. The program automatically selects statistically comparable index stations by correlation and regression, and performs hydrographic comparisons and estimates of missing data by regressing daily mean flows at the review station against -8 to +8 lagged flows at one or two index stations and day-of-week. Another advantage that HYDCOMP has over the graphical method is that estimated flows, the criteria for determining the quality of the data, and the selection of index stations are determined statistically, and are reproducible from one user to another. HYDCOMP will load the most-correlated index stations into another file containing the "best index stations," but will not overwrite stations already in the file. A knowledgeable user should delete unsuitable index stations from this file based on the standard error of estimate, hydrologic similarity of candidate index stations to the review station, and knowledge of the individual station characteristics. Also, the user can add index stations not selected by HYDCOMP, if desired.
Once the file of best-index stations is created, a user may do hydrographic comparison and data estimates by entering the number of the review station, selecting an index station, and specifying the periods to be used for regression and plotting. For example, the user can restrict the regression to ice-free periods of the year to exclude flows estimated during iced conditions. However, the regression could still be used to estimate flow during iced conditions. HYDCOMP produces the standard error of estimate as a measure of the central scatter of the regression and R-square (coefficient of determination) for evaluating the accuracy of the regression. Output from HYDCOMP includes plots of percent residuals against (1) time within the regression and plot periods, (2) month and day of the year for evaluating seasonal bias in the regression, and (3) the magnitude of flow. For hydrographic comparisons, it plots 2-month segments of hydrographs over the selected plot period showing the observed flows, the regressed flows, the 95 percent confidence limit flows, flow measurements, and regression limits. If the observed flows at the review station remain outside the 95 percent confidence limits for a prolonged period, there may be some error in the flows at the review station or at the index station(s). In addition, daily minimum and maximum temperatures and daily rainfall are shown on the hydrographs, if available, to help indicate whether an apparent change in flow may result from rainfall or from changes in backwater from melting ice or freezing water. HYDCOMP statistically smooths estimated flows from non-missing flows at the edges of the gaps in data into regressed flows at the center of the gaps using the Kalman smoothing algorithm. Missing flows are automatically estimated by HYDCOMP, but the user also can specify that periods of erroneous, but nonmissing flows, be estimated by the program.
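The core estimation step, regressing review-station flows on lagged index-station flows and using the fit to fill gaps, can be sketched compactly. The version below is a simplified illustration: it uses a handful of lags rather than HYDCOMP's -8 to +8, ordinary least squares, and no Kalman smoothing at the gap edges.

```python
import numpy as np

def fill_gaps_by_index(review, index, max_lag=2):
    """Estimate missing values (NaN) in a review-station series by OLS
    regression on lagged values of an index-station series, in the spirit
    of HYDCOMP's index-station regression (simplified: fewer lags, no
    edge smoothing, no day-of-week term)."""
    n = len(review)
    feats = [np.roll(index, -l) for l in range(-max_lag, max_lag + 1)]
    X = np.column_stack([np.ones(n)] + feats)
    valid = ~np.isnan(review)
    valid[:max_lag] = False            # drop rows where np.roll wrapped
    valid[n - max_lag:] = False        # around the ends of the series
    beta, *_ = np.linalg.lstsq(X[valid], review[valid], rcond=None)
    filled = review.copy()
    missing = np.isnan(review)
    filled[missing] = X[missing] @ beta
    return filled
```

In practice one would also report the standard error of estimate and R-square of the fit, as HYDCOMP does, so the user can judge whether the chosen index station is adequate.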
NASA Astrophysics Data System (ADS)
Song, Yeo-Ul; Youn, Sung-Kie; Park, K. C.
2017-10-01
A method for three-dimensional non-matching interface treatment with a virtual gap element is developed. When partitioned structures contain curved interfaces and have different brick meshes, the discretized models have gaps along the interfaces. As these gaps introduce unexpected errors, special treatments are required to handle them. In the present work, a virtual gap element is introduced to link the frame and surface domain nodes in the framework of the mortar method. Since the surface of the hexahedron element is quadrilateral, the gap element is pyramidal. The pyramidal gap element consists of four domain nodes and one frame node. A zero-strain condition in the gap element is utilized for the interpolation of frame nodes in terms of the domain nodes. This approach is taken to satisfy momentum and energy conservation. The present method is applicable not only to curved interfaces with gaps, but also to flat interfaces in three dimensions. Several numerical examples are given to describe the effectiveness and accuracy of the proposed method.
Derivative based sensitivity analysis of gamma index
Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.
2015-01-01
Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as “pass.” Gamma analysis does not account for the gradient of the evaluated curve - it looks at only the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria at all points. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and is obviously poorer when compared with the smooth profile.
Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the DDs (δD’, δD”) between these two curves were derived and used as the boundary values for evaluating the STTP against the RP. Even though the STTP passed the simple gamma pass criteria, it was found failing at many locations when the derivatives were used as the boundary values. The proposed derivative-based method can identify a noisy curve and can prove to be a useful tool for improving the sensitivity of the gamma index. PMID:26865761
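The underlying gamma computation (Low et al.'s formulation) is straightforward in one dimension: for each reference point, take the minimum over evaluated points of the combined dose-distance metric. The sketch below implements only this baseline gamma, not the paper's derivative-based extension.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
    """Minimal 1D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dose diff / dd)^2 + (distance / dta)^2).
    dd is the dose criterion (same units as dose), dta the distance
    criterion (same units as position). A point passes if gamma <= 1."""
    gammas = np.empty(len(x_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        g2 = ((d_eval - dr) / dd) ** 2 + ((x_eval - xr) / dta) ** 2
        gammas[i] = np.sqrt(g2.min())
    return gammas
```

Because only the minimum is kept, a noisy sawtooth profile can pass everywhere just as a smooth one does, which is exactly the insensitivity the derivative-based boundary values are meant to address.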
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
Spatial perseveration error by alpacas (Vicugna pacos) in an A-not-B detour task.
Abramson, José Z; Paulina Soto, D; Beatriz Zapata, S; Lloreda, María Victoria Hernández
2018-05-01
Spatial perseveration has been documented for domestic animals such as mules, donkeys, horses and dogs. However, evidence for this spatial cognition behavior among other domestic species is scarce. Alpacas have been domesticated for at least 7000 years yet their cognitive ability has not been officially reported. The present article used an A-not-B detour task to study the spatial problem-solving abilities of alpacas (Vicugna pacos) and to identify the perseveration errors, which refers to a tendency to maintain a learned route, despite having another available path. The study tested 51 alpacas, which had to pass through a gap at one end of a barrier in order to reach a reward. After one, two, three or four repeats (A trials), the gap was moved to the opposite end of the barrier (B trials). In contrast to what has been found in other domestic animals tested with the same task, the present study did not find clear evidence of spatial perseveration. Individuals' performance in the subsequent B trials, following the change of gap location, suggests no error persistence in alpacas. Results suggest that alpacas are more flexible than other domestic animals tested with this same task, which has important implications in planning proper training for experimental designs or productive purposes. These results could contribute toward enhancing alpacas' welfare and our understanding of their cognitive abilities.
Image-guided spatial localization of heterogeneous compartments for magnetic resonance
An, Li; Shen, Jun
2015-01-01
Purpose: Image-guided SPectral Localization Achieved by Sensitivity Heterogeneity (SPLASH) allows rapid measurement of signals from irregularly shaped anatomical compartments without using phase encoding gradients. Here, the authors propose a novel method to address the issue of heterogeneous signal distribution within the localized compartments. Methods: Each compartment was subdivided into multiple subcompartments and their spectra were solved by Tikhonov regularization to enforce smoothness within each compartment. The spectrum of a given compartment was generated by combining the spectra of the components of that compartment. The proposed method was first tested using Monte Carlo simulations and then applied to reconstructing in vivo spectra from irregularly shaped ischemic stroke and normal tissue compartments. Results: Monte Carlo simulations demonstrate that the proposed regularized SPLASH method significantly reduces localization and metabolite quantification errors. In vivo results show that the intracompartment regularization results in ∼40% reduction of error in metabolite quantification. Conclusions: The proposed method significantly reduces localization errors and metabolite quantification errors caused by intracompartment heterogeneous signal distribution. PMID:26328977
GEOS-C altimeter attitude bias error correction. [gate-tracking radar
NASA Technical Reports Server (NTRS)
Marini, J. W.
1974-01-01
A pulse-limited split-gate-tracking radar altimeter was flown on Skylab and will be used aboard GEOS-C. If such an altimeter were to employ a hypothetical isotropic antenna, the altimeter output would be independent of spacecraft orientation. To reduce power requirements, the gain of the proposed altimeter antenna is increased to the point where its beamwidth is only a few degrees. The gain of the antenna consequently varies somewhat over the pulse-limited illuminated region of the ocean below the altimeter, and the altimeter output varies with antenna orientation. The error introduced into the altimeter data was modeled empirically, but close agreement with the expected errors was not realized. The attitude error effects expected with the GEOS-C altimeter are modeled using a form suggested by an analytical derivation. The treatment is restricted to the case of a relatively smooth sea, where the heights of the ocean waves are small relative to the spatial length (pulse duration times speed of light) of the transmitted pulse.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, M.J.; Bourke, W.; Browning, G.L.
The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
Green, Thomas J; Bijlsma, Jan Jaap; Sweet, David D
2010-09-01
The workup of the emergency patient with a raised anion gap metabolic acidosis includes assessment of the components of “MUDPILES” (methanol; uremia; diabetic ketoacidosis; paraldehyde; isoniazid, iron or inborn errors of metabolism; lactic acid; ethylene glycol; salicylates). This approach is usually sufficient for the majority of cases in the emergency department; however, there are many other etiologies not addressed in this mnemonic. Organic acids including 5-oxoproline (pyroglutamic acid) are rare but important causes of anion gap metabolic acidosis. We present the case of a patient with profound metabolic acidosis with raised anion gap, due to pyroglutamic acid in the setting of malnutrition and chronic ingestion of acetaminophen.
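The screening calculation behind this workup is a one-liner; the sketch below shows it with a commonly quoted reference range (ranges vary by laboratory, so the 8-12 mEq/L figure is an assumption, not a universal cutoff).

```python
def anion_gap(na, cl, hco3):
    """Serum anion gap (mEq/L): AG = Na - (Cl + HCO3).
    A value above the commonly quoted reference range (roughly
    8-12 mEq/L) in an acidotic patient prompts the MUDPILES
    differential, including rare organic acids such as 5-oxoproline."""
    return na - (cl + hco3)
```

For example, a patient with Na 138, Cl 100, and HCO3 10 mEq/L has a gap of 28 mEq/L, clearly raised and warranting the broader differential the case report describes.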
NASA Astrophysics Data System (ADS)
Shi, Feng; Shu, Yong; Dai, Yifan; Peng, Xiaoqiang; Li, Shengyi
2013-07-01
Based on elastic-plastic deformation theory, the contact status between abrasives and the workpiece in the magnetorheological finishing (MRF) process and the feasibility of elastic polishing are analyzed. The relationships among the material removal mechanism, particle force, removal efficiency, and surface topography are revealed through a set of experiments. Chemically dominant elastic super-smooth polishing can be achieved by changing the components of the magnetorheological (MR) fluid and optimizing the polishing parameters. The MR elastic super-smooth finishing technology can be applied to polishing high-power laser-irradiated components with high efficiency, high accuracy, low damage, and a high laser-induced damage threshold (LIDT). A 430×430×10 mm fused silica (FS) optic window is polished and its surface error is improved from 538.241 nm [peak to valley (PV)] and 96.376 nm (rms) to 76.372 nm (PV) and 8.295 nm (rms) after 51.6 h of rough polishing, 42.6 h of fine polishing, and 54.6 h of super-smooth polishing. A 50×50×10 mm sample is polished with exactly the same parameters. The roughness is improved from 1.793 nm [roughness average (Ra)] to 0.167 nm (Ra) and the LIDT is improved from 9.77 to 19.2 J/cm2 after MRF elastic polishing.
Presentation of growth velocities of rural Haitian children using smoothing spline techniques.
Waternaux, C; Hebert, J R; Dawson, R; Berggren, G G
1987-01-01
The examination of monthly (or quarterly) increments in weight or length is important for assessing the nutritional and health status of children. Growth velocities are widely thought to be more important than the actual weight or length measurements per se. However, there are no standards by which clinicians, researchers, or parents can gauge a child's growth. This paper describes a method for computing growth velocities (monthly increments) from physical growth measurements with substantial measurement error and irregular spacing over time. These features are characteristic of data collected in the field, where conditions are less than ideal. The technique of smoothing by splines provides a powerful tool for dealing with the variability and irregularity of the measurements. It consists of approximating the observed data by a smooth curve, much as a clinician might have drawn on the child's growth chart. Spline functions are particularly appropriate for describing biophysical processes such as growth, for which no model can be postulated a priori. This paper describes how the technique was used for the analysis of a large database collected on preschool-aged children in rural Haiti. The sex-specific length and weight velocities derived from the spline-smoothed data are presented as reference data for researchers and others interested in the longitudinal growth of children in the Third World.
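The approach can be sketched with SciPy's `UnivariateSpline` standing in for the smoother used in the original analysis (the data below are synthetic, and the smoothing factor is illustrative); the monthly velocity falls out as the derivative of the fitted curve:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical irregularly spaced, noisy weight measurements
# (age in months, weight in kg)
age = np.array([1.0, 2.2, 3.1, 4.0, 5.3, 6.1, 7.4, 8.0, 9.2, 10.5, 11.1, 12.0])
rng = np.random.default_rng(0)
weight = 3.5 + 0.5 * age - 0.01 * age**2 + rng.normal(0.0, 0.15, age.size)

# Cubic smoothing spline: `s` trades fidelity to the points for smoothness,
# playing the role of the clinician's freehand curve through the chart
spline = UnivariateSpline(age, weight, k=3, s=0.5)

# Monthly growth velocity = first derivative of the smoothed curve (kg/month)
velocity = spline.derivative()(np.arange(1, 13))
```

The smoothing handles both the measurement error and the irregular visit spacing that make raw month-to-month differences unusable.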
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
Error reduction, patient safety and institutional ethics committees.
Meaney, Mark E
2004-01-01
Institutional ethics committees remain largely absent from the literature on error reduction and patient safety. In this paper, the author endeavors to fill this gap. As noted in the Hastings Center's recent report, "Promoting Patient Safety," the occurrence of medical error involves a complex web of multiple factors. Human misstep is certainly one such factor, but not the only one. This paper builds on the Hastings Center's report in arguing that institutional ethics committees ought to play an integral role in the transformation of a "culture of blame" into a "culture of safety" in healthcare delivery.
How cigarette design can affect youth initiation into smoking: Camel cigarettes 1983-93
Wayne, G; Connolly, G
2002-01-01
Objective: To determine changes in the design of Camel cigarettes in the period surrounding the "Smooth Character" advertising campaign and to assess the impact of these changes on youth smoking. Data sources: Internal documents made available through the document website maintained by RJ Reynolds, manufacturer of Camel cigarettes. Study selection: Electronic searches using keywords to identify relevant data. Data extraction: A web based index search of documents targeting "smoothness" or "harshness" and "younger adult smokers" ("YAS") or "first usual brand younger adult smokers" ("FUBYAS") in the 10 year period surrounding the introduction of the "Smooth Character" campaign was used to identify Camel related product design research projects. A snowball methodology was used: initial documents were identified by focusing on key words, codes, researchers, committees, meetings, and gaps in overall chronology; a second set of documents was culled from these initial documents, and so on. Data synthesis: Product design research led to the introduction of redesigned Camel cigarettes targeted to younger adult males coinciding with the "Smooth Character" campaign. Further refinements in Camel cigarettes during the following five year period continued to emphasise the smoothness of the cigarette, utilising additives and blends which reduced throat irritation but increased or retained nicotine impact. Conclusions: Industry competition for market share among younger adult smokers may have contributed to the reversal of a decline in youth smoking rates during the late 1980s through development of products which were more appealing to youth smokers and which aided in initiation by reducing harshness and irritation. PMID:11893812
Holton, James M; Classen, Scott; Frankel, Kenneth A; Tainer, John A
2014-09-01
In macromolecular crystallography, the agreement between observed and predicted structure factors (Rcryst and Rfree) is seldom better than 20%. This is much larger than the estimate of experimental error (Rmerge). The difference between Rcryst and Rmerge is the R-factor gap. There is no such gap in small-molecule crystallography, for which calculated structure factors are generally considered more accurate than the experimental measurements. Perhaps the true noise level of macromolecular data is higher than expected? Or is the gap caused by inaccurate phases that trap refined models in local minima? By generating simulated diffraction patterns using the program MLFSOM, and including every conceivable source of experimental error, we show that neither is the case. Processing our simulated data yielded values that were indistinguishable from those of real data for all crystallographic statistics except the final Rcryst and Rfree. These values decreased to 3.8% and 5.5% for simulated data, suggesting that the reason for high R-factors in macromolecular crystallography is neither experimental error nor phase bias, but rather an underlying inadequacy in the models used to explain our observations. The present inability to accurately represent the entire macromolecule with both its flexibility and its protein-solvent interface may be improved by synergies between small-angle X-ray scattering, computational chemistry and crystallography. The exciting implication of our finding is that macromolecular data contain substantial hidden and untapped potential to resolve ambiguities in the true nature of the nanoscale, a task that the second century of crystallography promises to fulfill. Coordinates and structure factors for the real data have been submitted to the Protein Data Bank under accession 4tws. © 2014 The Authors. FEBS Journal published by John Wiley & Sons Ltd on behalf of FEBS.
Study on rapid valid acidity evaluation of apple by fiber optic diffuse reflectance technique
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Fu, Xiaping; Jiang, Xuesong
2004-03-01
Some issues related to the nondestructive evaluation of valid acidity in intact apples by the Fourier transform near-infrared (FTNIR) (800-2631 nm) method were addressed. A relationship was established between the diffuse reflectance spectra, recorded with a bifurcated optic fiber, and the valid acidity. The data were analyzed by multivariate calibration methods, namely partial least squares (PLS) analysis and principal component regression (PCR). A total of 120 Fuji apples were tested, 80 of which formed the calibration data set. The influence of data preprocessing and different spectral treatments was also investigated. Models based on smoothed spectra were slightly worse than models based on derivative spectra, and the best result was obtained with a segment length of 5 and a gap size of 10. Depending on data preprocessing and multivariate calibration technique, the best prediction model, obtained by PLS analysis, had a correlation coefficient of 0.871, a low RMSEP (0.0677), a low RMSEC (0.056), and a small difference between RMSEP and RMSEC. The results demonstrate the feasibility of FTNIR spectral analysis for predicting fruit valid acidity non-destructively. The ratio of the data standard deviation to the root mean square error of prediction (SDR) should ideally exceed 3 for a calibration model; however, the present results do not yet meet the demands of practical application, so further study is required for better calibration and prediction.
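The "segment length 5, gap size 10" treatment refers to a gap-segment derivative, a standard NIR preprocessing step. A hypothetical sketch (conventions for the normalising divisor vary between implementations):

```python
import numpy as np

def gap_segment_derivative(y, segment=5, gap=10):
    """First derivative of a spectrum estimated as the difference between
    the means of two short segments separated by a gap, divided by the
    centre-to-centre distance between the segments."""
    half_gap = gap // 2
    reach = half_gap + segment            # points needed on each side
    d = np.full(y.size, np.nan)           # edges stay undefined
    for i in range(reach, y.size - reach):
        left = y[i - half_gap - segment : i - half_gap].mean()
        right = y[i + half_gap + 1 : i + half_gap + segment + 1].mean()
        d[i] = (right - left) / (2 * half_gap + segment + 1)
    return d
```

Averaging over a segment suppresses noise, while the gap sets the effective differencing scale; for a linear spectrum the estimate recovers the true slope exactly.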
Lagrangian predictability characteristics of an Ocean Model
NASA Astrophysics Data System (ADS)
Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia
2014-11-01
The Mediterranean Forecasting System (MFS) Ocean Model, provided by INGV, has been chosen as a case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. In this regard, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as a sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the velocity field contributions to particle-pair dispersion, at the mesoscale level, smoothed out by finite resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of the MFS alone (i.e., without the subgrid model) indicate exponential separation; (b) adding the subgrid model, model pair dispersion gets very close to the observed data, indicating that the KLM is effective in filling the energy "mesoscale gap" present in MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology presented here can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.
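The FSLE diagnostic used here measures the inverse time for pair separations to grow by a fixed ratio, lambda(delta) = ln(r) / <tau(delta)>. A minimal sketch under that usual definition (the function name and the synthetic exponentially separating pair are illustrative, not the MFS/drifter data):

```python
import numpy as np

def fsle(t, pair_seps, deltas, r=np.sqrt(2)):
    """Finite-Scale Lyapunov Exponent: lambda(delta) = ln(r) / <tau(delta)>,
    where tau is the time for a pair separation to grow from delta to r*delta."""
    lam = []
    for d in deltas:
        taus = []
        for sep in pair_seps:
            i = np.argmax(sep >= d)        # first crossing of delta
            j = np.argmax(sep >= r * d)    # first crossing of r*delta
            if sep[i] >= d and sep[j] >= r * d and j > i:
                taus.append(t[j] - t[i])
        lam.append(np.log(r) / np.mean(taus) if taus else np.nan)
    return np.array(lam)

# An exponentially separating pair (rate 0.5 per day) yields a flat FSLE
# curve at 0.5 -- the signature of smooth, non-turbulent dispersion
t = np.linspace(0.0, 10.0, 1001)
pairs = [0.01 * np.exp(0.5 * t)]
lam = fsle(t, pairs, deltas=[0.1, 0.2, 0.4])
```

Richardson turbulent diffusion would instead appear as lambda(delta) proportional to delta^(-2/3) over the relevant range.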
Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neurons, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but consequently includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
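The contrast between interpolation (which honours every count, noise included) and kernel smoothing can be sketched on synthetic count data; here SciPy's `griddata` and `gaussian_filter` stand in for the R functions used by the authors, and all values are hypothetical:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

# Synthetic retinal cell counts: a central density peak plus counting noise,
# sampled at irregular locations across a 10 x 10 patch
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, (200, 2))
counts = (100 + 20 * np.exp(-((pts[:, 0] - 5)**2 + (pts[:, 1] - 5)**2) / 4)
          + rng.normal(0, 5, 200))

# Interpolation 'respects' the observations, outliers and all
gx, gy = np.mgrid[0:10:50j, 0:10:50j]
interp = griddata(pts, counts, (gx, gy), method='linear')

# Smoothing trades point fidelity for a clearer view of the dominant gradient
filled = np.nan_to_num(interp, nan=100.0)   # pad outside the convex hull
smooth = gaussian_filter(filled, sigma=2)
```

Contouring `interp` reproduces every sampling artefact; contouring `smooth` recovers the single central specialisation, which is the behaviour the paper attributes to the kernel methods.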
Xiaopeng, Bai; Tanaka, Yoshimasa; Ihara, Eikichi; Hirano, Katsuya; Nakano, Kayoko; Hirano, Mayumi; Oda, Yoshinao; Nakamura, Kazuhiko
2017-02-15
Duodenal reflux of fluids containing trypsin relates to refractory gastroesophageal reflux disease (GERD). Esophageal peristalsis and clearance are important factors in GERD pathogenesis. However, the function of trypsin in esophageal body contractility is not fully understood. In this study, the effects of trypsin on circular smooth muscle (CSM) and longitudinal smooth muscle (LSM) of the porcine esophageal body were examined. Trypsin elicited a concentration-dependent biphasic response, a major contraction and a subsequent relaxation, only in CSM. In CSM, contraction occurred at trypsin concentrations of 100 nM and relaxation at 1 μM. A proteinase-activated receptor (PAR)2 activating peptide, SLIGKV-NH2 (1 mM), induced a monophasic contraction. Those responses were unaffected by tetrodotoxin but abolished by the gap junction uncouplers carbenoxolone and octanol. They were also partially inhibited by a transient receptor potential vanilloid type 1 (TRPV1) antagonist and abolished by the combination of neurokinin receptor 1 (NK1) and NK2 antagonists, but not by an NK3 antagonist, suggesting a PAR2-TRPV1-substance P pathway in sensory neurons. Substance P (100 nM), an agonist for various NK receptors (NK1, NK2 and NK3) with differing affinities, induced significant contraction in CSM, but not in LSM. The contraction was also blocked by the combination of NK1 and NK2 antagonists, but not by the NK3 antagonist. Moreover, substance P-induced contractions were unaffected by the TRPV1 antagonist, but inhibited by a gap junction uncoupler. In conclusion, trypsin induced a biphasic response only in CSM, mediated by PAR2, TRPV1 and NK1/2. Gap junctions were indispensable in this tachykinin-induced response. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Psikuta, Agnes; Mert, Emel; Annaheim, Simon; Rossi, René M.
2018-02-01
To evaluate the quality of new energy-saving and performance-supporting building and urban settings, thermal sensation and comfort models are often used. The accuracy of these models depends on accurate prediction of the human thermo-physiological response, which, in turn, is highly sensitive to the local effect of clothing. This study aimed to develop an empirical regression model of the air gap thickness and the contact area in clothing to accurately simulate human thermal and perceptual responses. The statistical model reliably predicted both parameters for 14 body regions based on the clothing ease allowances. The effect of the standard error in air gap prediction on the thermo-physiological response was smaller than the differences between healthy humans. It was demonstrated that currently used assumptions and methods for determining the air gap thickness can produce a substantial error in all global, mean, and local physiological parameters and, hence, lead to a false estimation of the resultant physiological state of the human body, thermal sensation, and comfort. Thus, this model may help researchers to strive for improvement of human thermal comfort, health, productivity, safety, and overall sense of well-being with a simultaneous reduction of energy consumption and costs in the built environment.
A vorticity transport model to restore spatial gaps in velocity data
NASA Astrophysics Data System (ADS)
Ameli, Siavash; Shadden, Shawn
2017-11-01
Often, measurements of velocity data do not have full spatial coverage in the probed domain or near boundaries. These gaps can be due to missing measurements or masked regions of corrupted data. They confound interpretation and are problematic when the data are used to compute Lagrangian or trajectory-based analyses. Various techniques have been proposed to overcome coverage limitations in velocity data, such as unweighted least-squares fitting, empirical orthogonal function analysis, variational interpolation, and boundary modal analysis. In this talk, we present a vorticity transport PDE to reconstruct regions of missing velocity vectors. The transport model involves both nonlinear anisotropic diffusion and advection. This approach is shown to preserve the main features of the flow even in cases of large gaps, and the reconstructed regions are continuous up to second order. We illustrate results for high-frequency radar (HFR) measurements of ocean surface currents, as this is a common application with limited coverage. We demonstrate that the error of the method is of the same order as the error of the original velocity data. In addition, we have developed a web-based gateway for data restoration, and we will demonstrate a practical application using available data. This work is supported by NSF Grant No. 1520825.
Bol, M; Van Geyt, C; Baert, S; Decrock, E; Wang, N; De Bock, M; Gadicherla, A K; Randon, C; Evans, W H; Beele, H; Cornelissen, R; Leybaert, L
2013-04-01
Cryopreserved blood vessels are being increasingly employed in vascular reconstruction procedures but freezing/thawing is associated with significant cell death that may lead to graft failure. Vascular cells express connexin proteins that form gap junction channels and hemichannels. Gap junction channels directly connect the cytoplasm of adjacent cells and may facilitate the passage of cell death messengers leading to bystander cell death. Two hemichannels form a gap junction channel but these channels are also present as free non-connected hemichannels. Hemichannels are normally closed but may open under stressful conditions and thereby promote cell death. We here investigated whether blocking gap junctions and hemichannels could prevent cell death after cryopreservation. Inclusion of Gap27, a connexin channel inhibitory peptide, during cryopreservation and thawing of human saphenous veins and femoral arteries was evaluated by terminal deoxynucleotidyl transferase dUTP nick end labelling (TUNEL) assays and histological examination. We report that Gap27 significantly reduces cell death in human femoral arteries and saphenous veins when present during cryopreservation/thawing. In particular, smooth muscle cell death was reduced by 73% in arteries and 71% in veins, while endothelial cell death was reduced by 32% in arteries and 51% in veins. We conclude that inhibiting connexin channels during cryopreservation strongly promotes vascular cell viability. Copyright © 2012 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Comprehensive comparison of gap filling techniques for eddy covariance net carbon fluxes
NASA Astrophysics Data System (ADS)
Moffat, A. M.; Papale, D.; Reichstein, M.; Hollinger, D. Y.; Richardson, A. D.; Barr, A. G.; Beckstein, C.; Braswell, B. H.; Churkina, G.; Desai, A. R.; Falge, E.; Gove, J. H.; Heimann, M.; Hui, D.; Jarvis, A. J.; Kattge, J.; Noormets, A.; Stauch, V. J.
2007-12-01
We review fifteen techniques for estimating missing values of net ecosystem CO2 exchange (NEE) in eddy covariance time series and evaluate their performance for different artificial gap scenarios, based on a set of ten benchmark datasets from six forested sites in Europe. The goal of gap filling is the reproduction of the NEE time series; hence the present work focuses on estimating missing NEE values, not on editing or removing suspect values caused by systematic measurement errors (e.g. nighttime flux, advection). The gap filling was examined by generating fifty secondary datasets with artificial gaps (ranging in length from single half-hours to twelve consecutive days) for each benchmark dataset and evaluating the performance with a variety of statistical metrics. The performance of the gap filling varied among sites and depended on the level of aggregation (native half-hourly time step versus daily); long gaps were more difficult to fill than short gaps, and differences among the techniques were more pronounced during the day than at night. The non-linear regression techniques (NLRs), the look-up table (LUT), marginal distribution sampling (MDS), and the semi-parametric model (SPM) generally showed good overall performance. The artificial neural network based techniques (ANNs) were generally, if only slightly, superior to the other techniques. The simple interpolation technique of mean diurnal variation (MDV) showed a moderate but consistent performance. Several sophisticated techniques, the dual unscented Kalman filter (UKF), the multiple imputation method (MIM), the terrestrial biosphere model (BETHY), but also one of the ANNs and one of the NLRs, showed high biases, which resulted in a low reliability of the annual sums, indicating that additional development might be needed.
An uncertainty analysis comparing the estimated random error in the ten benchmark datasets with the artificial gap residuals suggested that the techniques are already at or very close to the noise limit of the measurements. Based on the techniques and site data examined here, the effect of gap filling on the annual sums of NEE is modest, with most techniques falling within a range of ±25 g C m-2 y-1.
Saccades to remembered targets: the effects of smooth pursuit and illusory stimulus motion
NASA Technical Reports Server (NTRS)
Zivotofsky, A. Z.; Rottach, K. G.; Averbuch-Heller, L.; Kori, A. A.; Thomas, C. W.; Dell'Osso, L. F.; Leigh, R. J.
1996-01-01
1. Measurements were made in four normal human subjects of the accuracy of saccades to remembered locations of targets that were flashed on a 20 x 30 deg random dot display that was either stationary or moving horizontally and sinusoidally at +/-9 deg at 0.3 Hz. During the interval between the target flash and the memory-guided saccade, the "memory period" (1.4 s), subjects either fixated a stationary spot or pursued a spot moving vertically sinusoidally at +/-9 deg at 0.3 Hz. 2. When saccades were made toward the location of targets previously flashed on a stationary background as subjects fixated the stationary spot, median saccadic error was 0.93 deg horizontally and 1.1 deg vertically. These errors were greater than for saccades to visible targets, which had median values of 0.59 deg horizontally and 0.60 deg vertically. 3. When targets were flashed as subjects smoothly pursued a spot that moved vertically across the stationary background, median saccadic error was 1.1 deg horizontally and 1.2 deg vertically, thus being of similar accuracy to when targets were flashed during fixation. In addition, the vertical component of the memory-guided saccade was much more closely correlated with the "spatial error" than with the "retinal error"; this indicated that, when programming the saccade, the brain had taken into account eye movements that occurred during the memory period. 4. When saccades were made to targets flashed during attempted fixation of a stationary spot on a horizontally moving background, a condition that produces a weak Duncker-type illusion of horizontal movement of the primary target, median saccadic error increased horizontally to 3.2 deg but was 1.1 deg vertically. 5. 
When targets were flashed as subjects smoothly pursued a spot that moved vertically on the horizontally moving background, a condition that induces a strong illusion of diagonal target motion, median saccadic error was 4.0 deg horizontally and 1.5 deg vertically; thus the horizontal error was greater than under any other experimental condition. 6. In most trials, the initial saccade to the remembered target was followed by additional saccades while the subject was still in darkness. These secondary saccades, which were executed in the absence of visual feedback, brought the eye closer to the target location. During paradigms involving horizontal background movement, these corrections were more prominent horizontally than vertically. 7. Further measurements were made in two subjects to determine whether inaccuracy of memory-guided saccades, in the horizontal plane, was due to mislocalization at the time that the target flashed, misrepresentation of the trajectory of the pursuit eye movement during the memory period, or both. 8. The magnitude of the saccadic error, both with and without corrections made in darkness, was mislocalized by approximately 30% of the displacement of the background at the time that the target flashed. The magnitude of the saccadic error also was influenced by net movement of the background during the memory period, corresponding to approximately 25% of net background movement for the initial saccade and approximately 13% for the final eye position achieved in darkness. 9. We formulated simple linear models to test specific hypotheses about which combinations of signals best describe the observed saccadic amplitudes. We tested the possibilities that the brain made an accurate memory of target location and a reliable representation of the eye movement during the memory period, or that one or both of these was corrupted by the illusory visual stimulus. 
Our data were best accounted for by a model in which both the working memory of target location and the internal representation of the horizontal eye movements were corrupted by the illusory visual stimulus. We conclude that extraretinal signals played only a minor role, in comparison with visual estimates of the direction of gaze, in planning eye movements to remembered targets.
Power transformations improve interpolation of grids for molecular mechanics interaction energies.
Minh, David D L
2018-02-18
A common strategy for speeding up molecular docking calculations is to precompute nonbonded interaction energies between a receptor molecule and a set of three-dimensional grids. The grids are then interpolated to compute energies for ligand atoms in many different binding poses. Here, I evaluate a smoothing strategy of taking a power transformation of grid point energies and an inverse transformation of the result from trilinear interpolation. For molecular docking poses from 85 protein-ligand complexes, this smoothing procedure leads to significant accuracy improvements, including an approximately twofold reduction in the root mean square error at a grid spacing of 0.4 Å, and retains the ability to rank docking poses even at a grid spacing of 0.7 Å. © 2018 Wiley Periodicals, Inc.
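The idea can be sketched on a hypothetical steep pair potential (an r^-12 repulsive wall rather than the paper's docking grids; the exponent p = 0.25 is illustrative): interpolating the power-transformed energies and then inverting the transform tracks the wall far better than interpolating the raw values:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical steep repulsive potential (Lennard-Jones-like r^-12 wall)
# tabulated on a coarse 5 x 5 x 5 grid
x = np.linspace(0.4, 2.0, 5)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
energy = (X**2 + Y**2 + Z**2) ** -6

p = 0.25  # power transform flattens the wall before trilinear interpolation
plain = RegularGridInterpolator((x, x, x), energy)          # raw energies
powered = RegularGridInterpolator((x, x, x), energy**p)     # transformed

pt = np.array([[0.55, 0.55, 0.55]])           # off-grid query point
exact = (3 * 0.55**2) ** -6
err_plain = abs(plain(pt)[0] - exact)
err_power = abs(powered(pt)[0] ** (1 / p) - exact)  # inverse transform
```

Trilinear interpolation is exact for (tri)linear functions, so flattening the curvature of the steep wall before interpolating is what buys the accuracy.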
Robust boundary treatment for open-channel flows in divergence-free incompressible SPH
NASA Astrophysics Data System (ADS)
Pahar, Gourabananda; Dhar, Anirban
2017-03-01
A robust Incompressible Smoothed Particle Hydrodynamics (ISPH) framework is developed to simulate specified inflow and outflow boundary conditions for open-channel flow. Being purely divergence-free, the framework yields a smooth and structured pressure distribution. An implicit treatment of the pressure Poisson equation and a Dirichlet boundary condition applied on the free surface minimize the error in velocity divergence. Beyond the inflow and outflow thresholds, multiple layers of dummy particles are created according to the specified boundary condition. The inflow boundary acts as a wave-maker. Fluid particles beyond the outflow threshold are removed and replaced with dummy particles with the specified boundary velocity. The framework is validated against different cases of open-channel flow with different boundary conditions. The model can efficiently capture flow evolution and vortex generation for arbitrary geometry and variable boundary conditions.
Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation
Gao, Zhanheng; Yu, Zeyun; Holst, Michael
2012-01-01
Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing “bad” triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method. PMID:23144522
Graphene-based topological insulator with an intrinsic bulk band gap above room temperature.
Kou, Liangzhi; Yan, Binghai; Hu, Feiming; Wu, Shu-Chun; Wehling, Tim O; Felser, Claudia; Chen, Changfeng; Frauenheim, Thomas
2013-01-01
Topological insulators (TIs) represent a new quantum state of matter characterized by robust gapless states inside the insulating bulk gap. The metallic edge states of a two-dimensional (2D) TI, known as the quantum spin Hall (QSH) effect, are immune to backscattering and carry fully spin-polarized dissipationless currents. However, existing 2D TIs realized in HgTe and InAs/GaSb suffer from small bulk gaps (<10 meV) well below room temperature, thus limiting their application in electronic and spintronic devices. Here, we report a new 2D TI comprising a graphene layer sandwiched between two Bi2Se3 slabs that exhibits a large intrinsic bulk band gap of 30-50 meV, making it viable for room-temperature applications. Distinct from previous strategies for enhancing the intrinsic spin-orbit coupling effect of the graphene lattice, the present graphene-based TI operates on a new mechanism of strong inversion between graphene Dirac bands and Bi2Se3 conduction bands. Strain engineering leads to effective control and substantial enhancement of the bulk gap. Recently reported synthesis of smooth graphene/Bi2Se3 interfaces demonstrates the feasibility of experimental realization of this new 2D TI structure, which holds great promise for nanoscale device applications.
Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.
1995-01-01
This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests in which the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows that significant bias error in the model attitude measurement can occur and that the error is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and the tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.
Object motion computation for the initiation of smooth pursuit eye movements in humans.
Wallace, Julian M; Stone, Leland S; Masson, Guillaume S
2005-04-01
Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed, such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For a type II diamond (where the direction of true object motion is dramatically different from the vector average of the one-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), ocular tracking is initiated in the vector-average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector-average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated by the introduction of more 2D information, to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
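The vector-average bias can be illustrated numerically. The edge orientations below define a hypothetical type II diamond and are arbitrary choices for illustration, not the stimuli used in the study.

```python
import numpy as np

def vector_average(v, edge_normals):
    """Each 1D edge detector sees only the normal component (v.n)n of
    the true velocity v; vector averaging combines these local signals."""
    components = [(v @ n) * n for n in edge_normals]
    return np.mean(components, axis=0)

# Hypothetical type II diamond: both edge normals lie on the same side of
# the true (horizontal) motion direction, so VA differs from IOC.
deg = np.pi / 180.0
normals = [np.array([np.cos(a), np.sin(a)]) for a in (20 * deg, 70 * deg)]
v_true = np.array([1.0, 0.0])

va = vector_average(v_true, normals)
va_error_deg = np.degrees(np.arctan2(va[1], va[0]))  # direction bias vs. truth
```

For these orientations the vector-average direction is biased by roughly 33 degrees away from the true horizontal motion, the kind of initial tracking error the study measures.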
Signature-forecasting and early outbreak detection system
Naumova, Elena N.; MacNeill, Ian B.
2008-01-01
Daily disease monitoring via a public health surveillance system provides valuable information on population risks. Efficient statistical tools for early detection of rapid changes in the disease incidence are a must for modern surveillance. The need for statistical tools for early detection of outbreaks that are not based on historical information is apparent. A system is discussed for monitoring cases of infections with a view to early detection of outbreaks and to forecasting the extent of detected outbreaks. We propose a set of adaptive algorithms for early outbreak detection that does not rely on extensive historical recording. We also incorporate knowledge of infectious disease epidemiology into the forecasts. To demonstrate this system we use data from the largest water-borne outbreak of cryptosporidiosis, which occurred in Milwaukee in 1993. Historical data are smoothed using a loess-type smoother. Upon receipt of a new datum, the smoothing is updated and estimates are made of the first two derivatives of the smooth curve, and these are used for near-term forecasting. Recent data and the near-term forecasts are used to compute a color-coded warning index, which quantifies the level of concern. The algorithms for computing the warning index have been designed to balance Type I errors (false prediction of an epidemic) and Type II errors (failure to correctly predict an epidemic). If the warning index signals a sufficiently high probability of an epidemic, then a forecast of the possible size of the outbreak is made. This longer term forecast is made by fitting a ‘signature’ curve to the available data. The effectiveness of the forecast depends upon the extent to which the signature curve captures the shape of outbreaks of the infection under consideration. PMID:18716671
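A minimal stand-in for the smooth-and-differentiate forecast step, using a Savitzky-Golay filter in place of the paper's loess-type smoother (an assumed substitution for illustration): the filter yields the smoothed series and its first two derivatives, from which a second-order Taylor step gives a near-term forecast.

```python
import numpy as np
from scipy.signal import savgol_filter

def taylor_forecast(counts, h=1.0, window=11, order=3):
    """Smooth daily case counts, estimate the first two derivatives at the
    latest point, and forecast h days ahead via a 2nd-order Taylor step."""
    y  = savgol_filter(counts, window, order)
    d1 = savgol_filter(counts, window, order, deriv=1)
    d2 = savgol_filter(counts, window, order, deriv=2)
    return y[-1] + d1[-1] * h + 0.5 * d2[-1] * h**2
```

On exactly quadratic growth the forecast is exact, since the filter fits a local polynomial; on real counts the derivatives feed a warning index rather than a point prediction.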
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state update with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
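The core idea, resampling on weights accumulated over a window of time steps rather than a single step, can be sketched as a toy illustration (not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_resample(particles, loglik_window):
    """Resample particles using log-likelihoods summed over a window of
    time steps (particle-smoother style), so a single outlying
    observation cannot dominate the weights.

    particles     : (n,) particle states
    loglik_window : (T, n) per-step log-likelihoods within the window
    """
    logw = np.sum(loglik_window, axis=0)       # accumulate over the window
    w = np.exp(logw - logw.max())              # stabilised weights
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

Particles that are consistently plausible across the window survive resampling, while a particle favoured by only one erroneous observation does not.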
Pace, Danielle F.; Aylward, Stephen R.; Niethammer, Marc
2014-01-01
We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall. PMID:23899632
Pace, Danielle F; Aylward, Stephen R; Niethammer, Marc
2013-11-01
We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall.
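The key property of the regularization in the two abstracts above, smoothing on each side of a sliding boundary without mixing motion across it, can be shown with a toy version. The boundary here is a hypothetical horizontal plane; the papers use locally adaptive diffusion tensors rather than this hard split.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sliding_aware_smooth(disp, boundary_row, sigma=2.0):
    """Regularize a displacement component independently on each side of
    a horizontal sliding boundary, preserving the discontinuity there
    (unlike a globally smooth regularizer)."""
    out = np.empty_like(disp)
    out[:boundary_row] = gaussian_filter(disp[:boundary_row], sigma)
    out[boundary_row:] = gaussian_filter(disp[boundary_row:], sigma)
    return out
```

A globally applied Gaussian filter would blur the jump in displacement at the boundary; the side-by-side version keeps the sliding discontinuity intact.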
Evaluation of new GRACE time-variable gravity data over the ocean
NASA Astrophysics Data System (ADS)
Chambers, Don P.
2006-09-01
Monthly GRACE gravity field models from the three science processing centers (CSR, GFZ, and JPL) are analyzed for the period from February 2003 to April 2005 over the ocean. The data are used to estimate maps of the mass component of sea level at smoothing radii of 500 km and 750 km. In addition to using new gravity field models, a filter has been applied to estimate and remove systematic errors in the coefficients that cause erroneous patterns in the maps of equivalent water level. The filter is described and its effects are discussed. The GRACE maps have been evaluated using a residual analysis with maps of altimeter sea level from Jason-1 corrected for steric variations using the World Ocean Atlas 2001 monthly climatology. The mean uncertainty of GRACE maps determined from an average of data from all 3 processing centers is estimated to be less than 1.8 cm RMS at 750 km smoothing and 2.4 cm at 500 km smoothing, which is better than was found previously using the first generation GRACE gravity fields.
Development of an Automatic Grid Generator for Multi-Element High-Lift Wings
NASA Technical Reports Server (NTRS)
Eberhardt, Scott; Wibowo, Pratomo; Tu, Eugene
1996-01-01
The procedure to generate the grid around a complex wing configuration is presented in this report. The automatic grid generation utilizes the Modified Advancing Front Method as a predictor and an elliptic scheme as a corrector. The scheme will advance the surface grid one cell outward and the newly obtained grid is corrected using the Laplace equation. The predictor-corrector step ensures that the grid produced will be smooth for every configuration. The predictor-corrector scheme is extended for a complex wing configuration. A new technique is developed to deal with the grid generation in the wing-gaps and on the flaps. It will create the grids that fill the gap on the wing surface and the gap created by the flaps. The scheme recognizes these configurations automatically so that minimal user input is required. By utilizing an appropriate sequence in advancing the grid points on a wing surface, the automatic grid generation for complex wing configurations is achieved.
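The elliptic corrector step amounts to relaxing interior grid nodes toward the average of their neighbours. A bare-bones 2-D version on a structured grid, using Jacobi iteration as a simplification of the scheme described:

```python
import numpy as np

def laplace_correct(x, y, iters=50):
    """Jacobi-style elliptic corrector: move each interior node toward the
    average of its four neighbours, keeping boundary nodes fixed, so the
    grid produced by the advancing-front predictor is smoothed."""
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y
```

Perturbing one interior node of a uniform grid and running the corrector pulls it back to the smooth (discretely harmonic) position implied by the fixed boundary.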
Hybrid density functional theory band structure engineering in hematite
NASA Astrophysics Data System (ADS)
Pozun, Zachary D.; Henkelman, Graeme
2011-06-01
We present a hybrid density functional theory (DFT) study of doping effects in α-Fe2O3, hematite. Standard DFT underestimates the band gap by roughly 75% and incorrectly identifies hematite as a Mott-Hubbard insulator. Hybrid DFT accurately predicts the proper structural, magnetic, and electronic properties of hematite and, unlike the DFT+U method, does not contain d-electron specific empirical parameters. We find that using a screened functional that smoothly transitions from 12% exact exchange at short ranges to standard DFT at long range accurately reproduces the experimental band gap and other material properties. We then show that the antiferromagnetic symmetry in the pure α-Fe2O3 crystal is broken by all dopants and that ligand field theory correctly predicts local magnetic moments on the dopants. We characterize the resulting band gaps for hematite doped by transition metals and the p-block post-transition metals. The specific case of Pd doping is investigated in order to correlate calculated doping energies and optical properties with experimentally observed photocatalytic behavior.
Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; McLauchlan, Lifford
2010-08-01
In this paper, diabetic retinopathy is chosen for a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
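Pixel duplication itself is a one-liner, which is the point of the comparison with interpolation; a minimal sketch:

```python
import numpy as np

def pixel_duplicate(img, factor=2):
    """Enlarge an image by integer pixel duplication (nearest-neighbour
    replication), a cheaper alternative to interpolation that introduces
    no new intensity values."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```

Because each source pixel becomes a factor-by-factor block, a subsequent smoothing filter averages over duplicated values, which is the interaction the error analysis examines.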
Curve fitting methods for solar radiation data modeling
NASA Astrophysics Data System (ADS)
Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder
2014-10-01
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
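The two-term Gaussian fit and its goodness-of-fit statistics can be sketched with SciPy. The model form follows the abstract; the data and starting values below are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model of the kind used for solar-radiation fits."""
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

def fit_stats(x, y, p0):
    """Fit the model and report RMSE and R2 goodness-of-fit statistics."""
    popt, _ = curve_fit(gauss2, x, y, p0=p0, maxfev=10000)
    r = y - gauss2(x, *popt)
    rmse = np.sqrt(np.mean(r ** 2))
    r2 = 1.0 - np.sum(r ** 2) / np.sum((y - y.mean()) ** 2)
    return popt, rmse, r2
```

On noiseless synthetic data generated from the model itself, RMSE is near zero and R2 near one, which is the sanity check before fitting measured radiation data.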
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed. These are horizontal x and y, range and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
Curve fitting methods for solar radiation data modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)
NASA Technical Reports Server (NTRS)
Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell M.
2006-01-01
Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.
Sequential reconstruction of driving-forces from nonlinear nonstationary dynamics
NASA Astrophysics Data System (ADS)
Güntürkün, Ulaş
2010-07-01
This paper describes a functional analysis-based method for the estimation of driving-forces from nonlinear dynamic systems. The driving-forces account for the perturbation inputs induced by the external environment or the secular variations in the internal variables of the system. The proposed algorithm is applicable to the problems for which there is too little or no prior knowledge to build a rigorous mathematical model of the unknown dynamics. We derive the estimator conditioned on the differentiability of the unknown system’s mapping, and smoothness of the driving-force. The proposed algorithm is an adaptive sequential realization of the blind prediction error method, where the basic idea is to predict the observables, and retrieve the driving-force from the prediction error. Our realization of this idea is embodied by predicting the observables one-step into the future using a bank of echo state networks (ESN) in an online fashion, and then extracting the raw estimates from the prediction error and smoothing these estimates in two adaptive filtering stages. The adaptive nature of the algorithm enables accurate retrieval of both slowly and rapidly varying driving-forces, as illustrated by simulations. Logistic and Moran-Ricker maps are studied in controlled experiments, exemplifying chaotic state and stochastic measurement models. The algorithm is also applied to the estimation of a driving-force from another nonlinear dynamic system that is stochastic in both state and measurement equations. The results are judged by the posterior Cramer-Rao lower bounds. The method is finally put to the test on a real-world application: extracting the Sun’s magnetic flux from the sunspot time series.
Bayesian inference of Calibration curves: application to archaeomagnetism
NASA Astrophysics Data System (ADS)
Lanos, P.
2003-04-01
The range of errors that occur at different stages of the archaeomagnetic calibration process are modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) provides an adaptation to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. In order to illustrate the model and inference methods used, we present results based on German archaeomagnetic data recently published by a German team.
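The penalized-spline smoothing of a secular-variation record can be sketched with SciPy's smoothing spline as an illustrative stand-in for the paper's penalized-maximum-likelihood natural cubic spline; the measurement errors enter as inverse-error weights. The data below are synthetic.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_secular_curve(t, decl, err, s_factor=1.0):
    """Weighted penalized cubic spline through declination measurements.
    Points with larger experimental error get smaller weight, and the
    smoothing parameter s trades fidelity against curvature."""
    w = 1.0 / np.asarray(err)
    return UnivariateSpline(t, decl, w=w, k=3, s=s_factor * len(t))
```

With weights set to the reciprocal measurement error, s roughly equal to the number of points makes the residuals comparable to the stated errors, a common heuristic for choosing the penalty.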
Accurate, meshless methods for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Raives, Matthias J.
2016-01-01
Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frishman, A.; Hoffman, D.K.; Kouri, D.J.
1997-07-01
We report a distributed approximating functional (DAF) fit of the ab initio potential-energy data of Liu [J. Chem. Phys. 58, 1925 (1973)] and Siegbahn and Liu [ibid. 68, 2457 (1978)]. The DAF-fit procedure is based on a variational principle, and is systematic and general. Only two adjustable parameters occur in the DAF, leading to a fit which is both accurate (to the level inherent in the input data; RMS error of 0.2765 kcal/mol) and smooth ("well-tempered," in DAF terminology). In addition, the LSTH surface of Truhlar and Horowitz based on this same data [J. Chem. Phys. 68, 2466 (1978)] is itself approximated using only the values of the LSTH surface on the same grid coordinate points as the ab initio data, and the same DAF parameters. The purpose of this exercise is to demonstrate that the DAF delivers a well-tempered approximation to a known function that closely mimics the true potential-energy surface. As is to be expected, since there is only roundoff error present in the LSTH input data, even more significant figures of fitting accuracy are obtained. The RMS error of the DAF fit to the LSTH surface at the input points is 0.0274 kcal/mol, and a smooth fit, accurate to better than 1 cm⁻¹, can be obtained using more than 287 input data points. © 1997 American Institute of Physics.
Meyhöfer, Inga; Kumari, Veena; Hill, Antje; Petrovsky, Nadine; Ettinger, Ulrich
2017-04-01
Current antipsychotic medications fail to satisfactorily reduce negative and cognitive symptoms and produce many unwanted side effects, necessitating the development of new compounds. Cross-species, experimental behavioural model systems can be valuable to inform the development of such drugs. The aim of the current study was to further test the hypothesis that controlled sleep deprivation is a safe and effective model system for psychosis when combined with oculomotor biomarkers of schizophrenia. Using a randomized counterbalanced within-subjects design, we investigated the effects of 1 night of total sleep deprivation in 32 healthy participants on smooth pursuit eye movements (SPEM), prosaccades (PS), antisaccades (AS), and self-ratings of psychosis-like states. Compared with a normal sleep control night, sleep deprivation was associated with reduced SPEM velocity gain, higher saccadic frequency at 0.2 Hz, elevated PS spatial error, and an increase in AS direction errors. Sleep deprivation also increased intra-individual variability of SPEM, PS, and AS measures. In addition, sleep deprivation induced psychosis-like experiences mimicking hallucinations, cognitive disorganization, and negative symptoms, which in turn had moderate associations with AS direction errors. Taken together, sleep deprivation resulted in psychosis-like impairments in SPEM and AS performance. However, diverging somewhat from the schizophrenia literature, sleep deprivation additionally disrupted PS control. Sleep deprivation thus represents a promising but possibly unspecific experimental model that may be helpful to further improve our understanding of the underlying mechanisms in the pathophysiology of psychosis and aid the development of antipsychotic and pro-cognitive drugs.
ERIC Educational Resources Information Center
Lyons, Kristen E.; Ghetti, Simona; Cornoldi, Cesare
2010-01-01
Using a new method for studying the development of false-memory formation, we examined developmental differences in the rates at which 6-, 7-, 9-, 10-, and 18-year-olds made two types of memory errors: backward causal-inference errors (i.e. falsely remembering having viewed the non-viewed cause of a previously viewed effect), and gap-filling…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif, E-mail: ertekin@illinois.edu
2015-12-14
The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.
Real-time orbit estimation for ATS-6 from redundant attitude sensors
NASA Technical Reports Server (NTRS)
Englar, T. S., Jr.
1975-01-01
A program installed in the ATSOCC on-line computer operates with attitude sensor data to produce a smoothed real-time orbit estimate. This estimate is obtained from a Kalman filter which enables the estimate to be maintained in the absence of T/M data. The results are described of analytical and numerical investigations into the sensitivity of Control Center output to the position errors resulting from the real-time estimation. The results of the numerical investigation, which used several segments of ATS-6 data gathered during the Sensor Data Acquisition run on August 19, 1974, show that the implemented system can achieve absolute position determination with an error of about 100 km, implying pointing errors of less than 0.2 deg in latitude and longitude. This compares very favorably with ATS-6 specifications of approximately 0.5 deg in latitude-longitude.
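The abstract's point that the filter maintains the estimate in the absence of telemetry data can be sketched with a scalar Kalman filter; all parameter values below are hypothetical, not the ATS-6 filter's.

```python
import numpy as np

def kalman_1d(zs, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model. When a
    measurement is missing (z is None), only the prediction step runs,
    mirroring operation during telemetry gaps: the estimate coasts while
    its uncertainty p grows until data return."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                        # predict: uncertainty grows
        if z is not None:                # update only when data arrive
            k = p / (p + r)              # Kalman gain
            x = x + k * (z - x)
            p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```

During a gap the output simply holds the last prediction; once measurements resume, the inflated uncertainty makes the filter weight them more heavily and re-converge.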
Smoothing of the bivariate LOD score for non-normal quantitative traits.
Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John
2005-12-30
Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtotic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.
Movement decoupling control for two-axis fast steering mirror
NASA Astrophysics Data System (ADS)
Wang, Rui; Qiao, Yongming; Lv, Tao
2017-02-01
A two-axis fast steering mirror based on flexure hinges and piezoelectric actuators is a complex system with time-varying, uncertain and strongly coupled dynamics. It is extremely difficult to achieve high-precision decoupling control with the traditional PID control method. Using feedback error learning, an inverse hysteresis model of the piezo-ceramic was established, based on an inner-product dynamic neural network capable of representing the nonlinear, non-smooth hysteresis. To improve actuator precision, an adaptive control method based on two dynamic neural networks with the piezo-ceramic inverse model was proposed. The experimental results indicate that, with the two-neural-network adaptive movement decoupling control algorithm, the static relative error is reduced from 4.44% to 0.30% and the coupling degree from 12.71% to 0.60%, while the dynamic relative error is reduced from 13.92% to 2.85% and the coupling degree from 2.63% to 1.17%.
Toward Automatic Verification of Goal-Oriented Flow Simulations
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2014-01-01
We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
NASA Technical Reports Server (NTRS)
Aminpour, Mohammad
1995-01-01
The work reported here pertains only to the first year of research for a three year proposal period. As a prelude to this two dimensional interface element, the one dimensional element was tested and errors were discovered in the code for built-up structures and curved interfaces. These errors were corrected and the benchmark Boeing composite crown panel was analyzed successfully. A study of various splines led to the conclusion that cubic B-splines best suit this interface element application. A least squares approach combined with cubic B-splines was constructed to make a smooth function from the noisy data obtained with random error in the coordinate data points of the Boeing crown panel analysis. Preliminary investigations for the formulation of discontinuous 2-D shell and 3-D solid elements were conducted.
A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions
NASA Astrophysics Data System (ADS)
Exl, Lukas
2017-12-01
An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
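The core of such a Fourier-based Poisson solve can be sketched in a few lines. The snippet below is a simplified periodic analogue in Python/NumPy, not the paper's free-space method or its MATLAB implementation; the function name and test field are illustrative assumptions. It shows the O(N log N) structure: forward FFT of the density, division by |k|^2, inverse FFT.

```python
import numpy as np

def poisson_periodic(rho, L=1.0):
    """Solve -laplacian(phi) = rho on a periodic cube via FFT, O(N log N).

    A hypothetical sketch of the spectral idea; the paper's solver handles
    the harder unbounded (free-space) problem with controlled error.
    """
    n = rho.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)          # wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rho_hat = np.fft.fftn(rho)                           # density spectrum
    phi_hat = np.zeros_like(rho_hat)
    nz = k2 != 0                                         # skip the mean mode
    phi_hat[nz] = rho_hat[nz] / k2[nz]                   # invert -laplacian
    return np.fft.ifftn(phi_hat).real
```

For a smooth density whose Fourier series converges rapidly, the spectral division is exact per mode, which mirrors the paper's point that the overall error is driven by the convergence of the density's Fourier series.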
Hovakimyan, N; Nardi, F; Calise, A; Kim, Nakwan
2002-01-01
We consider adaptive output feedback control of uncertain nonlinear systems, in which both the dynamics and the dimension of the regulated system may be unknown. However, the relative degree of the regulated output is assumed to be known. Given a smooth reference trajectory, the problem is to design a controller that forces the system measurement to track it with bounded errors. The classical approach requires a state observer. Finding a good observer for an uncertain nonlinear system is not an obvious task. We argue that it is sufficient to build an observer for the output tracking error. Ultimate boundedness of the error signals is shown through Lyapunov's direct method. The theoretical results are illustrated in the design of a controller for a fourth-order nonlinear system of relative degree two and a high-bandwidth attitude command system for a model R-50 helicopter.
Du, Mao-Hua
2015-04-02
We know that native point defects play an important role in carrier transport properties of CH3NH3PbI3. However, the nature of many important defects remains controversial due partly to the conflicting results reported by recent density functional theory (DFT) calculations. In this Letter, we show that self-interaction error and the neglect of spin–orbit coupling (SOC) in many previous DFT calculations resulted in incorrect positions of valence and conduction band edges, although their difference, which is the band gap, is in good agreement with the experimental value. Moreover, this problem has led to incorrect predictions of defect-level positions. Hybrid density functional calculations, which partially correct the self-interaction error and include the SOC, show that, among native point defects (including vacancies, interstitials, and antisites), only the iodine vacancy and its complexes induce deep electron and hole trapping levels inside of the band gap, acting as nonradiative recombination centers.
NASA Astrophysics Data System (ADS)
Pieper, Michael
Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or Thermal Infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, which have a linear relationship between their ground radiances and at-aperture radiances determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fitting of the at-aperture radiance and estimated ground radiance is done to estimate the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are found by finding the temperature that provides the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions enabling their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affect the shape of atmospheric estimates and the clear channel assumption affects their magnitude. Emissivity spectra for solids usually have some roughness. 
The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise and by wavelength calibration. The ways these errors interact determine the overall TES performance. Since the AC and TES processes are interwoven, any errors in AC are transferred to TES and the final temperature and emissivity estimates. Combining the two models, shape errors caused by the blackbody assumption are transferred to the emissivity estimates, where magnitude errors from the clear channel assumption are compensated by TES temperature-induced emissivity errors. The ability of the temperature-induced error to compensate for such atmospheric errors makes it difficult to determine the correct atmospheric parameters for a scene. With these models we are able to determine the expected quality of estimated emissivity spectra based on the quality of blackbody-like materials on the ground, the emissivity of the materials being searched for, and the properties of the sensor. The quality of material emissivity spectra is a key factor in determining detection performance for a material in a scene.
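The smoothness-based TES step described above can be illustrated with a minimal sketch: scan candidate temperatures, divide the ground radiance by the Planck function at each, and keep the temperature whose implied emissivity spectrum has the least curvature. The function names and the second-difference roughness metric are illustrative assumptions, not the thesis code.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(wl_um, T):
    """Blackbody spectral radiance at wavelength wl_um (micrometres), temp T (K)."""
    wl = wl_um * 1e-6
    return (2 * H * C**2 / wl**5) / math.expm1(H * C / (wl * KB * T))

def tes_smoothest(wavelengths, ground_radiance, t_grid):
    """Pick the temperature whose implied emissivity spectrum is smoothest
    (smallest sum of squared second differences), in the spirit of
    smoothness-based temperature-emissivity separation."""
    best_t, best_rough = None, float("inf")
    for T in t_grid:
        eps = [L / planck(w, T) for w, L in zip(wavelengths, ground_radiance)]
        rough = sum((eps[i - 1] - 2 * eps[i] + eps[i + 1]) ** 2
                    for i in range(1, len(eps) - 1))
        if rough < best_rough:
            best_t, best_rough = T, rough
    return best_t
```

With a flat true emissivity the correct temperature yields zero curvature, so the search recovers it exactly; real solid spectra have some roughness, which is precisely the smoothing error the model above quantifies.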
Mapping forest canopy gaps using air-photo interpretation and ground surveys
Fox, T.J.; Knutson, M.G.; Hines, R.K.
2000-01-01
Canopy gaps are important structural components of forested habitats for many wildlife species. Recent improvements in the spatial accuracy of geographic information system tools facilitate accurate mapping of small canopy features such as gaps. We compared canopy-gap maps generated using ground survey methods with those derived from air-photo interpretation. We found that maps created from high-resolution air photos were more accurate than those created from ground surveys. Errors of omission were 25.6% for the ground-survey method and 4.7% for the air-photo method. One variable of interest in songbird research is the distance from nests to gap edges. Distances from real and simulated nests to gap edges were longer using the ground-survey maps versus the air-photo maps, indicating that gap omission could potentially bias the assessment of spatial relationships. If research or management goals require location and size of canopy gaps and specific information about vegetation structure, we recommend a 2-fold approach. First, canopy gaps can be located and the perimeters defined using 1:15,000-scale or larger aerial photographs and the methods we describe. Mapped gaps can then be field-surveyed to obtain detailed vegetation data.
A meta-analytic review of two modes of learning and the description-experience gap.
Wulff, Dirk U; Mergenthaler-Canseco, Max; Hertwig, Ralph
2018-02-01
People can learn about the probabilistic consequences of their actions in two ways: One is by consulting descriptions of an action's consequences and probabilities (e.g., reading up on a medication's side effects). The other is by personally experiencing the probabilistic consequences of an action (e.g., beta testing software). In principle, people taking each route can reach analogous states of knowledge and consequently make analogous decisions. In the last dozen years, however, research has demonstrated systematic discrepancies between description- and experience-based choices. This description-experience gap has been attributed to factors including reliance on a small set of experience, the impact of recency, and different weighting of probability information in the two decision types. In this meta-analysis focusing on studies using the sampling paradigm of decisions from experience, we evaluated these and other determinants of the description-experience gap by reference to more than 70,000 choices made by more than 6,000 participants. We found, first, a robust description-experience gap but also a key moderator, namely, problem structure. Second, the largest determinant of the gap was reliance on small samples and the associated sampling error: free to terminate search, individuals explored too little to experience all possible outcomes. Third, the gap persisted when sampling error was basically eliminated, suggesting other determinants. Fourth, the occurrence of recency was contingent on decision makers' autonomy to terminate search, consistent with the notion of optional stopping. Finally, we found indications of different probability weighting in decisions from experience versus decisions from description when the problem structure involved a risky and a safe option. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
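The small-sample mechanism described above is easy to reproduce: with personal samples of only a few draws, a rare outcome is frequently never experienced at all, even though the sample mean is unbiased. The simulation below is an illustrative sketch; the option payoffs, sample size, and function names are assumptions, not values taken from the meta-analysis.

```python
import random

def experienced_value(p_rare, payoff_rare, n_samples, rng):
    """Mean payoff observed in a small personal sample of a risky option."""
    draws = [payoff_rare if rng.random() < p_rare else 0.0
             for _ in range(n_samples)]
    return sum(draws) / n_samples

def gap_demo(p_rare=0.1, payoff_rare=32.0, n_samples=7, trials=20000, seed=1):
    """Compare the described expected value with experienced sample means,
    and count how often the rare outcome is never encountered."""
    rng = random.Random(seed)
    described = p_rare * payoff_rare            # value known from description
    missed, total = 0, 0.0
    for _ in range(trials):
        v = experienced_value(p_rare, payoff_rare, n_samples, rng)
        total += v
        if v == 0.0:
            missed += 1                         # rare outcome never experienced
    return described, total / trials, missed / trials
```

With p = 0.1 and seven draws, roughly (0.9)^7, i.e. about 48%, of samples contain no rare outcome at all, so in nearly half of all experiences the risky option looks like a sure zero, even though the average experienced value matches the described one.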
Bing, Zhenshan; Cheng, Long; Chen, Guang; Röhrbein, Florian; Huang, Kai; Knoll, Alois
2017-04-04
Snake-like robots with 3D locomotion ability have significant advantages of adaptive travelling in diverse complex terrain over traditional legged or wheeled mobile robots. Despite numerous developed gaits, these snake-like robots suffer from unsmooth gait transitions when changing the locomotion speed, direction, and body shape, which can potentially cause undesired movement and abnormal torque. Hence, there exists a knowledge gap for snake-like robots to achieve autonomous locomotion. To address this problem, this paper presents smooth slithering gait transition control based on a lightweight central pattern generator (CPG) model for snake-like robots. First, based on the convergence behavior of the gradient system, a lightweight CPG model with fast computing time was designed and compared with other widely adopted CPG models. Then, by reshaping the body into a more stable geometry, the slithering gait was modified and studied based on the proposed CPG model, including the gait transition of locomotion speed, moving direction, and body shape. In contrast to the sinusoid-based method, extensive simulations and prototype experiments finally demonstrated that smooth slithering gait transition can be effectively achieved using the proposed CPG-based control method without generating undesired locomotion and abnormal torque.
Photonic Waveguide Choke Joint with Absorptive Loading
NASA Technical Reports Server (NTRS)
Wollack, Edward J. (Inventor); U-Yen, Kongpop (Inventor); Chuss, David T. (Inventor)
2016-01-01
A photonic waveguide choke includes a first waveguide flange member having periodic metal tiling pillars, a dissipative dielectric material positioned within an area between the periodic metal tiling pillars and a second waveguide flange member disposed to be coupled with the first waveguide flange member and in spaced-apart relationship separated by a gap. The first waveguide flange member has a substantially smooth surface, and the second waveguide flange member has an array of two-dimensional pillar structures formed therein.
1983-10-01
types such as the Alberta, Plainview, Scottsbluff, Eden Valley and Hell Gap (Plano Complex). A private collector from Sheyenne, North Dakota--on the...Grafton) (Michlovic 1979). An apparently early type point of the Plano Complex (Alberta point) was found near the Manitoba community of Manitou (Pettipas...with the DL-S Burial Complex include miniature, smooth mortuary vessels, sometimes decorated with incised thunderbird designs and/or raised lizards or
Crossover between few and many fermions in a harmonic trap
NASA Astrophysics Data System (ADS)
Grining, Tomasz; Tomza, Michał; Lesiuk, Michał; Przybytek, Michał; Musiał, Monika; Moszynski, Robert; Lewenstein, Maciej; Massignan, Pietro
2015-12-01
The properties of a balanced two-component Fermi gas in a one-dimensional harmonic trap are studied by means of the coupled-cluster method. For few fermions we recover the results of exact diagonalization, yet with this method we are able to study much larger systems. We compute the energy, the chemical potential, the pairing gap, and the density profile of the trapped clouds, smoothly mapping the crossover between the few-body and many-body limits. The energy is found to converge surprisingly rapidly to the many-body result for every value of the interaction strength. Many more particles are instead needed to give rise to the nonanalytic behavior of the pairing gap, and to smoothen the pronounced even-odd oscillations of the chemical potential induced by the shell structure of the trap.
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
NASA Astrophysics Data System (ADS)
Kunimura, Shinsuke; Ohmori, Hitoshi
We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result of a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal bonded abrasive wheel, then a metal-resin bonded abrasive wheel, followed by a conductive rubber bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively in this process. Flatness over the whole surface was improved by performing the first and second steps. After the third step, peak to valley (PV) and root mean square (rms) values in an area of 0.72 x 0.54 mm2 on the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro roughness were efficiently reduced by performing ELID grinding using the above three kinds of abrasive wheels because of the high removal rate of ELID grinding, and residual small irregularities were reduced by short time MRF. This process makes it possible to produce flat and smooth surfaces in several hours.
Error Correction, Control Systems and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Smith, Earl B.
2004-01-01
This paper discusses approaches to dealing with errors. While error correction and communication are important when dealing with spacecraft vehicles, control system design is also important. There will be certain commands that one wants a motion device to execute, and an adequate control system is necessary to ensure that the instruments and devices receive those commands. As discussed later, the actual value will not always equal the intended or desired value; hence, an adequate controller is needed to close the gap between the two values.
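The idea of a controller closing the gap between desired and actual value can be sketched with the simplest feedback law: a proportional command on a toy integrator plant. The plant, gains, and names below are hypothetical illustrations, not a spacecraft model from the paper.

```python
def p_control(setpoint=1.0, kp=0.5, dt=0.1, steps=200):
    """Proportional control of an integrator plant dx/dt = u:
    each step the command is proportional to the gap (error), and the
    gap between actual and desired value shrinks geometrically."""
    x = 0.0
    errors = []
    for _ in range(steps):
        e = setpoint - x      # the gap between desired and actual value
        u = kp * e            # command proportional to the gap
        x += u * dt           # plant integrates the command
        errors.append(abs(e))
    return x, errors
```

Here the error decays by a factor (1 - kp*dt) per step, so after 200 steps the actual value tracks the setpoint to well under 0.1%; fuzzy-logic controllers replace the fixed gain kp with rule-based, interpolated gains.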
Singh, Prashant; Harbola, Manoj K.; Johnson, Duane D.
2017-09-08
Here, this work constitutes a comprehensive and improved account of the electronic-structure and mechanical properties of silicon-nitride (Si3N4) polymorphs via the van Leeuwen and Baerends (LB) exchange-corrected local density approximation (LDA), which enforces the exact asymptotic behavior of the exchange potential. The calculated lattice constants, bulk moduli, and electronic band structures of Si3N4 polymorphs are in good agreement with experimental results. We also show that, for a single electron in a hydrogen atom, spherical well, or harmonic oscillator, the LB-corrected LDA reduces the (self-interaction) error relative to the exact total energy to ~10%, a factor of three to four lower than standard LDA, due to a dramatically improved representation of the exchange potential.
Crash testing difference-smoothing algorithm on a large sample of simulated light curves from TDC1
NASA Astrophysics Data System (ADS)
Rathna Kumar, S.
2017-09-01
In this work, we propose refinements to the difference-smoothing algorithm for the measurement of time delay from the light curves of the images of a gravitationally lensed quasar. The refinements mainly consist of a more pragmatic approach to choose the smoothing time-scale free parameter, generation of more realistic synthetic light curves for the estimation of time delay uncertainty and using a plot of normalized χ2 computed over a wide range of trial time delay values to assess the reliability of a measured time delay and also for identifying instances of catastrophic failure. We rigorously tested the difference-smoothing algorithm on a large sample of more than a thousand pairs of simulated light curves having known true time delays between them from the two most difficult 'rungs' - rung3 and rung4 - of the first edition of Strong Lens Time Delay Challenge (TDC1) and found an inherent tendency of the algorithm to measure the magnitude of time delay to be higher than the true value of time delay. However, we find that this systematic bias is eliminated by applying a correction to each measured time delay according to the magnitude and sign of the systematic error inferred by applying the time delay estimator on synthetic light curves simulating the measured time delay. Following these refinements, the TDC performance metrics for the difference-smoothing algorithm are found to be competitive with those of the best performing submissions of TDC1 for both the tested 'rungs'. The MATLAB codes used in this work and the detailed results are made publicly available.
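The bias-correction step, inferring the systematic error from synthetic data that simulate the measured delay and subtracting it, can be sketched with a toy scalar stand-in. The 8% multiplicative bias, the noise level, and the function names are assumptions for illustration; the real algorithm applies the estimator to full synthetic light curves, not scalar delays.

```python
import random

def biased_estimator(true_delay, rng):
    """Hypothetical stand-in for a delay estimator whose magnitude is
    biased high (assumed 8% multiplicative bias plus Gaussian noise)."""
    return true_delay * 1.08 + rng.gauss(0.0, 0.5)

def corrected_delay(measured, rng, n_synth=500):
    """Run the estimator on synthetic data simulating the measured delay,
    infer the systematic error (mean offset), and subtract it."""
    synth = [biased_estimator(measured, rng) for _ in range(n_synth)]
    systematic = sum(synth) / n_synth - measured
    return measured - systematic
```

For a true delay of 100 days the raw measurement comes out near 108, while one round of this correction pulls the estimate back to within about a day of the truth; iterating the correction would tighten it further.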
Botschko, Yehudit; Yarkoni, Merav; Joshua, Mati
2018-01-01
When animal behavior is studied in a laboratory environment, the animals are often extensively trained to shape their behavior. A crucial question is whether the behavior observed after training is part of the natural repertoire of the animal or represents an outlier in the animal's natural capabilities. This can be investigated by assessing the extent to which the target behavior is manifested during the initial stages of training and the time course of learning. We explored this issue by examining smooth pursuit eye movements in monkeys naïve to smooth pursuit tasks. We recorded the eye movements of monkeys from the 1st days of training on a step-ramp paradigm. We used bright spots, monkey pictures and scrambled versions of the pictures as moving targets. We found that during the initial stages of training, the pursuit initiation was largest for the monkey pictures and in some direction conditions close to target velocity. When the pursuit initiation was large, the monkeys mostly continued to track the target with smooth pursuit movements while correcting for displacement errors with small saccades. Two weeks of training increased the pursuit eye velocity in all stimulus conditions, whereas further extensive training enhanced pursuit slightly more. The training decreased the coefficient of variation of the eye velocity. Anisotropies that grade pursuit across directions were observed from the 1st day of training and mostly persisted across training. Thus, smooth pursuit in the step-ramp paradigm appears to be part of the natural repertoire of monkeys' behavior and training adjusts monkeys' natural predisposed behavior.
Discrete wavelet transform: a tool in smoothing kinematic data.
Ismail, A R; Asfour, S S
1999-03-01
Motion analysis systems typically introduce noise to the displacement data recorded. Butterworth digital filters have been used to smooth the displacement data in order to obtain smoothed velocities and accelerations. However, this technique does not yield satisfactory results, especially when dealing with complex kinematic motions that occupy the low- and high-frequency bands. The use of the discrete wavelet transform, as an alternative to digital filters, is presented in this paper. The transform passes the original signal through two complementary low- and high-pass FIR filters and decomposes the signal into an approximation function and a detail function. Further decomposition of the signal results in transforming the signal into a hierarchy set of orthogonal approximation and detail functions. A reverse process is employed to perfectly reconstruct the signal (inverse transform) back from its approximation and detail functions. The discrete wavelet transform was applied to the displacement data recorded by Pezzack et al., 1977. The smoothed displacement data were twice differentiated and compared to Pezzack et al.'s acceleration data in order to choose the most appropriate filter coefficients and decomposition level on the basis of maximizing the percentage of retained energy (PRE) and minimizing the root mean square error (RMSE). Daubechies wavelet of the fourth order (Db4) at the second decomposition level showed better results than both the biorthogonal and Coiflet wavelets (PRE = 97.5%, RMSE = 4.7 rad s-2). The Db4 wavelet was then used to compress complex displacement data obtained from a noisy mathematically generated function. Results clearly indicate superiority of this new smoothing approach over traditional filters.
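The decompose, zero-the-details, reconstruct pipeline described above can be sketched with the simplest wavelet, the Haar. The paper uses the fourth-order Daubechies (Db4) filters, so this is an illustrative analogue with shorter filters, not the authors' implementation; function names are assumptions.

```python
def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Perfect reconstruction from approximation and detail functions."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x

def smooth(x, levels=2):
    """Decompose to the chosen level, zero the detail functions,
    reconstruct: a crude wavelet denoiser (length divisible by 2**levels)."""
    stack = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        stack.append(d)
    for d in reversed(stack):
        x = haar_idwt(x, [0.0] * len(d))
    return x
```

In practice one thresholds rather than zeroes the detail coefficients, and the decomposition level is chosen, as in the paper, by trading retained energy against the RMSE of the twice-differentiated signal.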
Characterization of in Band Stray Light in SBUV-2 Instruments
NASA Technical Reports Server (NTRS)
Huang, L. K.; DeLand, M. T.; Taylor, S. L.; Flynn, L. E.
2014-01-01
Significant in-band stray light (IBSL) error at solar zenith angle (SZA) values larger than 77deg near sunset in 4 SBUV/2 (Solar Backscattered Ultraviolet) instruments, on board the NOAA-14, 17, 18 and 19 satellites, has been characterized. The IBSL error is caused by large surface reflection and scattering of the air-gapped depolarizer in front of the instrument's monochromator aperture. The source of the IBSL error is direct solar illumination of instrument components near the aperture rather than from earth shine. The IBSL contamination at 273 nm can reach 40% of earth radiance near sunset, which results in as much as a 50% error in the retrieved ozone from the upper stratosphere. We have analyzed SBUV/2 albedo measurements on both the dayside and nightside to develop an empirical model for the IBSL error. This error has been corrected in the V8.6 SBUV/2 ozone retrieval.
Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting
Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut
2016-01-01
Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927
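The idea of feeding the network's own forecast error back as an input can be sketched with a deliberately minimal linear one-step forecaster. This omits the ridge polynomial higher-order terms entirely; the LMS-style update and all names below are simplifying assumptions, not the RPNN-EF architecture.

```python
def train_error_feedback(series, lr=0.05, epochs=30):
    """Tiny one-step forecaster with error feedback: the previous forecast
    error is an extra input, echoing the error-feedback recurrence."""
    w_y, w_e, b = 0.0, 0.0, 0.0
    rmse = []
    for _ in range(epochs):
        err_prev, sq = 0.0, 0.0
        for t in range(1, len(series)):
            pred = w_y * series[t - 1] + w_e * err_prev + b
            err = series[t] - pred
            # LMS-style gradient step on the squared one-step error
            w_y += lr * err * series[t - 1]
            w_e += lr * err * err_prev
            b += lr * err
            err_prev = err
            sq += err * err
        rmse.append((sq / (len(series) - 1)) ** 0.5)
    return (w_y, w_e, b), rmse
```

Even in this stripped-down form, the error-feedback input carries information about the recent past that a single lagged value misses, which is the intuition behind RPNN-EF's improvement over output-feedback recurrence.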
Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument
NASA Astrophysics Data System (ADS)
Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory
2014-10-01
The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so that the size of the smallest features which can be resolved from the data increases and spatial sampling errors increase with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors are created by smoothing of features at or below the footprint size (blurring) and by inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points and the spatial spectrum of the radiance field.
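Both error mechanisms can be made concrete for the simplest case of a uniform (boxcar) footprint, whose system transfer function, the Fourier transform of its point response, is a sinc in spatial frequency; coarse sampling then folds frequencies above Nyquist back down. The footprint size and frequencies below are illustrative assumptions, not CERES parameters.

```python
import math

def footprint_mtf(freq, footprint):
    """Transfer function of a uniform (boxcar) footprint of given size:
    the Fourier transform of its point response, sin(pi f w)/(pi f w).
    Values below 1 quantify blurring of features at that frequency."""
    x = math.pi * freq * footprint
    return 1.0 if x == 0 else math.sin(x) / x

def aliased_frequency(freq, spacing):
    """Frequency to which 'freq' folds when sampled at the given spacing;
    the Nyquist frequency is 1/(2*spacing)."""
    fs = 1.0 / spacing
    f = freq % fs
    return min(f, fs - f)
```

As the footprint grows with nadir angle, the MTF rolls off at lower frequencies (more blurring), while wider sample spacing lowers the Nyquist frequency and increases the power available to alias.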
Calibrating photometric redshifts of luminous red galaxies
Padmanabhan, Nikhil; Budavari, Tamas; Schlegel, David J.; ...
2005-05-01
We discuss the construction of a photometric redshift catalogue of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS), emphasizing the principal steps necessary for constructing such a catalogue: (i) photometrically selecting the sample, (ii) measuring photometric redshifts and their error distributions, and (iii) estimating the true redshift distribution. We compare two photometric redshift algorithms for these data and find that they give comparable results. Calibrating against the SDSS and SDSS–2dF (Two Degree Field) spectroscopic surveys, we find that the photometric redshift accuracy is σ~ 0.03 for redshifts less than 0.55 and worsens at higher redshift (~0.06 for z < 0.7). These errors are caused by photometric scatter, as well as systematic errors in the templates, filter curves and photometric zero-points. We also parametrize the photometric redshift error distribution with a sum of Gaussians and use this model to deconvolve the errors from the measured photometric redshift distribution to estimate the true redshift distribution. We pay special attention to the stability of this deconvolution, regularizing the method with a prior on the smoothness of the true redshift distribution. The methods that we develop are applicable to general photometric redshift surveys.
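The regularized deconvolution step can be sketched as a generic Tikhonov problem on a discretized convolution: minimize the data misfit plus a smoothness penalty on the recovered distribution. This is a swapped-in generic technique for illustration, not the paper's Gaussian-sum parametrization; the kernel, penalty weight, and grid are assumptions.

```python
import numpy as np

def deconvolve_smooth(y, kernel, lam=1e-3):
    """Smoothness-regularized deconvolution:
    minimize ||K x - y||^2 + lam * ||D x||^2, with D = second differences,
    so the recovered distribution is biased toward smooth solutions."""
    n = len(y)
    K = np.zeros((n, n))
    half = len(kernel) // 2
    for i in range(n):
        for j, k in enumerate(kernel):
            col = i + j - half
            if 0 <= col < n:
                K[i, col] = k          # banded convolution matrix
    D = np.diff(np.eye(n), 2, axis=0)  # (n-2) x n second-difference operator
    A = K.T @ K + lam * D.T @ D
    return np.linalg.solve(A, K.T @ y)
```

Without the lam * D.T @ D term the normal equations are nearly singular wherever the kernel's spectrum vanishes, which is exactly the instability the smoothness prior is there to control.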
A Starshade Petal Error Budget for Exo-Earth Detection and Characterization
NASA Technical Reports Server (NTRS)
Shaklan, Stuart B.; Marchen, Luis; Lisman, P. Douglas; Cady, Eric; Martin, Stefan; Thomson, Mark; Dumont, Philip; Kasdin, N. Jeremy
2011-01-01
We present a starshade error budget with engineering requirements that are well within the current manufacturing and metrology capabilities. The error budget is based on an observational scenario in which the starshade spins about its axis on timescales short relative to the zodi-limited integration time, typically several hours. The scatter from localized petal errors is smoothed into annuli around the center of the image plane, resulting in a large reduction in the background flux variation while reducing thermal gradients caused by structural shadowing. Having identified the performance sensitivity to petal shape errors with spatial periods of 3-4 cycles/petal as the most challenging aspect of the design, we have adopted and modeled a manufacturing approach that mitigates these perturbations with 1-meter-long precision edge segments positioned using commercial metrology that readily meets assembly requirements. We have performed detailed thermal modeling and show that the expected thermal deformations are well within the requirements as well. We compare the requirements for four cases: a 32 meter diameter starshade with a 1.5 meter telescope, analyzed at 75 and 90 milliarcseconds, and a 40 meter diameter starshade with a 4 meter telescope, analyzed at 60 and 75 milliarcseconds.
Characterizing the impact of model error in hydrologic time series recovery inverse problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.
Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is uncovered.
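The convolution structure at the heart of this inverse problem can be sketched with a lower-triangular Toeplitz operator; the impulse response and series below are hypothetical, and the paper's treatment of approximately known transfer functions is not reproduced here.

```python
def toeplitz_convolve(h, s):
    """y = T s, where T is the lower-triangular Toeplitz matrix whose first
    column is the impulse response h (zero-padded)."""
    n = len(s)
    return [sum(h[i - j] * s[j] for j in range(i + 1) if i - j < len(h))
            for i in range(n)]

def toeplitz_deconvolve(h, y):
    """Recover s from y = T s by forward substitution (requires h[0] != 0).
    With an approximate h, the recovery error is what the paper's bounds address."""
    n = len(y)
    s = [0.0] * n
    for i in range(n):
        acc = sum(h[i - j] * s[j] for j in range(i) if i - j < len(h))
        s[i] = (y[i] - acc) / h[0]
    return s
```

Because T is triangular, the well-determined problem is solvable exactly when h is exact; systematic errors in h propagate through the forward substitution, which is the mechanism the derived bounds quantify.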
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
Narayan, Sreenath; Kalhan, Satish C; Wilson, David L
2013-05-01
To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.
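A minimal sketch of a two-cluster k-means on scalar field-map intensities, in the spirit of the error-region detection step described above; the actual REFINED pipeline iterates this together with hole-filling and is considerably more involved.

```python
def kmeans_1d(values, iters=20):
    """Tiny two-cluster Lloyd's k-means on scalar intensities.
    Returns the two cluster centers, sorted ascending. Illustrative only."""
    c = [min(values), max(values)]  # initialize centers at the extremes
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # assign each value to the nearer center
            groups[abs(v - c[1]) < abs(v - c[0])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return sorted(c)
```

On a bimodal field map, the two centers separate "plausible" from "error" intensity regions, which can then be flagged for reinitialization.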
Measurement error in environmental epidemiology and the shape of exposure-response curves.
Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E
2011-09-01
Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review literature evaluating the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
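The attenuation (regression dilution) effect described above is easy to reproduce in a small Monte Carlo sketch: classical error in the exposure flattens the fitted slope toward var(x)/(var(x)+var(u)) times the true slope. All numbers below are illustrative.

```python
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

random.seed(0)
n = 2000
true_x = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 * x for x in true_x]                       # true slope is 1.0
obs_x = [x + random.gauss(0, 1) for x in true_x]    # classical measurement error
slope = fit_slope(obs_x, y)  # attenuated toward var_x/(var_x+var_u) = 0.5
```

With equal signal and error variances, the fitted slope is roughly half the true slope, illustrating how a steep relationship can appear shallow and linear.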
Characterizing the impact of model error in hydrologic time series recovery inverse problems
Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.
2017-10-28
Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is uncovered.
The convergence analysis of SpikeProp algorithm with smoothing L1∕2 regularization.
Zhao, Junhong; Zurada, Jacek M; Yang, Jie; Wu, Wei
2018-07-01
Unlike the first- and second-generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel SpikeProp-based algorithm for SNNs that introduces a smoothing L1∕2 regularization term into the error function. This algorithm makes the network structure sparse, with some smaller weights that can eventually be removed. Meanwhile, the convergence of the algorithm is proved under some reasonable conditions. The proposed algorithm has been tested for convergence speed, convergence rate and generalization on the classical XOR problem, the Iris problem and Wisconsin Breast Cancer classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
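One common way to smooth the non-differentiable L1∕2 penalty is to replace |w|^(1/2) with (w² + ε)^(1/4); the sketch below uses that surrogate, which may differ from the specific smoothing chosen in the paper.

```python
def smoothed_l12(w, eps=1e-4):
    """Smooth surrogate for |w|**0.5: (w^2 + eps)^(1/4).
    Differentiable at w = 0, and approaches |w|**0.5 as eps -> 0."""
    return (w * w + eps) ** 0.25

def penalty(weights, lam=0.01, eps=1e-4):
    """Regularization term added to the SpikeProp error function;
    lam is the regularization strength (illustrative value)."""
    return lam * sum(smoothed_l12(w, eps) for w in weights)
```

Because the surrogate's gradient is bounded near zero, gradient descent on the augmented error stays well-behaved while still driving small weights toward zero, which is what produces the sparse network structure.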
NASA Astrophysics Data System (ADS)
Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos
2013-05-01
In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
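A minimal sketch of the square refinement pattern: one parent particle is replaced by four daughters with conserved total mass and a reduced smoothing length. The separation ratio and smoothing-length ratio used here are illustrative placeholders, not the paper's optimized values.

```python
def refine_particle(x, y, m, h, sep=0.5, alpha=0.65):
    """Split one 2D SPH particle into four daughters on a square pattern.
    x, y: parent position; m: parent mass; h: parent smoothing length.
    sep: daughter separation as a fraction of h (illustrative).
    alpha: daughter smoothing length as a fraction of h (illustrative).
    Returns a list of (x, y, mass, h) tuples."""
    d = sep * h / 2.0
    offsets = [(-d, -d), (-d, d), (d, -d), (d, d)]
    return [(x + dx, y + dy, m / 4.0, alpha * h) for dx, dy in offsets]
```

By construction the refinement conserves mass and the center of mass; choosing sep and alpha is precisely the optimization the paper performs to keep the kernel-gradient error small.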
Resolution of the Band Gap Prediction Problem for Materials Design
Crowley, Jason M.; Tahir-Kheli, Jamil; Goddard, William A.
2016-03-04
An important property of any new material is the band gap. Standard density functional theory methods grossly underestimate band gaps. This is known as the band gap problem. Here we show that the hybrid B3PW91 density functional returns band gaps with a mean absolute deviation (MAD) from experiment of 0.22 eV over 64 insulators with gaps spanning a factor of 500 from 0.014 to 7 eV. The MAD is 0.28 eV over 70 compounds with gaps up to 14.2 eV, with a mean error of -0.03 eV. To benchmark the quality of the hybrid method, we compared it to the rigorous GW many-body perturbation theory method. Surprisingly, the MAD for B3PW91 is about 1.5 times smaller than the MAD for GW. Furthermore, B3PW91 is 3-4 orders of magnitude faster computationally. Hence, B3PW91 is a practical tool for predicting band gaps of materials before they are synthesized and represents a solution to the band gap prediction problem.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yanlong
2017-09-01
Considering uncertain geometrical and material parameters, the lower and upper bounds of the band gap of an undulated beam with periodically arched shape are studied by Monte Carlo Simulation (MCS) and by interval analysis based on the Taylor series. Given random variations of the uncertain variables, scatter plots from the MCS are used to analyze the qualitative sensitivities of the band gap with respect to these uncertainties. We find that the influence of the geometrical parameter uncertainty on the band gap of the undulated beam is stronger than that of the material parameter, a conclusion also confirmed by the interval analysis based on the Taylor series. Our methodology offers a strategy to reduce the errors between designed and practical values of the band gaps by improving the accuracy of specially selected uncertain design variables of periodic structures.
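The first-order Taylor interval analysis can be sketched generically: the half-width of the response interval is the sum of |∂f/∂p_i| times the radius of each uncertain parameter interval. The response function in the test is a hypothetical toy, not the beam's band-gap model.

```python
def interval_bounds(f, center, radii, h=1e-6):
    """First-order Taylor interval estimate of f over the box center +/- radii.
    f: scalar function of a parameter list; derivatives by forward differences."""
    f0 = f(center)
    half_width = 0.0
    for i, r in enumerate(radii):
        p = list(center)
        p[i] += h
        deriv = (f(p) - f0) / h   # sensitivity to parameter i
        half_width += abs(deriv) * r
    return f0 - half_width, f0 + half_width
```

Comparing the per-parameter contributions |∂f/∂p_i|·r_i also reveals which uncertainty dominates, mirroring the paper's finding that the geometrical parameter matters more than the material one.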
Çizmeci, Hülya; Çiprut, Ayça
2018-06-01
This study aimed to (1) evaluate the gap filling skills and reading mistakes of students with cochlear implants, and to (2) compare their results with those of their normal-hearing peers. The effects of implantation age and total time of cochlear implant use were analyzed in relation to the subjects' reading skills development. The study included 19 students who underwent cochlear implantation and 20 students with normal hearing, who were enrolled in the 6th to 8th grades. The subjects' ages ranged between 12 and 14 years. Their reading skills were evaluated by using the Informal Reading Inventory. A significant difference was found between implanted and normal-hearing students in terms of the percentages of reading errors and of gap filling scores. The average number of reading errors of students using cochlear implants was higher than that of normal-hearing students. As for gap filling, the performances of implanted students on the passages were lower than those of their normal-hearing peers. No significant relationship was found between the variables tested in terms of age and duration of implantation on the reading performances of implanted students. Even if they were implanted early, there were significant differences in the reading performances of implanted students compared with those of their normal-hearing peers in older classes. Copyright © 2018 Elsevier B.V. All rights reserved.
Henry, M; Porcher, C; Julé, Y
1998-06-10
The aim of the present study was to describe the deep muscular plexus of the pig duodenum and to characterize its cellular components. Numerous nerve varicosities have been detected in the deep muscular plexus using anti-synaptophysin antibodies. Nerve fibres were also detected here in the outer circular muscle layer, whereas no nerve fibres were observed in the inner circular muscle layer. In the deep muscular plexus, nerve fibres projected to interstitial cells which were characterized at the ultrastructural level. The interstitial cells were of two kinds: the interstitial fibroblastic-like cells (FLC) and the interstitial dense cells (IDC), both of which were interposed between nerve fibres and smooth muscle cells. The FLC were characterized by their elongated bipolar shape, the lack of basal lamina, a well-developed endoplasmic reticulum, a Golgi apparatus, and intermediate filaments. They were closely apposed to axon terminals containing small clear synaptic vesicles and/or dense-cored vesicles. They were frequently connected to each other and to smooth muscle cells of the inner and outer circular layer by desmosomes and more rarely by gap junctions. The IDC are myoid-like cells. They had a stellate appearance and were characterized by a dense cell body, numerous caveolae, and a discontinuous basal lamina. The IDC were always closely apposed to nerve fibres and were connected to smooth muscle cells by desmosomes and small gap junctions. The present results show the unique pattern of cellular organization of the deep muscular plexus of the pig small intestine. They suggest that the interstitial cells in the deep muscular plexus are involved in the integration and transmission of nervous inputs from myenteric neurons to the inner and outer circular muscle layers. 
The clear-cut distinction observed here between the two types of interstitial cells (fibroblastic and myoid-like) suggests that the interstitial cells of each type may also be involved in some other specific activity, which still remains to be determined.
A universal theory for gas breakdown from microscale to the classical Paschen law
NASA Astrophysics Data System (ADS)
Loveless, Amanda M.; Garner, Allen L.
2017-11-01
While well established for larger gaps, Paschen's law (PL) fails to accurately predict breakdown for microscale gaps, where field emission becomes important. This deviation from PL is characterized by the absence of a minimum breakdown voltage as a function of the product of pressure and gap distance, which has been demonstrated analytically for microscale and smaller gaps with no secondary emission at atmospheric pressure [A. M. Loveless and A. L. Garner, IEEE Trans. Plasma Sci. 45, 574-583 (2017)]. We extend these previous results by deriving analytic expressions that incorporate a nonzero secondary emission coefficient, γSE, and that are valid for gap distances larger than those at which quantum effects become important (~100 nm) while remaining below those at which streamers arise. We demonstrate the validity of this model by benchmarking to particle-in-cell simulations with γSE = 0 and comparing numerical results to an experiment with argon, while additionally predicting a minimum voltage that was masked by fixing the gap pressure in previous analyses. Incorporating γSE demonstrates the smooth transition from field emission dominated breakdown to the classical PL once the combination of electric field, pressure, and gap distance satisfies the conventional criterion for the Townsend avalanche; however, such a condition generally requires supra-atmospheric pressures for breakdown at the microscale. Therefore, this study provides a single universal breakdown theory for any gas at any pressure dominated by field emission or Townsend avalanche to guide engineers in avoiding breakdown when designing microscale and larger devices, or inducing breakdown for generating microplasmas.
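For reference, the classical Paschen curve that the microscale theory must recover in the large-gap limit can be evaluated directly. The constants A and B below are illustrative values of the order of tabulated ones for argon; they should be checked against gas-specific tables before any real use.

```python
import math

def paschen_voltage(pd, gamma_se=0.01, A=11.25, B=273.75):
    """Classical Paschen breakdown voltage (V) as a function of
    pd = pressure * gap distance (Torr*cm).
    gamma_se: secondary emission coefficient; A, B: gas constants (illustrative)."""
    denom = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma_se))
    if denom <= 0:
        return float('inf')  # below the self-sustainment limit: no breakdown
    return B * pd / denom

# the classical curve has an interior minimum in pd, which is exactly the
# feature that disappears in field-emission-dominated microscale gaps
grid = [0.5 + 0.05 * i for i in range(91)]
voltages = [paschen_voltage(pd) for pd in grid]
```

The existence of this interior minimum is the signature of Townsend-avalanche breakdown; its absence at small gaps is the deviation the abstract describes.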
Quantification of residual dose estimation error on log file-based patient dose calculation.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi
2016-05-01
The log file-based patient dose estimation includes a residual dose estimation error caused by leaf miscalibration, which cannot be reflected in the estimated dose. The purpose of this study is to determine this residual dose estimation error. Modified log files for seven head-and-neck and prostate volumetric modulated arc therapy (VMAT) plans simulating leaf miscalibration were generated by shifting both leaf banks (systematic leaf gap errors: ±2.0, ±1.0, and ±0.5 mm in opposite directions and systematic leaf shifts: ±1.0 mm in the same direction) using MATLAB-based (MathWorks, Natick, MA) in-house software. The generated modified and non-modified log files were imported back into the treatment planning system and recalculated. Subsequently, the generalized equivalent uniform dose (gEUD) was quantified for the planning target volume (PTV) and organs at risk. For MLC leaves calibrated within ±0.5 mm, the residual dose estimation errors, obtained from the slope of the linear regression of gEUD changes between non-modified and modified log file doses per unit leaf gap error, are 1.32±0.27% and 0.82±0.17 Gy for the PTV and spinal cord, respectively, in head-and-neck plans, and 1.22±0.36%, 0.95±0.14 Gy, and 0.45±0.08 Gy for the PTV, rectum, and bladder, respectively, in prostate plans. In this work, we determine the residual dose estimation errors for VMAT delivery using log file-based patient dose calculation according to the MLC calibration accuracy. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
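The gEUD metric used above to quantify dose changes has a simple closed form, (mean of d^a)^(1/a), over the voxel doses of a structure; a minimal sketch:

```python
def geud(doses, a):
    """Generalized equivalent uniform dose for a list of voxel doses (Gy).
    a > 1 emphasizes hot spots (serial organs such as spinal cord);
    a = 1 reduces to the mean dose (parallel-organ-like behavior)."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)
```

Regressing the change in gEUD against the imposed leaf-gap error, as the study does, then yields the residual dose estimation error per millimeter of miscalibration.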
Local error estimates for discontinuous solutions of nonlinear hyperbolic equations
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1989-01-01
Let u(x, t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x, t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport equation with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^{1,∞} energy estimate for the discontinuous backward transport equation; this, in turn, leads to an ε-uniform estimate on moments of the error u_ε − u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.
Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials
NASA Astrophysics Data System (ADS)
Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong
2018-04-01
This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relational database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axis-symmetrical reflector, fed by an axial-mode helical antenna, is further conducted to verify the correctness of the proposed method. Finally, the influence of surface error distribution on electromagnetic performance is summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work offers a new approach to reflector shape adjustment in the manufacturing process.
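A minimal sketch of evaluating a Zernike term Z_n^m on the unit disk via the standard radial polynomial. Normalization conventions vary, and this unnormalized form may differ from the one used in the paper.

```python
from math import cos, sin, factorial

def zernike(n, m, rho, theta):
    """Unnormalized Zernike polynomial Z_n^m at polar point (rho, theta),
    0 <= rho <= 1. Requires n - |m| even and non-negative."""
    R = sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k)
           * factorial((n + abs(m)) // 2 - k)
           * factorial((n - abs(m)) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - abs(m)) // 2 + 1)
    )
    # cosine terms for m >= 0, sine terms for m < 0
    return R * cos(m * theta) if m >= 0 else R * sin(-m * theta)
```

A deformation surface is then a weighted sum of such terms, and the paper's analysis maps each term's coefficient to its effect on main-lobe amplitude or pointing accuracy.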
Development of an Abort Gap Monitor for High-Energy Proton Rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beche, J.-F.; Byrd, J.; De Santis, S.
2004-11-10
The fill pattern in proton synchrotrons usually features an empty gap, longer than the abort kicker rise time, for machine protection. This gap is referred to as the 'abort gap', and any particles which may accumulate in it, due to injection errors and diffusion between RF buckets, would be lost inside the ring, rather than in the beam dump, during the kicker firing. In large proton rings, due to the high energies involved, it is vital to monitor the build-up of charge in the abort gap with high sensitivity. We present a study of an abort gap monitor based on a photomultiplier with a gated microchannel plate, which would allow for detecting low charge densities by monitoring the synchrotron radiation emitted. We show results of beam test experiments at the Advanced Light Source using a Hamamatsu 5916U MCP-PMT and compare them to the specifications for the Large Hadron Collider.
Development of an abort gap monitor for high-energy proton rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beche, Jean-Francois; Byrd, John; De Santis, Stefano
2004-05-03
The fill pattern in proton synchrotrons usually features an empty gap, longer than the abort kicker rise time, for machine protection. This gap is referred to as the 'abort gap', and any particles which may accumulate in it, due to injection errors and diffusion between RF buckets, would be lost inside the ring, rather than in the beam dump, during the kicker firing. In large proton rings, due to the high energies involved, it is vital to monitor the build-up of charge in the abort gap with high sensitivity. We present a study of an abort gap monitor based on a photomultiplier with a gated microchannel plate, which would allow for detecting low charge densities by monitoring the synchrotron radiation emitted. We show results of beam test experiments at the Advanced Light Source using a Hamamatsu 5916U MCP-PMT and compare them to the specifications for the Large Hadron Collider.
A day in the life of a volunteer incident commander: errors, pressures and mitigating strategies.
Bearman, Christopher; Bremner, Peter A
2013-05-01
To meet an identified gap in the literature, this paper investigates the tasks that a volunteer incident commander needs to carry out during an incident, the errors that can be made and the way that errors are managed. In addition, pressures from goal seduction and situation aversion were also examined. Volunteer incident commanders participated in a two-part interview consisting of a critical decision method interview and discussions about a hierarchical task analysis constructed by the authors. A SHERPA analysis was conducted to further identify potential errors. The results identified the key tasks, errors with extreme risk, pressures from strong situations and mitigating strategies for errors and pressures. The errors and pressures provide a basic set of issues that need to be managed by both volunteer incident commanders and fire agencies. The mitigating strategies identified here suggest some ways that this can be done. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Veenstra, Richard D
2016-01-01
The development of the patch clamp technique has enabled investigators to directly measure gap junction conductance between isolated pairs of small cells with resolution to the single channel level. The dual patch clamp recording technique requires specialized equipment and the acquired skill to reliably establish gigaohm seals and the whole cell recording configuration with high efficiency. This chapter describes the equipment needed and methods required to achieve accurate measurement of macroscopic and single gap junction channel conductances. Inherent limitations with the dual whole cell recording technique and methods to correct for series access resistance errors are defined as well as basic procedures to determine the essential electrical parameters necessary to evaluate the accuracy of gap junction conductance measurements using this approach.
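The first-order series-access-resistance correction mentioned above can be sketched as follows: the apparent junctional resistance measured between the two pipettes includes both access resistances, so subtracting them recovers the junctional conductance. This sketch ignores membrane leak, which the full treatment does not; all numerical values in the test are illustrative.

```python
def corrected_gj(delta_v1, i2, rs1, rs2):
    """First-order series-resistance correction for dual whole-cell recordings.
    delta_v1: command voltage step in cell 1 (V)
    i2: junctional current measured in cell 2 (A)
    rs1, rs2: access (series) resistances of the two pipettes (ohm)
    Returns the corrected junctional conductance (S).
    Simplification: membrane (leak) resistances are assumed infinite."""
    r_apparent = delta_v1 / abs(i2)        # includes rs1 + rj + rs2 in series
    r_junction = r_apparent - rs1 - rs2    # strip the access resistances
    return 1.0 / r_junction
```

With typical access resistances of tens of megaohms and junctional resistances of ~100 MΩ, the uncorrected estimate can understate the conductance by tens of percent, which is why the chapter emphasizes this correction.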
Large deformation frictional contact analysis with immersed boundary method
NASA Astrophysics Data System (ADS)
Navarro-Jiménez, José Manuel; Tur, Manuel; Albelda, José; Ródenas, Juan José
2018-01-01
This paper proposes a method for solving 3D large deformation frictional contact problems with the Cartesian Grid Finite Element Method. A stabilized augmented Lagrangian contact formulation is developed using a smooth stress field as the stabilizing term, calculated by the Zienkiewicz-Zhu superconvergent patch recovery. The parametric definition of the CAD surfaces (usually NURBS) is considered in the definition of the contact kinematics in order to obtain an enhanced measure of the contact gap. The numerical examples show the performance of the method.
Photonic Choke-Joints for Dual Polarization Waveguides
NASA Technical Reports Server (NTRS)
Wollack, Edward J. (Inventor); U-Yen, Kongpop (Inventor); Chuss, David T. (Inventor)
2014-01-01
A waveguide structure for a dual polarization waveguide includes a first flange member, a second flange member, and a waveguide member disposed in each of the first flange member and second flange member. The first flange member and the second flange member are configured to be coupled together in a spaced-apart relationship separated by a gap. The first flange member has a substantially smooth surface, and the second flange member has an array of two-dimensional pillar structures formed therein.
Hsi-Ping, Liu; Peselnick, L.
1983-01-01
A detailed evaluation of internal friction measurement by the stress-strain hysteresis loop method from 0.01 to 1 Hz at 10⁻⁸ to 10⁻⁷ strain amplitude and 23.9 °C is presented. Significant systematic errors in relative phase measurement can result from convex end surfaces of the sample and stress sensor and from end surface irregularities such as nicks and asperities. Preparation of concave end surfaces polished to optical smoothness, having a radius of curvature >3.6×10⁴ cm, reduces the systematic error in relative phase measurements to <(5.5 ± 2.2)×10⁻⁴ radians. (from Authors)
NASA Astrophysics Data System (ADS)
Voloshinov, V. V.
2018-03-01
In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
On Motion Planning with Uncertainty. Revised.
1984-01-01
drift to the right, sticking at the right corner. See Fig. 1.6. Given the uncertainty in the position sensor, it is impossible to execute corrective action once sticking is detected. This is because the corrective action depends on knowing the side at which sticking occurred. Worse than being unable to correct errors should they occur is the inability to detect success. In the given example, it is possible that the peg may move smoothly into
A geometrical interpretation of the 2n-th central difference
NASA Technical Reports Server (NTRS)
Tapia, R. A.
1972-01-01
Many algorithms used for data smoothing, data classification and error detection require the calculation of the distance from a point to the polynomial interpolating its 2n neighbors (n on each side). This computation, if performed naively, would require the solution of a system of equations and could create numerical problems. This note shows that if the data is equally spaced, then this calculation can be performed using a simple recursion formula.
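The 2n-th central difference follows from repeated application of the 3-point second difference, which is the kind of simple recursion the note refers to; the distance-to-interpolant formula involves an additional normalization not shown here.

```python
def central_diff_2n(y, n):
    """2n-th central difference of the middle sample of y, where
    len(y) == 2n + 1 and the samples are equally spaced.
    Computed by n applications of the 3-point second difference
    d[i-1] - 2*d[i] + d[i+1], each of which shortens the list by two."""
    d = list(y)
    for _ in range(n):
        d = [d[i - 1] - 2 * d[i] + d[i + 1] for i in range(1, len(d) - 1)]
    return d[len(d) // 2]
```

For data sampled from a polynomial of degree below 2n, the result is zero, so a nonzero value flags the center point as deviating from the polynomial through its 2n neighbors, exactly the quantity needed for error detection and smoothing.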
Atwood, E.L.
1958-01-01
Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.
Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus
2015-01-01
Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.
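The waveform-relaxation idea can be illustrated on a toy system: two leaky units coupled by a gap junction, each integrated over the whole communication interval using the other's waveform from the previous sweep. This is a minimal sketch with Jacobi-style sweeps and explicit Euler, not NEST's actual scheme; parameters are arbitrary. Its fixed point is exactly the fully coupled Euler solution:

```python
import numpy as np

def waveform_relaxation(g=0.5, T=1.0, dt=0.01, sweeps=50):
    # dV1/dt = -V1 + g*(V2 - V1), and symmetrically for V2.
    # Each sweep integrates one unit over the whole interval using the
    # other unit's waveform from the previous sweep (Jacobi iteration).
    steps = int(T / dt)
    v1 = np.full(steps + 1, 1.0)    # initial guess: constant waveforms
    v2 = np.full(steps + 1, -1.0)
    for _ in range(sweeps):
        v1_new, v2_new = v1.copy(), v2.copy()
        for k in range(steps):
            v1_new[k + 1] = v1_new[k] + dt * (-v1_new[k] + g * (v2[k] - v1_new[k]))
            v2_new[k + 1] = v2_new[k] + dt * (-v2_new[k] + g * (v1[k] - v2_new[k]))
        v1, v2 = v1_new, v2_new
    return v1, v2

def coupled_euler(g=0.5, T=1.0, dt=0.01):
    # Reference: the same system integrated with fully coupled explicit Euler.
    steps = int(T / dt)
    v1 = np.empty(steps + 1); v2 = np.empty(steps + 1)
    v1[0], v2[0] = 1.0, -1.0
    for k in range(steps):
        v1[k + 1] = v1[k] + dt * (-v1[k] + g * (v2[k] - v1[k]))
        v2[k + 1] = v2[k] + dt * (-v2[k] + g * (v1[k] - v2[k]))
    return v1, v2
```

The point of the construction is that each sweep only needs the other unit's waveform over the interval, so communication can stay batched at the interval boundaries, exactly as with delayed spike exchange.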
NASA Astrophysics Data System (ADS)
Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu
2018-04-01
Marine gravity anomaly derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slope of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in the current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction and sea state bias. The radiometer measurements are preferred over the model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to the range observations when computing sea surface slopes, since their inherent errors may produce abnormal slopes; along-track smoothing with uniform weights over a suitable window is an effective strategy for avoiding the introduction of extra noise. The slopes calculated from the radiometer wet tropospheric corrections and from the along-track smoothed dual-frequency ionospheric corrections and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) Ocean tide has the largest influence on sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes occur mostly over marginal and island-surrounding seas, and extra tidal models with better precision or with an extending process (e.g. Got-e) are strongly recommended for updating the corrections in the geophysical data records.
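The along-track smoothing recommended above is a uniform-weight moving average applied to the correction before differentiating. A minimal sketch; the window width is illustrative, not the value used in the paper:

```python
import numpy as np

def along_track_smooth(values, width=11):
    # Uniform-weight (boxcar) moving average along track, applied to a
    # correction series (e.g. ionospheric correction or sea state bias)
    # before it is used, so its noise does not leak into slope estimates.
    kernel = np.ones(width) / width
    return np.convolve(values, kernel, mode="same")
```

For white measurement noise this reduces the variance of the correction by roughly the window length, at the cost of removing genuine short-wavelength signal, which is acceptable here because the corrections themselves are long-wavelength.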
A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows
Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...
2015-03-11
High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall-time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.
Dynamic Maternal Gradients Control Timing and Shift-Rates for Drosophila Gap Gene Expression
Verd, Berta; Crombach, Anton; Jaeger, Johannes
2017-01-01
Pattern formation during development is a highly dynamic process. In spite of this, few experimental and modelling approaches take into account the explicit time-dependence of the rules governing regulatory systems. We address this problem by studying dynamic morphogen interpretation by the gap gene network in Drosophila melanogaster. Gap genes are involved in segment determination during early embryogenesis. They are activated by maternal morphogen gradients encoded by bicoid (bcd) and caudal (cad). These gradients decay at the same time-scale as the establishment of the antero-posterior gap gene pattern. We use a reverse-engineering approach, based on data-driven regulatory models called gene circuits, to isolate and characterise the explicitly time-dependent effects of changing morphogen concentrations on gap gene regulation. To achieve this, we simulate the system in the presence and absence of dynamic gradient decay. Comparison between these simulations reveals that maternal morphogen decay controls the timing and limits the rate of gap gene expression. In the anterior of the embryo, it affects peak expression and leads to the establishment of smooth spatial boundaries between gap domains. In the posterior of the embryo, it causes a progressive slow-down in the rate of gap domain shifts, which is necessary to correctly position domain boundaries and to stabilise the spatial gap gene expression pattern. We use a newly developed method for the analysis of transient dynamics in non-autonomous (time-variable) systems to understand the regulatory causes of these effects. By providing a rigorous mechanistic explanation for the role of maternal gradient decay in gap gene regulation, our study demonstrates that such analyses are feasible and reveal important aspects of dynamic gene regulation which would have been missed by a traditional steady-state approach.
More generally, it highlights the importance of transient dynamics for understanding complex regulatory processes in development. PMID:28158178
Micromachined electrical cauterizer
Lee, Abraham P.; Krulevitch, Peter A.; Northrup, M. Allen
1999-01-01
A micromachined electrical cauterizer. Microstructures are combined with microelectrodes for highly localized electrocauterization. Using boron etch stops and surface micromachining, microneedles with very smooth surfaces are made. Micromachining also allows for precision placement of electrodes by photolithography, with micron-sized gaps that allow for concentrated electric fields. A microcauterizer is fabricated by bulk etching silicon to form knife edges; parallel microelectrodes with gaps as small as 5 µm are then patterned and aligned adjacent to the knife edges to provide hemostasis while cutting tissue. While most of the microelectrode lines are electrically insulated from the atmosphere by depositing and patterning silicon dioxide on the electrical feedthrough portions, a window is opened in the silicon dioxide to expose the parallel microelectrode portion. This helps reduce power loss and assists in focusing the power locally for more efficient and safer procedures.
Calculation of laser pulse distribution maps for corneal reshaping with a scanning beam
NASA Astrophysics Data System (ADS)
Manns, Fabrice; Shen, Jin-Hui; Soederberg, Per G.; Matsui, Takaaki; Parel, Jean-Marie A.
1995-05-01
A method for calculating pulse distribution maps for scanning laser corneal surgery is presented. The accuracy, the smoothness of the corneal shape, and the duration of surgery were evaluated for corrections of myopia by using computer simulations. The accuracy and the number of pulses were computed as a function of the beam diameter, the diameter of the treatment zone, and the amount of attempted flattening. The ablation is smooth when the spot overlap is 80% or more. The accuracy does not depend on the beam diameter or on the diameter of the ablation zone when the ablation zone is larger than 5 mm. With an overlap of 80% and an ablation zone larger than 5 mm, the error is 5% of the attempted flattening, and 610 pulses are needed per Diopter of correction with a beam diameter of 1 mm. Pulse maps for the correction of astigmatism were computed and evaluated. The simulations show that with 60% overlap, a beam diameter of 1 mm, and a 5 mm treatment zone, 6 D of astigmatism can be corrected with an accuracy better than 1.8 D. This study shows that smooth and accurate ablations can be produced with a scanning spot.
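The relation between spot overlap and pulse spacing underlying these simulations is simple arithmetic: for beam diameter d and overlap fraction f, the centre-to-centre spacing is s = d(1 − f). A small worked sketch (the 5 mm zone and single-scan-line pulse count are illustrative, not the paper's 610-pulses-per-diopter figure):

```python
def spot_spacing(beam_diameter_mm, overlap):
    # Centre-to-centre spacing of successive pulses for a given
    # fractional spot overlap: s = d * (1 - overlap).
    return beam_diameter_mm * (1.0 - overlap)

# 1 mm beam at the 80% overlap the simulations identify as smooth:
s = spot_spacing(1.0, 0.80)          # ~0.2 mm between pulse centres
# pulses needed to cross a 5 mm treatment zone in one scan line
# (round() guards against floating-point division error):
n = round(5.0 / s) + 1
```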
Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.
Besio, W; Aakula, R; Dai, W
2004-01-01
Potentials on the body surface generated by the heart are functions of both space and time. The 12-lead electrocardiogram (ECG) provides useful global temporal assessment, but it yields limited spatial information due to the smoothing effect caused by the volume conductor. The smoothing complicates identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers used a five-point method (FPM) to numerically estimate the analytical solution of the Laplacian with an array of monopolar electrodes; generalizing the FPM leads to a bipolar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For comparison, the NPM, FPM and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have much improved accuracy, with significantly smaller relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, our new electrode configuration will allow better localization of the electrical activity of the heart than bi-electrode configurations.
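The five-point and nine-point approximations referred to here are the standard finite-difference Laplacian stencils; a sketch on a uniform grid (the concentric-ring electrode geometry itself is not modelled, and the grid and test function are illustrative):

```python
import numpy as np

def laplacian_fpm(f, i, j, h):
    # Five-point method: classic second-order finite-difference Laplacian.
    return (f[i + 1, j] + f[i - 1, j] + f[i, j + 1] + f[i, j - 1]
            - 4.0 * f[i, j]) / h**2

def laplacian_npm(f, i, j, h):
    # Nine-point method: adds the diagonal neighbours,
    # stencil [1 4 1; 4 -20 4; 1 4 1] / (6 h^2).
    return (4.0 * (f[i + 1, j] + f[i - 1, j] + f[i, j + 1] + f[i, j - 1])
            + f[i + 1, j + 1] + f[i + 1, j - 1]
            + f[i - 1, j + 1] + f[i - 1, j - 1]
            - 20.0 * f[i, j]) / (6.0 * h**2)
```

Both stencils are exact for quadratics (e.g. f = x² + y², whose Laplacian is 4); the nine-point stencil's extra samples are what the tri-electrode sensor exploits to reduce truncation error.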
Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.
Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin
2016-10-01
Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted with the purpose of advancing a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in its forecasting of the demand for medical supplies. On this point, several demand forecasting methods were compared in terms of the forecast accuracy of each. The results confirm that applying Croston's method combined with a single exponential smoothing method yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns were combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
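Croston's method, which the study pairs with single exponential smoothing, maintains two smoothed quantities: the nonzero demand size and the interval between demands; the per-period forecast is their ratio. A minimal sketch (initialisation conventions vary; this one seeds both estimates from the first nonzero demand):

```python
def croston(demand, alpha=0.1):
    # Croston's method for intermittent demand: single exponential
    # smoothing applied separately to nonzero demand sizes (z) and to
    # inter-demand intervals (p); forecast per period is z / p.
    z = p = None
    periods_since = 0
    for d in demand:
        periods_since += 1
        if d > 0:
            if z is None:                      # initialise on first demand
                z, p = float(d), float(periods_since)
            else:
                z += alpha * (d - z)           # smooth the demand size
                p += alpha * (periods_since - p)  # smooth the interval
            periods_since = 0
    return None if z is None else z / p
```

For smooth (regular) demand the interval estimate stays near 1 and the method degenerates to plain exponential smoothing, which is consistent with the finding that a simple moving average suffices for the smooth-demand class.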
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-D trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
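The robust-leveling idea can be sketched with iteratively reweighted least squares on a single global plane; the paper uses a local regression, so this is a simplified stand-in, and the weight function and iteration count are illustrative:

```python
import numpy as np

def robust_level(img, iters=5):
    # Remove a planar trend (sample tilt) by iteratively reweighted least
    # squares: pixels with large residuals (features, outliers) are
    # downweighted so the fitted plane tracks the background, not the
    # features.  Returns the leveled image (data minus fitted plane).
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel()])
    z = img.ravel()
    w = np.ones(img.size)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
        r = z - A @ coef
        s = np.median(np.abs(r)) + 1e-12          # robust residual scale
        w = 1.0 / (1.0 + (r / (3.0 * s)) ** 2)    # downweight outliers
    return (z - A @ coef).reshape(img.shape)
```

An ordinary least-squares plane fit would be biased by bright features; the reweighting suppresses exactly those pixels, which is the behaviour the abstract describes for its local-regression trend estimate.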
Time-domain electromagnetic soundings collected in Dawson County, Nebraska, 2007-09
Payne, Jason; Teeple, Andrew
2011-01-01
Between April 2007 and November 2009, the U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, collected time-domain electromagnetic (TDEM) soundings at 14 locations in Dawson County, Nebraska. The TDEM soundings provide information pertaining to the hydrogeology at each of 23 sites at the 14 locations; 30 TDEM surface geophysical soundings were collected at the 14 locations to develop smooth and layered-earth resistivity models of the subsurface at each site. The soundings yield estimates of subsurface electrical resistivity; variations in subsurface electrical resistivity can be correlated with hydrogeologic and stratigraphic units. Results from each sounding were used to calculate resistivity to depths of approximately 90-130 meters (depending on loop size) below the land surface. Geonics Protem 47 and 57 systems, as well as the Alpha Geoscience TerraTEM, were used to collect the TDEM soundings (voltage data from which resistivity is calculated). For each sounding, voltage data were averaged and evaluated statistically before inversion (inverse modeling). Inverse modeling is the process of creating an estimate of the true distribution of subsurface resistivity from the measured apparent resistivity obtained from TDEM soundings. Smooth and layered-earth models were generated for each sounding. A smooth model is a vertical delineation of calculated apparent resistivity that represents a non-unique estimate of the true resistivity. Ridge regression (Interpex Limited, 1996) was used by the inversion software in a series of iterations to create a smooth model consisting of 24-30 layers for each sounding site. Layered-earth models were then generated based on results of smooth modeling. The layered-earth models are simplified (generally 1 to 6 layers) to represent geologic units with depth.
Throughout the area, the layered-earth models range from 2 to 4 layers, depending on observed inflections in the raw data and the smooth model inversions. The TDEM data collected were considered to be of good quality on the basis of root-mean-square errors calculated after inversion modeling, comparisons with borehole geophysical logging, and repeatability.
Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to counting neuronal distributions, a more rigorous treatment of the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome. 
PMID:24747568
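Gaussian kernel smoothing of a count map, the last of the four methods compared, can be sketched directly in a few lines; the bandwidth sigma is illustrative (the paper's point is precisely that this parameter affects the outcome):

```python
import numpy as np

def gaussian_smooth(counts, sigma=1.5):
    # Gaussian kernel smoothing of a 2-D density map: each cell count is
    # replaced by a Gaussian-weighted average of its neighbourhood,
    # suppressing sampling noise at the cost of flattening sharp gradients.
    radius = int(3 * sigma)
    ax = np.arange(-radius, radius + 1)
    k1 = np.exp(-ax**2 / (2 * sigma**2))
    kernel = np.outer(k1, k1)
    kernel /= kernel.sum()                       # normalise to sum 1
    padded = np.pad(counts, radius, mode="reflect")
    out = np.empty(counts.shape, dtype=float)
    ny, nx = counts.shape
    for i in range(ny):
        for j in range(nx):
            out[i, j] = np.sum(
                padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1] * kernel)
    return out
```

Unlike interpolation, this does not pass through the observed counts: outliers are averaged away, which is exactly the trade-off the abstract describes.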
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors-Laplacian prior for DCT coefficients, sparsity prior, and graph-signal smoothness prior for image patches-to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with the previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal outperforms the state-of-the-art soft decoding algorithms in both objective and subjective evaluations noticeably.
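The random walk graph Laplacian at the heart of LERaG is straightforward to construct; a minimal sketch (the soft-decoding pipeline and the patch-graph weighting are not reproduced here):

```python
import numpy as np

def random_walk_laplacian(W):
    # Random walk graph Laplacian L_rw = I - D^{-1} W for a weighted
    # adjacency matrix W, where D = diag(W @ 1) is the degree matrix.
    d = W.sum(axis=1)
    return np.eye(len(W)) - W / d[:, None]
```

Because every row of D⁻¹W sums to one, the constant vector is a right eigenvector of L_rw with eigenvalue 0 (the "DC" graph frequency); the corresponding left eigenvectors form the basis whose low-frequency components the paper links to relaxed normalized-cut solutions.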
A comparison of regional flood frequency analysis approaches in a simulation framework
NASA Astrophysics Data System (ADS)
Ganora, D.; Laio, F.
2016-07-01
Regional frequency analysis (RFA) is a well-established methodology to provide an estimate of the flood frequency curve at ungauged (or scarcely gauged) sites. Different RFA approaches exist, depending on the way the information is transferred to the site of interest, but it is not clear in the literature whether a specific method systematically outperforms the others. The aim of this study is to provide a framework for carrying out such an intercomparison by building a virtual environment based on synthetically generated data. The considered regional approaches include: (i) a unique regional curve for the whole region; (ii) a multiple-region model where homogeneous subregions are determined through cluster analysis; (iii) a Region-of-Influence model which defines a homogeneous subregion for each site; (iv) a spatially smooth estimation procedure where the parameters of the regional model vary continuously in space. Virtual environments are generated considering different patterns of heterogeneity, including step changes and smooth variations. If the region is heterogeneous, with the parent distribution changing continuously within the region, the spatially smooth regional approach outperforms the others, with overall errors 10-50% lower than the other methods. In the case of a step change, the spatially smooth and clustering procedures perform similarly if the heterogeneity is moderate, while clustering procedures work better when the step change is severe. To extend our findings, an extensive sensitivity analysis has been performed to investigate the effect of sample length, number of virtual stations, return period of the predicted quantile, variability of the scale parameter of the parent distribution, number of predictor variables and different parent distributions. 
Overall, the spatially smooth approach appears as the most robust approach as its performances are more stable across different patterns of heterogeneity, especially when short records are considered.
Estimation of slipping organ motion by registration with direction-dependent regularization.
Schmidt-Richberg, Alexander; Werner, René; Handels, Heinz; Ehrhardt, Jan
2012-01-01
Accurate estimation of respiratory motion is essential for many applications in medical 4D imaging, for example for radiotherapy of thoracic and abdominal tumors. It is usually done by non-linear registration of image scans at different states of the breathing cycle but without further modeling of specific physiological motion properties. In this context, the accurate computation of respiration-driven lung motion is especially challenging because this organ is sliding along the surrounding tissue during the breathing cycle, leading to discontinuities in the motion field. Without considering this property in the registration model, common intensity-based algorithms cause incorrect estimation along the object boundaries. In this paper, we present a model for incorporating slipping motion in image registration. Extending the common diffusion registration by distinguishing between normal- and tangential-directed motion, we are able to estimate slipping motion at the organ boundaries while preventing gaps and ensuring smooth motion fields inside and outside. We further present an algorithm for a fully automatic detection of discontinuities in the motion field, which does not rely on a prior segmentation of the organ. We evaluate the approach for the estimation of lung motion based on 23 inspiration/expiration pairs of thoracic CT images. The results show a visually more plausible motion estimation. Moreover, the target registration error is quantified using manually defined landmarks and a significant improvement over the standard diffusion regularization is shown. Copyright © 2011 Elsevier B.V. All rights reserved.
Borgia, G C; Brown, R J; Fantazzini, P
2000-12-01
The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65-77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T(1) data, and all had fixed data spacings, uniform in log-time. However, for T(2) data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T(2) data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. 
Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise. Copyright 2000 Academic Press.
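The core numerical step described above — inverting a multiexponential decay into a distribution of relaxation times under a smoothing penalty and a nonnegativity constraint — can be sketched as follows. This is a minimal stand-in with a fixed, uniform penalty weight; UPEN's defining feature, the negative-feedback adaptation of the penalty to the local shape of the distribution, is omitted, and all kernel sizes and values are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic T2 decay with two relaxation components (illustrative values)
t = np.linspace(0.001, 1.0, 200)            # acquisition times (s)
T2 = np.logspace(-3, 0.5, 80)               # trial relaxation times, log-spaced
A = np.exp(-np.outer(t, 1.0 / T2))          # multiexponential kernel matrix
true = np.zeros_like(T2)
true[np.argmin(np.abs(T2 - 0.05))] = 1.0    # sharp component
true[np.argmin(np.abs(T2 - 0.5))] = 0.5     # second component
rng = np.random.default_rng(0)
b = A @ true + 0.001 * rng.standard_normal(t.size)

# Second-difference operator: penalizes curvature of the distribution
n = T2.size
L = np.diff(np.eye(n), 2, axis=0)

# Fixed penalty weight (UPEN instead adapts this locally via feedback);
# ||A f - b||^2 + lam ||L f||^2 is minimized subject to f >= 0 by
# stacking the penalty rows under the kernel and calling NNLS once.
lam = 0.1
A_aug = np.vstack([A, np.sqrt(lam) * L])
b_aug = np.concatenate([b, np.zeros(L.shape[0])])
f, _ = nnls(A_aug, b_aug)                   # nonnegative, smoothed distribution
```

The stacked-matrix trick lets a single nonnegative least-squares call handle both the data misfit and the smoothness penalty.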
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagher-Ebadian, H; Chetty, I; Liu, C
Purpose: To examine the impact of image smoothing and noise on the robustness of textural information extracted from CBCT images for prediction of radiotherapy response for patients with head/neck (H/N) cancers. Methods: CBCT image datasets for 14 patients with H/N cancer treated with radiation (70 Gy in 35 fractions) were investigated. A deformable registration algorithm was used to fuse planning CTs to CBCTs. Tumor volume was automatically segmented on each CBCT image dataset. Local control at 1 year was used to classify 8 patients as responders (R) and 6 as non-responders (NR). A smoothing filter [2D Adaptive Wiener (2DAW), with three different windows (ψ=3, 5, and 7)] and two noise models (Poisson and Gaussian, SNR=25) were implemented and independently applied to the CBCT images. Twenty-two textural features, describing the spatial arrangement of voxel intensities and calculated from gray-level co-occurrence matrices, were extracted for all tumor volumes. Results: Relative to CBCT images without smoothing, none of the 22 extracted textural features showed significant differences when smoothing was applied (using the 2DAW with filtering parameters ψ=3 and 5) in the responder and non-responder groups. When smoothing with the 2DAW at ψ=7 was applied, one textural feature, Information Measure of Correlation, was significantly different relative to no smoothing. Only 4 features (Energy, Entropy, Homogeneity, and Maximum-Probability) were found to be statistically different between the R and NR groups (Table 1). These features remained statistically significant discriminators for the R and NR groups in the presence of noise and smoothing. Conclusion: This preliminary work suggests that textural classifiers for response prediction, extracted from H/N CBCT images, are robust to low-power noise and low-pass filtering. While other types of filters will alter the spatial frequencies differently, these results are promising. The current study is subject to Type II errors; a much larger cohort of patients is needed to confirm these results. This work was supported in part by a grant from Varian Medical Systems (Palo Alto, CA).
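The four discriminative features named above (energy, entropy, homogeneity, maximum probability) are all simple functionals of the gray-level co-occurrence matrix. The sketch below computes them in plain NumPy for a single horizontal neighbor offset; the two test images are synthetic stand-ins for CBCT data, and the quantization depth is an arbitrary choice for illustration.

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM for the horizontal-neighbor offset, plus four Haralick-style
    features: energy, entropy, homogeneity, maximum probability."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize gray levels
    P = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):   # co-occurring pairs
        P[i, j] += 1
    P /= P.sum()                                            # joint probabilities
    idx = np.indices((levels, levels))
    return {
        "energy": np.sum(P ** 2),
        "entropy": -np.sum(P[P > 0] * np.log(P[P > 0])),
        "homogeneity": np.sum(P / (1.0 + np.abs(idx[0] - idx[1]))),
        "max_prob": P.max(),
    }

rng = np.random.default_rng(1)
smooth_img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # smooth gradient
noisy_img = rng.random((32, 32))                          # random texture
feats_smooth = glcm_features(smooth_img)
feats_noisy = glcm_features(noisy_img)
```

A smooth image concentrates probability mass near the GLCM diagonal, giving higher energy and homogeneity and lower entropy than a random texture.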
Dynamic analysis of the mechanical seals of the rotor of the labyrinth screw pump
NASA Astrophysics Data System (ADS)
Lebedev, A. Y.; Andrenko, P. M.; Grigoriev, A. L.
2017-08-01
A mathematical model is developed for the operation of a mechanical seal with smooth rings made of cast tungsten carbide under conditions of liquid friction. A special feature of this model is the allowance for the thermal expansion of the liquid in the gap between the rings; this effect, acting in conjunction with the frictional forces, creates additional pressure and lift, which in turn depend on the width of the gap and the sliding speed. The model describes the processes of separation, transport and heat removal in the sealing elements, as well as the resistance to axial movement of the ring that arises in the gap from the pumping effect and friction in the flowing liquid; the inertia of this fluid is taken into account by the mass-reduction method. The model is linearized, and the dynamic characteristics of the transient processes and forced oscillations of the device are obtained. Conditions on the parameters of the mechanical seal are formulated to maintain a regime of liquid friction that minimizes wear.
Rasmuson, Anna; Pazmino, Eddy; Assemi, Shoeleh; Johnson, William P
2017-02-21
Surface roughness has been reported both to increase and to decrease colloid retention. In order to better understand the boundaries within which roughness operates, attachment of a range of colloid sizes to glass with three levels of roughness was examined under both favorable (energy barrier absent) and unfavorable (energy barrier present) conditions in an impinging jet system. Smooth glass was found to provide the upper and lower bounds for attachment under favorable and unfavorable conditions, respectively. Surface roughness decreased, or even eliminated, the gap between favorable and unfavorable attachment, and did so by two mechanisms: (1) under favorable conditions, attachment decreased via increased hydrodynamic slip length and reduced attraction; and (2) under unfavorable conditions, attachment increased via reduced colloid-collector repulsion (reduced radius of curvature) and increased attraction (multiple points of contact, and possibly increased surface charge heterogeneity). The absence of a gap for smaller (<200 nm) and larger (>2 μm) colloids, for which these forces operate most strongly, was observed and is discussed. These observations elucidate the role of roughness in colloid attachment under both favorable and unfavorable conditions.
NASA Technical Reports Server (NTRS)
Xing, G. C.; Bachmann, K. J.; Posthill, J. B.; Timmons, M. L.
1991-01-01
In this paper, we report the epitaxial growth of ZnGe(1-x)Si(x)P2-Ge alloys on GaP substrates by open-tube OMCVD. The chemical composition of the alloys, characterized by energy-dispersive X-ray spectroscopy, shows that alloys with x up to 0.13 can be deposited on (001) GaP. Epitaxial growth with mirror-smooth surface morphology has been achieved for x ≤ 0.05. Selected-area electron diffraction patterns of the alloy show that the epitaxial layer crystallizes in the chalcopyrite structure, with relatively weak superlattice reflections indicating a certain degree of randomness in the cation sublattice. Hall measurements show that the alloys are p-type, like the unalloyed films; the carrier concentration, however, dropped by about a factor of 10, from 2 × 10^18 to 2 × 10^17 cm^-3. Absorption measurements indicate that the band tailing in the absorption spectra of the alloy is shifted by about 0.04 eV toward shorter wavelengths compared to the unalloyed material.
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to monitoring of the time-variable gravity field in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling by the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites fly in a non-repeat orbit, which rules out spectral estimation of the alias error based on a repeat period. Moreover, the gravity field recovery is conducted at not strictly monthly intervals and has occasional gaps, resulting in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
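For an unevenly sampled, gappy series of this kind, a Lomb-Scargle periodogram is a natural tool for locating alias periods, since it needs no uniform sampling. The sketch below is purely illustrative: the epochs, noise level, and amplitude are invented, with the injected period chosen near the roughly 161-day S2 tidal alias period often quoted for GRACE.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
# Irregular epochs: roughly monthly solutions with random dropouts (gaps)
t = np.sort(rng.choice(np.arange(0.0, 3000.0, 30.0), size=70, replace=False))
period = 161.0                                   # days, S2-like alias period
y = 2.0 * np.sin(2 * np.pi * t / period) + 0.3 * rng.standard_normal(t.size)

periods = np.linspace(60.0, 400.0, 500)          # trial periods (days)
omega = 2.0 * np.pi / periods                    # angular frequencies
power = lombscargle(t, y - y.mean(), omega)      # handles uneven sampling
best_period = periods[np.argmax(power)]
```

The peak of the periodogram recovers the injected alias period despite the gaps, which a standard FFT on this series could not do directly.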
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
NASA Astrophysics Data System (ADS)
Pelicano, Christian Mark; Rapadas, Nick; Cagatan, Gerard; Magdaluyo, Eduardo
2017-12-01
Herein, the crystallite size and band gap energy of zinc oxide (ZnO) quantum dots were predicted using an artificial neural network (ANN). Three input factors, namely reagent ratio, growth time, and growth temperature, were examined with respect to crystallite size and band gap energy as response factors. The results generated by the neural network model were then compared with the experimental results. Experimental crystallite sizes and band gap energies of the ZnO quantum dots were measured from TEM images and absorbance spectra, respectively. The Levenberg-Marquardt (LM) algorithm was used as the learning algorithm for the ANN model. The performance of the ANN model was then assessed through mean square error (MSE) and regression values. The ANN modelling results are in good agreement with the experimental data.
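A miniature version of such a model — three inputs mapped to one response through a single hidden layer — can be sketched in plain NumPy. Everything here is illustrative: the target function standing in for the (ratio, time, temperature) → band-gap mapping is invented, and plain gradient descent replaces the Levenberg-Marquardt training used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(200, 3))               # mock normalized inputs
y = 3.0 + np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1] - 0.3 * X[:, 2]  # invented response

# One hidden layer of tanh units, linear output; trained by full-batch
# gradient descent on the MSE (a simple stand-in for Levenberg-Marquardt).
W1 = rng.standard_normal((3, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                           # hidden activations
    err = (h @ W2 + b2).ravel() - y
    g_pred = 2.0 * err[:, None] / len(y)               # dMSE/dprediction
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1.0 - h ** 2)               # backprop through tanh
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

h = np.tanh(X @ W1 + b1)
final_mse = np.mean(((h @ W2 + b2).ravel() - y) ** 2)
```

After training, the fit explains most of the variance of the toy response, mirroring the MSE-based assessment described in the abstract.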
Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation.
Montagnini, Anna; Spering, Miriam; Masson, Guillaume S
2006-12-01
Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction does not eliminate the transient tracking direction error nor change the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muslimov, A. E., E-mail: amuslimov@mail.ru; Butashin, A. V.; Kanevsky, V. M.
The (001) cleavage surface of a vanadium pentoxide (V{sub 2}O{sub 5}) crystal has been studied by scanning tunneling microscopy (STM). It is shown that the surface is not reconstructed; the STM image allows the geometric lattice parameters to be determined with high accuracy. The nanostructure formed on the (001) cleavage surface of the crystal consists of atomically smooth steps with heights that are multiples of the unit-cell parameter c = 4.37 Å. The V{sub 2}O{sub 5} crystal cleavages can be used as references in the calibration of a scanning tunneling microscope under atmospheric conditions, both in the (x, y) surface plane and normal to the sample surface (along the z axis). It is found that the terrace surface is not perfectly atomically smooth; its roughness is estimated to be ~0.5 Å. This circumstance may introduce an additional error into the microscope calibration along the z coordinate.
Estimating index of refraction from polarimetric hyperspectral imaging measurements.
Martin, Jacob A; Gross, Kevin C
2016-08-08
Current material identification techniques rely on estimating reflectivity or emissivity, which vary with viewing angle. As off-nadir remote sensing platforms become increasingly prevalent, techniques robust to changing viewing geometries are desired. A technique leveraging polarimetric hyperspectral imaging (P-HSI) to estimate the complex index of refraction, N̂(ν̃), an inherent material property, is presented. The imaginary component of N̂(ν̃) is modeled using a small number of "knot" points and interpolation at in-between frequencies ν̃. The real component is derived via the Kramers-Kronig relationship. P-HSI measurements of blackbody radiation scattered off of a smooth quartz window show that N̂(ν̃) can be retrieved to within 0.08 RMS error between 875 cm-1 ≤ ν̃ ≤ 1250 cm-1. P-HSI emission measurements of a heated smooth Pyrex beaker also enable successful N̂(ν̃) estimates, which are invariant to object temperature.
Hidden dynamics in models of discontinuity and switching
NASA Astrophysics Data System (ADS)
Jeffrey, Mike R.
2014-04-01
Sharp switches in behaviour, like impacts, stick-slip motion, or electrical relays, can be modelled by differential equations with discontinuities. A discontinuity approximates fine details of a switching process that lie beyond a bulk empirical model. The theory of piecewise-smooth dynamics describes what happens assuming we can solve the system of equations across its discontinuity. What this typically neglects is that effects which are vanishingly small outside the discontinuity can have an arbitrarily large effect at the discontinuity itself. Here we show that such behaviour can be incorporated within the standard theory through nonlinear terms, and these introduce multiple sliding modes. We show that the nonlinear terms persist in more precise models, for example when the discontinuity is smoothed out. The nonlinear sliding can be eliminated, however, if the model contains an irremovable level of unknown error, which provides a criterion for systems to obey the standard Filippov laws for sliding dynamics at a discontinuity.
NASA Astrophysics Data System (ADS)
Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen
2013-08-01
We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales differ significantly and they can easily be distinguished in wavelet-coefficient space, where the foreground subtraction is performed. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has an uncorrected response error, our method also performs significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
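The underlying scale-separation idea — a spectrally smooth foreground versus a rapidly varying signal — can be illustrated with a simple low-pass stand-in. A Savitzky-Golay filter replaces the paper's continuous wavelet transform here, and the frequency range, power-law index, and signal shape are all invented for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

freq = np.linspace(100.0, 200.0, 1024)            # MHz (illustrative band)
foreground = 1.0e3 * (freq / 150.0) ** -2.6       # smooth power-law foreground
signal = 0.5 * np.sin(2 * np.pi * freq / 1.0)     # fast wiggles standing in
spectrum = foreground + signal                    # for the 21 cm structure

# Low-pass estimate of the smooth component; the residual keeps the
# small-scale structure (the CWT method selects scales more flexibly).
fg_est = savgol_filter(spectrum, window_length=101, polyorder=3)
residual = spectrum - fg_est
```

Because the filter preserves low-order polynomial trends, the smooth foreground is removed with little bias while the oscillatory component survives in the residual.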
Calculation of smooth potential energy surfaces using local electron correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mata, Ricardo A.; Werner, Hans-Joachim
2006-11-14
The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl{sup -} with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.
Spectral Topography Generation for Arbitrary Grids
NASA Astrophysics Data System (ADS)
Oh, T. J.
2015-12-01
A new topography generation tool utilizing a spectral transformation technique for both structured and unstructured grids is presented. For the source global digital elevation data, the NASA Shuttle Radar Topography Mission (SRTM) 15 arc-second dataset (gap-filled by Jonathan de Ferranti) is used, and for the land/water mask source, the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) 30 arc-second land water mask dataset v5 is used. The original source data are coarsened to an intermediate global 2-minute lat-lon mesh. Then, spectral transformation to wave space and inverse transformation with wavenumber truncation are performed for isotropic control of topography smoothness. Target grid topography mapping is done by bivariate cubic spline interpolation from the truncated 2-minute lat-lon topography. Gibbs phenomena in the water region can be removed by overwriting ocean-masked target coordinate grids with interpolated values from the intermediate 2-minute grid. Finally, a weak smoothing operator is applied on the target grid to minimize the land/water surface height discontinuity that might have been introduced by the Gibbs oscillation removal procedure. Overall, the new topography generation approach provides spectrally derived, smooth topography with isotropic resolution and minimum damping, enabling realistic topography forcing in the numerical model. Topography is generated for the cubed-sphere grid and tested in the KIAPS Integrated Model (KIM).
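The wavenumber-truncation step can be illustrated in one dimension with an FFT, as a stand-in for the full spherical spectral transform; the "topography" profile below is synthetic.

```python
import numpy as np

def spectral_truncate(field, keep):
    """Smooth a periodic 1-D field by zeroing all Fourier modes above
    wavenumber `keep` (the 1-D analogue of isotropic truncation)."""
    coeffs = np.fft.rfft(field)
    coeffs[keep + 1:] = 0.0
    return np.fft.irfft(coeffs, n=field.size)

n = 360
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(5)
large_scale = 1000.0 * np.sin(2 * x) + 300.0 * np.cos(5 * x)  # resolved relief
topo = large_scale + 50.0 * rng.standard_normal(n)            # + grid-scale noise
smooth = spectral_truncate(topo, keep=10)
```

Truncation removes the grid-scale noise almost entirely while leaving the large-scale relief intact, which is the "smoothness control with minimum damping" property the tool relies on.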
Amster, Brian; Marquard, Jenna; Henneman, Elizabeth; Fisher, Donald
2015-01-01
In this clinical simulation study using an eye-tracking device, 40% of senior nursing students administered a contraindicated medication to a patient. Our findings suggest that the participants who did not identify the error did not know that amoxicillin is a type of penicillin. Eye-tracking devices may be valuable for determining whether nursing students are making rule- or knowledge-based errors, a distinction not easily captured via observations and interviews.
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
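The numerical advantage described above — a polynomial basis orthonormal over the discrete, uniformly spaced sample points, so each coefficient is computed independently and roundoff is tamed — can be sketched with a QR factorization of the Vandermonde matrix, which constructs such a basis. The code is illustrative, not the original programs.

```python
import numpy as np

def orthonormal_poly_fit(x, y, degree):
    """Least-squares polynomial fit in a basis orthonormal over the
    discrete sample points, built by QR on the Vandermonde matrix."""
    V = np.vander(x, degree + 1, increasing=True)   # columns 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                          # orthonormal discrete basis
    c = Q.T @ y                                     # independent coefficients
    return Q @ c, c                                 # fitted values, coefficients

x = np.linspace(-1.0, 1.0, 101)                     # uniformly spaced data
y = 2.0 - x + 0.5 * x**3                            # exactly cubic test signal
fit, coeffs = orthonormal_poly_fit(x, y, degree=5)
```

Because column k of the orthonormal basis spans degrees up to k, a cubic signal yields vanishing coefficients for the degree-4 and degree-5 basis functions, and the fit reproduces the data to machine precision.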
Hydrodynamic boundary condition of water on hydrophobic surfaces.
Schaeffel, David; Yordanov, Stoyan; Schmelzeisen, Marcus; Yamamoto, Tetsuya; Kappl, Michael; Schmitz, Roman; Dünweg, Burkhard; Butt, Hans-Jürgen; Koynov, Kaloian
2013-05-01
By combining total internal reflection fluorescence cross-correlation spectroscopy with Brownian dynamics simulations, we were able to measure the hydrodynamic boundary condition of water flowing over a smooth solid surface with exceptional accuracy. We analyzed the flow of aqueous electrolytes over glass coated with a layer of poly(dimethylsiloxane) (advancing contact angle Θ = 108°) or perfluorosilane (Θ = 113°). Within an error of better than 10 nm the slip length was indistinguishable from zero on all surfaces.
NASA Astrophysics Data System (ADS)
Joetzjer, E.; Pillet, M.; Ciais, P.; Barbier, N.; Chave, J.; Schlund, M.; Maignan, F.; Barichivich, J.; Luyssaert, S.; Hérault, B.; von Poncet, F.; Poulter, B.
2017-07-01
Despite advances in Earth observation and modeling, estimating tropical biomass remains a challenge. Recent work suggests that integrating satellite measurements of canopy height within ecosystem models is a promising approach to infer biomass. We tested the feasibility of this approach to retrieve aboveground biomass (AGB) at three tropical forest sites by assimilating remotely sensed canopy height derived from a texture analysis algorithm applied to the high-resolution Pleiades imager in the Organizing Carbon and Hydrology in Dynamic Ecosystems Canopy (ORCHIDEE-CAN) ecosystem model. While mean AGB could be estimated within 10% of AGB derived from census data in average across sites, canopy height derived from Pleiades product was spatially too smooth, thus unable to accurately resolve large height (and biomass) variations within the site considered. The error budget was evaluated in details, and systematic errors related to the ORCHIDEE-CAN structure contribute as a secondary source of error and could be overcome by using improved allometric equations.
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1979-01-01
Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (using m = 2, hundredfold error reductions were obtained with only a 13% increase in computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.
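The Richardson step referred to above works for any approximation whose error expands in powers of 1/n. A minimal numerical illustration (left-endpoint quadrature, unrelated to the airloads code itself) shows the leading error terms being cancelled level by level:

```python
import numpy as np

def left_riemann(f, n):
    """Left-endpoint Riemann sum of f on [0, 1]; leading error ~ 1/n."""
    x = np.arange(n) / n
    return f(x).sum() / n

f = np.exp
exact = np.e - 1.0

a1 = left_riemann(f, 100)
a2 = left_riemann(f, 200)
a3 = left_riemann(f, 400)

r1 = 2.0 * a2 - a1                      # cancels the 1/n term
r2 = (4.0 * (2.0 * a3 - a2) - r1) / 3.0 # also cancels the 1/n^2 term
```

Each doubling-and-combining step eliminates the next term of the error expansion, which is how a 1/n convergence rate is boosted to 1/n^(m+1) at negligible extra cost.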
GIZMO: Multi-method magneto-hydrodynamics+gravity code
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2014-10-01
GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performance of four time series methods, namely two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA), and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. Performance was evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic events demonstrates their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
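The three evaluation metrics have simple closed forms, stated precisely below with toy numbers (not the study's data):

```python
import numpy as np

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))            # mean absolute error

def mape(y, yhat):
    return np.mean(np.abs((y - yhat) / y)) * 100.0  # percent, needs y != 0

def mse(y, yhat):
    return np.mean((y - yhat) ** 2)             # mean square error

# Toy monthly case counts and one forecast series
y    = np.array([120.0, 135.0, 150.0, 160.0, 155.0, 170.0])
pred = np.array([118.0, 140.0, 149.0, 158.0, 160.0, 165.0])
m1, m2, m3 = mae(y, pred), mape(y, pred), mse(y, pred)
```

Note that MAPE is undefined when an observed count is zero, a practical caveat when weekly disease counts can vanish.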
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to some of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Landmark-based elastic registration using approximating thin-plate splines.
Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H
2001-06-01
We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and makes it possible to take landmark localization errors into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
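The interpolating-versus-approximating distinction can be demonstrated with SciPy's thin-plate-spline RBF interpolator, whose `smoothing` parameter plays the role of the regularization weight. The landmarks and deformation below are synthetic, and this isotropic sketch does not implement the paper's anisotropic scheme, which additionally weights each landmark by its localization covariance.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)
landmarks = rng.uniform(0.0, 10.0, size=(30, 2))          # 2-D source landmarks
true_shift = np.sin(landmarks[:, :1] / 3.0)               # smooth displacement field
noisy = true_shift + 0.05 * rng.standard_normal((30, 1))  # + localization error

# smoothing=0 forces the spline through every (noisy) landmark;
# smoothing>0 only approximates them, absorbing localization errors.
interp = RBFInterpolator(landmarks, noisy,
                         kernel='thin_plate_spline', smoothing=0.0)
approx = RBFInterpolator(landmarks, noisy,
                         kernel='thin_plate_spline', smoothing=1.0)
```

Evaluating both splines back at the landmarks shows the difference: the interpolating spline reproduces the noisy displacements exactly, while the approximating one deliberately leaves residuals at the landmarks.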
Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-08-01
Tomography is a very cost-effective method for studying the physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations of the Iranian permanent GPS network (IPGN) are used. A smoothed-TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the international reference ionosphere 2012 (IRI-2012) are used as the object function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. A root mean square error (RMSE) of 0.17 × 10^11 electrons/m^3 is also computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionospheric reconstruction methods.
Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.
Wang, Wei; Tong, Shaocheng
2018-02-01
This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled by an uncertain nonlinear strict-feedback system. Combining fuzzy approximation with dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from existing results, the bounds of the control inputs are known a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees uniform ultimate boundedness of the closed-loop systems, and the tracking error converges to a small neighborhood of the origin. Simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.
NASA Astrophysics Data System (ADS)
Wang, Ting; Sheng, Meiping; Ding, Xiaodong; Yan, Xiaowei
2018-03-01
This paper presents an analysis of wave propagation and power flow in an acoustic metamaterial plate with lateral local resonance. The metamaterial is designed to have lateral local resonance systems attached to a homogeneous plate. The relevant theoretical analysis, numerical modelling and application prospects are presented. Results show that the metamaterial has two complete band gaps for flexural wave absorption and vibration attenuation. Damping can smooth and lower the metamaterial's frequency response in high frequency ranges at the expense of the band gap effect and, as an important factor in calculating the power flow, is investigated thoroughly. Moreover, the effective mass density becomes negative and unbounded at specific frequencies. At the same time, the power flow within the band gaps is dramatically blocked, as seen from the power flow contours and power flow maps. Results from finite element modelling and power flow analysis reveal the working mechanism of the flexural wave attenuation and of the power flow blocking within the band gaps: part of the flexural vibration is absorbed by the vertical resonator, and the rest is transferred through four-link mechanisms to the lateral resonators, which oscillate and generate inertial forces that indirectly counterbalance the shear forces induced by the vibrating plate. The power flow is stored in the vertical and lateral local resonators, as well as in the connected plate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xuan; Zhang, Wen-Tao; Zhao, Lin
2017-12-17
For this study, we carry out detailed momentum-dependent and temperature-dependent measurements on the Bi2Sr2CaCu2O8+δ (Bi2212) superconductor in the superconducting and pseudogap states by super-high-resolution laser-based angle-resolved photoemission spectroscopy. The precise determination of the superconducting gap for nearly optimally doped Bi2212 (Tc = 91 K) at low temperature indicates that the momentum dependence of the superconducting gap deviates from the standard d-wave form (cos(2Φ)). It can alternatively be fitted by including a high-order term (cos(6Φ)) in which the next-nearest-neighbor interaction is considered. We find that the band structure near the antinodal region evolves smoothly across the pseudogap temperature without a signature of band reorganization, which is distinct from what is found in Bi2Sr2CuO6+δ superconductors. This indicates that band reorganization across the pseudogap temperature is not a universal behavior in cuprate superconductors. These results provide new insights into understanding the nature of the superconducting gap and pseudogap in high-temperature cuprate superconductors.
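For reference, a gap function of the kind used for such fits can be written (our notation; the paper's exact normalization may differ) as

\[
\Delta(\Phi) \;=\; \Delta_0\,\bigl[\,B\cos(2\Phi) \;+\; (1-B)\cos(6\Phi)\,\bigr],
\]

where Φ is the Fermi-surface angle and the weight B interpolates between the standard d-wave form (B = 1) and the next-nearest-neighbor correction carried by the cos(6Φ) term.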
First-Order-hold interpolation digital-to-analog converter with application to aircraft simulation
NASA Technical Reports Server (NTRS)
Cleveland, W. B.
1976-01-01
Those who design piloted aircraft simulations must contend with the finite size and speed of the available digital computer and the requirement for simulation realism. With a fixed computational plant, the more complex the model, the more computing cycle time is required. While increasing the cycle time may not degrade the fidelity of the simulated aircraft dynamics, the larger steps in the pilot cue feedback variables (such as the visual scene cues) may be disconcerting to the pilot. The first-order-hold interpolation (FOHI) digital-to-analog converter (DAC) is presented as a device which offers smooth output regardless of cycle time. The Laplace transforms of three conversion types - the zero-order-hold (ZOH), first-order-hold extrapolation (FOHE), and FOHI DACs - are developed, and their frequency response characteristics and output smoothness are compared. The FOHI DAC exhibits a pure one-cycle delay. Whenever the FOHI DAC input comes from a second-order (or higher) system, a simple computer software technique can be used to compensate for the DAC phase lag. When so compensated, the FOHI DAC has (1) an output signal that is very smooth, (2) a flat frequency response in frequency ranges of interest, and (3) no phase error. When the input comes from a first-order system, software compensation may cause the FOHI DAC to perform as an FOHE DAC, which, although its output is not as smooth as that of the FOHI DAC, has a smoother output than that of the ZOH DAC.
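The difference between the hold types discussed in the abstract can be sketched as follows (our toy code; variable names are not from the report): a ZOH holds the most recent sample, while the FOHI DAC draws a straight line between the two most recent samples, which smooths the output at the cost of the one-cycle delay noted above.

```python
def zoh(samples, T, t):
    """Zero-order hold: hold the most recent sample."""
    n = min(int(t // T), len(samples) - 1)
    return samples[n]

def fohi(samples, T, t):
    """First-order-hold interpolation: during cycle n, draw a straight
    line between samples n-1 and n, so the output is smooth but lags
    the input by one full cycle."""
    n = min(int(t // T), len(samples) - 1)
    if n == 0:
        return samples[0]
    frac = (t - n * T) / T          # position inside the current cycle
    return samples[n - 1] + frac * (samples[n] - samples[n - 1])

x = [0.0, 1.0, 2.0, 3.0]            # a ramp sampled every T seconds
T = 0.1
# Mid-cycle, the ZOH has already jumped to the latest sample, while the
# FOHI output is still interpolating between the two previous samples.
print(zoh(x, T, 0.25))              # 2.0
print(round(fohi(x, T, 0.25), 6))   # 1.5
```

For the ramp input, the FOHI value at t = 0.25 equals the underlying signal one cycle earlier, making the one-cycle delay explicit.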
Assessing the significance of pedobarographic signals using random field theory.
Pataky, Todd C
2008-08-07
Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
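The Bonferroni-versus-RFT contrast described above can be sketched numerically (an illustration using only the leading 2-D Euler-characteristic term of RFT; the pixel and resel counts below are made up, not from the paper):

```python
import math
from statistics import NormalDist

alpha = 0.05
n_pixels = 10_000          # pixels in the search area
resels = 100.0             # smoothness-adjusted "resolution elements"

# Bonferroni: treat every pixel as an independent test.
z_bonf = NormalDist().inv_cdf(1 - alpha / n_pixels)

# RFT (leading 2-D term only): expected Euler characteristic of the
# thresholded Gaussian field; solve E[EC](z) = alpha for z by bisection.
def expected_ec(z, r):
    return r * (4 * math.log(2)) * (2 * math.pi) ** -1.5 * z * math.exp(-z * z / 2)

lo, hi = 1.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if expected_ec(mid, resels) > alpha:
        lo = mid
    else:
        hi = mid
z_rft = (lo + hi) / 2

# The smoother the field (the fewer the resels), the more RFT relaxes
# the threshold relative to Bonferroni.
print(round(z_bonf, 2), round(z_rft, 2))
```

Because neighbouring pixels are correlated, the effective number of independent tests (resels) is far below the pixel count, so the RFT threshold is noticeably lower than the Bonferroni one.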
A multi-source precipitation approach to fill gaps over a radar precipitation field
NASA Astrophysics Data System (ADS)
Tesfagiorgis, K. B.; Mahani, S. E.; Khanbilvardi, R.
2012-12-01
Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to spatial limitations of radar and gauge products. The present work develops an approach to seamlessly blend satellite, radar, climatological and gauge precipitation products to fill gaps in ground-based radar precipitation fields. To merge different precipitation products, the bias of any of the products relative to the others should be removed. For bias correction, the study used an ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar rainfall product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. A weighted Successive Correction Method (SCM) is proposed to merge the error-corrected satellite and radar rainfall estimates. In addition to SCM, we use a Bayesian spatial method for merging the gap-free radar product with rain gauges, climatological rainfall sources and SPEs. We demonstrate the method using the SPE Hydro-Estimator (HE), radar-based Stage-II, the climatological product PRISM and rain gauge datasets for several rain events from 2006 to 2008 over three different geographical locations of the United States. Results show that the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area together with rain gauge, PRISM and satellite products, a radar-like product is achievable over radar gap areas, which benefits the scientific community.
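The successive-correction idea can be sketched with a Cressman-type scheme (an illustrative stand-in, not the paper's exact weighted SCM; all names, weights and the radius schedule below are our choices): a background field is nudged toward point observations over several passes, with the influence radius shrinking each pass.

```python
def nearest(grid_xy, p):
    """Index of the grid point closest to location p."""
    return min(range(len(grid_xy)),
               key=lambda i: (grid_xy[i][0] - p[0]) ** 2 + (grid_xy[i][1] - p[1]) ** 2)

def scm_merge(background, grid_xy, obs, obs_xy, radius=2.0, passes=3):
    """Successive correction: each pass computes innovations
    (observation minus current field) at the observation sites, then
    spreads them to nearby grid points with Cressman distance weights."""
    field = list(background)
    r = radius
    for _ in range(passes):
        innov = [v - field[nearest(grid_xy, p)] for v, p in zip(obs, obs_xy)]
        new = []
        for (gx, gy), f in zip(grid_xy, field):
            num = den = 0.0
            for (ox, oy), d in zip(obs_xy, innov):
                d2 = (gx - ox) ** 2 + (gy - oy) ** 2
                if d2 < r * r:
                    w = (r * r - d2) / (r * r + d2)   # Cressman weight
                    num += w * d
                    den += w
            new.append(f + num / den if den else f)
        field = new
        r *= 0.7                                     # shrink the radius each pass
    return field

# Toy example: one gauge reading of 1.0 at the origin, zero background.
grid = [(0, 0), (1, 0), (3, 0)]
merged = scm_merge([0.0, 0.0, 0.0], grid, obs=[1.0], obs_xy=[(0, 0)])
```

Grid points inside the influence radius are pulled toward the observation, while the far point at (3, 0) keeps its background value.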
Ma, Ke-Tao; Li, Xin-Zhi; Li, Li; Jiang, Xue-Wei; Chen, Xin-Yan; Liu, Wei-Dong; Zhao, Lei; Zhang, Zhong-Shuang; Si, Jun-Qiang
2014-02-01
To investigate the effects of hypertension on the changes in gap junctions between vascular smooth muscle cells (VSMCs) in the mesenteric artery (MA) of spontaneously hypertensive rats (SHRs). Whole-cell patch clamp, pressure myography, real-time quantitative reverse transcription PCR (qRT-PCR), western blot analysis and transmission electron microscopy were used to examine the differences in expression and function of the gap junction between MA VSMCs of SHR and control normotensive Wistar-Kyoto (WKY) rats. (1) Whole-cell patch clamp measurements showed that the membrane capacitance and conductance of in-situ MA VSMCs of SHR were significantly greater than those of WKY rats (P<0.05), suggesting enhanced gap junction coupling between MA VSMCs of SHR. (2) The administration of phenylephrine (PE) and KCl (an endothelium-independent vasoconstrictor) initiated more pronounced vasoconstriction in SHR versus WKY rats (P<0.05). Furthermore, 2-APB (a gap junction inhibitor) attenuated PE- and KCl-induced vasoconstriction, and the inhibitory effects of 2-APB were significantly greater in SHR (P<0.05). (3) The expression of connexin 45 (Cx45) mRNA and protein in the MA was greater in SHR versus WKY rats (P<0.05). The level of phosphorylated Cx43 was significantly higher in SHR versus WKY rats (P<0.05), although the expression of total Cx43 mRNA and protein in the MA was equivalent between SHR and WKY rats. Electron microscopy revealed that the gap junctions were significantly larger in SHR versus WKY rats. Increases in the expression of Cx45 and phosphorylation of Cx43 may contribute to the enhancement of communication across gap junctions between MA VSMCs of SHR, which may increase the contractile response to agonists.
NASA Astrophysics Data System (ADS)
Jonkkari, I.; Kostamo, E.; Kostamo, J.; Syrjala, S.; Pietola, M.
2012-07-01
Effects of the plate material, surface roughness and measuring gap height on static and dynamic yield stresses of a magnetorheological (MR) fluid were investigated with a commercial plate-plate magnetorheometer. Magnetic and non-magnetic plates with smooth (Ra ≈ 0.3 μm) and rough (Ra ≈ 10 μm) surface finishes were used. It was shown by Hall probe measurements and finite element simulations that the use of magnetic plates or higher gap heights increases the level of magnetic flux density and changes the shape of the radial flux density profile. The yield stress increase caused by these factors was determined and subtracted from the measured values in order to examine only the effect of the wall characteristics or the gap height. Roughening of the surfaces offered a significant increase in the yield stresses for non-magnetic plates. With magnetic plates the yield stresses were higher to start with, but roughening did not increase them further. A significant part of the difference in measured stresses between rough non-magnetic and magnetic plates was caused by changes in magnetic flux density rather than by better contact of the particles to the plate surfaces. In a similar manner, an increase in gap height from 0.25 to 1.00 mm can lead to over 20% increase in measured stresses due to changes in the flux density profile. When these changes were compensated the dynamic yield stresses generally remained independent of the gap height, even in the cases where it was obvious that the wall slip was present. This suggests that with MR fluids the wall slip cannot be reliably detected by comparison of flow curves measured at different gap heights.
Evaluation and comparison of the marginal adaptation of two different substructure materials.
Karaman, Tahir; Ulku, Sabiha Zelal; Zengingul, Ali Ihsan; Guven, Sedat; Eratilla, Veysel; Sumer, Ebru
2015-06-01
In this study, we aimed to evaluate the amount of marginal gap with two different substructure materials using identical margin preparations. Twenty stainless steel models with a chamfer were prepared with a CNC device. Marginal gap measurements of the galvano copings on these stainless steel models and of Co-Cr copings obtained by a laser-sintering method were made with a stereomicroscope before and after the cementation process, and surface properties were evaluated by scanning electron microscopy (SEM). A dependent t-test was used to compare the means of the two groups for normally distributed data, and two-way analysis of variance was used for more than two data sets. Pearson's correlation analysis was also performed to assess relationships between variables. According to the results obtained, the marginal gap in the galvano copings was, on average, 24.47 ± 5.82 µm before and 35.11 ± 6.52 µm after cementation; in the laser-sintered Co-Cr copings, it was, on average, 60.45 ± 8.87 µm before and 69.33 ± 9.03 µm after cementation. A highly significant difference (P<.001) was found in the marginal gap measurements of the galvano copings, and a significant difference (P<.05) was found in those of the laser-sintered Co-Cr copings. According to the SEM examination, the surfaces of the laser-sintered Co-Cr copings were rougher than those of the galvano copings; the galvano copings showed a very smooth surface. Marginal gap values of both groups before and after cementation were within the clinically acceptable level. The smallest marginal gaps occurred with the use of galvano copings.
Evaluation and comparison of the marginal adaptation of two different substructure materials
Karaman, Tahir; Ulku, Sabiha Zelal; Zengingul, Ali Ihsan; Eratilla, Veysel; Sumer, Ebru
2015-01-01
PURPOSE In this study, we aimed to evaluate the amount of marginal gap with two different substructure materials using identical margin preparations. MATERIALS AND METHODS Twenty stainless steel models with a chamfer were prepared with a CNC device. Marginal gap measurements of the galvano copings on these stainless steel models and of Co-Cr copings obtained by a laser-sintering method were made with a stereomicroscope before and after the cementation process, and surface properties were evaluated by scanning electron microscopy (SEM). A dependent t-test was used to compare the means of the two groups for normally distributed data, and two-way analysis of variance was used for more than two data sets. Pearson's correlation analysis was also performed to assess relationships between variables. RESULTS According to the results obtained, the marginal gap in the galvano copings was, on average, 24.47 ± 5.82 µm before and 35.11 ± 6.52 µm after cementation; in the laser-sintered Co-Cr copings, it was, on average, 60.45 ± 8.87 µm before and 69.33 ± 9.03 µm after cementation. A highly significant difference (P<.001) was found in the marginal gap measurements of the galvano copings, and a significant difference (P<.05) was found in those of the laser-sintered Co-Cr copings. According to the SEM examination, the surfaces of the laser-sintered Co-Cr copings were rougher than those of the galvano copings; the galvano copings showed a very smooth surface. CONCLUSION Marginal gap values of both groups before and after cementation were within the clinically acceptable level. The smallest marginal gaps occurred with the use of galvano copings. PMID:26140178
Long-term care physical environments--effect on medication errors.
Mahmood, Atiya; Chaudhury, Habib; Gaumont, Alana; Rust, Tiana
2012-01-01
Few studies examine physical environmental factors and their effects on staff health, effectiveness, work errors and job satisfaction. To address this gap, this study aims to examine environmental features and their role in medication and nursing errors in long-term care facilities. A mixed methodological strategy was used. Data were collected via focus groups, observing medication preparation and administration, and a nursing staff survey in four facilities. The paper reveals that, during the medication preparation phase, physical design, such as medication room layout, is a major source of potential errors. During medication administration, social environment is more likely to contribute to errors. Interruptions, noise and staff shortages were particular problems. The survey's relatively small sample size needs to be considered when interpreting the findings. Also, actual error data could not be included as existing records were incomplete. The study offers several relatively low-cost recommendations to help staff reduce medication errors. Physical environmental factors are important when addressing measures to reduce errors. The findings of this study underscore the fact that the physical environment's influence on the possibility of medication errors is often neglected. This study contributes to the scarce empirical literature examining the relationship between physical design and patient safety.
Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer
NASA Astrophysics Data System (ADS)
Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.
2018-03-01
Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end-inhalation and end-exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper investigates the sensitivity of the relative Jacobian error to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random, smooth, biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors of 0.53 mm on average may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system are positively correlated across subjects (close to +1), i.e., regions with high sensitivity have, on average, more error in the Jacobian determinant.
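The quantity at the center of this analysis can be illustrated with a small sketch (pure Python, our own toy example, not the paper's pipeline): the Jacobian determinant of the mapping x → x + u(x), computed from a sampled 2-D displacement field by central differences. The paper's relative Jacobian error then compares two such maps; that comparison is omitted here.

```python
def jacobian_det(u, v, h=1.0):
    """Jacobian determinant det(I + grad(u, v)) of a 2-D displacement
    field (u, v) sampled on a grid with spacing h, via central
    differences on interior points. Values > 1 indicate local
    expansion (e.g. inhalation), values < 1 local contraction."""
    ny, nx = len(u), len(u[0])
    J = [[1.0] * nx for _ in range(ny)]
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            ux = (u[y][x + 1] - u[y][x - 1]) / (2 * h)
            uy = (u[y + 1][x] - u[y - 1][x]) / (2 * h)
            vx = (v[y][x + 1] - v[y][x - 1]) / (2 * h)
            vy = (v[y + 1][x] - v[y - 1][x]) / (2 * h)
            J[y][x] = (1 + ux) * (1 + vy) - uy * vx
    return J

# Uniform 10% expansion in both directions: u = 0.1*x, v = 0.1*y,
# so det(I + grad) = 1.1 * 1.1 = 1.21 everywhere in the interior.
n = 5
u = [[0.1 * x for x in range(n)] for _ in range(n)]
v = [[0.1 * y for x in range(n)] for y in range(n)]
J = jacobian_det(u, v)
print(round(J[2][2], 6))   # 1.21
```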
Rapid fabrication of miniature lens arrays by four-axis single point diamond machining
McCall, Brian; Tkaczyk, Tomasz S.
2013-01-01
A novel method for fabricating lens arrays and other non-rotationally symmetric free-form optics is presented. This is a diamond machining technique using 4 controlled axes of motion – X, Y, Z, and C. As in 3-axis diamond micro-milling, a diamond ball endmill is mounted to the work spindle of a 4-axis ultra-precision computer numerical control (CNC) machine. Unlike 3-axis micro-milling, the C-axis is used to hold the cutting edge of the tool in contact with the lens surface for the entire cut. This allows the feed rates to be doubled compared to the current state of the art of micro-milling while producing an optically smooth surface with very low surface form error and exceptionally low radius error. PMID:23481813
Calibration and filtering strategies for frequency domain electromagnetic data
Minsley, Burke J.; Smith, Bruce D.; Hammack, Richard; Sams, James I.; Veloski, Garret
2010-01-01
Techniques for processing frequency-domain electromagnetic (FDEM) data that address systematic instrument errors and random noise are presented, improving the ability to invert these data for meaningful earth models that can be quantitatively interpreted. A least-squares calibration method, originally developed for airborne electromagnetic datasets, is implemented for a ground-based survey in order to address systematic instrument errors, and new insights are provided into the importance of calibration for preserving spectral relationships within the data that lead to more reliable inversions. An alternative filtering strategy based on principal component analysis, which takes advantage of the strong correlation observed in FDEM data, is introduced to help address random noise in the data without imposing somewhat arbitrary spatial smoothing.
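The PCA filtering strategy can be sketched as follows (a generic illustration with made-up synthetic data, not the authors' processing chain): project the multi-channel soundings onto their leading principal components and discard the trailing ones, which suppresses uncorrelated noise without any spatial smoothing.

```python
import numpy as np

def pca_filter(data, k):
    """Keep only the first k principal components of a
    (samples x channels) array. For strongly correlated channels,
    the signal concentrates in the leading components and the
    uncorrelated noise in the trailing ones."""
    mean = data.mean(axis=0)
    centered = data - mean
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    s[k:] = 0.0                        # discard trailing components
    return U @ np.diag(s) @ Vt + mean

rng = np.random.default_rng(0)
# Rank-1 "geology": three channels that are scaled copies of one profile.
signal = np.outer(np.linspace(0, 1, 200), [1.0, 0.8, 0.5])
noisy = signal + 0.05 * rng.standard_normal(signal.shape)
filtered = pca_filter(noisy, k=1)

# Filtering should pull the data back toward the correlated signal.
err_noisy = np.abs(noisy - signal).mean()
err_filt = np.abs(filtered - signal).mean()
print(err_filt < err_noisy)   # True
```

The number of retained components k would in practice be chosen from the singular-value spectrum of the survey data.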
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high-gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
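The equal-error-distribution idea can be illustrated in one dimension (a toy equidistribution sketch, not SAGE itself; the weight function and the constant `alpha` are our choices): points are moved so that each interval carries an equal share of a gradient-based weight, which clusters them where the solution changes fastest.

```python
def redistribute(x, f, alpha=5.0):
    """Redistribute a 1-D grid by equidistributing the weight
    w = 1 + alpha*|df/dx|: new points are placed at equal increments
    of the cumulative weight, so high-gradient regions get more points
    while the smooth weight keeps the grid continuous."""
    n = len(x)
    # piecewise weight on each original interval
    w = [1 + alpha * abs((f[i + 1] - f[i]) / (x[i + 1] - x[i])) for i in range(n - 1)]
    # cumulative weighted coordinate s(x)
    s = [0.0]
    for i in range(n - 1):
        s.append(s[-1] + w[i] * (x[i + 1] - x[i]))
    # invert s(x) at equal increments to get the adapted grid
    new_x = [x[0]]
    for k in range(1, n - 1):
        target = s[-1] * k / (n - 1)
        i = next(j for j in range(n - 1) if s[j + 1] >= target)
        frac = (target - s[i]) / (s[i + 1] - s[i])
        new_x.append(x[i] + frac * (x[i + 1] - x[i]))
    new_x.append(x[-1])
    return new_x

# A step-like profile: points should cluster around the jump at x = 0.5.
xs = [i / 10 for i in range(11)]
fs = [0.0] * 5 + [0.5] + [1.0] * 5
new = redistribute(xs, fs)
```

After redistribution the spacing near the jump is much smaller than near the smooth ends, while the endpoints stay fixed.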
Criteria for the use of regression analysis for remote sensing of sediment and pollutants
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.
1982-01-01
An examination of limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations is of the same form as the multiple regression equation. Various statistical parameters are examined to define a criteria for selection of an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data smoothing requirements in order to satisfy the criteria of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.
Madison, Heather; Pereira, Anna; Korshøj, Mette; Taylor, Laura; Barr, Alan; Rempel, David
2015-11-01
The aim of this study was to evaluate the effects of key gap (distance between edges of keys) on computer keyboards on typing speed, percentage error, preference, and usability. In Parts 1 and 2 of this series, a small key pitch (center-to-center distance between keys) was found to reduce productivity and usability, but the findings were confounded by gap. In this study, key gap was varied while holding key pitch constant. Participants (N = 25) typed on six keyboards, which differed in gap between keys (1, 3, or 5 mm) and pitch (16 or 17 mm; distance between centers of keys), while typing speed, accuracy, usability, and preference were measured. There was no statistical interaction between gap and pitch. Accuracy was better for keyboards with a gap of 5 mm compared to a 1-mm gap (p = .04). Net typing speed (p = .02), accuracy (p = .002), and most usability measures were better for keyboards with a pitch of 17 mm compared to a 16-mm pitch. The study findings support keyboard designs with a gap between keys of 5 mm over 1 mm and a key pitch of 17 mm over 16 mm. These findings may influence keyboard standards and design, especially the design of small keyboards used with portable devices, such as tablets and laptops. © 2015, Human Factors and Ergonomics Society.
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations and arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm³, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm³, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm³ and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant.
When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
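The adaptive sampling loop described above can be sketched in miniature (a toy 1-D example with a Gaussian-process surrogate and an upper-confidence-bound acquisition; none of the functions, kernels or settings come from the paper, which optimizes voxel and smoothing-kernel sizes against prediction accuracy):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def bayes_opt(f, bounds, n_init=3, n_iter=10, kappa=2.0, noise=1e-6, seed=0):
    """Minimal Bayesian optimization: fit a GP to the evaluations so
    far, then evaluate f where the upper confidence bound mu + kappa*sigma
    is largest, trading off exploration (high sigma) and exploitation
    (high mu)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 200)
    for _ in range(n_iter):
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(grid, X)
        mu = Ks @ np.linalg.solve(K, y)
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        ucb = mu + kappa * np.sqrt(np.clip(var, 0.0, None))
        x_next = grid[np.argmax(ucb)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)]

# Toy objective peaking at 0.7, standing in for "accuracy vs parameter".
best = bayes_opt(lambda x: -(x - 0.7) ** 2, (0.0, 1.0))
```

With this quadratic test objective the loop typically homes in on the peak within a handful of evaluations.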
Mean Field Variational Bayesian Data Assimilation
NASA Astrophysics Data System (ADS)
Vrettas, M.; Cornford, D.; Opper, M.
2012-04-01
Current data assimilation schemes propose a range of approximate solutions to the classical data assimilation problem, particularly state estimation. Broadly, there are three main active research areas: ensemble Kalman filter methods, which rely on statistical linearization of the model evolution equations; particle filters, which provide a discrete point representation of the posterior filtering or smoothing distribution; and 4DVAR methods, which seek the most likely posterior smoothing solution. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the most probable posterior distribution over the states, within the family of non-stationary Gaussian processes. Our original work on variational Bayesian approaches to data assimilation sought the best approximating time-varying Gaussian process to the posterior smoothing distribution for stochastic dynamical systems. This approach was based on minimising the Kullback-Leibler divergence between the true posterior over paths and our Gaussian process approximation. So long as the observation density was sufficiently high to bring the posterior smoothing density close to Gaussian, the algorithm proved very effective on lower-dimensional systems. However, for higher-dimensional systems the algorithm was computationally very demanding. We have been developing a mean field version of the algorithm which treats the state variables at a given time as being independent in the posterior approximation, but still accounts for the relationships between them in the mean solution arising from the original dynamical system. In this work we present the new mean field variational Bayesian approach, illustrating its performance on a range of classical data assimilation problems. We discuss the potential and limitations of the new approach. 
We emphasise that the variational Bayesian approach we adopt, in contrast to other variational approaches, provides a bound on the marginal likelihood of the observations given the model parameters. This also allows inference of parameters such as observation errors, and of parameters in the model and the model error representation, particularly if the model is written in a deterministic form with small additive noise. We stress that our approach can address very long time windows and weak-constraint settings. Moreover, like traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem. We finish with a sketch of the future directions for our approach.
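The bound referred to above is the standard variational free-energy bound; in our notation (x the state path, y the observations, θ the model parameters):

\[
\ln p(y \mid \theta) \;=\; \mathcal{F}(q,\theta) \;+\; \mathrm{KL}\bigl(q(x)\,\big\|\,p(x \mid y,\theta)\bigr)
\;\ge\; \mathcal{F}(q,\theta) \;=\; \mathbb{E}_{q}\bigl[\ln p(y,x \mid \theta)\bigr] \;-\; \mathbb{E}_{q}\bigl[\ln q(x)\bigr],
\]

so maximizing \(\mathcal{F}\) over the Gaussian-process family simultaneously tightens the bound on the marginal likelihood and minimizes the KL divergence to the true posterior over paths.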
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. Improving image registration based on RPM is therefore an important issue. Methods In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is preserved well by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM, and they maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registrations are evaluated as well. 
Again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, both large and small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions The results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and reverse transformations between two images. PMID:25559889
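Schematically, the inverse-consistency idea described above amounts to augmenting the matching cost with penalties that tie the forward map f and backward map g together (our notation; the full cost in the paper also contains RPM's fuzzy-correspondence and thin-plate spline regularization terms):

\[
E(f,g) \;=\; E_{\mathrm{match}}(f) \;+\; E_{\mathrm{match}}(g)
\;+\; \lambda\Bigl(\bigl\lVert f \circ g - \mathrm{id} \bigr\rVert^{2} \;+\; \bigl\lVert g \circ f - \mathrm{id} \bigr\rVert^{2}\Bigr),
\]

where id is the identity map; the penalty vanishes exactly when the two transformations are inverses of each other.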
Symbol Error Rate of Underlay Cognitive Relay Systems over Rayleigh Fading Channel
NASA Astrophysics Data System (ADS)
Ho van, Khuong; Bao, Vo Nguyen Quoc
Underlay cognitive systems allow secondary users (SUs) to access the licensed band allocated to primary users (PUs) for better spectrum utilization, with a power constraint imposed on SUs so that their operation does not harm the normal communication of PUs. This constraint, which limits the coverage range of SUs, can be offset by relaying techniques that take advantage of shorter-range communication for lower path loss. Symbol error rate (SER) analysis of underlay cognitive relay systems over fading channels has not been reported in the literature. This paper fills this gap. The derived SER expressions are validated by simulations and show that underlay cognitive relay systems suffer from a high error floor for any modulation level.
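The effect of the underlay power constraint on symbol errors can be sketched with a Monte Carlo simulation. This is not the paper's closed-form analysis; BPSK, the unit-mean Rayleigh channels, and the threshold values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def underlay_ser_bpsk(snr_db, i_th=1.0, p_max=10.0, n=200_000):
    """Monte Carlo BPSK symbol error rate for a secondary link whose
    transmit power is capped by an interference threshold at the PU."""
    g_sp = rng.exponential(1.0, n)          # SU -> PU interference channel gain
    g_ss = rng.exponential(1.0, n)          # SU -> SU data channel gain
    p_tx = np.minimum(p_max, i_th / g_sp)   # underlay power constraint
    snr = 10 ** (snr_db / 10) * p_tx * g_ss
    bits = rng.integers(0, 2, n) * 2 - 1    # BPSK symbols in {-1, +1}
    rx = np.sqrt(snr) * bits + rng.normal(0, 1, n)
    return np.mean((rx > 0) != (bits > 0))

print(underlay_ser_bpsk(10))
```

Sweeping `snr_db` shows how the interference cap flattens the SER curve relative to an unconstrained link.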
Local band gap measurements by VEELS of thin film solar cells.
Keller, Debora; Buecheler, Stephan; Reinhard, Patrick; Pianezzi, Fabian; Pohl, Darius; Surrey, Alexander; Rellinghaus, Bernd; Erni, Rolf; Tiwari, Ayodhya N
2014-08-01
This work presents a systematic study that evaluates the feasibility and reliability of local band gap measurements of Cu(In,Ga)Se2 thin films by valence electron energy-loss spectroscopy (VEELS). The compositional gradients across the Cu(In,Ga)Se2 layer cause variations in the band gap energy, which are experimentally determined using a monochromated scanning transmission electron microscope (STEM). The results reveal the expected band gap variation across the Cu(In,Ga)Se2 layer and therefore confirm the feasibility of local band gap measurements of Cu(In,Ga)Se2 by VEELS. The precision and accuracy of the results are discussed based on the analysis of individual error sources, which leads to the conclusion that the precision of our measurements is limited chiefly by the acquisition reproducibility, provided the signal-to-noise ratio of the spectrum is high enough. Furthermore, we simulate the impact of radiation losses on the measured band gap value and propose a thickness-dependent correction. In future work, band gap variations will be measured on an even more localized length scale to investigate, e.g., the influence of chemical inhomogeneities and dopant accumulations at grain boundaries.
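One common way to extract a band gap from a loss spectrum onset, sketched below, fits the direct-gap model I(E) ∝ (E − Eg)^1/2, so that I² is linear in E and the E-intercept estimates Eg. This is a generic onset-fitting sketch with synthetic data, not necessarily the exact procedure of the paper.

```python
import numpy as np

# Synthetic direct-gap onset: I(E) = A * sqrt(E - Eg) for E > Eg.
eg_true, amp = 1.15, 3.0
energy = np.linspace(1.2, 1.6, 80)          # energy loss, eV
intensity = amp * np.sqrt(energy - eg_true)

# For a direct gap, I^2 is linear in E; the E-intercept estimates Eg.
slope, intercept = np.polyfit(energy, intensity**2, 1)
eg_est = -intercept / slope
print(round(eg_est, 3))  # ≈ 1.15
```

On real spectra the fit window and background subtraction dominate the error budget, consistent with the reproducibility limits the study discusses.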
[Phenylephrine dosing error in Intensive Care Unit. Case of the trimester].
2013-01-01
A real clinical case reported to SENSAR is presented. A patient admitted to the surgical intensive care unit following a lung resection suffered arterial hypotension. The nurse was asked to give the patient 1 mL of phenylephrine. A few seconds later, the patient experienced a hypertensive crisis, which resolved spontaneously without harm. The nurse was then interviewed and a dosing error was identified: she had mistakenly given the patient 1 mg of phenylephrine (1 mL) instead of 100 mcg (1 mL of the standard dilution, 1 mg in 10 mL). The incident analysis revealed latent factors (event triggers): a lack of protocols and standard operating procedures, communication errors among team members (physician-nurse), suboptimal training, and an underdeveloped safety culture. To prevent similar incidents in the future, the following actions were implemented in the surgical intensive care unit: a protocol for boluses and short-lived infusions (<30 min), designed by physicians and nurses to standardize the administration of drugs with a high potential for error, and the adoption of communication techniques to close the communication gap, in particular repeated read-back checks of what was said and understood ("closed loop"). Labeling syringes with the drug dilution was also recommended. Copyright © 2013 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España. All rights reserved.
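The dilution arithmetic at the heart of this incident is worth making explicit; the sketch below only restates the concentrations given in the report.

```python
# Standard dilution from the report: 1 mg phenylephrine diluted in 10 mL
# gives 100 mcg/mL, so 1 mL of the dilution delivers 100 mcg, whereas
# 1 mL of the undiluted 1 mg/mL ampoule delivers 1000 mcg.
stock_mcg = 1 * 1000                 # 1 mg ampoule, in micrograms
dilution_ml = 10
conc_mcg_per_ml = stock_mcg / dilution_ml
print(conc_mcg_per_ml)               # 100.0 mcg/mL
print(stock_mcg / conc_mcg_per_ml)   # 10.0 -> a tenfold overdose per mL
```

A tenfold gap between two syringes of identical volume is exactly the failure mode that syringe labeling and closed-loop read-back aim to catch.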
NASA Astrophysics Data System (ADS)
Katiyar, Ajit K.; Grimm, Andreas; Bar, R.; Schmidt, Jan; Wietler, Tobias; Joerg Osten, H.; Ray, Samit K.
2016-10-01
Compressively strained Ge films have been grown on relaxed Si0.45Ge0.55 virtual substrates by molecular beam epitaxy in the presence of Sb as a surfactant. Structural characterization shows that films grown with the surfactant exhibit very smooth surfaces and a relatively higher strain than those grown without any surfactant. The variation of strain with increasing Ge layer thickness was analyzed using Raman spectroscopy. The strain is found to decrease with increasing film thickness owing to the onset of island nucleation following the Stranski-Krastanov growth mechanism. No-phonon-assisted direct band gap photoluminescence from the compressively strained Ge films grown on relaxed Si0.45Ge0.55 has been achieved up to room temperature. Excitation-power- and temperature-dependent photoluminescence have been studied in detail to investigate the origin of the different emission sub-bands.
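Extracting strain from the Raman shift, as done in this study, typically uses a linear relation between peak position and in-plane strain. The reference frequency and strain-shift coefficient below are common literature-style values, not numbers taken from the paper.

```python
# Hedged sketch: in-plane biaxial strain from the Ge-Ge Raman peak,
# eps = (omega - omega_0) / b, with omega_0 ~ 300.7 cm^-1 for bulk Ge and a
# strain-shift coefficient b ~ -415 cm^-1 (both values are assumptions;
# reported coefficients vary between studies).
def strain_from_raman(omega_cm1, omega0=300.7, b=-415.0):
    return (omega_cm1 - omega0) / b

# A peak blue-shifted to 304.0 cm^-1 implies roughly 0.8% compressive
# strain with the convention eps < 0 for compression:
print(strain_from_raman(304.0))
```

Tracking this quantity versus film thickness reproduces the kind of strain-relaxation trend the abstract describes.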
Cover-layer with High Refractive Index for Near-Field Recording Media
NASA Astrophysics Data System (ADS)
Kim, Jin-Hong; Lee, Jun-Seok
2007-06-01
TiO2 nanoparticles are added into UV-curable resin to increase the refractive index of the cover-layer laminated for cover-layer incident near-field recording media. A high refractive index is required for the cover-layer operating with an optical head with a high numerical aperture. The eye pattern from a cover-layer coated 20 GB read-only memory disc in which the refractive index of the cover-layer is 1.75 is achieved, but the gap servo is unstable owing to the rough surface of the cover-layer. Even though the light loss due to the nanoparticles is negligible, a rough microstructure is developed by adding the nanoparticles into an organic binder material. To achieve a smooth surface for a stable gap servo, the solubility of the nanoparticles should be enhanced by the optimization of the surface of the nanoparticles.
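The index-raising effect of loading TiO2 nanoparticles into a resin can be estimated with an effective-medium mixing rule. The Maxwell Garnett model and the index values below (resin n ≈ 1.5, TiO2 n ≈ 2.5) are illustrative assumptions, not data from the paper.

```python
import numpy as np

def maxwell_garnett_index(n_matrix, n_particle, f):
    """Effective refractive index of small particles (volume fraction f)
    dispersed in a matrix, via the Maxwell Garnett mixing rule; assumes
    particles much smaller than the wavelength and negligible scattering."""
    em, ep = n_matrix**2, n_particle**2      # permittivities
    num = ep + 2 * em + 2 * f * (ep - em)
    den = ep + 2 * em - f * (ep - em)
    return np.sqrt(em * num / den)

# Illustrative loading sweep: the effective index rises with fill fraction.
for f in (0.0, 0.2, 0.4):
    print(f, round(maxwell_garnett_index(1.5, 2.5, f), 3))
```

Under these assumed indices, reaching the n = 1.75 reported for the cover-layer requires a substantial fill fraction, which is consistent with the surface-roughness trade-off the abstract describes.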
Heterointerface engineering of broken-gap InAs/GaSb multilayer structures.
Liu, Jheng-Sin; Zhu, Yan; Goley, Patrick S; Hudait, Mantu K
2015-02-04
Broken-gap InAs/GaSb strain-balanced multilayer structures were grown by molecular beam epitaxy (MBE), and their structural, morphological, and band alignment properties were analyzed. A precise shutter sequence during the MBE growth process enabled the strain-balanced structure to be achieved. Cross-sectional transmission electron microscopy exhibited sharp heterointerfaces, with lattice lines extending from the top GaSb layer to the bottom InAs layer. X-ray analysis further confirmed a strain-balanced InAs/GaSb multilayer structure. A smooth surface morphology with a surface roughness of ∼0.5 nm was demonstrated. An effective barrier height of -0.15 eV at the GaSb/InAs heterointerface was determined by X-ray photoelectron spectroscopy and further corroborated by simulation. These results demonstrate desirable characteristics of mixed As/Sb material systems for high-performance, low-power tunnel field-effect transistor applications.
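XPS-based band alignment at a heterointerface is commonly evaluated with Kraut's method: the valence-band offset follows from the core-level-to-VBM separations in the two bulk films and the core-level separation measured across the interface. The sketch below uses illustrative binding energies (in eV), not the paper's measured values, chosen so the result lands near the reported -0.15 eV.

```python
# Hedged sketch of Kraut's XPS band-alignment method. The valence-band
# offset is  dEv = (E_cl - E_vbm)_A - (E_cl - E_vbm)_B - dE_cl(interface),
# where E_cl are core-level binding energies and E_vbm the valence-band
# maxima. All numbers below are illustrative placeholders.
def valence_band_offset(cl_to_vbm_a, cl_to_vbm_b, delta_cl_interface):
    return cl_to_vbm_a - cl_to_vbm_b - delta_cl_interface

print(valence_band_offset(18.90, 17.10, 1.95))  # -0.15 eV-style result
```

Its sign convention (negative offset here) is what characterizes a broken-gap alignment, where the InAs conduction band dips below the GaSb valence band.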
The aerodynamic performance of several flow control devices for internal flow systems
NASA Technical Reports Server (NTRS)
Eckert, W. T.; Wettlaufer, B. M.; Mort, K. W.
1982-01-01
An experimental research and development program was undertaken to develop and document new flow-control devices for use in the major modifications to the 40- by 80-Foot Wind Tunnel at Ames Research Center. These devices, which are applicable to other facilities as well, included grid-type and quasi-two-dimensional flow straighteners, louver panels for valving, and turning-vane cascades with net turning angles from 0 deg to 90 deg. The tests were conducted at model scale over a chord-based Reynolds number range from 2 x 10^5 to 17 x 10^5. The results showed quantitatively the performance benefits of faired, low-blockage, smooth-surface straightener systems, and the advantages of curved turning vanes with hinge-line gaps sealed and a preferred chord-to-gap ratio between 2.5 and 3.0 for 45 deg or 90 deg turns.
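The chord-based Reynolds number that sets the test range can be sketched directly. The air properties and the model chord below are illustrative assumptions, not values from the report.

```python
# Chord-based Reynolds number Re = rho * V * c / mu. Sea-level air
# properties and a 0.15 m model vane chord are assumed for illustration.
def reynolds(velocity_ms, chord_m, rho=1.225, mu=1.81e-5):
    return rho * velocity_ms * chord_m / mu

# At 20 m/s this assumed chord gives Re near the low end of the reported
# 2e5 to 17e5 range:
print(f"{reynolds(20, 0.15):.2e}")
```

Matching model-scale Reynolds number to the full-scale facility is what makes such cascade data transferable to other installations.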
The section TlInSe2-TlSbSe2 of the system Tl-In-Sb-Se
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guseinov, G.D.; Chapanova, L.M.; Mal'sagov, A.U.
1985-09-01
The ternary compounds A(I)B(III)C(VI)2 (where A(I) is univalent Tl; B(III) is Ga or In; and C(VI) is S, Se or Te) form a class of semiconductors with a large number of different gap widths. The compounds crystallize in the chalcopyrite structure. Solid solutions based on these compounds, which permit varying the gap width and other physical parameters smoothly over wide limits, are of great interest. The authors synthesized the compounds TlInSe2 and TlSbSe2 from the starting materials Tl-000, In-000, Sb-000 and Se-OSCh-17-4 by direct fusion of the components, taken in a stoichiometric ratio, in quartz ampules evacuated to 1.3 x 10^-3 Pa and sealed.
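The smooth gap-width tuning in such solid solutions is usually modeled as a Vegard-type interpolation between the end-member compounds. The end-point gaps and the optional bowing term below are placeholders, not values from the article.

```python
# Hedged sketch: linear (Vegard-type) interpolation of the gap width of a
# solid solution between two end-member compounds, with an optional bowing
# correction Eg(x) -= b * x * (1 - x). The end-point gaps eg_a and eg_b
# (eV) are illustrative assumptions.
def alloy_gap(x, eg_a=1.1, eg_b=0.7, bowing=0.0):
    return (1 - x) * eg_a + x * eg_b - bowing * x * (1 - x)

for x in (0.0, 0.5, 1.0):
    print(x, alloy_gap(x))
```

Sweeping the composition x is the computational analogue of the continuous gap tuning the abstract highlights as the motivation for studying this section.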
Theory of Magnetic Edge States in Chiral Graphene Nanoribbons
NASA Astrophysics Data System (ADS)
Capaz, Rodrigo; Yazyev, Oleg; Louie, Steven
2011-03-01
Using a model Hamiltonian approach including electron Coulomb interactions, we systematically investigate the electronic structure and magnetic properties of chiral graphene nanoribbons. We show that the presence of magnetic edge states is an intrinsic feature of any smooth graphene nanoribbons with chiral edges, and discover a number of structure-property relations. Specifically, we describe how the edge-state energy gap, zone-boundary edge-state energy splitting, and magnetic moment per edge length depend on the nanoribbon width and chiral angle. The role of environmental screening effects is also studied. Our results address a recent experimental observation of signatures of magnetic ordering at smooth edges of chiral graphene nanoribbons and provide an avenue towards tuning their properties via the structural and environmental degrees of freedom. This work was supported by National Science Foundation Grant No. DMR10-1006184, the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 and the ONR MURI program. RBC acknowledges financial support from Brazilian agencies CNPq, FAPERJ and INCT-Nanomateriais de Carbono.
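The width dependence of the edge-state gap reported here is often parameterized as inversely proportional to ribbon width. The prefactor below, and the neglect of any chirality dependence, are illustrative assumptions rather than the paper's fitted structure-property relations.

```python
# Hedged sketch: edge-state gap of a magnetic graphene nanoribbon modeled
# as Delta ~ a / w, with an assumed prefactor a = 0.8 eV*nm. The actual
# prefactor and chiral-angle dependence come from the paper's model
# Hamiltonian calculations and are not reproduced here.
def edge_state_gap_ev(width_nm, prefactor_ev_nm=0.8):
    return prefactor_ev_nm / width_nm

for w in (2.0, 4.0, 8.0):
    print(w, edge_state_gap_ev(w))
```

Such a scaling law makes explicit why the gap is a tunable structural degree of freedom: doubling the width halves the gap.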
Jhamb, Rajat; Gupta, Naresh; Garg, Sandeep; Kumar, Sachin; Gulati, Sameer; Mishra, Deepak; Beniwal, Pankaj
2007-01-01
We report the case of a 22-year-old woman who presented with acute onset flaccid quadriparesis. Physical examination showed mild pallor with cervical and axillary lymphadenopathy, hepatomegaly, and bilateral smooth enlarged kidneys. Neurological examination revealed lower motor neuron muscle weakness in all four limbs with hyporeflexia and a normal sensory examination. Laboratory investigations showed anemia, severe hypokalemia, and metabolic acidosis. Urinalysis showed a specific gravity of 1.010 and a pH of 7.0, with a positive urine anion gap. Ultrasound revealed hepatosplenomegaly with bilateral enlarged smooth kidneys. Renal biopsy was consistent with the diagnosis of non-Hodgkin lymphoma (B cell type). Metabolic acidosis, alkaline urine, and severe hypokalemia due to excessive urinary loss in our patient were suggestive of distal renal tubular acidosis. Renal involvement in lymphoma is usually subclinical, and clinically overt renal disease is rare. Diffuse lymphomatous infiltration of the kidneys may cause tubular dysfunction and present with hypokalemic paralysis. PMID:18074421
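The urine anion gap cited in the workup is a simple bedside calculation; the electrolyte values below are illustrative, not taken from the case report.

```python
# Urine anion gap UAG = Na + K - Cl (all in mmol/L). A positive UAG in the
# setting of hyperchloremic metabolic acidosis suggests impaired urinary
# NH4+ excretion, consistent with distal renal tubular acidosis, as in the
# reported case. Electrolyte values below are illustrative assumptions.
def urine_anion_gap(na, k, cl):
    return na + k - cl

print(urine_anion_gap(na=60, k=30, cl=70))   # 20 -> positive UAG
print(urine_anion_gap(na=40, k=20, cl=90))   # -30 -> negative UAG
```

A negative UAG would instead point to extrarenal bicarbonate loss (e.g., diarrhea), which is why the sign of the gap helps localize the acidosis.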
NASA Astrophysics Data System (ADS)
Lee, Hyun-Chul; Kumar, Arun; Wang, Wanqiu
2018-03-01
Coupled prediction systems for seasonal and inter-annual variability in the tropical Pacific are initialized from ocean analyses. In ocean initial states, small-scale perturbations are inevitably smoothed or distorted by observational limits and data assimilation procedures, which tends to introduce ocean initial errors for El Niño-Southern Oscillation (ENSO) prediction. Here, the evolution and effects of ocean initial errors arising from small-scale perturbations on the developing phase of ENSO are investigated with an ensemble of coupled model predictions. Results show that the ocean initial errors at the thermocline in the western tropical Pacific grow rapidly, project onto the first mode of the equatorial Kelvin wave, and propagate eastward along the thermocline. In boreal spring, when the surface buoyancy flux weakens in the eastern tropical Pacific, the subsurface errors influence sea surface temperature variability and would account for the seasonal dependence of prediction skill in the NINO3 region. It is concluded that ENSO prediction in the eastern tropical Pacific after boreal spring can be improved by increasing the observational accuracy of subsurface ocean initial states in the western tropical Pacific.
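The eastward error propagation described here travels at roughly the first-baroclinic Kelvin wave speed, which a back-of-envelope sketch recovers from the reduced-gravity shallow-water relation c = sqrt(g'H). The reduced gravity, equivalent depth, and basin width below are typical textbook-scale assumptions, not values from the paper.

```python
import math

# Phase speed of a first-mode equatorial Kelvin wave, c = sqrt(g' * H).
g_prime = 0.03    # reduced gravity, m/s^2 (typical assumption)
h_equiv = 260.0   # equivalent depth of the first baroclinic mode, m (assumption)
c = math.sqrt(g_prime * h_equiv)          # ~2.8 m/s

# Time for the signal to cross the equatorial Pacific (rough basin width).
basin_m = 15_000e3
crossing_days = basin_m / c / 86_400
print(round(c, 2), round(crossing_days))  # ~2.79 m/s, ~2 months
```

A crossing time of about two months is what links western-Pacific subsurface initial errors to eastern-Pacific SST errors by boreal spring.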