A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
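As a minimal numerical sketch of the selection rule described above (the drop tolerance, matrix size, and variable names are illustrative assumptions, not values from the paper), the regularization parameter can be read off a singular-value plot of the linearized kernel and then used to form the damped model resolution and unit covariance matrices:

import numpy as np

def tradeoff_regularization(G, drop_tol=1e-3):
    """Pick mu as the first singular value that approaches zero (here: falls
    below drop_tol times the largest one) and return the damped model
    resolution matrix R and unit model covariance matrix C."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    small = np.where(s < drop_tol * s[0])[0]
    mu = s[small[0]] if small.size else s[-1]      # regularization parameter
    f = s**2 / (s**2 + mu**2)                      # Tikhonov filter factors
    V = Vt.T
    R = V @ np.diag(f) @ Vt                        # model resolution matrix
    C = V @ np.diag(f**2 / s**2) @ Vt              # unit model covariance matrix
    return mu, R, C

# toy ill-conditioned kernel standing in for the last-iteration Jacobian
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 6)) @ np.diag(10.0 ** -np.arange(6))
mu, R, C = tradeoff_regularization(G)
print(mu, np.diag(R), np.sqrt(np.diag(C)))         # error bars from sqrt of diag(C)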
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
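The black-box Monte-Carlo estimate of the SURE divergence term can be sketched as follows for a real-valued toy problem (the probe size eps, the toy shrinkage "reconstruction", and the noise level are assumptions for illustration; the paper's extension to complex-valued MRI data is not reproduced here):

import numpy as np

def monte_carlo_sure(f, y, sigma2, eps=1e-4, rng=None):
    """Monte-Carlo SURE estimate of the risk of a black-box estimator f
    applied to data y = x + noise with known noise variance sigma2."""
    rng = np.random.default_rng() if rng is None else rng
    fy = f(y)
    b = rng.choice([-1.0, 1.0], size=y.shape)          # random probe vector
    div = np.vdot(b, f(y + eps * b) - fy) / eps         # divergence (trace) estimate
    n = y.size
    return np.linalg.norm(fy - y) ** 2 / n - sigma2 + 2.0 * sigma2 * div / n

# toy example: pick the shrinkage weight lam that minimizes the SURE estimate
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 4 * np.pi, 256))
sigma2 = 0.04
y = x + np.sqrt(sigma2) * rng.standard_normal(x.shape)
for lam in (0.1, 1.0, 10.0):
    f = lambda z, lam=lam: z / (1.0 + lam)              # toy linear "reconstruction"
    print(lam, monte_carlo_sure(f, y, sigma2, rng=rng))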
Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem
NASA Astrophysics Data System (ADS)
Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.
2017-05-01
In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
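A generic Tikhonov-damped Gauss-Newton iteration of the kind referred to above might look like the following sketch (the forward model, Jacobian, and damping value are illustrative assumptions, not the authors' pseudospectral solver):

import numpy as np

def regularized_gauss_newton(F, jac, p0, d, lam=1e-2, n_iter=20):
    """Tikhonov-damped Gauss-Newton iteration for fitting F(p) to data d."""
    p = p0.astype(float).copy()
    for _ in range(n_iter):
        r = F(p) - d                                   # residual
        J = jac(p)                                     # Jacobian of F at p
        A = J.T @ J + lam * np.eye(p.size)             # regularized normal matrix
        p = p - np.linalg.solve(A, J.T @ r)            # damped Gauss-Newton step
    return p

# toy example: recover amplitude and decay rate from noisy exponential data
t = np.linspace(0.0, 1.0, 50)
F = lambda p: p[0] * np.exp(-p[1] * t)
jac = lambda p: np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])
rng = np.random.default_rng(2)
d = F(np.array([2.0, 3.0])) + 0.01 * rng.standard_normal(t.size)
print(regularized_gauss_newton(F, jac, np.array([1.0, 1.0]), d))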
The International Conference on Amorphous and Liquid Semiconductors (9th).
1979-12-11
…loop effective action of a constant gluon field can be expressed in terms of the experimentally determinable A. … In the following chapter, the … regularization and Schwinger's proper time method. The renormalization mass parameters appearing in the two treatments can then be related and the exact one…
A space-frequency multiplicative regularization for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilize the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. It is more particularly pointed out that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.
Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.
Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C
2015-03-01
The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved, smooth deformation fields, it cannot properly deal with discontinuities often seen in the deformation field, because the regularizer cannot differentiate between discontinuities and the smooth part of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons with a bilateral filter. In contrast to Gaussian smoothing, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to the classical Gaussian filtering. By proper adjustment of two tunable parameters one can obtain more realistic deformations in the case of a discontinuity. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well known POPI dataset. Despite the increased computational complexity, the improved registration result is justified, in particular in abdominal data sets where discontinuities often appear due to sliding organ motion. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tsoumpas, C; Polycarpou, I; Thielemans, K; Buerger, C; King, A P; Schaeffter, T; Marsden, P K
2013-03-21
Following continuous improvement in PET spatial resolution, respiratory motion correction has become an important task. Two of the most common approaches that utilize all detected PET events to motion-correct PET data are the reconstruct-transform-average method (RTA) and motion-compensated image reconstruction (MCIR). In RTA, separate images are reconstructed for each respiratory frame, subsequently transformed to one reference frame and finally averaged to produce a motion-corrected image. In MCIR, the projection data from all frames are reconstructed by including motion information in the system matrix so that a motion-corrected image is reconstructed directly. Previous theoretical analyses have explained why MCIR is expected to outperform RTA. It has been suggested that MCIR creates less noise than RTA because the images for each separate respiratory frame will be severely affected by noise. However, recent investigations have shown that in the unregularized case RTA images can have fewer noise artefacts, while MCIR images are more quantitatively accurate but exhibit the characteristic salt-and-pepper noise. In this paper, we perform a realistic numerical 4D simulation study to compare the advantages gained by including regularization within reconstruction for RTA and MCIR, in particular using the median-root-prior incorporated in the ordered subsets maximum a posteriori one-step-late algorithm. In this investigation we have demonstrated that MCIR with proper regularization parameters reconstructs lesions with less bias and lower root mean square error, and with CNR and standard deviation similar to those of regularized RTA. This finding is reproducible for a variety of noise levels (25, 50, 100 million counts), lesion sizes (8 mm, 14 mm diameter) and iterations. Nevertheless, regularized RTA can also be a practical solution for motion compensation, as a proper level of regularization reduces both bias and mean square error.
Modelling and properties of a nonlinear autonomous switching system in fed-batch culture of glycerol
NASA Astrophysics Data System (ADS)
Wang, Juan; Sun, Qingying; Feng, Enmin
2012-11-01
A nonlinear autonomous switching system is proposed to describe the coupled fed-batch fermentation with the pH as the feedback parameter. We prove the non-Zeno behaviors of the switching system and some basic properties of its solution, including the existence, uniqueness, boundedness and regularity. Numerical simulation is also carried out, which reveals that the proposed system can describe the factual fermentation process properly.
Bayesian Inference for Generalized Linear Models for Spiking Neurons
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
Robust blood-glucose control using Mathematica.
Kovács, Levente; Paláncz, Béla; Benyó, Balázs; Török, László; Benyó, Zoltán
2006-01-01
A robust control design in the frequency domain using Mathematica is presented for regularization of the glucose level in type I diabetes persons under intensive care. The method originally proposed under Mathematica by Helton and Merino, now with an improved disturbance rejection constraint inequality, is employed using a three-state minimal patient model. The robustness of the resulting high-order linear controller is demonstrated by nonlinear closed-loop simulation in state-space, in the case of standard meal disturbances, and is compared with an H-infinity design implemented with the mu-toolbox of Matlab. The controller, designed with the model parameters representing the most favorable plant dynamics from the point of view of control purposes, can operate properly even for parameter values of the worst-case scenario.
Comparison of detrending methods for fluctuation analysis in hydrology
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David
2011-03-01
Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA). Therefore, removal of trends is important in the study of scaling properties of the time series. In this study, three detrending methods, including the adaptive detrending algorithm (ADA), the Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA were similar in detrending practice, and given proper parameters, these two methods can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose the fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods, i.e., the fluctuation information at larger time scales is kept well, an indication of relatively reliable performance in detrending. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in the study of scaling properties of hydrometeorological series with a relatively regular periodic trend using MF-DFA.
Fabrication of amorphous silica nanowires via oxygen plasma treatment of polymers on silicon
NASA Astrophysics Data System (ADS)
Chen, Zhuojie; She, Didi; Chen, Qinghua; Li, Yanmei; Wu, Wengang
2018-02-01
We demonstrate a facile non-catalytic method of fabricating silica nanowires at room temperature. Different polymers, including photoresists, parylene C and polystyrene, are patterned into pedestals on silicon substrates. The silica nanowires are obtained via oxygen plasma treatment of those pedestals. Compared to traditional strategies of silica nanowire fabrication, this method is much simpler and lower-cost. Through designing proper initial patterns and plasma process parameters, the method can be used to fabricate various regular nano-scale silica structure arrays in any laboratory with a regular oxygen-plasma-based cleaner or reactive-ion-etching equipment.
Proper time regularization and the QCD chiral phase transition
Cui, Zhu-Fang; Zhang, Jin-Li; Zong, Hong-Shi
2017-01-01
We study the QCD chiral phase transition at finite temperature and finite quark chemical potential within the two flavor Nambu–Jona-Lasinio (NJL) model, where a generalization of the proper-time regularization scheme is motivated and implemented. We find that in the chiral limit the whole transition line in the phase diagram is of second order, whereas for finite quark masses a crossover is observed. Moreover, if we take into account the influence of the quark condensate on the coupling strength (which also provides a possible way for the effective coupling to vary with temperature and quark chemical potential), it is found that a critical end point (CEP) may appear. These findings differ substantially from other NJL results which use alternative regularization schemes; some explanation and discussion are given at the end. This indicates that the regularization scheme can have a dramatic impact on the study of the QCD phase transition within the NJL model. PMID:28401889
Selection of regularization parameter for l1-regularized damage detection
NASA Astrophysics Data System (ADS)
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
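A rough sketch of the two selection ideas above (scanning the l1 weight, recording residual and solution norms, and checking a discrepancy-principle condition against the noise level) is given below; the ISTA solver, problem sizes, and tolerance factor are assumptions for illustration, not the authors' implementation:

import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L                  # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[5, 30]] = [1.0, -0.7]                          # sparse "damage" vector
sigma = 0.05
b = A @ x_true + sigma * rng.standard_normal(40)
noise_level = sigma * np.sqrt(40)                      # expected residual norm
for lam in np.logspace(-3, 0, 8):
    x = ista(A, b, lam)
    res = np.linalg.norm(A @ x - b)
    ok = abs(res - noise_level) < 0.5 * noise_level    # discrepancy-principle check
    print(f"lam={lam:.3g}  residual={res:.3f}  ||x||_1={np.abs(x).sum():.3f}  discrepancy ok: {ok}")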
Nekrasov and Argyres-Douglas theories in spherical Hecke algebra representation
NASA Astrophysics Data System (ADS)
Rim, Chaiho; Zhang, Hong
2017-06-01
AGT conjecture connects Nekrasov instanton partition function of 4D quiver gauge theory with 2D Liouville conformal blocks. We re-investigate this connection using the central extension of spherical Hecke algebra in q-coordinate representation, q being the instanton expansion parameter. Based on AFLT basis together with intertwiners we construct gauge conformal state and demonstrate its equivalence to the Liouville conformal state, with careful attention to the proper scaling behavior of the state. Using the colliding limit of regular states, we obtain the formal expression of irregular conformal states corresponding to Argyres-Douglas theory, which involves summation of functions over Young diagrams.
Developing proper mealtime behaviors of the institutionalized retarded.
O'brien, F; Azrin, N H
1972-01-01
The institutionalized mentally retarded display a variety of unsanitary, disruptive, and improper table manners. A program was developed that included (1) acquisition-training of a high standard of proper table manners and (2) maintenance procedures to provide continued motivation to maintain proper mealtime behaviors and decrease improper ones. Twelve retardates received acquisition training, individually, by a combination of verbal instruction, imitation, and manual guidance. The students then ate in their group dining arrangement, where the staff supervisor provided continuing approval for proper manners and verbal correction and timeout for improper manners. The results were: (1) the trained retardates showed significant improvement, whereas those untrained did not; (2) the trained retardates ate as well in the institution as non-retarded customers did in a public restaurant; (3) proper eating was maintained in the group dining setting; (4) timeout was rarely needed; (5) the program was easily administered by regular staff in a regular dining setting. The rapidity, feasibility, and effectiveness of the program suggest it as a solution to improper mealtime behaviors of the institutionalized mentally retarded.
Convergence of damped inertial dynamics governed by regularized maximally monotone operators
NASA Astrophysics Data System (ADS)
Attouch, Hedy; Cabot, Alexandre
2018-06-01
In a Hilbert space setting, we study the asymptotic behavior, as time t goes to infinity, of the trajectories of a second-order differential equation governed by the Yosida regularization of a maximally monotone operator with time-varying positive index λ (t). The dissipative and convergence properties are attached to the presence of a viscous damping term with positive coefficient γ (t). A suitable tuning of the parameters γ (t) and λ (t) makes it possible to prove the weak convergence of the trajectories towards zeros of the operator. When the operator is the subdifferential of a closed convex proper function, we estimate the rate of convergence of the values. These results are in line with the recent articles by Attouch-Cabot [3], and Attouch-Peypouquet [8]. In this last paper, the authors considered the case γ (t) = α/t, which is naturally linked to Nesterov's accelerated method. We unify, and often improve the results already present in the literature.
Physiological time-series analysis: what does regularity quantify?
NASA Technical Reports Server (NTRS)
Pincus, S. M.; Goldberger, A. L.
1994-01-01
Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity that appears to have potential application to a wide variety of physiological and clinical time-series data. The focus here is to provide a better understanding of ApEn to facilitate its proper utilization, application, and interpretation. After giving the formal mathematical description of ApEn, we provide a multistep description of the algorithm as applied to two contrasting clinical heart rate data sets. We discuss algorithm implementation and interpretation and introduce a general mathematical hypothesis of the dynamics of a wide class of diseases, indicating the utility of ApEn to test this hypothesis. We indicate the relationship of ApEn to variability measures, the Fourier spectrum, and algorithms motivated by study of chaotic dynamics. We discuss further mathematical properties of ApEn, including the choice of input parameters, statistical issues, and modeling considerations, and we conclude with a section on caveats to ensure correct ApEn utilization.
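A compact implementation of ApEn(m, r) as commonly defined is sketched below; the parameter choices m = 2 and r = 0.2 times the series standard deviation follow the usual convention and are assumptions here, not values prescribed by the article:

import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1D series x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    r = 0.2 * x.std() if r is None else r
    def phi(m):
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # pairwise Chebyshev distances between all length-m templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)                   # fraction of matching templates
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
irregular = rng.standard_normal(500)
print(apen(regular), apen(irregular))                  # the regular signal gives the lower ApEn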
Code of Federal Regulations, 2012 CFR
2012-01-01
... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...
Code of Federal Regulations, 2013 CFR
2013-01-01
... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...
Code of Federal Regulations, 2011 CFR
2011-01-01
... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...
Code of Federal Regulations, 2014 CFR
2014-01-01
... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...
Code of Federal Regulations, 2010 CFR
2010-01-01
... motor vehicle to properly carry out his or her assigned duties. Motor vehicle means a vehicle designed... vehicle (a) designed or used for military field training, combat, or tactical purposes; (b) used principally within the confines of a regularly established military post, camp, or depot; or (c) regularly...
NASA Astrophysics Data System (ADS)
Crisanto-Neto, J. C.; da Luz, M. G. E.; Raposo, E. P.; Viswanathan, G. M.
2016-09-01
In practice, the Lévy α-stable distribution is usually expressed in terms of the Fourier integral of its characteristic function. Indeed, known closed form expressions are relatively scarce given the huge parameter space: 0 < α ≤ 2 (Lévy index), −1 ≤ β ≤ 1 (skewness), σ > 0 (scale), and −∞ < μ < ∞ (shift). Hence, systematic efforts have been made towards the development of proper methods for analytically solving the mentioned integral. As a further contribution in this direction, here we propose a new way to tackle the problem. We consider an approach in which one first solves the Fourier integral through a formal (thus not necessarily convergent) series representation. Then, one applies (if necessary) a pertinent sum-regularization procedure to the resulting divergent series, so as to obtain an exact formula for the distribution, which is amenable to direct numerical calculations. As a concrete study, we address the centered, symmetric, unshifted and unscaled distribution (β = 0, μ = 0, σ = 1), with α = α_M = 2/M, M = 1, 2, 3, …. Conceivably, the present protocol could be applied to other sets of parameter values.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqing; Qi, Fenglin; Li, Shuping; Wei, Shaohua; Zhou, Jiahong
2012-10-01
A mechanochemical approach is developed for preparing a series of magnesium-aluminum layered double hydroxides (Mg-Al-LDHs). The approach consists of a mechanochemical step, the manual grinding of solid salts in an agate mortar, followed by a peptization step. To verify the LDH structure synthesized in the grinding step, X-ray diffraction (XRD) patterns, transmission electron microscopy (TEM) images and thermogravimetry/differential scanning calorimetry (TG-DSC) measurements of the product without peptization were characterized; the results show that amorphous particles with low crystallinity and poor thermal stability are obtained, and that the effect of peptization is to improve these properties: regular particles with high crystallinity and good thermal stability can be gained after peptization. Furthermore, the fundamental experimental parameters, including grinding time, the molar ratio of Mg to Al (defined as the R value) and the water content, were systematically examined in order to control the size and morphology of the LDH particles; regular hexagonal particles or spherical nanostructures can be efficiently obtained, and the particle sizes were controlled in the range of 52-130 nm by carefully adjusting these parameters. Finally, highly uniform Mg-Al-LDH particles can be synthesized under proper R values, suitable grinding times and a high degree of supersaturation.
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
Volume determination of irregularly-shaped quasi-spherical nanoparticles.
Attota, Ravi Kiran; Liu, Eileen Cherry
2016-11-01
Nanoparticles (NPs) are widely used in diverse application areas, such as medicine, engineering, and cosmetics. The size (or volume) of NPs is one of the most important parameters for their successful application. It is relatively straightforward to determine the volume of regular NPs such as spheres and cubes from a one-dimensional or two-dimensional measurement. However, due to the three-dimensional nature of NPs, it is challenging to determine the proper physical size of many types of regularly and irregularly-shaped quasi-spherical NPs at high throughput using a single tool. Here, we present a relatively simple method that determines a better volume estimate of NPs by combining measurements of their top-down projection areas and peak heights using two tools. The proposed method is significantly faster and more economical than the electron tomography method. We demonstrate the improved accuracy of the combined method over scanning electron microscopy (SEM) or atomic force microscopy (AFM) alone by using modeling, simulations, and measurements. This study also exposes the existence of inherent measurement biases for both SEM and AFM, which usually produce larger measured diameters with SEM than with AFM. However, in some cases SEM-measured diameters appear to have less error than AFM-measured diameters, especially for widely used irregularly-shaped NPs such as those of gold and silver. The method provides a much-needed, proper high-throughput volumetric measurement useful for many applications. Graphical abstract: The combined method for volume determination of irregularly-shaped quasi-spherical nanoparticles.
The Changes of Pulmonary Function in COPD During Four-Year Period
Cukic, Vesna; Lovre, Vladimir; Ustamujic, Aida
2013-01-01
Conflict of interest: none declared. Introduction: COPD (chronic obstructive pulmonary disease) is characterized by airflow limitation that is not fully reversible. Objective: to show the changes of pulmonary function in COPD during the 4-year evolution of the illness. Material and Methods: The research was done on patients suffering from COPD treated at the Clinic “Podhrastovi” during 2006 and 2007. The tested parameters were examined from the date a patient with COPD was admitted for hospital treatment in 2006 or 2007 and then followed prospectively until 2010 or 2011 (the follow-up period was 4 years). A total of 199 treated patients were chosen at random and regularly attended the control examinations. The study was conducted on adult patients of both sexes and of different age groups. For each patient the duration of illness was recorded, as were sex, age, smoking habits, information about the regularity of taking bronchodilator therapy during remissions of the disease and about the treatment of disease exacerbations, and the results of pulmonary function tests: FVC (forced vital capacity), FEV1 (forced expiratory volume in one second) and bronchodilator reversibility testing. All these parameters were measured at the beginning and at the end of each hospital treatment on the apparatus of the Clinic “Podhrastovi”. We analyzed the data obtained at the beginning of the first hospitalization and at the end of the last hospitalization, or at the last control in the outpatient department when the patient was in a stable state. Patients were divided into three groups according to the number of exacerbations per year. Results: Airflow limitation in COPD is progressive; both FVC and FEV1 show a statistically significant decrease during the 4-year follow-up period (p = 0.05 for both parameters). However, in patients regularly treated in phases of remission and exacerbation of the illness, the course of the illness is slower: the fall of FVC and FEV1 is statistically significantly smaller in those who received regular treatment in phases of remission and exacerbation (p = 0.01 for both parameters). The number of patients responding properly to bronchodilators decreased statistically significantly during the follow-up period (p = 0.05). Conclusion: COPD is characterized by airflow limitation which is progressive in the course of the illness, but that course may be slowed using appropriate treatment during remissions and exacerbations of the disease. PMID:24082829
Uterine Contraction Modeling and Simulation
NASA Technical Reports Server (NTRS)
Liu, Miao; Belfore, Lee A.; Shen, Yuzhong; Scerbo, Mark W.
2010-01-01
Building a training system for medical personnel to properly interpret fetal heart rate tracing requires developing accurate models that can relate various signal patterns to certain pathologies. In addition to modeling the fetal heart rate signal itself, the change of uterine pressure that bears strong relation to fetal heart rate and provides indications of maternal and fetal status should also be considered. In this work, we have developed a group of parametric models to simulate uterine contractions during labor and delivery. Through analysis of real patient records, we propose to model uterine contraction signals by three major components: regular contractions, impulsive noise caused by fetal movements, and low amplitude noise invoked by maternal breathing and measuring apparatus. The regular contractions are modeled by an asymmetric generalized Gaussian function and least squares estimation is used to compute the parameter values of the asymmetric generalized Gaussian function based on uterine contractions of real patients. Regular contractions are detected based on thresholding and derivative analysis of uterine contractions. Impulsive noise caused by fetal movements and low amplitude noise by maternal breathing and measuring apparatus are modeled by rational polynomial functions and Perlin noise, respectively. Experiment results show the synthesized uterine contractions can mimic the real uterine contractions realistically, demonstrating the effectiveness of the proposed algorithm.
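One possible parameterization of the asymmetric generalized Gaussian fit described above is sketched below (the functional form, baseline, and synthetic numbers are assumptions for illustration, not the authors' exact model):

import numpy as np
from scipy.optimize import curve_fit

def asym_gen_gauss(t, a, t0, wl, wr, pl, pr):
    """Asymmetric generalized Gaussian: separate width/shape on each side of the peak time t0."""
    left = a * np.exp(-np.abs((t - t0) / wl) ** pl)
    right = a * np.exp(-np.abs((t - t0) / wr) ** pr)
    return np.where(t < t0, left, right)

# synthetic "contraction": a bump riding on a constant baseline tone plus noise
rng = np.random.default_rng(5)
t = np.linspace(0.0, 160.0, 400)                       # seconds
truth = asym_gen_gauss(t, 50.0, 70.0, 18.0, 28.0, 2.0, 1.5)
y = 10.0 + truth + rng.standard_normal(t.size)

p0 = [40.0, 60.0, 15.0, 20.0, 2.0, 2.0]                # initial guess
popt, _ = curve_fit(lambda t, *p: 10.0 + asym_gen_gauss(t, *p), t, y, p0=p0)
print(np.round(popt, 2))                               # amplitude, peak time, widths, shape exponents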
40 CFR 85.2104 - Owners' compliance with instructions for proper maintenance and use.
Code of Federal Regulations, 2011 CFR
2011-07-01
... automobiles for the relevant maintenance instruction(s); or (2) A showing that the vehicle has been submitted... to someone who regularly engages in the business of servicing automobiles for the purpose of... area in which the vehicle or engine is located, unless the written instructions for proper maintenance...
40 CFR 85.2104 - Owners' compliance with instructions for proper maintenance and use.
Code of Federal Regulations, 2013 CFR
2013-07-01
... automobiles for the relevant maintenance instruction(s); or (2) A showing that the vehicle has been submitted... to someone who regularly engages in the business of servicing automobiles for the purpose of... area in which the vehicle or engine is located, unless the written instructions for proper maintenance...
40 CFR 85.2104 - Owners' compliance with instructions for proper maintenance and use.
Code of Federal Regulations, 2014 CFR
2014-07-01
... automobiles for the relevant maintenance instruction(s); or (2) A showing that the vehicle has been submitted... to someone who regularly engages in the business of servicing automobiles for the purpose of... area in which the vehicle or engine is located, unless the written instructions for proper maintenance...
40 CFR 85.2104 - Owners' compliance with instructions for proper maintenance and use.
Code of Federal Regulations, 2010 CFR
2010-07-01
... automobiles for the relevant maintenance instruction(s); or (2) A showing that the vehicle has been submitted... to someone who regularly engages in the business of servicing automobiles for the purpose of... area in which the vehicle or engine is located, unless the written instructions for proper maintenance...
40 CFR 85.2104 - Owners' compliance with instructions for proper maintenance and use.
Code of Federal Regulations, 2012 CFR
2012-07-01
... automobiles for the relevant maintenance instruction(s); or (2) A showing that the vehicle has been submitted... to someone who regularly engages in the business of servicing automobiles for the purpose of... area in which the vehicle or engine is located, unless the written instructions for proper maintenance...
The Link between Nutrition and Physical Activity in Increasing Academic Achievement
ERIC Educational Resources Information Center
Asigbee, Fiona M.; Whitney, Stephen D.; Peterson, Catherine E.
2018-01-01
Background: Research demonstrates a link between decreased cognitive function in overweight school-aged children and improved cognitive function among students with high fitness levels and children engaging in regular physical activity (PA). The purpose of this study was to examine whether regular PA and proper nutrition together had a significant…
Effects of crustal layering on source parameter inversion from coseismic geodetic data
NASA Astrophysics Data System (ADS)
Amoruso, A.; Crescentini, L.; Fidani, C.
2004-10-01
We study the effect of a superficial layer overlying a half-space on the surface displacements caused by uniform slipping of a dip-slip normal rectangular fault. We compute static coseismic displacements using a 3-D analytical code for different characteristics of the layered medium, different fault geometries and different configurations of bench marks to simulate different kinds of geodetic data (GPS, Synthetic Aperture Radar, and levellings). We perform both joint and separate inversions of the three components of synthetic displacement without constraining fault parameters, apart from strike and rake, and using a non-linear global inversion technique under the assumption of a homogeneous half-space. Differences between synthetic displacements computed in the presence of the superficial soft layer and in a homogeneous half-space do not show a simple regular behaviour, even if a few features can be identified. Consequently, the retrieved parameters of the homogeneous equivalent fault obtained by unconstrained inversion of surface displacements also do not show a simple regular behaviour. We point out that the presence of a superficial layer may lead to misestimating several fault parameters, in both joint and separate inversions of the three components of synthetic displacement, and that the effects of the superficial layer can change depending on whether all fault parameters are left free in the inversions or not. In the inversion of any kind of coseismic geodetic data, fault size and slip can be largely misestimated, but the product (fault length) × (fault width) × slip, which is proportional to the seismic moment for a given rigidity modulus, is often well determined (within a few per cent). Because inversion of coseismic geodetic data assuming a layered medium is impracticable, we suggest that a case-by-case study involving some kind of recursive determination of fault parameters through data correction is the proper approach when layering is important.
Duality and the Knizhnik-Polyakov-Zamolodchikov relation in Liouville quantum gravity.
Duplantier, Bertrand; Sheffield, Scott
2009-04-17
We present a (mathematically rigorous) probabilistic and geometrical proof of the Knizhnik-Polyakov-Zamolodchikov relation between scaling exponents in a Euclidean planar domain D and in Liouville quantum gravity. It uses the properly regularized quantum area measure dμ_γ = ε^(γ²/2) e^(γ h_ε(z)) dz, where dz is the Lebesgue measure on D, γ is a real parameter, and 0 ≤ γ < 2.
NASA Astrophysics Data System (ADS)
Kaltenbacher, Barbara; Klassen, Andrej
2018-05-01
In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as ℓ2 data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibited better calculation efficiency than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial speculations regarding the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatic regularization parameters. The proposed algorithm exhibited superior performance compared with both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
Knapen, Stefan E; Riemersma-van der Lek, Rixt F; Haarman, Bartholomeus C M; Schoevers, Robert A
2016-10-13
Disruption of the biological rhythm in patients with bipolar disorder is a known risk factor for a switch in mood. This case study describes how modern techniques using ambulatory assessment of sleep parameters can help in signalling a mood switch and starting early treatment. We studied a 40-year-old woman with bipolar disorder experiencing a life event while wearing an actigraph to measure sleep-wake parameters. The night after the life event the woman had a later sleep onset and a shorter sleep duration. Adequate response of both the woman and the treating psychiatrist resulted in two normal nights with the use of 1 mg lorazepam, possibly preventing further mood disturbances. Ambulatory assessment of the biological rhythm can function as an add-on to regular signalling plans for prevention of episodes in patients with bipolar disorder. More research should be conducted to validate clinical applicability and proper protocols, and to understand the underlying mechanisms. 2016 BMJ Publishing Group Ltd.
ERIC Educational Resources Information Center
Levesque, Luc
2014-01-01
Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
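The point about sampling times can be illustrated with a short numerical example (the signal frequency and sampling rates below are assumed for illustration): a 9 Hz sine sampled at 10 samples per second, i.e. below its 18 Hz Nyquist rate, shows up as a spurious 1 Hz oscillation, while sampling at 50 samples per second recovers the correct frequency.

import numpy as np

f_signal = 9.0                                         # true signal frequency, Hz
for fs in (10.0, 50.0):                                # sampling rates, samples per second
    n = np.arange(int(2 * fs))                         # two seconds of samples
    x = np.sin(2 * np.pi * f_signal * n / fs)
    spectrum = np.abs(np.fft.rfft(x))
    f_apparent = np.fft.rfftfreq(x.size, d=1.0 / fs)[np.argmax(spectrum)]
    print(f"fs = {fs:4.0f} Sa/s -> dominant frequency seen: {f_apparent:.1f} Hz")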
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
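For reference, the GCV rule that the method builds on can be sketched for plain Tikhonov restoration (the operator, noise level, and parameter grid below are assumptions; the paper's per-iteration TV scheme is not reproduced):

import numpy as np

def gcv_tikhonov(A, b, lams):
    """Pick the Tikhonov weight minimizing the generalized cross-validation score."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    n = b.size
    out_of_range = np.linalg.norm(b) ** 2 - np.linalg.norm(beta) ** 2   # data component outside range(A)
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam)                        # Tikhonov filter factors
        residual = np.linalg.norm((1.0 - f) * beta) ** 2 + out_of_range
        trace = n - np.sum(f)                          # trace of (I - influence matrix)
        scores.append(n * residual / trace**2)
    return lams[int(np.argmin(scores))]

rng = np.random.default_rng(6)
A = rng.standard_normal((60, 40)) @ np.diag(0.9 ** np.arange(40))   # ill-conditioned blur-like operator
x = np.zeros(40)
x[10:20] = 1.0
b = A @ x + 0.05 * rng.standard_normal(60)
print("GCV-selected lambda:", gcv_tikhonov(A, b, np.logspace(-6, 2, 50)))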
Principles of blood transfusion service audit.
Dosunmu, A O; Dada, M O
2005-12-01
Blood transfusion is still an important procedure in modern medical practice despite efforts to avoid it. This is due to its association with infections, especially HIV. It is therefore necessary to have proper quality control of its production, storage and usage [1]. A way of controlling usage is to perform regular clinical audits. To effect this, there has to be an agreed standard for appropriate use of blood. The aim of this paper is to briefly highlight the importance of audit, audit procedures and tools, i.e. required records, development of audit criteria and audit parameters. Every hospital/blood transfusion center is expected to develop a system of audit that is appropriate to its needs. The suggestions are mainly based on the experience at the Lagos University Teaching Hospital and the Lagos State Blood Transfusion Service.
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
Kołłątaj, Witold; Sygit, Katarzyna; Sygit, Marian; Karwat, Irena Dorota; Kołłątaj, Barbara
2011-01-01
The proper lifestyle of a child, including proper eating habits, should be monitored to ensure proper physical and psychological development. This applies particularly to rural areas which are economically, socially and educationally backward. The study included 1,341 rural schoolchildren and adolescents aged 9-13 years (734 females, 607 males). The representative survey research was conducted in 2008, making use of an original survey questionnaire. The results showed that the majority of respondents eat improperly. 83.2% of them have regular breakfast, and 62.6% have regular light lunch. Most respondents do not eat more than 4 meals a day (usually 3-4). It is worrying that the consumption of sweets is high (34.9% of the surveyed group eat them regularly), whereas fruit and vegetable consumption is low. In this study, relationships between types of diet and such descriptive variables as gender, parents' educational status, and economic situation of the households are described. In families where the parents have a higher education and the household situation is good, the eating habits are much better. The list of poor dietary habits of pupils from rural schools includes skipping breakfast and/or light lunch, high consumption of sweets and low consumption of fruit and vegetables. There are correlations between improper dietary habits and gender of the children and adolescents, educational status of parents, economic situation of households, and housing conditions.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise to signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
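A bare-bones version of the L-curve selection can be sketched as follows (SVD-based Tikhonov on a toy operator; the discrete-curvature corner finder and all numbers are assumptions, not the SXRIS implementation, which uses the generalized SVD):

import numpy as np

def l_curve_corner(A, b, lams):
    """Return the Tikhonov parameter at the corner (maximum curvature) of the L-curve."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    rho, eta = [], []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)                     # filter factors
        eta.append(np.log(np.linalg.norm(f * beta / s)))       # log solution norm
        rho.append(np.log(np.linalg.norm((1.0 - f) * beta)))   # log residual norm
    rho, eta = np.array(rho), np.array(eta)
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5   # signed curvature
    return lams[int(np.argmax(kappa[1:-1])) + 1]

rng = np.random.default_rng(7)
A = rng.standard_normal((50, 50)) @ np.diag(0.8 ** np.arange(50))   # ill-conditioned toy operator
x = np.sin(np.linspace(0, np.pi, 50))
b = A @ x + 0.01 * rng.standard_normal(50)
print("L-curve corner at lambda =", l_curve_corner(A, b, np.logspace(-8, 1, 200)))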
... rhythm problems (arrhythmias) in which the heart's natural pacemaker (sinus node) doesn't work properly. The sinus ... people with sick sinus syndrome eventually need a pacemaker to keep the heart in a regular rhythm. ...
40 CFR 60.4355 - How do I establish and document a proper parameter monitoring plan?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Standards of Performance for Stationary Combustion Turbines Monitoring § 60.4355 How do I establish and document a proper parameter monitoring plan? (a) The steam or water to fuel ratio or other parameters that...
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is combined with user-friendly graphical interfaces (GUIs) to form a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software was tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of the data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
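The one-step global-fit idea can be sketched with a simple primary/secondary model pair (a logistic primary model and a Ratkowsky-type square-root secondary model are assumed here purely for illustration; they are not necessarily among the nine combinations tested in the paper):

import numpy as np
from scipy.optimize import least_squares

def logistic(t, y0, ymax, mu):
    """Primary model: logistic growth from y0 towards ymax at specific rate mu."""
    return ymax / (1.0 + (ymax / y0 - 1.0) * np.exp(-mu * t))

def residuals(p, datasets):
    y0, ymax, b, Tmin = p
    res = []
    for T, t, y in datasets:
        mu = (b * (T - Tmin)) ** 2                     # secondary model: sqrt(mu) = b*(T - Tmin)
        res.append(logistic(t, y0, ymax, mu) - y)
    return np.concatenate(res)

# synthetic isothermal growth curves (e.g. log CFU/g) at three temperatures
rng = np.random.default_rng(8)
t = np.linspace(0.0, 30.0, 16)
true = dict(y0=3.0, ymax=9.0, b=0.05, Tmin=5.0)
datasets = []
for T in (15.0, 25.0, 35.0):
    mu = (true["b"] * (T - true["Tmin"])) ** 2
    datasets.append((T, t, logistic(t, true["y0"], true["ymax"], mu)
                     + 0.1 * rng.standard_normal(t.size)))
fit = least_squares(residuals, x0=[2.0, 8.0, 0.03, 2.0], args=(datasets,))
print(np.round(fit.x, 3))                              # y0, ymax, b, Tmin estimated globally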
Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.
2008-01-01
We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions, those best matching the dipping-layer structure of nearby outcrops. A reasonably well matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, the use of conventional regularization parameters did not provide as realistic results. Thus, we consider that even if there is only qualitative (i.e., visual) a-priori information about a site, as in the case of the East Canyon Dam, Utah, it might be possible to minimize the refraction nonuniqueness by estimating the most appropriate regularization parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ringleb, F.; Eylers, K.; Teubner, Th.
2016-03-14
A bottom-up approach is presented for the production of arrays of indium islands on a molybdenum layer on glass, which can serve as micro-sized precursors for indium compounds such as copper-indium-gallium-diselenide used in photovoltaics. Femtosecond laser ablation of glass and a subsequent deposition of a molybdenum film or direct laser processing of the molybdenum film both allow the preferential nucleation and growth of indium islands at the predefined locations in a following indium-based physical vapor deposition (PVD) process. A proper choice of laser and deposition parameters ensures the controlled growth of indium islands exclusively at the laser ablated spots. Based on a statistical analysis, these results are compared to the non-structured molybdenum surface, leading to randomly grown indium islands after PVD.
Kerr Reservoir LANDSAT experiment analysis for November 1980
NASA Technical Reports Server (NTRS)
Lecroy, S. R.
1982-01-01
An experiment was conducted on the waters of Kerr Reservoir to determine if reliable algorithms could be developed that relate water quality parameters to remotely sensed data. LANDSAT radiance data was used in the analysis since it is readily available and covers the area of interest on a regular basis. By properly designing the experiment, many of the unwanted variations due to atmosphere, solar, and hydraulic changes were minimized. The algorithms developed were constrained to satisfy rigorous statistical criteria before they could be considered dependable in predicting water quality parameters. A complete mix of different types of algorithms using the LANDSAT bands was generated to provide a thorough understanding of the relationships among the data involved. The study demonstrated that for the ranges measured, the algorithms that satisfactorily represented the data are mostly linear and only require a maximum of one or two LANDSAT bands. Rationing techniques did not improve the results since the initial design of the experiment minimized the errors that this procedure is effective against. Good correlations were established for inorganic suspended solids, iron, turbidity, and secchi depth.
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose: To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
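The variable-splitting scheme above relies on the element-wise soft-thresholding operator; as a hedged illustration, the following sketch shows that operator inside a plain iterative soft-thresholding (ISTA) loop for a generic ℓ1-regularized least-squares problem. The operator A, the data and the parameter values are synthetic, and this is not the FFT-based dipole-inversion pipeline of the paper.

import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1 (applied element-wise).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=200):
    # Minimize 0.5*||A x - b||^2 + lam*||x||_1 with iterative soft thresholding.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 120))
x_true = np.zeros(120); x_true[rng.choice(120, 8, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=60)
x_hat = ista(A, b, lam=0.05)
print("non-zeros recovered:", np.sum(np.abs(x_hat) > 0.1))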
NASA Astrophysics Data System (ADS)
Maslakov, M. L.
2018-04-01
This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
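One common way to realize such a multiplicative regularization, sketched below under the assumption of a simple linear force-reconstruction model, is to minimize the product of a data-fit term and a penalty term; each iteration then reduces to a Tikhonov step whose effective regularization parameter is the current ratio of the two terms, so no parameter has to be fixed beforehand. The operator, data and iteration count are illustrative, not the formulation used in the paper.

import numpy as np

def multiplicative_regularization(A, b, n_iter=30, eps=1e-12):
    # Minimize J(x) = ||A x - b||^2 * (||x||^2 + eps); each step is a Tikhonov
    # solve whose effective parameter is updated from the current iterate.
    n = A.shape[1]
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # unregularized starting point
    for _ in range(n_iter):
        lam = np.linalg.norm(A @ x - b) ** 2 / (np.linalg.norm(x) ** 2 + eps)
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 50)) / 50 + np.eye(50)   # mildly ill-conditioned operator
f_true = np.sin(np.linspace(0, np.pi, 50))        # "force" to reconstruct
b = A @ f_true + 0.02 * rng.normal(size=50)
print(np.round(multiplicative_regularization(A, b)[:5], 3))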
Policastro, A M
1979-10-01
Automobile accidents are the number one killer of children. Effective devices for protecting infants and older children are now available. Counseling the parents on the proper use of car seats should begin in the prenatal period and should continue during regular checkups. Knowledge of the excuses that parents give for not using these devices can help offset some of the existing apathy. Family physicians are in an ideal position to provide proper preventive health counseling on the use of car restraints for children.
A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc
A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). Proper orthogonal decomposition modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficient. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding additional constraints to the minimization seeking polynomial parameters, reducing error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternate processes that could benefit stability and efficiency, and desired extensions of the wakeROM.
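For readers unfamiliar with the decomposition step, the following minimal sketch computes POD modes from a snapshot matrix via an SVD (equivalent to diagonalizing the correlation tensor), the relative energy of each mode, and the dynamic coefficients obtained by back-projection; the snapshot data are synthetic placeholders rather than LES output.

import numpy as np

rng = np.random.default_rng(4)
n_points, n_snapshots = 500, 200

# Synthetic fluctuating-velocity snapshots: a few coherent structures plus noise.
x = np.linspace(0, 1, n_points)
modes_true = np.stack([np.sin(np.pi * k * x) for k in (1, 2, 3)])
coeffs_true = rng.normal(size=(3, n_snapshots)) * np.array([[3.0], [1.5], [0.5]])
U = modes_true.T @ coeffs_true + 0.05 * rng.normal(size=(n_points, n_snapshots))

# POD via SVD of the snapshot matrix (the method of snapshots would use U.T @ U).
phi, s, _ = np.linalg.svd(U, full_matrices=False)
energy = s**2 / np.sum(s**2)               # relative energy captured by each mode
a = phi.T @ U                               # dynamic (time) coefficients

print("energy in first 3 modes:", np.round(energy[:3], 3))
print("time-coefficient matrix shape:", a.shape)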
Computation of Asteroid Proper Elements: Recent Advances
NASA Astrophysics Data System (ADS)
Knežević, Z.
2017-12-01
The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in computation and stability assessment of proper elements, these advances can still be considered as important improvements offering solutions to some practical problems encountered in the past. The problem of getting unrealistic values of perihelion frequency for very low eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of Astraea asteroid family. The preliminary assessment of stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and advocates in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of more comprehensive and reliable direct estimate of their individual and sample average deviations from constancy.
On the regularization for nonlinear tomographic absorption spectroscopy
NASA Astrophysics Data System (ADS)
Dai, Jinghang; Yu, Tao; Xu, Lijun; Cai, Weiwei
2018-02-01
Tomographic absorption spectroscopy (TAS) has attracted increased research effort recently due to developments in both hardware and new imaging concepts such as nonlinear tomography and compressed sensing. Nonlinear TAS is one of the emerging modalities based on the concept of nonlinear tomography, and it has been successfully demonstrated both numerically and experimentally. However, all the previous demonstrations were realized using only two orthogonal projections, simply for ease of implementation. In this work, we examine the performance of nonlinear TAS using other beam arrangements and test the effectiveness of the beam optimization technique that has been developed for linear TAS. In addition, so far only a smoothness prior has been adopted and applied in nonlinear TAS. Nevertheless, there are also other useful priors, such as sparseness and model-based priors, which have not been investigated yet. This work aims to show how these priors can be implemented and included in the reconstruction process. Regularization through a Bayesian formulation will be introduced specifically for this purpose, and a method for the determination of a proper regularization factor will be proposed. The comparative studies performed with different beam arrangements and regularization schemes on a few representative phantoms suggest that the beam optimization method developed for linear TAS also works for the nonlinear counterpart, and that the regularization scheme should be selected properly according to the available a priori information under specific application scenarios so as to achieve the best reconstruction fidelity. Though this work is conducted in the context of nonlinear TAS, it can also provide useful insights for other tomographic modalities.
... alert and aware of their surroundings. Keep your car in good working order You may think of a car as simply a way to get from Point A to Point B, but cars need regular care to work properly. Make sure ...
Regularities And Irregularities Of The Stark Parameters For Single Ionized Noble Gases
NASA Astrophysics Data System (ADS)
Peláez, R. J.; Djurovic, S.; Cirišan, M.; Aparicio, J. A.; Mar S.
2010-07-01
Spectroscopy of ionized noble gases has a great importance for the laboratory and astrophysical plasmas. Generally, spectra of inert gases are important for many physics areas, for example laser physics, fusion diagnostics, photoelectron spectroscopy, collision physics, astrophysics etc. Stark halfwidths as well as shifts of spectral lines are usually employed for plasma diagnostic purposes. For example atomic data of argon krypton and xenon will be useful for the spectral diagnostic of ITER. In addition, the software used for stellar atmosphere simulation like TMAP, and SMART require a large amount of atomic and spectroscopic data. Availability of these parameters will be useful for a further development of stellar atmosphere and evolution models. Stark parameters data of spectral lines can also be useful for verification of theoretical calculations and investigation of regularities and systematic trends of these parameters within a multiplet, supermultiplet or transition array. In the last years, different trends and regularities of Stark parameters (halwidths and shifts of spectral lines) have been analyzed. The conditions related with atomic structure of the element as well as plasma conditions are responsible for regular or irregular behaviors of the Stark parameters. The absence of very close perturbing levels makes Ne II as a good candidate for analysis of the regularities. Other two considered elements Kr II and Xe II with complex spectra present strong perturbations and in some cases an irregularities in Stark parameters appear. In this work we analyze the influence of the perturbations to Stark parameters within the multiplets.
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
Developing a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method was able to overcome the inherent limitation of the computationally expensive MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making this method more suitable for deployment in real time.
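A hedged sketch of the general idea, not the paper's exact criterion: an LSQR solve (whose damping parameter plays the role of the regularization parameter) is wrapped in a Nelder-Mead simplex search. The objective below is a simple L-curve-style product of residual and solution norms used only as a stand-in, and the sensitivity matrix and data are synthetic.

import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(5)
J = rng.normal(size=(200, 400))                  # hypothetical sensitivity matrix
x_true = np.zeros(400); x_true[180:220] = 1.0
y = J @ x_true + 0.01 * rng.normal(size=200)

def criterion(log_lam):
    lam = 10.0 ** log_lam[0]
    # lsqr with damp=lam solves min ||J x - y||^2 + lam^2 ||x||^2
    x = lsqr(J, y, damp=lam)[0]
    # Stand-in selection criterion: product of residual and solution norms.
    return np.linalg.norm(J @ x - y) * np.linalg.norm(x)

best = minimize(criterion, x0=[-2.0], method="Nelder-Mead")
print("selected regularization parameter:", 10.0 ** best.x[0])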
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
Polarimetric image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Valenzuela, John R.
In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
X-Ray Phase Imaging for Breast Cancer Detection
2010-09-01
... regularization seeks the minimum-norm, least-squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory ... of norm that can effectively reflect the accuracy of the retrieved data as an image, if ‖δI_{k+1} − δI_k‖ is less than a predefined threshold value β ... pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the ...
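The fragment above refers to a total-variation (TV) norm and to a stopping test of the form ‖δI_{k+1} − δI_k‖ < β. A minimal sketch of both quantities for a 2D image follows; the discretization (isotropic TV with simple forward differences) and the array sizes are illustrative assumptions.

import numpy as np

def tv_norm(img):
    # Isotropic discrete total variation: L1 norm of the image gradient magnitude.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sum(np.sqrt(gx**2 + gy**2))

def converged(delta_I_new, delta_I_old, beta=1e-3):
    # Stop when the update between successive retrieved images is small enough.
    return np.linalg.norm(delta_I_new - delta_I_old) < beta

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # piecewise-constant test image
print("TV norm of test image:", tv_norm(img))
print("converged?", converged(img, img + 1e-4))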
Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods
NASA Astrophysics Data System (ADS)
Rusmanugroho, H.; Tromp, J.
2014-12-01
Recent studies show that isotropic seismic imaging based on the adjoint method reduces the low-frequency artifact caused by diving waves, which commonly occurs in two-way wave-equation migration, such as Reverse Time Migration (RTM). Here, we derive new expressions for the sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, where the tilt angle θ is equivalent to 90°, and for Tilted Transverse Isotropy (TTI), these parameters depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two approaches. Individual kernels ("images") are numerically constructed based on the interaction between the regular and adjoint wavefields in smoothed models which are in practice estimated through Full-Waveform Inversion (FWI). The final image is obtained as a result of summing all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.
Blocky inversion of multichannel elastic impedance for elastic parameters
NASA Astrophysics Data System (ADS)
Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza
2018-04-01
Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EI's are linearly described by the elastic parameters in the logarithm domain. Thus a linear weighted least squares inversion is employed to perform this step. Accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of exact Zoeppritz elastic impedance and the role of low frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
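To illustrate the second step, the sketch below assumes Connolly-type elastic impedance, for which ln EI is linear in ln Vp, ln Vs and ln ρ, and recovers the elastic parameters from a few angle-dependent EI values by a weighted least-squares solve; the angle set, weights and sample values are synthetic, not the field data of the paper.

import numpy as np

def ei_coefficients(theta_deg, K=0.25):
    # Connolly-type elastic-impedance exponents; K = average (Vs/Vp)^2 is assumed.
    s2 = np.sin(np.radians(theta_deg)) ** 2
    t2 = np.tan(np.radians(theta_deg)) ** 2
    return np.array([1.0 + t2, -8.0 * K * s2, 1.0 - 4.0 * K * s2])

angles = [10.0, 20.0, 30.0]                       # partial angle-stacks
A = np.stack([ei_coefficients(th) for th in angles])

# Synthetic "true" elastic parameters for a single sample.
vp, vs, rho = 3000.0, 1500.0, 2400.0
m_true = np.log([vp, vs, rho])
ln_ei = A @ m_true + 0.002 * np.random.default_rng(6).normal(size=len(angles))

# Weighted least squares (weights could reflect angle-stack noise levels).
W = np.diag([1.0, 1.0, 0.8])
m_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ ln_ei)
print("recovered Vp, Vs, rho:", np.round(np.exp(m_hat), 1))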
Florida's Fit to Achieve Program.
ERIC Educational Resources Information Center
Sander, Allan N.; And Others
1993-01-01
Describes Florida's "Fit to Achieve," a cardiovascular fitness education program for elementary students. Children are taught responsibility for their own cardiovascular fitness through proper exercise, personal exercise habits, and regular aerobic exercise. The program stresses collaborative effort between physical educators and…
Teach children to brush (image)
... child's overall good health. Without proper dental care tooth decay and gum disease can lead to serious problems such as cavities and gingivitis, swollen and bleeding gums. Regular visits to the dentist, brushing twice each day, and flossing, are ways to ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yu; Gao, Kai; Huang, Lianjie
Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a seismic velocity model along a 2D seismic line acquired at Eleven-Mile Canyon, located at the Southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.
Approximate isotropic cloak for the Maxwell equations
NASA Astrophysics Data System (ADS)
Ghosh, Tuhin; Tarikere, Ashwin
2018-05-01
We construct a regular isotropic approximate cloak for the Maxwell system of equations. The method of transformation optics has enabled the design of electromagnetic parameters that cloak a region from external observation. However, these constructions are singular and anisotropic, making practical implementation difficult. Thus, regular approximations to these cloaks have been constructed that cloak a given region to any desired degree of accuracy. In this paper, we show how to construct isotropic approximations to these regularized cloaks using homogenization techniques so that one obtains cloaking of arbitrary accuracy with regular and isotropic parameters.
Regular flow reversals in Rayleigh-Bénard convection in a horizontal magnetic field.
Tasaka, Yuji; Igaki, Kazuto; Yanagisawa, Takatoshi; Vogt, Tobias; Zuerner, Till; Eckert, Sven
2016-04-01
Magnetohydrodynamic Rayleigh-Bénard convection was studied experimentally using a liquid metal inside a box with a square horizontal cross section and aspect ratio of five. Systematic flow measurements were performed by means of ultrasonic velocity profiling that can capture time variations of instantaneous velocity profiles. Applying a horizontal magnetic field organizes the convective motion into a flow pattern of quasi-two-dimensional rolls arranged parallel to the magnetic field. The number of rolls has the tendency to decrease with increasing Rayleigh number Ra and to increase with increasing Chandrasekhar number Q. We explored convection regimes in a parameter range, at 2×10^{3}
Review & Peer Review of “Parameters for Properly Designed and Operated Flares” Documents
This page contains two 2012 memoranda on the review of EPA's parameters for properly designed and operated flares. One details the process of peer review, and the other provides background information and specific charge questions to the panel.
A comprehensive numerical analysis of background phase correction with V-SHARP.
Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand
2017-04-01
Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm^-1, and for maps calculated with a Fourier domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm^-1. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
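For context, a classical heuristic that likewise needs no noise level is the quasi-optimality rule; the sketch below applies it to plain Tikhonov regularization over a decreasing sequence of parameters, picking the one that minimizes the change between successive regularized solutions. This is a generic illustration, not the specific rule analyzed for the iteratively regularized Gauss-Newton method, and the operator and data are synthetic.

import numpy as np

def quasi_optimality_tikhonov(A, y, lambdas):
    # Heuristic (noise-level-free) rule: over a decreasing sequence of
    # regularization parameters, pick the one minimizing ||x_{k+1} - x_k||.
    n = A.shape[1]
    xs = [np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y) for lam in lambdas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k_star = int(np.argmin(diffs))
    return lambdas[k_star], xs[k_star]

rng = np.random.default_rng(7)
A = rng.normal(size=(100, 100)) / 100          # mildly ill-conditioned test operator
x_true = np.linspace(0.0, 1.0, 100)
y = A @ x_true + 0.01 * rng.normal(size=100)

lams = np.logspace(1, -8, 40)                  # decreasing sequence of parameters
lam_star, x_hat = quasi_optimality_tikhonov(A, y, lams)
print("heuristically selected regularization parameter:", lam_star)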
Triaging: common complaints in the workplace.
Zimmermann, Polly Gerber; Wachs, Joy E
2003-06-01
Occupational health nurses regularly encounter clients with benign, common complaints. However, the mundane nature should not detract from the need for careful evaluation and guidance. Obtaining a comprehensive history and making key assessments help guide the occupational health nurse to properly manage the client's complaint.
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
Soriano, Mercedes; Li, Hui; Jacquard, Cédric; Angenent, Gerco C.; Krochko, Joan; Offringa, Remko; Boutilier, Kim
2014-01-01
In Arabidopsis thaliana, zygotic embryo divisions are highly regular, but it is not clear how embryo patterning is established in species or culture systems with irregular cell divisions. We investigated this using the Brassica napus microspore embryogenesis system, where the male gametophyte is reprogrammed in vitro to form haploid embryos in the absence of exogenous growth regulators. Microspore embryos are formed via two pathways: a zygotic-like pathway, characterized by initial suspensor formation followed by embryo proper formation from the distal cell of the suspensor, and a pathway characterized by initially unorganized embryos lacking a suspensor. Using embryo fate and auxin markers, we show that the zygotic-like pathway requires polar auxin transport for embryo proper specification from the suspensor, while the suspensorless pathway is polar auxin transport independent and marked by an initial auxin maximum, suggesting early embryo proper establishment in the absence of a basal suspensor. Polarity establishment in this suspensorless pathway was triggered and guided by rupture of the pollen exine. Irregular division patterns did not affect cell fate establishment in either pathway. These results confirm the importance of the suspensor and suspensor-driven auxin transport in patterning, but also uncover a mechanism where cell patterning is less regular and independent of auxin transport. PMID:24951481
Correlating the ground truth of mammographic histology with the success or failure of imaging.
Tot, Tibor
2005-02-01
Detailed and systematic mammographic-pathologic correlation is essential for evaluation of the advantages and disadvantages of mammography as an imaging method as well as for establishing the role of additional methods or alternatives. Two- and three-dimensional large section histopathology represents an ideal tool for this correlation. This kind of interdisciplinary approach ("mammographic histology") is slowly but irrevocably becoming accepted as the new golden standard in diagnosing breast abnormalities. In this review, upon summarizing the theoretical background and our practical experience in routine diagnostic use of these advantageous techniques, we report on the accuracy of the preoperative radiological diagnosis. As compared to the final diagnostic outcome, stellate lesions on the mammogram and microcalcifications of casting type indicate malignancy with very high accuracy while predicting malignancy in cases of powdery and crushed stone type microcalcifications is problematic. The extent of the disease is regularly underestimated on the mammogram by the radiologist. Combining different radiological signs, and comparing repeated static images taken in regular intervals in screening or postoperative follow-up, the mammographer may type and grade the lesions properly in a considerable number of cases. Regular mammographic-pathologic correlation may increase the specificity and sensitivity of mammographic diagnosis. This correlation is essential for establishing the proper pre- and postoperative histological diagnosis, too.
Asteroid families in the Cybele and Hungaria groups
NASA Astrophysics Data System (ADS)
Vinogradova, T.; Shor, V.
2014-07-01
Asteroid families are fragments of some disrupted parent bodies. Planetary perturbations force the primarily close orbits to evolve. One of the main features of the orbit evolution is the long-period variation of the osculating elements, such as the inclination and eccentricity. Proper elements are computed by elimination of short- and long-period perturbations, and, practically, they do not change with time. Therefore, proper elements are important for family-identification procedures. The techniques of proper-element computation have improved over time. More and more accurate dynamical theories are developed. Contrastingly, in this work, an empirical method is proposed for proper-element calculations. The long-term variations of osculating elements manifest themselves very clearly in the distributions of pairs: inclination and longitude of ascending node; eccentricity and longitude of perihelion in the corresponding planes. Both of these dependencies have a nearly sinusoidal form for most asteroid orbits with regular motion of node and perihelion. If these angular parameters librate, then the sinusoids transform to some closed curve. Hence, it is possible to obtain forced elements, as parameters of curves specified above. The proper elements can be calculated by an elimination of the forced ones. The method allows to obtain the proper elements in any region, if there is a sufficient number of asteroids. This fact and the simplicity of the calculations are advantages of the empirical method. The derived proper elements include the short-period perturbations, but their accuracy is sufficient to search for asteroid families. The special techniques have been developed for the identification of the families, but over a long time large discrepancies took place between the lists of families derived by different authors. As late as 1980, a list of 30 reliable families was formed. And now the list by D. Nesvorny includes about 80 robust families. To date, only two families have been found in the most outer part of the main asteroid belt or the Cybele group: Sylvia and Ulla. And the Hungaria group in the most inner part of the belt has always been considered as one family. In this work, the proper elements were calculated by the empirical method for all multi-opposition asteroids in these two zones. As the source of the initial osculating elements, the MPC catalogue (version Feb. 2014) was used. Due to the large set of proper elements used in our work, the families are apparent more clearly. An approach similar to the hierarchical clustering method (HCM) was used for the identification of the families. As a result, five additional families have been found in the Cybele region, associated with (121) Hermione, (643) Scheherezade, (1028) Lydina, (3141) Buchar, and (522) Helga. The small Helga family, including 15 members, is the family in the main belt (3.6--3.7 au) most distant from the Sun. Due to the isolation of this family, its identification is very reliable. As to the Hungaria region, two low-density families have been found additionally: (1453) Fennia and (3854) George. They have inclinations slightly greater than that of the Hungaria family (from 24 to 26 degrees). In contradiction to the predominant C-type of the Hungaria family asteroids, the taxonomy of these families is represented mainly by the S and L types. Most likely, these families are two parts of a single ancient family.
NASA Astrophysics Data System (ADS)
Bassrei, A.; Terra, F. A.; Santos, E. T.
2007-12-01
Inverse problems in Applied Geophysics are usually ill-posed. One way to reduce this deficiency is through derivative matrices, which are a particular case of a more general family of methods known as regularization. Regularization by derivative matrices has an input parameter called the regularization parameter, whose choice is itself a problem. A heuristic approach, later called the L-curve, was suggested in the 1970s with the purpose of providing the optimum regularization parameter. The L-curve is a parametric curve, where each point is associated with a λ parameter. The horizontal axis represents the error between the observed and calculated data, and the vertical axis represents the product between the regularization matrix and the estimated model. The ideal point is the knee of the L-curve, where there is a balance between the quantities represented on the Cartesian axes. The L-curve has been applied to a variety of inverse problems, including in Geophysics. However, the visualization of the knee is not always an easy task, especially when the L-curve does not have the L shape. In this work three methodologies are employed to search for and obtain the optimal regularization parameter from the L-curve. The first criterion is the use of Hansen's toolbox, which extracts λ automatically. The second criterion consists of extracting the optimal parameter visually. The third criterion consists of constructing the first derivative of the L-curve and then automatically extracting the inflexion point. The L-curve with the three above criteria was applied and validated in traveltime tomography and 2-D gravity inversion. After many simulations with synthetic data, both noise-free and corrupted with noise, with regularization orders 0, 1, and 2, we verified that the three criteria are valid and provide satisfactory results. The third criterion presented the best performance, especially in cases where the L-curve has an irregular shape.
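As a hedged illustration of the third criterion, the sketch below locates the corner of a discrete L-curve from finite-difference derivatives (via the curvature) in log-log coordinates; the residual and model-norm values are synthetic placeholders rather than the output of a tomography or gravity inversion.

import numpy as np

def lcurve_corner(res_norms, sol_norms):
    # Corner of the L-curve from maximum curvature in log-log coordinates,
    # using finite-difference first and second derivatives of the curve.
    x, y = np.log(res_norms), np.log(sol_norms)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return int(np.argmax(curvature))

# Synthetic L-curve values for a sequence of regularization parameters.
lams = np.logspace(-6, 2, 50)
res_norms = np.sqrt(1e-4 + lams**2)      # residual grows with regularization
sol_norms = np.sqrt(1.0 + 1e-4 / lams)   # model norm blows up when under-regularized
k = lcurve_corner(res_norms, sol_norms)
print("regularization parameter at the corner:", lams[k])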
Green operators for low regularity spacetimes
NASA Astrophysics Data System (ADS)
Sanchez Sanchez, Yafet; Vickers, James
2018-02-01
In this paper we define and construct advanced and retarded Green operators for the wave operator on spacetimes with low regularity. In order to do so we require that the spacetime satisfies the condition of generalised hyperbolicity which is equivalent to well-posedness of the classical inhomogeneous problem with zero initial data where weak solutions are properly supported. Moreover, we provide an explicit formula for the kernel of the Green operators in terms of an arbitrary eigenbasis of H 1 and a suitable Green matrix that solves a system of second order ODEs.
Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K
2013-08-01
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179
Code of Federal Regulations, 2012 CFR
2012-01-01
... both hot and cold water of safe and sanitary quality, with adequate facilities for its proper.... Convenient hand-washing facilities shall be provided, including hot and cold running water, soap or other... regularly and the containers cleaned before reuse. Accumulation of dry waste paper and cardboard shall be...
Code of Federal Regulations, 2014 CFR
2014-01-01
... both hot and cold water of safe and sanitary quality, with adequate facilities for its proper.... Convenient hand-washing facilities shall be provided, including hot and cold running water, soap or other... regularly and the containers cleaned before reuse. Accumulation of dry waste paper and cardboard shall be...
Code of Federal Regulations, 2013 CFR
2013-01-01
... both hot and cold water of safe and sanitary quality, with adequate facilities for its proper.... Convenient hand-washing facilities shall be provided, including hot and cold running water, soap or other... regularly and the containers cleaned before reuse. Accumulation of dry waste paper and cardboard shall be...
Code of Federal Regulations, 2011 CFR
2011-01-01
... both hot and cold water of safe and sanitary quality, with adequate facilities for its proper.... Convenient hand-washing facilities shall be provided, including hot and cold running water, soap or other... regularly and the containers cleaned before reuse. Accumulation of dry waste paper and cardboard shall be...
Code of Federal Regulations, 2014 CFR
2014-01-01
..., manufactured, handled, packaged or stored (except dry storage of packaged finished products and supplies) or in... materials not regularly used. (1) Coolers and freezers. Coolers and freezers where dairy products are stored shall be clean, reasonably dry and maintained at the proper uniform temperature and humidity to...
Code of Federal Regulations, 2010 CFR
2010-01-01
..., manufactured, handled, packaged or stored (except dry storage of packaged finished products and supplies) or in... materials not regularly used. (1) Coolers and freezers. Coolers and freezers where dairy products are stored shall be clean, reasonably dry and maintained at the proper uniform temperature and humidity to...
Code of Federal Regulations, 2011 CFR
2011-01-01
..., manufactured, handled, packaged or stored (except dry storage of packaged finished products and supplies) or in... materials not regularly used. (1) Coolers and freezers. Coolers and freezers where dairy products are stored shall be clean, reasonably dry and maintained at the proper uniform temperature and humidity to...
Code of Federal Regulations, 2012 CFR
2012-01-01
..., manufactured, handled, packaged or stored (except dry storage of packaged finished products and supplies) or in... materials not regularly used. (1) Coolers and freezers. Coolers and freezers where dairy products are stored shall be clean, reasonably dry and maintained at the proper uniform temperature and humidity to...
Code of Federal Regulations, 2013 CFR
2013-01-01
..., manufactured, handled, packaged or stored (except dry storage of packaged finished products and supplies) or in... materials not regularly used. (1) Coolers and freezers. Coolers and freezers where dairy products are stored shall be clean, reasonably dry and maintained at the proper uniform temperature and humidity to...
When to consider transfusion therapy for patients with non-transfusion-dependent thalassaemia
Taher, A T; Radwan, A; Viprakasit, V
2015-01-01
Non-transfusion-dependent thalassaemia (NTDT) refers to all thalassaemia disease phenotypes that do not require regular blood transfusions for survival. Thalassaemia disorders were traditionally concentrated along the tropical belt stretching from sub-Saharan Africa through the Mediterranean region and the Middle East to South and South-East Asia, but global migration has led to increased incidence in North America and Northern Europe. Transfusionists may be familiar with β-thalassaemia major because of the lifelong transfusions needed by these patients. Although patients with NTDT do not require regular transfusions for survival, they may require transfusions in some instances such as pregnancy, infection or growth failure. The complications associated with NTDT can be severe if not properly managed, and many are directly related to chronic anaemia. Awareness of NTDT is important, and this review will outline the factors that should be taken into consideration when deciding whether to initiate and properly plan for transfusion therapy in these patients in terms of transfusion interval and duration of treatment. PMID:25286743
NASA Astrophysics Data System (ADS)
Vaezi, S.; Mesgari, M. S.; Kaviary, F.
2015-12-01
Today, the stability of human life is threatened by a range of factors, and the theory of sustainable urban development was introduced, following stability theory, to protect the urban environment. In recent years, sustainable urban development has attracted considerable attention from different sciences and has become a central goal for urban development planners and managers seeking to use resources properly and to establish a balanced relationship among humans, community, and nature. Proper distribution of services for decreasing spatial inequalities, promoting the quality of the living environment, and approaching urban stability requires an analytical understanding of the present situation. Understanding the present situation is the first step toward making decisions and planning effectively. This paper evaluates the parameters affecting the proper arrangement of land-uses using a descriptive-analytical method, in order to develop a conceptual framework for understanding the present situation of urban land-uses based on an assessment of their compatibility. This study considers not only local parameters but also spatial parameters. The results indicate that land-uses in the zone considered here are not distributed properly. Taking the mentioned parameters into account and distributing service land-uses effectively would allow better use of these land-uses.
Nyström type subsampling analyzed as a regularized projection
NASA Astrophysics Data System (ADS)
Kriukova, Galyna; Pereverzyev, Sergiy, Jr.; Tkachenko, Pavlo
2017-07-01
In statistical learning theory, Nyström-type subsampling methods are considered as tools for dealing with big data. In this paper we consider Nyström subsampling as a special form of the projected Lavrentiev regularization, and study it using the approaches developed in regularization theory. As a result, we prove that the same capacity-independent learning rates that are guaranteed for standard algorithms running with quadratic computational complexity can be obtained with subquadratic complexity by the Nyström subsampling approach, provided that the subsampling size is chosen properly. We propose an a priori rule for choosing the subsampling size and an a posteriori strategy for dealing with uncertainty in its choice. The theoretical results are illustrated by numerical experiments.
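A minimal sketch of the subsampling idea, under the assumption of a Gaussian kernel and a kernel ridge (Lavrentiev-type) regression problem: only an n-by-m and an m-by-m kernel block are formed for a randomly chosen subsample of size m, so the regularized system is solved at subquadratic cost in the sample size. The kernel, data and subsampling size are illustrative, not the paper's learning-theoretic setting.

import numpy as np

def gaussian_kernel(X, Y, sigma=0.3):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_krr(X, y, m, lam, rng):
    # Nystrom-subsampled kernel ridge regression: the full n-by-n kernel matrix
    # is never formed; only the blocks involving the m subsampled points are.
    idx = rng.choice(len(X), size=m, replace=False)
    Xm = X[idx]
    K_nm = gaussian_kernel(X, Xm)
    K_mm = gaussian_kernel(Xm, Xm)
    alpha = np.linalg.solve(K_nm.T @ K_nm + lam * len(X) * K_mm, K_nm.T @ y)
    return lambda Xq: gaussian_kernel(Xq, Xm) @ alpha

rng = np.random.default_rng(8)
X = rng.uniform(0, 1, size=(2000, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=2000)
predict = nystrom_krr(X, y, m=100, lam=1e-6, rng=rng)
Xq = np.linspace(0, 1, 5).reshape(-1, 1)
print(np.round(predict(Xq), 2))      # should be close to sin(2*pi*x)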
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M.
2016-12-01
Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. While mass recovery generally deteriorated with increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy rates. POD is computationally superior requiring only 2.5 mins to complete each inversion compared to 3 hours for CTL to do the same.
Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.
Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B
2016-01-01
We investigated the effects of low-dose multi detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.
Safety in Riding Programs: A Director's Guide.
ERIC Educational Resources Information Center
Kpachavi, Teresa
1996-01-01
Camp riding programs should be examined regularly for liability and risk management issues. Elements of a basic safety assessment include requiring proper safety apparel, removing obstructions from riding rings, ensuring doors and gates are closed, requiring use of lead ropes, securing equine medications, banning smoking, posting written…
38 CFR 1.602 - Utilization of access.
Code of Federal Regulations, 2010 CFR
2010-07-01
... individual and organization will comply with all security requirements VBA deems necessary to ensure the integrity and confidentiality of the data and VBA's automated computer systems. (b) An organization granted... regular, adequate training on proper security, including the items listed in § 1.603(a). Where an...
38 CFR 1.602 - Utilization of access.
Code of Federal Regulations, 2011 CFR
2011-07-01
... individual and organization will comply with all security requirements VBA deems necessary to ensure the integrity and confidentiality of the data and VBA's automated computer systems. (b) An organization granted... regular, adequate training on proper security, including the items listed in § 1.603(a). Where an...
38 CFR 1.602 - Utilization of access.
Code of Federal Regulations, 2014 CFR
2014-07-01
... individual and organization will comply with all security requirements VBA deems necessary to ensure the integrity and confidentiality of the data and VBA's automated computer systems. (b) An organization granted... regular, adequate training on proper security, including the items listed in § 1.603(a). Where an...
38 CFR 1.602 - Utilization of access.
Code of Federal Regulations, 2012 CFR
2012-07-01
... individual and organization will comply with all security requirements VBA deems necessary to ensure the integrity and confidentiality of the data and VBA's automated computer systems. (b) An organization granted... regular, adequate training on proper security, including the items listed in § 1.603(a). Where an...
Supporting Advice Sharing for Technical Problems in Residential Settings
ERIC Educational Resources Information Center
Poole, Erika Shehan
2010-01-01
Visions of future computing in residential settings often come with assumptions of seamless, well-functioning, properly configured devices and network connectivity. In the near term, however, processes of setup, maintenance, and troubleshooting are fraught with difficulties; householders regularly report these tasks as confusing, frustrating, and…
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, the 7 input variables being selected following a forward selection procedure. Compared with the benchmark ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
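A minimal sketch of a regularized, conjugate-gradient-based update of the kind described above (assuming a linearized sensitivity matrix J and a generic smoothing operator L; illustrative only, not the authors' algorithm):

import numpy as np
from scipy.sparse.linalg import cg

def regularized_step(J, L, residual, beta):
    # One linearized history-matching update: minimize
    #   ||J dm - residual||^2 + beta ||L dm||^2
    # by solving the regularized normal equations with conjugate gradients.
    A = J.T @ J + beta * (L.T @ L)   # symmetric positive (semi)definite
    b = J.T @ residual
    dm, info = cg(A, b)              # info == 0 indicates convergence
    return dm

The regularization weight beta plays the role of the quasi-optimal regularization parameter whose selection the abstract discusses.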
Regular Decompositions for H(div) Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Vassilevski, Panayot
We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.
NASA Astrophysics Data System (ADS)
Brun, F.; Intranuovo, F.; Mohammadi, S.; Domingos, M.; Favia, P.; Tromba, G.
2013-07-01
The technique used to produce a 3D tissue engineering (TE) scaffold is of fundamental importance in order to guarantee its proper morphological characteristics. An accurate assessment of the resulting structural properties is therefore crucial in order to evaluate the effectiveness of the produced scaffold. Synchrotron radiation (SR) computed microtomography (μ-CT) combined with further image analysis seems to be one of the most effective techniques to this aim. However, a quantitative assessment of the morphological parameters directly from the reconstructed images is a non trivial task. This study considers two different poly(ε-caprolactone) (PCL) scaffolds fabricated with a conventional technique (Solvent Casting Particulate Leaching, SCPL) and an additive manufacturing (AM) technique (BioCell Printing), respectively. With the first technique it is possible to produce scaffolds with random, non-regular, rounded pore geometry. The AM technique instead is able to produce scaffolds with square-shaped interconnected pores of regular dimension. Therefore, the final morphology of the AM scaffolds can be predicted and the resulting model can be used for the validation of the applied imaging and image analysis protocols. It is here reported a SR μ-CT image analysis approach that is able to effectively and accurately reveal the differences in the pore- and throat-size distributions as well as connectivity of both AM and SCPL scaffolds.
Machine Learning for Mapping Groundwater Salinity with Oil Well Log Data
NASA Astrophysics Data System (ADS)
Chang, W. H.; Shimabukuro, D.; Gillespie, J. M.; Stephens, M.
2016-12-01
An oil field may have thousands of wells with detailed petrophysical logs, and far fewer direct measurements of groundwater salinity. Can the former be used to extrapolate the latter into a detailed map of groundwater salinity? California Senate Bill 4, with its requirement to identify Underground Sources of Drinking Water, makes this a question worth answering. A well-known obstacle is that the basic petrophysical equations describe ideal scenarios ("clean wet sand") and even these equations contain many parameters that may vary with location and depth. Accounting for other common scenarios such as high-conductivity shaly sands or low-permeability diatomite (both characteristic of California's Central Valley) causes parameters to proliferate to the point where the model is underdetermined by the data. When parameters outnumber data points, however, is when machine learning methods are most advantageous. We present a method for modeling a generic oil field, where groundwater salinity and lithology are depth series parameters, and the constants in petrophysical equations are scalar parameters. The data are well log measurements (resistivity, porosity, spontaneous potential, and gamma ray) and a small number of direct groundwater salinity measurements. Embedded in the model are petrophysical equations that account for shaly sand and diatomite formations. As a proof of concept, we feed in well logs and salinity measurements from the Lost Hills Oil Field in Kern County, California, and show that with proper regularization and validation the model makes reasonable predictions of groundwater salinity despite the large number of parameters. The model is implemented using Tensorflow, which is an open-source software released by Google in November, 2015 that has been rapidly and widely adopted by machine learning researchers. The code will be made available on Github, and we encourage scrutiny and modification by machine learning researchers and hydrogeologists alike.
NASA Astrophysics Data System (ADS)
Susyanto, Nanang
2017-12-01
We propose a simple derivation of the Cramer-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint of the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter on the appropriate inner product spaces.
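For orientation, the standard constrained bound that such a derivation recovers can be stated in generic notation (this is the well-known form of the result, not text from the paper): if $F(\theta)$ denotes the Fisher information, the constraint $g(\theta) = 0$ has Jacobian $G(\theta)$, and $U(\theta)$ is any matrix whose columns form a basis for the null space of $G(\theta)$, then

\[
\operatorname{Cov}(\hat{\theta}) \;\succeq\; U \left( U^{\mathsf{T}} F(\theta)\, U \right)^{-1} U^{\mathsf{T}},
\qquad G(\theta)\, U(\theta) = 0,
\]

which reduces to the unconstrained bound $F(\theta)^{-1}$ when no constraint is imposed.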
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
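As a schematic illustration of the Tikhonov scheme mentioned above (generic notation only, not PEST's exact internal formulation, which also supports a constrained variant), the composite objective can be written as

\[
\Phi_{\text{tot}}(\mathbf{p}) \;=\; \sum_{i} w_i \bigl[d_i - y_i(\mathbf{p})\bigr]^2 \;+\; \mu \sum_{j} v_j \, r_j(\mathbf{p})^2 ,
\]

where $d_i$ are field observations, $y_i(\mathbf{p})$ the corresponding model outputs, $r_j(\mathbf{p})$ deviations of the parameters $\mathbf{p}$ from a preferred condition (for example, homogeneity between pilot points), and $\mu$ the regularization weight that balances data fit against the preferred-parameter penalty.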
"I'm Proud to Be Me": Health, Community and Schooling
ERIC Educational Resources Information Center
Burrrows, Lisette
2011-01-01
Health reportage in New Zealand's popular and professional media regularly features large, avowedly inactive, indigenous and/or "poor" people failing to nurture their children properly on account of their size. While well-meaning government and school-based initiatives explicitly target these so-called "high-need" communities,…
NASA Astrophysics Data System (ADS)
Rostworowski, Andrzej
2017-06-01
We argue that if the degeneracy of the spectrum of linear perturbations of AdS is properly taken into account, there are globally regular, time-periodic, asymptotically AdS solutions (geons) bifurcating from each linear eigenfrequency of AdS.
Assessment of Student Academic Achievement.
ERIC Educational Resources Information Center
Neosho County Community Coll., Chanute, KS.
Neosho Community College (NCC) in Kansas developed an assessment program to measure changes in student learning and progress in courses and programs. The specific objectives of student assessment at NCC are to determine readiness for regular college courses; to determine proper placement; to assist students in meeting personal objectives; and to…
ERIC Educational Resources Information Center
Weinstock, Ruth
This monograph, part of an ongoing series, discusses the need for school arts programs and provides some examples of how the arts can be infused into the regular curriculum at the elementary level. Support systems for such programs are also discussed. Properly conceived, the arts constitute a great integrating force in the curriculum. To achieve…
Comment on "Construction of regular black holes in general relativity"
NASA Astrophysics Data System (ADS)
Bronnikov, Kirill A.
2017-12-01
We claim that the paper by Zhong-Ying Fan and Xiaobao Wang on nonlinear electrodynamics coupled to general relativity [Phys. Rev. D 94, 124027 (2016)], although correct in general, in some respects repeats previously obtained results without giving proper references. There is also an important point missing in this paper, which is necessary for understanding the physics of the system: in solutions with an electric charge, a regular center requires a non-Maxwell behavior of the Lagrangian function L(f), where f = F_{\mu\nu}F^{\mu\nu}, at small f. Therefore, in all electric regular black hole solutions with a Reissner-Nordström asymptotic, the Lagrangian L(f) is different in different parts of space, and the electromagnetic field behaves in a singular way at surfaces where L(f) suffers branching.
NASA Astrophysics Data System (ADS)
Gemmen, R. S.; Johnson, C. D.
Two primary parameters stand out for characterizing fuel cell system performance. The first and most important parameter is system efficiency. This parameter is relatively easy to define, and protocols for its assessment are already available. Another important parameter yet to be fully considered is system degradation. Degradation is important because customers want to know how long their purchased fuel cell unit will last. The measure of degradation describes this performance factor by quantifying, for example, how the efficiency of the unit degrades over time. While both efficiency and degradation are readily understood concepts, the coupling between these two parameters must also be understood so that proper testing and evaluation of fuel cell systems can be achieved. Tests that are not properly performed, and results that are not properly understood, may lead to misuse of the evaluation data, producing poor R&D planning decisions and financial investments. This paper presents an analysis of system degradation, recommends an approach to its measurement, and shows how these two parameters are related and how one can be traded off for the other.
Semi-supervised vibration-based classification and condition monitoring of compressors
NASA Astrophysics Data System (ADS)
Potočnik, Primož; Govekar, Edvard
2017-09-01
Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring where prior class definitions are often not available or difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study which was based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, then very good classification performance can be obtained from NN trained by Bayesian regularization, SVM and ELM classifiers. The method can be effectively applied for the industrial condition monitoring of compressors.
Galias, Zbigniew
2017-05-01
An efficient method to find positions of periodic windows for the quadratic map f(x)=ax(1-x) and a heuristic algorithm to locate the majority of wide periodic windows are proposed. Accurate rigorous bounds of positions of all periodic windows with periods below 37 and the majority of wide periodic windows with longer periods are found. Based on these results, we prove that the measure of the set of regular parameters in the interval [3,4] is above 0.613960137. The properties of periodic windows are studied numerically. The results of the analysis are used to estimate that the true value of the measure of the set of regular parameters is close to 0.6139603.
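A non-rigorous numerical sketch of how parameter values inside periodic windows of f(x) = ax(1-x) might be flagged is given below; the paper's rigorous interval-arithmetic bounds go far beyond this kind of sampling estimate, and all names here are illustrative.

import numpy as np

def cycle_period(a, max_period=36, n_transient=10000, tol=1e-10):
    # Iterate the critical point x = 0.5; inside a periodic window the orbit
    # is attracted to a stable cycle, so after a transient it nearly repeats.
    x = 0.5
    for _ in range(n_transient):
        x = a * x * (1.0 - x)
    x0 = x
    for period in range(1, max_period + 1):
        x = a * x * (1.0 - x)
        if abs(x - x0) < tol:
            return period
    return 0   # no short stable cycle detected: likely a "chaotic" parameter

# crude Monte Carlo estimate of the fraction of regular parameters in [3, 4]
samples = np.random.uniform(3.0, 4.0, 20000)
frac = np.mean([cycle_period(a) > 0 for a in samples])
print(f"estimated regular fraction: {frac:.3f}")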
SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W; Southern Medical University, Guangzhou; Yan, H
Purpose: Compressive sensing based iterative reconstruction (IR) methods for low dose cone-beam CT (CBCT) always contain a parameter that controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selection. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific: data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under certain optimally selected parameters, and the advantages are more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate that optimal parameters are specific to both task type and dose level, providing guidance for selecting parameters in advanced IR algorithms. This work is supported in part by NIH (1R01CA154747-01).
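As a simple illustration of the contrast-to-noise metric used in such a parameter sweep (generic definition, not necessarily the authors' exact implementation):

import numpy as np

def cnr(image, roi_signal, roi_background):
    # roi_signal and roi_background are boolean masks over the image.
    s = image[roi_signal].mean()
    b = image[roi_background].mean()
    noise = image[roi_background].std()
    return abs(s - b) / noise

Evaluating this metric, together with an MTF measurement, over a grid of regularization-parameter values is what reveals the CNR-versus-resolution trade-off described in the results.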
Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond
2015-01-01
Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
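For orientation, the classical Richardson-Lucy update that RUMBA-SD adapts to Rician and noncentral-Chi likelihoods can be sketched as follows (classical multiplicative form only, without the noise-model and TV extensions developed in the paper; names are illustrative):

import numpy as np

def richardson_lucy(y, A, n_iter=100, eps=1e-12):
    # y: measured diffusion signal for one voxel (over gradient directions)
    # A: response matrix mapping fiber-orientation weights to the signal
    x = np.full(A.shape[1], y.mean() / A.shape[1])   # non-negative start
    ones = np.ones_like(y)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)           # data / model prediction
        x *= (A.T @ ratio) / np.maximum(A.T @ ones, eps)
    return x                                         # fiber-orientation weights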
Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction
NASA Astrophysics Data System (ADS)
Aarts, Fides; Jonsson, Bengt; Uijen, Johan
In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.
40 CFR 112.6 - Qualified Facilities Plan Requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Self-Certification of the Plan. If you are an owner or operator of a facility that meets the Tier I... unloading equipment, tank overflow, rupture, or leakage, or any other equipment known to be a source of... the system or procedure in the SPCC Plan and regularly test to ensure proper operation or efficacy. (b...
40 CFR 112.6 - Qualified Facilities Plan Requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Self-Certification of the Plan. If you are an owner or operator of a facility that meets the Tier I... unloading equipment, tank overflow, rupture, or leakage, or any other equipment known to be a source of... the system or procedure in the SPCC Plan and regularly test to ensure proper operation or efficacy. (b...
40 CFR 112.6 - Qualified Facilities Plan Requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Self-Certification of the Plan. If you are an owner or operator of a facility that meets the Tier I... unloading equipment, tank overflow, rupture, or leakage, or any other equipment known to be a source of... the system or procedure in the SPCC Plan and regularly test to ensure proper operation or efficacy. (b...
40 CFR 112.6 - Qualified Facilities Plan Requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Self-Certification of the Plan. If you are an owner or operator of a facility that meets the Tier I... unloading equipment, tank overflow, rupture, or leakage, or any other equipment known to be a source of... the system or procedure in the SPCC Plan and regularly test to ensure proper operation or efficacy. (b...
40 CFR 112.6 - Qualified Facilities Plan Requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Self-Certification of the Plan. If you are an owner or operator of a facility that meets the Tier I... unloading equipment, tank overflow, rupture, or leakage, or any other equipment known to be a source of... the system or procedure in the SPCC Plan and regularly test to ensure proper operation or efficacy. (b...
Kevin T. Smith
2009-01-01
Landscape trees have real value and contribute to making livable communities. Making the most of that value requires providing trees with the proper care and attention. As potentially large and long-lived organisms, trees benefit from commitment to regular care that respects the natural tree system. This system captures, transforms, and uses energy to survive, grow,...
A Module for the Administration of Homebound Instructional Programs
ERIC Educational Resources Information Center
Wasserman, Lewis
2008-01-01
Special program and other school administrators regularly confront the issue of whether students under their charge are entitled to receive homebound instruction and if so, what procedures and criteria they should apply in coming to a proper decision. Where a student is entitled to such services the administrator must decide what subjects must be…
Idea Bank: Does Your Health Depend on a Clean Instrument?
ERIC Educational Resources Information Center
Gutoff, Olivia W.
2011-01-01
Music teachers have a responsibility to give detailed instruction on the regular cleaning of brass and wind instruments because of new, compelling research. Recent findings reinforce the importance of teaching proper instrument cleaning. Serious health consequences can be avoided by making instrument care an integral part of the educative process.…
When to consider transfusion therapy for patients with non-transfusion-dependent thalassaemia.
Taher, A T; Radwan, A; Viprakasit, V
2015-01-01
Non-transfusion-dependent thalassaemia (NTDT) refers to all thalassaemia disease phenotypes that do not require regular blood transfusions for survival. Thalassaemia disorders were traditionally concentrated along the tropical belt stretching from sub-Saharan Africa through the Mediterranean region and the Middle East to South and South-East Asia, but global migration has led to increased incidence in North America and Northern Europe. Transfusionists may be familiar with β-thalassaemia major because of the lifelong transfusions needed by these patients. Although patients with NTDT do not require regular transfusions for survival, they may require transfusions in some instances such as pregnancy, infection or growth failure. The complications associated with NTDT can be severe if not properly managed, and many are directly related to chronic anaemia. Awareness of NTDT is important, and this review will outline the factors that should be taken into consideration when deciding whether to initiate and properly plan for transfusion therapy in these patients in terms of transfusion interval and duration of treatment. © 2014 The Authors. Vox Sanguinis published by John Wiley & Sons Ltd on behalf of International Society of Blood Transfusion.
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
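Schematically, and in generic compressed-sensing notation rather than the authors' exact formulation, the p-CS idea of exploiting smooth signal evolution along the parametric dimension can be written as

\[
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}} \;\bigl\| \mathbf{E}\,\mathbf{x} - \mathbf{y} \bigr\|_2^2 \;+\; \lambda \,\bigl\| \Psi_p\, \mathbf{x} \bigr\|_1 ,
\]

where $\mathbf{x}$ stacks the images at all parametric points (e.g. flip angles), $\mathbf{E}$ is the undersampled parallel-imaging encoding operator, $\mathbf{y}$ the acquired k-space data, and $\Psi_p$ a transform that sparsifies the signal evolution along the parametric dimension.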
Chudzik, Michal; Klimczak, Artur; Wranicz, Jerzy Krzysztof
2013-10-31
We sought to determine the usefulness of ambulatory 24-hour Holter monitoring in detecting asymptomatic pacemaker (PM) malfunction episodes in patients with dual-chamber pacemakers whose pacing and sensing parameters were proper, as seen in routine post-implantation follow-ups. Ambulatory 24-hour Holter recordings (HM) were performed in 100 patients with DDD pacemakers 1 day after the implantation. Only asymptomatic patients with proper pacing and sensing parameters (assessed on PM telemetry on the first day post-implantation) were enrolled in the study. The following parameters were assessed: failure to pace, failure to sense (both oversensing and undersensing episodes) as well as the percentage of all PM disturbances. Despite proper sensing and pacing parameters, HM revealed PM disturbances in 23 patients out of 100 (23%). Atrial undersensing episodes were found in 12 patients (p < 0.005) with totally 963 episodes and failure to capture in 1 patient (1%). T wave oversensing was the most common ventricular channel disorder (1316 episodes in 9 patients, p < 0.0005). Malfunction episodes occurred sporadically, leading to pauses of up to 1.6 s or temporary bradycardia, which were, nevertheless, not accompanied by clinical symptoms. No ventricular pacing disturbances were found. Asymptomatic pacemaker dysfunction may be observed in nearly 25% of patients with proper DDD parameters after implantation. Thus, ambulatory HM during the early post-implantation period may be a useful tool to detect the need to reprogram PM parameters.
Accretion onto some well-known regular black holes
NASA Astrophysics Data System (ADS)
Jawad, Abdul; Shahzad, M. Umair
2016-03-01
In this work, we discuss accretion onto static, spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes based on the Fermi-Dirac distribution, the logistic distribution and nonlinear electrodynamics, respectively, as well as Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, energy density, and rate of change of the mass for each of the regular black holes.
ERIC Educational Resources Information Center
Gencer, Yildirim Gokhan; Coskun, Funda; Sarikaya, Mucahit; Kaplan, Seyhmus
2018-01-01
The aim of this study is to investigate the effect of intensive basketball competitions (10 official basketball games in 12 days intensive competition period) on blood parameters of basketball players. Blood samples were taken from the basketball players of the university team. The players were training regularly and they had no regular health…
NASA Astrophysics Data System (ADS)
Mow, M.; Zbijewski, W.; Sisniega, A.; Xu, J.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Koliatsos, V.; Aygun, N.; Siewerdsen, J. H.
2017-03-01
Purpose: To improve the timely detection and treatment of intracranial hemorrhage or ischemic stroke, recent efforts include the development of cone-beam CT (CBCT) systems for perfusion imaging and new approaches to estimate perfusion parameters despite slow rotation speeds compared to multi-detector CT (MDCT) systems. This work describes development of a brain perfusion CBCT method using a reconstruction of difference (RoD) approach to enable perfusion imaging on a newly developed CBCT head scanner prototype. Methods: A new reconstruction approach using RoD with a penalized-likelihood framework was developed to image the temporal dynamics of vascular enhancement. A digital perfusion simulation was developed to give a realistic representation of brain anatomy, artifacts, noise, scanner characteristics, and hemo-dynamic properties. This simulation includes a digital brain phantom, time-attenuation curves and noise parameters, a novel forward projection method for improved computational efficiency, and perfusion parameter calculation. Results: Our results show the feasibility of estimating perfusion parameters from a set of images reconstructed from slow scans, sparse data sets, and arc length scans as short as 60 degrees. The RoD framework significantly reduces noise and time-varying artifacts from inconsistent projections. Proper regularization and the use of overlapping reconstructed arcs can potentially further decrease bias and increase temporal resolution, respectively. Conclusions: A digital brain perfusion simulation with RoD imaging approach has been developed and supports the feasibility of using a CBCT head scanner for perfusion imaging. Future work will include testing with data acquired using a 3D-printed perfusion phantom currently and translation to preclinical and clinical studies.
Optimization of turning process through the analytic flank wear modelling
NASA Astrophysics Data System (ADS)
Del Prete, A.; Franchi, R.; De Lorenzis, D.
2018-05-01
In the present work, the approach used to optimize the process capabilities for machining Oil&Gas components is described. These components are machined by turning stainless steel cast workpieces. For this purpose, a proper Design Of Experiments (DOE) plan has been designed and executed; as output of the experimentation, tool wear data have been collected. The DOE has been designed starting from the cutting speed and feed values recommended by the tool manufacturer; the depth of cut has been kept constant. Wear data have been obtained by observing the tool flank wear under an optical microscope, with data acquisition carried out at regular intervals of working time. Through statistical and regression analysis, analytical models of the flank wear and the tool life have been obtained. The optimization approach is a multi-objective optimization that minimizes the production time and the number of cutting tools used, under a constraint on a defined flank wear level. The technique used to solve the optimization problem is Multi Objective Particle Swarm Optimization (MOPS). The optimization results, validated by a further experimental campaign, highlighted the reliability of the work and confirmed the usability of the optimized process parameters and the potential benefit for the company.
Geodetic imaging: Reservoir monitoring using satellite interferometry
Vasco, D.W.; Wicks, C.; Karasaki, K.; Marques, O.
2002-01-01
Fluid fluxes within subsurface reservoirs give rise to surface displacements, particularly over periods of a year or more. Observations of such deformation provide a powerful tool for mapping fluid migration within the Earth, providing new insights into reservoir dynamics. In this paper we use Interferometric Synthetic Aperture Radar (InSAR) range changes to infer subsurface fluid volume strain at the Coso geothermal field. Furthermore, we conduct a complete model assessment, using an iterative approach to compute model parameter resolution and covariance matrices. The method is a generalization of a Lanczos-based technique which allows us to include fairly general regularization, such as roughness penalties. We find that we can resolve quite detailed lateral variations in volume strain both within the reservoir depth range (0.4-2.5 km) and below the geothermal production zone (2.5-5.0 km). The fractional volume change in all three layers of the model exceeds the estimated model parameter uncertainty by a factor of two or more. In the reservoir depth interval (0.4-2.5 km), the predominant volume change is associated with northerly and westerly oriented faults and their intersections. However, below the geothermal production zone proper (the depth range 2.5-5.0 km), there is the suggestion that both north- and northeast-trending faults may act as conduits for fluid flow.
NASA Astrophysics Data System (ADS)
Bayaskhalanov, M. V.; Vlasov, M. N.; Korsun, A. S.; Merinov, I. G.; Philippov, M. Ph
2017-11-01
Results are presented on the dependence of the “k-ε” turbulence integral model (TIM) parameters on the angle of coolant flow in a regular bundle of smooth cylindrical rods. The TIM is intended for defining effective momentum and heat transport coefficients in the averaged heat and mass transfer equations for regular rod structures in an anisotropic porous media approximation. The TIM equations are obtained by volume-averaging the “k-ε” turbulence model equations over a periodic cell of the rod bundle. Water flow across the rod bundle at angles from 15 to 75 degrees was simulated with the ANSYS CFX code. The dependence of the TIM parameters on flow angle was thereby obtained.
Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou
2011-06-01
As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
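For context, the LS-SVM training step whose kernel K and regularization parameter gamma are being selected amounts to solving a single linear system; a sketch of the standard regression form follows (illustrative only, with hypothetical helper names):

import numpy as np

def train_lssvm(K, y, gamma):
    # Solve the LS-SVM dual system (regression form):
    #   [ 0      1^T         ] [b    ]   [0]
    #   [ 1   K + I / gamma  ] [alpha] = [y]
    n = K.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(M, rhs)
    b, alpha = sol[0], sol[1:]
    return alpha, b

In the multiple kernel setting discussed in the paper, K becomes a weighted combination of candidate kernels, and the kernel weights and gamma are optimized jointly rather than by cross-validation over this system alone.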
Reducing errors in the GRACE gravity solutions using regularization
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
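A small-scale sketch of the Tikhonov L-curve computation that the paper approximates with Lanczos bidiagonalization (here done directly with a full SVD, which is only practical for problems far smaller than GRACE; names are illustrative):

import numpy as np

def l_curve_points(A, b, lambdas):
    # Residual norm and solution norm of the Tikhonov solution
    #   x(lam) = argmin ||A x - b||^2 + lam^2 ||x||^2,
    # evaluated via SVD filter factors for each candidate lam.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    points = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        points.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return points

Plotting these (residual norm, solution norm) pairs on log-log axes and locating the corner of maximum curvature gives the L-curve choice of regularization parameter that the study approximates in a projected, lower-dimensional space.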
Iterative image reconstruction that includes a total variation regularization for radial MRI.
Kojima, Shinya; Shinohara, Hiroyuki; Hashimoto, Takeyuki; Hirata, Masami; Ueno, Eiko
2015-07-01
This paper presents an iterative image reconstruction method for radial encodings in MRI based on a total variation (TV) regularization. The algebraic reconstruction method combined with total variation regularization (ART_TV) is implemented with a regularization parameter specifying the weight of the TV term in the optimization process. We used numerical simulations of a Shepp-Logan phantom, as well as experimental imaging of a phantom that included a rectangular-wave chart, to evaluate the performance of ART_TV, and to compare it with that of the Fourier transform (FT) method. The trade-off between spatial resolution and signal-to-noise ratio (SNR) was investigated for different values of the regularization parameter by experiments on a phantom and a commercially available MRI system. ART_TV was inferior to the FT with respect to the evaluation of the modulation transfer function (MTF), especially at high frequencies; however, it outperformed the FT with regard to the SNR. In accordance with the results of SNR measurement, visual impression suggested that the image quality of ART_TV was better than that of the FT for reconstruction of a noisy image of a kiwi fruit. In conclusion, ART_TV provides radial MRI with improved image quality for low-SNR data; however, the regularization parameter in ART_TV is a critical factor for obtaining improvement over the FT.
Canine and feline obesity: frequently asked questions and their answers.
Becvarova, Iveta
2011-11-01
The diagnosis of obesity is simple and warrants intervention because of the association between obesity and increased morbidity. Pet owner commitment, a proper feeding plan, and regular monitoring are the keys to a successful weight loss program. Treatment of obesity involves caloric restriction and/or diet change. Therapeutic weight loss diets differ in fiber, moisture, and digestible carbohydrate contents, and the diet choice should be tailored to the individual patient. Appropriate feeding management is equally important. To protect against the recurrence of obesity, owners should be educated on how to monitor body condition score and adjust the feeding program to maintain proper body condition.
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
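In schematic form (generic notation, with η a damping parameter and J_α the regularized functional), the contrast between the usual first-order gradient flow and the second-order dissipative system with a dynamically selected regularization parameter α(t) is

\[
\dot{u}(t) = -\nabla J_{\alpha}\bigl(u(t)\bigr)
\qquad\text{versus}\qquad
\ddot{u}(t) + \eta\,\dot{u}(t) = -\nabla J_{\alpha(t)}\bigl(u(t)\bigr),
\]

the latter being the system that the paper discretizes with a damped symplectic scheme.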
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
Assessment of technical condition of concrete pavement by the example of district road
NASA Astrophysics Data System (ADS)
Linek, M.; Nita, P.; Żebrowski, W.; Wolka, P.
2018-05-01
The article presents a comprehensive assessment of concrete pavement condition. The analyses concern a district road located in the swietokrzyskie province that has been in service for 11 years. Comparative analyses were conducted twice: the first was carried out after 9 years of pavement operation, in 2015, and the tests were repeated in 2017 to assess the extent of pavement degradation. Within the scope of the field research, the traffic intensity on the analysed road section was determined. Visual assessment of pavement condition was conducted according to the guidelines included in SOSN-B. Visual assessment can be extended by ground-penetrating radar measurements, which provide a comprehensive assessment of structural changes over the pavement's entire thickness and length. The assessment also included performance parameters, i.e. pavement regularity, surface roughness and texture. Extending the test results with an assessment of changes in the internal structure of the concrete composite, together with observations using a scanning electron microscope, allows the parameters of the internal structure of the hardened concrete to be assessed. Supplementing the observations of the internal structure with computed tomography scans provides comprehensive information on possible discontinuities and the composite structure. Based on the analysis of the obtained results, conclusions concerning the condition of the analysed pavement were reached. It was determined that the pavement exhibits high performance parameters, its condition is good, and it does not require any repairs. Maintenance treatment was suggested in order to extend the period of proper operation of the analysed pavement.
Information fusion in regularized inversion of tomographic pumping tests
Bohling, Geoffrey C.; ,
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. ?? 2008 Springer-Verlag Berlin Heidelberg.
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
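The Monte-Carlo divergence estimate at the heart of such SURE-type measures can be sketched generically as follows (real-valued, unweighted case for simplicity; f stands for any black-box reconstruction routine, sigma2 for the noise variance, and all names are illustrative rather than the authors' implementation):

import numpy as np

def mc_divergence(f, y, eps=1e-3, rng=None):
    # Estimate div_y f(y) = trace of the Jacobian with one random probe b:
    #   div ~ b^T [ f(y + eps * b) - f(y) ] / eps
    rng = np.random.default_rng() if rng is None else rng
    b = rng.standard_normal(y.shape)
    return float(np.vdot(b, f(y + eps * b) - f(y)).real) / eps

def sure_estimate(f, y, sigma2, eps=1e-3):
    # SURE-type unbiased risk estimate (up to the weighting and projection
    # details developed in the paper), used to rank regularization parameters.
    fy = f(y)
    return (np.linalg.norm(fy - y) ** 2
            - y.size * sigma2
            + 2.0 * sigma2 * mc_divergence(f, y, eps))

Minimizing such an estimate over a grid of regularization parameters is what yields the near-MSE-optimal selections reported in the abstract.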
ERIC Educational Resources Information Center
Bigelow, Cale A.; Walker, Kristina S.
2007-01-01
Putting greens are the most important golf course use area and regularly draw comments regarding their appearance and playing condition. This field laboratory exercise taught students how to properly measure putting green speed, an important functional characteristic, using a Stimpmeter device that measures golf ball roll distance (BRD).…
40 CFR 80.1507 - What are the defenses for acts prohibited under this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Requirements for Gasoline-Ethanol Blends § 80.1507 What are the defenses for acts prohibited under this subpart... applicable maximum and/or minimum volume percent of ethanol. (2) That on each occasion when gasoline is found... checks to reconcile volumes of ethanol in inventory and regular checks of equipment for proper ethanol...
40 CFR 80.1507 - What are the defenses for acts prohibited under this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Requirements for Gasoline-Ethanol Blends § 80.1507 What are the defenses for acts prohibited under this subpart... applicable maximum and/or minimum volume percent of ethanol. (2) That on each occasion when gasoline is found... checks to reconcile volumes of ethanol in inventory and regular checks of equipment for proper ethanol...
40 CFR 80.1507 - What are the defenses for acts prohibited under this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Requirements for Gasoline-Ethanol Blends § 80.1507 What are the defenses for acts prohibited under this subpart... applicable maximum and/or minimum volume percent of ethanol. (2) That on each occasion when gasoline is found... checks to reconcile volumes of ethanol in inventory and regular checks of equipment for proper ethanol...
Effect of Selected Variables on Funding State Compensatory and Regular Education in Texas
ERIC Educational Resources Information Center
Wiesman, Karen Wheeler
2009-01-01
Funding public schools has been an ongoing struggle since the inception of the United States. Beginning with Jefferson's "A General Diffusion of Knowledge" that charged the states with properly funding public schools, to the current day legal battles that continue in states across the Union, America struggles with finding a solution to…
Starch-Branching Enzyme IIa Is Required for Proper Diurnal Cycling of Starch in Leaves of Maize
Yandeau-Nelson, Marna D.; Laurens, Lieve; Shi, Zi; Xia, Huan; Smith, Alison M.; Guiltinan, Mark J.
2011-01-01
Starch-branching enzyme (SBE), a glucosyl transferase, is required for the highly regular pattern of α-1,6 bonds in the amylopectin component of starch. In the absence of SBEIIa, as shown previously in the sbe2a mutant of maize (Zea mays), leaf starch has drastically reduced branching and the leaves exhibit a severe senescence-like phenotype. Detailed characterization of the maize sbe2a mutant revealed that SBEIIa is the primary active branching enzyme in the leaf and that in its absence plant growth is affected. Both seedling and mature sbe2a mutant leaves do not properly degrade starch during the night, resulting in hyperaccumulation. In mature sbe2a leaves, starch hyperaccumulation is greatest in visibly senescing regions but also observed in green tissue and is correlated to a drastic reduction in photosynthesis within the leaf. Starch granules from sbe2a leaves observed via scanning electron microscopy and transmission electron microscopy analyses are larger, irregular, and amorphous as compared with the highly regular, discoid starch granules observed in wild-type leaves. This appears to trigger premature senescence, as shown by an increased expression of genes encoding proteins known to be involved in senescence and programmed cell death processes. Together, these results indicate that SBEIIa is required for the proper diurnal cycling of transitory starch within the leaf and suggest that SBEIIa is necessary in producing an amylopectin structure amenable to degradation by starch metabolism enzymes. PMID:21508184
NASA Astrophysics Data System (ADS)
Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian
2015-12-01
Total variation (TV) based regularization has proven to be a popular and effective model for image restoration because of its ability to preserve edges. However, since TV favors a piecewise-constant solution, the processed results easily exhibit "staircase effects" in flat regions of the image, and the amplitude of the edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot adapt to the spatially local information of the image. In this paper, we propose a novel scatter-matrix eigenvalue-based TV (SMETV) regularization with an image blind restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into the regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve edge and detail information. Moreover, it is more robust to changes in the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.
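A hedged sketch of the general idea named above, not the paper's exact construction: eigenvalues of the local gradient scatter (structure) tensor give a difference-eigenvalue edge indicator, which is then mapped to a spatially varying regularization weight (the mapping below is hypothetical).

import numpy as np
from scipy.ndimage import convolve

def difference_eigenvalue_map(img, win=5):
    # edge indicator from eigenvalues of the local gradient scatter (structure) tensor:
    # lam1 - lam2 is large near edges and close to zero in flat regions
    gy, gx = np.gradient(img.astype(float))
    k = np.ones((win, win)) / win**2
    jxx = convolve(gx * gx, k)
    jyy = convolve(gy * gy, k)
    jxy = convolve(gx * gy, k)
    tr, det = jxx + jyy, jxx * jyy - jxy**2
    disc = np.sqrt(np.maximum(tr**2 / 4.0 - det, 0.0))
    return 2.0 * disc                               # lam1 - lam2

img = np.zeros((64, 64)); img[:, 32:] = 1.0         # toy image with a single vertical edge
edge = difference_eigenvalue_map(img)
# hypothetical mapping to a per-pixel TV weight: strong smoothing in flat areas, weak at edges
alpha = 1.0 / (1.0 + edge / (edge.mean() + 1e-12))
print("weight at an edge pixel vs. a flat pixel:", round(alpha[32, 32], 3), round(alpha[32, 5], 3))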
NASA Astrophysics Data System (ADS)
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track time-varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a selected-band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we use a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
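A minimal sketch of plain recursive least squares with exponential forgetting, the baseline that the compared methods build on; the regularized (REFACM) and directional-forgetting variants are not reproduced here, and the toy data are made up.

import numpy as np

def rls_exponential_forgetting(phi_seq, y_seq, n_params, lam=0.98, p0=1e3):
    # recursive least squares with forgetting factor lam (0 < lam <= 1)
    theta = np.zeros(n_params)
    P = p0 * np.eye(n_params)              # covariance-like matrix
    history = []
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi, float)
        K = (P @ phi) / (lam + phi @ P @ phi)       # gain vector
        theta = theta + K * (y - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
        history.append(theta.copy())
    return np.array(history)

# toy example: a parameter that jumps halfway through the data
rng = np.random.default_rng(2)
T = 400
theta_true = np.where(np.arange(T) < 200, 1.0, 2.5)
phi_seq = rng.normal(size=(T, 1))
y_seq = phi_seq[:, 0] * theta_true + 0.05 * rng.normal(size=T)
est = rls_exponential_forgetting(phi_seq, y_seq, n_params=1, lam=0.95)
print("estimate near the end:", est[-1])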
Teraoka, Seitaro; Hayashida, Naomi; Shinkawa, Tetsuko; Taira, Yasuyuki; Nagai-Sekitani, Yui; Irie, Sumiko; Kamasaki, Toshihiko; Nakashima-Hashiguchi, Kanami; Yoshida, Koji; Orita, Makiko; Morishita, Michiko; Clancey, Gregory; Takamura, Noboru
2013-01-01
Psychosocial stress is generally associated with adverse health behaviors and has been linked to the development of cardiovascular diseases (CVD). Recently, an individual's sense of coherence (SOC), which is a concept that reflects the ability to cope with psychosocial stress, has been recognized as an essential component of long-term health and stress management. The association between SOC and traditional and alternative atherosclerotic markers in a community sample, however, has not been thoroughly investigated. In the present study, we evaluated stress management capability and psychological conditions using the Japanese version of the Sense of Coherence-13 (SOC-13) Scale, supplemented by the General Health Questionnaire-12 (GHQ-12) that screens for minor psychiatric disorders. The study subjects were 511 adults, median age 64 years (range 48-70), who participated in a regular medical screening program in Nagasaki Prefecture, Japan. We then correlated our findings with atherosclerotic risk factors in the same community sample, such as body mass index (BMI) and proper and regular sleeping habits. We found a close association between good stress management capability and lower BMI and/or regular sleeping habits in elderly Japanese. This provides strong evidence that BMI and sleep management are contributory to SOC. If the ability to cope with psychosocial stress is important to the prevention of CVD, then weight control and proper sleep habits must be emphasized from a psychosocial stress-management perspective as well as a physical one.
On the regularized fermionic projector of the vacuum
NASA Astrophysics Data System (ADS)
Finster, Felix
2008-03-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
Zhang, Cheng; Zhang, Tao; Zheng, Jian; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui
2015-01-01
In recent years, X-ray computed tomography (CT) has become widely used to reveal patients' anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to get a reconstruction image of high quality in the undersampling situation. On the other hand, a preliminary attempt of low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined directly from the detected datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well, producing better reconstructed images while saving a large amount of time.
Stark widths regularities within spectral series of sodium isoelectronic sequence
NASA Astrophysics Data System (ADS)
Trklja, Nora; Tapalaga, Irinel; Dojčinović, Ivan P.; Purić, Jagoš
2018-02-01
Stark widths within spectral series of the sodium isoelectronic sequence have been studied. This is a unique approach that includes both neutrals and ions. Two levels of the problem are considered: if the required atomic parameters are known, Stark widths can be calculated by some of the known methods (in the present paper a modified semiempirical formula has been used), but if there is a lack of parameters, regularities enable determination of Stark broadening data. In the framework of regularity research, Stark broadening dependence on environmental conditions and certain atomic parameters has been investigated. The aim of this work is to give a simple model, with a minimum of required parameters, which can be used for calculation of Stark broadening data for any chosen transitions within sodium-like emitters. The obtained relations were used for predictions of Stark widths for transitions that have not been measured or calculated yet. This approach enables fast data processing using the proposed theoretical model, and it provides quality control and verification of the obtained results.
Regularized Semiparametric Estimation for Ordinary Differential Equations
Li, Yun; Zhu, Ji; Wang, Naisyin
2015-01-01
Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to well estimate these parameters. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constants within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639
Fast incorporation of optical flow into active polygons.
Unal, Gozde; Krim, Hamid; Yezzi, Anthony
2005-06-01
In this paper, we first reconsider, in a different light, the addition of a prediction step to active contour-based visual tracking using an optical flow and clarify the local computation of the latter along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application. It also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need of adding ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The greater robustness and speed due to a reduced number of parameters of this technique are additional and appealing features.
The LPM effect in sequential bremsstrahlung: dimensional regularization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin
The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.
The LPM effect in sequential bremsstrahlung: dimensional regularization
Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin
2016-10-19
The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.
Calculating Proper Motions in the WFCAM Science Archive for the UKIRT Infrared Deep Sky Surveys
NASA Astrophysics Data System (ADS)
Collins, R.; Hambly, N.
2012-09-01
The ninth data release from the UKIRT Infrared Deep Sky Surveys (hereafter UKIDSS DR9), represents five years worth of observations by its wide-field camera (WFCAM) and will be the first to include proper motion values in its source catalogues for the shallow, wide-area surveys; the Large Area Survey (LAS), Galactic Clusters Survey (GCS) and (ultimately) Galactic Plane Survey (GPS). We, the Wide Field Astronomy Unit (WFAU) at the University of Edinburgh who prepare these regular data releases in the WFCAM Science Archive (WSA), describe in this paper how we make optimal use of the individual detection catalogues from each observation to derive high-quality astrometric fits for the positions of each detection enabling us to calculate a proper motion solution across multiple epochs and passbands when constructing a merged source catalogue. We also describe how the proper motion solutions affect the calculation of the various attributes provided in the database source catalogue tables, what measures of data quality we provide and a demonstration of the results for observations of the Pleiades cluster.
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Multiple graph regularized protein domain ranking
2012-01-01
Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331
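A minimal sketch of single-graph regularized ranking, the manifold-ranking-style baseline that MultiG-Rank generalizes; MultiG-Rank additionally learns weights over several graphs, which is not reproduced here, and the toy similarity matrix below is made up.

import numpy as np

def graph_regularized_ranking(W, y, alpha=0.9):
    # rank database items against a query using a single graph regularizer:
    # W is a symmetric pairwise-similarity matrix, y the initial relevance vector,
    # alpha trades off graph smoothness against fit to y
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt                 # symmetrically normalized affinity
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)   # closed-form ranking scores

# toy example: 6 "protein domains" with a block-structured similarity matrix
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], float)
y = np.array([1, 0, 0, 0, 0, 0], float)             # query is item 0
print(np.round(graph_regularized_ranking(W, y), 3))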
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
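A minimal sketch of generalized cross-validation for a Tikhonov-regularized linear problem; the paper's Lanczos and Gauss-quadrature approximations for large problems are not reproduced, so the small-scale SVD form is shown instead on a synthetic operator.

import numpy as np

def gcv_tikhonov(A, b, lambdas):
    # GCV curve G(lam) = ||(I - H(lam)) b||^2 / trace(I - H(lam))^2 for Tikhonov regularization
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    resid_outside = max(np.sum(b**2) - np.sum(beta**2), 0.0)   # part of b outside range(A)
    m = b.size
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                  # Tikhonov filter factors
        resid = np.sum(((1 - f) * beta) ** 2) + resid_outside
        eff_dof = m - np.sum(f)                     # trace(I - influence matrix)
        scores.append(resid / eff_dof**2)
    return np.array(scores)

rng = np.random.default_rng(3)
A = rng.normal(size=(80, 40)) @ np.diag(np.logspace(0, -4, 40))  # ill-conditioned operator
x_true = rng.normal(size=40)
b = A @ x_true + 1e-3 * rng.normal(size=80)
lams = np.logspace(-6, 1, 30)
g = gcv_tikhonov(A, b, lams)
print("GCV-selected lambda:", lams[int(np.argmin(g))])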
Understanding, Classifying, and Selecting Environmentally Acceptable Hydraulic Fluids
2016-08-01
installed in land facilities or off-road vehicles such as excavators, bulldozers, backhoes, etc.), while others are installed on floating plants...and oily bilge tanks for the collection and proper disposal of oil-contaminated bilge water • Performing routine maintenance, including regular...regulations. Maintenance of machinery systems containing EA hydraulic fluids must strictly follow the hydraulic fluid manufacturer's recommendations
The Link Between Nutrition and Physical Activity in Increasing Academic Achievement.
Asigbee, Fiona M; Whitney, Stephen D; Peterson, Catherine E
2018-06-01
Research demonstrates a link between decreased cognitive function in overweight school-aged children and improved cognitive function among students with high fitness levels and children engaging in regular physical activity (PA). The purpose of this study was to examine whether regular PA and proper nutrition together had a significant effect on academic achievement. Using the seventh wave of the Early Childhood Longitudinal Study, Kindergarten Class 1998-99 (ECLS-K) dataset, linear regression analysis with a Jackknife resampling correction was conducted to analyze the relationship among nutrition, PA, and academic achievement, while controlling for socioeconomic status, age, and sex. A nonactive, unhealthy nutrition group and a physically active, healthy nutrition group were compared on standardized tests of academic achievement. Findings indicated that PA levels and proper nutrition significantly predicted achievement scores. Thus, the active, healthy nutrition group scored higher on reading, math, and science standardized achievement tests scores. There is a strong connection between healthy nutrition and adequate PA, and the average performance within the population. Thus, results from this study suggest a supporting relationship between students' health and academic achievement. Findings also provide implications for school and district policy changes. © 2018, American School Health Association.
Höfle, Stefan; Bernhard, Christoph; Bruns, Michael; Kübel, Christian; Scherer, Torsten; Lemmer, Uli; Colsmann, Alexander
2015-04-22
Tandem organic light emitting diodes (OLEDs) utilizing fluorescent polymers in both sub-OLEDs and a regular device architecture were fabricated from solution, and their structure and performance characterized. The charge carrier generation layer comprised a zinc oxide layer, modified by a polyethylenimine interface dipole, for electron injection and either MoO3, WO3, or VOx for hole injection into the adjacent sub-OLEDs. ToF-SIMS investigations and STEM-EDX mapping verified the distinct functional layers throughout the layer stack. At a given device current density, the current efficiencies of both sub-OLEDs add up to a maximum of 25 cd/A, indicating a properly working tandem OLED.
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.
1991-01-01
A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
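A minimal sketch of the L-curve idea mentioned above for Tikhonov regularization: solve the problem over a grid of parameters, form the curve of log residual norm versus log solution norm, and take the point of largest curvature as the corner (a crude finite-difference corner detector on synthetic data, not the paper's implementation).

import numpy as np

def l_curve_corner(A, b, lambdas):
    # pick the Tikhonov parameter at the corner (largest curvature) of the L-curve
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    log_rho, log_eta = [], []
    for lam in lambdas:
        x = Vt.T @ ((s / (s**2 + lam**2)) * beta)            # Tikhonov solution for this lam
        log_rho.append(np.log(np.linalg.norm(A @ x - b)))    # log residual norm
        log_eta.append(np.log(np.linalg.norm(x)))            # log solution norm
    log_rho, log_eta = np.array(log_rho), np.array(log_eta)
    t = np.log(lambdas)
    dr, de = np.gradient(log_rho, t), np.gradient(log_eta, t)
    ddr, dde = np.gradient(dr, t), np.gradient(de, t)
    kappa = (dr * dde - ddr * de) / (dr**2 + de**2) ** 1.5   # signed curvature
    return lambdas[int(np.argmax(np.abs(kappa)))]            # crude corner detector

rng = np.random.default_rng(4)
A = rng.normal(size=(80, 40)) @ np.diag(np.logspace(0, -4, 40))  # ill-conditioned operator
x_true = rng.normal(size=40)
b = A @ x_true + 1e-3 * rng.normal(size=80)
print("L-curve corner lambda:", l_curve_corner(A, b, np.logspace(-6, 1, 40)))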
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization mean. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.
NASA Astrophysics Data System (ADS)
Vignati, F.; Guardone, A.
2017-11-01
An analytical model for the evolution of regular reflections of cylindrical converging shock waves over circular-arc obstacles is proposed. The model is based on a new (local) parameter, the perceived wedge angle, which substitutes for the (global) wedge angle of planar surfaces and accounts for the time-dependent curvature of both the shock and the obstacle at the reflection point. The new model compares fairly well with numerical results. Results from numerical simulations of the regular to Mach transition—eventually occurring further downstream along the obstacle—point to the perceived wedge angle as the most significant parameter to identify regular to Mach transitions. Indeed, at the transition point, the value of the perceived wedge angle is between 39° and 42° for all investigated configurations, whereas, e.g., the absolute local wedge angle varies between 10° and 45° in the same conditions.
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
NASA Astrophysics Data System (ADS)
Cheong, Kwang-Ho; Lee, MeYeon; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, SoAh; Hwang, Taejin; Kim, Haeyoung; Kim, KyoungJu; Han, Tae Jin; Bae, Hoonsik
2015-01-01
Despite the considerable importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, not to mention the necessity of maintaining that regularity through the following sessions, an effective and simply applicable method by which those goals can be accomplished has rarely been reported. The authors herein propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a cos^4(ω(t)·t) waveform with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period (s_f), the sample standard deviation of the amplitude (s_a), and the results of a simple regression of the baseline drift (slope β and standard deviation of residuals σ_r) of a respiration signal. The overall irregularity (δ) was defined in terms of a variable newly derived by applying principal component analysis (PCA) to the four fluctuation parameters, with two principal components (ω_1, ω_2). The proposed respiration regularity index was defined as ρ = ln(1 + 1/δ)/2, a higher ρ indicating a more regular breathing pattern. We investigated its clinical relevance by comparing it with other known parameters. Subsequently, we applied it to 110 respiration signals acquired from five liver and five lung cancer patients by using real-time position management (RPM; Varian Medical Systems, Palo Alto, CA). Correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Additionally, the respiration regularity was compared between the liver and lung cancer patient groups. The respiration regularity was determined based on ρ; patients with ρ < 0.3 showed worse regularity than the others, whereas ρ > 0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in the breathing cycle and the amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Notably, the breathing patterns of the lung cancer patients were more irregular than those of the liver cancer patients. Respiration regularity could be objectively determined by using a composite index, ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases.
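A rough sketch, under stated assumptions, of computing such a regularity index from a respiration trace: the four fluctuation parameters (s_f, s_a, β, σ_r) are estimated crudely from the signal, and the paper's PCA-based combination, which the abstract does not spell out, is replaced by a plain root-sum-square stand-in before applying ρ = ln(1 + 1/δ)/2.

import numpy as np
from scipy.signal import find_peaks

def respiration_regularity(t, x):
    # fluctuation parameters estimated directly from the trace (crude proxies)
    peaks, _ = find_peaks(x, distance=5)
    troughs, _ = find_peaks(-x, distance=5)
    periods = np.diff(t[peaks])                       # cycle-to-cycle breathing periods
    amplitudes = x[peaks] - np.median(x[troughs])     # peak height above a typical trough level
    s_f = np.std(periods, ddof=1)                     # period fluctuation
    s_a = np.std(amplitudes, ddof=1)                  # amplitude fluctuation
    beta, intercept = np.polyfit(t, x, 1)             # simple regression as a baseline-drift stand-in
    sigma_r = np.std(x - (beta * t + intercept), ddof=1)
    delta = np.sqrt(s_f**2 + s_a**2 + beta**2 + sigma_r**2)   # stand-in for the PCA combination
    return np.log(1.0 + 1.0 / delta) / 2.0            # rho = ln(1 + 1/delta)/2

t = np.linspace(0.0, 60.0, 1500)                      # one minute sampled at 25 Hz
x = np.cos(2 * np.pi * t / 4.0) ** 4 + 0.02 * t       # cos^4 model with a slight baseline drift
print("regularity index rho:", round(respiration_regularity(t, x), 3))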
Advanced morphological analysis of patterns of thin anodic porous alumina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toccafondi, C.; Istituto Italiano di Tecnologia, Department of Nanostructures, Via Morego 30, Genova I 16163; Stępniowski, W.J.
2014-08-15
Different conditions of fabrication of thin anodic porous alumina on glass substrates have been explored, obtaining two sets of samples with varying pore density and porosity, respectively. The patterns of pores have been imaged by high resolution scanning electron microscopy and analyzed by innovative methods. The regularity ratio has been extracted from radial profiles of the fast Fourier transforms of the images. Additionally, the Minkowski measures have been calculated. It was first observed that the regularity ratio averaged across all directions is properly corrected by the coefficient previously determined in the literature. Furthermore, the angularly averaged regularity ratio for the thin porous alumina made during short single-step anodizations is lower than that of hexagonal patterns of pores as for thick porous alumina from aluminum electropolishing and two-step anodization. Therefore, the regularity ratio represents a reliable measure of pattern order. At the same time, the lower angular spread of the regularity ratio shows that disordered porous alumina is more isotropic. Within each set, when changing either pore density or porosity, both regularity and isotropy remain rather constant, showing consistent fabrication quality of the experimental patterns. Minor deviations are tentatively discussed with the aid of the Minkowski measures, and the slight decrease in both regularity and isotropy for the final data-points of the porosity set is ascribed to excess pore opening and consequent pore merging. - Highlights: • Thin porous alumina is partly self-ordered and pattern analysis is required. • Regularity ratio is often misused: we fix the averaging and consider its spread. • We also apply the mathematical tool of Minkowski measures, new in this field. • Regularity ratio shows pattern isotropy and Minkowski helps in assessment. • General agreement with perfect artificial patterns confirms the good manufacturing.
NASA Astrophysics Data System (ADS)
Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé
2006-06-01
An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that has already been analyzed using other inversion techniques. The FREJA satellite data that is used consists of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and without any prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is to return the WDF that exhibits the largest entropy and to avoid the use of a priori models, which sometimes seem to be more accurate but without any justification.
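A minimal sketch of choosing a regularization parameter by Morozov's discrepancy principle, as referenced above, for a Tikhonov-regularized linear inversion with a known noise level (synthetic data; this is not the entropy-regularization code itself).

import numpy as np

def discrepancy_principle_lambda(A, b, noise_norm, lambdas):
    # pick the lambda whose Tikhonov residual ||A x_lam - b|| is closest to the noise level
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best_lam, best_gap = None, np.inf
    for lam in sorted(lambdas):
        x = Vt.T @ ((s / (s**2 + lam**2)) * beta)
        gap = abs(np.linalg.norm(A @ x - b) - noise_norm)
        if gap < best_gap:
            best_lam, best_gap = lam, gap
    return best_lam

rng = np.random.default_rng(5)
A = rng.normal(size=(100, 50)) @ np.diag(np.logspace(0, -3, 50))
x_true = rng.normal(size=50)
noise = 0.01 * rng.normal(size=100)
b = A @ x_true + noise
lam = discrepancy_principle_lambda(A, b, np.linalg.norm(noise), np.logspace(-5, 1, 40))
print("lambda chosen by the discrepancy principle:", lam)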
Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations
NASA Astrophysics Data System (ADS)
Poleshchikov, S. M.
2018-03-01
Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.
Mbuthia, Jackson Mwenda; Rewe, Thomas Odiwuor; Kahi, Alexander Kigunzu
2015-02-01
This study evaluated pig production practices by smallholder farmers in two distinct production systems geared towards addressing their constraints and prospects for improvement. The production systems evaluated were semi-intensive and extensive and differed in remoteness, market access, resource availability and pig production intensity. Data were collected using structured questionnaires where a total of 102 pig farmers were interviewed. Qualitative and quantitative research methods were employed to define the socioeconomic characteristics of the production systems, understanding the different roles that pigs play, marketing systems and constraints to production. In both systems, regular cash income and insurance against emergencies were ranked as the main reasons for rearing pigs. Marketing of pigs was mainly driven by the type of production operation. Finances, feeds and housing were identified as the major constraints to production. The study provides important parameters and identifies constraints important for consideration in design of sustainable production improvement strategies. Feeding challenges can be improved through understanding the composition and proper utilization of local feed resources. Provision of adequate housing would improve the stocking rates and control mating.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo methods, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error; this error e is often called "noise". Because of the ill-posedness inherent in the inverse problem, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations despite the ill-posedness. The illustrated results show that TGSVD has advantages such as higher precision, better adaptability and noise immunity compared with the TDM. In addition, choosing a proper regularization matrix L and truncation parameter k is very useful for improving the identification accuracy and handling the ill-posed problems that arise when the method is used to identify moving forces on a bridge.
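A hedged sketch of the truncation idea: the plain truncated-SVD special case (regularization matrix L = I) is shown, where the truncation parameter k discards the small singular values that amplify the noise e; the full TGSVD uses the generalized SVD of the pair (A, L) and is not reproduced here. The force history and operator below are synthetic.

import numpy as np

def tsvd_solve(A, b, k):
    # truncated-SVD solution of A x = b keeping the k largest singular values
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(k, s.size)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(6)
A = rng.normal(size=(200, 30)) @ np.diag(np.logspace(0, -6, 30))   # badly ill-conditioned
f_true = np.sin(np.linspace(0, np.pi, 30))                         # hypothetical force history
b = A @ f_true + 1e-4 * rng.normal(size=200)
for k in (5, 10, 20, 30):
    err = np.linalg.norm(tsvd_solve(A, b, k) - f_true) / np.linalg.norm(f_true)
    print(f"k={k:2d}  relative error={err:.3f}")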
Method of Individual Forecasting of Technical State of Logging Machines
NASA Astrophysics Data System (ADS)
Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.
2018-03-01
Development of a model that evaluates the possibility of failure requires knowledge of the regularities with which the technical-condition parameters of machines change during use. To study these regularities, stochastic models need to be developed that take into account the physical essence of the destruction processes of the machines' structural elements, the technology of their production, their degradation, the stochastic properties of the technical-state parameters, and the operating conditions and modes.
Zhang, Cheng; Zhang, Tao; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui
2015-01-01
In recent years, X-ray computed tomography (CT) has become widely used to reveal patients' anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to get a reconstruction image of high quality in the undersampling situation. On the other hand, a preliminary attempt of low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined directly from the detected datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well, producing better reconstructed images while saving a large amount of time. PMID:26550024
Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A
2015-02-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
Noll, Douglas C.; Fessler, Jeffrey A.
2014-01-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484
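A minimal sketch of the momentum-plus-adaptive-restart ingredients mentioned above, applied to the simpler single-Lipschitz-constant majorize-minimize scheme (FISTA) for an l1-regularized least-squares problem; BARISTA's diagonal majorizers in the range of the regularizer are not reproduced here, and the data are synthetic.

import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_restart(A, b, lam, n_iter=300):
    # FISTA with momentum and function-value restart for min_x 0.5*||A x - b||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1]); t = 1.0
    obj = lambda u: 0.5 * np.sum((A @ u - b) ** 2) + lam * np.sum(np.abs(u))
    prev = obj(x)
    for _ in range(n_iter):
        x_new = soft(z - (A.T @ (A @ z - b)) / L, lam / L)   # majorize-minimize step
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        cur = obj(x_new)
        if cur > prev:                              # adaptive restart: drop the momentum
            z, t_new = x_new, 1.0
        x, t, prev = x_new, t_new, cur
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(120, 400))
x_true = np.zeros(400); x_true[rng.choice(400, 10, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=120)
x_hat = fista_restart(A, b, lam=0.1)
print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 0.1))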
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
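A minimal sketch of the evaluation procedure described above: repeat a reconstruction many times with fresh noise and split the per-pixel mean-squared error into squared bias and variance; a Tikhonov-regularized linear reconstruction stands in for the diffuse-tomography algorithm, and all data are synthetic.

import numpy as np

rng = np.random.default_rng(8)
n_meas, n_pix = 120, 64
A = rng.normal(size=(n_meas, n_pix)) @ np.diag(np.logspace(0, -3, n_pix))
x_true = np.zeros(n_pix); x_true[20:30] = 1.0
sigma = 0.01

def reconstruct(y, lam):
    # simple Tikhonov-regularized linear reconstruction
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n_pix), A.T @ y)

n_trials = 100
for lam in (1e-3, 1e-2, 1e-1, 1.0):
    recs = np.array([reconstruct(A @ x_true + sigma * rng.normal(size=n_meas), lam)
                     for _ in range(n_trials)])
    bias2 = np.mean((recs.mean(axis=0) - x_true) ** 2)   # squared image bias
    var = np.mean(recs.var(axis=0))                       # image variance
    print(f"lam={lam:5.0e}  bias^2={bias2:.2e}  var={var:.2e}  MSE={bias2 + var:.2e}")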
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.
A classification of freshwater Louisiana lakes based on water quality and user perception data.
Burden, D G; Malone, R F
1987-09-01
An index system developed for Louisiana lakes was based on correlations between measurable water quality parameters and perceived lake quality. Support data was provided by an extensive monitoring program of 30 lakes coordinated with opinion surveys undertaken during summer 1984. Lakes included in the survey ranged from 4 to 735 km² in surface area with mean depths ranging from 0.5 to 8.0 m. Water quality data indicated most of these lakes are eutrophic, although many have productive fisheries and are considered recreational assets. Perception ratings of fishing quality and its associated water quality were obtained by distributing approximately 1200 surveys to Louisiana Bass Club Association members. The ability of Secchi disc transparency, total organic carbon, total Kjeldahl nitrogen, total phosphorus, and chlorophyll a to discriminate between perception classes was examined using probability distributions and multivariate analyses. Secchi disc and total organic carbon best reflected perceived lake conditions; however, these parameters did not provide the discrimination necessary for developing a quantitative risk assessment of lake trophic state. Consequently, an interim lakes index system was developed based on total organic carbon and perceived lake conditions. The developed index system will aid State officials in interpreting and evaluating regularly collected lake quality data, recognizing potential problem areas, and identifying proper management policies for protecting fisheries usage within the State.
NASA Astrophysics Data System (ADS)
García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João
2017-08-01
Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to keep a standard quality and uniformed criteria. Desirably, active fault databases should somehow indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2 either for proper use in seismic hazard applications or as an informative source for non-specialized users. However, we already envision new improvements for a future update.
Singh, Darshan; Murugaiyah, Vikneswaran; Hamid, Shahrul Bariyah Sahul; Kasinather, Vicknasingam; Chan, Michelle Su Ann; Ho, Eric Tatt Wei; Grundmann, Oliver; Chear, Nelson Jeng Yeou; Mansor, Sharif Mahsufi
2018-07-15
Mitragyna speciosa (Korth.) also known as kratom, is a native medicinal plant of Southeast Asia with opioid-like effects. Kratom tea/juice have been traditionally used as a folk remedy and for controlling opiate withdrawal in Malaysia. Long-term opioid use is associated with depletion in testosterone levels. Since kratom is reported to deform sperm morphology and reduce sperm motility, we aimed to clinically investigate the testosterone levels following long-term kratom tea/juice use in regular kratom users. A total of 19 regular kratom users were recruited for this cross-sectional study. A full-blood test was conducted including determination of testosterone level, follicle stimulating hormone (FSH) and luteinizing hormone (LH) profile, as well as hematological and biochemical parameters of participants. We found long-term kratom tea/juice consumption with a daily mitragynine dose of 76.23-94.15 mg did not impair testosterone levels, or gonadotrophins, hematological and biochemical parameters in regular kratom users. Regular kratom tea/juice consumption over prolonged periods (>2 years) was not associated with testosterone impairing effects in humans. Copyright © 2018 Elsevier B.V. All rights reserved.
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be represented in a mathematical model as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least-squares method, where the finite element method and the Euler method are used for approximating the solution of the SIR differential equation. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the correlation coefficient. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
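A rough sketch of the estimation idea under simplifying assumptions: an SIR model is integrated with an Euler scheme, and the contact-rate parameter is chosen by minimizing a least-squares misfit to noisy new-infection data plus a small Tikhonov penalty; a grid search replaces the paper's finite-element and optimization machinery, and the data are simulated rather than CDC counts.

import numpy as np

def sir_new_infections(beta, gamma=0.25, s0=0.99, i0=0.01, days=60, dt=0.1):
    # Euler integration of the SIR model; returns daily new infections (beta * S * I summed per day)
    steps_per_day = int(round(1 / dt))
    s, i = s0, i0
    daily = []
    for _ in range(days):
        new = 0.0
        for _ in range(steps_per_day):
            inc = beta * s * i * dt
            s, i = s - inc, i + inc - gamma * i * dt
            new += inc
        daily.append(new)
    return np.array(daily)

rng = np.random.default_rng(9)
beta_true = 0.5
data = sir_new_infections(beta_true) * (1 + 0.05 * rng.normal(size=60))  # noisy simulated counts

lam, beta_prior = 1e-3, 0.4                       # small Tikhonov penalty toward a prior guess
betas = np.linspace(0.1, 1.0, 91)
objective = [np.sum((sir_new_infections(b) - data) ** 2) + lam * (b - beta_prior) ** 2
             for b in betas]
beta_hat = betas[int(np.argmin(objective))]
print(f"estimated contact rate: {beta_hat:.2f} (true {beta_true})")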
SPILC: An expert student advisor
NASA Technical Reports Server (NTRS)
Read, D. R.
1990-01-01
The Lamar University Computer Science Department serves about 350 undergraduate C.S. majors, and 70 graduate majors. B.S. degrees are offered in Computer Science and Computer and Information Science, and an M.S. degree is offered in Computer Science. In addition, the Computer Science Department plays a strong service role, offering approximately sixteen service course sections per long semester. The department has eight regular full-time faculty members, including the Department Chairman and the Undergraduate Advisor, and from three to seven part-time faculty members. Due to the small number of regular faculty members and the resulting very heavy teaching loads, undergraduate advising has become a difficult problem for the department. There is a one week early registration period and a three-day regular registration period once each semester. The Undergraduate Advisor's regular teaching load of two classes, 6 - 8 semester hours, per semester, together with the large number of majors and small number of regular faculty, cause long queues and short tempers during these advising periods. The situation is aggravated by the fact that entering freshmen are rarely accompanied by adequate documentation containing the facts necessary for proper counselling. There has been no good method of obtaining necessary facts and documenting both the information provided by the student and the resulting advice offered by the counsellors.
Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schee, Jan; Stuchlík, Zdeněk, E-mail: jan.schee@fpf.slu.cz, E-mail: zdenek.stuchlik@fpf.slu.cz
We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter in dependence on the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low impact parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena could occur in the black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetimes having a nearly flat central region.
Khatri, Nitasha; Tyagi, Sanjiv; Rawtani, Deepak
2017-12-07
Water pollution and water scarcity are major environmental issues in rural and urban areas. They lead to a decline in the quality of water, especially drinking water. Proper qualitative assessment of water is thus necessary to ensure that the water consumed is potable. This study aims to analyze the physicochemical parameters in different sources of water in rural areas and assess the quality of water through a classification system based on BIS and CPCB standards. The classification method has defined water quality in six categories, viz., A, B, C, D, E, and F, depending on the levels of physicochemical parameters in the water samples. The proposed classification system was applied to nine villages in Kadi Taluka, Mehsana district of Gujarat. The water samples were collected from borewells, lakes, Narmada Canal, and sewerage systems and were analyzed as per APHA and IS methods. It was observed that most of the physicochemical parameters of Narmada Canal and borewell water fell under class A, thus making them most suitable for drinking. Further, a health camp conducted at Karannagar village, Mehsana, revealed no incidents of any waterborne diseases. However, there were certain incidents of kidney stones and joint pain in a few villages due to high levels of TDS. Toxic metal analysis revealed low to undetectable concentrations of toxic metals such as lead, arsenic, mercury, and cadmium in all the water sources. It is also recommended that the regular treatment of the Narmada Canal water be continued to maintain its excellent quality.
Spacetime completeness of non-singular black holes in conformal gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bambi, Cosimo; Rachwał, Lesław; Modesto, Leonardo, E-mail: bambi@fudan.edu.cn, E-mail: lmodesto@sustc.edu.cn, E-mail: grzerach@gmail.com
We explicitly prove that the Weyl conformal symmetry solves the black hole singularity problem, otherwise unavoidable in a generally covariant local or non-local gravitational theory. Moreover, we yield explicit examples of local and non-local theories enjoying Weyl and diffeomorphism symmetry (in short, co-covariant theories). Following the seminal paper by Narlikar and Kembhavi, we provide an explicit construction of singularity-free spherically symmetric and axi-symmetric exact solutions for black hole spacetimes conformally equivalent to the Schwarzschild or the Kerr spacetime. We first check the absence of divergences in the Kretschmann invariant for the rescaled metrics. Afterwards, we show that the new types of black holes are geodesically complete and linked by a Newman-Janis transformation just as in standard general relativity (based on the Einstein-Hilbert action). Furthermore, we argue that no massive or massless particles can reach the former Schwarzschild singularity or touch the former Kerr ring singularity in a finite amount of their proper time or of their affine parameter. Finally, we discuss the Raychaudhuri equation in a co-covariant theory and we show that the expansion parameter for congruences of both types of geodesics (for massless and massive particles) never reaches minus infinity. Actually, the null geodesics become parallel at the r = 0 point in the Schwarzschild spacetime (the origin) and the focusing of geodesics is avoided. The arguments of regularity of curvature invariants, geodesic completeness, and finiteness of the geodesics' expansion parameter ensure that we are dealing with singularity-free and geodesically complete black hole spacetimes.
Dynamic positioning configuration and its first-order optimization
NASA Astrophysics Data System (ADS)
Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu
2014-02-01
Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It shows that the D-optimization developed in the discrete optimization is still valid in the dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination operation and rotation operation, and propose some D-optimal simplex dynamic configurations: (1) the (semi)circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and the D-optimal helical configuration, which is close to the GPS constellation, in 3-dimensional space. The initial design of the GPS constellation can be approximately treated as a combination of 24 D-optimal helixes by properly adjusting the ascending node of different satellites to realize a so-called Walker constellation. In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and the helical curve configuration are still D-optimal. It shows that the given total observation time determines the optimal frequency (repeatability) of moving known points and vice versa, and one way to improve the repeatability is to increase the rotational speed. Under Newton's law of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations, one of which is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configurations, the other is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. It shows that an effective way to achieve high coverage is to employ a configuration composed of a certain number of moving known points instead of the simplex configuration (such as the D-optimal helical configuration), and one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexible coverage and flexible repeatability. Alternatively, how to optimally generate and assess the discrete configurations sampled from the continuous one is discussed.
The proposed configuration optimization framework takes into account the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and the regular polyhedrons (regular tetrahedron, cube, regular octahedron, regular icosahedron, or regular dodecahedron). It shows that the conclusions made by the proposed technique are more general and no longer limited by different sampling schemes. By the conditional equation of the D-optimal nested helical configuration, the relevant issues of GNSS constellation optimization are solved, and some examples are performed with the GPS constellation to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in maintenance and quadratic optimization of a single GNSS of which the orbital inclination and the orbital altitude change under precession, as well as in optimally nesting GNSSs to perform global homogeneous coverage of the Earth.
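As a rough numerical illustration of the D-optimality criterion invoked above (a minimal sketch, not taken from the paper: the design matrix rows are assumed to be 2-D unit line-of-sight directions to known points, and D-optimality is taken as maximizing det(AᵀA)), the following compares an evenly spread circular configuration with a clustered one:

```python
import numpy as np

def design_matrix(angles):
    # Rows are 2-D unit line-of-sight vectors from known points toward the unknown position.
    return np.column_stack([np.cos(angles), np.sin(angles)])

def d_criterion(A):
    # D-optimality criterion: determinant of the normal (Fisher information) matrix A'A.
    return np.linalg.det(A.T @ A)

n = 8
uniform = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # directions spread over the full circle
clustered = np.linspace(0.0, 0.5 * np.pi, n)                  # directions squeezed into one quadrant

print("D-criterion, uniform  :", d_criterion(design_matrix(uniform)))
print("D-criterion, clustered:", d_criterion(design_matrix(clustered)))
```

Directions spread evenly over the circle give the larger determinant, in line with the (semi)circular D-optimal configurations mentioned in the abstract.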
Calabi-Yau structures on categories of matrix factorizations
NASA Astrophysics Data System (ADS)
Shklyarov, Dmytro
2017-09-01
Using tools of complex geometry, we construct explicit proper Calabi-Yau structures, that is, non-degenerate cyclic cocycles on differential graded categories of matrix factorizations of regular functions with isolated critical points. The formulas involve the Kapustin-Li trace and its higher corrections. From the physics perspective, our result yields explicit 'off-shell' models for categories of topological D-branes in B-twisted Landau-Ginzburg models.
Planning and scheduling for success
NASA Technical Reports Server (NTRS)
Manzanera, Ignacio
1994-01-01
Planning and scheduling programs are excellent management tools when properly introduced to the project management team and regularly maintained. Communications, creativity, flexibility and accuracy are substantially improved by following a simple set of rules. A planning and scheduling program will work for you if you believe in it, make others in your project team realize its benefits, and make it an extension of your project cost control philosophy.
Time-Optimized High-Resolution Readout-Segmented Diffusion Tensor Imaging
Reishofer, Gernot; Koschutnig, Karl; Langkammer, Christian; Porter, David; Jehna, Margit; Enzinger, Christian; Keeling, Stephen; Ebner, Franz
2013-01-01
Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper its clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically depending on spatial variations in signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion weighted images by means of complex independent component analysis. Moreover, the combination of these features enables fully user-independent processing of the diffusion data. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization compared to un-regularized data with respect to parameters relevant for fiber-tracking such as Mean Fiber Length, Track Count, Volume and Voxel Count. Specifically, for in vivo data the findings suggest that tractography results from the regularized diffusion tensor based on one measurement (16 min) are comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high resolution (1×1×2.5 mm3) diffusion tensor imaging of the entire brain applicable in a clinical context. PMID:24019951
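A minimal 1-D sketch of the idea of a spatially varying total-variation penalty (the paper regularizes the full diffusion tensor and derives the local weight from the SNR estimated via complex ICA; the signal, noise map, weight map, and smoothed-TV gradient step below are illustrative assumptions):

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=2000, step=0.05, eps=1e-2):
    # Gradient descent on 0.5*||x - y||^2 + sum_i lam[i]*sqrt((x[i+1]-x[i])^2 + eps),
    # i.e. a smoothed total-variation penalty with a spatially varying weight lam.
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)                              # forward differences
        w = lam[:-1] * d / np.sqrt(d ** 2 + eps)    # derivative of the smoothed |d|
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= w                           # contribution to x[i]
        grad_tv[1:] += w                            # contribution to x[i+1]
        x -= step * ((x - y) + grad_tv)
    return x

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(100), np.ones(100)])
noise_level = np.linspace(0.05, 0.3, truth.size)    # stand-in for a spatially varying noise map
y = truth + noise_level * rng.standard_normal(truth.size)
lam = 2.0 * noise_level                             # heavier regularization where the SNR is lower
x_hat = tv_denoise_1d(y, lam)
print("RMSE noisy   :", np.sqrt(np.mean((y - truth) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((x_hat - truth) ** 2)))
```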
A regularization approach to hydrofacies delineation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Tartakovsky, Daniel
2009-01-01
We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.
Salt, Julián; Cuenca, Ángel; Palau, Francisco; Dormido, Sebastián
2014-01-01
In many control applications, the sensor technology used for the measurement of the variable to be controlled is not able to maintain a restricted sampling period. In this context, the assumption of a regular and uniform sampling pattern is questionable. Moreover, if the control action updating can be faster than the output measurement frequency in order to fulfill the proposed closed loop behavior, the solution is usually a multirate controller. There are some known aspects to be careful of when a multirate (MR) system is going to be designed. The proper multiplicity between input-output sampling periods, the proper controller structure, the existence of ripples and other issues need to be considered. A useful way to save time and achieve good results is to have a computer-assisted design tool. An interactive simulation tool to deal with MR systems seems to be the right solution. In this paper this kind of simulation application is presented. It allows an easy understanding of the performance degradation or improvement when changing the multirate sampling pattern parameters. The tool was developed using Sysquake, a Matlab-like language with fast execution and powerful graphic facilities. It can be delivered as an executable. In the paper a detailed explanation of MR treatment is also included, and the design of four different MR controllers with flexible structure to be adapted to different schemes is also presented. The Smith predictor in these MR schemes is also explained, justified and used when time delays appear. Finally some interesting observations achieved using this interactive tool are included. PMID:24583971
[Folate metabolism--epigenetic role of choline and vitamin B12 during pregnancy].
Drews, Krzysztof
2015-12-01
Adequate choline intake during pregnancy is essential for proper fetal development. Recent studies suggest that even in high-income countries the regular diet of pregnant women does not provide a satisfactory amount of choline. Choline demand during pregnancy is high and seems to exceed present choline intake recommendations. Moreover, the lactation period also demands choline supplementation because of its high concentration in human milk. Numerous animal-model studies have demonstrated a correlation between choline supplementation during pregnancy and proper development of fetal cognitive function. Despite increased synthesis in the maternal liver during pregnancy, choline demand is much higher than common dietary intake. Current studies on nutritional recommendations during pregnancy also concern vitamin B12 supplementation. Vitamin B12 deficiency may be an important risk factor for the development of neural tube defects. The presented article contains a review of data on proper choline and vitamin B12 intake during pregnancy and lactation and the potential consequences of poor maternal choline and vitamin B12 status.
Modified Denavit-Hartenberg parameters for better location of joint axis systems in robot arms
NASA Technical Reports Server (NTRS)
Barker, L. K.
1986-01-01
The Denavit-Hartenberg parameters define the relative location of successive joint axis systems in a robot arm. A recent justifiable criticism is that one of these parameters becomes extremely large when two successive joints have near-parallel rotational axes. Geometrically, this parameter then locates a joint axis system at an excessive distance from the robot arm and, computationally, leads to an ill-conditioned transformation matrix. In this paper, a simple modification (which results from constraining a transverse vector between successive joint rotational axes to be normal to one of the rotational axes, instead of both) overcomes this criticism and favorably locates the joint axis system. An example is given for near-parallel rotational axes of the elbow and shoulder joints in a robot arm. The regular and modified parameters are extracted by an algebraic method with simulated measurement data. Unlike the modified parameters, extracted values of the regular parameters are very sensitive to measurement accuracy.
NASA Astrophysics Data System (ADS)
Goyal, M.; Goyal, R.; Bhargava, R.
2017-12-01
In this paper, triple diffusive natural convection under Darcy flow over an inclined plate embedded in a porous medium saturated with a binary base fluid containing nanoparticles and two salts is studied. The model used for the nanofluid is the one which incorporates the effects of Brownian motion and thermophoresis. In addition, the thermal energy equations include regular diffusion and cross-diffusion terms. The vertical surface has the heat, mass and nanoparticle fluxes each prescribed as a power law function of the distance along the wall. The boundary layer equations are transformed into a set of ordinary differential equations with the help of group theory transformations. A wide range of parameter values is chosen to bring out the effect of the buoyancy ratio, the regular Lewis number and the modified Dufour parameters of both salts and the nanofluid parameters, with varying angles of inclination. The effects of these parameters on the velocity, temperature, solutal and nanoparticle volume fraction profiles, as well as on the important parameters of heat and mass transfer, i.e., the reduced Nusselt, regular and nanofluid Sherwood numbers, are discussed. Such problems find application in the extrusion of metals, polymers and ceramics, production of plastic films, insulation of wires and liquid packaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark
Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice proper:es and Arc:c related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conduc:vity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We lookedmore » for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.« less
Radivojevic, Ubavka D; Lazovic, Gordana B; Kravic-Stevovic, Tamara K; Puzigaca, Zarko D; Canovic, Fadil M; Nikolic, Rajko R; Milicevic, Srboljub M
2014-08-01
Exploring the relation between the age, time since menarche, anthropometric parameters and the growth of the uterus and ovaries in postmenarcheal girls. Cross sectional. Department of Human reproduction at a tertiary pediatric referral center. Eight hundred thirty-five adolescent girls. Postmenarcheal girls were classified according to the regularity of their menstrual cycles in 2 groups (regular and irregular cycles) and compared. Anthropometric measurements and ultrasonographic examination of the pelvis was conducted with all participants. Anthropometric and ultrasonographic parameters were evaluated. Results of our study showed that girls with regular and irregular cycles differed in height, weight, body mass index, percentage of body fat and ovarian volumes. The size of the ovaries decreases in the group of girls with regular cycles (r = 0.14; P < .005), while it increases in girls with irregular cycles (r = 0.15; P < .001) with advancing age. Uterine volume in all patients increases gradually with age reaching consistent values at 16 years (r = 0.5; P < .001). Age at menarche, the time elapsed since menarche, the height, weight, body mass index and percentage of body fat in patients correlated with uterine volume. Ovarian volume correlated with patients' weight, BMI and percentage of fat. Uterus continues to grow in postmenarcheal years, with increasing height and weight of girls, regardless of the regularity of cycles. Postmenarcheal girls with irregular cycles were found to have heavier figures and larger ovaries. Copyright © 2014 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Multiplicative Multitask Feature Learning
Wang, Xin; Bi, Jinbo; Yu, Shipeng; Sun, Jiangwen; Song, Minghu
2016-01-01
We investigate a general framework of multiplicative multitask feature learning which decomposes individual task’s model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm is developed suitable for solving the entire family of formulations with rigorous convergence analysis. Simulation studies have identified the statistical properties of data that would be in favor of the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks. PMID:28428735
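A minimal sketch of the multiplicative decomposition described above, using made-up data, a squared loss, and simple ridge-type regularizers on the two components with plain alternating (blockwise) gradient updates; the paper's framework covers far more general regularizers and comes with a convergence analysis that this toy does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 20, 100
c_true = (rng.random(d) < 0.3).astype(float)          # shared "feature selector" across tasks
tasks = []
for _ in range(2):
    v = rng.standard_normal(d)                        # task-specific component
    X = rng.standard_normal((n, d))
    y = X @ (c_true * v) + 0.1 * rng.standard_normal(n)
    tasks.append((X, y))

# Model: w_t = c * v_t (elementwise).
# Objective: sum_t ||X_t w_t - y_t||^2/(2n) + alpha*||c||^2/2 + beta*sum_t ||v_t||^2/2.
alpha, beta, step, n_iter = 0.1, 0.1, 0.05, 2000
c = np.ones(d)
V = [np.zeros(d) for _ in tasks]
for _ in range(n_iter):
    for t, (X, y) in enumerate(tasks):                # block update of each task-specific part
        r = X @ (c * V[t]) - y
        V[t] -= step * (c * (X.T @ r) / n + beta * V[t])
    grad_c = alpha * c                                # block update of the shared part
    for t, (X, y) in enumerate(tasks):
        r = X @ (c * V[t]) - y
        grad_c += V[t] * (X.T @ r) / n
    c -= step * grad_c

w0 = c * V[0]
print("largest |w| entries on the true shared support:",
      np.round(np.sort(np.abs(w0[c_true > 0]))[-3:], 2))
```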
Ma, Wan-li; Cai, Peng-cheng; Xiong, Xian-zhi; Ye, Hong
2013-02-01
FIZZ/RELM is a new gene family named "found in inflammatory zone" (FIZZ) or "resistin-like molecule" (RELM). FIZZ1/RELMα is specifically expressed in lung tissue and associated with pulmonary inflammation. Chronic cigarette smoking up-regulates FIZZ1/RELMα expression in rat lung tissues, the mechanism of which is related to cigarette smoking-induced airway hyperresponsiveness. To investigate the effect of exercise training on chronic cigarette smoking-induced airway hyperresponsiveness and up-regulation of FIZZ1/RELMα, rat chronic cigarette smoking model was established. The rats were treated with regular exercise training and their airway responsiveness was measured. Hematoxylin and eosin (HE) staining, immunohistochemistry and in situ hybridization of lung tissues were performed to detect the expression of FIZZ1/RELMα. Results revealed that proper exercise training decreased airway hyperresponsiveness and pulmonary inflammation in rat chronic cigarette smoking model. Cigarette smoking increased the mRNA and protein levels of FIZZ1/RELMα, which were reversed by the proper exercise. It is concluded that proper exercise training prevents up-regulation of FIZZ1/RELMα induced by cigarette smoking, which may be involved in the mechanism of proper exercise training modulating airway hyperresponsiveness.
Using Tranformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions
NASA Astrophysics Data System (ADS)
Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.
2014-12-01
One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.
Boudreau, Mathieu; Pike, G Bruce
2018-05-07
To develop and validate a regularization approach of optimizing B1 insensitivity of the quantitative magnetization transfer (qMT) pool-size ratio (F). An expression describing the impact of B1 inaccuracies on qMT fitting parameters was derived using a sensitivity analysis. To simultaneously optimize for robustness against noise and B1 inaccuracies, the optimization condition was defined as the Cramér-Rao lower bound (CRLB) regularized by the B1-sensitivity expression for the parameter of interest (F). The qMT protocols were iteratively optimized from an initial search space, with and without B1 regularization. Three 10-point qMT protocols (Uniform, CRLB, CRLB+B1 regularization) were compared using Monte Carlo simulations for a wide range of conditions (e.g., SNR, B1 inaccuracies, tissues). The B1-regularized CRLB optimization protocol resulted in the best robustness of F against B1 errors, for a wide range of SNR and for both white matter and gray matter tissues. For SNR = 100, this protocol resulted in errors of less than 1% in mean F values for B1 errors ranging between -10 and 20%, the range of B1 values typically observed in vivo in the human head at field strengths of 3 T and less. Both CRLB-optimized protocols resulted in the lowest σF values for all SNRs and did not increase in the presence of B1 inaccuracies. This work demonstrates a regularized optimization approach for improving the robustness of qMT parameters, particularly the pool-size ratio (F), against sensitivity to auxiliary measurements (e.g., B1). Predicting substantially less B1 sensitivity using protocols optimized with this method, B1 mapping could even be omitted for qMT studies primarily interested in F. © 2018 International Society for Magnetic Resonance in Medicine.
Army Hearing Program Status Report Quarter 3 Fiscal Year 2017
2017-09-14
chapter. This provides a vehicle for the collection of Measures of Performance and Measures of Effectiveness (MOE) in order to report the metrics as...Program representatives and managers to visit and inspect these areas regularly for noise exposure and proper protective measures . As evidenced by the...welcome measure to report. RECOMMENDATIONS • Increase participation in the survey as directed by Chief of Staff, U.S. Army Medical Command (MEDCOM
[Antibiotic prescription usage and assessment in geriatric patients].
Dinh, Aurélien; Davido, Benjamin; Salomon, Jérôme; Le Quintrec, Jean-Laurent; Teillet, Laurent
2016-01-01
Due to the high risk of infection, the geriatric population is regularly subjected to antibiotics. Faced with bacterial resistance, particularly among elderly dependent patients, it is essential to promote proper use and correct prescription of antibiotics. A study evaluated antibiotic prescription in a geriatric hospital with 598 beds and highlighted the importance of collaboration between geriatricians and infectious disease specialists. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A
2018-02-01
Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.
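A 1-D sketch of the kind of FFT-based (circulant) approximation of the local impulse response referred to above, for a conventional quadratic roughness penalty; the Gaussian stand-in for A'A, the first-difference penalty, and the value of β are illustrative assumptions, and the separate real/imaginary case treated in the paper is not reproduced here:

```python
import numpy as np

n = 128
t = np.arange(n)
# Illustrative stand-in for the Gram operator A'A: a shift-invariant Gaussian blur.
kernel = np.exp(-0.5 * ((t - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()
H = np.fft.fft(np.fft.ifftshift(kernel)).real         # frequency response of the stand-in A'A

# Quadratic roughness penalty R = D'D with D the (circulant) first-difference operator.
r_col = np.zeros(n)
r_col[0], r_col[1], r_col[-1] = 2.0, -1.0, -1.0
Rf = np.fft.fft(r_col).real

beta = 0.05
# Circulant approximation of the local impulse response of the penalized LS estimator:
# l = F^{-1}[ F{A'A e_j} / (F{A'A e_j} + beta * F{R e_j}) ]
lir = np.fft.fftshift(np.fft.ifft(H / (H + beta * Rf)).real)

half = lir.max() / 2.0
fwhm = np.count_nonzero(lir >= half)                  # coarse, sample-level FWHM estimate
print("approximate FWHM of the local impulse response:", fwhm, "samples")
```

Sweeping β in such a calculation is one way to pick the regularization parameter that yields a desired smoothness (FWHM) level, which is the use case the abstract describes.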
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) is calculated as the probability of a word sequence and provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance obtained by applying the rapid BRNN-LM under different conditions.
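The core of the regularized objective described above can be sketched as a cross-entropy term plus a Gaussian-prior (weight-decay) penalty; this toy omits the RNN itself, the marginal-likelihood estimation of the hyperparameter, and the Hessian approximation, and all names and values are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def map_objective(logits, targets, weights, alpha):
    # Regularized cross-entropy: negative log-likelihood of the observed words plus a
    # Gaussian-prior (weight-decay) penalty with precision hyperparameter alpha.
    probs = softmax(logits)
    nll = -np.mean(np.log(probs[np.arange(len(targets)), targets] + 1e-12))
    prior = 0.5 * alpha * sum(np.sum(w ** 2) for w in weights)
    return nll + prior

# Toy check with random "RNN outputs" over a 5-word vocabulary.
rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 5))          # one row of scores per predicted word
targets = rng.integers(0, 5, size=8)          # indices of the observed next words
weights = [rng.standard_normal((5, 5))]       # stand-in for the RNN parameters
print(map_objective(logits, targets, weights, alpha=0.01))
```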
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time, that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple 'low' minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheong, K; Lee, M; Kang, S
2014-06-01
Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained using principal component analysis (PCA) of the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ=ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases. This work was supported by a Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean Ministry of Science, ICT and Future Planning (No. 2013043498).
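A sketch of the regularity index on a synthetic trace (assumptions: a cos²-shaped model signal, peak/trough detection for periods, amplitudes and baseline, and the PCA step replaced by a plain Euclidean norm of the four raw fluctuation parameters, so the numbers are only illustrative):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
fs = 25.0                                             # sampling rate (Hz), illustrative
t = np.arange(0.0, 120.0, 1.0 / fs)
# Synthetic trace: cos^2-shaped breathing with ~4 s period, slow baseline drift, small noise.
trace = np.cos(np.pi * t / 4.0) ** 2 + 0.002 * t + 0.02 * rng.standard_normal(t.size)

peaks, _ = find_peaks(trace, distance=int(2.0 * fs), prominence=0.2)
troughs, _ = find_peaks(-trace, distance=int(2.0 * fs), prominence=0.2)
periods = np.diff(t[peaks])                           # breathing periods (s)
amplitudes = trace[peaks]                             # peak amplitudes

# Simple regression of the baseline drift, here taken through the cycle minima.
slope, intercept = np.polyfit(t[troughs], trace[troughs], 1)
resid_std = np.std(trace[troughs] - (slope * t[troughs] + intercept), ddof=1)

# Four fluctuation parameters of the simplified respiration model, combined into delta and rho.
params = np.array([np.std(periods, ddof=1), np.std(amplitudes, ddof=1), slope, resid_std])
delta = np.linalg.norm(params)                        # overall irregularity (PCA step omitted here)
rho = np.log(1.0 + 1.0 / delta) / 2.0                 # regularity index: higher = more regular
print(f"delta = {delta:.3f}, rho = {rho:.3f}")
```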
NASA Astrophysics Data System (ADS)
Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung
2017-03-01
Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for a CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) method under variations in key imaging parameters, quality metrics were computed using a LUNGMAN phantom that included a ground-glass opacity (GGO) tumor. Reconstructed images were acquired from a total of 41 projection images over a total angular range of +/-20°. We evaluated the contrast-to-noise ratio (CNR) and the artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter and the initial guess on image quality. We found that a proper value of the ART relaxation parameter could improve image quality from the same projections. In this study, the proper values of the relaxation parameter for the zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of the ASF were obtained in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, a BP initial guess for the ART method could provide better image quality than a ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range in the CDT system.
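A minimal sketch of the ART (Kaczmarz) update and the role of the relaxation parameter on a small synthetic system; the matrix, noise level, and sweep count are illustrative and do not reproduce the CDT geometry, initial-guess comparison, or dose conditions of the study:

```python
import numpy as np

def art(A, b, x0, relaxation, n_sweeps):
    # Algebraic reconstruction technique (Kaczmarz): cycle through the rows of A and
    # project the current estimate toward each measurement, scaled by the relaxation parameter.
    x = x0.copy()
    row_norms = np.sum(A ** 2, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            r = b[i] - A[i] @ x
            x += relaxation * (r / row_norms[i]) * A[i]
    return x

# Tiny over-determined test system standing in for the projection geometry.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(60)

for lam in (0.2, 0.4, 0.6, 1.0):       # relaxation parameter values, as studied in the paper
    x_hat = art(A, b, np.zeros(20), lam, n_sweeps=10)
    print(f"relaxation {lam:.1f}: error {np.linalg.norm(x_hat - x_true):.4f}")
```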
PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging
NASA Astrophysics Data System (ADS)
Naghibzadeh, Shahrzad; van der Veen, Alle-Jan
2018-06-01
Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
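A small dense sketch of the right prior-conditioning idea (an illustrative random matrix stands in for the interferometric measurement operator; the prior-conditioner is built from a crude "dirty image", and early-stopped LSQR plays the role of the Krylov solver; none of this reproduces PRIFIRA's actual operators or reweighting schemes):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
m, n = 80, 200                                  # fewer measurements than image pixels
A = rng.standard_normal((m, n))                 # stand-in for the measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 3.0, 5)   # a few bright sources
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Prior-conditioner from a crude "dirty image": large where the data already suggest flux.
dirty = np.abs(A.T @ y)
p = dirty / dirty.max() + 1e-3

# Right prior-conditioning: solve y ~ (A diag(p)) z with a few Krylov (LSQR) iterations,
# then map back x = p * z. Early stopping acts as the regularization.
z = lsqr(A * p[np.newaxis, :], y, iter_lim=15)[0]
x_prior = p * z
x_plain = lsqr(A, y, iter_lim=15)[0]

print("error, plain LSQR        :", np.linalg.norm(x_plain - x_true))
print("error, prior-conditioned :", np.linalg.norm(x_prior - x_true))
```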
A regularity result for fixed points, with applications to linear response
NASA Astrophysics Data System (ADS)
Sedro, Julien
2018-04-01
In this paper, we show a series of abstract results on fixed point regularity with respect to a parameter. They are based on a Taylor development taking into account a loss of regularity phenomenon, typically occurring for composition operators acting on spaces of functions with finite regularity. We generalize this approach to higher order differentiability, through the notion of an n-graded family. We then give applications to the fixed point of a nonlinear map, and to linear response in the context of (uniformly) expanding dynamics (theorem 3 and corollary 2), in the spirit of Gouëzel-Liverani.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter
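A minimal sketch of the soft-thresholding step: for pure denoising (identity forward operator) in an orthonormal wavelet basis, a single soft-thresholding of the coefficients already gives the minimizer (here only the detail bands are penalized, a common practical choice); PyWavelets is assumed available, and the signal, wavelet, level and μ are illustrative:

```python
import numpy as np
import pywt   # PyWavelets, assumed available

def soft_threshold(c, mu):
    # Soft-thresholding S_mu(c) = sign(c) * max(|c| - mu, 0).
    return np.sign(c) * np.maximum(np.abs(c) - mu, 0.0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
y = np.sign(np.sin(6 * np.pi * t)) + 0.3 * rng.standard_normal(t.size)   # blocky signal + noise

mu = 0.4                                             # soft-thresholding parameter
coeffs = pywt.wavedec(y, "db2", level=4)
coeffs = [coeffs[0]] + [soft_threshold(c, mu) for c in coeffs[1:]]        # threshold detail bands
x_hat = pywt.waverec(coeffs, "db2")[: y.size]
print("residual norm:", np.linalg.norm(x_hat - y))
```

In the tomographic setting of the paper the forward operator is not the identity, so the soft-thresholding is applied inside an iteration rather than in one shot.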
NASA Astrophysics Data System (ADS)
Deng, Shuxian; Ge, Xinxin
2017-10-01
Considering the non-Newtonian fluid equation of incompressible porous media, and using the properties of operator semigroups and measure spaces together with a squeezing (compactness) argument, Fourier analysis and a priori estimates in the measure space are used to discuss the properness of the solution of the equation, its asymptotic behavior and its topological properties. Through the diffusion regularization method and a compactness argument, we study the overall decay rate of the solution of the equation in a certain space under suitable conditions on the initial value. The decay estimate for the solution of the incompressible seepage equation is obtained, and the asymptotic behavior of the solution is derived using the double regularization model and the Duhamel principle.
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
Knapik, Joseph J
2014-01-01
Foot blisters are the most common medical problem faced by Soldiers during foot march operations and, if untreated, they can lead to infection. Foot blisters are caused by boots rubbing on the foot (frictional forces), which separates skin layers and allows fluid to seep in. Blisters can be prevented by wearing properly sized boots, conditioning feet through regular road marching, wearing socks that reduce friction and moisture, and possibly applying antiperspirants to the feet.
Calibration process of highly parameterized semi-distributed hydrological model
NASA Astrophysics Data System (ADS)
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, this is a complex process that has not been researched enough. Calibration is a procedure for determining the parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while only some parameters are unknown, and these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated algorithms for calibration purposes, without the possibility for the modeller to manage the process. The results are often not the best. We develop a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST. PEST is a parameter estimation tool that is widely used in groundwater modelling and can also be used for surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values at their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1. This creates a new PEST control file in which weights are adjusted such that the contribution made to the total objective function by each observation group is the same. This prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, J., 2013). In adding regularization to the PEST control file, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run on multiple computers simultaneously over TCP communications, which speeds up the calibration process. The case study, with results of calibration and validation of the model, will be presented.
An adaptive tracking observer for failure-detection systems
NASA Technical Reports Server (NTRS)
Sidar, M.
1982-01-01
The design problem of adaptive observers applied to linear, constant and variable parameters, multi-input, multi-output systems, is considered. It is shown that, in order to keep the observer's (or Kalman filter) false-alarm rate (FAR) under a certain specified value, it is necessary to have an acceptable proper matching between the observer (or KF) model and the system parameters. An adaptive observer algorithm is introduced in order to maintain desired system-observer model matching, despite initial mismatching and/or system parameter variations. Only a properly designed adaptive observer is able to detect abrupt changes in the system (actuator, sensor failures, etc.) with adequate reliability and FAR. Conditions for convergence for the adaptive process were obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors and accurate and fast parameter identification, in both deterministic and stochastic cases.
Gupta, Anshu; Dwivedi, Tanima; Sadhana; Chaudhary, Raju
2017-09-01
Patient's satisfaction is the need of the hour and one of the most important quality indicators in the laboratory medicine. To assess the patient's satisfaction with phlebotomy services in a neuropsychiatric hospital by a structured questionnaire with grading scale. Also, identify the problems causing dissatisfactions and to undertake necessary Corrective and Preventative Action (CAPA). Total 1200 patients were randomly selected over a period of two months (June and July 2016). A structured self designed questionnaire (feedback form) was devised in both Hindi and English languages containing ten questions with a grading scale for each question. It also included suggestions from the users. All the selected patients or their attendants filled up this questionnaire. At the same time, they were also interviewed by phlebotomy staff. A statistical analysis was conducted using SPSS version 16.0 software and Likert scale. A total of 94% of the patients were satisfied with the phlebotomy services. Almost 30.0% patients found the phlebotomy services to be very good, but the majority of them (40.5%) found it to be good and another 23.5% found it to be satisfactory while, 4% found the services to be poor and 2% found it to be very poor. The highest rate of satisfaction (4.21) was noted in case of parameter-ease to find collection sample room and lowest rate of satisfaction (3.92) was scored by the parameter-staff's wearing proper uniform. Depending upon the deficient areas some corrective actions were suggested such as strict compliance of personal protective equipments, regular training to improve technical skill, knowledge and behaviour with emphasis on cleanliness of work area. Even though the overall patient's satisfaction was high, there were areas which needed our attention such as waiting time for phlebotomy procedure, lack of proper sitting arrangement, techniques of sample collection, knowledge of universal precautions etc. Appropriate corrective and preventive actions were taken to solve the problems. Thereby, feedback proved effective in maintenance and improvement of phlebotomy services.
Multichannel feedforward control schemes with coupling compensation for active sound profiling
NASA Astrophysics Data System (ADS)
Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.
2017-05-01
Active sound profiling includes a number of control techniques that enable the equalization, rather than the mere reduction, of acoustic noise. Challenges may arise when trying to achieve distinct targeted sound profiles simultaneously at multiple locations, e.g., within a vehicle cabin. This paper introduces distributed multichannel control schemes for independently tailoring structure-borne sound reaching a number of locations within a cavity. The proposed techniques address the cross interactions amongst feedforward active sound profiling units, which compensate for interferences of the primary sound at each location of interest by exchanging run-time data amongst the control units, while attaining the desired control targets. Computational complexity, convergence, and stability of the proposed multichannel schemes are examined in light of the physical system on which they are implemented. The tuning performance of the proposed algorithms is benchmarked against the centralized and pure-decentralized control schemes through computer simulations on a simplified numerical model, which has also been subjected to plant magnitude variations. Provided that the representation of the plant is accurate enough, the proposed multichannel control schemes have been shown to be the only ones that properly deliver targeted active sound profiling tasks at each error sensor location. Experimental results in a 1:3-scaled vehicle mock-up further demonstrate that the proposed schemes are able to attain reductions of more than 60 dB for periodic disturbances at a number of positions, while resolving cross-channel interferences. Moreover, when the sensor/actuator placement is found to be defective at a given frequency, the inclusion of a regularization parameter in the cost function does not hinder the proper operation of the proposed compensation schemes, while assuring their stability, at the expense of losing some control performance.
Implementing the Regular Education Initiative in Secondary Schools: A Different Ball Game.
ERIC Educational Resources Information Center
Schumaker, Jean B.; Deshler, Donald D.
1988-01-01
The article reviews potential barriers to implementing the Regular Education Initiative (REI) in secondary schools and then discusses a set of factors central to developing a workable partnership, one that is compatible with the goals of the REI but that also responds to the unique parameters of secondary schools. (Author/DB)
NASA Astrophysics Data System (ADS)
Bai, Bing
2012-03-01
There has been a lot of work recently on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bent line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
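As a rough illustration of the type of objective such a method works with (not the authors' implementation), the sketch below evaluates a Poisson negative log-likelihood plus an exact isotropic TV term and a logarithmic barrier enforcing nonnegativity; the random system matrix, the barrier weight mu and the TV weight beta are placeholder assumptions.

    import numpy as np

    def tv(image, eps=1e-8):
        """Exact isotropic total variation of a 2D image (forward differences)."""
        dx = np.diff(image, axis=1, append=image[:, -1:])
        dy = np.diff(image, axis=0, append=image[-1:, :])
        return np.sum(np.sqrt(dx**2 + dy**2 + eps))

    def barrier_objective(x, A, y, beta, mu):
        """Poisson negative log-likelihood + beta*TV(x) - mu*sum(log x) (log barrier)."""
        lam = A @ x                                   # expected counts
        nll = np.sum(lam - y * np.log(lam + 1e-12))
        side = int(np.sqrt(x.size))
        return nll + beta * tv(x.reshape(side, side)) - mu * np.sum(np.log(x + 1e-12))

    # Tiny synthetic example (8x8 image, random nonnegative system matrix) -- placeholders.
    rng = np.random.default_rng(1)
    n = 8 * 8
    A = rng.uniform(size=(2 * n, n))
    x_true = rng.uniform(0.5, 2.0, size=n)
    y = rng.poisson(A @ x_true)
    print(barrier_objective(x_true, A, y, beta=0.1, mu=1e-3))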
NASA Astrophysics Data System (ADS)
Petržala, Jaromír
2018-07-01
The knowledge of the emission function of a city is crucial for simulation of sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving the given inverse problem, specifically testing the fitness of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of an optimal regularization parameter. At first, we created a theoretical model for calculation of the sky spectral radiance in the form of a functional of an emission spectral radiance. Consequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and perturbed by random errors. The results demonstrate that the second-order Tikhonov regularization method, together with regularization parameter choice by the L-curve maximum-curvature criterion, provides solutions that are in good agreement with the assumed model emission functions.
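The L-curve criterion mentioned above can be sketched on a generic discrete ill-posed problem (unrelated to the sky-radiance model of the paper): Tikhonov solutions are computed over a range of regularization parameters, and the parameter nearest the corner of the log-log curve of solution norm versus residual norm is selected by a crude maximum-curvature rule. The test operator and noise level are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic ill-posed problem: rapidly decaying singular values, smooth true model.
    n = 50
    U, _ = np.linalg.qr(rng.normal(size=(n, n)))
    V, _ = np.linalg.qr(rng.normal(size=(n, n)))
    A = U @ np.diag(np.logspace(0, -6, n)) @ V.T
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    b = A @ x_true + 1e-4 * rng.normal(size=n)

    lambdas = np.logspace(-8, 0, 60)
    res_norm, sol_norm = [], []
    for lam in lambdas:
        # Zeroth-order Tikhonov solution: min ||Ax - b||^2 + lam^2 ||x||^2
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        res_norm.append(np.linalg.norm(A @ x - b))
        sol_norm.append(np.linalg.norm(x))

    # Crude corner detection: maximum curvature of the log-log L-curve.
    lr, ls = np.log(res_norm), np.log(sol_norm)
    d1r, d1s = np.gradient(lr), np.gradient(ls)
    d2r, d2s = np.gradient(d1r), np.gradient(d1s)
    curvature = (d1r * d2s - d1s * d2r) / (d1r**2 + d1s**2) ** 1.5
    print("lambda near the L-curve corner:", lambdas[np.nanargmax(curvature)])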
Deflection of light by rotating regular black holes using the Gauss-Bonnet theorem
NASA Astrophysics Data System (ADS)
Jusufi, Kimet; Övgün, Ali; Saavedra, Joel; Vásquez, Yerko; González, P. A.
2018-06-01
In this paper, we study weak gravitational lensing in the spacetime of rotating regular black hole geometries such as the Ayon-Beato-García (ABG), Bardeen, and Hayward black holes. We calculate the deflection angle of light using the Gauss-Bonnet theorem (GBT) and show that the deflection of light can be viewed as a partially topological effect, in which the deflection angle is obtained by considering a domain outside the light ray in the black hole optical geometry. We then also compute the deflection angle via the geodesic formalism for these black holes to verify our results and to explore the differences with the Kerr solution. These black holes have, in addition to the total mass and rotation parameter, further parameters of electric charge, magnetic charge, and deviation. We find that the deflection of light acquires correction terms coming from these parameters, which generalizes the Kerr deflection angle.
Image segmentation with a novel regularized composite shape prior based on surrogate study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves segmentation performance superior to typical benchmark schemes.
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response would require separation of these processes because they define radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, in which only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor-product kernel and lower-bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multiparameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in greater depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
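The role of the SVD filter in this kind of pipeline can be illustrated with a toy tensor-product kernel: truncating the SVDs of the two one-dimensional kernels compresses the data before the regularized inversion. Kernel forms, grid sizes and the truncation tolerance below are illustrative assumptions, not the I2DUPEN defaults.

    import numpy as np

    rng = np.random.default_rng(3)

    # 1D kernels: inversion-recovery (T1) and exponential-decay (T2) models.
    t1_times = np.linspace(0.01, 3.0, 40)[:, None]
    t2_times = np.linspace(0.001, 1.0, 60)[:, None]
    T1 = np.logspace(-2, 1, 30)[None, :]
    T2 = np.logspace(-3, 0, 30)[None, :]
    K1 = 1.0 - 2.0 * np.exp(-t1_times / T1)
    K2 = np.exp(-t2_times / T2)

    # Synthetic noisy 2D data for a random nonnegative distribution F.
    F = rng.uniform(size=(30, 30))
    S = K1 @ F @ K2.T + 1e-3 * rng.normal(size=(40, 60))

    def truncated(K, tol=1e-4):
        """Truncated SVD of a 1D kernel; singular values below tol*s_max are discarded."""
        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        k = int(np.sum(s / s[0] > tol))
        return U[:, :k], np.diag(s[:k]), Vt[:k]

    U1, S1, V1t = truncated(K1)
    U2, S2, V2t = truncated(K2)
    # Projected (compressed) data and kernels used in the subsequent regularized inversion.
    S_compressed = U1.T @ S @ U2
    K1_c, K2_c = S1 @ V1t, S2 @ V2t
    print(S.shape, "->", S_compressed.shape)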
Slow dynamics and regularization phenomena in ensembles of chaotic neurons
NASA Astrophysics Data System (ADS)
Rabinovich, M. I.; Varona, P.; Torres, J. J.; Huerta, R.; Abarbanel, H. D. I.
1999-02-01
We have explored the role of calcium concentration dynamics in the generation of chaos and in the regularization of the bursting oscillations using a minimal neural circuit of two coupled model neurons. In regions of the control parameter space where the slowest component, namely the calcium concentration in the endoplasmic reticulum, weakly depends on the other variables, this model is analogous to three-dimensional systems such as those found in [1] or [2]. These are minimal models that describe the fundamental characteristics of the chaotic spiking-bursting behavior observed in real neurons. We have investigated different regimes of cooperative behavior in large assemblies of such units using a lattice of electrically coupled, non-identical Hindmarsh-Rose neurons with parameters chosen randomly inside the chaotic region. We study the regularization mechanisms in large assemblies and the development of several spatio-temporal patterns as a function of the interconnectivity among nearest neighbors.
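For reference, a minimal integration of a single Hindmarsh-Rose neuron in a chaotic bursting regime; the parameter values are commonly used textbook choices, assumed here rather than taken from the paper (which couples many such units and adds slow calcium dynamics).

    import numpy as np
    from scipy.integrate import solve_ivp

    def hindmarsh_rose(t, state, a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0,
                       x_rest=-1.6, I=3.0):
        """Three-variable Hindmarsh-Rose neuron; parameter values are common textbook
        choices for chaotic spiking-bursting, not values taken from the paper."""
        x, y, z = state
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_rest) - z)
        return [dx, dy, dz]

    sol = solve_ivp(hindmarsh_rose, (0.0, 1000.0), [-1.5, 0.0, 3.0], max_step=0.1)
    print("membrane-potential range:", sol.y[0].min(), sol.y[0].max())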
NASA Astrophysics Data System (ADS)
Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain
2017-11-01
The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergences of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both of these ambiguities from first principles by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, treating at once the IR and ultraviolet (UV) divergences. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.
Saving Strokes with Space Technology
NASA Technical Reports Server (NTRS)
1980-01-01
Inventor Dave Pelz developed a space spinoff, the Teacher Alignment Computer, for Sunmark Preceptor Golf Ltd., which helps golfers learn proper putting aim. The light beam, reflected into the computer, measures putter alignment, and lights atop the box tell the golfer whether he is on target or off to either side, and by how much. A related putting aid idea is to stroke the ball at the putter's "sweet spot," which is bracketed by metal prongs. Regular practice develops solid impacts for better putting.
Levy, Mark L; Dekhuijzen, P N R; Barnes, P J; Broeders, M; Corrigan, C J; Chawes, B L; Corbetta, L; Dubus, J C; Hausen, Th; Lavorini, F; Roche, N; Sanchis, J; Usmani, Omar S; Viejo, J; Vincken, W; Voshaar, Th; Crompton, G K; Pedersen, Soren
2016-04-21
Health professionals tasked with advising patients with asthma and chronic obstructive pulmonary disease (COPD) how to use inhaler devices properly and what to do about unwanted effects will be aware of a variety of commonly held precepts. The evidence for many of these is, however, lacking or old and therefore in need of re-examination. Few would disagree that facilitating and encouraging regular and proper use of inhaler devices for the treatment of asthma and COPD is critical for successful outcomes. It seems logical that the abandonment of unnecessary or ill-founded practices forms an integral part of this process: the use of inhalers is bewildering enough, particularly with regular introduction of new drugs, devices and ancillary equipment, without unnecessary and pointless adages. We review the evidence, or lack thereof, underlying ten items of inhaler 'lore' commonly passed on by health professionals to each other and thence to patients. The exercise is intended as a pragmatic, evidence-informed review by a group of clinicians with appropriate experience. It is not intended to be an exhaustive review of the literature; rather, we aim to stimulate debate, and to encourage researchers to challenge some of these ideas and to provide new, updated evidence on which to base relevant, meaningful advice in the future. The discussion on each item is followed by a formal, expert opinion by members of the ADMIT Working Group.
Circular geodesic of Bardeen and Ayon-Beato-Garcia regular black-hole and no-horizon spacetimes
NASA Astrophysics Data System (ADS)
Stuchlík, Zdeněk; Schee, Jan
2015-12-01
In this paper, we study the circular geodesic motion of test particles and photons in the Bardeen and Ayon-Beato-Garcia (ABG) geometries describing spherically symmetric regular black-hole or no-horizon spacetimes. While the Bardeen geometry is not an exact solution of Einstein's equations, the ABG spacetime is related to self-gravitating charged sources governed by Einstein's gravity and nonlinear electrodynamics. Both are characterized by the mass parameter m and the charge parameter g. We demonstrate that, similarly to the Reissner-Nordstrom (RN) naked singularity spacetimes, an antigravity static sphere should exist in all the no-horizon Bardeen and ABG solutions and can be surrounded by a Keplerian accretion disc. However, contrary to the RN naked singularity spacetimes, the ABG no-horizon spacetimes with parameter g/m > 2 can also contain an additional inner Keplerian disc hidden under the static antigravity sphere. Properties of the geodesic structure are reflected in simple, observationally relevant optical phenomena. We give the silhouettes of the regular black-hole and no-horizon spacetimes, and the profiled spectral lines generated by Keplerian rings radiating at a fixed frequency and located in the strong gravity region at or near the marginally stable circular geodesics. We demonstrate that the profiled spectral lines related to the regular black holes are qualitatively similar to those of the Schwarzschild black holes, showing only small quantitative differences. On the other hand, the regular no-horizon spacetimes give clear qualitative signatures of their presence when compared to the Schwarzschild spacetimes. Moreover, it is possible to distinguish the Bardeen and ABG no-horizon spacetimes if the inclination angle to the observer is known.
Invariant models in the inversion of gravity and magnetic fields and their derivatives
NASA Astrophysics Data System (ADS)
Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni
2014-11-01
In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function into the objective function. The choice of the exponent of such a power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice, we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and to that of the first derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its nonlinearity and of its variable form due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
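A schematic sketch of how a depth-weighting exponent enters a linear, regularized inversion (purely didactic; the kernel below is a generic power-law decay, not the actual magnetic or gravity forward operator): the exponent beta appears in a diagonal weighting of the model norm, and changing it shifts where the recovered source is placed in depth.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy linear problem: a vertical column of buried cells observed along a surface profile.
    n_data, n_cells = 40, 30
    depths = np.linspace(1.0, 30.0, n_cells)
    x_obs = np.linspace(-20.0, 20.0, n_data)
    # Generic kernel decaying as distance^-3 (assumed decay; not a real field operator).
    G = 1.0 / (x_obs[:, None] ** 2 + depths[None, :] ** 2) ** 1.5
    m_true = np.zeros(n_cells)
    m_true[10:14] = 1.0                        # source at roughly 11-14 depth units
    d = G @ m_true + 1e-6 * rng.normal(size=n_data)

    def depth_weighted_solution(G, d, depths, beta, lam, z0=1.0):
        """Tikhonov solution with depth-weighted model norm ||W m||^2,
        where W = diag((z + z0)^(-beta/2)) follows the usual depth-weighting idea."""
        WtW = np.diag((depths + z0) ** (-beta))
        return np.linalg.solve(G.T @ G + lam * WtW, G.T @ d)

    for beta in (0.0, 3.0):
        m = depth_weighted_solution(G, d, depths, beta, lam=1e-6)
        print(f"beta={beta}: peak of recovered model at depth {depths[np.argmax(m)]:.1f}")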
ERIC Educational Resources Information Center
Sinharay, Sandip
2015-01-01
The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…
1993-02-01
amplification induced by the inverse filter. The problem of noise amplification that arises in conventional image deblurring problems has often been... noise sensitivity, and strategies for selecting a regularization parameter have been developed. The probability of convergence to within a prescribed... Report sections cover strategies in image deblurring, CLS parameter selection, and Wiener parameter selection.
Acar, Burak; Yayla, Cagri; Gucuk Ipek, Esra; Unal, Sefa; Ertem, Ahmet Goktug; Burak, Cengiz; Senturk, Bihter; Bayraktar, Fatih; Kara, Meryem; Demirkan, Burcu; Guray, Yesim
2017-10-01
Coronary artery disease is the leading cause of mortality worldwide. Regular physical activity is part of a comprehensive management strategy for these patients. We investigated the parameters that influence physical activity in patients with a history of coronary revascularization. We included outpatients with a history of coronary revascularization at least six months prior to enrollment. Data on physical activity, demographics, and clinical characteristics were collected via a questionnaire. A total of 202 consecutive outpatients (age 61.3±11.2 years, 73% male) were enrolled. One hundred and four (51%) patients had previous percutaneous coronary intervention, 67 (33%) had coronary bypass graft surgery, and 31 (15%) had both procedures. Only 46 patients (23%) engaged in regular physical activity. Patients were classified into two subgroups according to their physical activity. There were no significant differences between subgroups in terms of age, comorbid conditions or revascularization type. Multivariate regression analysis revealed that low education level (OR=3.26, 95% CI: 1.31-8.11, p=0.01), and lack of regular follow-up (OR=2.95, 95% CI: 1.01-8.61, p=0.04) were independent predictors of non-adherence to regular physical activity among study subjects. Regular exercise rates were lower in outpatients with previous coronary revascularization. Education level and regular follow-up visits were associated with adherence to physical activity in these patients. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning that it is minimally sensitive to any perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting how many parameters are allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation of the parameters to improve sensitivity, similarly to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other nonlinear. This methodology is compared with a prior method based on multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper gives better performance.
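A compact sketch in the spirit of the third, Tikhonov-like approach mentioned above: the nominal least-squares fit is augmented with a penalty on the parameter vector, making the solution less sensitive to perturbations of the data. The design matrix, noise level and penalty weight are arbitrary illustrative choices, not those of the paper.

    import numpy as np

    rng = np.random.default_rng(5)

    # Ill-conditioned synthetic design matrix and noisy observations.
    n, p = 30, 8
    A = np.vander(np.linspace(0.0, 1.0, n), p, increasing=True)   # nearly collinear columns
    x_true = rng.normal(size=p)
    b = A @ x_true + 0.01 * rng.normal(size=n)

    def regularized_ls(A, b, lam):
        """Solve min ||A x - b||^2 + lam ||x||^2 via an augmented least-squares system."""
        p = A.shape[1]
        A_aug = np.vstack([A, np.sqrt(lam) * np.eye(p)])
        b_aug = np.concatenate([b, np.zeros(p)])
        return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

    # Sensitivity check: refit on slightly perturbed data and measure the solution shift.
    b_pert = b + 0.01 * rng.normal(size=n)
    for name, lam in (("plain LS", 0.0), ("regularized", 1e-3)):
        shift = np.linalg.norm(regularized_ls(A, b_pert, lam) - regularized_ls(A, b, lam))
        print(f"{name}: solution shift under data perturbation = {shift:.3g}")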
a Comparison Between Two Ols-Based Approaches to Estimating Urban Multifractal Parameters
NASA Astrophysics Data System (ADS)
Huang, Lin-Shan; Chen, Yan-Guang
Multifractal theory provides a new spatial analytical tool for urban studies, but many basic problems remain to be solved. Among various pending issues, the most significant one is how to obtain proper multifractal dimension spectra. If an algorithm is improperly used, the parameter spectra will be abnormal. This paper is devoted to investigating two ordinary least squares (OLS)-based approaches for estimating urban multifractal parameters. Using an empirical study and comparative analysis, we demonstrate how to choose an adequate linear regression for calculating multifractal parameters. The OLS regression analysis has two different approaches: in one the intercept is fixed to zero, and in the other the intercept is not constrained. The results of the comparative study show that the zero-intercept regression yields proper multifractal parameter spectra within a certain range of moment orders, while the common regression method often leads to abnormal multifractal parameter values. We conclude that fixing the intercept to zero is the more advisable regression method for multifractal parameter estimation, and that the shapes of the spectral curves and the value ranges of the fractal parameters can be employed to diagnose urban problems. This research helps scientists to understand multifractal models and to apply a more reasonable technique to multifractal parameter calculations.
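The two regression variants being compared can be written down directly. A minimal sketch on synthetic log-log scaling data (the data and the "true" slope are invented for illustration): fixing the intercept to zero forces the fitted line through the origin, whereas ordinary OLS estimates both slope and intercept.

    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic log-log scaling data: log(measure) vs log(scale); the slope plays the
    # role of a fractal parameter (values invented for illustration).
    log_eps = np.log(np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]))
    true_slope = 1.7
    log_M = true_slope * log_eps + 0.02 * rng.normal(size=log_eps.size)

    # OLS with a free intercept.
    X = np.column_stack([np.ones_like(log_eps), log_eps])
    intercept, slope_free = np.linalg.lstsq(X, log_M, rcond=None)[0]

    # OLS with the intercept fixed to zero (regression through the origin).
    slope_zero = np.linalg.lstsq(log_eps[:, None], log_M, rcond=None)[0][0]

    print(f"free-intercept slope = {slope_free:.4f}, intercept = {intercept:.4f}")
    print(f"zero-intercept slope = {slope_zero:.4f}")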
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc, as compared to its non-regularized counterpart, in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
FIELD MEASUREMENT OF DISSOLVED OXYGEN: A COMPARISON OF TECHNIQUES
The measurement and interpretation of geochemical redox parameters are key components of ground water remedial investigations. Dissolved oxygen (DO) is perhaps the most robust geochemical parameter in redox characterization; however, recent work has indicated a need for proper da...
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
Nutrition and health in hotel staff on different shift patterns.
Seibt, R; Süße, T; Spitzer, S; Hunger, B; Rudolf, M
2015-08-01
Limited research is available that examines the nutritional behaviour and health of hotel staff working alternating and regular shifts. To analyse the nutritional behaviour and health of employees working in alternating and regular shifts. The study used an ex post facto cross-sectional analysis to compare the nutritional behaviour and health parameters of workers with alternating shifts and regular shift workers. Nutritional behaviour was assessed with the Food Frequency Questionnaire. Body dimensions (body mass index, waist hip ratio, fat mass and active cell mass), metabolic values (glucose, triglyceride, total cholesterol and low- and high-density lipoprotein), diseases and health complaints were included as health parameters. Participants worked in alternating (n = 53) and regular shifts (n = 97). The average age of subjects was 35 ± 10 years. There was no significant difference in nutritional behaviour, most surveyed body dimensions or metabolic values between the two groups. However, alternating shift workers had significantly lower fat mass and higher active cell mass but nevertheless reported more pronounced health complaints. Sex and age were also confirmed as influencing the surveyed parameters. Shift-dependent nutritional problems were not conspicuously apparent in this sample of hotel industry workers. Health parameters did not show significantly negative attributes for alternating shift workers. Conceivably, both groups could have the same level of knowledge on the health effects of nutrition and comparable opportunities to apply this. Further studies on nutritional and health behaviour in the hotel industry are necessary in order to create validated screening programmes. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Anaemia, iron deficiency and iron deficiency anaemia among blood donors in Port Harcourt, Nigeria.
Jeremiah, Zaccheaus Awortu; Koate, Baribefe Banavule
2010-04-01
There is paucity of information on the effect of blood donation on iron stores in Port Harcourt, Nigeria. The present study was, therefore, designed to assess, using a combination of haemoglobin and iron status parameters, the development of anaemia and prevalence of iron deficiency anaemia in this area of Nigeria. Three hundred and forty-eight unselected consecutive whole blood donors, comprising 96 regular donors, 156 relatives of patients and 96 voluntary donors, constituted the study population. Three haematological parameters (haemoglobin, packed cell volume, and mean cell haemoglobin concentration) and four biochemical iron parameters (serum ferritin, serum iron, total iron binding capacity and transferrin saturation) were assessed using standard colorimetric and ELISA techniques. The prevalence of anaemia alone (haemoglobin <11.0 g/dL) was 13.7%. The prevalence of isolated iron deficiency (serum ferritin <12 ng/mL) was 20.6% while that of iron-deficiency anaemia (haemoglobin <11.0 g/dL + serum ferritin <12.0 ng/mL) was 12.0%. Among the three categories of the donors, the regular donors were found to be most adversely affected as shown by the reduction in mean values of both haematological and biochemical iron parameters. Interestingly, anaemia, iron deficiency and iron-deficiency anaemia were present almost exclusively among regular blood donors, all of whom were over 35 years old. Anaemia, iron deficiency and iron-deficiency anaemia are highly prevalent among blood donors in Port Harcourt, Nigeria. It will be necessary to review the screening tests for the selection of blood donors and also include serum ferritin measurement for the routine assessment of blood donors, especially among regular blood donors.
Artificial Bone and Teeth through Controlled Ice Growth in Colloidal Suspensions
NASA Astrophysics Data System (ADS)
Tomsia, Antoni P.; Saiz, Eduardo; Deville, Sylvain
2007-06-01
The formation of regular patterns is a common feature of many solidification processes involving cast materials. We describe here how regular patterns can be obtained in porous alumina and hydroxyapatite (HAP) by controlling the freezing of ceramic slurries followed by subsequent ice sublimation and sintering, leading to multilayered porous ceramic structures with homogeneous and well-defined architecture. These porous materials can be infiltrated with a second phase of choice to yield biomimetic nacre-like composites with improved mechanical properties, which could be used for artificial bone and teeth applications. Proper control of the solidification patterns provides powerful means of control over the final functional properties. We discuss the relationships between the experimental results, ice growth fundamentals, the physics of ice and the interaction between inert particles and the solidification front during directional freezing.
Statistical approach to Higgs boson couplings in the standard model effective field theory
NASA Astrophysics Data System (ADS)
Murphy, Christopher W.
2018-01-01
We perform a parameter fit in the standard model effective field theory (SMEFT) with an emphasis on using regularized linear regression to tackle the issue of the large number of parameters in the SMEFT. In regularized linear regression, a positive definite function of the parameters of interest is added to the usual cost function. A cross-validation is performed to try to determine the optimal value of the regularization parameter to use, but it selects the standard model (SM) as the best model to explain the measurements. Nevertheless, as a proof of principle of this technique, we apply it to fitting Higgs boson signal strengths in the SMEFT, including the latest Run-2 results. Results are presented in terms of the eigensystem of the covariance matrix of the least squares estimators, as it has a degree of model independence to it. We find several results in this initial work: the SMEFT predicts the total width of the Higgs boson to be consistent with the SM prediction; the ATLAS and CMS experiments at the LHC are currently sensitive to non-resonant double Higgs boson production. Constraints are derived on the viable parameter space for electroweak baryogenesis in the SMEFT, reinforcing the notion that a first-order phase transition requires fairly low-scale beyond-the-SM physics. Finally, we study which future experimental measurements would give the most improvement on the global constraints on the Higgs sector of the SMEFT.
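The regularized-linear-regression machinery referred to here is essentially ridge or lasso fitting with a cross-validated regularization strength. A generic sketch with scikit-learn, assuming a purely linear dependence of a few observables on many parameters (the numbers and the data are invented and unrelated to the actual SMEFT observables):

    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(7)

    # Invented problem: 12 "signal-strength-like" observables depending linearly
    # on 20 "Wilson-coefficient-like" parameters, only a few of which matter.
    n_obs, n_par = 12, 20
    X = rng.normal(size=(n_obs, n_par))
    theta_true = np.zeros(n_par)
    theta_true[:3] = [0.5, -0.3, 0.2]
    y = X @ theta_true + 0.05 * rng.normal(size=n_obs)

    # Cross-validated choice of the regularization strength alpha.
    fit = RidgeCV(alphas=np.logspace(-4, 2, 50), fit_intercept=False).fit(X, y)
    print("selected alpha:", fit.alpha_)
    print("estimated parameters (first 5):", np.round(fit.coef_[:5], 3))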
Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats
2014-05-01
In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete-time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the prediction to stay close to the data even when the parameters in the model are incorrect. The extended Kalman filter is used as a state estimator, and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima, which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
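To make the setting concrete, here is a small deterministic sketch of the parameter estimation problem for the FitzHugh-Nagumo model: simulate noisy observations at discrete times and evaluate a least-squares objective as a function of the model parameters. The stochastic extension, the extended Kalman filter and the sensitivity equations of the paper are not reproduced; the parameterization, parameter values and noise level are assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    def fitzhugh_nagumo(t, state, a, b, c):
        """FitzHugh-Nagumo excitable-media model (one common parameterization)."""
        v, w = state
        dv = c * (v - v**3 / 3.0 + w)
        dw = -(v - a + b * w) / c
        return [dv, dw]

    true_params = (0.7, 0.8, 3.0)          # assumed "textbook" values, not from the paper
    t_obs = np.linspace(0.0, 20.0, 80)
    sol = solve_ivp(fitzhugh_nagumo, (0.0, 20.0), [-1.0, 1.0], t_eval=t_obs,
                    args=true_params, max_step=0.05)
    rng = np.random.default_rng(8)
    data = sol.y[0] + 0.05 * rng.normal(size=t_obs.size)   # noisy observations of v(t)

    def objective(params):
        """Sum-of-squares misfit between simulated v(t) and the observed data."""
        sim = solve_ivp(fitzhugh_nagumo, (0.0, 20.0), [-1.0, 1.0], t_eval=t_obs,
                        args=tuple(params), max_step=0.05)
        return np.sum((sim.y[0] - data) ** 2)

    print("misfit at true parameters:", objective(true_params))
    print("misfit at perturbed parameters:", objective((0.6, 0.9, 2.5)))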
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surface-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing the K validation error surfaces, which allows the global minimum CV error of CS-SVM to be found. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that the proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
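The bi-parameter structure of the model-selection problem can be illustrated with a plain two-dimensional grid search over the cost parameter and a per-class weight of a cost-sensitive SVM (the per-class weight stands in for the second regularization parameter); the solution-surface fitting of CV-SES itself is not reproduced, and the dataset and grids below are arbitrary.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Imbalanced toy problem (entirely synthetic).
    X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)

    C_grid = np.logspace(-2, 2, 9)          # overall regularization parameter
    w_grid = np.logspace(-1, 1, 9)          # relative misclassification cost of class 1
    cv_error = np.zeros((C_grid.size, w_grid.size))

    for i, C in enumerate(C_grid):
        for j, w in enumerate(w_grid):
            clf = SVC(C=C, kernel="linear", class_weight={0: 1.0, 1: w})
            cv_error[i, j] = 1.0 - cross_val_score(clf, X, y, cv=5).mean()

    best = np.unravel_index(np.argmin(cv_error), cv_error.shape)
    print(f"minimum CV error {cv_error[best]:.3f} at C={C_grid[best[0]]:.3g}, "
          f"class-1 weight={w_grid[best[1]]:.3g}")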
NASA Astrophysics Data System (ADS)
Nekrasova, N. A.; Kurbatova, S. V.; Zemtsova, M. N.
2016-12-01
Regularities of the sorption of 1,2,3,4-tetrahydroquinoline derivatives on octadecylsilyl silica gel and porous graphitic carbon from aqueous acetonitrile solutions were investigated. The effect that the molecular structure and the physicochemical parameters of the sorbates have on their retention characteristics under reversed-phase HPLC conditions is analyzed.
Per Linguam: A Journal of Language Learning, Vol. 1-3, 1985-1987.
ERIC Educational Resources Information Center
van der Vyver, D. H., Ed.
1987-01-01
Regular issues of "Per Linguam" appear twice a year. The document consists of the six regular issues for the years 1985, 1986, and 1987. These issues contain the following 32 articles: (1) "SALT in South Africa: Needs and Parameters" (van der Vyver); (2) "An Analysis of SALT in Practice" (Botha); (3) "SALT and…
Encoding of speed and direction of movement in the human supplementary motor area
Tankus, Ariel; Yeshurun, Yehezkel; Flash, Tamar; Fried, Itzhak
2010-01-01
Object The supplementary motor area (SMA) plays an important role in planning, initiation, and execution of motor acts. Patients with SMA lesions are impaired in various kinematic parameters, such as velocity and duration of movement. However, the relationships between neuronal activity and these parameters in the human brain have not been fully characterized. This is a study of single-neuron activity during a continuous volitional motor task, with the goal of clarifying these relationships for SMA neurons and other frontal lobe regions in humans. Methods The participants were 7 patients undergoing evaluation for epilepsy surgery requiring implantation of intracranial depth electrodes. Single-unit recordings were conducted while the patients played a computer game involving movement of a cursor in a simple maze. Results In the SMA proper, most of the recorded units exhibited a monotonic relationship between the unit firing rate and hand motion speed. The vast majority of SMA proper units with this property showed an inverse relation, that is, firing rate decrease with speed increase. In addition, most of the SMA proper units were selective to the direction of hand motion. These relationships were far less frequent in the pre-SMA, anterior cingulate gyrus, and orbitofrontal cortex. Conclusions The findings suggest that the SMA proper takes part in the control of kinematic parameters of end-effector motion, and thus lend support to the idea of connecting neuroprosthetic devices to the human SMA. PMID:19231930
Learning the manifold of quality ultrasound acquisition.
El-Zehiry, Noha; Yan, Michelle; Good, Sara; Fang, Tong; Zhou, S Kevin; Grady, Leo
2013-01-01
Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been previously proposed, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and "good quality" is more difficult to assess. Surprisingly, we show that the set of acquisition parameters which produce images that are favored by clinicians comprises a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high quality images.
ERIC Educational Resources Information Center
Finch, Holmes
2010-01-01
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
NASA Astrophysics Data System (ADS)
Kim, Bong-Sik
Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three wave resonances which yields nonlinear "2½-dimensional" limit resonant equations for f → 0. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then, the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f and the estimates are uniform in alpha.
ERIC Educational Resources Information Center
Adachi, Kohei
2013-01-01
Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
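The appeal of a multiplicative formulation is that the effective amount of regularization is updated inside the iteration rather than fixed beforehand. The toy scheme below conveys that flavor with a simple squared-norm regularizer in place of the mixed-norm functional used by the authors, so it is not the algorithm of the paper: each step rebalances the additive weight from the current residual and solution norms, which is the stationarity condition of the multiplicative objective ||Hx - y||^2 (eps + ||x||^2), and re-solves a Tikhonov subproblem.

    import numpy as np

    rng = np.random.default_rng(9)

    # Ill-posed synthetic identification problem H x = y (not a measured vibration model).
    n = 40
    U, _ = np.linalg.qr(rng.normal(size=(n, n)))
    V, _ = np.linalg.qr(rng.normal(size=(n, n)))
    H = U @ np.diag(np.logspace(0, -5, n)) @ V.T
    x_true = np.zeros(n)
    x_true[[8, 25]] = [1.0, -0.7]            # two "point forces" (invented)
    y = H @ x_true + 1e-4 * rng.normal(size=n)

    def multiplicative_l2(H, y, n_iter=30, eps=1e-12):
        """Fixed-point iteration for min ||Hx-y||^2 * (eps + ||x||^2): the equivalent
        additive weight lam is re-derived from the current iterate at every step."""
        x = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y)   # rough start
        for _ in range(n_iter):
            lam = np.sum((H @ x - y) ** 2) / (eps + np.sum(x ** 2))
            x = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)
        return x, lam

    x_hat, lam_final = multiplicative_l2(H, y)
    print("self-adjusted regularization weight:", lam_final)
    print("reconstruction error:", np.linalg.norm(x_hat - x_true))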
Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.
Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos
2010-07-01
To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts, for similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.
Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform
Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos
2013-01-01
Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR, reduced artifacts, for similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028
Joint image and motion reconstruction for PET using a B-spline motion model.
Blume, Moritz; Navab, Nassir; Rafecas, Magdalena
2012-12-21
We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.
A genetic algorithm approach to estimate glacier mass variations from GRACE data
NASA Astrophysics Data System (ADS)
Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten
2017-04-01
The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method in terms of an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules such as the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by incorporating the regularization into the overall optimization problem. Based on this novel approach, estimates of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.
NASA Astrophysics Data System (ADS)
Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng
2014-06-01
The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, J.L.
1992-06-18
Properly selected and maintained chain drives can be expected to give thousands of hours of reliable service. Selection is usually done just once. This paper reports on good maintenance, which must be done regularly to keep the drive operating. An effective maintenance program for roller chain should include the correct type and adequate amounts of lubrication, replacement of worn chains and sprockets, and elimination of drive interferences. It is important to set up a lubrication and inspection/correction schedule to ensure that all required maintenance is carried out.
Exact RG flow equations and quantum gravity
NASA Astrophysics Data System (ADS)
de Alwis, S. P.
2018-03-01
We discuss the different forms of the functional RG equation and their relation to each other. In particular we suggest a generalized background field version that is close in spirit to the Polchinski equation as an alternative to the Wetterich equation to study Weinberg's asymptotic safety program for defining quantum gravity, and argue that the former is better suited for this purpose. Using the heat kernel expansion and proper time regularization we find evidence in support of this program in agreement with previous work.
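For reference, the Wetterich form of the exact functional RG equation discussed above can be written as follows (standard form from the literature, not quoted from the paper), where Γ_k is the effective average action, Γ_k^(2) its second functional derivative, R_k the infrared regulator, and t = ln k:

    \partial_t \Gamma_k[\phi] \;=\; \tfrac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1}\,\partial_t R_k\right], \qquad t \equiv \ln k .

The Polchinski-type equation plays the analogous role for the Wilsonian action with a smooth momentum-space cutoff.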
Acetylcholine molecular arrays enable quantum information processing
NASA Astrophysics Data System (ADS)
Tamulis, Arvydas; Majauskaite, Kristina; Talaikis, Martynas; Zborowski, Krzysztof; Kairys, Visvaldas
2017-09-01
We have found self-assembly of four neurotransmitter acetylcholine (ACh) molecular complexes in a water-molecule environment by using geometry optimization with the DFT B97-D method. These complexes organize into regular arrays of ACh molecules possessing electronic spins, i.e., quantum information bits. These spin arrays could potentially be controlled by the application of a non-uniform external magnetic field. The proper sequence of resonant electromagnetic pulses would then drive all the spin groups into the 3-spin entangled state and process large-scale quantum information.
Regularity of a renewal process estimated from binary data.
Rice, John D; Strawderman, Robert L; Johnson, Brent A
2017-10-09
Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.
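A tiny sketch of the quantities involved in the gamma renewal case: the CV of a gamma interevent-time distribution is 1/sqrt(shape), and binary "at least one event per window" data of the kind described above can be simulated directly from the renewal process. The window layout, sample size and parameter values are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(10)

    shape, scale = 2.5, 10.0       # gamma interevent-time parameters in days (invented)
    cv = 1.0 / np.sqrt(shape)      # regularity measure: smaller CV = more regular behavior
    print(f"interevent-time CV = {cv:.3f}")

    # Simulate binary data: "at least one event" indicator per 30-day window, 6 windows.
    windows = np.arange(0.0, 181.0, 30.0)          # window edges at 0, 30, ..., 180 days
    n_subjects = 500
    indicators = np.zeros((n_subjects, len(windows) - 1), dtype=int)
    for i in range(n_subjects):
        t, events = 0.0, []
        while True:
            t += rng.gamma(shape, scale)
            if t >= windows[-1]:
                break
            events.append(t)
        counts, _ = np.histogram(events, bins=windows)
        indicators[i] = (counts > 0).astype(int)

    print("per-window 'any event' frequencies:", indicators.mean(axis=0).round(2))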
Towards the mechanical characterization of abdominal wall by inverse analysis.
Simón-Allué, R; Calvo, B; Oberai, A A; Barbone, P E
2017-02-01
The aim of this study is to characterize the passive mechanical behaviour of the abdominal wall in vivo in an animal model using only external cameras and numerical analysis. The main objective lies in defining a methodology that provides in vivo information on a specific patient without altering mechanical properties. It is demonstrated in the mechanical study of the abdomen for hernia purposes. Mechanical tests consisted of pneumoperitoneum tests performed on New Zealand rabbits, where inner pressure was varied from 0 mmHg to 12 mmHg. Changes in the external abdominal surface were recorded and several points were tracked. Based on their coordinates we reconstructed a 3D finite element model of the abdominal wall, considering an incompressible hyperelastic material model defined by two parameters. The spatial distributions of these parameters (shear modulus and nonlinear parameter) were calculated by inverse analysis, using two different types of regularization: Total Variation Diminishing (TVD) and Tikhonov (H1). After solving the inverse problem, the distributions of the material parameters were obtained along the abdominal surface. Accuracy of the results was evaluated for the last level of pressure. Results revealed a higher value of the shear modulus in a wide stripe along the cranio-caudal direction, associated with the presence of the linea alba in conjunction with fascias and rectus abdominis. The nonlinear parameter distribution was smoother, and the location of higher values varied with the regularization type. Both regularizations proved to yield an accurate predicted displacement field, but H1 produced a smoother material parameter distribution while TVD included some discontinuities. The methodology presented here was able to characterize in vivo the passive nonlinear mechanical response of the abdominal wall. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
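The ridge-trace idea described above can be illustrated on a generic linear toy problem. The sketch below is an assumption-laden stand-in (synthetic design matrix, known true parameters), not Cooley's nonlinear groundwater model, and it computes the true MSE directly rather than estimating it as the paper must do in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic, ill-conditioned linear problem y = X b + noise, a stand-in
# for a linearized model with (approximately) scaled parameters.
n, p = 40, 6
X = rng.normal(size=(n, p)) @ np.diag([1, 1, 1, 0.05, 0.02, 0.01])
b_true = np.array([1.0, -2.0, 0.5, 3.0, -1.5, 2.0])
y = X @ b_true + 0.1 * rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimate (X'X + k I)^-1 X'y for ridge parameter k."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Trace the true MSE of the estimates over a range of ridge parameters;
# k = 0 corresponds to ordinary least squares.
for k in [0.0, 1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    b_hat = ridge(X, y, k)
    mse = np.mean((b_hat - b_true) ** 2)
    print(f"k = {k:8.4f}  MSE = {mse:.4f}")
```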
A Primer on the 2- and 3-Parameter Item Response Theory Models.
ERIC Educational Resources Information Center
Thornton, Artist
Item response theory (IRT) is a useful and effective tool for item response measurement if used in the proper context. This paper discusses the sets of assumptions under which responses can be modeled while exploring the framework of the IRT models relative to response testing. The one parameter model, or one parameter logistic model, is perhaps…
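A minimal sketch of the item response functions discussed here, using the standard 2-parameter logistic form and its 3-parameter extension with a guessing parameter c; the item parameter values are made up for illustration.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Two-parameter logistic model: a = discrimination, b = difficulty."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model adds a lower asymptote c (guessing)."""
    return c + (1.0 - c) * p_2pl(theta, a, b)

# Example item with a = 1.2, b = 0.5, c = 0.2 across a range of abilities.
for theta in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    print(f"theta = {theta:+.1f}  P_2PL = {p_2pl(theta, 1.2, 0.5):.3f}"
          f"  P_3PL = {p_3pl(theta, 1.2, 0.5, 0.2):.3f}")
```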
Small bodies and the outer planets and Appendices 1 and 2
NASA Technical Reports Server (NTRS)
Davis, D. R.
1974-01-01
Correlations of asteroid spectral reflectivity characteristics with orbital parameters have been sought. Asteroid proper elements and extreme heliocentric distances were examined. Only general trends were noted: primarily, red asteroids and asteroids with IR (0.95 micron) absorption bands are concentrated toward the inner part of the belt. Also, asteroids with the pyroxene band tend to have larger proper eccentricities relative to non-banded asteroids.
Real-Time Gait Cycle Parameter Recognition Using a Wearable Accelerometry System
Yang, Che-Chang; Hsu, Yeh-Liang; Shih, Kao-Shang; Lu, Jun-Ming
2011-01-01
This paper presents the development of a wearable accelerometry system for real-time gait cycle parameter recognition. Using a tri-axial accelerometer, the wearable motion detector is a single waist-mounted device to measure trunk accelerations during walking. Several gait cycle parameters, including cadence, step regularity, stride regularity and step symmetry can be estimated in real-time by using autocorrelation procedure. For validation purposes, five Parkinson’s disease (PD) patients and five young healthy adults were recruited in an experiment. The gait cycle parameters among the two subject groups of different mobility can be quantified and distinguished by the system. Practical considerations and limitations for implementing the autocorrelation procedure in such a real-time system are also discussed. This study can be extended to the future attempts in real-time detection of disabling gaits, such as festinating or freezing of gait in PD patients. Ambulatory rehabilitation, gait assessment and personal telecare for people with gait disorders are also possible applications. PMID:22164019
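The autocorrelation procedure referred to above can be sketched on a synthetic trunk-acceleration signal as follows. The sampling rate, lag search window, and signal model are assumptions chosen for illustration, not the authors' implementation; the step peak of the autocorrelation gives step regularity, the stride peak gives stride regularity, and their ratio approximates step symmetry.

```python
import numpy as np

def autocorr(x):
    """Normalized autocorrelation of a zero-mean signal."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    return ac / ac[0]

# Synthetic vertical trunk acceleration: a dominant step rhythm (~2 Hz)
# plus a weaker stride (~1 Hz) component and noise.
fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
acc = np.sin(2 * np.pi * 2.0 * t) + 0.4 * np.sin(2 * np.pi * 1.0 * t) \
      + 0.1 * np.random.default_rng(2).normal(size=t.size)

ac = autocorr(acc)
step_lag = 25 + np.argmax(ac[25:75])         # search ~0.25-0.75 s for the step peak
stride_lag = 2 * step_lag                    # stride is roughly two steps
stride_peak = ac[stride_lag - 10:stride_lag + 10].max()

print(f"cadence ~ {60 * fs / step_lag:.0f} steps/min")
print(f"step regularity   ~ {ac[step_lag]:.2f}")
print(f"stride regularity ~ {stride_peak:.2f}")
print(f"step symmetry     ~ {ac[step_lag] / stride_peak:.2f}")
```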
Electrical Distribution System (EDS) and Caution and Warning System (CWS)
NASA Technical Reports Server (NTRS)
Mcclung, T.
1975-01-01
An astronaut caution and warning system is described which monitors various life support system parameters and detects out-of-range parameter conditions. The warning system generates a warning tone and displays the malfunction condition to the astronaut along with the proper corrective procedures required.
Settling velocity of microplastic particles of regular shapes.
Khatmullina, Liliya; Isachenko, Igor
2017-01-30
Terminal settling velocity of around 600 microplastic particles, ranging from 0.5 to 5 mm, of three regular shapes was measured in a series of sink experiments: polycaprolactone (material density 1131 kg m⁻³) spheres and short cylinders with equal dimensions, and long cylinders cut from fishing lines (1130-1168 kg m⁻³) of different diameters (0.15-0.71 mm). Settling velocities ranging from 5 to 127 mm s⁻¹ were compared with several semi-empirical predictions developed for natural sediments, showing reasonable consistency with observations except for the case of long cylinders, for which a new approximation is proposed. The effect of particle shape on settling velocity is highlighted, indicating the need for further experiments with real marine microplastics of different shapes and for the development of a reasonable parameterization of microplastic settling for proper modeling of their transport in the water column. Copyright © 2016 Elsevier Ltd. All rights reserved.
[Ambulant compression therapy for crural ulcers; an effective treatment when applied skilfully].
de Boer, Edith M; Geerkens, Maud; Mooij, Michael C
2015-01-01
The incidence of crural ulcers is high. They reduce quality of life considerably and create a burden on the healthcare budget. The key treatment is ambulant compression therapy (ACT). We describe two patients with crural ulcers whose ambulant compression treatment was suboptimal and did not result in healing. When the bandages were applied correctly, healing was achieved. If applied correctly, ACT should provide sufficient pressure to eliminate oedema, whilst taking local circumstances such as bony structures and arterial qualities into consideration. To provide pressure-to-measure, regular practical training, skills and regular quality checks are needed. Knowledge of the properties of bandages and the proper use of padding materials under the bandage enables good personalised ACT. In trained hands, adequate compression using simple bandages and dressings provides good care for patients suffering from crural ulcers, in contrast to inadequate ACT using the same materials.
Labrude, Pierre
2010-01-01
Every regulatory text relating to pharmaceutical activities is very precise about the prohibition of the "public" practice of pharmacy, and of medical activity in general, by members of the clergy. However, examination of the archives demonstrates that violations of the law were constant, in spite of judicial procedures and sentences. The secular clergy was certainly heavily involved, but its preparation and distribution of drugs seems to have been relatively discreet. By contrast, members of the regular clergy opened what were almost community pharmacies in towns and competed with apothecaries. Among them, in Lorraine, the most important were the Jesuits and the sisters in charge of charity houses and hospitals. The Jesuits held no diplomas, but their establishments were very properly organized. The sisters, on the contrary, were often poorly trained in pharmacy, and their dispensaries appear to have been badly managed, with drugs of mediocre quality that were poorly stored.
Residual neuropsychologic effects of cannabis.
Pope, H G; Gruber, A J; Yurgelun-Todd, D
2001-12-01
Acute intoxication with cannabis clearly produces cognitive impairment, but it is less clear how long cognitive deficits persist after an individual stops regular cannabis use. Numerous methodologic difficulties confront investigators in the field attempting to assess the residual neuropsychologic effects of cannabis among heavy users, and these must be understood to properly evaluate available studies. At present, it appears safe to conclude that deficits in attention and memory persist for at least several days after discontinuing regular heavy cannabis use. Some of these deficits may be caused or exacerbated by withdrawal effects from the abrupt discontinuation of cannabis; these effects typically peak after 3 to 7 days of abstinence. It is less clear, however, whether heavy cannabis use can cause neurotoxicity that persists long after discontinuation of use. It seems likely that such long-term effects, if they exist, are subtle and not clinically disabling--at least in the majority of cases.
Brief Trauma and Mental Health Assessments for Female Offenders in Addiction Treatment
Rowan-Szal, Grace A.; Joe, George W.; Bartholomew, Norma G; Pankow, Jennifer; Simpson, D. Dwayne
2012-01-01
Increasing numbers of women in prison raise concerns about gender-specific problems and needs severity. Female offenders report higher trauma as well as mental and medical health complications than males, but large inmate populations and limited resources create challenges in administering proper diagnostic screening and assessments. This study focuses on brief instruments that address specialized trauma and health problems, along with related psychosocial functioning. Women from two prison-based treatment programs for substance abuse were assessed (N = 1,397), including one facility for special needs and one for regular female offenders. Results affirmed that admissions to the special needs facility reported more posttraumatic stress symptoms, higher rates of psychological stress and previous hospitalizations, and more health issues than those in the regular treatment facility. Findings supporting use of these short forms and their applications as tools for monitoring needs, progress, and change over time are discussed. PMID:23087587
Interior of a charged distorted black hole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdolrahimi, Shohreh; Frolov, Valeri P.; Shoom, Andrey A.
We study the interior of a charged, nonrotating distorted black hole. We consider static and axisymmetric black holes, and focus on a special case when an electrically charged distorted solution is obtained by the Harrison-Ernst transformation from an uncharged one. We demonstrate that the Cauchy horizon of such a black hole remains regular, provided the distortion is regular at the event horizon. The shape and the inner geometry of both the outer and inner (Cauchy) horizons are studied. We demonstrate that there exists a duality between the properties of the horizons. Proper time of a free fall of a test particle moving in the interior of the distorted black hole along the symmetry axis is calculated. We also study the property of the curvature in the inner domain between the horizons. Simple relations between the 4D curvature invariants and the Gaussian curvature of the outer and inner horizon surfaces are found.
A new method for skin color enhancement
NASA Astrophysics Data System (ADS)
Zeng, Huanzhao; Luo, Ronnier
2012-01-01
Skin tone is the most important color category in memory colors. Reproducing it pleasingly is an important factor in photographic color reproduction. Moving skin colors toward their preferred skin color center improves skin color preference in photographic color reproduction. Two key factors to successfully enhance skin colors are: a method to detect original skin colors effectively even if they are shifted far away from the regular skin color region, and a method to morph skin colors toward a preferred skin color region properly without introducing artifacts. A method for skin color enhancement presented by the authors in the same conference last year applies a static skin color model for skin color detection, which may fail to detect skin colors that are far away from regular skin tones. In this paper, a new method using the combination of face detection and statistical skin color modeling is proposed to detect skin pixels and enhance skin colors more effectively.
Kinematics of our Galaxy from the PMA and TGAS catalogues
NASA Astrophysics Data System (ADS)
Velichko, Anna B.; Akhmetov, Volodymyr S.; Fedorov, Peter N.
2018-04-01
We derive and compare kinematic parameters of the Galaxy using the PMA and Gaia TGAS data. Two methods are used in the calculations: evaluation of the Ogorodnikov-Milne model (OMM) parameters by the least squares method (LSM) and a decomposition on a set of vector spherical harmonics (VSH). We trace dependencies on the distance of the derived parameters, including the Oort constants A and B and the rotational velocity of the Galaxy V_rot at the Solar distance, for the common sample of stars of mixed spectral composition of the PMA and TGAS catalogues. The distances were obtained from the TGAS parallaxes or from reduced proper motions for fainter stars. The A, B and V_rot parameters derived from proper motions of both catalogues show identical behaviour, but the values are systematically shifted by about 0.5 mas/yr. The Oort B parameter derived from the PMA sample of red giants shows a gradual decrease with increasing distance, while the Oort A has a minimum at about 2 kpc and then gradually increases. As for the models chosen for the calculations, first, we confirm conclusions of other authors about the existence of extra-model harmonics in the stellar velocity field. Secondly, not all parameters of the OMM are statistically significant, and the set of parameters depends on the stellar sample used.
NASA Astrophysics Data System (ADS)
Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Celorrio, Ricardo
2010-09-01
We analyze the ability of the Tikhonov regularization to retrieve different shapes of in-depth thermal conductivity profiles, usually encountered in hardened materials, from surface temperature data. Exponential, oscillating, and sigmoidal profiles are studied. By performing theoretical experiments with added white noise, the influence of the order of the Tikhonov functional and of the parameters that need to be tuned to carry out the inversion is investigated. The analysis shows that the Tikhonov regularization is very well suited to reconstruct smooth profiles but fails when the conductivity exhibits steep slopes. We check a natural alternative regularization, the total variation functional, which gives much better results for sigmoidal profiles. Accordingly, a strategy to deal with real data is proposed in which we introduce this total variation regularization. This regularization is applied to the inversion of real data corresponding to a case hardened AISI1018 steel plate, giving much better anticorrelation of the retrieved conductivity with microindentation test data than the Tikhonov regularization. The results suggest that this is a promising way to improve the reliability of local inversion methods.
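The smooth-versus-steep behaviour described above can be reproduced on a toy 1D problem. The sketch below uses a Gaussian blurring matrix as a stand-in forward model (not the photothermal model of the paper) and compares a first-order Tikhonov solution with a total-variation solution obtained by iteratively reweighted least squares; all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1D test: recover a step-like ("sigmoidal") profile f from
# blurred, noisy data d = A f + n.
n = 100
x = np.linspace(0, 1, n)
f_true = 1.0 + 1.0 / (1.0 + np.exp(-40 * (x - 0.5)))       # sigmoidal profile

A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)                            # stand-in smoothing operator
d = A @ f_true + 0.01 * rng.normal(size=n)

D = np.diff(np.eye(n), axis=0)                               # first-difference operator

def tikhonov(A, d, D, lam):
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ d)

def tv_irls(A, d, D, lam, iters=30, eps=1e-3):
    """Total-variation solution via iteratively reweighted least squares."""
    f = tikhonov(A, d, D, lam)
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ f) ** 2 + eps ** 2)           # |Df| reweighting
        f = np.linalg.solve(A.T @ A + lam * D.T @ (w[:, None] * D), A.T @ d)
    return f

f_tik = tikhonov(A, d, D, 1e-2)
f_tv = tv_irls(A, d, D, 1e-3)
for name, f in [("Tikhonov", f_tik), ("TV", f_tv)]:
    print(f"{name:9s} RMSE = {np.sqrt(np.mean((f - f_true) ** 2)):.4f}")
```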
Consistent Partial Least Squares Path Modeling via Regularization
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
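A minimal sketch of the variance question discussed above, for a simple Gaussian tail probability rather than a communication system: the mean of the biasing density acts as the IS parameter, and the empirical estimator variance shows how strongly its choice matters. The problem setup and parameter values are illustrative assumptions only.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)

# Estimate p = P(X > 4) for X ~ N(0, 1) by importance sampling with a
# shifted Gaussian N(mu, 1) as the biasing density; mu is the IS parameter.
threshold, n = 4.0, 100_000
p_exact = 0.5 * erfc(threshold / sqrt(2))

for mu in [0.0, 2.0, 4.0, 6.0]:          # mu = 0 is direct Monte Carlo
    x = rng.normal(mu, 1.0, size=n)
    w = np.exp(-0.5 * x**2 + 0.5 * (x - mu)**2)     # likelihood ratio f(x)/g(x)
    est = (x > threshold) * w
    p_hat = est.mean()
    var_hat = est.var(ddof=1) / n                   # empirical estimator variance
    print(f"mu = {mu:.1f}  p_hat = {p_hat:.3e}  var = {var_hat:.3e}"
          f"  (exact p = {p_exact:.3e})")
```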
NASA Astrophysics Data System (ADS)
Khusnutdinova, K. R.; Stepanyants, Y. A.; Tranter, M. R.
2018-02-01
We study solitary wave solutions of the fifth-order Korteweg-de Vries equation which contains, besides the traditional quadratic nonlinearity and third-order dispersion, additional terms including cubic nonlinearity and fifth order linear dispersion, as well as two nonlinear dispersive terms. An exact solitary wave solution to this equation is derived, and the dependence of its amplitude, width, and speed on the parameters of the governing equation is studied. It is shown that the derived solution can represent either an embedded or regular soliton depending on the equation parameters. The nonlinear dispersive terms can drastically influence the existence of solitary waves, their nature (regular or embedded), profile, polarity, and stability with respect to small perturbations. We show, in particular, that in some cases embedded solitons can be stable even with respect to interactions with regular solitons. The results obtained are applicable to surface and internal waves in fluids, as well as to waves in other media (plasma, solid waveguides, elastic media with microstructure, etc.).
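The abstract does not reproduce the governing equation. Purely as a hedged illustration, a fifth-order KdV-type equation containing the listed ingredients (quadratic and cubic nonlinearity, third- and fifth-order linear dispersion, and two nonlinear dispersive terms) can be written with placeholder coefficients as

```latex
u_t + \alpha\, u u_x + \beta\, u^2 u_x + \gamma\, u_{xxx} + \delta\, u_{xxxxx}
    + \mu_1\, u_x u_{xx} + \mu_2\, u\, u_{xxx} = 0 ,
```

where the signs and magnitudes of \alpha, \beta, \gamma, \delta, \mu_1, \mu_2 control whether the solitary wave is regular or embedded; the specific form and coefficients used by the authors may differ.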
A Model of Objective Weighting for EIA.
ERIC Educational Resources Information Center
Ying, Long Gen; Liu, You Ci
1995-01-01
In the research of environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not yet been properly solved. Presents an approach of objective weighting by using a procedure of Pij principal component-factor analysis (Pij PCFA), which suits specifically those parameters measured directly by physical…
Putting Parameters in Their Proper Place
ERIC Educational Resources Information Center
Montrul, Silvina; Yoon, James
2009-01-01
Seeing the logical problem of second language acquisition as that of primarily selecting and re-assembling bundles of features anew, Lardiere proposes to dispense with the deductive learning approach and its broad range of consequences subsumed under the concept of parameters. While we agree that feature assembly captures more precisely the…
Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.
Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark
2010-05-01
We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as by-product also the distribution function as function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
Optimal Tikhonov regularization for DEER spectroscopy
NASA Astrophysics Data System (ADS)
Edwards, Thomas H.; Stoll, Stefan
2018-03-01
Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
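One of the α selection methods evaluated above, generalized cross validation, can be sketched for a generic Tikhonov problem min ||Kp − s||² + α²||Lp||² with a second-derivative operator L. The kernel, data and noise level below are synthetic stand-ins, not a DEER kernel or the authors' test set.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic ill-conditioned problem (stand-in for the DEER kernel).
m, n = 120, 80
K = rng.normal(size=(m, n)) @ np.diag(np.exp(-0.1 * np.arange(n)))
p_true = np.exp(-0.5 * ((np.arange(n) - 40) / 6.0) ** 2)   # smooth "distribution"
s = K @ p_true + 0.05 * rng.normal(size=m)

L = np.diff(np.eye(n), n=2, axis=0)                         # second-derivative operator

def gcv(alpha):
    """GCV(alpha) = m * ||(I - H) s||^2 / trace(I - H)^2 with the
    influence matrix H = K (K'K + alpha^2 L'L)^-1 K'."""
    A_inv = np.linalg.inv(K.T @ K + alpha**2 * L.T @ L)
    H = K @ A_inv @ K.T
    resid = (np.eye(m) - H) @ s
    return m * resid @ resid / np.trace(np.eye(m) - H) ** 2

alphas = np.logspace(-4, 2, 25)
best = min(alphas, key=gcv)
p_hat = np.linalg.solve(K.T @ K + best**2 * L.T @ L, K.T @ s)
print(f"GCV-selected alpha = {best:.3g}, "
      f"RMSE = {np.sqrt(np.mean((p_hat - p_true) ** 2)):.3f}")
```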
[Acoustic conditions in open plan office - Application of technical measures in a typical room].
Mikulski, Witold
2018-03-09
Noise in open plan offices should not exceed levels acceptable for hearing protection. Its major negative effects on employees are nuisance and impediment to the execution of work. Specific technical solutions should be introduced to provide proper acoustic conditions for work performance. Acoustic evaluation of a typical open plan office was presented in the article published in "Medycyna Pracy" 5/2016. None of the rooms meets all the criteria; therefore, in this article one of the rooms was chosen in order to apply different technical solutions and check the possibility of reaching proper acoustic conditions. The acoustic effectiveness of those solutions was verified by means of digital simulation. The model was checked by comparing the results of measurements and calculations before using the simulation. The analysis revealed that open plan offices supplemented with speech-masking signals can meet all the required criteria. It is relatively easy to reach proper reverberation time (i.e., sound absorption). It is more difficult to reach proper values of evaluation parameters determined from the A-weighted sound pressure level (SPLA) of speech. The most difficult is to provide proper values of evaluation parameters determined from the speech transmission index (STI). Finally, it is necessary (besides acoustic treatment) to use devices for speech masking. The study proved that it is technically possible to reach proper acoustic conditions. The main causes of employees' complaints in open plan offices are inadequate acoustic work conditions. Therefore, it is necessary to apply specific technical solutions - not only a sound-absorbing suspended ceiling and high acoustic barriers, but also devices for speech masking. Med Pr 2018;69(2):153-165. This work is available in the Open Access model and licensed under a CC BY-NC 3.0 PL license.
NASA Astrophysics Data System (ADS)
Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François
2018-06-01
The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
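A toy version of this kind of constrained, regularized deconvolution is sketched below: positivity comes from non-negative least squares, causality from a lower-triangular convolution matrix, and smoothness from an augmented first-difference penalty controlled by a single regularization parameter. The rain series, residence-time curve, and parameter values are synthetic assumptions, and the algorithm is a generic stand-in, not the authors' method.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)

# Toy deconvolution: aquifer level = rain (*) residence-time curve + noise.
n, m = 200, 60
rain = rng.gamma(2.0, 1.0, size=n)                     # synthetic input series
t = np.arange(m)
h_true = t * np.exp(-t / 8.0)
h_true /= h_true.sum()                                 # "true" residence-time curve

A = np.zeros((n, m))                                   # causal convolution matrix
for i in range(n):
    for j in range(min(i + 1, m)):
        A[i, j] = rain[i - j]
y = A @ h_true + 0.05 * rng.normal(size=n)

lam = 5.0                                              # single regularization parameter
D = np.diff(np.eye(m), axis=0)                         # smoothness (first differences)
A_aug = np.vstack([A, np.sqrt(lam) * D])
y_aug = np.concatenate([y, np.zeros(m - 1)])

h_hat, _ = nnls(A_aug, y_aug)                          # positivity via NNLS
print(f"RMSE of recovered residence-time curve: "
      f"{np.sqrt(np.mean((h_hat - h_true) ** 2)):.4f}")
```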
Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki
2011-04-01
In this study we sought to evaluate and emphasize the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, a minimum and a maximum set of radiobiological parameters were selected from the published sets applied in the literature, and a theoretical mean parameter set was computed. In order to investigate potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and the LKB model, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models with their ranges, a large area of coincidence was revealed. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence might be enlarged, constraining their predictive ability even further. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definitive but should be accompanied by uncertainties, and one should be very careful when applying them to the NTCP models. Correct selection and proper implementation of published parameters provides a quite accurate fit of the NTCP models to the considered endpoint.
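For reference, a standard statement of the LKB model (not necessarily the exact implementation used by the authors) combines a generalized equivalent uniform dose with a probit response; the sketch below uses a hypothetical DVH and one published-style parameter triple (TD50, m, n) purely to show how strongly the output depends on the chosen parameter set.

```python
from math import erf, sqrt

def lkb_ntcp(doses, volumes, TD50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    doses: bin doses (Gy); volumes: fractional volumes summing to 1.
    TD50, m, n: radiobiological parameters whose published values the
    abstract warns must be selected (with uncertainties) with care."""
    geud = sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (geud - TD50) / (m * TD50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Illustrative (hypothetical) lung DVH and two parameter sets.
doses = [5, 10, 15, 20, 25]
volumes = [0.4, 0.3, 0.15, 0.1, 0.05]
for TD50, m, n in [(24.5, 0.18, 0.87), (30.8, 0.37, 0.99)]:
    print(f"TD50={TD50}, m={m}, n={n} -> "
          f"NTCP = {lkb_ntcp(doses, volumes, TD50, m, n):.4f}")
```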
Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.
Zhang, Jianguang; Jiang, Jianmin
2018-02-01
While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that, in comparison to both traditional tensor-based methods and vector-based regression methods, our proposed solution achieves better performance for matrix data classification.
Modelling topographic potential for erosion and deposition using GIS
Helena Mitasova; Louis R. Iverson
1996-01-01
Modelling of erosion and deposition in complex terrain within a geographical information system (GIS) requires a high resolution digital elevation model (DEM), reliable estimation of topographic parameters, and formulation of erosion models adequate for digital representation of spatially distributed parameters. Regularized spline with tension was integrated within a...
Schuck, P
2000-03-01
A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size-distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). It can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. More details on the size-distribution can be obtained than from van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.
Gutierrez-Lopez, Liliana; Garcia-Sanchez, Jose Ruben; Rincon-Viquez, Maria de Jesus; Lara-Padilla, Eleazar; Sierra-Vargas, Martha P; Olivares-Corichi, Ivonne M
2012-01-01
Studies show that diet and exercise are important in the treatment of obesity. The aim of this study was to determine whether additional regular moderate aerobic exercise during a treatment with hypocaloric diet has a beneficial effect on oxidative stress and molecular damage in the obese patient. Oxidative stress of 16 normal-weight (NW) and 32 obese 1 (O1) subjects (BMI 30-34.9 kg/m(2)) were established by biomarkers of oxidative stress in plasma. Recombinant human insulin was incubated with blood from NW or O1 subjects, and the molecular damage to the hormone was analyzed. Two groups of treatment, hypocaloric diet (HD) and hypocaloric diet plus regular moderate aerobic exercise (HDMAE), were formed, and their effects in obese subjects were analyzed. The data showed the presence of oxidative stress in O1 subjects. Molecular damage and polymerization of insulin was observed more frequently in the blood from O1 subjects. The treatment of O1 subjects with HD decreased the anthropometric parameters as well as oxidative stress and molecular damage, which was more effectively prevented by the treatment with HDMAE. HD and HDMAE treatments decreased anthropometric parameters, oxidative stress, and molecular damage in O1 subjects. Copyright © 2012 S. Karger GmbH, Freiburg.
Gaitanis, Anastasios; Kastis, George A; Vlastou, Elena; Bouziotis, Penelope; Verginis, Panayotis; Anagnostopoulos, Constantinos D
2017-08-01
The Tera-Tomo 3D image reconstruction algorithm (a version of OSEM), provided with the Mediso nanoScan® PC (PET8/2) small-animal positron emission tomograph (PET)/x-ray computed tomography (CT) scanner, has various parameter options such as total level of regularization, subsets, and iterations. Also, the acquisition time in PET plays an important role. This study aims to assess the performance of this new small-animal PET/CT scanner for different acquisition times and reconstruction parameters, for 2-deoxy-2-[ 18 F]fluoro-D-glucose ([ 18 F]FDG) and Ga-68, under the NEMA NU 4-2008 standards. Various image quality metrics were calculated for different realizations of [ 18 F]FDG and Ga-68 filled image quality (IQ) phantoms. [ 18 F]FDG imaging produced improved images over Ga-68. The best compromise for the optimization of all image quality factors is achieved for at least 30 min acquisition and image reconstruction with 52 iteration updates combined with a high regularization level. A high regularization level at 52 iteration updates and 30 min acquisition time were found to optimize most of the figures of merit investigated.
Acoustic and elastic waveform inversion best practices
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lamé parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because rotation angle parameters describing fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing experimental results are given.
The cost of uniqueness in groundwater model calibration
NASA Astrophysics Data System (ADS)
Moore, Catherine; Doherty, John
2006-04-01
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for an hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, this possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
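The "weighted average" interpretation discussed above can be made concrete with the model resolution matrix of a generic regularized linear(ized) inverse problem. The sketch below uses a random synthetic operator and simple Tikhonov regularization, not the pilot-point/constrained-minimization setup of the paper; it only illustrates how each estimated parameter averages over many true parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

# For a linear(ized) problem d = G m, the regularized estimate is
# m_hat = (G'G + lam I)^-1 G' d = R m_true + noise term, where
# R = (G'G + lam I)^-1 G'G is the model resolution matrix: each row
# gives the averaging weights that blur the true field into the estimate.
n_obs, n_par = 30, 50                       # underdetermined, as in calibration
G = rng.normal(size=(n_obs, n_par))
lam = 1.0

G_dagger = np.linalg.solve(G.T @ G + lam * np.eye(n_par), G.T)
R = G_dagger @ G                            # resolution matrix

row = R[25]                                 # averaging kernel for parameter 25
print(f"weight on parameter 25 itself: {row[25]:.2f}")
print(f"fraction of total weight spread onto other parameters: "
      f"{1 - abs(row[25]) / np.abs(row).sum():.2f}")
```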
NASA Astrophysics Data System (ADS)
Tufekci, Duygu; Lutfi Suzen, Mehmet; Cevdet Yalciner, Ahmet
2017-04-01
The resilience of coastal communities against tsunamis depends on the preparedness of those communities. Preparedness covers social and structural components, and it increases with community awareness of tsunamis. Therefore, proper evaluation of all components of preparedness will help communities reduce the adverse effects of tsunamis and increase their overall resilience. On the other hand, the complexity of metropolitan life, with its social and structural components, necessitates explicit vulnerability assessments for proper determination of tsunami risk and for the development of proper mitigation strategies and recovery plans. Assessing the vulnerability and resilience level of a region against tsunamis, together with the efforts for reducing tsunami risk, are the key components of disaster management. Since increasing the awareness of coastal communities against tsunamis is one of the main objectives of disaster management, it should be considered as one of the parameters in tsunami risk analysis. In the method named MetHuVA (METU - Metropolitan Human Tsunami Vulnerability Assessment) proposed by Cankaya et al. (2016) and Tufekci et al. (2016), the awareness and preparedness level of the community is revealed to be an indispensable parameter with a great effect on tsunami risk. According to the results obtained from those studies, the awareness and preparedness parameter (n) must be analyzed by considering its interactions and all of its related components. As awareness increases, vulnerability and risk will be reduced. In this study the components of the awareness and preparedness parameter (n) are analyzed in different categories by considering the administrative, social, educational, economic and structural preparedness of coastal communities. Hence the proposed awareness and preparedness parameter can be analyzed properly and further improvements can be achieved in vulnerability and risk analysis. Furthermore, the components of the awareness and preparedness parameter n are investigated widely in global and local practices by using a categorization method to determine different levels for different coastal metropolitan areas with different cultures and different hazard perceptions. Moreover, the consistency between the theoretical maximum and practical applications of parameter n is estimated, discussed and presented. In the applications, mainly the Bakirkoy district of Istanbul is analyzed and the results are presented. Acknowledgements: Partial support by 603839 ASTARTE Project of EU, UDAPC-12-14 project of AFAD, Turkey, 213M534 projects of TUBITAK, Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), and Istanbul Metropolitan Municipality are acknowledged.
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
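The convexity-preserving idea described in the first part of the thesis can be illustrated in its simplest scalar form. The sketch below uses the minimax-concave (MC) penalty as an example of a parameterized non-convex regularizer: for non-convexity parameter a < 1/lam the scalar denoising objective stays strictly convex, and its minimizer is the firm threshold, which (unlike soft thresholding) leaves large values unbiased. This is only a one-dimensional illustration under these assumptions, not the general inverse-problem formulation of the thesis.

```python
import numpy as np

def mc_penalty(x, a):
    """Minimax-concave (MC) penalty with non-convexity parameter a > 0."""
    x = np.abs(x)
    return np.where(x <= 1.0 / a, x - 0.5 * a * x**2, 0.5 / a)

def firm_threshold(y, lam, a):
    """Minimizer of 0.5*(y - x)^2 + lam * mc_penalty(x, a), valid when
    a < 1/lam so that the scalar objective is strictly convex."""
    assert a * lam < 1.0, "choose a < 1/lam to keep the problem convex"
    out = np.zeros_like(y)
    mid = (np.abs(y) > lam) & (np.abs(y) <= 1.0 / a)
    out[mid] = np.sign(y[mid]) * (np.abs(y[mid]) - lam) / (1.0 - a * lam)
    big = np.abs(y) > 1.0 / a
    out[big] = y[big]
    return out

y = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
lam, a = 1.0, 0.5                           # a < 1/lam, so convexity is preserved
soft = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)   # l1 (convex) solution
print("soft :", soft)
print("firm :", firm_threshold(y, lam, a))

# Brute-force check for y = 1.5 that the closed form matches the argmin.
grid = np.linspace(-6, 6, 20001)
obj = 0.5 * (1.5 - grid) ** 2 + lam * mc_penalty(grid, a)
print("brute-force argmin for y = 1.5:", grid[np.argmin(obj)])
```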
Mathematical form models of tree trunks
Rudolfs Ozolins
2000-01-01
Assortment structure analysis of tree trunks is a characteristic and proper problem that can be solved by using mathematical modeling and standard computer programs. Mathematical form model of tree trunks consists of tapering curve equations and their parameters. Parameters for nine species were obtained by processing measurements of 2,794 model trees and studying the...
NASA Astrophysics Data System (ADS)
de Saint Jean, C.; Habert, B.; Archier, P.; Noguere, G.; Bernard, D.; Tommasi, J.; Blaise, P.
2010-10-01
In the [eV; MeV] energy range, modelling of neutron-induced reactions is based on parameterized nuclear reaction models. Estimation of covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Major breakthroughs have been requested by nuclear reactor physicists in order to assign proper uncertainties for use in applications. In this paper, mathematical methods developed in the CONRAD code [2] are presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and their propagation to nuclear reaction model parameters or cross sections. The marginalization procedure is then presented using analytical or Monte-Carlo solutions. Furthermore, one major drawback identified by reactor physicists is that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account sufficiently early in the evaluation process to remove discrepancies. In this paper, we describe a mathematical framework to take this kind of information into account properly.
Doping-Induced Type-II to Type-I Transition and Interband Optical Gain in InAs/AlSb Quantum Wells
NASA Technical Reports Server (NTRS)
Kolokolov, K. I.; Ning, C. Z.
2003-01-01
We show that proper doping of the barrier regions can convert the well-known type-II InAs/AlSb QWs to type I, producing strong interband transitions comparable to regular type-I QWs. The interband gain for the TM mode is as high as 4000 1/cm, thus providing an important alternative material system in the mid-infrared wavelength range. We also study the TE and TM gain as functions of doping level and intrinsic electron-hole density.
Renormalization in Large Momentum Effective Theory of Parton Physics.
Ji, Xiangdong; Zhang, Jian-Hui; Zhao, Yong
2018-03-16
In the large-momentum effective field theory approach to parton physics, the matrix elements of nonlocal operators of quark and gluon fields, linked by straight Wilson lines in a spatial direction, are calculated in lattice quantum chromodynamics as a function of hadron momentum. Using the heavy-quark effective theory formalism, we show a multiplicative renormalization of these operators at all orders in perturbation theory, both in dimensional and lattice regularizations. The result provides a theoretical basis for extracting parton properties through properly renormalized observables in Monte Carlo simulations.
On regularizing the MCTDH equations of motion
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Wang, Haobin
2018-03-01
The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
Stark width regularities within spectral series of the lithium isoelectronic sequence
NASA Astrophysics Data System (ADS)
Tapalaga, Irinel; Trklja, Nora; Dojčinović, Ivan P.; Purić, Jagoš
2018-03-01
Stark width regularities within spectral series of the lithium isoelectronic sequence have been studied in an approach that includes both neutrals and ions. The influence of environmental conditions and certain atomic parameters on the Stark widths of spectral lines has been investigated. This study gives a simple model for the calculation of Stark broadening data for spectral lines within the lithium isoelectronic sequence. The proposed model requires fewer parameters than any other model. The obtained relations were used for predictions of Stark widths for transitions that have not yet been measured or calculated. In the framework of the present research, three algorithms for fast data processing have been made and they enable quality control and provide verification of the theoretically calculated results.
On a full Bayesian inference for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability in mathematically accounting for experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
ERIC Educational Resources Information Center
Ordonez, F. J.; Rosety-Rodriguez, M.
2007-01-01
Since we have recently found that regular exercise increased erythrocyte antioxidant enzyme activities such as glutathione peroxidase (GPX) in adolescents with Down syndrome, these programs may be recommended. This study was designed to assess the role of anthropometrical parameters as easy, economic and non-invasive biomarkers of GPX. Thirty-one…
Study of the method of water-injected meat identifying based on low-field nuclear magnetic resonance
NASA Astrophysics Data System (ADS)
Xu, Jianmei; Lin, Qing; Yang, Fang; Zheng, Zheng; Ai, Zhujun
2018-01-01
The aim of this study was to apply the low-field nuclear magnetic resonance technique to study the regular variation of the transverse relaxation spectral parameters of water-injected meat with the proportion of injected water. On this basis, one-way ANOVA and discriminant analysis were used to analyse how well these parameters distinguish the water-injected proportion, and a model for identifying water-injected meat was established. The results show that, except for T21b, T22e and T23b, the parameters of the T2 relaxation spectrum changed regularly with the water-injected proportion. Different parameters differed in their ability to distinguish the water-injected proportion. With S, P22 and T23m as the prediction variables, Fisher and Bayes models were established by discriminant analysis, enabling qualitative and quantitative classification of water-injected meat. The rate of correct discrimination in both validation and cross-validation was 88%, and the model was stable.
Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.
Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A
2017-01-01
Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.
NASA Astrophysics Data System (ADS)
Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro
2018-01-01
Architectured materials (or metamaterials) are constituted by a unit-cell with a complex structural design that is repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit-cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit-cell are selected in order to produce the desired bulk characteristics. This is especially pertinent due to the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem is a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit-cell (and then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.
Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization
NASA Astrophysics Data System (ADS)
Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing
2015-05-01
Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.
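To make the reweighting idea concrete, the following Python sketch applies a DCT-domain reweighted L1 penalty to a generic linear inverse problem and solves each weighted subproblem with ISTA. The forward matrix A, data b, and all parameter values are illustrative assumptions; the paper's actual FMT system matrix, weighting rule, and adaptive permission-region construction are not reproduced here.

# Minimal sketch (not the authors' implementation): reweighted L1 regularization
# in a DCT domain, solved with ISTA. A generic dense forward matrix `A` and
# measurement vector `b` stand in for the FMT system matrix and projections.
import numpy as np
from scipy.fft import dct, idct

def soft(z, t):
    """Soft-thresholding (proximal operator of the weighted L1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dct_reweighted_l1(A, b, lam=1e-2, eps=1e-3, outer=5, inner=200):
    n = A.shape[1]
    c = np.zeros(n)                      # DCT coefficients of the unknown image
    w = np.ones(n)                       # reweighting factors, updated per outer loop
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data-fit gradient
    for _ in range(outer):
        for _ in range(inner):           # ISTA steps for the weighted subproblem
            x = idct(c, norm='ortho')                        # back to image domain
            grad = dct(A.T @ (A @ x - b), norm='ortho')      # gradient w.r.t. c
            c = soft(c - grad / L, lam * w / L)
        w = 1.0 / (np.abs(c) + eps)      # small coefficients get penalized more
    return idct(c, norm='ortho')

# Toy usage: recover a signal that is sparse in the DCT domain
rng = np.random.default_rng(0)
n, m = 128, 64
x_true = idct(np.where(rng.random(n) < 0.05, rng.standard_normal(n), 0.0), norm='ortho')
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = dct_reweighted_l1(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))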
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile, some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type and other regularization methods in Banach spaces is provided by assumptions in the form of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
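As a point of reference for the iterative methods discussed above, the following is a minimal Hilbert-space Landweber iteration with a discrepancy-principle stopping rule; the Banach-space variants surveyed in this section replace the plain adjoint step with duality mappings and general penalty terms. The toy operator, data and noise level are illustrative.

# Minimal Hilbert-space Landweber iteration with a discrepancy-principle stop.
# Only a reference point for the Banach-space generalizations discussed above;
# the operator A, data b and noise level delta below are illustrative.
import numpy as np

def landweber(A, b, delta, tau=1.1, max_iter=10000):
    omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:  # discrepancy principle: stop early
            break
        x = x + omega * (A.T @ r)             # gradient step on 0.5*||Ax - b||^2
    return x, k

# Toy ill-conditioned problem: a discretized integration-like operator
rng = np.random.default_rng(1)
n = 50
A = np.triu(np.ones((n, n))) / n
x_true = np.sin(np.linspace(0, np.pi, n))
noise = 1e-3 * rng.standard_normal(n)
b = A @ x_true + noise
x_hat, iters = landweber(A, b, delta=np.linalg.norm(noise))
print(iters, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))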
NASA Astrophysics Data System (ADS)
Pachhai, S.; Masters, G.; Laske, G.
2017-12-01
Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source be known. However, it is challenging to know the source details, particularly for the large events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle and core sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.
Shaqour, F; Taany, R; Rimawi, O; Saffarini, G
2016-01-01
Modeling groundwater properties is an important tool by means of which water resources management can judge whether these properties are within safe limits or not. This is usually done regularly and in the aftermath of crises that are expected to reflect negatively on groundwater properties, as occurred in Jordan due to crises in neighboring countries. In this study, the specific capacity and salinity of groundwater of the B2/A7 aquifer in the Amman Zarqa Basin were evaluated to assess the effect of population increase in this heavily populated basin as a result of refugee flux from neighboring countries after the Gulf crises of 1990 and 2003. Both properties were found to exhibit a three-parameter lognormal distribution. The empirically calculated β parameter of this distribution amounted to 0.39 m³/h/min for specific capacity and 238 ppm for salinity. This parameter is suggested to account for the global changes that took place all over the basin during the entire period of observation, and not for local changes at every well or at certain localities in the basin. It can be considered an exploratory result of the data analysis. Formal and implicit evaluation followed this step using structural analysis and construction of experimental semivariograms that represent the spatial variability of both properties. The adopted semivariograms were then used to construct maps illustrating the spatial variability of the properties under consideration using kriging interpolation techniques. The semivariograms show that specific capacity and salinity values are spatially dependent within 14,529 and 16,309 m, respectively. The specific capacity semivariogram exhibits a nugget effect on a small scale (324 m), which can be attributed to heterogeneity or inadequacies in measurement. The specific capacity and salinity maps show that the major changes exhibit a northwest-southeast trend, near the As-Samra Wastewater Treatment Plant. The results of this study suggest proper management practices.
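For readers unfamiliar with the geostatistical step, the sketch below computes an experimental isotropic semivariogram from scattered well data; the coordinates and values are synthetic stand-ins for the basin data set, and the lag binning is arbitrary.

# Sketch of an experimental (isotropic) semivariogram from scattered well data.
# Coordinates and values below are synthetic stand-ins for the basin data set.
import numpy as np

def experimental_semivariogram(coords, values, lags):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs whose separation falls in each lag bin."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # count each pair once
    d, sq = d[iu], sq[iu]
    gamma, npairs = [], []
    for lo, hi in zip(lags[:-1], lags[1:]):
        m = (d >= lo) & (d < hi)
        gamma.append(sq[m].mean() if m.any() else np.nan)
        npairs.append(m.sum())
    return np.array(gamma), np.array(npairs)

rng = np.random.default_rng(2)
coords = rng.uniform(0, 20000, size=(150, 2))             # well positions in metres
values = rng.lognormal(mean=5.0, sigma=0.4, size=150)     # salinity-like values
lags = np.linspace(0, 18000, 19)                          # 1 km lag bins
gamma, n = experimental_semivariogram(coords, values, lags)
print(np.round(gamma[:5], 1), n[:5])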
Sideroudi, Haris; Labiris, Georgios; Georgantzoglou, Kimon; Ntonti, Panagiota; Siganos, Charalambos; Kozobolis, Vassilios
2017-07-01
To develop an algorithm for the Fourier analysis of posterior corneal videokeratographic data and to evaluate the derived parameters in the diagnosis of subclinical keratoconus (SKC) and keratoconus (KC). This was a cross-sectional, observational study that took place in the Eye Institute of Thrace, Democritus University, Greece. Eighty eyes formed the KC group, 55 eyes formed the SKC group, and 50 normal eyes populated the control group. A self-developed algorithm in Visual Basic for Microsoft Excel performed a Fourier series harmonic analysis of the posterior corneal sagittal curvature data. The algorithm decomposed the obtained curvatures into a spherical component, regular astigmatism, asymmetry and higher order irregularities for the averaged central 4 mm and for each individual ring separately (1, 2, 3 and 4 mm). The obtained values were evaluated for their diagnostic capacity using receiver operating characteristic (ROC) curves. Logistic regression was attempted for the identification of a combined diagnostic model. Significant differences were detected in regular astigmatism, asymmetry and higher order irregularities among groups. For the SKC group, the parameters with high diagnostic ability (AUC > 90%) were the higher order irregularities, the asymmetry and the regular astigmatism, mainly in the corneal periphery. Higher predictive accuracy was identified using diagnostic models that combined the asymmetry, regular astigmatism and higher order irregularities in the averaged 3 and 4 mm areas (AUC: 98.4%, Sensitivity: 91.7% and Specificity: 100%). Fourier decomposition of posterior keratometric data provides parameters with high accuracy in differentiating SKC from normal corneas and should be included in the prompt diagnosis of KC. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
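A minimal sketch of the harmonic decomposition for a single ring is given below, following the usual convention that the mean gives the spherical component, the first harmonic the asymmetry, the second harmonic the regular astigmatism, and harmonics of order three and higher the irregularities. The sampled curvature values and sampling density are illustrative, not the study's algorithm.

# Sketch of a Fourier-series decomposition of keratometric values sampled around
# one ring. The split into spherical component, asymmetry (1st harmonic),
# regular astigmatism (2nd harmonic) and higher-order irregularities follows the
# usual convention; the sampled curvature values below are synthetic.
import numpy as np

def fourier_decompose_ring(k_values):
    """k_values: sagittal curvatures (or keratometric powers) sampled at equal angles."""
    n = len(k_values)
    c = np.fft.rfft(k_values) / n
    spherical = c[0].real                        # mean power of the ring
    asymmetry = 2 * np.abs(c[1])                 # 1st harmonic amplitude
    regular_astig = 2 * np.abs(c[2])             # 2nd harmonic amplitude
    higher_order = 2 * np.sqrt((np.abs(c[3:]) ** 2).sum())  # combined >= 3rd harmonics
    return spherical, asymmetry, regular_astig, higher_order

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ring = 42.0 + 1.5 * np.cos(2 * (theta - 0.3)) + 0.4 * np.cos(theta) \
       + 0.1 * np.cos(3 * theta) + 0.05 * np.random.default_rng(3).standard_normal(256)
print(fourier_decompose_ring(ring))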
Dinçer, Şensu; Altan, Mehmet; Terzioğlu, Duygu; Uslu, Ezel; Karşidağ, Kubilay; Batu, Şule; Metin, Gökhan
2016-11-01
We aimed to investigate the effects of a regular exercise program on exercise capacity, blood biochemical profiles, and certain antioxidant and oxidative stress parameters in type 2 diabetes mellitus (DM) patients. Thirty-one type 2 DM patients (ages 42-65 years) who had hemoglobin A1c (HbA1c) levels ≥7.5% and ≤9.5% were included in the study and performed two cardiopulmonary exercise tests (CPET) before and after the exercise program. Subjects performed aerobic exercise training for 90 minutes a day, 3 days a week, for 12 weeks. Blood samples were collected to analyze certain oxidant and antioxidant parameters (advanced oxidation protein products [AOPP], ferric reducing ability of plasma [FRAP], malondialdehyde [MDA], and sialic acid [SA]), blood lipid profile, fasting blood glucose (FBG) and HbA1c. At the end of the program, HbA1c, FBG, triglyceride (TG) and very-low-density lipoprotein (VLDL) levels decreased and high-density lipoprotein (HDL) increased significantly (P=0.000, P=0.001, P=0.008, P=0.001 and P=0.02, respectively). AOPP, FRAP and SA levels of the patients increased significantly following the first CPET (P=0.000, P=0.049 and P=0.014, respectively). At the end of the exercise program, the AOPP level increased significantly following the last CPET. The baseline SA level increased significantly following the exercise program (P=0.002). We suggest that poor glycemic control, which plays a major role in the pathogenesis of DM and its complications, can be improved by 12 weeks of a regular exercise program. Whereas acute exercise induces protein oxidation, regular aerobic training may enhance the antioxidant status of type 2 DM patients.
Chang, Hao-Hueng; Lee, Ming-Shu; Hsu, You-Chyun; Tsai, Shang-Jye; Lin, Chun-Pin
2015-10-01
Impacted third molars can be extracted by regular surgery or piezosurgery. The aim of this study was to compare clinical parameters and device-produced noise levels between regular surgery and piezosurgery for the extraction of impacted third molars. Twenty patients (18 women and 2 men, 17-29 years of age) with bilateral symmetrical impacted mandibular or maxillary third molars of the same level were included in this randomized crossover clinical trial. The 40 impacted third molars were divided into a control group (n = 20), in which the third molar was extracted by regular surgery using a high-speed handpiece and an elevator, and an experimental group (n = 20), in which the third molar was extracted by piezosurgery using a high-speed handpiece and a piezotome. The clinical parameters were evaluated by a self-reported questionnaire. The noise levels produced by the high-speed handpiece and piezotome were measured and compared between the experimental and control groups. Patients in the experimental group had a better feeling about tooth extraction and force delivery during extraction and less facial swelling than patients in the control group. However, there were no significant differences between the control and experimental groups in noise-related disturbance, extraction period, degree of facial swelling, pain score, pain duration, or the noise levels produced by the devices under different circumstances during tooth extraction. The piezosurgery device produced noise levels similar to or lower than those of the high-speed drilling device. However, piezosurgery provides the advantage of increased patient comfort during extraction of impacted third molars. Copyright © 2014. Published by Elsevier B.V.
Formation of Large-Amplitude Wave Groups in an Experimental Model Basin
2008-08-01
varying parameters, including amplitude, frequency, and signal duration. Superposition of these finite regular waves produced repeatable wave groups. (Only fragments of this record survive; the remaining recoverable headings concern regular and irregular waves, Senix ultrasonic wave gages, and instrumentation calibration and uncertainty.)
NASA Astrophysics Data System (ADS)
Scala, Antonio; Festa, Gaetano; Vilotte, Jean-Pierre
2015-04-01
Faults are often interfaces between materials with different elastic properties. This is generally the case at plate boundaries in subduction zones, where ruptures extend for many kilometers, crossing materials with strong impedance contrasts (oceanic crust, continental crust, mantle wedge, accretionary prism). From a physical point of view, several peculiar features have emerged from both analog experiments and numerical simulations of a rupture propagating along a bimaterial interface. The elastodynamic flux at the rupture tip breaks its symmetry, inducing normal stress changes and an asymmetric propagation. The latter has been widely demonstrated for rupture velocity and slip rate (e.g. Xia et al., 2005) and is thought to generate an asymmetric distribution of aftershocks (Rubin and Ampuero, 2007). The bimaterial problem coupled with a Coulomb friction law is ill-posed for a wide range of impedance contrasts, owing to a missing length scale in the instantaneous response to normal traction changes. The ill-posedness also results in simulations that are no longer independent of the grid size. A regularization can be introduced by delaying the response of the tangential traction to the normal traction, as suggested by Cochard and Rice (2000) and Ranjith and Rice (2000): ∂σeff/∂t = [(|v| + v*)/δ] (σn - σeff), where σeff is the effective normal stress to be used in the Coulomb friction, σn is the fault-normal traction, v is the slip rate, v* is a reference velocity and δ is a regularization length. This regularization introduces two delays, one depending on the slip rate and one on a fixed time scale. In this study we performed a large number of 2D dynamic numerical simulations of in-plane rupture with the spectral element method and systematically investigated the effect of parameter selection on rupture propagation, dissipation and radiation, also performing a direct comparison with existing numerical and experimental results. We found that a purely time-dependent regularization requires fine tuning, jumping rapidly from a too fast, ineffective delay to a slow, invasive regularization as a function of the actual slip rate. Conversely, the choice of a fixed relaxation length, smaller than the critical slip-weakening distance, provides a reliable class of solutions for a wide range of elastic and frictional parameters. Nevertheless, critical rupture stages, such as nucleation or very fast steady-state propagation, may show resolution problems and may benefit from adaptive schemes with a space/time variation of the parameters. We used these recipes for bimaterial regularization to perform along-dip dynamic simulations of the Tohoku earthquake in the framework of a slip-weakening model, with a realistic description of the geometry of the interface and the geological structure. We investigated in detail the role of the impedance contrasts in the evolution of the rupture and the short-wavelength radiation. We also show that pathological effects may arise from a poor selection of the regularization parameters.
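The relaxation law quoted above can be illustrated with a single explicit update of the effective normal stress; the sketch below uses illustrative parameter values, not those of the Tohoku simulations.

# Minimal explicit time-stepping of the regularized effective normal stress,
# following the relaxation law quoted above (Cochard & Rice, 2000 type).
# All parameter values are illustrative.
import numpy as np

def step_effective_normal_stress(sigma_eff, sigma_n, v, v_star, delta, dt):
    """One explicit Euler step of d(sigma_eff)/dt = (|v| + v*)/delta * (sigma_n - sigma_eff)."""
    rate = (abs(v) + v_star) / delta
    return sigma_eff + dt * rate * (sigma_n - sigma_eff)

# Example: sigma_n suddenly drops (bimaterial normal-stress perturbation) while
# the fault slips at 1 m/s; sigma_eff relaxes toward sigma_n over ~delta/(|v| + v*).
sigma_eff, sigma_n = 50e6, 40e6        # Pa
v, v_star, delta = 1.0, 0.1, 0.01      # m/s, m/s, m
dt = 1e-4                              # s
for _ in range(50):
    sigma_eff = step_effective_normal_stress(sigma_eff, sigma_n, v, v_star, delta, dt)
print(round(sigma_eff / 1e6, 2), "MPa")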
Nucleosome occupancy as a novel chromatin parameter for replication origin functions
Rodriguez, Jairo; Lee, Laura; Lynch, Bryony; Tsukiyama, Toshio
2017-01-01
Eukaryotic DNA replication initiates from multiple discrete sites in the genome, termed origins of replication (origins). Prior to S phase, multiple origins are poised to initiate replication by recruitment of the pre-replicative complex (pre-RC). For proper replication to occur, origin activation must be tightly regulated. At the population level, each origin has a distinct firing time and frequency of activation within S phase. Many studies have shown that chromatin can strongly influence initiation of DNA replication. However, the chromatin parameters that affect properties of origins have not been thoroughly established. We found that nucleosome occupancy in G1 varies greatly around origins across the S. cerevisiae genome, and nucleosome occupancy around origins significantly correlates with the activation time and efficiency of origins, as well as pre-RC formation. We further demonstrate that nucleosome occupancy around origins in G1 is established during transition from G2/M to G1 in a pre-RC-dependent manner. Importantly, the diminished cell-cycle changes in nucleosome occupancy around origins in the orc1-161 mutant are associated with an abnormal global origin usage profile, suggesting that proper establishment of nucleosome occupancy around origins is a critical step for regulation of global origin activities. Our work thus establishes nucleosome occupancy as a novel and key chromatin parameter for proper origin regulation. PMID:27895110
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard
2008-02-01
In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
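The algebraic (Tykhonov-Phillips) side of the comparison amounts to a ridge-type estimate; the toy sketch below contrasts it with the plain weighted least-squares solution on an ill-conditioned design. The matrices, weights, and α are illustrative and do not reproduce the paper's α-weighted S-homBLE formulas.

# Toy contrast between the (unbiased) weighted least-squares estimate and a
# Tikhonov-Phillips-type regularized estimate x = (A'PA + alpha*R)^(-1) A'P y.
# Matrices, weights and alpha are illustrative, not the paper's actual example.
import numpy as np

rng = np.random.default_rng(4)
n, m = 30, 8
A = np.vander(np.linspace(0, 1, n), m, increasing=True)   # ill-conditioned design
x_true = rng.standard_normal(m)
y = A @ x_true + 0.01 * rng.standard_normal(n)
P = np.eye(n)                     # observation weight matrix
R = np.eye(m)                     # regularization ("substitute") matrix
alpha = 1e-4

x_blue = np.linalg.solve(A.T @ P @ A, A.T @ P @ y)             # unbiased, high variance
x_reg = np.linalg.solve(A.T @ P @ A + alpha * R, A.T @ P @ y)  # biased, lower MSE risk
for name, x in [("LS/BLUUE", x_blue), ("regularized", x_reg)]:
    print(name, round(float(np.linalg.norm(x - x_true)), 3))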
Image degradation characteristics and restoration based on regularization for diffractive imaging
NASA Astrophysics Data System (ADS)
Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun
2017-11-01
Diffractive membrane optical imaging systems are an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based studies of diffractive imaging degradation characteristics and corresponding image restoration methods remain scarce. In this paper, a model of image quality degradation for the diffractive imaging system is first derived mathematically from diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration containing multiple prior constraints is established. An approach is then presented for solving the resulting optimization problem, in which several norms coexist and several regularization parameters (the priors' parameters) must be handled. Subsequently, a space-variant-PSF image restoration method for the large-aperture diffractive imaging system is proposed, in which the image is processed block by block over approximately isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and preservation of image detail, producing satisfactory visual quality. This provides a scientific basis for applications and has promising prospects for future space applications of diffractive membrane imaging technology.
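The block idea of isoplanatic regions can be illustrated by deconvolving each block with its own PSF; the sketch below does this with a simple frequency-domain Wiener filter under a circular boundary assumption, which is not the paper's multi-prior regularization model (real implementations would also overlap and blend blocks). All shapes and PSFs are synthetic.

# Illustrative only: restore an image block by block, each block with its own
# PSF, using a simple frequency-domain Wiener filter. This mimics the
# "isoplanatic region" idea but not the paper's multi-prior regularization model.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape) - np.array([s // 2 for s in shape])[:, None, None]
    g = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def wiener_deconv(blurred, psf, balance=1e-2):
    # Frequency-domain Wiener filter, circular boundary assumption
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    Y = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + balance)))

def restore_space_variant(image, psf_for_block, block=64):
    # Treat each block as an isoplanatic region with its own PSF
    out = np.zeros_like(image)
    for i in range(0, image.shape[0], block):
        for j in range(0, image.shape[1], block):
            sub = image[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = wiener_deconv(sub, psf_for_block(i, j))
    return out

img = np.zeros((256, 256)); img[96:160, 96:160] = 1.0
blurred = gaussian_filter(img, sigma=2.0)
# Here the PSF is the same for every block; in a space-variant system it would
# depend on the block position (i, j).
restored = restore_space_variant(blurred, lambda i, j: gaussian_psf((64, 64), 2.0))
print(float(np.abs(restored - img).mean()))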
Lowering the Radiation Dose in Dental Offices.
Radan, Elham
2017-04-01
While dental imaging continues to evolve into more advanced modalities such as 3-D cone beam computed tomography, in addition to conventional 2-D imaging (intraoral, panoramic and cephalometric), public concern about radiation safety is also increasing. This article is a guide to reducing patients' exposure to the minimum through proper selection criteria (imaging only when it benefits the patient) and knowledge of effective doses, exposure parameters and proper collimation.
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem that requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term and a total variation (TV) regularizer have commonly been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts that degrade image quality. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced in place of TV to eliminate these undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework removes blurring and impulse noise while maintaining image edge details. Comprehensive experiments demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
Learning Aggregation Operators for Preference Modeling
NASA Astrophysics Data System (ADS)
Torra, Vicenç
Aggregation operators are useful tools for modeling preferences. Such operators include weighted means, OWA and WOWA operators, as well as some fuzzy integrals, e.g. Choquet and Sugeno integrals. To apply these operators in an effective way, their parameters have to be properly defined. In this chapter, we review some of the existing tools for learning these parameters from examples.
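As a concrete example of one such operator, the sketch below applies an OWA operator and fits its weights from example input-output pairs with a non-negative least-squares heuristic; this illustrates the learning-from-examples idea but is not one of the specific algorithms reviewed in the chapter.

# Sketch: applying an OWA (ordered weighted averaging) operator and fitting its
# weights from example (inputs, desired output) pairs. The non-negative
# least-squares fit plus renormalization is one simple learning heuristic.
import numpy as np
from scipy.optimize import nnls

def owa(x, w):
    """OWA: weights are applied to the inputs sorted in descending order."""
    return np.sort(x)[::-1] @ w

def fit_owa_weights(X, y):
    """Fit weights from examples: sort each input row, solve NNLS, renormalize."""
    Xs = -np.sort(-X, axis=1)              # each row in descending order
    w, _ = nnls(Xs, y)                     # non-negative weights
    return w / w.sum()                     # weights must sum to one

rng = np.random.default_rng(5)
w_true = np.array([0.5, 0.3, 0.15, 0.05])  # emphasizes the largest inputs
X = rng.random((200, 4))
y = np.array([owa(row, w_true) for row in X])
w_hat = fit_owa_weights(X, y)
print(np.round(w_hat, 3))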
One hundred years of hydrographic measurements in the Baltic Sea
NASA Astrophysics Data System (ADS)
Fonselius, Stig; Valderrama, Jorge
2003-06-01
The first measurements of salinity of the deep water in the open Baltic Sea were made in the last decades of the 1800s. At a Scandinavian science meeting in Copenhagen in 1892, Professor Otto Pettersson from Sweden suggested that regular measurements of hydrographic parameters should be carried out at some important deep stations in the Baltic Sea. His suggestion was adopted and since that time we have rather complete hydrographical data from the Bornholm Deep, the Gotland Deep, and the Landsort Deep and from some stations in the Gulf of Bothnia. The measurements were interrupted in the Baltic Proper during the two World Wars. At the beginning only salinity, temperature and dissolved oxygen were measured and one or two expeditions were carried out annually, mostly in summer. In the 1920s also alkalinity and pH were occasionally measured and total carbonate was calculated. A few nutrient measurements were also carried out. After World War II we find results from four or more expeditions every year and intercalibration of methods was arranged. Results of temperature, salinity and dissolved oxygen measurements from the Bornholm Deep, the Gotland Deep, the Landsort Deep and salinity measurements from three stations in the Gulf of Bothnia, covering the whole 20th century are presented and discussed. The salinity distribution and the variations between oxygen and hydrogen sulphide periods in the deep water of the Gotland Deep and the Landsort Deep are demonstrated. Series of phosphate and nitrate distribution in the Gotland Deep are shown from the 1950s to the present and the effects of the stagnant conditions are briefly discussed. Two large inflows of highly saline water, the first during the First World War and the second in 1951, are demonstrated. The 20th century minimum salinity of the bottom water in the Baltic Proper in 1992 is discussed.
NASA Astrophysics Data System (ADS)
Pasyanos, Michael E.; Franz, Gregory A.; Ramirez, Abelardo L.
2006-03-01
In an effort to build seismic models that are the most consistent with multiple data sets we have applied a new probabilistic inverse technique. This method uses a Markov chain Monte Carlo (MCMC) algorithm to sample models from a prior distribution and test them against multiple data types to generate a posterior distribution. While computationally expensive, this approach has several advantages over deterministic models, notably the seamless reconciliation of different data types that constrain the model, the proper handling of both data and model uncertainties, and the ability to easily incorporate a variety of prior information, all in a straightforward, natural fashion. A real advantage of the technique is that it provides a more complete picture of the solution space. By mapping out the posterior probability density function, we can avoid simplistic assumptions about the model space and allow alternative solutions to be identified, compared, and ranked. Here we use this method to determine the crust and upper mantle structure of the Yellow Sea and Korean Peninsula region. The model is parameterized as a series of seven layers in a regular latitude-longitude grid, each of which is characterized by thickness and seismic parameters (Vp, Vs, and density). We use surface wave dispersion and body wave traveltime data to drive the model. We find that when properly tuned (i.e., the Markov chains have had adequate time to fully sample the model space and the inversion has converged), the technique behaves as expected. The posterior model reflects the prior information at the edge of the model where there is little or no data to constrain adjustments, but the range of acceptable models is significantly reduced in data-rich regions, producing values of sediment thickness, crustal thickness, and upper mantle velocities consistent with expectations based on knowledge of the regional tectonic setting.
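The core of the approach can be illustrated with a minimal Metropolis-Hastings sampler that combines a prior with several data likelihoods; the two-parameter toy model and Gaussian misfits below stand in for the actual layered parameterization and the dispersion/traveltime forward problems.

# Minimal Metropolis-Hastings sampler illustrating the MCMC idea: propose
# candidate models with a random walk and accept them according to how well
# they fit several data sets at once, plus the prior.
import numpy as np

def log_posterior(m, datasets, prior_mean, prior_sigma):
    lp = -0.5 * np.sum(((m - prior_mean) / prior_sigma) ** 2)      # prior
    for predict, obs, sigma in datasets:                           # multiple data types
        lp += -0.5 * np.sum(((predict(m) - obs) / sigma) ** 2)     # likelihoods
    return lp

def metropolis(datasets, prior_mean, prior_sigma, step, n_samples=20000, seed=6):
    rng = np.random.default_rng(seed)
    m = prior_mean.copy()
    lp = log_posterior(m, datasets, prior_mean, prior_sigma)
    chain = []
    for _ in range(n_samples):
        cand = m + step * rng.standard_normal(m.size)
        lp_cand = log_posterior(cand, datasets, prior_mean, prior_sigma)
        if np.log(rng.random()) < lp_cand - lp:        # accept with prob min(1, ratio)
            m, lp = cand, lp_cand
        chain.append(m.copy())
    return np.array(chain)

# Toy model: two parameters (e.g. crustal thickness, upper-mantle Vs) observed
# through two different linear "data types".
prior_mean, prior_sigma = np.array([35.0, 4.4]), np.array([10.0, 0.5])
truth = np.array([30.0, 4.6])
d1 = (lambda m: np.array([m[0] + 2.0 * m[1]]), np.array([truth[0] + 2.0 * truth[1]]), 0.5)
d2 = (lambda m: np.array([m[1]]), np.array([truth[1]]), 0.05)
chain = metropolis([d1, d2], prior_mean, prior_sigma, step=np.array([0.5, 0.02]))
print(chain[5000:].mean(axis=0).round(2), chain[5000:].std(axis=0).round(2))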
Caramia, Carlotta; Bernabucci, Ivan; D'Anna, Carmen; De Marchis, Cristiano; Schmid, Maurizio
2017-01-01
The widespread and pervasive use of smartphones for sending messages, calling, and entertainment purposes, mainly among young adults, is often accompanied by the concurrent execution of other tasks. Recent studies have analyzed how texting, reading or calling while walking-in some specific conditions-might significantly influence gait parameters. The aim of this study is to examine the effect of different smartphone activities on walking, evaluating the variations of several gait parameters. 10 young healthy students (all smartphone proficient users) were instructed to text chat (with two different levels of cognitive load), call, surf on a social network or play with a math game while walking in a real-life outdoor setting. Each of these activities is characterized by a different cognitive load. Using an inertial measurement unit on the lower trunk, spatio-temporal gait parameters, together with regularity, symmetry and smoothness parameters, were extracted and grouped for comparison among normal walking and different dual task demands. An overall significant effect of task type on the aforementioned parameters group was observed. The alterations in gait parameters vary as a function of cognitive effort. In particular, stride frequency, step length and gait speed show a decrement, while step time increases as a function of cognitive effort. Smoothness, regularity and symmetry parameters are significantly altered for specific dual task conditions, mainly along the mediolateral direction. These results may lead to a better understanding of the possible risks related to walking and concurrent smartphone use.
Behavioral Patterns in Special Education. Good Teaching Practices
Rodríguez-Dorta, Manuela; Borges, África
2017-01-01
Providing quality education means responding to the diversity in the classroom. The teacher is a key figure in responding to the various educational needs presented by students. Specifically, special education professionals are of great importance, as they are the ones who support regular classroom teachers and offer specialized educational assistance to students who require it. Special education is therefore different from what takes place in the regular classroom, demanding greater commitment from the teacher. There are certain behaviors, considered good teaching practices, that have always been associated with good teaching and good learning. To ensure that these teachers are carrying out their educational work properly, evaluation is necessary, which means having appropriate instruments. The Observational Protocol for Teaching Functions in Primary School and Special Education (PROFUNDO-EPE, v.3, in Spanish) makes it possible to capture the behaviors of these professionals and the behavioral patterns that correspond to good teaching practices. This study evaluates the behavior of two special education teachers who work with students from different educational stages and with different educational needs. It reveals that the analyzed teachers adapt their behavior to the needs and characteristics of their students, responding adequately to the needs the students present and showing good teaching practices. The patterns obtained indicate that they offer support, help and clear guidelines for performing the tasks. They motivate students toward learning by providing positive feedback, and they check that students have properly assimilated the content through questions or non-verbal supervision. They also provide a safe and reliable climate for learning. PMID:28512437
Chemical interactions and thermodynamic studies in aluminum alloy/molten salt systems
NASA Astrophysics Data System (ADS)
Narayanan, Ramesh
The recycling of aluminum and aluminum alloys, such as Used Beverage Containers (UBC), is done under a cover of molten salt flux based on NaCl-KCl plus fluorides. The reactions of aluminum alloys with molten salt fluxes have been investigated. Thermodynamic calculations are performed in the alloy/salt flux systems, which allow quantitative predictions of the equilibrium compositions. There is preferential reaction of Mg in the Al-Mg alloy with molten salt fluxes, especially those containing fluorides like NaF. An exchange reaction between the Al-Mg alloy and the molten salt flux has been demonstrated: Mg from the Al-Mg alloy transfers into the salt flux while Na from the salt flux transfers into the metal. Thermodynamic calculations indicated that the amount of Na in the metal increases as the Mg content in the alloy and/or the NaF content in the reacting flux increases. This is an important point because small amounts of Na have a detrimental effect on the mechanical properties of the Al-Mg alloy. The reactions of Al alloys with molten salt fluxes result in the formation of bluish-purple "streamers". It was established that the streamer is liquid alkali metal (Na and K in the case of NaCl-KCl-NaF systems) dissipating into the melt. The melts in which such streamers were observed are identified. The metal losses occurring due to reactions have been quantified, both by thermodynamic calculations and experimentally. A computer program has been developed to calculate ternary phase diagrams in molten salt systems from the constituent binary phase diagrams, based on a regular solution model. The extent of deviation of the binary systems from regular-solution behavior has been quantified. The systems investigated in which good agreement was found between the calculated and experimental phase diagrams included NaF-KF-LiF, NaCl-NaF-NaI and KNO3-TlNO3-LiNO3. Furthermore, insight has been provided into the interrelationship between the regular solution parameters and the topology of the phase diagram. The isotherms are flat (i.e. no skewness) when the regular solution parameters are zero. When the regular solution parameters are non-zero, the isotherms are skewed. A regular solution model is not adequate to accurately model the molten salt systems used in recycling, such as NaCl-KCl-LiF and NaCl-KCl-NaF.
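The regular solution model referred to above has a simple closed form; the sketch below evaluates the excess Gibbs energy, the activity coefficients, and the classic miscibility-gap criterion for a binary system with an illustrative (not fitted) interaction parameter.

# Sketch of a binary regular-solution model: excess Gibbs energy, activity
# coefficients, and the miscibility-gap criterion Omega > 2RT. The interaction
# parameter below is illustrative, not a fitted value for these salt systems.
import numpy as np

R = 8.314  # J/(mol K)

def regular_solution(x_b, omega, T):
    """Excess Gibbs energy and activity coefficients for A-B at mole fraction x_b of B."""
    x_a = 1.0 - x_b
    g_excess = omega * x_a * x_b
    gamma_a = np.exp(omega * x_b ** 2 / (R * T))
    gamma_b = np.exp(omega * x_a ** 2 / (R * T))
    return g_excess, gamma_a, gamma_b

T = 1000.0                      # K
omega = 12000.0                 # J/mol, hypothetical interaction parameter
for xb in np.linspace(0.01, 0.99, 5):
    g, ga, gb = regular_solution(xb, omega, T)
    print(f"x_B={xb:.2f}  G_xs={g:7.1f} J/mol  gamma_A={ga:.3f}  gamma_B={gb:.3f}")
print("Miscibility gap predicted below", round(omega / (2 * R), 1), "K (Omega > 2RT)")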
Postnova, Svetlana; Robinson, Peter A; Postnov, Dmitry D
2013-01-01
Shift work has become an integral part of our life with almost 20% of the population being involved in different shift schedules in developed countries. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep that leads to increase of sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and optimizing shifts timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g.; at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.
Aktaruzzaman, M; Migliorini, M; Tenhunen, M; Himanen, S L; Bianchi, A M; Sassi, R
2015-05-01
The work considers automatic sleep stage classification, based on heart rate variability (HRV) analysis, with a focus on the distinction of wakefulness (WAKE) from sleep and rapid eye movement (REM) from non-REM (NREM) sleep. A set of 20 automatically annotated one-night polysomnographic recordings was considered, and artificial neural networks were selected for classification. For each inter-heartbeat (RR) series, besides features previously presented in the literature, we introduced a set of four parameters related to signal regularity. RR series of three different lengths were considered (corresponding to 2, 6, and 10 successive epochs, 30 s each, in the same sleep stage). Two sets of only four features captured 99% of the data variance in each classification problem, and both of them contained one of the new regularity features proposed. The accuracy of classification for REM versus NREM (68.4%, 2 epochs; 83.8%, 10 epochs) was higher than when distinguishing WAKE versus SLEEP (67.6%, 2 epochs; 71.3%, 10 epochs). Also, the reliability parameter (Cohen's kappa) was higher (0.68 and 0.45, respectively). Sleep stage classification based on HRV was still less precise than other staging methods that employ a larger variety of signals collected during polysomnographic studies. However, cheap and unobtrusive HRV-only sleep classification proved sufficiently precise for a wide range of applications.
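Since the abstract does not specify the four regularity parameters, the sketch below uses sample entropy as one generic regularity index, together with standard HRV features and a small neural network; the synthetic RR series and two-class setup are illustrative only.

# Illustrative sketch only: extract a few HRV features from an RR series,
# including sample entropy as a generic "regularity" index (the paper's actual
# four regularity parameters are not specified in the abstract), then train a
# small neural network on labelled epochs. Data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

def sample_entropy(rr, m=2, r_factor=0.2):
    """SampEn(m, r): lower values indicate a more regular series."""
    rr = np.asarray(rr, dtype=float)
    r = r_factor * rr.std()
    def count(mm):
        templ = np.array([rr[i:i + mm] for i in range(len(rr) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        return (d <= r).sum() - len(templ)      # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def hrv_features(rr):
    drr = np.diff(rr)
    return [rr.mean(), rr.std(), np.sqrt((drr ** 2).mean()), sample_entropy(rr)]

rng = np.random.default_rng(7)
X, y = [], []
for label, (mean, sd) in enumerate([(0.85, 0.05), (1.0, 0.09)]):   # two toy "stages"
    for _ in range(60):
        rr = mean + sd * rng.standard_normal(240)                  # ~2 epochs of beats
        X.append(hrv_features(rr)); y.append(label)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))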
Classification of mislabelled microarrays using robust sparse logistic regression.
Bootkrajang, Jakramate; Kabán, Ata
2013-04-01
Previous studies reported that labelling errors are not uncommon in microarray datasets. In such cases, the training set may become misleading, and the ability of classifiers to make reliable inferences from the data is compromised. Yet, few methods are currently available in the bioinformatics literature to deal with this problem. The few existing methods focus on data cleansing alone, without reference to classification, and their performance crucially depends on some tuning parameters. In this article, we develop a new method to detect mislabelled arrays simultaneously with learning a sparse logistic regression classifier. Our method may be seen as a label-noise robust extension of the well-known and successful Bayesian logistic regression classifier. To account for possible mislabelling, we formulate a label-flipping process as part of the classifier. The regularization parameter is automatically set using Bayesian regularization, which not only saves the computation time that cross-validation would take, but also eliminates any unwanted effects of label noise when setting the regularization parameter. Extensive experiments with both synthetic data and real microarray datasets demonstrate that our approach is able to counter the bad effects of labelling errors in terms of predictive performance, it is effective at identifying marker genes and simultaneously it detects mislabelled arrays to high accuracy. The code is available from http://cs.bham.ac.uk/∼jxb008. Supplementary data are available at Bioinformatics online.
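The label-flipping idea can be sketched as follows: the observed label is assumed to be flipped with probabilities that are learned jointly with the classifier weights. The sketch uses a plain L2 penalty and maximum likelihood rather than the paper's Bayesian sparsity prior and automatic regularization, so it is only a schematic illustration.

# Minimal sketch of label-noise-robust logistic regression: the observed label
# may be flipped with probabilities g01 (0->1) and g10 (1->0), learned jointly
# with the weights. A plain L2 penalty stands in for the paper's sparsity prior.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(params, X, y, lam):
    w, a01, a10 = params[:-2], params[-2], params[-1]
    g01, g10 = expit(a01), expit(a10)            # flip probabilities in (0, 1)
    p_clean = expit(X @ w)                       # P(true label = 1 | x)
    p_obs1 = (1 - g10) * p_clean + g01 * (1 - p_clean)   # P(observed label = 1 | x)
    p_obs1 = np.clip(p_obs1, 1e-9, 1 - 1e-9)
    nll = -np.sum(y * np.log(p_obs1) + (1 - y) * np.log(1 - p_obs1))
    return nll + lam * np.sum(w ** 2)

rng = np.random.default_rng(8)
n, d = 400, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d); w_true[:3] = [2.0, -1.5, 1.0]          # few informative "genes"
y_true = (expit(X @ w_true) > rng.random(n)).astype(float)
flip = rng.random(n) < 0.1                                   # 10% of labels corrupted
y_obs = np.where(flip, 1 - y_true, y_true)

x0 = np.concatenate([np.zeros(d), [-2.0, -2.0]])             # start with small flip rates
res = minimize(neg_log_lik, x0, args=(X, y_obs, 1.0), method="L-BFGS-B")
g01, g10 = expit(res.x[-2]), expit(res.x[-1])
print("estimated flip probabilities:", round(g01, 3), round(g10, 3))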
An Assessment of Oral Hygiene in 7-14-Year-Old Children undergoing Orthodontic Treatment.
Krupińska-Nanys, Magdalena; Zarzecka, Joanna
2015-01-01
The study focused on the increased risk of dental plaque accumulation among children undergoing orthodontic treatment, in consideration of individual hygiene and dietary habits. The study was conducted among 91 children aged 7-14, including 47 girls and 44 boys. The main parameters assessed were the API index, plaque pH, the DMF index, and proper hygiene and dietary habits. Statistical analysis was performed in a Microsoft Office Excel spreadsheet and the STATISTICA statistical software. The average API index among children wearing a removable appliance was 9 (SD = 13), and among children without appliances it was 16 (SD = 21). The DMF index for patients using appliances was 5 (SD = 3) and for those without appliances was 4 (SD = 2). The average plaque pH was 6 for children with appliances (SD = 0.9) and 6.2 without (SD = 0.3). In patients at higher risk of dental plaque accumulation, correct oral hygiene supported by regular visits to the dentist is one of the best ways to control dental caries. In the fight against caries, the most effective and only approach is to promote awareness of the problem, foster proper hygiene and nutritional habits, and educate children from a very young age in how to maintain proper oral hygiene.
Marlan, Stanton
2016-04-01
This paper represents an archetypal and deconstructive reading of the work of Wolfgang Giegerich. In an attempt to extend and philosophically develop Jung's late-life view of the objective psyche, Giegerich, via Hegel, defines psychology proper as fundamentally separate from the everyday person and the 'human, all-too-human' aspects of the soul. It is argued that, in so doing, Giegerich removes the human person from being the primary focus of his psychology and creates instead a hierarchy of ideas and values privileging syntax over semantics, the logical over the empirical, and thinking over imagination. This bypasses the emotionality of the everyday person/patient and also renders psychology proper unable to address the day-to-day practice of the analyst. Giegerich attempts to rectify this problem by re-incorporating what he had previously rejected, making his theory more complex than is apparent in his binary oppositions. In the end, however, it remains a question to what extent Giegerich is successful in avoiding a binary scission (Saban 2015) or a tendency to regularly de-emphasize the human aspect of the soul (Hoedl 2015) in his need to continue to heroically push off from the ego seeking total freedom from neurosis and from our humanity. © 2016, The Society of Analytical Psychology.
Subcaliber discarding sabot airgun projectiles.
Frank, Matthias; Schönekeß, Holger; Herbst, Jörg; Staats, Hans-Georg; Ekkernkamp, Axel; Nguyen, Thanh Tien; Bockholdt, Britta
2014-03-01
Medical literature abounds with reports on injuries and fatalities caused by airgun projectiles. While round balls or diabolo pellets have been the standard projectiles for airguns for decades, today there is a large number of different airgun projectiles available. A very uncommon, and until now unique, discarding sabot airgun projectile (Sussex Sabo Bullet) was introduced into the market in the 1980s. The projectile, available in 0.177 (4.5 mm) and 0.22 (5.5 mm) caliber, consists of a plastic sabot cup surrounding a subcaliber copper-coated lead projectile of typical bullet shape. Following the typical principle of a discarding sabot projectile, the lightweight sabot is supposed to quickly lose velocity and fall to the ground downrange while the bullet continues on target. These sabot-loaded projectiles are of special forensic interest due to their non-traceability and ballistic parameters. Therefore, the aim of this work is to investigate the ballistic performance of these sabot airgun projectiles by high-speed video analyses and by measurement of the kinetic parameters of the projectile parts with a transient recording system, as well as by observing their physical features after being fired. While the sabot principle worked properly in high-energy airguns (E > 17 J), separation of the core projectile from the sabot cup was also observed when discharged in low-energy airguns (E < 7.5 J). While the velocity of the discarded Sussex Sabo core projectile was very close to the velocity of a diabolo-type reference projectile (RWS Meisterkugel), the energy density was up to 60% higher. To conclude, this work is the first study to demonstrate the regular function of this uncommon type of airgun projectile.
Kennedy, Chelsey E; Krieger, Kari Beth; Sutovsky, Miriam; Xu, Wei; Vargovič, Peter; Didion, Bradley A; Ellersieck, Mark R; Hennessy, Madison E; Verstegen, John; Oko, Richard; Sutovsky, Peter
2014-05-01
Post-acrosomal WW-domain binding protein (PAWP) is a signaling molecule located in the post-acrosomal sheath (PAS) of mammalian spermatozoa. We hypothesized that the proper integration of PAWP in the sperm PAS is reflective of bull-sperm quality and fertility. Cryopreserved semen samples from 298 sires of acceptable, but varied, fertility used in artificial insemination services were analyzed using immunofluorescence microscopy and flow cytometry for PAWP protein. In normal spermatozoa, PAWP fluorescence formed a regular band around the proximal PAS. Anomalies of PAWP labeling in defective spermatozoa were reflected in flow cytometry by varied intensities of PAWP-induced fluorescence. Distinct sperm phenotypes were also identified, including morphologically normal and some defective spermatozoa with moderate levels of PAWP; grossly defective spermatozoa with low/no PAWP; and defective spermatozoa with high PAWP. Analysis by ImageStream flow cytometry confirmed the prevalence of abnormal sperm phenotypes in the spermatozoa with abnormal PAWP content. Live/dead staining and video recording showed that some abnormal spermatozoa are viable and capable of progressive motility. Conventional flow-cytometric measurements of PAWP correlated significantly with semen quality and fertility parameters that reflect the sires' artificial insemination fertility, including secondary sperm morphology, conception rate, non-return rate, and residual value. A multiplex, flow-cytometric test detecting PAWP, aggresomes (ubiquitinated protein aggregates), and acrosomal integrity (peanut-agglutinin-lectin labeling) had a predictive value for conception rate, as demonstrated by step-wise regression analysis. We conclude that PAWP correlates with semen/fertility parameters used in the cattle artificial insemination industry, making PAWP a potential biomarker of bull fertility. © 2014 Wiley Periodicals, Inc.
Using a voice to put a name to a face: the psycholinguistics of proper name comprehension.
Barr, Dale J; Jackson, Laura; Phillips, Isobel
2014-02-01
We propose that hearing a proper name (e.g., Kevin) in a particular voice serves as a compound memory cue that directly activates representations of a mutually known target person, often permitting reference resolution without any complex computation of shared knowledge. In a referential communication study, pairs of friends played a communication game, in which we monitored the eyes of one friend (the addressee) while he or she sought to identify the target person, in a set of four photos, on the basis of a name spoken aloud. When the name was spoken by a friend, addressees rapidly identified the target person, and this facilitation was independent of whether the friend was articulating a message he or she had designed versus one from a third party with whom the target person was not shared. Our findings suggest that the comprehension system takes advantage of regularities in the environment to minimize effortful computation about who knows what.
Tactical physical preparation: the case for a movement-based approach.
Kechijian, Doug; Rush, Stephen
2012-01-01
Progressive injury prevention and physical preparation programs are needed in military special operations to optimize mission success and Operator quality of life and longevity. While physical risk is inherent in Special Operations, non-traumatic injuries resulting from overuse, poor biomechanics, and arbitrary exercise selection can be alleviated with proper medical care and patient education. An integrated approach to physical readiness that recognizes the continuity between rehabilitation and performance training is advocated to ensure that physiological adaptations do not come at the expense of orthopedic health or movement proficiency. Movement quality should be regularly evaluated and enforced throughout the training process to minimize preventable injuries and avoid undermining previous rehabilitative care. While fitness and proper movement are not substitutes for Operator-specific tasks, they are foundational to many tactically relevant skills. In light of how much is at stake, sports medicine care in the military, especially special operations, should parallel that practiced in professional and collegiate athletics.
Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup
NASA Astrophysics Data System (ADS)
Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.
2014-11-01
We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.
Effects of high-frequency damping on iterative convergence of implicit viscous solver
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko
2017-11-01
This paper discusses effects of high-frequency damping on iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. In both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and 4/3 are suitable values for robust and efficient computations, and α = 4 / 3 is recommended for the diffusion equation, which achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-Free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, which provides robust and efficient convergence for a wide range of α.
Vacuum casting of thick polymeric films
NASA Technical Reports Server (NTRS)
Cuddihy, E. F.; Moacanin, J.
1979-01-01
Bubble formation and layering, which often plague vacuum-evaporated films, are prevented by properly regulating process parameters. Vacuum casting may be applicable to forming thick films of other polymer/solvent solutions.
Chen, Po-Wen; Lin, Chang; Chen, Chung-De; Chen, Wen-Ying; Mao, Frank Chiahung
2013-04-01
Glucocorticoids (GCs) are often prescribed in clinics but many adverse effects are also attributed to GCs. It is important to determine the role of GCs in the development of those adverse effects. Here, we investigated the impact of GCs on trivalent chromium (Cr) distribution in animals. Cr has been proposed to be important for proper insulin sensitivity, and deficits may lead to disruption of metabolism. For comparison, the effect of a high-fat diet on Cr modulation was also evaluated. C57BL/6JNarl mice were fed regular or high-fat diets for 12 weeks and further grouped for treatment with prednisolone or saline. Cr levels in tissues were determined 12 h after the treatments. Interestingly, prednisolone treatment led to significantly reduced Cr levels in fat tissue in mice fed regular diets; compared to the high-fat diet alone, prednisolone plus the high-fat diet led to a further reduction in Cr levels in the liver, muscle, and fat. Notably, a single dose of prednisolone was linked with elevated Cr levels in the thigh bones of mice fed by either regular or high-fat diets. In conclusion, this report has provided evidence that prednisolone in combination with a high-fat diet effects modulation of Cr levels in selected tissues.
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize electroencephalogram in high spatial resolution. We investigated the inverse technique of cortical dipole imaging using a truncated total least squares (TTLS). The TTLS is a regularization technique to reduce the influence from both the measurement noise and the transfer matrix error caused by the head model distortion. The estimation of the regularization parameter was also investigated based on L-curve. The computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed the TTLS provided the high spatial resolution of cortical dipole imaging.
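As a rough illustration of the truncated total least squares step described above, the following Python sketch computes the TTLS estimate from the SVD of the augmented matrix [A | b]; the function name and the truncation index k are illustrative, and this is not the authors' cortical dipole imaging code.

```python
import numpy as np

def ttls_solve(A, b, k):
    """Truncated total least squares: truncate the SVD of the augmented
    matrix [A | b] at rank k to damp both measurement noise and errors
    in the transfer matrix A (a standard TTLS formulation)."""
    m, n = A.shape
    C = np.column_stack([A, b])                   # augmented matrix [A | b]
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T
    V12 = V[:n, k:]                               # discarded right singular vectors, top block
    V22 = V[n:, k:]                               # discarded right singular vectors, last row
    # x = -V12 * pinv(V22); V22 has one row, so pinv(V22) = V22^T / ||V22||^2
    return -(V12 @ V22.T).ravel() / np.sum(V22 ** 2)
```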
The Majorana Demonstrator calibration system
Abgrall, N.; Arnquist, I. J.; Avignone, III, F. T.; ...
2017-08-08
The Majorana Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The Majorana Demonstrator is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a 1-ton 76Ge-based search. The ultra low-background conditions require regular calibrations to verify proper function of the detectors. Radioactive line sources can be deployed around the cryostats containing the detectors for regular energy calibrations. When measuring in low-background mode, these line sources have to be stored outside the shielding so they do not contribute to the background. The deployment and the retraction of the source are designed to be controlled by the data acquisition system and do not require any direct human interaction. In this study, we detail the design requirements and implementation of the calibration apparatus, which provides the event rates needed to define the pulse-shape cuts and energy calibration used in the final analysis as well as data that can be compared to simulations.
The MAJORANA DEMONSTRATOR calibration system
NASA Astrophysics Data System (ADS)
Abgrall, N.; Arnquist, I. J.; Avignone, F. T., III; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Bradley, A. W.; Brudanin, V.; Busch, M.; Buuck, M.; Caldwell, T. S.; Christofferson, C. D.; Chu, P.-H.; Cuesta, C.; Detwiler, J. A.; Dunagan, C.; Efremenko, Yu.; Ejiri, H.; Elliott, S. R.; Fu, Z.; Gehman, V. M.; Gilliss, T.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guinn, I. S.; Guiseppe, V. E.; Haufe, C. R.; Henning, R.; Hoppe, E. W.; Howe, M. A.; Jasinski, B. R.; Keeter, K. J.; Kidd, M. F.; Konovalov, S. I.; Kouzes, R. T.; Lopez, A. M.; MacMullin, J.; Martin, R. D.; Massarczyk, R.; Meijer, S. J.; Mertens, S.; Orrell, J. L.; O'Shaughnessy, C.; Poon, A. W. P.; Radford, D. C.; Rager, J.; Reine, A. L.; Rielage, K.; Robertson, R. G. H.; Shanks, B.; Shirchenko, M.; Suriano, A. M.; Tedeschi, D.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.; Zhu, B. X.
2017-11-01
The MAJORANA Collaboration is searching for the neutrinoless double-beta decay of the nucleus 76Ge. The MAJORANA DEMONSTRATOR is an array of germanium detectors deployed with the aim of implementing background reduction techniques suitable for a 1-ton 76Ge-based search. The ultra low-background conditions require regular calibrations to verify proper function of the detectors. Radioactive line sources can be deployed around the cryostats containing the detectors for regular energy calibrations. When measuring in low-background mode, these line sources have to be stored outside the shielding so they do not contribute to the background. The deployment and the retraction of the source are designed to be controlled by the data acquisition system and do not require any direct human interaction. In this paper, we detail the design requirements and implementation of the calibration apparatus, which provides the event rates needed to define the pulse-shape cuts and energy calibration used in the final analysis as well as data that can be compared to simulations.
Künzel, W
1980-01-01
The regular examination of 20,000 children and juveniles aged 6 to 15 (permanent teeth) and some 12,500 three- to eight-year-olds (deciduous teeth) has shown that the incidence of dental caries (DMF/T and df/t indices) is directly dependent upon the constant fluoridation of drinking water (1.0 +/- 0.1 ppm F). The reduction in dental caries observed on both deciduous and permanent teeth as a result of twelve years of fluoridation of drinking water, which was started in Karl-Marx-Stadt in 1959, was followed, because of the necessity to temporarily discontinue the addition of fluorine salts to the drinking water, by a slight increase in caries which could be checked through refluorination. After eighteen years of fluoridation of drinking water, the situation can again be considered to be in equilibrium. The need for proper fluoridation and regular control thereof through analyzing the fluorine content of drinking water is pointed out.
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of contour transition-turn rate are used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
NASA Astrophysics Data System (ADS)
Tikhonov, Denis S.; Vishnevskiy, Yury V.; Rykov, Anatolii N.; Grikina, Olga E.; Khaikin, Leonid S.
2017-03-01
A semi-experimental equilibrium structure of free molecules of pyrazinamide has been determined for the first time using the gas electron diffraction method. The refinement was carried out using regularization of the geometry by calculated quantum chemical parameters. It is discussed to what extent the final structure is experimental. A numerical approach for estimating the amount of experimental information in the refined parameters is suggested. The following values of selected internuclear distances were determined (values are in Å with 1σ in the parentheses): re(Cpyrazine-Cpyrazine)av = 1.397(2), re(Npyrazine-Cpyrazine)av = 1.332(3), re(Cpyrazine-Camide) = 1.493(1), re(Namide-Camide) = 1.335(2), re(Oamide-Camide) = 1.219(1). The given standard deviations represent pure experimental uncertainties without the influence of regularization.
The unsaturated flow in porous media with dynamic capillary pressure
NASA Astrophysics Data System (ADS)
Milišić, Josipa-Pina
2018-05-01
In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with a dynamic capillary pressure-saturation relationship in which the relaxation parameter depends on the saturation. Following the approach given in [13], the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit as the regularization parameter goes to zero are obtained by using appropriate test functions, motivated by the fact that the considered PDE allows a natural generalization of the classical Kullback entropy. Finally, special care was given to obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the obtained a priori estimates on the saturation.
Regularities in Low-Temperature Phosphatization of Silicates
NASA Astrophysics Data System (ADS)
Savenko, A. V.
2018-01-01
The regularities in low-temperature phosphatization of silicates are defined from long-term experiments on the interaction between different silicate minerals and phosphate-bearing solutions over a wide range of medium acidity. It is shown that the parameters of the phosphatization reaction of hornblende, orthoclase, and labradorite have the same values as for clayey minerals (kaolinite and montmorillonite). This effect may appear if phosphatization proceeds not on silicate minerals with different structures and compositions, but on a secondary silicate phase formed upon interaction between silicates and water and stable in a certain pH range. The variation in the parameters of the phosphatization reaction at pH ≈ 1.8 is due to the stability of a silicate phase different from that at higher pH values.
A unified framework for approximation in inverse problems for distributed parameter systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1988-01-01
A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.
Detailed studies on three open clusters from the Gaia-ESO Survey (GES)
NASA Astrophysics Data System (ADS)
Balaguer-Núnez, L.; Casamiquela, L.; Jordana, N.; Massana, P.; Jordi, C.; Masana, E.
2017-03-01
We present results for the intermediate-age and old open clusters NGC 6633, NGC 6705 (M 11) and NGC 2682 (M 67). We have used new Strömgren-Crawford photometry, proper motions from ROA observations, and spectral information from the Gaia-ESO Survey (GES) to study the physical parameters of the stars in the three clusters' areas. The astrometric studies cover an area of about 1°x2° and reach down to r' ˜ 17, while our INT-WFC CCD intermediate-band photometry covers an area of about 40'x40' down to V ˜ 19. The stars in those areas selected as cluster members from their proper motions are classified into photometric regions and their physical parameters determined, using uvbyHβ photometry and standard relations among colour indices for each of the photometric regions of the HR diagram. This allows us to determine reddening, distances, absolute magnitudes, spectral types, effective temperatures, gravities and metallicities, thus providing an astrophysical characterization of the clusters. These results are compared with the physical parameters obtained from GES spectral data as well as radial velocities to confirm membership. All these data lead us to a comparison of photometric and spectroscopic physical parameters.
Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben
2013-11-01
Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
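A minimal sketch of the multi-step weighting idea (strategy (b)) for a log-linearized diffusion model is given below, assuming a generic design matrix; the function and variable names are illustrative, and the details of the authors' estimators differ.

```python
import numpy as np

def wlls(design, signals, n_iter=2):
    """Weighted linear least squares on log-signals.  Weights are the
    squares of the *predicted* signals, re-estimated in each pass from
    the previous parameter estimate (a multi-step strategy), starting
    from an ordinary (unweighted) linear fit."""
    y = np.log(signals)
    beta = np.linalg.lstsq(design, y, rcond=None)[0]      # plain LLS start
    for _ in range(n_iter):
        w = np.exp(design @ beta) ** 2                    # predicted-signal weights
        A = design * w[:, None]                           # row-weighted design matrix
        beta = np.linalg.solve(design.T @ A, design.T @ (w * y))
    return beta
```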
Structural characterization of the packings of granular regular polygons.
Wang, Chuncheng; Dong, Kejun; Yu, Aibing
2015-12-01
By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.
Zhou, Hua; Li, Lexin
2014-01-01
Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
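The low-rank penalty at the heart of spectral regularization is typically handled through singular-value thresholding; a generic sketch of that proximal step is shown below. This is illustrative only, not the authors' estimation algorithm, and the function name is an assumption.

```python
import numpy as np

def singular_value_threshold(B, tau):
    """Proximal operator of tau * (nuclear norm): shrink the singular
    values of B toward zero by tau, producing a low-rank estimate."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```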
Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki
2017-01-01
Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0
Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems
NASA Astrophysics Data System (ADS)
Hidalgo-Silva, H.; Gomez-Trevino, E.
2017-12-01
Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by solving U_T(m) = ||F(m) - d||² + λP(m). The second term corresponds to the stabilizing functional, with P(m) = ||∇m||² the usual choice, and λ the regularization parameter. Because of this roughness penalizer, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges and to smooth the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers in nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, and also on hybrid TV, second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is the low-induction-number magnetic dipole method. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.
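For concreteness, a minimal sketch of the baseline Tikhonov step U_T(m) = ||F m - d||² + λ||∇m||² for a linear forward operator is given below; the TV-type and nonconvex regularizers studied in the abstract require the Bregman or split Bregman machinery instead. The function name and the use of a simple first-difference operator are assumptions.

```python
import numpy as np

def tikhonov_gradient_model(F, d, lam):
    """Minimize ||F m - d||^2 + lam * ||D m||^2 with D a first-difference
    (discrete gradient) operator, the smoothing penalty that tends to
    smear discontinuities, as noted in the abstract."""
    n = F.shape[1]
    D = np.diff(np.eye(n), axis=0)                # (n-1) x n first differences
    A = F.T @ F + lam * (D.T @ D)
    return np.linalg.solve(A, F.T @ d)
```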
Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan
2017-12-15
Both L 1/2 and L 2/3 are two typical non-convex regularizations of L p (0
Fractal dimensions of graph of Weierstrass-type function and local Hölder exponent spectra
NASA Astrophysics Data System (ADS)
Otani, Atsuya
2018-01-01
We study several fractal properties of the Weierstrass-type function W, where τ: [0,1) → [0,1) is a cookie cutter map with possibly fractal repeller, and λ and g are functions with proper regularity. In the first part, we determine the box dimension of the graph of W and the Hausdorff dimension of its randomised version. In the second part, the Hausdorff spectrum of the local Hölder exponent is characterised in terms of thermodynamic formalism. Furthermore, in the randomised case, a novel formula for the lifted Hausdorff spectrum on the graph is provided.
Local Variation of Hashtag Spike Trains and Popularity in Twitter
Sanlı, Ceyda; Lambiotte, Renaud
2015-01-01
We draw a parallel between hashtag time series and neuron spike trains. In each case, the process presents complex dynamic patterns including temporal correlations, burstiness, and all other types of nonstationarity. We propose the adoption of the so-called local variation in order to uncover salient dynamical properties, while properly detrending for the time-dependent features of a signal. The methodology is tested on both real and randomized hashtag spike trains, and identifies that popular hashtags present regular, and thus less bursty, behavior, suggesting its potential use for predicting online popularity in social media. PMID:26161650
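The local-variation statistic referred to above is commonly defined from consecutive inter-event intervals; a small sketch under that standard definition follows. The definition is assumed here from common usage rather than taken verbatim from the paper.

```python
import numpy as np

def local_variation(event_times):
    """Local variation Lv of an event train (spikes or hashtag uses):
    Lv is about 1 for Poisson-like trains, below 1 for regular trains,
    and above 1 for bursty trains."""
    intervals = np.diff(np.sort(event_times))          # inter-event intervals
    num = (intervals[:-1] - intervals[1:]) ** 2
    den = (intervals[:-1] + intervals[1:]) ** 2
    return 3.0 * np.mean(num / den)
```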
NASA Technical Reports Server (NTRS)
Chamberlin, R.
2002-01-01
TIN is short for 'triangulated irregular network,' which is a piecewise planar model of a surface. If properly constructed, a TIN can be more than 30 times as efficient as a regular triangulation. In our project (a ground combat simulation to support U.S. Army training exercises), the TIN is used to represent the Earth's surface and is used primarily to determine whether line of sight is blocked by terrain. High efficiency requires accurate identification of ridgelines with as few triangles as possible. The work currently in progress is the implementation of a TINning process that we hope will produce superlative TINs. This presentation describes that process.
[Myocardial infarction in a 26-year-old patient with diabetes type 1].
Rogowicz, Anita; Zozulińska, Dorota; Wierusz-Wysocka, Bogna
2007-11-01
A case is described of a 26-year-old patient with acute myocardial infarction and with hypertension, hyperlipidaemia, and type 1 diabetes of 18 years' duration complicated by background retinopathy and nephropathy at the stage of proteinuria. The state of metabolic compensation of the diabetes was poor. The patient did not perform regular self-monitoring of glycaemia, smoked, and used oral contraception. Early diagnosis of vascular lesions in young persons with long-standing type 1 diabetes, as well as the introduction of proper preventive and treatment methods, may improve prognosis in these high-risk patients.
CONCENTRATION OF NATURAL RADIONUCLIDES IN PRIVATE DRINKING WATER WELLS.
Cerny, R; Otahal, P; Merta, J; Burian, I
2017-11-01
Water is one of the most important resources for a human being; therefore, its quality should be properly tested. According to Council Directive No. 2013/51/EURATOM, requirements shall be established for the protection of the health of the general public with regard to radioactive substances in water intended for human consumption. This article summarises the measurement results for selected water samples from 444 private drinking water wells, which are not subject to regular inspection under Czech legislation. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Dvorak, Jiri; Kramer, Efraim B; Schmied, Christian M; Drezner, Jonathan A; Zideman, David; Patricios, Jon; Correia, Luis; Pedrinelli, André; Mandelbaum, Bert
2013-12-01
Life-threatening medical emergencies are an infrequent but regular occurrence on the football field. Proper prevention strategies, emergency medical planning and timely access to emergency equipment are required to prevent catastrophic outcomes. In a continuing commitment to player safety during football, this paper presents the FIFA Medical Emergency Bag and FIFA 11 Steps to prevent sudden cardiac death. These recommendations are intended to create a global standard for emergency preparedness and the medical response to serious or catastrophic on-field injuries in football.
Sensor Suitcase Tablet Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Retrocommissioning Sensor Suitcase is targeted for use in small commercial buildings of less than 50,000 square feet of floor space that regularly receive basic services such as maintenance and repair but do not have in-house energy management staff or building experts. The Suitcase is designed to be easy to use by building maintenance staff or other professionals such as telecom and alarm technicians. The software on the hand-held device is designed to guide staff to input the building and system information, deploy the sensors in the proper locations, configure the sensor hardware, and start the data collection.
A Prior for Neural Networks utilizing Enclosing Spheres for Normalization
NASA Astrophysics Data System (ADS)
v. Toussaint, U.; Gori, S.; Dose, V.
2004-11-01
Neural networks are famous for their advantageous flexibility for problems where there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause over-fitting and can hamper the generalization properties of neural networks. Many approaches to regularizing neural networks have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior in accordance with Bayesian probability theory for a class of feedforward networks. Optimal networks are determined by Bayesian model comparison, verifying the applicability of this approach.
Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation
NASA Astrophysics Data System (ADS)
Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.
2018-05-01
Typical Tsallis statistical-mechanics quantifiers like the partition function and the mean energy exhibit poles. We are speaking of the partition function Z and the mean energy 〈U〉. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a numerable set of rational numbers on the q-line. These poles are dealt with using dimensional regularization resources. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitational potential.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balakin, A. B.; Zayats, A. E.; Sushkov, S. V.
2007-04-15
We discuss exact solutions of a three-parameter nonminimal Einstein-Yang-Mills model, which describe the wormholes of a new type. These wormholes are considered to be supported by the SU(2)-symmetric Yang-Mills field, nonminimally coupled to gravity, the Wu-Yang ansatz for the gauge field being used. We distinguish between regular solutions, describing traversable nonminimal Wu-Yang wormholes, and black wormholes possessing one or two event horizons. The relation between the asymptotic mass of the regular traversable Wu-Yang wormhole and its throat radius is analyzed.
Vector mesons in the Nambu-Jona-Lasinio model
NASA Astrophysics Data System (ADS)
Schüren, C.; Döring, F.; Ruiz Arriola, E.; Goeke, K.
1993-12-01
We investigate solitonic solutions with baryon number equal to one of the semi-bosonized SU(2) Nambu-Jona-Lasinio model including σ-, π-, ρ-, A1- and ω-mesons, both on the chiral circle (σ²(r) + π²(r) = f_π²) and beyond it (σ²(r) + π²(r) ≠ f_π²). The action is treated in the mesonic and baryonic sector in the leading order of the large-N_c expansion (one-quark-loop approximation). The UV-divergent real part of the effective action is rendered finite using different gauge-invariant regularization methods (Pauli-Villars and proper time). The parameters of the model are fixed in two different ways: either approximately by a heat kernel expansion of the effective action up to second order or by an exact calculation of the mesonic on-shell masses. This leaves the constituent quark mass as the only free parameter of the model. In the solitonic sector we pay special attention to the way the Wick rotation from euclidean space back to Minkowski space has to be performed. We get solitonic solutions from hedgehoglike field configurations on the chiral circle for a wide range of couplings. We also find that if the chiral-circle constraint is relaxed, vector mesons provide stable solitonic solutions. Moreover, whether the baryon number is carried by the valence quarks or by the Dirac sea depends strongly on the particular values of the constituent quark mass. We also study the low-energy limit of the model and its connection to chiral perturbation theory. To this end a covariant-derivative expansion is performed in the presence of external fields. After integrating out the scalar, vector and axial degrees of freedom, this leads to the corresponding low-energy parameters as e.g. pion radii and some threshold parameters for pion-pion scattering. Vector mesons provide a natural explanation for an axial coupling constant at the quark level, g_A^Q, lower than one. However, we find for the g_A^N of the nucleon noticeable deviations from the non-relativistic quark model prediction g_A^N = (5/3) g_A^Q. For the values of the parameters where solitons are found, pionic radii come out to be too small. Finally, the relation of the present model to other chiral soliton models as well as some effective lagrangians is displayed.
Higher order sensitivity of solutions to convex programming problems without strict complementarity
NASA Technical Reports Server (NTRS)
Malanowski, Kazimierz
1988-01-01
Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.
Optical rogue waves for the inhomogeneous generalized nonlinear Schrödinger equation.
Loomba, Shally; Kaur, Harleen
2013-12-01
We present optical rogue wave solutions for a generalized nonlinear Schrödinger equation by using a similarity transformation. We have predicted the propagation of rogue waves through a nonlinear optical fiber for three cases: (i) a dispersion increasing (decreasing) fiber, (ii) a periodic dispersion parameter, and (iii) a hyperbolic dispersion parameter. We found that the rogue waves and their interactions can be tuned by properly choosing the parameters. We expect that our results can be used to realize improved signal transmission through optical rogue waves.
Identification of Bouc-Wen hysteretic parameters based on enhanced response sensitivity approach
NASA Astrophysics Data System (ADS)
Wang, Li; Lu, Zhong-Rong
2017-05-01
This paper aims to identify parameters of Bouc-Wen hysteretic model using time-domain measured data. It follows a general inverse identification procedure, that is, identifying model parameters is treated as an optimization problem with the nonlinear least squares objective function. Then, the enhanced response sensitivity approach, which has been shown convergent and proper for such kind of problems, is adopted to solve the optimization problem. Numerical tests are undertaken to verify the proposed identification approach.
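As a sketch of the forward model and misfit that such an inverse identification minimizes, the following integrates a basic Bouc-Wen hysteretic variable and evaluates a least-squares objective. The parameter names, default values, and the simple Euler time-stepping (with n >= 1 assumed) are illustrative; the paper's enhanced response sensitivity solver is not reproduced here.

```python
import numpy as np

def bouc_wen_z(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Forward-Euler integration of the Bouc-Wen hysteretic variable z
    driven by a sampled displacement history x (time step dt)."""
    z = np.zeros(len(x))
    for k in range(len(x) - 1):
        dx = (x[k + 1] - x[k]) / dt
        dz = (A * dx
              - beta * abs(dx) * abs(z[k]) ** (n - 1.0) * z[k]
              - gamma * dx * abs(z[k]) ** n)
        z[k + 1] = z[k] + dt * dz
    return z

def misfit(params, x, dt, z_measured):
    """Nonlinear least-squares objective over the model parameters."""
    A, beta, gamma, n = params
    return np.sum((bouc_wen_z(x, dt, A, beta, gamma, n) - z_measured) ** 2)
```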
Certainty Equivalence M-MRAC for Systems with Unmatched Uncertainties
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
The paper presents a certainty equivalence state feedback indirect adaptive control design method for the systems of any relative degree with unmatched uncertainties. The approach is based on the parameter identification (estimation) model, which is completely separated from the control design and is capable of producing parameter estimates as fast as the computing power allows without generating high frequency oscillations. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters.
Regularized estimation of Euler pole parameters
NASA Astrophysics Data System (ADS)
Aktuğ, Bahadir; Yildirim, Ömer
2013-07-01
Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
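For reference, the forward relation underlying such estimations is the rigid-rotation velocity v = ω × r; a small sketch is given below, with unit conventions and the function name chosen for illustration only.

```python
import numpy as np

def plate_velocity(euler_deg_per_myr, site_ecef_m):
    """Rigid-plate velocity v = omega x r at a site.  The Euler vector is
    given in Earth-centered Cartesian components (deg/Myr), the site
    position in metres; the result is in metres per year."""
    omega_rad_per_yr = np.radians(np.asarray(euler_deg_per_myr)) / 1.0e6
    return np.cross(omega_rad_per_yr, np.asarray(site_ecef_m))
```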
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
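Step two of the hybrid technique builds a rational (Pade) approximant from the perturbation series; a generic sketch of that construction from given series coefficients is shown below. It is illustrative only, assumes L >= M so that the required coefficients exist, and is not the authors' implementation.

```python
import numpy as np

def pade_coefficients(c, L, M):
    """[L/M] Pade approximant of a power series with coefficients c[0..L+M].
    Returns numerator coefficients a[0..L] and denominator b[0..M], b[0] = 1."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=1..M} b_j * c[L+m-j] = -c[L+m],  m = 1..M
    C = np.array([[c[L + m - j] if L + m - j >= 0 else 0.0
                   for j in range(1, M + 1)] for m in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, -c[L + 1:L + M + 1])))
    # Numerator from the Cauchy product of the series and the denominator
    a = np.array([sum(b[j] * c[k - j] for j in range(0, min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b
```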
NASA Astrophysics Data System (ADS)
Hoeksema, J. T.; Baldner, C. S.; Bush, R. I.; Schou, J.; Scherrer, P. H.
2018-03-01
The Helioseismic and Magnetic Imager (HMI) instrument is a major component of NASA's Solar Dynamics Observatory (SDO) spacecraft. Since commencement of full regular science operations on 1 May 2010, HMI has operated with remarkable continuity, e.g. during the more than five years of the SDO prime mission that ended 30 September 2015, HMI collected 98.4% of all possible 45-second velocity maps; minimizing gaps in these full-disk Dopplergrams is crucial for helioseismology. HMI velocity, intensity, and magnetic-field measurements are used in numerous investigations, so understanding the quality of the data is important. This article describes the calibration measurements used to track the performance of the HMI instrument, and it details trends in important instrument parameters during the prime mission. Regular calibration sequences provide information used to improve and update the calibration of HMI data. The set-point temperature of the instrument front window and optical bench is adjusted regularly to maintain instrument focus, and changes in the temperature-control scheme have been made to improve stability in the observable quantities. The exposure time has been changed to compensate for a 20% decrease in instrument throughput. Measurements of the performance of the shutter and tuning mechanisms show that they are aging as expected and continue to perform according to specification. Parameters of the tunable optical-filter elements are regularly adjusted to account for drifts in the central wavelength. Frequent measurements of changing CCD-camera characteristics, such as gain and flat field, are used to calibrate the observations. Infrequent expected events such as eclipses, transits, and spacecraft off-points interrupt regular instrument operations and provide the opportunity to perform additional calibration. Onboard instrument anomalies are rare and seem to occur quite uniformly in time. The instrument continues to perform very well.
Optimizing solar-cell grid geometry
NASA Technical Reports Server (NTRS)
Crossley, A. P.
1969-01-01
Trade-off analysis and mathematical expressions calculate optimum grid geometry in terms of various cell parameters. Determination of the grid geometry provides proper balance between grid resistance and cell output to optimize the energy conversion process.
Reference values of clinical chemistry and hematology parameters in rhesus monkeys (Macaca mulatta).
Chen, Younan; Qin, Shengfang; Ding, Yang; Wei, Lingling; Zhang, Jie; Li, Hongxia; Bu, Hong; Lu, Yanrong; Cheng, Jingqiu
2009-01-01
Rhesus monkey models are valuable to the studies of human biology. Reference values for clinical chemistry and hematology parameters of rhesus monkeys are required for proper data interpretation. Whole blood was collected from 36 healthy Chinese rhesus monkeys (Macaca mulatta) of either sex, 3 to 5 yr old. Routine chemistry and hematology parameters, and some special coagulation parameters including thromboelastograph and activities of coagulation factors were tested. We presented here the baseline values of clinical chemistry and hematology parameters in normal Chinese rhesus monkeys. These data may provide valuable information for veterinarians and investigators using rhesus monkeys in experimental studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A; Sandison, G; Schwartz, J
Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an inverse ill-posed problem described by a Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we show that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. The variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data. Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and opens the way to the development of more advanced algorithms which take into account tumor heterogeneity, for example, related to hypoxia.
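Because the measured response is a sum of exponential processes, the underlying fit is ill-conditioned; the hedged sketch below shows a simple two-exponential least-squares fit stabilised by a small Tikhonov-type penalty on the rate parameters. It uses SciPy's general least-squares routine and illustrative starting values; it is not the two-level model or the simulated-annealing scheme of the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_two_exponentials(t, y, lam=1e-3):
    """Fit y(t) ~ a1*exp(-k1*t) + a2*exp(-k2*t) with a Tikhonov-type
    penalty lam on the rates to stabilise this ill-conditioned problem."""
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    def residuals(p):
        a1, k1, a2, k2 = p
        model = a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)
        return np.concatenate([model - y,
                               np.sqrt(lam) * np.array([k1, k2])])
    start = [y[0], 0.1, 0.5 * y[0], 0.01]          # illustrative initial guess
    return least_squares(residuals, start).x
```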
Rotational flow in tapered slab rocket motors
NASA Astrophysics Data System (ADS)
Saad, Tony; Sams, Oliver C.; Majdalani, Joseph
2006-10-01
Internal flow modeling is a requisite for obtaining critical parameters in the design and fabrication of modern solid rocket motors. In this work, the analytical formulation of internal flows particular to motors with tapered sidewalls is pursued. The analysis employs the vorticity-streamfunction approach to treat this problem assuming steady, incompressible, inviscid, and nonreactive flow conditions. The resulting solution is rotational following the analyses presented by Culick for a cylindrical motor. In an extension to Culick's work, Clayton has recently managed to incorporate the effect of tapered walls. Here, an approach similar to that of Clayton is applied to a slab motor in which the chamber is modeled as a rectangular channel with tapered sidewalls. The solutions are shown to be reducible, at leading order, to Taylor's inviscid profile in a porous channel. The analysis also captures the generation of vorticity at the surface of the propellant and its transport along the streamlines. It is from the axial pressure gradient that the proper form of the vorticity is ascertained. Regular perturbations are then used to solve the vorticity equation that prescribes the mean flow motion. Subsequently, numerical simulations via a finite volume solver are carried out to gain further confidence in the analytical approximations. In illustrating the effects of the taper on flow conditions, comparisons of total pressure and velocity profiles in tapered and nontapered chambers are entertained. Finally, a comparison with the axisymmetric flow analog is presented.
NASA Astrophysics Data System (ADS)
Jeffs, Brian D.; Christou, Julian C.
1998-09-01
This paper addresses post processing for resolution enhancement of sequences of short exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images.
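A common form of the GGMRF prior energy is a sum of |differences|^p over neighbouring pixels; the sketch below evaluates that energy for a 2-D image, with p and sigma as illustrative shape and scale parameters. The paper's exact potential and neighbourhood weights may differ.

```python
import numpy as np

def ggmrf_energy(img, p=1.2, sigma=1.0):
    """Generalized Gaussian MRF energy over nearest-neighbour pairs:
    p close to 2 favours smooth images, p close to 1 preserves edges."""
    dh = np.abs(np.diff(img, axis=0)) ** p        # vertical neighbours
    dv = np.abs(np.diff(img, axis=1)) ** p        # horizontal neighbours
    return (dh.sum() + dv.sum()) / (p * sigma ** p)
```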
Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine
NASA Astrophysics Data System (ADS)
Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo
2017-07-01
Load sharing behavior is very important for a power-split gearing system; the star gearing reducer, as a new type of special transmission system, can be used in many industrial fields. However, there is little literature regarding the key multiple-split load sharing issue in the main gearbox used in the new type of geared turbofan engine. Further mechanism analyses are made of the load sharing behavior among star gears of the star gearing reducer for a geared turbofan engine. Comprehensive meshing error analyses are conducted on the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer, respectively. The floating meshing error resulting from meshing clearance variation caused by the simultaneous floating of the sun gear and annular gear is taken into account. A refined mathematical model for load sharing coefficient calculation is established in consideration of the different meshing stiffnesses and supporting stiffnesses of the components. The regular curves of the load sharing coefficient under the influence of interactions, single actions, and single variations of the various component errors are obtained. The accurate sensitivity of the load sharing coefficient toward different errors is determined. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This paper provides scientific theoretical evidence for optimal parameter design and proper tolerance distribution in the advanced development and manufacturing process, so as to achieve optimal effects in economy and technology.
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
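The O(1/k²) first-order rate mentioned above is characteristic of accelerated (momentum) gradient schemes; a generic accelerated proximal-gradient sketch for an l1-regularized smooth loss is given below for orientation. It is not the paper's exact OGM update, and the function name and arguments are assumptions.

```python
import numpy as np

def accelerated_prox_l1(grad_f, x0, lipschitz, lam, iters=200):
    """FISTA-style accelerated proximal gradient for min f(x) + lam*||x||_1,
    attaining the optimal O(1/k^2) rate for smooth convex f."""
    x = np.asarray(x0, dtype=float)
    y, t = x.copy(), 1.0
    step = 1.0 / lipschitz
    for _ in range(iters):
        z = y - step * grad_f(y)                                      # gradient step on smooth part
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)                 # momentum combination
        x, t = x_new, t_new
    return x
```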
A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays
Hughes, Alec; Hynynen, Kullervo
2016-01-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323
A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.
Hughes, Alec; Hynynen, Kullervo
2016-12-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.
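The regularized focusing step described in the two abstracts above can be pictured as a damped least-squares solve for the complex element drives; a minimal sketch follows, where A is assumed to be a (control points x elements) propagation matrix and lam plays the role of the Tikhonov parameter balancing focal quality against array efficiency. This is an illustration, not the authors' focus rotation scheme.

```python
import numpy as np

def tikhonov_drive(A, p_target, lam):
    """Element drive vector u minimizing ||A u - p_target||^2 + lam ||u||^2
    for a complex propagation matrix A (control points x elements)."""
    AH = A.conj().T
    n = A.shape[1]
    return np.linalg.solve(AH @ A + lam * np.eye(n), AH @ p_target)
```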
NASA Astrophysics Data System (ADS)
Rozyczka, M.; Narloch, W.; Pietrukowicz, P.; Thompson, I. B.; Pych, W.; Poleski, R.
2018-03-01
We adapt the friends of friends algorithm to the analysis of light curves, and show that it can be successfully applied to searches for transient phenomena in large photometric databases. As a test case we search OGLE-III light curves for known dwarf novae. A single combination of control parameters allows us to narrow the search to 1% of the data while reaching a ≈90% detection efficiency. A search involving ≈2% of the data and three combinations of control parameters can be significantly more effective - in our case a 100% efficiency is reached. The method can also quite efficiently detect semi-regular variability. In particular, 28 new semi-regular variables have been found in the field of the globular cluster M22, which was examined earlier with the help of periodicity-searching algorithms.
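A friends-of-friends grouping on a light curve can be sketched as a connected-component search with linking lengths in time and magnitude; the function name, linking parameters, and size threshold below are illustrative, not the control parameters tuned in the paper.

```python
import numpy as np

def fof_light_curve(times, mags, link_t, link_m, min_size=5):
    """Group light-curve points whose time and magnitude separations are
    both below the linking lengths; return group labels and the indices
    of groups large enough to be transient (e.g. outburst) candidates."""
    times, mags = np.asarray(times), np.asarray(mags)
    labels = -np.ones(len(times), dtype=int)
    group = 0
    for i in range(len(times)):
        if labels[i] >= 0:
            continue
        labels[i], stack = group, [i]
        while stack:                                # grow the current group
            j = stack.pop()
            friends = np.where((labels < 0) &
                               (np.abs(times - times[j]) < link_t) &
                               (np.abs(mags - mags[j]) < link_m))[0]
            labels[friends] = group
            stack.extend(friends.tolist())
        group += 1
    sizes = np.bincount(labels)
    return labels, np.where(sizes >= min_size)[0]
```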
Sinc-Galerkin estimation of diffusivity in parabolic problems
NASA Technical Reports Server (NTRS)
Smith, Ralph C.; Bowers, Kenneth L.
1991-01-01
A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
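The L-curve technique mentioned above can be illustrated on a generic ill-posed linear system (this is not the Sinc-Galerkin discretization of the paper): Tikhonov solutions are computed over a grid of parameters, and the corner of the log residual norm versus log solution norm curve, located as the point of maximum curvature, gives the selected parameter. The test matrix and noise level below are assumptions.

    import numpy as np

    # Sketch of L-curve corner selection for a Tikhonov parameter.
    rng = np.random.default_rng(3)
    n = 80
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert-like, ill-posed
    x_true = np.sin(np.linspace(0, np.pi, n))
    b = A @ x_true + 1e-4 * rng.normal(size=n)

    lams = np.logspace(-12, 0, 60)
    res, sol = [], []
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        res.append(np.log(np.linalg.norm(A @ x - b)))
        sol.append(np.log(np.linalg.norm(x)))
    res, sol = np.array(res), np.array(sol)

    # discrete curvature of the parametric curve (res(lam), sol(lam))
    d1r, d1s = np.gradient(res), np.gradient(sol)
    d2r, d2s = np.gradient(d1r), np.gradient(d1s)
    curvature = (d1r * d2s - d2r * d1s) / (d1r**2 + d1s**2) ** 1.5
    print("L-curve corner at lambda ~", lams[np.nanargmax(curvature)])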
Influence of control parameters on the joint tracking performance of a coaxial weld vision system
NASA Technical Reports Server (NTRS)
Gangl, K. J.; Weeks, J. L.
1985-01-01
The first phase of a series of evaluations of a vision-based welding control sensor for the Space Shuttle Main Engine Robotic Welding System is described. The robotic welding system is presently under development at the Marshall Space Flight Center. This evaluation determines the standard control response parameters necessary for proper trajectory of the welding torch along the joint.
An experimental comparison of various methods of nearfield acoustic holography
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
2017-05-19
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
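As a companion to the parameter choice methods compared above, the following generic sketch (not the NAH implementation) shows the Morozov discrepancy principle: the Tikhonov parameter is increased until the data residual matches the estimated noise level. The operator, noise level, and bisection bounds are assumptions.

    import numpy as np

    # Morozov discrepancy principle on a generic smoothing operator.
    rng = np.random.default_rng(4)
    m, n, sigma = 120, 60, 1e-2
    A = rng.normal(size=(m, n)) @ np.diag(1.0 / (1.0 + np.arange(n)) ** 2)  # smoothing operator
    x_true = rng.normal(size=n)
    b = A @ x_true + sigma * rng.normal(size=m)
    delta = np.sqrt(m) * sigma                        # noise level estimate

    def tikhonov(lam):
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    lo, hi = 1e-14, 1e2                               # bisection on log(lambda)
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        residual = np.linalg.norm(A @ tikhonov(mid) - b)
        if residual < delta:                          # over-fitting: regularize more
            lo = mid
        else:
            hi = mid
    print("discrepancy-principle lambda ~", np.sqrt(lo * hi))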
Yang, Fang; Zhong, Ke; Chen, Yonghang; Kang, Yanming
2017-10-01
Numerical simulations were conducted to investigate the effects of the building height ratio (i.e., HR, the height ratio of the upstream building to the downstream building) on the air quality in buildings beside street canyons, and both regular and staggered canyons were considered in the simulations. The results show that the building height ratio affects not only the ventilation fluxes of the rooms in the downstream building but also the pollutant concentrations around the building. A parameter, the outdoor effective source intensity of a room, is then proposed to calculate the amount of vehicular pollutants that enters building rooms. A smaller value of this parameter indicates that less pollutant enters the room. The numerical results reveal that HRs from 2/7 to 7/2 are favorable height ratios for the regular canyons, as they yield smaller values of this parameter than the other cases, while HR values of 5/7, 7/7, and 7/5 are appropriate for staggered canyons. In addition, in terms of improving indoor air quality by natural ventilation, staggered canyons with favorable HR perform better than regular canyons.
Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms
NASA Astrophysics Data System (ADS)
Mohan, K. Aditya
2017-10-01
4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.
Martins, C C; Bagatini, M D; Cardoso, A M; Zanini, D; Abdalla, F H; Baldissarelli, J; Dalenogare, D P; Dos Santos, D L; Schetinger, M R C; Morsch, V M M
2016-11-01
In this study, we investigated cardiovascular risk factors as well as ectonucleotidase activities in lymphocytes of metabolic syndrome (MetS) patients before and after an exercise intervention. Twenty MetS patients, who performed regular concurrent exercise training for 30 weeks, 3 times/week, were studied. Anthropometric, biochemical, inflammatory and hepatic parameters and the hydrolysis of adenine nucleotides and nucleoside in lymphocytes were collected from patients before and after 15 and 30 weeks of the exercise intervention, as well as from participants of the control group. An increase in the hydrolysis of ATP and ADP, and a decrease in adenosine deamination, were observed in lymphocytes of MetS patients before the exercise intervention (P<0.001). However, these alterations were reversed by exercise training after 30 weeks of intervention. Additionally, exercise training reduced the inflammatory and hepatic markers to baseline levels after 30 weeks of exercise. Our results clearly indicate alterations in ectonucleotidase enzymes in lymphocytes in MetS, whereas regular exercise training had a protective effect on the enzymatic alterations and on inflammatory and hepatic parameters, especially when performed regularly and for a long period. © Georg Thieme Verlag KG Stuttgart · New York.
The impact of Nordic walking training on the gait of the elderly.
Ben Mansour, Khaireddine; Gorce, Philippe; Rezzoug, Nasser
2018-03-27
The purpose of the current study was to define the impact of regular practice of Nordic walking on the gait of the elderly. Thereby, we aimed to determine whether the gait characteristics of active elderly persons practicing Nordic walking are more similar to healthy adults than that of the sedentary elderly. Comparison was made based on parameters computed from three inertial sensors during walking at a freely chosen velocity. Results showed differences in gait pattern in terms of the amplitude computed from acceleration and angular velocity at the lumbar region (root mean square), the distribution (Skewness) quantified from the vertical and Euclidean norm of the lumbar acceleration, the complexity (Sample Entropy) of the mediolateral component of lumbar angular velocity and the Euclidean norm of the shank acceleration and angular velocity, the regularity of the lower limbs, the spatiotemporal parameters and the variability (standard deviation) of stance and stride durations. These findings reveal that the pattern of active elderly differs significantly from sedentary elderly of the same age while similarity was observed between the active elderly and healthy adults. These results advance that regular physical activity such as Nordic walking may counteract the deterioration of gait quality that occurs with aging.
Estimation of the Parameters in a Two-State System Coupled to a Squeezed Bath
NASA Astrophysics Data System (ADS)
Hu, Yao-Hua; Yang, Hai-Feng; Tan, Yong-Gang; Tao, Ya-Ping
2018-04-01
Estimation of the phase and weight parameters of a two-state system in a squeezed bath by calculating quantum Fisher information is investigated. The results show that, both for the phase estimation and for the weight estimation, the quantum Fisher information always decays with time and changes periodically with the phases. The estimation precision can be enhanced by choosing the proper values of the phases and the squeezing parameter. These results can be provided as an analysis reference for the practical application of the parameter estimation in a squeezed bath.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
1996-01-01
The identification of airframe Manufacturability Factors/Cost Drivers (MFCD) and the method by which the relationships between MFCD and designer-controlled parameters could be properly modeled are described.
Predictive cues for auditory stream formation in humans and monkeys.
Aggelopoulos, Nikolaos C; Deike, Susann; Selezneva, Elena; Scheich, Henning; Brechmann, André; Brosch, Michael
2017-12-18
Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. Therefore, we varied the two factors isochrony and regularity independently and measured the ability of human subjects to detect deviants embedded in these sequences, as well as measuring the responses of neurons in the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata
2018-05-09
Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the maximum curvature point in the traction versus γ plot as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblast (MEF) cells and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22%, respectively, as compared to Reg-FTTC. Selection of an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.
Investigating the Metallicity–Mixing-length Relation
NASA Astrophysics Data System (ADS)
Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.
2018-05-01
Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, T_eff, and [Fe/H]. The relationship between the value of α required and the properties of the star is then investigated. For Eddington atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model of the form α/α_⊙ = 5.426 - 0.101 log(g) - 1.071 log(T_eff) + 0.437 [Fe/H]. This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
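As a worked example of the linear relation quoted above (Eddington atmosphere, non-diffusion case), the short snippet below evaluates α/α_⊙ for a hypothetical star; the input values of log g, T_eff, and [Fe/H] are made up and not taken from the paper.

    import numpy as np

    # Evaluate alpha/alpha_sun = 5.426 - 0.101 log(g) - 1.071 log(Teff) + 0.437 [Fe/H]
    def alpha_ratio(log_g, teff, fe_h):
        return 5.426 - 0.101 * log_g - 1.071 * np.log10(teff) + 0.437 * fe_h

    # e.g. a slightly metal-poor subgiant (illustrative numbers): ~0.93
    print(alpha_ratio(log_g=3.8, teff=5200.0, fe_h=-0.3))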
NASA Astrophysics Data System (ADS)
Dang, Van Tuan; Lafon, Pascal; Labergere, Carl
2017-10-01
In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Function (RBF) interpolation is proposed to build a surrogate model based on the Benchmark Springback 3D bending problem from the Numisheet2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical Design of Experiments (DoE) based on a full factorial design samples the parameter space, and the sample points serve as input data for finite element method (FEM) numerical simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resulting high-fidelity model is reduced through the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviation matrix of the final displacement fields of the FEM numerical simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF is used for the interpolation of these POD coefficients over the parameter space. Finally, the presented POD-RBF approach can be used for shape optimization with high accuracy.
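A minimal sketch of the POD-RBF idea follows; it is not the authors' Numisheet model. An analytic toy field stands in for the FEM springback solution, snapshots over a full factorial design are reduced by SVD (the snapshot POD), and the POD coefficients are interpolated over the two design parameters with Gaussian RBFs. All function forms, parameter ranges, and the RBF width are assumptions.

    import numpy as np

    # POD (snapshot SVD) + RBF interpolation of POD coefficients.
    rng = np.random.default_rng(5)

    def simulate(p1, p2, x=np.linspace(0, 1, 200)):
        # stand-in for the FEM field, depending on two design parameters
        return np.sin(2 * np.pi * x * p1) * np.exp(-p2 * x)

    # full factorial design of the two parameters
    P1, P2 = np.meshgrid(np.linspace(0.5, 2.0, 5), np.linspace(0.1, 1.0, 5))
    design = np.c_[P1.ravel(), P2.ravel()]
    snapshots = np.array([simulate(p1, p2) for p1, p2 in design]).T   # (n_x, n_snap)

    mean = snapshots.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    r = 4                                            # truncated POD basis size
    coeffs = U[:, :r].T @ (snapshots - mean)         # (r, n_snap) POD coefficients

    def rbf_fit(points, values, eps=1.0):
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        Phi = np.exp(-(eps * d) ** 2)                # Gaussian RBF
        return np.linalg.solve(Phi, values.T)        # weights, one column per mode

    w = rbf_fit(design, coeffs)

    def surrogate(p):
        d = np.linalg.norm(design - p, axis=1)
        phi = np.exp(-d ** 2)
        return mean.ravel() + U[:, :r] @ (phi @ w)

    err = np.linalg.norm(surrogate(np.array([1.3, 0.4])) - simulate(1.3, 0.4))
    print("surrogate error at an untried design point:", err)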
High Dietary Fructose Intake on Cardiovascular Disease Related Parameters in Growing Rats.
Yoo, SooYeon; Ahn, Hyejin; Park, Yoo Kyoung
2016-12-26
The objective of this study was to determine the effects of a high-fructose diet on cardiovascular disease (CVD)-related parameters in growing rats. Three-week-old female Sprague Dawley rats were randomly assigned to four experimental groups: a regular diet group (RD: fed regular diet based on AIN-93G, n = 8), a high-fructose diet group (30Frc: fed regular diet with 30% fructose, n = 8), a high-fat diet group (45Fat: fed regular diet with 45 kcal% fat, n = 8) or a high-fructose with high-fat diet group (30Frc + 45Fat: fed regular diet with 30% fructose and 45 kcal% fat, n = 8). After an eight-week treatment period, the body weight, total fat weight, serum glucose, insulin, lipid profiles and pro-inflammatory cytokines, abdominal aortic wall thickness, and expressions of eNOS and ET-1 mRNA were analyzed. The results showed that total fat weight was higher in the 30Frc, 45Fat, and 30Frc + 45Fat groups compared to the RD group (p < 0.05). Serum triglyceride (TG) levels were highest in the 30Frc group compared with the other groups (p < 0.05). The abdominal aorta of the 30Frc, 45Fat, and 30Frc + 45Fat groups had higher wall thickness than the RD group (p < 0.05). Abdominal aortic eNOS mRNA level was decreased in the 30Frc, 45Fat, and 30Frc + 45Fat groups compared to the RD group (p < 0.05), and the 45Fat and 30Frc + 45Fat groups also had decreased mRNA expression of eNOS compared to the 30Frc group (p < 0.05). ET-1 mRNA level was higher in the 30Frc, 45Fat, and 30Frc + 45Fat groups than in the RD group (p < 0.05). Both high fructose consumption and high fat consumption in growing rats had similar negative effects on CVD-related parameters.
NASA Astrophysics Data System (ADS)
Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun
2014-04-01
We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. The popular inversion method parameterizes the media into a large number of layers which have fixed thickness and only reconstruct the conductivities (e.g. Occam's inversion), which does not enable the recovery of the sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive the analytic expression of Fréchet derivatives of CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that the inverse algorithm, including the depths of the layer interfaces, can significantly improve the inverse results. It can not only reconstruct the sharp interfaces between layers, but also can obtain conductivities close to the true value.
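The sketch below shows the generic regularized Gauss-Newton iteration of the kind described above, applied to a small synthetic nonlinear problem; the forward operator, Jacobian (finite differences rather than the paper's analytic Fréchet derivatives), and regularization value are stand-ins, not the CSAMT formulation.

    import numpy as np

    # Regularized Gauss-Newton for a toy nonlinear inverse problem.
    rng = np.random.default_rng(6)

    def forward(m):
        # toy nonlinear forward operator mapping model parameters to "data"
        return np.array([np.exp(-m[0]) + m[1] ** 2, m[0] * m[1], np.sin(m[0]) + m[1]])

    def jacobian(m, h=1e-6):
        J = np.zeros((3, 2))
        f0 = forward(m)
        for j in range(2):
            dm = m.copy(); dm[j] += h
            J[:, j] = (forward(dm) - f0) / h
        return J

    m_true = np.array([0.8, 1.5])
    d_obs = forward(m_true) + 1e-3 * rng.normal(size=3)

    m = np.array([0.2, 0.5])                          # starting model
    lam = 1e-2                                        # regularization parameter
    for _ in range(20):
        J = jacobian(m)
        r = d_obs - forward(m)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        m = m + dm
    print("recovered model:", m, " true model:", m_true)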
NASA Astrophysics Data System (ADS)
Samaras, Stefanos; Böckmann, Christine; Nicolae, Doina
2016-06-01
In this work we propose a two-step advancement of the Mie spherical-particle model accounting for particle non-sphericity. First, a naturally two-dimensional (2D) generalized model (GM) is made, which further triggers analogous 2D re-definitions of microphysical parameters. We consider a spheroidal-particle approach where the size distribution is additionally dependent on aspect ratio. Second, we incorporate the notion of a sphere-spheroid particle mixture (PM) weighted by a non-sphericity percentage. The efficiency of these two models is investigated running synthetic data retrievals with two different regularization methods to account for the inherent instability of the inversion procedure. Our preliminary studies show that a retrieval with the PM model improves the fitting errors and the microphysical parameter retrieval and it has at least the same efficiency as the GM. While the general trend of the initial size distributions is captured in our numerical experiments, the reconstructions are subject to artifacts. Finally, our approach is applied to a measurement case yielding acceptable results.
Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes
NASA Astrophysics Data System (ADS)
Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew
2018-03-01
We model regular and irregular variation of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the developed method to a SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6°N, 37.4°E). We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and scaled Inv-χ² hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov Chain Monte Carlo methods: Gibbs sampling and Metropolis-within-Gibbs for parameter and hyperparameter estimations, respectively. This allows us to quantify model parameter estimation uncertainties as well. We demonstrate the applicability of the proposed method using a synthetic test case. Finally, we apply the method to a real GPS data set, which we decompose into regular and irregular variation components. The result shows that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies the total electron content variability with corresponding error uncertainties.
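The following is a toy Metropolis-within-Gibbs sampler, illustrating the sampling scheme named above on a much smaller hierarchical model than the TEC model of the paper: y_i ~ N(mu, sigma^2) with known sigma, mu ~ N(0, tau^2), and a scaled Inv-chi^2(nu, s^2) hyperprior on tau^2. The data and hyperprior values are invented.

    import numpy as np

    # Gibbs update for mu, random-walk Metropolis update on log(tau^2).
    rng = np.random.default_rng(7)
    sigma, nu, s2 = 1.0, 3.0, 1.0
    y = rng.normal(2.0, sigma, size=50)
    n, ybar = y.size, y.mean()

    def log_post_tau2(tau2, mu):
        if tau2 <= 0:
            return -np.inf
        log_prior = -(nu / 2 + 1) * np.log(tau2) - nu * s2 / (2 * tau2)   # scaled Inv-chi^2
        log_lik_mu = -0.5 * np.log(tau2) - mu ** 2 / (2 * tau2)           # mu | tau^2
        return log_prior + log_lik_mu

    mu, tau2 = 0.0, 1.0
    samples = []
    for it in range(5000):
        # Gibbs step for mu | tau2, y (conjugate normal)
        prec = n / sigma ** 2 + 1.0 / tau2
        mean = (n * ybar / sigma ** 2) / prec
        mu = rng.normal(mean, 1.0 / np.sqrt(prec))
        # Metropolis step for tau2 | mu (log-scale random walk, with Jacobian term)
        prop = tau2 * np.exp(0.3 * rng.normal())
        log_acc = (log_post_tau2(prop, mu) - log_post_tau2(tau2, mu)
                   + np.log(prop) - np.log(tau2))
        if np.log(rng.uniform()) < log_acc:
            tau2 = prop
        samples.append((mu, tau2))

    samples = np.array(samples[1000:])
    print("posterior mean of mu:", samples[:, 0].mean(), " of tau^2:", samples[:, 1].mean())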
Fukuda, Shinichi; Beheregaray, Simone; Hoshi, Sujin; Yamanari, Masahiro; Lim, Yiheng; Hiraoka, Takahiro; Yasuno, Yoshiaki; Oshika, Tetsuro
2013-12-01
To evaluate the ability of parameters measured by three-dimensional (3D) corneal and anterior segment optical coherence tomography (CAS-OCT) and a rotating Scheimpflug camera combined with a Placido topography system (Scheimpflug camera with topography) to discriminate between normal eyes and forme fruste keratoconus. Forty-eight eyes of 48 patients with keratoconus, 25 eyes of 25 patients with forme fruste keratoconus and 128 eyes of 128 normal subjects were evaluated. Anterior and posterior keratometric parameters (steep K, flat K, average K), elevation, topographic parameters, regular and irregular astigmatism (spherical, asymmetry, regular and higher-order astigmatism) and five pachymetric parameters (minimum, minimum-median, inferior-superior, inferotemporal-superonasal, vertical thinnest location of the cornea) were measured using 3D CAS-OCT and a Scheimpflug camera with topography. The area under the receiver operating curve (AUROC) was calculated to assess the discrimination ability. Compatibility and repeatability of both devices were evaluated. Posterior surface elevation showed higher AUROC values in discrimination analysis of forme fruste keratoconus using both devices. Both instruments showed significant linear correlations (p<0.05, Pearson's correlation coefficient) and good repeatability (ICCs: 0.885-0.999) for normal and forme fruste keratoconus. Posterior elevation was the best discrimination parameter for forme fruste keratoconus. Both instruments presented good correlation and repeatability for this condition.
Han, Jong-Min; Kim, Hyeong-Geug; Lee, Jin-Seok; Choi, Min-Kyung; Kim, Young-Ae; Son, Chang-Gue
2014-01-01
Obesity-related disorders, especially metabolic syndrome, contribute to 2.8 million deaths each year worldwide, with significantly increasing morbidity. Eating at regular times and in proper food quantity is crucial for maintaining a healthy status. However, many people in developed countries do not follow a regular eating schedule due to a busy lifestyle. Herein, we show that a repeated sense of hunger leads to a high risk of developing visceral obesity and metabolic syndrome in a mouse model (both 3- and 6-week-old mice, 10 mice in each group). The ad libitum (AL) group (normal eating pattern) and the food restriction (FR) group (alternate-day partial food restriction, given only 1/3 of the average amount) were compared after an 8-week experimental period. The total food consumption in the FR group was lower than in the AL group; however, the FR group showed a metabolic syndrome-like condition with significant fat accumulation in adipose tissues. Consequently, the repeated sense of hunger induced the typical characteristics of metabolic syndrome in an animal model: a distinct visceral obesity, hyperlipidemia, hyperglycemia and hepatic steatosis. Furthermore, we found that leptin, a major metabolic hormone, specifically played a major role in the development of these pathological disorders. Our study indicates the importance of regular eating habits besides controlling calorie intake.
Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel
2016-04-01
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
Recursive regularization step for high-order lattice Boltzmann methods
NASA Astrophysics Data System (ADS)
Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre
2017-09-01
A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step considerably enhances the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
Aquino-Pérez, Dulce María; Peña-Cadena, Daniel; Trujillo-García, José Ubaldo; Jiménez-Sandoval, Jaime Omar; Machorro-Muñoz, Olga Stephanie
2013-01-01
The use of a metered dose inhaler (MDI) is key in the treatment of asthma; its effectiveness is related to proper technique. The purpose of this study was to evaluate the metered dose inhaler technique of the parents or guardians of school children with asthma. In this cross-sectional study, we used a sample of 221 individual caregivers (parent or guardian) of asthmatic children from 5 to 12 years old who use an MDI. We designed a validated questionnaire consisting of 27 items which addressed the handling of the inhaler technique. Descriptive statistics were used. Caregivers were rated as having a "good technique" in 41 fathers (18.6%), 77 mothers (34.8%) and 9 tutors (4.1%), and a "regular technique" in 32 fathers (14.5%), 48 mothers (21.2%) and 14 guardians (6.3%). Asthmatic children aged 9 were rated as having a "good technique" in 24 cases (10.9%). According to gender, we found a "good technique" in 80 boys (36.2%) and 47 girls (21.3%) and a "regular technique" in 59 boys (26.7%) and 35 girls (15.8%), P = 0.0973, RP = 0.9. We found a "regular technique" mainly among asthmatic children diagnosed at ages between 1 and 3 years. Most of the participants had a good technical qualification; however, major mistakes were made at key points in the performance of the technique.
Sexual violence against female university students in Ethiopia.
Adinew, Yohannes Mehretie; Hagos, Mihiret Abreham
2017-07-24
Though many women suffer the consequences of sexual violence, only few victims speak out, as the topic is sensitive and prone to stigma. This lack of data makes it difficult to get a full picture of the problem and design proper interventions. Thus, the aim of this study was to assess the prevalence and factors associated with sexual violence among female students of Wolaita Sodo University, south Ethiopia. An institution-based cross-sectional study was conducted among 462 regular female Wolaita Sodo University students on April 7, 2015. Participants were selected by simple random sampling. Data were collected by self-administered questionnaire. Data entry and analysis were done with the EPI Info and SPSS statistical packages, respectively. Descriptive statistics were computed. Moreover, bivariate and multivariate analyses were also carried out to identify predictors of sexual violence. The age of respondents ranged from 18 to 26 years. Lifetime sexual violence was found to be 45.4%. However, 36.1% and 24.4% of respondents reported experiencing sexual violence since entering university and in the current academic year, respectively. Lifetime sexual violence was positively associated with witnessing inter-parental violence as a child, rural childhood residence, having a regular boyfriend, alcohol consumption and having friends who drink regularly, while it was negatively associated with discussing sexual issues with parents. Sexual violence is a common phenomenon among the students. More detailed research has to be conducted to develop prevention and intervention strategies.
Electron Energy Deposition in Atomic Nitrogen
1987-10-06
known theoretical results, and their relative accuracy in comparison to existing measurements and calculations is given elsewhere. 2.1 The Source Term ... with the proper choice of parameters, reduces to well-known theoretical results. Table 2 gives the parameters for collisional excitation of the ... calculations of McGuire and experimental measurements of Brook et al. Additional theoretical and experimental results are discussed in detail elsewhere
Bayesian Inference for Time Trends in Parameter Values using Weighted Evidence Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. L. Kelly; A. Malkhasyan
2010-09-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework, were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates an approach to incorporating multiple sources of data via applicability weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly; Albert Malkhasyan
2010-06-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework, were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
NASA Astrophysics Data System (ADS)
Balakin, Alexander B.; Bochkarev, Vladimir V.; Lemos, José P. S.
2008-04-01
Using a Lagrangian formalism, a three-parameter nonminimal Einstein-Maxwell theory is established. The three parameters q1, q2, and q3 characterize the cross-terms in the Lagrangian, between the Maxwell field and terms linear in the Ricci scalar, Ricci tensor, and Riemann tensor, respectively. Static spherically symmetric equations are set up, and the three parameters are interrelated and chosen so that effectively the system reduces to a one parameter only, q. Specific black hole and other type of one-parameter solutions are studied. First, as a preparation, the Reissner-Nordström solution, with q1=q2=q3=0, is displayed. Then, we search for solutions in which the electric field is regular everywhere as well as asymptotically Coulombian, and the metric potentials are regular at the center as well as asymptotically flat. In this context, the one-parameter model with q1≡-q, q2=2q, q3=-q, called the Gauss-Bonnet model, is analyzed in detail. The study is done through the solution of the Abel equation (the key equation), and the dynamical system associated with the model. There is extra focus on an exact solution of the model and its critical properties. Finally, an exactly integrable one-parameter model, with q1≡-q, q2=q, q3=0, is considered also in detail. A special submodel, in which the Fibonacci number appears naturally, of this one-parameter model is shown, and the corresponding exact solution is presented. Interestingly enough, it is a soliton of the theory, the Fibonacci soliton, without horizons and with a mild conical singularity at the center.
NASA Astrophysics Data System (ADS)
Dimitrijevic, M. S.; Tankosic, D.
1998-04-01
In order to find out if regularities and systematic trends found to be apparent among experimental Stark line shifts allow the accurate interpolation of new data and critical evaluation of experimental results, the exceptions to the established regularities are analysed on the basis of critical reviews of experimental data, and reasons for such exceptions are discussed. We found that such exceptions are mostly due to the situations when: (i) the energy gap between atomic energy levels within a supermultiplet is equal or comparable to the energy gap to the nearest perturbing levels; (ii) the most important perturbing level is embedded between the energy levels of the supermultiplet; (iii) the forbidden transitions have influence on Stark line shifts.
Nonclassical states of light with a smooth P function
NASA Astrophysics Data System (ADS)
Damanet, François; Kübler, Jonas; Martin, John; Braun, Daniel
2018-02-01
There is a common understanding in quantum optics that nonclassical states of light are states that do not have a positive semidefinite and sufficiently regular Glauber-Sudarshan P function. Almost all known nonclassical states have P functions that are highly irregular, which makes working with them difficult and direct experimental reconstruction impossible. Here we introduce classes of nonclassical states with regular, non-positive-definite P functions. They are constructed by "puncturing" regular smooth positive P functions with negative Dirac-δ peaks or other sufficiently narrow smooth negative functions. We determine the parameter ranges for which such punctures are possible without losing the positivity of the state, the regimes yielding antibunching of light, and the expressions of the Wigner functions for all investigated punctured states. Finally, we propose some possible experimental realizations of such states.
Propagation of spiking regularity and double coherence resonance in feedforward networks.
Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok
2012-03-01
We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. It is interesting that double coherence resonance (DCR) with the combination of synaptic input correlation and noise intensity is finally attained after processing layer by layer in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.
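The sketch below integrates a single noisy FitzHugh-Nagumo neuron of the kind used as a node in the feedforward networks above (Euler-Maruyama scheme) and reports the coefficient of variation of interspike intervals as the noise intensity varies, the basic observable behind coherence resonance. The parameter values, noise placement, and spike threshold are standard illustrative choices, not those of the paper.

    import numpy as np

    # Noisy FitzHugh-Nagumo neuron; spiking regularity measured by ISI CV.
    rng = np.random.default_rng(8)
    eps, a, dt, T = 0.01, 1.05, 1e-4, 20.0
    steps = int(T / dt)

    def spike_cv(D):
        v, w = -1.0, -0.5
        spike_times, above = [], False
        for i in range(steps):
            dv = (v - v ** 3 / 3.0 - w) / eps
            dw = v + a
            v += dv * dt + np.sqrt(2 * D * dt) * rng.normal()   # noise on fast variable
            w += dw * dt
            if v > 0.5 and not above:              # upward threshold crossing = spike
                spike_times.append(i * dt); above = True
            elif v < 0.0:
                above = False
        isi = np.diff(spike_times)
        return np.std(isi) / np.mean(isi) if len(isi) > 2 else np.nan

    for D in (1e-6, 1e-4, 1e-2):
        print(f"noise D={D:g}  ISI coefficient of variation = {spike_cv(D):.3f}")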
Regular black holes in Einstein-Gauss-Bonnet gravity
NASA Astrophysics Data System (ADS)
Ghosh, Sushant G.; Singh, Dharm Veer; Maharaj, Sunil D.
2018-05-01
Einstein-Gauss-Bonnet theory, a natural generalization of general relativity to a higher dimension, admits a static spherically symmetric black hole which was obtained by Boulware and Deser. This black hole is similar to its general relativity counterpart with a curvature singularity at r = 0. We present an exact 5D regular black hole metric, with parameter k > 0, that interpolates between the Boulware-Deser black hole (k = 0) and the Wiltshire charged black hole (r ≫ k). Owing to the appearance of the exponential correction factor exp(-k/r^2), responsible for regularizing the metric, the thermodynamical quantities are modified, and it is demonstrated that the Hawking-Page phase transition is achievable. The heat capacity diverges at a critical radius r = r_C, where incidentally the temperature is maximum. Thus, we have a regular black hole with Cauchy and event horizons, and evaporation leads to a thermodynamically stable double-horizon black hole remnant with vanishing temperature. The entropy does not satisfy the usual exact horizon area result of general relativity.
Macke, A; Mishchenko, M I
1996-07-20
We ascertain the usefulness of simple ice particle geometries for modeling the intensity distribution of light scattering by atmospheric ice particles. To this end, similarities and differences in light scattering by axis-equivalent, regular and distorted hexagonal cylindric, ellipsoidal, and circular cylindric ice particles are reported. All the results pertain to particles with sizes much larger than a wavelength and are based on a geometrical optics approximation. At a nonabsorbing wavelength of 0.55 µm, ellipsoids (circular cylinders) have a much (slightly) larger asymmetry parameter g than regular hexagonal cylinders. However, our computations show that only random distortion of the crystal shape leads to a closer agreement with g values as small as 0.7 as derived from some remote-sensing data analysis. This may suggest that scattering by regular particle shapes is not necessarily representative of real atmospheric ice crystals at nonabsorbing wavelengths. On the other hand, if real ice particles happen to be hexagonal, they may be approximated by circular cylinders at absorbing wavelengths.
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
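Since the coefficients above are stated to be estimated with the conventional MLEM algorithm, the sketch below shows the standard multiplicative MLEM update on a tiny synthetic Poisson system y ~ Poisson(A x); it omits the MR-derived patch basis and the sparsity step of the paper, and the system matrix is a made-up nonnegative matrix.

    import numpy as np

    # Standard MLEM update for y ~ Poisson(A x).
    rng = np.random.default_rng(9)
    n_bins, n_vox = 60, 30
    A = np.abs(rng.normal(size=(n_bins, n_vox)))     # toy system matrix (nonnegative)
    x_true = np.abs(rng.normal(size=n_vox)) * 5.0
    y = rng.poisson(A @ x_true)

    x = np.ones(n_vox)                               # nonnegative initial estimate
    sens = A.sum(axis=0)                             # sensitivity image
    for it in range(200):
        ratio = y / np.maximum(A @ x, 1e-12)         # measured / expected counts
        x = x / sens * (A.T @ ratio)                 # multiplicative MLEM update

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))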
An analytical method for the inverse Cauchy problem of Lame equation in a rectangle
NASA Astrophysics Data System (ADS)
Grigor’ev, Yu
2018-04-01
In this paper, we present an analytical computational method for the inverse Cauchy problem of Lame equation in the elasticity theory. A rectangular domain is frequently used in engineering structures and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
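The Lavrentiev regularization used above can be illustrated on a generic first-kind Fredholm integral equation with a symmetric positive kernel: the ill-posed equation K x = y is replaced by (alpha I + K) x_alpha = y. The Gaussian kernel, grid, and noise level below are stand-ins, not the Lame-problem kernel of the paper.

    import numpy as np

    # Lavrentiev regularization of a discretized first-kind integral equation.
    rng = np.random.default_rng(10)
    n = 200
    s = np.linspace(0, 1, n)
    h = s[1] - s[0]
    K = np.exp(-(s[:, None] - s[None, :]) ** 2 / (2 * 0.05 ** 2)) * h   # discretized kernel
    x_true = np.sin(2 * np.pi * s)
    y = K @ x_true + 1e-4 * rng.normal(size=n)

    for alpha in (1e-1, 1e-3, 1e-6):
        x_alpha = np.linalg.solve(alpha * np.eye(n) + K, y)             # (alpha I + K) x = y
        err = np.linalg.norm(x_alpha - x_true) / np.linalg.norm(x_true)
        print(f"alpha={alpha:g}  relative error={err:.3f}")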
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
VizieR Online Data Catalog: Nine new open clusters within 500pc from the Sun (Roser+, 2016)
NASA Astrophysics Data System (ADS)
Roser, S.; Schilbach, E.; Goldman, B.
2017-03-01
We used URAT1 (Zacharias et al., 2015, Cat. I/329) to improve the Tycho-2 proper motions and to test what proper motions, which are more precise than those of Tycho-2 (Hog et al., 2000, Cat. I/259), can do for open cluster studies. URAT1 contains 228 million objects down to about R=18.5 mag, north of about -20° declination. For the bulk of the Tycho-2 stars, URAT1 gives positions at a mean epoch around 2013.5 and an accuracy level of about 20mas per co-ordinate. We cross-matched URAT1 with Tycho-2 (the original data set tyc2.dat from CDS), and obtained new proper motions via a least-squares adjustment as described, for example in PPMXL (Roeser et al., 2010, Cat. I/317). To avoid formally ultra-precise astrometry for a small number of stars, we chose a 10mas floor for the precision of a URAT1 position. The newly detected clusterings are located in the solar neighbourhood at distances below 500pc from the Sun. The candidates RSG1 to RSG8 are very probably genuine physical groups. Membership and astrophysical parameters could be determined sufficiently well. Nevertheless, accurate parallaxes of at least several reliable cluster stars could improve the quality of parameter determination. A definite age cannot be derived for RSG9; this critically depends on the secure membership status of the two brightest stars. Table 1 summarises the astrophysical parameters of the newly found objects. (1 data file).
Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brambilla, N.; Prosperi, G.M.
1992-08-01
We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than to resort to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen in order to reproduce in the instantaneous approximation a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.
Retrieving cloudy atmosphere parameters from RPG-HATPRO radiometer data
NASA Astrophysics Data System (ADS)
Kostsov, V. S.
2015-03-01
An algorithm for simultaneously determining both tropospheric temperature and humidity profiles and cloud liquid water content from ground-based measurements of microwave radiation is presented. A special feature of this algorithm is that it combines different types of measurements and different a priori information on the sought parameters. The features of its use in processing RPG-HATPRO radiometer data obtained in the course of atmospheric remote sensing experiments carried out by specialists from the Faculty of Physics of St. Petersburg State University are discussed. The results of a comparison of both temperature and humidity profiles obtained using a ground-based microwave remote sensing method with those obtained from radiosonde data are analyzed. It is shown that this combined algorithm is comparable (in accuracy) to the classical method of statistical regularization in determining temperature profiles; however, this algorithm demonstrates better accuracy (when compared to the method of statistical regularization) in determining humidity profiles.
The quasi-optimality criterion in the linear functional strategy
NASA Astrophysics Data System (ADS)
Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey
2018-07-01
The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications including the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and stochastic setup and verify that for mildly-ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, where on the contrary, in the severely-ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
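The heuristic quasi-optimality rule discussed above can be sketched on a generic Tikhonov-regularized problem (this is not the linear functional strategy itself): over a geometric grid of parameters alpha_k, one picks the alpha_k minimizing ||x_{alpha_{k+1}} - x_{alpha_k}||. The operator, noise level, and grid below are assumptions.

    import numpy as np

    # Quasi-optimality parameter choice for Tikhonov regularization.
    rng = np.random.default_rng(11)
    m, n = 100, 100
    U, _ = np.linalg.qr(rng.normal(size=(m, m)))
    V, _ = np.linalg.qr(rng.normal(size=(n, n)))
    A = U @ np.diag(1.0 / (1.0 + np.arange(n)) ** 2) @ V.T      # ill-conditioned operator
    x_true = V @ (1.0 / (1.0 + np.arange(n)))                   # smooth-ish solution
    b = A @ x_true + 1e-5 * rng.normal(size=m)

    alphas = np.geomspace(1e-12, 1e-1, 45)
    xs = [np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ b) for a in alphas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k_star = int(np.argmin(diffs))                               # quasi-optimality choice
    errors = [np.linalg.norm(x - x_true) for x in xs]
    print("quasi-optimality picks alpha =", alphas[k_star],
          "  error =", errors[k_star], "  best possible error =", min(errors))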
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir
2014-05-01
Contact concentration measurement data assimilation is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimizer of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure of the resulting analysis and reduces the need to calculate model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of the adjoint problem solution. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solution can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate is an upper bound acts as the assimilation parameter. The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme with respect to time. The scheme is based on analytical extraction of the exponential terms from the solution. This provides an unconditionally positive sign for the evaluated concentrations. The splitting-based structure of the algorithm provides means for efficient parallel realization. The work is partially supported by Programs No 4 of the Presidium RAS and No 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004.
Kozłowska, Emilia; Puszynski, Krzysztof
2016-11-07
Many diseases with a genetic background, such as some types of cancer, are caused by damage in the p53 signaling pathway. The damage changes the system dynamics, providing cancer cells with resistance to therapy such as radiation therapy. The change can be observed as a difference in bifurcation diagrams and in equilibria type and location between normal and damaged cells, and summarized as changes of the mathematical model parameters and the consequent changes of the eigenvalues of the Jacobian matrix. Therefore a change in other model parameters, such as mRNA degradation rates, may restore the proper eigenvalues and thereby the proper system dynamics. From the biological point of view, the change of mRNA degradation rate can be achieved by application of small interfering RNA (siRNA). Here, we propose a general mathematical framework based on bifurcation theory and an siRNA-based control signal in order to study how to restore the proper response of cells with a damaged p53 signaling pathway to therapy, using ionizing radiation (IR) therapy as an example. We show the difference between cells with a normal p53 signaling pathway and cells with abnormalities in the negative (as observed in the SJSA-1 cell line) or positive (as observed in the MCF-7 or PNT1a cell lines) feedback loop. Then we show how the dynamics of these cells can be restored to normal cell dynamics by using selected siRNA. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Parrondo's games based on complex networks and the paradoxical effect.
Ye, Ye; Wang, Lu; Xie, Nenggang
2013-01-01
Parrondo's games were first constructed using a simple tossing scenario, which demonstrates the following paradoxical situation: in sequences of games, a winning expectation may be obtained by playing the games in a random order, although each game (game A or game B) in the sequence may result in losing when played individually. Existing Parrondo's games based on the spatial niche (the neighboring environment) are applied to regular networks. The neighbors of each node are the same in regular graphs, whereas they differ in complex networks. Here, a Parrondo's model based on complex networks is proposed, and a structure of game B applicable to arbitrary topologies is constructed. The results confirm that Parrondo's paradox occurs. Moreover, the size of the region of the parameter space that elicits Parrondo's paradox depends on the heterogeneity of the degree distributions of the networks. Higher heterogeneity yields a larger region of the parameter space where the strong paradox occurs. In addition, we use scale-free networks to show that the network size has no significant influence on the region of the parameter space where the strong or weak Parrondo's paradox occurs. The region of the parameter space where the strong Parrondo's paradox occurs shrinks slightly when the average degree of the network increases.
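As an illustration of the paradox itself, here is a hedged sketch of the classic capital-dependent Parrondo games (the simple tossing scenario mentioned above, not the network-based game B of this study); the bias epsilon, the probabilities 1/2, 1/10 and 3/4, and the modulo-3 rule are the textbook choices, assumed for illustration.

```python
import random

def play_parrondo(strategy, epsilon=0.005, rounds=100_000, seed=1):
    """Simulate the classic capital-dependent Parrondo games.

    Game A: win with probability 1/2 - eps.
    Game B: win with prob 1/10 - eps if capital % 3 == 0, else 3/4 - eps.
    strategy: "A", "B", or "random" (pick A or B with equal probability).
    Both A and B lose on their own, while random alternation typically wins.
    """
    rng = random.Random(seed)
    capital = 0
    for _ in range(rounds):
        game = strategy if strategy in ("A", "B") else rng.choice("AB")
        if game == "A":
            p = 0.5 - epsilon
        else:
            p = (0.1 - epsilon) if capital % 3 == 0 else (0.75 - epsilon)
        capital += 1 if rng.random() < p else -1
    return capital

for s in ("A", "B", "random"):
    print(s, play_parrondo(s))
```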
What Drives the Variability of the Mid-Latitude Ionosphere?
NASA Astrophysics Data System (ADS)
Goncharenko, L. P.; Zhang, S.; Erickson, P. J.; Harvey, L.; Spraggs, M. E.; Maute, A. I.
2016-12-01
The state of the ionosphere is determined by the superposition of regular changes and stochastic variations of the ionospheric parameters. Regular variations are represented by diurnal, seasonal and solar-cycle changes, and can be well described by empirical models. Short-term perturbations lasting from a few seconds to a few hours or days can be induced in the ionosphere by solar flares, changes in the solar wind, coronal mass ejections, travelling ionospheric disturbances, or meteorological influences. We use over 40 years of observations by the Millstone Hill incoherent scatter radar (42.6°N, 288.5°E) to develop an updated empirical model of ionospheric parameters, and wintertime data collected in 2004-2016 to study variability in ionospheric parameters. We also use NASA MERRA-2 atmospheric reanalysis data to examine possible connections between the state of the stratosphere and mesosphere and the upper atmosphere (250-400 km). The major sudden stratospheric warming (SSW) of January 2013 is selected for in-depth study and reveals large anomalies in ionospheric parameters. Modeling with the NCAR Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIME-GCM) nudged by a WACCM-GEOS5 simulation indicates that during the 2013 SSW the neutral and ion temperatures in the polar through mid-latitude region deviate from the seasonal behavior.
[Principles of intervertebral disc assessment in private accident insurance].
Steinmetz, M; Dittrich, V; Röser, K
2015-09-01
Due to the widespread occurrence of intervertebral disc degeneration, insurance companies and experts are regularly confronted with related assessments of insured persons under private accident insurance. These claims pose a particular challenge for experts, since, in addition to the clinical assessment of the facts, extensive knowledge of general accident insurance conditions, case law and current study findings is required. Each case can only be properly assessed through simultaneous consideration of both the medical and legal facts. These guidelines serve as the basis for experts and claims managers with respect to the appropriate individual factual assessment of intervertebral disc degeneration in private accident insurance.
Endosseous dental implant vis-à-vis conservative management: Is it a dilemma?
Chandra, Ramesh; Bains, Rhythm; Loomba, Kapil; Pal, U. S.; Ram, Hari; Bains, Vivek K.
2010-01-01
To overview the current perspective on endosseous dental implants versus conservative management. Although emphasis has been placed on reinstating oral function, less consideration has been given to formulating the best treatment tactics in a particular situation. Properly restored, root canal treated natural teeth surrounded by healthy periodontal tissues show very high longevity, and periodontally compromised teeth that are treated and maintained regularly may have a longer survival rate. Current trends in implantology have weakened the conservative paradigm, and practitioners' objectivity has been inclined more toward providing tooth substitutes, often flaunted as equal or even superior to conservation of the natural tooth. PMID:22442546
Relaxation in control systems of subdifferential type
NASA Astrophysics Data System (ADS)
Tolstonogov, A. A.
2006-02-01
In a separable Hilbert space we consider a control system with evolution operators that are subdifferentials of a proper convex lower semicontinuous function depending on time. The constraint on the control is given by a multivalued function with non-convex values that is lower semicontinuous with respect to the state variables. Along with the original system we consider the system in which the constraint on the control is the upper semicontinuous convex-valued regularization of the original constraint. We study relations between the solution sets of these systems. As an application we consider a control variational inequality. We give an example of a control system of parabolic type with an obstacle.
Baryon octet electromagnetic form factors in a confining NJL model
NASA Astrophysics Data System (ADS)
Carrillo-Serrano, Manuel E.; Bentz, Wolfgang; Cloët, Ian C.; Thomas, Anthony W.
2016-08-01
Electromagnetic form factors of the baryon octet are studied using a Nambu-Jona-Lasinio model which utilizes the proper-time regularization scheme to simulate aspects of colour confinement. In addition, the model also incorporates corrections to the dressed quarks from vector meson correlations in the t-channel and the pion cloud. Comparison with recent chiral extrapolations of lattice QCD results shows a remarkable level of consistency. For the charge radii we find the surprising result that r_E^p < r_E^{Σ+} and |r_E^n| < |r_E^{Ξ0}|, whereas the magnetic radii have a pattern largely consistent with a naive expectation based on the dressed quark masses.
On Asymptotic Behaviour and W^{2,p} Regularity of Potentials in Optimal Transportation
NASA Astrophysics Data System (ADS)
Liu, Jiakun; Trudinger, Neil S.; Wang, Xu-Jia
2015-03-01
In this paper we study local properties of cost and potential functions in optimal transportation. We prove that in a proper normalization process, the cost function is uniformly smooth and converges locally smoothly to a quadratic cost x · y, while the potential function converges to a quadratic function. As applications we obtain the interior W^{2,p} estimates and sharp C^{1,α} estimates for the potentials, which satisfy a Monge-Ampère type equation. The W^{2,p} estimate was previously proved by Caffarelli for the quadratic transport cost and the associated standard Monge-Ampère equation.
Joseph, L; Paul, H; Premkumar, J; Paul, R; Michael, J S
2015-01-01
Biomedical waste poses a high potential for infection and injury to healthcare workers, patients and the surrounding community. Awareness programmes for healthcare workers on proper handling and management can prevent the spread of infectious diseases and epidemics. This study was conducted in a tertiary care hospital to assess the impact of training, audits and education/implementation measures from 2009 to 2012 on awareness and practice of biomedical waste segregation. Our study reveals that focused training, strict supervision, daily surveillance, audit inspections, involvement of hospital administrators and regular appraisals are essential to optimise the segregation of biomedical waste.
The global increase in dental caries. A pending public health crisis.
Bagramian, Robert A; Garcia-Godoy, Franklin; Volpe, Anthony R
2009-02-01
A current review of the available epidemiological data from many countries clearly indicates that there is a marked increase in the prevalence of dental caries. This global increase in dental caries prevalence affects children as well as adults, primary as well as permanent teeth, and coronal as well as root surfaces. This increase in dental caries signals a pending public health crisis. Although there are differences of opinion regarding the cause of this global dental caries increase, the remedy is well known: a return to the public health strategies that were so successful in the past, a renewed campaign for water fluoridation, topical fluoride application, the use of fluoride rinses, a return to school oral health educational programs, an emphasis on proper tooth brushing with a fluoride dentifrice, as well as flossing, a proper diet and regular dental office visits. If these remedies are not initiated, there could be a serious negative impact upon the future oral health (and systemic health) of the global community, as well as a strain on the dental profession along with a major increase in the cost of dental services.
Shiraishi, Rikiya; Nishimura, Masaaki; Nakashima, Ryuji; Enta, Chiho; Hirayama, Norio
2014-04-01
In Japan, the import quarantine regulation against rabies has required since 2005 that dogs and cats be inoculated with the rabies vaccine and that the neutralizing antibody titer be confirmed to be at least 0.5 international units (IU)/ml. The fluorescent antibody virus neutralization (FAVN) test is used as an international standard method for serological testing for rabies. To achieve proper immunization of dogs and cats at the time of import and export, changes in the neutralizing antibody titer after inoculation of the rabies vaccine should be understood in detail. However, few reports have provided this information. In this study, we aimed to evaluate such changes by using sera from experimental dogs and cats inoculated with the rabies vaccine, and we tested the samples using the routine FAVN test. In both dogs and cats, proper, regular vaccination enabled the necessary titer of neutralizing antibodies to be maintained in the long term. However, inappropriate timing of blood sampling after vaccination could result in the detected levels of neutralizing antibodies being insufficient.
Vulvar Lichen Sclerosus et Atrophicus
Nair, Pragya Ashok
2017-01-01
Vulvar lichen sclerosus (VLS) is a chronic inflammatory dermatosis characterized by ivory-white plaques or patches with a glistening surface, commonly affecting the vulva and anus. Common symptoms are irritation, soreness, dyspareunia, dysuria, and urinary or fecal incontinence. Anogenital lichen sclerosus (LS) is characterized by porcelain-white atrophic plaques, which may become confluent, extending around the vulval and perianal skin in a figure-of-eight configuration. Thinning and shrinkage of the genital area make coitus, urination, and defecation painful. LS is not uncommon in India and presents as an itchy vulvar dermatosis which a gynecologist may mistake for candidal vulvovaginitis. There is often a delay in the diagnosis of VLS due to its asymptomatic nature and lack of awareness in patients as well as physicians. Embarrassment of patients due to the private nature of the disease and failure to examine the genital skin properly are other reasons for delay in diagnosis. There is no curative treatment for LS. The various medications available only relieve the symptoms. The chronic nature of the disease affects the quality of life. Proper and regular follow-up is required as there is a risk of development of squamous cell carcinoma. PMID:28706405
Wang, Kewu; Xiao, Shengxiang; Jiang, Lina; Hu, Jingkai
2017-09-30
To regularly verify the performance parameters of automated external defibrillators (AEDs) and to ensure an instrument is safe before use, a system for detecting AED performance parameters was researched and designed. Based on an analysis of the characteristics of these performance parameters, and combining the stability and high speed of the STM32 with PWM modulation control, the system generates a variety of normal and abnormal ECG signals through digital sampling methods. The hardware and software were designed and a prototype was built. The system can accurately detect the AED discharge energy, synchronous defibrillation time, charging time and other key performance parameters.
Gap probability - Measurements and models of a pecan orchard
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI
1992-01-01
Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown-level parameters include the shape of the crown envelope and spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
Effect of Physical and Academic Stress on Illness and Injury in Division 1 College Football Players.
Mann, J Bryan; Bryant, Kirk R; Johnstone, Brick; Ivey, Patrick A; Sayers, Stephen P
2016-01-01
Stress-injury models of health suggest that athletes experience more physical injuries during times of high stress. The purpose of this study was to evaluate the effect of increased physical and academic stress on injury restrictions for athletes (n = 101) on a division I college football team. Weeks of the season were categorized into 3 levels: high physical stress (HPS) (i.e., preseason), high academic stress (HAS) (i.e., weeks with regularly scheduled examinations such as midterms, finals, and week before Thanksgiving break), and low academic stress (LAS) (i.e., regular season without regularly scheduled academic examinations). During each week, we recorded whether a player had an injury restriction, thereby creating a longitudinal binary outcome. The data were analyzed using a hierarchical logistic regression model to properly account for the dependency induced by the repeated observations over time within each subject. Significance for regression models was accepted at p ≤ 0.05. We found that the odds of an injury restriction during training camp (HPS) were the greatest compared with weeks of HAS (odds ratio [OR] = 2.05, p = 0.0003) and LAS (OR = 3.65, p < 0.001). However, the odds of an injury restriction during weeks of HAS were nearly twice as high as during weeks of LAS (OR = 1.78, p = 0.0088). Moreover, the difference in injury rates reported in all athletes during weeks of HPS and weeks of HAS disappeared when considering only athletes that regularly played in games (OR = 1.13, p = 0.75) suggesting that HAS may affect athletes that play to an even greater extent than HPS. Coaches should be aware of both types of stressors and consider carefully the types of training methods imposed during times of HAS when injuries are most likely.
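As a sketch of how repeated binary injury outcomes can be analyzed while accounting for within-player correlation, the following uses a marginal GEE logistic model with an exchangeable working correlation, which is a related but different technique from the hierarchical logistic regression reported in the study; the data frame, column names, and simulated outcomes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical weekly records: one row per player per week.
rng = np.random.default_rng(0)
n_players, n_weeks = 101, 16
df = pd.DataFrame({
    "player": np.repeat(np.arange(n_players), n_weeks),
    "stress": np.tile(rng.choice(["LAS", "HAS", "HPS"], size=n_weeks), n_players),
})
df["restricted"] = rng.binomial(1, 0.1, size=len(df))   # placeholder outcome

model = sm.GEE.from_formula(
    "restricted ~ C(stress, Treatment(reference='LAS'))",
    groups="player",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),             # within-player correlation
)
result = model.fit()
print(np.exp(result.params))   # odds ratios relative to low academic stress
```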
Ujihara, Yoshihiro; Mohri, Satoshi; Katanosaka, Yuki
2016-11-25
The Na+/Ca2+ exchanger 1 (NCX1) is an essential Ca2+ efflux system in cardiomyocytes. Although NCX1 is distributed throughout the sarcolemma, a subpopulation of NCX1 is localized to transverse (T)-tubules. There is growing evidence that T-tubule disorganization is a causal event in the transition from hypertrophy to heart failure (HF). However, the detailed molecular mechanisms have not been clarified. Previously, we showed that induced NCX1 expression in pressure-overloaded hearts attenuates defective excitation-contraction coupling and HF progression. Here, we examined the effects of induced NCX1 overexpression on the spatial distribution of L-type Ca2+ channels (LTCCs) and junctophilin-2 (JP2), a structural protein that connects the T-tubule and sarcoplasmic reticulum membranes, in pressure-overloaded hearts. Quantitative analysis showed that the regularity of NCX1 localization was significantly decreased at 8 weeks after transverse aortic constriction (TAC) surgery; however, T-tubule organization and the regularities of LTCC and JP2 immunofluorescent signals were maintained at this time point. These observations demonstrated that release of NCX1 from the T-tubule area occurred before the onset of T-tubule disorganization and LTCC and JP2 mislocalization. Moreover, induced NCX1 overexpression at 8 weeks post-TAC not only recovered NCX1 regularity but also prevented the decrease in LTCC and JP2 regularities at 16 weeks post-TAC. These results suggested that NCX1 may play an important role in the proper spatial distribution of LTCC and JP2 in T-tubules in the context of pressure overload. Copyright © 2016 Elsevier Inc. All rights reserved.
A Learning-Style Theory for Understanding Autistic Behaviors
Qian, Ning; Lipkin, Richard M.
2011-01-01
Understanding autism's ever-expanding array of behaviors, from sensation to cognition, is a major challenge. We posit that autistic and typically developing brains implement different algorithms that are better suited to learn, represent, and process different tasks; consequently, they develop different interests and behaviors. Computationally, a continuum of algorithms exists, from lookup table (LUT) learning, which aims to store experiences precisely, to interpolation (INT) learning, which focuses on extracting underlying statistical structure (regularities) from experiences. We hypothesize that autistic and typical brains, respectively, are biased toward LUT and INT learning, in low- and high-dimensional feature spaces, possibly because of their narrow and broad tuning functions. The LUT style is good at learning relationships that are local, precise, rigid, and contain little regularity for generalization (e.g., the name–number association in a phonebook). However, it is poor at learning relationships that are context dependent, noisy, flexible, and do contain regularities for generalization (e.g., associations between gaze direction and intention, language and meaning, sensory input and interpretation, motor-control signal and movement, and social situation and proper response). The LUT style poorly compresses information, resulting in inefficiency, sensory overload (overwhelm), restricted interests, and resistance to change. It also leads to poor prediction and anticipation, frequent surprises and over-reaction (hyper-sensitivity), impaired attentional selection and switching, concreteness, strong local focus, weak adaptation, and superior and inferior performances on simple and complex tasks. The spectrum nature of autism can be explained by different degrees of LUT learning among different individuals, and in different systems of the same individual. Our theory suggests that therapy should focus on training autistic LUT algorithm to learn regularities. PMID:21886617
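A toy sketch (our illustration, not the authors' model) of the two ends of the proposed continuum: a lookup-table learner that stores experiences exactly and answers with the nearest stored item, versus an interpolation learner that extracts the underlying regularity and generalizes to unseen inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, 30)
y_train = 2.0 * x_train + rng.normal(0, 1, 30)    # noisy linear regularity

# LUT-style learner: store experiences precisely, answer with the nearest stored item.
def lut_predict(x):
    return y_train[np.argmin(np.abs(x_train - x))]

# INT-style learner: extract the underlying regularity (here, a linear fit).
slope, intercept = np.polyfit(x_train, y_train, 1)
def int_predict(x):
    return slope * x + intercept

x_new = 4.37              # an input never seen during "learning"
print(lut_predict(x_new), int_predict(x_new))
```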
Differences in foot self-care and lifestyle between men and women with diabetes mellitus 1
Rossaneis, Mariana Angela; Haddad, Maria do Carmo Fernandez Lourenço; Mathias, Thaís Aidar de Freitas; Marcon, Sonia Silva
2016-01-01
Objective: to investigate differences with regard to foot self-care and lifestyle between men and women with diabetes mellitus. Method: cross-sectional study conducted in a sample of 1,515 individuals with diabetes mellitus aged 40 years old or older. Poisson regression models were used to identify differences in foot self-care deficit and lifestyle between sexes, adjusting for socioeconomic and clinical characteristics, smoking and alcohol consumption. Results: foot self-care deficit, characterized by not regularly drying between toes; not regularly checking feet; walking barefoot; poor hygiene and inappropriately trimmed nails, was significantly higher among men, though men presented a lower prevalence of feet scaling and use of inappropriate shoes when compared to women. With regard to lifestyle, men presented less healthy habits, such as not adhering to a proper diet and not undergoing laboratory tests of their lipid profile at the recommended frequency. Conclusion: the nursing team should take into account gender differences concerning foot self-care and lifestyle when implementing educational activities and interventions intended to decrease risk factors for foot ulceration. PMID:27533270
NASA Astrophysics Data System (ADS)
Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao
2013-12-01
A popular approach for medical image reconstruction has been through sparsity regularization, assuming the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system due to its capability for sparsely approximating piecewise-smooth functions, such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to the structures of the targeted images have demonstrated their superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs a task-specific adaptive wavelet tight frame, and then reconstructs the image of interest by solving an ℓ1-regularized minimization problem using the constructed adaptive tight frame system. A proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves the reconstructed CT image quality over the traditional tight frame method.
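A minimal sketch of the kind of ℓ1-regularized minimization involved, using plain ISTA (iterative soft-thresholding) on a synthetic sparse-recovery problem. The adaptive tight-frame construction itself is not reproduced; in the actual method the sparse variable would be the frame coefficients and A the forward (e.g., CT) operator composed with the synthesis transform. All names below are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Hypothetical small problem: sparse signal, underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, b, lam=0.1)
```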
Field experience and performance evaluation of a medium-concentration CPV system
NASA Astrophysics Data System (ADS)
Norton, Matthew; Bentley, Roger; Georghiou, George E.; Chonavel, Sylvain; De Mutiis, Alfredo
2012-10-01
With the aim of gaining experience and performance data from a location with a harsh summer climate, a 70× concentrating photovoltaic (CPV) system was installed in January 2009 in Nicosia, Cyprus. The performance of this system has been monitored using regular current-voltage characterisations for three years. Over this period, the output of the system has remained fairly constant. Measured performance ratios varied from 0.79 to 0.86 in the winter, but fell to 0.64 over the year when the system was left uncleaned. Operating cell temperatures were modeled and found to be similar to those of flat plate modules. The most significant causes of energy loss have been identified as originating from tracking issues and soiling. Losses due to soiling could account for a drop in output of 0.2% per day. When cleaned and properly oriented, the normalized output of the system has remained constant, suggesting that this particular design is tolerant to the physical strain of long-term outdoor exposure in harsh summer conditions. Regular cleaning and reliable tracker operation are shown to be essential for maximizing energy yield.
Spatially multiplexed interferometric microscopy with partially coherent illumination
NASA Astrophysics Data System (ADS)
Picazo-Bueno, José Ángel; Zalevsky, Zeev; García, Javier; Ferreira, Carlos; Micó, Vicente
2016-10-01
We have recently reported on a simple, low cost, and highly stable way to convert a standard microscope into a holographic one [Opt. Express 22, 14929 (2014)]. The method, named spatially multiplexed interferometric microscopy (SMIM), proposes an off-axis holographic architecture implemented onto a regular (nonholographic) microscope with minimum modifications: the use of coherent illumination and a properly placed and selected one-dimensional diffraction grating. In this contribution, we report on the implementation of partially (temporally reduced) coherent illumination in SMIM as a way to improve quantitative phase imaging. The use of low coherence sources forces the application of phase shifting algorithm instead of off-axis holographic recording to recover the sample's phase information but improves phase reconstruction due to coherence noise reduction. In addition, a less restrictive field of view limitation (1/2) is implemented in comparison with our previously reported scheme (1/3). The proposed modification is experimentally validated in a regular Olympus BX-60 upright microscope considering a wide range of samples (resolution test, microbeads, swine sperm cells, red blood cells, and prostate cancer cells).
Multitask SVM learning for remote sensing data classification
NASA Astrophysics Data System (ADS)
Leiva-Murillo, Jose M.; Gómez-Chova, Luis; Camps-Valls, Gustavo
2010-10-01
Many remote sensing data processing problems are inherently constituted by several tasks that can be solved either individually or jointly. For instance, each image in a multitemporal classification setting could be taken as an individual task, but its relation to previous acquisitions should be properly considered. In such problems, different modalities of the data (temporal, spatial, angular) give rise to changes between the training and test distributions, which constitutes a difficult learning problem known as covariate shift. Multitask learning methods aim at jointly solving a set of prediction problems in an efficient way by sharing information across tasks. This paper presents a novel kernel method for multitask learning in remote sensing data classification. The proposed method alleviates the dataset shift problem by imposing cross-information in the classifiers through matrix regularization. We consider the support vector machine (SVM) as the core learner, and two regularization schemes are introduced: 1) the Euclidean distance of the predictors in the Hilbert space; and 2) the inclusion of relational operators between tasks. Experiments are conducted on the challenging remote sensing problems of cloud screening from multispectral MERIS images and of landmine detection.
Quadratic semiparametric Von Mises calculus
Robins, James; Li, Lingling; Tchetgen, Eric
2009-01-01
We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second order derivatives of this parameter. For parameters for which the matching cannot be perfect the method leads to a bias-variance trade-off, and results in estimators that converge at a rate slower than n^{-1/2}. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at the n^{-1/2} rate. PMID:23087487
NASA Astrophysics Data System (ADS)
Zhang, Xian-tao; Yang, Jian-min; Xiao, Long-fei
2016-07-01
Floating oscillating bodies constitute a large class of wave energy converters, especially for offshore deployment. Usually the Power-Take-Off (PTO) system is a direct linear electric generator or a hydraulic motor that drives an electric generator, and the PTO system is simplified as a linear spring and a linear damper. However, the conversion is less powerful at wave periods off resonance. Thus, a nonlinear snap-through mechanism with two symmetrically oblique springs and a linear damper is applied in the PTO system. The nonlinear snap-through mechanism is characterized by negative stiffness and a double-well potential. An important nonlinear parameter γ is defined as the ratio of half of the horizontal distance between the two springs to the original length of both springs. A time-domain method is applied to the dynamics of the wave energy converter in regular waves, and a state-space model is used to replace the convolution terms in the time-domain equation. The results show that the energy harvested by the nonlinear PTO system is larger than that by the linear system for low-frequency input, while the power captured by nonlinear converters is slightly smaller than that by linear converters for high-frequency input. The wave amplitude, the damping coefficient of the PTO system and the nonlinear parameter γ affect the power capture performance of nonlinear converters. The oscillation of nonlinear wave energy converters may be confined to one well (local) or move periodically between wells (inter-well) for certain values of the incident wave frequency and the nonlinear parameter γ, which differs from the sinusoidal response characteristic of linear converters in regular waves.
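For intuition, a hedged sketch of a common two-oblique-spring snap-through model (a generic formulation under stated assumptions, not necessarily the paper's exact PTO model): for γ = d/L0 < 1 the stiffness at the origin is negative and the potential has two wells, with stable equilibria at x = ±L0·sqrt(1 − γ²). The stiffness k, natural length L0, and γ value below are illustrative.

```python
import numpy as np

def snap_through_force(x, k=100.0, L0=1.0, gamma=0.7):
    """Restoring force of two symmetric oblique springs (a common snap-through model).

    gamma = d / L0, where d is half the horizontal distance between the anchor
    points and L0 the springs' natural length. For gamma < 1 the stiffness at
    x = 0 is negative and the potential is double-welled.
    """
    d = gamma * L0
    return -2.0 * k * x * (1.0 - L0 / np.sqrt(d**2 + x**2))

gamma = 0.7
x_eq = np.sqrt(1.0 - gamma**2)          # stable equilibria for L0 = 1
for x in (-x_eq, 0.0, 0.3, x_eq):
    print(f"x = {x:+.3f}, force = {snap_through_force(x, gamma=gamma):+.3f}")
```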
NASA Astrophysics Data System (ADS)
Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad
2017-12-01
This article offers a study on the estimation of heat transfer parameters (heat transfer coefficient and thermal diffusivity) using analytical solutions and experimental data for regular geometric shapes (such as an infinite slab, an infinite cylinder, and a sphere). Analytical solutions are broadly used in experimentally determining these parameters. Here, the method of Finite Integral Transform (FIT) was used for the solution of the governing differential equations. The temperature change at the centerline of the regular shapes was recorded to determine both the thermal diffusivity and the heat transfer coefficient. Aluminum and brass were used for testing. Experiments were performed for different conditions, such as in a highly agitated water medium (T = 52 °C) and in air (T = 25 °C). Then, with the known slope of the temperature ratio vs. time curve and the thickness of the slab or the radius of the cylindrical or spherical specimens, the thermal diffusivity and heat transfer coefficient may be determined. According to the method presented in this study, the estimated thermal diffusivities of aluminum and brass are 8.395 × 10^-5 and 3.42 × 10^-5 m^2/s for a slab, 8.367 × 10^-5 and 3.41 × 10^-5 m^2/s for a cylindrical rod, and 8.385 × 10^-5 and 3.40 × 10^-5 m^2/s for a spherical shape, respectively. The results showed close agreement between the values estimated here and those already published in the literature. The TAAD% is 0.42 and 0.39 for the thermal diffusivity of aluminum and brass, respectively.
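A minimal sketch of the slope-based step under simplifying assumptions (one-term regime and negligible surface resistance, so the first slab eigenvalue is π/2); the study's actual procedure handles finite Biot numbers and also estimates the heat transfer coefficient. The times, temperatures, and half-thickness below are hypothetical.

```python
import numpy as np

def diffusivity_from_slope(t, theta, half_thickness):
    """Estimate slab thermal diffusivity from centerline temperature data.

    theta = (T - T_inf) / (T_0 - T_inf). Assumes the one-term regime and
    negligible surface resistance (Bi -> infinity), so the decay rate is
    alpha * (pi / (2*L))**2, i.e. alpha = -slope * (2*L/pi)**2.
    """
    slope, _ = np.polyfit(t, np.log(theta), 1)   # slope of ln(theta) vs time
    return -slope * (2.0 * half_thickness / np.pi) ** 2

# Hypothetical data consistent with alpha ~ 8.4e-5 m^2/s and L = 0.01 m.
alpha_true, L = 8.4e-5, 0.01
t = np.linspace(0.2, 2.0, 20)
theta = (4 / np.pi) * np.exp(-alpha_true * (np.pi / (2 * L)) ** 2 * t)
print(diffusivity_from_slope(t, theta, L))
```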
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
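A minimal NumPy sketch of the core Soft-Impute iteration described above (dense SVD, no warm starts, and none of the linear-complexity structure exploited in the paper); the matrix sizes and shrinkage value are hypothetical.

```python
import numpy as np

def soft_impute(X, lam, n_iter=100):
    """Soft-Impute: fill missing entries (NaN) of X via soft-thresholded SVDs.

    Iterates: (1) plug the current estimate into the missing positions,
    (2) take an SVD and shrink the singular values by lam (nuclear-norm prox).
    """
    mask = ~np.isnan(X)
    Z = np.where(mask, X, 0.0)                  # initial fill with zeros
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s_shrunk = np.maximum(s - lam, 0.0)     # soft-threshold singular values
        Z = (U * s_shrunk) @ Vt
    return Z

# Hypothetical small low-rank completion problem.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))   # rank-4 matrix
X = M.copy()
X[rng.random(M.shape) < 0.4] = np.nan                             # 40% missing
M_hat = soft_impute(X, lam=0.5)
```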
A fractional-order accumulative regularization filter for force reconstruction
NASA Astrophysics Data System (ADS)
Wensong, Jiang; Zhongyu, Wang; Jing, Lv
2018-02-01
The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data-refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to improve the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to other cases of the FARF method and to traditional regularization methods for dynamic force reconstruction.
Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids
NASA Astrophysics Data System (ADS)
Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.
2017-12-01
Stabilized gradient-based methods have proven to be efficient for inverse problems. Based on these methods, setting the gradient close to zero effectively minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of poor depth resolution in gradient-based gravity inversion methods, we find that imposing a depth weighting function in the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown as Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. In addition, the fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown as Figure 1(b)). Acknowledgement: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and Key National Research Projects of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
Characterizing the functional MRI response using Tikhonov regularization.
Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E
2007-09-20
The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. 2007 John Wiley & Sons, Ltd.
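A generic sketch of generalized cross-validation for choosing a Tikhonov parameter from one SVD; in the paper the design matrix would come from the B-spline expansion, which is not reproduced here, and the matrix, data, and grid of candidate parameters below are hypothetical.

```python
import numpy as np

def gcv_lambda(H, y, lambdas):
    """Pick the Tikhonov parameter minimizing generalized cross-validation.

    GCV(lam) = ||y - H x_lam||^2 / (m - trace(influence matrix))^2,
    computed cheaply from one SVD of H.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Uty = U.T @ y
    m = len(y)
    best_lam, best_gcv = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam)                 # Tikhonov filter factors
        x = Vt.T @ (f * Uty / s)                # regularized solution
        resid = np.linalg.norm(y - H @ x) ** 2
        gcv = resid / (m - f.sum()) ** 2
        if gcv < best_gcv:
            best_lam, best_gcv = lam, gcv
    return best_lam

# Hypothetical smoothing problem (a B-spline design matrix would play H's role).
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 30))
y = H @ np.sin(np.linspace(0, 3, 30)) + 0.1 * rng.standard_normal(100)
lam = gcv_lambda(H, y, np.logspace(-6, 2, 50))
```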
NASA Astrophysics Data System (ADS)
Adavi, Zohre; Mashhadi-Hossainali, Masoud
2015-04-01
Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, which is due to atmospheric phenomena above the surface of the earth, depends on both space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. By analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals, inversion techniques are used in this approach to model the water vapor. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used for computing a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques. This method benefits from the advantages of both iterative and direct techniques, and it is independent of initial values. Based on this property and using an appropriate resolution for the model, the number of model elements that are not constrained by GPS measurements is first minimized; water vapor density is then estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
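A minimal sketch of damped (Tikhonov-regularized) LSQR using SciPy's `lsqr`, whose `damp` argument makes it solve min ||Ax − b||² + damp²||x||²; the sparse matrix, data, and damping value are hypothetical stand-ins for the tomographic system, and the hybrid voxel-selection strategy of the paper is not reproduced.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Hypothetical tomography-like system: sparse design matrix A (ray lengths per
# voxel) and observations b (slant delays). `damp` adds Tikhonov regularization
# inside LSQR.
rng = np.random.default_rng(0)
A = sparse_random(200, 500, density=0.05, random_state=0, format="csr")
x_true = np.abs(rng.standard_normal(500))      # nonnegative "water vapor densities"
b = A @ x_true + 0.01 * rng.standard_normal(200)

x_est = lsqr(A, b, damp=0.1)[0]                # regularized LSQR solution
```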
Photovoltaic characteristics of diffused P/+N bulk GaAs solar cells
NASA Technical Reports Server (NTRS)
Borrego, J. M.; Keeney, R. P.; Bhat, I. B.; Bhat, K. N.; Sundaram, L. G.; Ghandhi, S. K.
1982-01-01
The photovoltaic characteristics of P(+)N junction solar cells fabricated on bulk GaAs by an open tube diffusion technique are described in this paper. Spectral response measurements were analyzed in detail and compared to a computer simulation in order to determine important material parameters. It is projected that proper optimization of the cell parameters can increase the efficiency of the cells from 12.2 percent to close to 20 percent.
Automated dynamic analytical model improvement for damped structures
NASA Technical Reports Server (NTRS)
Fuh, J. S.; Berman, A.
1985-01-01
A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, without involving eigensolutions or inversion of a large matrix.
Early diagnosis of lymph node metastasis: Importance of intranodal pressures.
Miura, Yoshinobu; Mikada, Mamoru; Ouchi, Tomoki; Horie, Sachiko; Takeda, Kazu; Yamaki, Teppei; Sakamoto, Maya; Mori, Shiro; Kodama, Tetsuya
2016-03-01
Regional lymph node status is an important prognostic indicator of tumor aggressiveness. However, early diagnosis of metastasis using intranodal pressure, at a stage when lymph node size has not changed significantly, has not been investigated. Here, we use an MXH10/Mo-lpr/lpr mouse model of lymph node metastasis to show that intranodal pressure increases in both the subiliac lymph node and proper axillary lymph node, which are connected by lymphatic vessels, when tumor cells are injected into the subiliac lymph node to induce metastasis to the proper axillary lymph node. We found that intranodal pressure in the subiliac lymph node increased at the stage when metastasis was detected by in vivo bioluminescence, but when proper axillary lymph node volume (measured by high-frequency ultrasound imaging) had not increased significantly. Intravenously injected liposomes, encapsulating indocyanine green, were detected in solid tumors by in vivo bioluminescence, but not in the proper axillary lymph node. Basic blood vessel and lymphatic channel structures were maintained in the proper axillary lymph node, although sinus histiocytosis was detected. These results show that intranodal pressure in the proper axillary lymph node increases at early stages when metastatic tumor cells have not fully proliferated. Intranodal pressure may be a useful parameter for facilitating early diagnosis of lymph node metastasis. © 2015 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.
Mattsson, Elisabet; Funkquist, Eva-Lotta; Wickström, Maria; Nyqvist, Kerstin H; Volgsten, Helena
2015-04-01
To compare the influence of supplementary artificial milk feeds on breast feeding and certain clinical parameters among healthy late preterm infants given regular supplementary artificial milk feeds versus being exclusively breast fed from birth. A comparative study using quantitative methods. Data were collected via a parental diary and medical records. Parents of 77 late preterm infants (34 5/7-36 6/7 weeks), whose mothers intended to breast feed, completed a diary during the infants' hospital stay. Infants who received regular supplementary artificial milk feeds experienced a longer delay before initiation of breast feeding, were breast fed less frequently and had longer hospital stays than infants exclusively breast fed from birth. Exclusively breast-fed infants had a greater weight loss than infants with regular artificial milk supplementation. A majority of the mothers (65%) with an infant prescribed artificial milk never expressed their milk and among the mothers who used a breast-pump, milk expression commenced late (10-84 hours after birth). At discharge, all infants were breast fed to some extent, 43% were exclusively breast fed. Clinical practice and routines influence the initiation of breast feeding among late preterm infants and may act as barriers to the mothers' establishment of exclusive breast feeding. Copyright © 2015 Elsevier Ltd. All rights reserved.
Joint image registration and fusion method with a gradient strength regularization
NASA Astrophysics Data System (ADS)
Lidong, Huang; Wei, Zhao; Jun, Wang
2015-05-01
Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of the ML approach. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS brings a clearer fused image and a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
Regularized solution of a nonlinear problem in electromagnetic sounding
NASA Astrophysics Data System (ADS)
Piero Deidda, Gian; Fenu, Caterina; Rodriguez, Giuseppe
2014-12-01
Non-destructive investigation of soil properties is crucial when trying to identify inhomogeneities in the ground or the presence of conductive substances. This kind of survey can be addressed with the aid of electromagnetic induction measurements taken with a ground conductivity meter. In this paper, starting from electromagnetic data collected by this device, we reconstruct the electrical conductivity of the soil with respect to depth with the aid of a regularized damped Gauss-Newton method. We propose an inversion method based on a low-rank approximation of the Jacobian of the function to be inverted, for which we develop exact analytical formulae. The algorithm chooses a relaxation parameter in order to ensure the positivity of the solution and implements various methods for the automatic estimation of the regularization parameter. This leads to a fast and reliable algorithm, which is tested in numerical experiments on both synthetic data sets and field data. The results show that the algorithm produces reasonable solutions in the case of synthetic data sets, even in the presence of a noise level consistent with real applications, and yields results that are compatible with those obtained by electrical resistivity tomography in the case of field data. Research supported in part by Regione Sardegna grant CRP2_686.
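A generic sketch of a regularized, damped Gauss-Newton iteration of the kind described; the paper's exact analytical Jacobian, positivity-preserving relaxation, and automatic regularization-parameter estimation are not reproduced, and the toy forward model below is hypothetical.

```python
import numpy as np

def damped_gauss_newton(f, jac, x0, d, lam=1e-2, n_iter=30, step_shrink=0.5):
    """Regularized (damped) Gauss-Newton iteration for fitting d = f(x).

    Each update solves (J^T J + lam*I) dx = J^T (d - f(x)); the step is
    halved until the misfit decreases (simple damping).
    """
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        r = d - f(x)
        J = jac(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), J.T @ r)
        step = 1.0
        while np.linalg.norm(d - f(x + step * dx)) > np.linalg.norm(r) and step > 1e-4:
            step *= step_shrink
        x = x + step * dx
    return x

# Hypothetical toy model: recover amplitude and decay rate from noisy exponentials.
t = np.linspace(0, 1, 50)
def f(p):
    return p[0] * np.exp(-p[1] * t)
def jac(p):
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

rng = np.random.default_rng(0)
d = f([2.0, 3.0]) + 0.01 * rng.standard_normal(t.size)
print(damped_gauss_newton(f, jac, [1.0, 1.0], d))
```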
Density of Asphalt Concrete - How Much is Needed?
DOT National Transportation Integrated Search
1990-01-01
Density is one of the most important parameters in construction of asphalt mixtures. A mixture that is properly designed and compacted will contain enough air voids to prevent rutting due to plastic flow but low enough air voids to prevent perm...
Radar systems for the water resources mission, volume 3
NASA Technical Reports Server (NTRS)
Moore, R. K.; Claassen, J. P.; Erickson, R. L.; Fong, R. K. T.; Hanson, B. C.; Komen, M. J.; Mcmillan, S. B.; Parashar, S. K.
1976-01-01
Recent work in the field of remote sensing of soil moisture was reviewed. The target parameters necessary for optimum data retrieval were identified, and proper sensor instrumentation to achieve this goal was recommended.
Energy and momentum analysis of the deployment dynamics of nets in space
NASA Astrophysics Data System (ADS)
Botta, Eleonora M.; Sharf, Inna; Misra, Arun K.
2017-11-01
In this paper, the deployment dynamics of nets in space is investigated through a combination of analysis and numerical simulations. The considered net is deployed by ejecting several corner masses, relying on momentum and energy transfer from these masses to the innermost threads of the net. In this study, the net is modeled with a lumped-parameter approach, and assumed to be symmetrical, subject to symmetrical initial conditions, and initially slack. The work-energy and momentum conservation principles are employed to carry out a centroidal analysis of the net, by conceptually partitioning the net into a system of corner masses and the net proper and applying the aforementioned principles to the corresponding centers of mass. The analysis provides bounds on the values that the velocity of the center of mass of the corner masses and the velocity of the center of mass of the net proper can individually attain, as well as relationships between these and the different energy contributions. The analytical results allow the identification of key parameters characterizing the deployment dynamics of nets in space, which include the ratio between the mass of the corner masses and the total mass, the initial linear momentum, and the direction of the initial velocity vectors. Numerical tools are employed to validate and further interpret the analytical observations. Comparison of deployment results with and without initial velocity of the net proper suggests that more complete and lasting deployment can be achieved if the corner masses alone are ejected. A sensitivity study is performed for the key parameters identified from the energy/momentum analysis, and the outcome establishes that more lasting deployment and safer capture (i.e., characterized by a larger traveled distance) can be achieved by employing reasonably lightweight corner masses, moderate shooting angles, and low shooting velocities. A comparison with the current literature on tether-nets for space debris capture confirms overall agreement on the importance and effect of the relevant inertial and ejection parameters on the deployment dynamics.
Selection of the battery pack parameters for an electric vehicle based on performance requirements
NASA Astrophysics Data System (ADS)
Koniak, M.; Czerepicki, A.
2017-06-01
Each type of vehicle has specific power requirements. Some require rapid charging, others cover long distances between charges, but a common requirement is the longest possible battery lifetime. Additionally, the battery is influenced by factors such as temperature, depth of discharge and operating current. The article contains the parameters of chemical cells that should be taken into account during the design of a battery for a specific application. This is particularly important because improperly matched batteries can wear prematurely and cause additional costs. The method of selecting the correct cell type should take the previously discussed features and the operating characteristics of the vehicle into account. The authors present methods of obtaining such characteristics along with their assessment and examples. An example of battery parameter selection based on the design assumptions of the vehicle and the expected performance characteristics is also described. Selecting proper battery operating parameters is important due to their impact on the economic result of investments in electric vehicles. For example, for some Li-Ion technologies, premature wear-out of the batteries in a fleet of cruise boats or buses with an estimated lifetime of 10 years is not acceptable, because this would cause substantial financial losses for the owner of the rolling stock. The presented method of choosing the right cell technology for the selected application can be the basis for deciding on future battery technical parameters.
Optimization and evaluation of metal injection molding by using X-ray tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Shidi; Zhang, Ruijie; Qu, Xuanhui, E-mail: quxh@ustb.edu.cn
2015-06-15
6061 aluminum alloy and 316L stainless steel green bodies were obtained by using different injection parameters (injection pressure, speed and temperature). After the injection process, the green bodies were scanned by X-ray tomography. The projection and reconstruction images show the different kinds of defects produced by improper injection parameters. Then, 3D rendering of the Al alloy green bodies was used to demonstrate the spatial morphology characteristics of the serious defects. Based on the scanned and calculated results, it is convenient to obtain the proper injection parameters for the Al alloy. The reasons for defect formation were then discussed. During mold filling, the serious defects mainly formed in the case of low injection temperature and high injection speed. According to the gray value distribution of the projection image, a threshold gray value was obtained to evaluate whether the quality of a green body meets the desired standard. The proper injection parameters of 316L stainless steel can be obtained efficiently by using the method of analyzing the Al alloy injection. - Highlights: • Different types of defects in green bodies were scanned by using X-ray tomography. • Reasons for the defect formation were discussed. • Optimization of the injection parameters can be simplified greatly by means of X-ray tomography. • An evaluation standard for the injection process can be obtained by using the gray value distribution of the projection image.
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition, including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA); it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
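A hedged sketch of the RDA idea (Friedman-style blending of class covariances toward the pooled covariance, plus an identity shrinkage), without the Gabor features, boosting, or PSO tuning used in the paper; the data, class labels, and blending values are hypothetical.

```python
import numpy as np

def rda_covariances(X, y, lam, gamma):
    """Regularized discriminant analysis covariances.

    lam blends each class covariance toward the pooled covariance (QDA -> LDA);
    gamma further shrinks toward a scaled identity for ill-posed cases.
    """
    pooled = np.cov(X.T, bias=True)
    covs = {}
    for c in np.unique(y):
        Sc = np.cov(X[y == c].T, bias=True)
        S = (1 - lam) * Sc + lam * pooled
        S = (1 - gamma) * S + gamma * np.trace(S) / X.shape[1] * np.eye(X.shape[1])
        covs[c] = S
    return covs

def rda_predict(X, means, covs, priors):
    """Assign each row to the class maximizing the Gaussian discriminant score."""
    classes = list(means)
    scores = []
    for c in classes:
        diff = X - means[c]
        inv = np.linalg.inv(covs[c])
        _, logdet = np.linalg.slogdet(covs[c])
        g = (-0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)
             - 0.5 * logdet + np.log(priors[c]))
        scores.append(g)
    return np.array(classes)[np.argmax(np.vstack(scores), axis=0)]

# Hypothetical two-class toy data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(1.5, 1, (40, 5))])
y = np.array([0] * 40 + [1] * 40)
means = {c: X[y == c].mean(axis=0) for c in (0, 1)}
priors = {c: np.mean(y == c) for c in (0, 1)}
covs = rda_covariances(X, y, lam=0.5, gamma=0.1)
pred = rda_predict(X, means, covs, priors)
```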
Hadamard States for the Klein-Gordon Equation on Lorentzian Manifolds of Bounded Geometry
NASA Astrophysics Data System (ADS)
Gérard, Christian; Oulghazi, Omar; Wrochna, Michał
2017-06-01
We consider the Klein-Gordon equation on a class of Lorentzian manifolds with Cauchy surface of bounded geometry, which is shown to include examples such as exterior Kerr, Kerr-de Sitter spacetime and the maximal globally hyperbolic extension of the Kerr outer region. In this setup, we give an approximate diagonalization and a microlocal decomposition of the Cauchy evolution using a time-dependent version of the pseudodifferential calculus on Riemannian manifolds of bounded geometry. We apply this result to construct all pure regular Hadamard states (and associated Feynman inverses), where regular refers to the state's two-point function having Cauchy data given by pseudodifferential operators. This allows us to conclude that there is a one-parameter family of elliptic pseudodifferential operators that encodes both the choice of (pure, regular) Hadamard state and the underlying spacetime metric.
Constraints for transonic black hole accretion
NASA Technical Reports Server (NTRS)
Abramowicz, Marek A.; Kato, Shoji
1989-01-01
Regularity conditions and global topological constraints leave some forbidden regions in the parameter space of transonic accretion of isothermal, rotating matter onto black holes. Unstable flows occupy regions touching the boundaries of the forbidden regions. The astrophysical consequences of these results are discussed.
Classifying orbits in galaxy models with a prolate or an oblate dark matter halo component
NASA Astrophysics Data System (ADS)
Zotos, Euaggelos E.
2014-03-01
Aims: The distinction between regular and chaotic motion in galaxies is undoubtedly an issue of paramount importance. We explore the nature of orbits of stars moving in the meridional plane (R,z) of an axially symmetric galactic model with a disk, a spherical nucleus, and a flat biaxial dark matter halo component. In particular, we study the influence of all the involved parameters of the dynamical system by computing both the percentage of chaotic orbits and the percentages of orbits of the main regular resonant families in each case. Methods: To distinguish between ordered and chaotic motion, we apply the smaller alignment index (SALI) method to extensive samples of orbits, numerically integrating the equations of motion as well as the variational equations. Moreover, a method based on the concept of spectral dynamics, which utilizes the Fourier transform of the time series of each coordinate, is used to identify the various families of regular orbits and to recognize the secondary resonances that bifurcate from them. Two cases are studied for every parameter: (i) the case where the halo component is prolate and (ii) the case where an oblate dark halo is present. Results: Our numerical investigation indicates that all the dynamical quantities affect, more or less, the overall orbital structure. The mass of the nucleus, the halo flattening parameter, the scale length of the halo, the angular momentum, and the orbital energy are the most influential quantities, while the effect of all the other parameters is much weaker. All the parameters corresponding to the disk were found to have only a minor influence on the nature of orbits. Furthermore, some other quantities, such as the minimum distance to the origin and the horizontal and vertical forces, were tested as potential chaos detectors; our analysis revealed that only general information can be obtained from them. We also compared our results with earlier related work. Appendix A is available in electronic form at http://www.aanda.org
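A minimal sketch of the SALI chaos indicator used in the abstract, applied here to the Hénon-Heiles potential as a simple stand-in for the galactic potential (the actual model and initial conditions of the paper are not reproduced). Two deviation vectors are evolved with the variational equations alongside the orbit; SALI staying of order one indicates ordered motion, while an exponential drop toward zero flags chaos.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    """Orbit (x, y, px, py) plus two deviation vectors evolved with the variational equations."""
    x, y, px, py = s[:4]
    jac = np.array([[0.0, 0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0],
                    [-1.0 - 2.0 * y, -2.0 * x, 0.0, 0.0],
                    [-2.0 * x, -1.0 + 2.0 * y, 0.0, 0.0]])
    w1, w2 = s[4:8], s[8:12]
    return np.concatenate(([px, py, -x - 2 * x * y, -y - x**2 + y**2], jac @ w1, jac @ w2))

def sali(state0, t_max=500.0, n_out=100):
    """SALI time series for one orbit of the Henon-Heiles system."""
    w1 = np.array([1.0, 0.0, 0.0, 0.0])
    w2 = np.array([0.0, 1.0, 0.0, 0.0])
    s = np.concatenate((state0, w1, w2))
    times = np.linspace(0.0, t_max, n_out + 1)
    out = []
    for t0, t1 in zip(times[:-1], times[1:]):
        sol = solve_ivp(rhs, (t0, t1), s, rtol=1e-9, atol=1e-12)
        s = sol.y[:, -1]
        v1 = s[4:8] / np.linalg.norm(s[4:8])
        v2 = s[8:12] / np.linalg.norm(s[8:12])
        s[4:8], s[8:12] = v1, v2               # renormalize to avoid overflow
        out.append(min(np.linalg.norm(v1 + v2), np.linalg.norm(v1 - v2)))
    return np.array(out)

# Two sample initial conditions at roughly the same energy; a small final SALI flags chaos.
for ic in ([0.0, 0.1, 0.49, 0.0], [0.0, -0.25, 0.42, 0.0]):
    print(ic, sali(np.array(ic))[-1])
```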
Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure
NASA Astrophysics Data System (ADS)
Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang
2018-04-01
S-velocity and density are very important parameters for distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the inversion results when estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency under parallel computing, although it often suffers from a lack of clarity in the results. This paper describes a two-stage method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. The proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, the joint inversion is applicable to wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information further enhances the stability and accuracy of parameter estimation, especially for S-velocity and density; (3) the two-stage inversion procedure achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel, so it has high computational efficiency. On the other hand, to deal with the indistinctness of, and undesired disturbances to, the inversion results obtained from the first stage, the second stage applies total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimates with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared with the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as the P-wave and S-wave velocities, even when the seismic data are noisy, with a signal-to-noise ratio of 5.
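A minimal sketch of the second-stage idea: total-variation smoothing of a trace-by-trace inversion result, solved with split Bregman iterations. This is an assumed anisotropic-TV denoising variant with periodic boundaries, not the paper's exact formulation; `section` stands for any inverted parameter panel (e.g., S-velocity) arranged as [time sample, trace], and the toy data below are synthetic.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: closed-form minimizer of t*|d| + 0.5*(d - v)^2."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_split_bregman(f, mu=0.1, lam=0.2, n_iter=50):
    """Solve min_u 0.5*||u - f||^2 + mu*TV(u) (anisotropic, periodic boundaries)."""
    dx = lambda u: np.roll(u, -1, axis=0) - u          # forward differences
    dy = lambda u: np.roll(u, -1, axis=1) - u
    dxt = lambda u: np.roll(u, 1, axis=0) - u          # their adjoints
    dyt = lambda u: np.roll(u, 1, axis=1) - u
    n1, n2 = f.shape
    # Fourier eigenvalues of the periodic operator Dx^T Dx + Dy^T Dy.
    lap = (2 - 2 * np.cos(2 * np.pi * np.arange(n1) / n1))[:, None] \
        + (2 - 2 * np.cos(2 * np.pi * np.arange(n2) / n2))[None, :]
    u = f.copy()
    d1, d2, b1, b2 = (np.zeros_like(f) for _ in range(4))
    for _ in range(n_iter):
        rhs = f + lam * (dxt(d1 - b1) + dyt(d2 - b2))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (1.0 + lam * lap)))  # u-subproblem
        d1 = shrink(dx(u) + b1, mu / lam)                                # d-subproblems
        d2 = shrink(dy(u) + b2, mu / lam)
        b1 = b1 + dx(u) - d1                                             # Bregman updates
        b2 = b2 + dy(u) - d2
    return u

# Toy usage: a blocky "section" corrupted by noise, then TV-smoothed.
rng = np.random.default_rng(0)
model = np.zeros((64, 64)); model[20:40, 10:50] = 1.0
section = model + 0.2 * rng.standard_normal(model.shape)
smoothed = tv_split_bregman(section, mu=0.15, lam=0.3)
print(np.mean((section - model) ** 2), np.mean((smoothed - model) ** 2))
```

In the two-stage workflow described above, such a TV step would be applied across traces and time samples to the stage-one parameter estimates, which is why its cost is small relative to the trace-by-trace Zoeppritz inversion itself.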
Ultrasonic cleaning: Fundamental theory and application
NASA Technical Reports Server (NTRS)
Fuchs, F. John
1995-01-01
This presentation describes: the theory of ultrasonics, cavitation and implosion; the importance and application of ultrasonics in precision cleaning; explanations of ultrasonic cleaning equipment options and their application; process parameters for ultrasonic cleaning; and proper operation of ultrasonic cleaning equipment to achieve maximum results.
40 CFR 86.005-17 - On-board diagnostics.
Code of Federal Regulations, 2013 CFR
2013-07-01
... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...
40 CFR 86.005-17 - On-board diagnostics.
Code of Federal Regulations, 2012 CFR
2012-07-01
... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...