Sample records for lump correction algorithm

  1. Inverse dynamics of adaptive structures used as space cranes

    NASA Technical Reports Server (NTRS)

    Das, S. K.; Utku, S.; Wada, B. K.

    1990-01-01

    As a precursor to the real-time control of fast-moving adaptive structures used as space cranes, a formulation is given for the flexibility-induced motion relative to the nominal motion (i.e., the motion that assumes no flexibility) and for obtaining the open-loop time-varying driving forces. An algorithm is proposed for the computation of the relative motion and driving forces. The governing equations are given in matrix form with explicit functional dependencies. A simulator is developed to implement the algorithm on a digital computer. In the formulations, the distributed mass of the crane is lumped by two schemes, viz., 'trapezoidal' lumping and 'Simpson's rule' lumping. The effects of the mass lumping schemes are shown by simulator runs.

  2. Path lumping: An efficient algorithm to identify metastable path channels for conformational dynamics of multi-body systems

    NASA Astrophysics Data System (ADS)

    Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui

    2017-07-01

    Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.
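
The pipeline this abstract describes — define a pairwise similarity between pathways from their intercrossing flux, then spectrally cluster — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the intercrossing-flux similarity matrix has already been computed, and the function name `lump_pathways` and the plain k-means step are our own.

```python
import numpy as np

def lump_pathways(flux_similarity, n_channels):
    """Group pathways into metastable path channels by spectral
    clustering of a pairwise intercrossing-flux similarity matrix."""
    S = np.asarray(flux_similarity, dtype=float)
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Symmetrically normalized affinity; its top eigenvectors embed pathways
    L = d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, -n_channels:]                      # top-k eigenvectors
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # Farthest-point init keeps k-means centers in distinct channels
    centers = [U[0]]
    for _ in range(1, n_channels):
        dists = np.min([np.linalg.norm(U - c, axis=1) for c in centers], axis=0)
        centers.append(U[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(100):                              # plain k-means
        labels = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([U[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(n_channels)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

With a similarity matrix containing two strongly intraconnected groups of pathways, the labels recover the two channels.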

  3. An interactive method based on the live wire for segmentation of the breast in mammography images.

    PubMed

    Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu

    2014-01-01

    In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the FCM clustering algorithm are introduced into the Live Wire cost function definition. Using FCM analysis for image edge enhancement, the method suppresses interference from weak edges and yields clear segmentation of breast lumps, demonstrated by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more reliable objective basis for quantitative and qualitative analysis of breast lumps.
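
Live Wire itself reduces to a shortest-path search over a per-pixel cost map, where the cost function (built in this paper from Gabor filters and FCM edge analysis, left abstract here) is low on strong edges. A minimal sketch of that core with a hypothetical `live_wire_path` helper using Dijkstra's algorithm:

```python
import heapq

def live_wire_path(cost, start, end):
    """Minimal live-wire core: the boundary between two user-clicked seed
    points is the minimum-cost path through a per-pixel cost map (low
    cost on strong edges), found with Dijkstra's algorithm."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]     # cost of entering the neighbor
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

On a cost map with a single cheap (strong-edge) corridor, the path follows that corridor exactly.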

  4. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome the stiffness increase at high frequency. A lumped parameter model of the MR engine mount in a single degree of freedom system is further developed based on the bond graph method to predict the performance of the MR engine mount accurately. An optimization mathematical model is established to minimize the total force transmissibility over the several frequency ranges addressed. In this mathematical model, the lumped parameters are considered as design variables. The maximum force transmissibility and the corresponding frequency in the low frequency range, as well as the individual lumped parameters, are imposed as constraints. A multiple interval sensitivity analysis method is developed to select the optimized variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated. A set of real design parameters is thus obtained from the internal relationship between the optimal lumped parameters and practical design parameters for the MR engine mount. The program flowchart for the improved NSGA-II is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the several frequency ranges addressed.
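
The core of the NSGA-II referenced above is fast non-dominated sorting, which partitions candidate designs (here, sets of lumped parameters scored by transmissibility objectives) into successive Pareto fronts. A self-contained sketch of that step alone; the crowding-distance measure and genetic operators of the full algorithm are omitted:

```python
def non_dominated_sort(objectives):
    """Fast non-dominated sorting (the core of NSGA-II): partition
    candidate designs into Pareto fronts, all objectives minimized.
    Returns a list of fronts, each a list of indices; front 0 is the
    non-dominated set."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(objectives)
    dominated_by_me = [[] for _ in range(n)]  # indices each solution dominates
    dominator_count = [0] * n                 # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominated_by_me[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dominator_count[i] += 1
        if dominator_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by_me[i]:
                dominator_count[j] -= 1
                if dominator_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]  # drop the trailing empty front
```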

  5. Energy flux parametrization as an opportunity to get Urban Heat Island insights: The case of Athens, Greece (Thermopolis 2009 Campaign).

    PubMed

    Loupa, G; Rapsomanikis, S; Trepekli, A; Kourtidis, K

    2016-01-15

    Energy flux parameterization was carried out for the city of Athens, Greece, using two approaches, the Local-Scale Urban Meteorological Parameterization Scheme (LUMPS) and the Bulk Approach (BA). In situ acquired data are used to validate the algorithms of these schemes and derive coefficients applicable to the study area. Model results from these corrected algorithms are compared with literature results for coefficients applicable to other cities and their varying construction materials. Asphalt and concrete surfaces, canyons and anthropogenic heat releases were found to be the key characteristics of the city center that sustain the elevated surface and air temperatures under hot, sunny and dry weather during the Mediterranean summer. A relationship between storage heat flux plus anthropogenic energy flux and temperatures (surface and lower atmosphere) is presented that clarifies the interplay between temperatures, anthropogenic energy releases and the city characteristics under Urban Heat Island conditions.
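
For orientation, LUMPS partitions the available energy (net all-wave radiation minus the storage heat flux) into turbulent sensible and latent heat fluxes via two empirical coefficients, following the de Bruin-Holtslag formulation; the site-specific calibration of those coefficients is what the study performs. A sketch of the partitioning step (function name and argument layout are ours):

```python
def lumps_fluxes(q_star, delta_qs, alpha, beta, gamma_over_s):
    """LUMPS-style partitioning of available energy into turbulent
    sensible (QH) and latent (QE) heat fluxes, after de Bruin-Holtslag:
        QH = ((1 - alpha) + g/s) / (1 + g/s) * (Q* - dQS) - beta
        QE = (alpha / (1 + g/s)) * (Q* - dQS) + beta
    alpha and beta are the empirical, site-calibrated coefficients;
    g/s is the psychrometric 'constant' over the slope of the
    saturation vapour pressure curve."""
    available = q_star - delta_qs          # net radiation minus storage flux
    qh = ((1.0 - alpha) + gamma_over_s) / (1.0 + gamma_over_s) * available - beta
    qe = alpha / (1.0 + gamma_over_s) * available + beta
    return qh, qe
```

Note that the two fluxes close the energy balance by construction: QH + QE equals the available energy for any coefficient choice.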

  6. An algorithm for minimum-cost set-point ordering in a cryogenic wind tunnel

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1981-01-01

    An algorithm for minimum-cost ordering of set points in a cryogenic wind tunnel is developed. The procedure generates a matrix of dynamic state-transition costs, which is evaluated by means of a single-volume lumped model of the cryogenic wind tunnel and the use of some idealized minimum-cost state-transition control strategies. A branch-and-bound algorithm is employed to determine the least costly sequence of state transitions from the transition-cost matrix. Some numerical results based on data for the National Transonic Facility are presented which show a strong preference for state transitions that consume no coolant. Results also show that the choice of the terminal set point in an open ordering can produce a wide variation in total cost.
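
The branch-and-bound step described above can be sketched as a search for the cheapest open ordering over a given transition-cost matrix. This is a generic illustration, not the NTF procedure: the cost matrix is assumed precomputed, and the lower bound (sum of cheapest outgoing transitions) is one simple admissible choice.

```python
import math

def min_cost_ordering(cost, start=0):
    """Branch-and-bound search for the least costly open ordering of set
    points: visit every set point exactly once, starting from `start`,
    scored by a transition-cost matrix (cost[i][j] = cost of i -> j)."""
    n = len(cost)
    # Cheapest outgoing transition from each set point (for the lower bound)
    cheapest = [min(c for j, c in enumerate(row) if j != i)
                for i, row in enumerate(cost)]
    best = {"cost": math.inf, "order": None}

    def search(order, acc):
        if len(order) == n:
            if acc < best["cost"]:
                best["cost"], best["order"] = acc, list(order)
            return
        last = order[-1]
        remaining = [j for j in range(n) if j not in order]
        # Admissible bound: each remaining transition costs at least one of
        # the smallest cheapest-outgoing values among its possible origins.
        cand = sorted(cheapest[i] for i in [last] + remaining)
        if acc + sum(cand[: n - len(order)]) >= best["cost"]:
            return                        # prune this branch
        for j in sorted(remaining, key=lambda j: cost[last][j]):  # best-first
            search(order + [j], acc + cost[last][j])

    search([start], 0.0)
    return best["order"], best["cost"]
```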

  7. lumpR 2.0.0: an R package facilitating landscape discretisation for hillslope-based hydrological models

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2017-08-01

    The characteristics of a landscape pose essential factors for hydrological processes. Therefore, an adequate representation of the landscape of a catchment in hydrological models is vital. However, many such models exist, differing amongst other things in spatial concept and discretisation. The latter constitutes an essential pre-processing step, for which many different algorithms along with numerous software implementations exist. In that context, existing solutions are often model-specific, commercial, or dependent on commercial back-end software, and allow limited workflow automation or none at all. Consequently, a new package for the scientific software and scripting environment R, called lumpR, was developed. lumpR employs an algorithm for hillslope-based landscape discretisation directed at large-scale application via a hierarchical multi-scale approach. The package addresses existing limitations as it is free and open source, easily extendable to other hydrological models, and its workflow can be fully automated. Moreover, it is user-friendly, as direct coupling to a GIS allows for immediate visual inspection and manual adjustment. Sufficient control is furthermore retained via parameter specification and the option to include expert knowledge. Conversely, completely automatic operation also allows for extensive analysis of aspects related to landscape discretisation. In a case study, the application of the package is presented. A sensitivity analysis of the most important discretisation parameters demonstrates its efficient workflow automation. Considering multiple streamflow metrics, the employed model proved reasonably robust to the discretisation parameters.
However, parameters determining the sizes of subbasins and hillslopes proved to be more important than the others, including the number of representative hillslopes, the number of attributes employed for the lumping algorithm, and the number of sub-discretisations of the representative hillslopes.

  8. VAXELN Experimentation: Programming a Real-Time Periodic Task Dispatcher Using VAXELN Ada 1.1

    DTIC Science & Technology

    1987-11-01

    synchronization to the SQM and VAXELN semaphores. Based on real-time scheduling theory, the optimal rate-monotonic scheduling algorithm [Liu 73]...schedulability test based on the rate-monotonic algorithm, namely task-lumping [Sha 87], was necessary to calculate the theoretically expected schedulability...' Guide, Digital Equipment Corporation, Maynard, MA, 1986. [Liu 73] Liu, C.L., Layland, J.W. Scheduling Algorithms for Multi-programming in a Hard-Real-Time
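
The rate-monotonic schedulability test this report builds on traces back to the Liu and Layland utilization bound: n periodic tasks are guaranteed schedulable if total CPU utilization stays below n(2^(1/n) - 1). A sketch of that sufficient test (the task-lumping refinement from [Sha 87] is not reproduced here):

```python
def rm_schedulable(tasks):
    """Liu-Layland sufficient schedulability test for the rate-monotonic
    algorithm: tasks are (compute_time, period) pairs.  n periodic tasks
    are guaranteed schedulable if total utilization does not exceed
    n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2.0 ** (1.0 / n) - 1.0)
```

The test is sufficient but not necessary: a task set failing the bound may still be schedulable, which is where exact response-time analysis or task-lumping comes in.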

  9. Parameter interdependence and uncertainty induced by lumping in a hydrologic model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark R.; Doherty, John

    2007-05-01

    Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.
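
A concrete toy example of the paper's point that a lumped parameter need not equal the average of the hydraulic properties it nominally represents: for vertical flow through soil layers in series, the effective conductivity is a thickness-weighted harmonic mean, which can sit far below the arithmetic average. (Illustrative only; the paper's regularization analysis is far more general.)

```python
import numpy as np

def effective_series_conductivity(k_layers, thickness):
    """Lumped ('effective') hydraulic conductivity for vertical flow
    through soil layers in series: the thickness-weighted harmonic mean,
    not the arithmetic average of the layer values."""
    k = np.asarray(k_layers, dtype=float)
    d = np.asarray(thickness, dtype=float)
    return d.sum() / (d / k).sum()
```

For two equally thick layers with conductivities 1 and 100, the lumped value is about 1.98, nowhere near the arithmetic mean of 50.5.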

  10. Implementation of interconnect simulation tools in spice

    NASA Technical Reports Server (NTRS)

    Satsangi, H.; Schutt-Aine, J. E.

    1993-01-01

    Accurate computer simulation of high speed digital computer circuits and communication circuits requires a multimode approach to simulate both the devices and the interconnects between devices. Classical circuit analysis algorithms (lumped parameter) are needed for circuit devices and the network formed by the interconnected devices. The interconnects, however, have to be modeled as transmission lines which incorporate electromagnetic field analysis. An approach to writing a multimode simulator is to take an existing software package which performs either lumped parameter analysis or field analysis and add the missing type of analysis routines to the package. In this work a traditionally lumped parameter simulator, SPICE, is modified so that it will perform lossy transmission line analysis using a different model approach. Modifying SPICE3E2 or any other large software package is not a trivial task. An understanding of the programming conventions used, simulation software, and simulation algorithms is required. This thesis was written to clarify the procedure for installing a device into SPICE3E2. The installation of three devices is documented and the installations of the first two provide a foundation for installation of the lossy line which is the third device. The details of discussions are specific to SPICE, but the concepts will be helpful when performing installations into other circuit analysis packages.
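
The lumped-versus-distributed distinction drawn here is often quantified by a rule of thumb: an interconnect can be approximated by a ladder of lumped LC segments only if each segment is short compared with the shortest wavelength of interest; otherwise a true transmission-line model is needed. A sketch of that sizing calculation (the function name and the ten-segments-per-wavelength default are our choices):

```python
import math

def lumped_line_segments(l_per_m, c_per_m, length_m, f_max_hz,
                         segs_per_wavelength=10):
    """Size a lumped LC ladder approximating a lossless transmission
    line: use enough segments that each is short relative to the
    shortest wavelength of interest; otherwise a distributed
    (transmission-line) model is required."""
    velocity = 1.0 / math.sqrt(l_per_m * c_per_m)   # propagation velocity
    wavelength = velocity / f_max_hz
    n = max(1, math.ceil(segs_per_wavelength * length_m / wavelength))
    return {
        "segments": n,
        "L_per_segment": l_per_m * length_m / n,
        "C_per_segment": c_per_m * length_m / n,
        "z0": math.sqrt(l_per_m / c_per_m),         # characteristic impedance
    }
```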

  11. Stabilization and synchronization for a mechanical system via adaptive sliding mode control.

    PubMed

    Song, Zhankui; Sun, Kaibiao; Ling, Shuai

    2017-05-01

    In this paper, we investigate the synchronization problem of a chaotic centrifugal flywheel governor with parameter uncertainty and lumped disturbances. A slave centrifugal flywheel governor system is considered as an underactuated following system, for which a control input is designed to follow a master centrifugal flywheel governor system. To tackle lumped disturbances and uncertain parameters, a novel synchronization control law is developed by employing a sliding mode control strategy and the Nussbaum gain technique. Adaptation updating algorithms are derived in the sense of Lyapunov stability analysis such that the lumped disturbances can be suppressed and the adverse effect caused by uncertain parameters can be compensated. In addition, the synchronization tracking errors are proven to converge to a small neighborhood of the origin. Finally, simulation results demonstrate the effectiveness of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  12. 20 CFR 404.401 - Deduction, reduction, and nonpayment of monthly benefits or lump-sum death payments.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... work (see §§ 404.415 and 404.417); (2) Failure of certain beneficiaries receiving wife's or mother's...). (c) Adjustments. We may adjust your benefits to correct errors in payments under title II of the Act. We may also adjust your benefits if you received more than the correct amount due under titles VIII...

  13. 20 CFR 404.401 - Deduction, reduction, and nonpayment of monthly benefits or lump-sum death payments.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... work (see §§ 404.415 and 404.417); (2) Failure of certain beneficiaries receiving wife's or mother's...). (c) Adjustments. We may adjust your benefits to correct errors in payments under title II of the Act. We may also adjust your benefits if you received more than the correct amount due under titles VIII...

  14. 20 CFR 404.401 - Deduction, reduction, and nonpayment of monthly benefits or lump-sum death payments.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... work (see §§ 404.415 and 404.417); (2) Failure of certain beneficiaries receiving wife's or mother's...). (c) Adjustments. We may adjust your benefits to correct errors in payments under title II of the Act. We may also adjust your benefits if you received more than the correct amount due under titles VIII...

  15. 20 CFR 404.401 - Deduction, reduction, and nonpayment of monthly benefits or lump-sum death payments.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... work (see §§ 404.415 and 404.417); (2) Failure of certain beneficiaries receiving wife's or mother's...). (c) Adjustments. We may adjust your benefits to correct errors in payments under title II of the Act. We may also adjust your benefits if you received more than the correct amount due under titles VIII...

  16. Text image authenticating algorithm based on MD5-hash function and Henon map

    NASA Astrophysics Data System (ADS)

    Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue

    2017-07-01

    In order to cater to the evidentiary requirements of text images, this paper proposes a fragile watermarking algorithm based on a Hash function and the Henon map. The algorithm divides a text image into blocks, identifies the flippable and non-flippable pixels of every lump according to PSD, generates a watermark from the non-flippable pixels with MD5-Hash, encrypts the watermark with the Henon map and selects the embedded blocks. The simulation results show that the algorithm, with good tampering-localization ability, can be used to authenticate and verify the authenticity and integrity of text images.
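
The watermark-generation and encryption steps can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the block partitioning and PSD-based pixel classification are assumed done elsewhere, and the keystream extraction rule (sign of the Henon x iterate) is our own simplification.

```python
import hashlib

def henon_keystream(length, x=0.1, y=0.3, a=1.4, b=0.3):
    """Bit keystream from the chaotic Henon map (classic a=1.4, b=0.3);
    the sign of each x iterate gives one key bit."""
    bits = []
    for _ in range(length):
        x, y = 1.0 - a * x * x + y, b * x
        bits.append(1 if x > 0 else 0)
    return bits

def watermark_block(nonflippable_pixels, key_x=0.1, key_y=0.3):
    """Watermark for one block: MD5 digest of the non-flippable pixel
    bytes (128 bits), XOR-encrypted with a Henon-map keystream.  The
    initial conditions (key_x, key_y) act as the secret key."""
    digest = hashlib.md5(bytes(nonflippable_pixels)).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    keystream = henon_keystream(len(bits), key_x, key_y)
    return [b ^ k for b, k in zip(bits, keystream)]
```

Verification repeats the same steps: XOR-ing the embedded watermark with the regenerated keystream must reproduce the MD5 bits of the current block, so any tampered block fails locally.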

  17. A Subgeneric Classification of the Genus Uranotaenia Lynch Arribalzaga, with a Historical Review and Notes on Other Categories

    DTIC Science & Technology

    1972-01-01

    three species of Pseudoficalbia from New Guinea. While he was correct in his assignment of species, the characters, though they will separate a...and African material, I have made no attempt to correct these errors, except in the Southeast Asian fauna. In a few cases, I have brought them to...current practice of lumping everything into one supposedly homogeneous genus.” While the statement may ultimately prove correct, I prefer to consider at

  18. The physiological kinetics of nitrogen and the prevention of decompression sickness.

    PubMed

    Doolette, D J; Mitchell, S J

    2001-01-01

    Decompression sickness (DCS) is a potentially crippling disease caused by intracorporeal bubble formation during or after decompression from a compressed gas underwater dive. Bubbles most commonly evolve from dissolved inert gas accumulated during the exposure to increased ambient pressure. Most diving is performed breathing air, and the inert gas of interest is nitrogen. Divers use algorithms based on nitrogen kinetic models to plan the duration and degree of exposure to increased ambient pressure and to control their ascent rate. However, even correct execution of dives planned using such algorithms often results in bubble formation and may result in DCS. This reflects the importance of idiosyncratic host factors that are difficult to model, and deficiencies in current nitrogen kinetic models. Models describing the exchange of nitrogen between tissues and blood may be based on distributed capillary units or lumped compartments, either of which may be perfusion- or diffusion-limited. However, such simplistic models are usually poor predictors of experimental nitrogen kinetics at the organ or tissue level, probably because they fail to account for factors such as heterogeneity in both tissue composition and blood perfusion and non-capillary exchange mechanisms. The modelling of safe decompression procedures is further complicated by incomplete understanding of the processes that determine bubble formation. Moreover, any formation of bubbles during decompression alters subsequent nitrogen kinetics. Although these factors mandate complex resolutions to account for the interaction between dissolved nitrogen kinetics and bubble formation and growth, most decompression schedules are based on relatively simple perfusion-limited lumped compartment models of blood: tissue nitrogen exchange. Not surprisingly, all models inevitably require empirical adjustment based on outcomes in the field. 
Improvements in the predictive power of decompression calculations are being achieved using probabilistic bubble models, but divers will always be subject to the possibility of developing DCS despite adherence to prescribed limits.
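
The "perfusion-limited lumped compartment" models discussed above have a very simple form: each tissue compartment's nitrogen tension relaxes exponentially toward ambient with a characteristic half-time. A sketch (Haldane-style; the pressures and half-time used below are illustrative):

```python
import math

def tissue_nitrogen(p_tissue, p_ambient, half_time_min, minutes):
    """Perfusion-limited lumped-compartment model: tissue nitrogen
    tension relaxes exponentially toward the ambient (inspired) tension
    with a compartment-specific half-time."""
    k = math.log(2.0) / half_time_min          # rate constant from half-time
    return p_ambient + (p_tissue - p_ambient) * math.exp(-k * minutes)
```

After exactly one half-time, the tension is halfway between its starting value and ambient; given long enough, it saturates at ambient — the simplicity the abstract argues is a poor match for real organ-level kinetics.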

  19. Distributed ESO based cooperative tracking control for high-order nonlinear multiagent systems with lumped disturbance and application in multi flight simulators systems.

    PubMed

    Cong, Zhang

    2018-03-01

    Based on an extended state observer, a novel and practical design method is developed to solve the distributed cooperative tracking problem of higher-order nonlinear multiagent systems with lumped disturbance in a fixed communication topology directed graph. The proposed method is designed to guarantee that all the follower nodes ultimately and uniformly converge to the leader node with bounded residual errors. The leader node, modeled as a higher-order non-autonomous nonlinear system, acts as a command generator giving commands only to a small portion of the networked follower nodes. An extended state observer is used to estimate the local states and lumped disturbance of each follower node. Moreover, each distributed controller can work independently, requiring only the relative states and/or the estimated relative states information between itself and its neighbors. Finally, an engineering application of multi flight simulators systems is demonstrated to test and verify the effectiveness of the proposed algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Frequency method for determining the parameters of the electromagnetic brakes and slip-type couplings with solid magnetic circuits

    NASA Technical Reports Server (NTRS)

    Guseynov, F. G.; Abbasova, E. M.

    1977-01-01

    The equivalent representation of brakes and coupling by lumped circuits is investigated. Analytical equations are derived for relating the indices of the transients to the parameters of the equivalent circuits for arbitrary rotor speed. A computer algorithm is given for the calculations.

  1. Modelling rogue waves through exact dynamical lump soliton controlled by ocean currents.

    PubMed

    Kundu, Anjan; Mukherjee, Abhik; Naskar, Tapan

    2014-04-08

    Rogue waves are extraordinarily high and steep isolated waves, which appear suddenly in a calm sea and disappear equally fast. However, though rogue waves are localized surface waves, their theoretical models and experimental observations are available mostly in one dimension, with the majority of them admitting only limited and fixed amplitude and modular inclination of the wave. We propose a two-dimensional, exactly solvable nonlinear Schrödinger (NLS) equation derivable from the basic hydrodynamic equations and endowed with integrable structures. The proposed two-dimensional equation exhibits modulation instability and frequency correction induced by the nonlinear effect, with a directional preference, all of which can be determined through precise analytic results. The two-dimensional NLS equation also allows an exact lump soliton which can model a full-grown surface rogue wave with adjustable height and modular inclination. The lump soliton under the influence of an ocean current appears and disappears preceded by a hole state, with its dynamics controlled by the current term. These desirable properties make our exact model promising for describing ocean rogue waves.

  2. Modelling rogue waves through exact dynamical lump soliton controlled by ocean currents

    PubMed Central

    Kundu, Anjan; Mukherjee, Abhik; Naskar, Tapan

    2014-01-01

    Rogue waves are extraordinarily high and steep isolated waves, which appear suddenly in a calm sea and disappear equally fast. However, though rogue waves are localized surface waves, their theoretical models and experimental observations are available mostly in one dimension, with the majority of them admitting only limited and fixed amplitude and modular inclination of the wave. We propose a two-dimensional, exactly solvable nonlinear Schrödinger (NLS) equation derivable from the basic hydrodynamic equations and endowed with integrable structures. The proposed two-dimensional equation exhibits modulation instability and frequency correction induced by the nonlinear effect, with a directional preference, all of which can be determined through precise analytic results. The two-dimensional NLS equation also allows an exact lump soliton which can model a full-grown surface rogue wave with adjustable height and modular inclination. The lump soliton under the influence of an ocean current appears and disappears preceded by a hole state, with its dynamics controlled by the current term. These desirable properties make our exact model promising for describing ocean rogue waves. PMID:24711719

  3. A Preliminary Comparison of the Reforms at Beijing University and Zhongshan University

    ERIC Educational Resources Information Center

    Yang, Gan

    2005-01-01

    By writing this article, I wish first to correct a current consensual misrepresentation, namely the fact that many media outlets are frequently lumping all criticisms of the Beida reform plan under the title "opposition to the reform." This is a grave misrepresentation. The title of the interviews with Zhang Weiying, a person in charge…

  4. Quantifying watershed surface depression storage: determination and application in a hydrologic model

    Treesearch

    Joseph K. O. Amoah; Devendra M. Amatya; Soronnadi Nnaji

    2012-01-01

    Hydrologic models often require correct estimates of surface macro-depressional storage to accurately simulate rainfall–runoff processes. Traditionally, depression storage is determined through model calibration, lumped with soil storage components, or estimated on an ad hoc basis. This paper investigates a holistic approach for estimating surface depressional storage capacity...

  5. Improving operational flood ensemble prediction by the assimilation of satellite soil moisture: comparison between lumped and semi-distributed schemes

    USDA-ARS?s Scientific Manuscript database

    Assimilation of remotely sensed soil moisture data (SM-DA) to correct soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. In the case of large and sparsely monitored catchments, SM-DA is a particularly attractive tool.Within this context, we assimilate act...

  6. Magnetostatic focal spot correction for x-ray tubes operating in strong magnetic fields using iterative optimization

    PubMed Central

    Lillaney, Prasheel; Shin, Mihye; Conolly, Steven M.; Fahrig, Rebecca

    2012-01-01

    Purpose: Combining x-ray fluoroscopy and MR imaging systems for guidance of interventional procedures has become more commonplace. By designing an x-ray tube that is immune to the magnetic fields outside of the MR bore, the two systems can be placed in close proximity to each other. A major obstacle to robust x-ray tube design is correcting for the effects of the magnetic fields on the x-ray tube focal spot. A potential solution is to design active shielding that locally cancels the magnetic fields near the focal spot. Methods: An iterative optimization algorithm is implemented to design resistive active shielding coils that will be placed outside the x-ray tube insert. The optimization procedure attempts to minimize the power consumption of the shielding coils while satisfying magnetic field homogeneity constraints. The algorithm is composed of a linear programming step and a nonlinear programming step that are interleaved with each other. The coil results are verified using a finite element space charge simulation of the electron beam inside the x-ray tube. To alleviate heating concerns, an optimized coil solution is derived that includes a neodymium permanent magnet. Any demagnetization of the permanent magnet is calculated prior to solving for the optimized coils. The temperature dynamics of the coil solutions are calculated using a lumped parameter model, which is used to estimate operation times of the coils before temperature failure. Results: For a magnetic field strength of 88 mT, the algorithm solves for coils that carry a current density of 588 A/cm2. This specific coil geometry can operate for 15 min continuously before reaching temperature failure. By including a neodymium magnet in the design, the current density drops to 337 A/cm2, which increases the operation time to 59 min. 
Space charge simulations verify that the coil designs are effective, but for oblique x-ray tube geometries there is still distortion of the focal spot shape along with deflections of approximately 3 mm in the radial and circumferential directions on the anode. Conclusions: Active shielding is an attractive solution for correcting the effects of magnetic fields on the x-ray focal spot. If extremely long fluoroscopic exposure times are required, longer operation times can be achieved by including a permanent magnet with the active shielding design. PMID:22957623
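
The lumped-parameter temperature model used to estimate coil operation times can be sketched as a single thermal mass with one resistance to ambient. The parameter names and values below are illustrative, not the paper's:

```python
def time_to_failure(power_w, r_th, c_th, t_ambient, t_max, dt=1.0, t_limit=1e6):
    """Lumped-parameter (single thermal mass) coil heating model:
        C_th * dT/dt = P - (T - T_amb) / R_th
    Returns the seconds of continuous operation before the coil reaches
    t_max, or None if the steady-state temperature stays below the limit."""
    if t_ambient + power_w * r_th <= t_max:
        return None                      # never overheats at this drive level
    t, temp = 0.0, t_ambient
    while temp < t_max and t < t_limit:  # forward-Euler integration
        temp += dt * (power_w - (temp - t_ambient) / r_th) / c_th
        t += dt
    return t
```

With P = 10 W, R_th = 5 K/W, C_th = 2 J/K and a 45 °C limit from 20 °C ambient, the analytic crossing time is 10·ln 2 ≈ 6.93 s, which the integration reproduces.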

  7. 76 FR 55213 - Technical Amendments to Federal Employees' Retirement System; Present Value Conversion Factors...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-07

    ...) revising the factor at 5 CFR 843.309(b)(2) used to convert a lump sum basic employee death benefit under 5... would apply to deaths occurring on or after October 1, 2004. The revised factor, however, applies to deaths occurring on or after October 1, 2011. Therefore, this document corrects the final regulation by...

  8. Network Speech Systems Technology Program.

    DTIC Science & Technology

    1980-09-30

    ognized that the lumped-speaker approximation could be extended even more generally to include cases of combined circuit-switched speech and packet...based on these tables. The first function is an important element of the more general task of system control for a switched network, which in...programs are in preparation, as described below, for both steady-state evaluation and dynamic performance simulation of the algorithm in general

  9. The Shock and Vibration Bulletin. Part 2. Structural Analysis, Design Techniques

    DTIC Science & Technology

    1973-06-01

    FLOATING SHOCK PLATFORM SUBJECTED TO UNDERWATER EXPLOSIONS R. P. Brooks and B. C. McNalght, Naval Air Engineering Center, Philadelphia, Pa. A lumped...Lohwasser, Air Force Flight Dynamics Laboratory, Wright-Patterson AFB, Ohio AN ALGORITHM FOR SEMI-INVERSE ANALYSIS OF NONLINEAR DYNAMIC SYSTEMS ... 65 R...MATHEMATICAL MODEL OF A TYPICAL FLOATING SHOCK PLATFORM SUBJECTED TO UNDERWATER EXPLOSIONS ... 143 R. P. Brooks and B. C

  10. Malpractice issues in radiology: medicare compliance versus standard of care conformance--real or imaginary conflict?

    PubMed

    Duszak, Richard; Berlin, Leonard

    2010-06-01

    Plaintiff's Attorney (Pl Att): Doctor, the record shows that the patient was referred to the hospital's radiology department by her gynecologist for a screening mammogram. The record also shows that when completing the mammography information form, the patient wrote that she had a lump in her left breast, correct? Defendant Radiologist (Df Rad): Yes. Pl Att: But your technologist performed, and you interpreted, a screening mammogram. Doesn't the radiology standard of care require you to do a diagnostic mammogram when the patient has a breast lump? Df Rad: Well, normally yes, but if it's going to be a diagnostic mammogram, then the referring physician has to order it. In this case our tech called the gynecologist and asked him whether he wanted to order a diagnostic study, and he said no, he didn't feel the lump, and that we should only do a plain screening mammogram. Pl Att: Please explain something. You're agreeing that a woman with a breast lump should have a diagnostic mammogram, but you are saying that you didn't do one because the patient's physician wouldn't order it? Don't you have a duty to do the diagnostic mammogram in a case like this on your own, without having to ask permission from the patient's gynecologist? Df Rad: Only the treating physician can change a screening mammogram into a diagnostic mammogram, and I am not the treating physician. If I went ahead and did a diagnostic mammography examination on my own, it would be Medicare fraud, and our hospital's compliance officer says it could result in our hospital being fined and thrown out of the Medicare program. Pl Att: What prevents you then from recommending-not ordering, but just recommending-a diagnostic mammogram in your report, because the patient says she's got a lump? Df Rad: Well, according to our hospital's compliance officer, that would also be fraud.

  11. Lump, periodic lump and interaction lump stripe solutions to the (2+1)-dimensional B-type Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Wu, Pinxia; Zhang, Yufeng; Muhammad, Iqbal; Yin, Qiqi

    2018-03-01

    In this paper, the Hirota bilinear form is employed to investigate the lump, periodic lump and interaction lump-stripe solutions of the (2+1)-dimensional B-type Kadomtsev-Petviashvili (BKP) equation. Many results are illustrated by the dynamic processes shown in figures. We analyze the propagation direction and horizontal velocity of the lump solutions to find constraint conditions that ensure positiveness and localization. As the periodic lump solutions travel, their energy distribution turns out to be asymmetrical. The non-elastic interaction lump-stripe solutions indicate that the lump solitons are dropped and swallowed by the stripe soliton.

  12. Geometry Correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    To address the shortcomings of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solving steps are presented in order to realize geometry correction for UAV remote sensing. Correction accuracy and operational efficiency are improved by optimizing the structure and the connection weights of the RBF neural network with the AGA and the LMS algorithm, respectively. Finally, experiments show that the AGA-RBF algorithm offers high correction accuracy, a fast running rate and strong generalization ability.
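
    The RBF mapping at the core of such a correction can be illustrated with a plain interpolation fit on ground-control points. The sketch below is a minimal Gaussian-RBF interpolant in NumPy; it is illustrative only, and the AGA/LMS optimization of the network structure and weights described in the abstract is omitted (the function name and `sigma` parameter are assumptions, not the authors' code).

```python
import numpy as np

def rbf_correct(ctrl_img, ctrl_ground, query, sigma=1.0):
    """Map distorted image coordinates to ground coordinates with a
    Gaussian RBF interpolant fitted on ground-control point pairs.
    ctrl_img, ctrl_ground: (n, 2) arrays; query: (m, 2) array."""
    # pairwise distances between control points -> RBF design matrix
    d = np.linalg.norm(ctrl_img[:, None, :] - ctrl_img[None, :, :], axis=2)
    Phi = np.exp(-(d / sigma) ** 2)
    # solve for one weight vector per output coordinate (x and y)
    W = np.linalg.solve(Phi, ctrl_ground)
    dq = np.linalg.norm(query[:, None, :] - ctrl_img[None, :, :], axis=2)
    return np.exp(-(dq / sigma) ** 2) @ W
```

    At the control points the interpolant reproduces the ground coordinates exactly; in a real pipeline the centers, widths and weights would additionally be tuned, e.g. by the AGA/LMS combination the authors use.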

  13. Hybrid Correlation Algorithms. A Bridge Between Feature Matching and Image Correlation,

    DTIC Science & Technology

    1979-11-01

    spatially into groups of pixels. The intensity level preprocessing is designed to compensate for any biases or gain changes in the system; whereas...number of error sources that affect the performance of the system. It would be desirable to lump these errors into generic categories in discussing...system performance rather than treating each error source separately. Such a generic categorization should possess the following properties: 1. The

  14. Rational Solutions and Lump Solutions of the Potential YTSF Equation

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Qian; Chen, Ai-Hua

    2017-07-01

    By using the bilinear form, rational solutions and lump solutions of the potential Yu-Toda-Sasa-Fukuyama (YTSF) equation are derived. Dynamics of the fundamental lump solution, n1-order lump solutions, and N-lump solutions are studied for some special cases. We also find some interaction behaviours between solitary waves and one lump of the rational solutions.

  15. On the performance of explicit and implicit algorithms for transient thermal analysis

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.

    1980-09-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected, and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped-parameter program and a special-purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail, such as avoiding thin or short high-conducting elements, can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
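
    The stability trade-off the abstract describes can be reproduced on a toy problem. The sketch below is illustrative (it is not the SPAR, MITAS or GEAR codes): it advances a 1D conduction rod by forward (explicit) and backward (implicit) Euler. With a mesh ratio r = alpha*dt/dx^2 above the explicit stability limit of 0.5, forward Euler diverges while backward Euler stays bounded.

```python
import numpy as np

def heat_step_explicit(T, r):
    """Forward-Euler update of the interior nodes of a 1D rod (fixed ends)."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

def heat_step_implicit(T, r):
    """Backward-Euler update: solve (I - r*L) T_new = T_old on interior nodes."""
    n = len(T) - 2
    A = (1.0 + 2.0 * r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
    rhs = T[1:-1].copy()
    rhs[0] += r * T[0]          # boundary contributions (fixed-end values)
    rhs[-1] += r * T[-1]
    Tn = T.copy()
    Tn[1:-1] = np.linalg.solve(A, rhs)
    return Tn
```

    Backward Euler pays a linear solve per step but, as the abstract notes for stiff problems, permits time steps far beyond the explicit limit.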

  16. Model reduction for experimental thermal characterization of a holding furnace

    NASA Astrophysics Data System (ADS)

    Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane

    2017-09-01

    Vacuum holding induction furnaces are used in the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor in the manufacturing of these parts. The definition of the structure of a reduced heat transfer model, with experimental identification through an estimation of its parameters, is required here. Internal sensor outputs, together with this model, can be used for assessing the thermal state of the furnace through an inverse approach, for better control. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. The internal induction heat source as well as the transient radiative transfer inside the furnace are calculated through this detailed model. A reduced lumped-body model has been constructed to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been carried out with a Levenberg-Marquardt least squares minimization algorithm, using two synthetic temperature signals, with a further validation test.
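
    The parameter-estimation step can be sketched with a hand-rolled Levenberg-Marquardt loop on a toy lumped model. Everything below is illustrative, not the paper's furnace model: a single-node cooling law stands in for the reduced model, and synthetic "measurements" replace the two temperature signals mentioned above.

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, n_iter=60, lam=1e-3):
    """Minimal Levenberg-Marquardt least-squares loop (illustrative sketch)."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = residual(p)
        J = jac(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept: move toward Gauss-Newton
        else:
            lam *= 10.0                     # reject: damp toward gradient descent
    return p

# Toy lumped body: T(t) = T_amb + (T0 - T_amb) * exp(-t / tau), with T0 known
t = np.linspace(0.0, 10.0, 50)
T0 = 900.0

def model(p):
    T_amb, tau = p
    return T_amb + (T0 - T_amb) * np.exp(-t / tau)

data = 300.0 + (T0 - 300.0) * np.exp(-t / 2.5)   # synthetic measurements

def jac(p):
    T_amb, tau = p
    e = np.exp(-t / tau)
    # columns: d(model)/d(T_amb), d(model)/d(tau)
    return np.column_stack([1.0 - e, (T0 - T_amb) * e * t / tau ** 2])

p_est = levenberg_marquardt(lambda p: model(p) - data, jac, [400.0, 1.0])
```

    The damping parameter interpolates between Gauss-Newton and gradient descent, which is what makes the method robust for the kind of nonlinear thermal identification described above.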

  17. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  18. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. The non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. As the non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. However, the poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Meanwhile, due to their complicated calculation processes and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. Its hardware implementation, based solely on an FPGA, has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe non-uniformity and ripple non-uniformity.
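
    The temporal high-pass principle behind such algorithms fits in a few lines. The sketch below shows only the generic filter (the paper's grayscale-mapping refinement and FPGA pipeline are not reproduced, and the function name is an assumption): each pixel's slowly varying component, i.e. the fixed-pattern offset, is tracked with an exponential low-pass and subtracted.

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05):
    """Generic temporal high-pass NUC: a per-pixel exponential low-pass
    estimate of the fixed pattern is subtracted from each frame."""
    lowpass = np.zeros_like(frames[0], dtype=float)
    corrected = []
    for f in frames:
        lowpass = (1.0 - alpha) * lowpass + alpha * f   # per-pixel running mean
        corrected.append(f - lowpass)                    # high-pass output
    return np.asarray(corrected)
```

    On a static scene the filter converges geometrically, which also illustrates the over-correction defect mentioned above: the scene itself is eventually washed out along with the fixed pattern, hence the robustness measures the authors add.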

  19. 20 CFR 234.12 - 1937 Act lump-sum death payment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false 1937 Act lump-sum death payment. 234.12 Section 234.12 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.12 1937 Act lump-sum death payment. (a) The 1937 Act...

  20. 20 CFR 234.12 - 1937 Act lump-sum death payment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false 1937 Act lump-sum death payment. 234.12 Section 234.12 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.12 1937 Act lump-sum death payment. (a) The 1937 Act...

  1. 20 CFR 234.12 - 1937 Act lump-sum death payment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false 1937 Act lump-sum death payment. 234.12 Section 234.12 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.12 1937 Act lump-sum death payment. (a) The 1937 Act...

  2. 20 CFR 234.12 - 1937 Act lump-sum death payment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true 1937 Act lump-sum death payment. 234.12 Section 234.12 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.12 1937 Act lump-sum death payment. (a) The 1937 Act...

  3. 20 CFR 234.12 - 1937 Act lump-sum death payment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true 1937 Act lump-sum death payment. 234.12 Section 234.12 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.12 1937 Act lump-sum death payment. (a) The 1937 Act...

  4. A new unequal-weighted triple-frequency first order ionosphere correction algorithm and its application in COMPASS

    NASA Astrophysics Data System (ADS)

    Liu, WenXiang; Mou, WeiHua; Wang, FeiXue

    2012-03-01

    With the introduction of triple-frequency signals in GNSS, multi-frequency ionosphere correction technology has been developing rapidly. References indicate that the triple-frequency second-order ionosphere correction is worse than the dual-frequency first-order ionosphere correction because of its larger noise amplification factor. On the assumption that the variances of the three pseudoranges are equal, other references presented a triple-frequency first-order ionosphere correction, which proved worse or better than the dual-frequency first-order correction in different situations. In practice, the PN code rate, carrier-to-noise ratio, DLL parameters and multipath effect of each frequency are not the same, so the three pseudorange variances are unequal. Under this consideration, a new unequal-weighted triple-frequency first-order ionosphere correction algorithm, which minimizes the variance of the ionosphere-free pseudorange combination, is proposed in this paper. It is found that the conventional dual-frequency first-order correction algorithms and the equal-weighted triple-frequency first-order correction algorithm are special cases of the new algorithm. A new pseudorange variance estimation method based on the three-carrier combination is also introduced. Theoretical analysis shows that the new algorithm is optimal. An experiment with COMPASS G3 satellite observations demonstrates that the ionosphere-free pseudorange combination variance of the new algorithm is smaller than that of traditional multi-frequency correction algorithms.
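
    The optimization described here is a small equality-constrained quadratic program: choose pseudorange weights that preserve geometry (weights sum to one), cancel the first-order ionosphere (whose delay scales as 1/f^2), and minimize the combined variance. A hedged NumPy sketch of that KKT system follows; the function name and numerical details are illustrative, not the authors' implementation.

```python
import numpy as np

def iono_free_weights(freqs, sigmas):
    """Min-variance first-order ionosphere-free pseudorange combination.
    Constraints: sum(w) = 1 (geometry preserved) and sum(w / f^2) = 0
    (first-order ionospheric delay, proportional to 1/f^2, cancels).
    Objective: minimize sum(w_i^2 * sigma_i^2), solved via the KKT system."""
    f = np.asarray(freqs, float)
    f = f / f.max()                    # rescale for conditioning (constraints invariant)
    s2 = np.asarray(sigmas, float) ** 2
    n = len(f)
    C = np.vstack([np.ones(n), 1.0 / f ** 2])          # constraint matrix
    K = np.block([[2.0 * np.diag(s2), C.T],
                  [C, np.zeros((2, 2))]])              # KKT matrix
    rhs = np.concatenate([np.zeros(n), [1.0, 0.0]])
    return np.linalg.solve(K, rhs)[:n]
```

    With only two frequencies the two constraints determine the weights uniquely, so the sketch reduces to the classical dual-frequency ionosphere-free combination regardless of the assumed variances, consistent with the abstract's remark that the dual-frequency correction is a special case.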

  5. Lump solutions and interaction phenomenon to the third-order nonlinear evolution equation

    NASA Astrophysics Data System (ADS)

    Kofane, T. C.; Fokou, M.; Mohamadou, A.; Yomba, E.

    2017-11-01

    In this work, the lump solution and the kink solitary wave solution of the (2 + 1)-dimensional third-order evolution equation are obtained using the Hirota bilinear method, through symbolic computation with Maple. We have assumed that the lump solution is centered at the origin when t = 0. By considering a mixture of a positive quadratic function with an exponential function, as well as a mixture of a positive quadratic function with a hyperbolic cosine function, interaction solutions like lump-exponential and lump-hyperbolic-cosine are presented. A completely non-elastic interaction between a lump and a kink soliton is observed, showing that a lump solution can be swallowed by a kink soliton.

  6. Armpit lump

    MedlinePlus

    ... You will be asked questions about your medical history and symptoms, such as: When did you first notice the lump? Has the lump changed? Are you breastfeeding? Is there anything that makes the lump worse? ...

  7. Devolatilization Characteristics and Kinetic Analysis of Lump Coal from China COREX3000 Under High Temperature

    NASA Astrophysics Data System (ADS)

    Xu, Runsheng; Zhang, Jianliang; Wang, Guangwei; Zuo, Haibin; Liu, Zhengjian; Jiao, Kexin; Liu, Yanxiang; Li, Kejiang

    2016-08-01

    A devolatilization study of two lump coals used in China COREX3000 was carried out in a self-developed thermo-gravimetric apparatus at four temperature conditions [1173 K, 1273 K, 1373 K, and 1473 K (900 °C, 1000 °C, 1100 °C, and 1200 °C)] under N2. This study reveals that the working temperature has a strong impact on the devolatilization rate of the lump coal: the reaction rate increases with increasing temperature. However, the temperature has little influence on the maximum mass loss. The conversion rate curve shows that the reaction rate of HY lump coal is higher than that of KG lump coal. The lump coals were analyzed by XRD, FTIR, and optical microscopy to explore the correlation between devolatilization rate and properties of lump coal. The results show that the higher reaction rate of HY lump coal is attributed to its more active maceral components, lower aromaticity and crystallite orientation degree, and more oxygenated functional groups. The random nucleation and nuclei growth model (RNGM), volume model (VM), and unreacted shrinking core model (URCM) were employed to describe the reaction behavior of lump coal. It was concluded from the kinetics analysis that the RNGM was the best model for describing the devolatilization of lump coals. The apparent activation energies of isothermal devolatilization of HY lump coal and KG lump coal are 42.35 and 45.83 kJ/mol, respectively. This study has implications for characterizing and modeling the mechanism of devolatilization of lump coal in the COREX gasifier.

  8. Atmospheric Correction Algorithm for Hyperspectral Remote Sensing of Ocean Color from Space

    DTIC Science & Technology

    2000-02-20

    Existing atmospheric correction algorithms for multichannel remote sensing of ocean color from space were designed for retrieving water-leaving...atmospheric correction algorithm for hyperspectral remote sensing of ocean color with the near-future Coastal Ocean Imaging Spectrometer. The algorithm uses

  9. Generation of future potential scenarios in an Alpine Catchment by applying bias-correction techniques, delta-change approaches and stochastic Weather Generators at different spatial scale. Analysis of their influence on basic and drought statistics.

    NASA Astrophysics Data System (ADS)

    Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio

    2017-04-01

    Assessing the impacts of potential future climate change scenarios on precipitation and temperature is essential for designing adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods to generate potential future scenarios in an Alpine catchment from historical data and the available climate model simulations performed in the frame of the EU CORDEX project. The initial information employed to define these downscaling approaches is the historical climatic data (taken from the Spain02 project for the period 1971-2000, with a spatial resolution of 12.5 km) and the future series provided by climate models for the horizon period 2071-2100. We have used information coming from nine climate model simulations (obtained from five different regional climate models (RCMs) nested within four different global climate models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first moment correction, first and second moment correction, regression functions, quantile mapping using distribution-derived transformation and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series are proposed to obtain more representative potential future climate scenarios to be employed in studying potential impacts.
    In this work we propose a non-equally weighted combination of the future series, giving more weight to those coming from models (in the delta-change approaches) or combinations of models and techniques that provide a better approximation to the basic and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit in reproducing the historical series. The drought statistics have been obtained from the Standardized Precipitation Index (SPI) series using the theory of runs. This analysis allows us to discriminate the best RCM and, for the bias-correction method, the best combination of model and correction technique. We have also analyzed the possibilities of using different stochastic weather generators to approximate the basic and drought statistics of the historical series. These analyses have been performed in our case study in a lumped and in a distributed way in order to assess their sensitivity to the spatial scale. The statistics of the future temperature series obtained with different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79 %, 31.79 %, 31.03 % and 31.74 % for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles respectively, and in the precipitation they are -25.48 %, -28.49 %, -26.42 % and -27.35 % respectively. Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02 and CORDEX projects for the data provided for this study, and the R package qmap.
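
    One of the transformation techniques listed above, quantile mapping with empirical quantiles, is short enough to sketch. The code below is illustrative only (the function name and the 101-point quantile grid are assumptions, not the authors' implementation):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping bias correction: each future model value
    is placed on the historical model distribution and replaced by the
    observed value at the same quantile."""
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, q)   # historical model quantiles
    obs_q = np.quantile(obs_hist, q)       # historical observed quantiles
    # model value -> quantile (via model_q) -> corrected value (via obs_q)
    return np.interp(model_fut, model_q, obs_q)
```

    For a model with a constant bias the mapping simply removes the bias; with distribution-dependent biases it corrects each quantile separately, which is why it can also improve the drought statistics derived from the corrected series.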

  10. Mixed lump-kink and rogue wave-kink solutions for a (3 + 1) -dimensional B-type Kadomtsev-Petviashvili equation in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Hu, Cong-Cong; Tian, Bo; Wu, Xiao-Yu; Yuan, Yu-Qiang; Du, Zhong

    2018-02-01

    Under investigation is a (3 + 1)-dimensional B-type Kadomtsev-Petviashvili equation, which describes the weakly dispersive waves in a fluid. Via the Hirota method and symbolic computation, we obtain the mixed lump-kink and mixed rogue wave-kink solutions. Through the mixed lump-kink solutions, we observe three different phenomena between a lump and one kink. In the fusion phenomenon, a lump and a kink merge, with the lump's energy transferring into the kink gradually until the lump merges into the kink completely. The fission phenomenon shows a lump separating from a kink. The last phenomenon shows a lump traveling together with a kink with their amplitudes unchanged. In addition, we graphically study the interaction between a rogue wave and a pair of kinks. It can be observed that the rogue wave arises from one kink and disappears into the other. At a certain time, the amplitude of the rogue wave reaches its maximum.

  11. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography (PET) is a modern nuclear medicine method used to examine metabolism and the function of internal organs. The method allows diseases to be diagnosed at their early stages. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter correction algorithms is considered, as well as an algorithm for modeling PET projection data acquisition used to verify the corrections.

  12. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, with the entropy defined on the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra, and the results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME (Automated phase Correction based on Minimization of Entropy).
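
    The core of the entropy-minimization idea can be sketched in a few lines: phase the complex spectrum over a grid of candidate corrections, score each candidate by the Shannon entropy of its normalized first derivative plus a penalty on negative intensities, and keep the minimizer. The sketch below is an illustrative zero-order-only grid search, not the ACME optimizer itself; the penalty weight and grid size are assumptions.

```python
import numpy as np

def acme_objective(real_spec, gamma=1000.0):
    """Shannon-type entropy of the normalized absolute first derivative of
    the real spectrum, plus a penalty on negative intensities (as in
    entropy-minimization phasing; gamma is an illustrative choice)."""
    h = np.abs(np.diff(real_spec))
    p = h / h.sum()
    p = p[p > 0]
    ent = -np.sum(p * np.log(p))
    neg = real_spec[real_spec < 0.0]
    return ent + gamma * np.sum(neg ** 2)

def auto_phase0(spec, n_grid=720):
    """Zero-order phase correction by brute-force objective minimization."""
    phis = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    scores = [acme_objective((spec * np.exp(1j * phi)).real) for phi in phis]
    return phis[int(np.argmin(scores))]
```

    ACME itself optimizes zero- and first-order terms jointly with a proper optimizer rather than a grid, but the objective has the same structure.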

  13. Multi-agent systems design for aerospace applications

    NASA Astrophysics Data System (ADS)

    Waslander, Steven L.

    2007-12-01

    Engineering systems with independent decision makers are becoming increasingly prevalent and present many challenges in coordinating actions to achieve systems goals. In particular, this work investigates the applications of air traffic flow control and autonomous vehicles as motivation to define algorithms that allow agents to agree to safe, efficient and equitable solutions in a distributed manner. To ensure system requirements will be satisfied in practice, each method is evaluated for a specific model of agent behavior, be it cooperative or non-cooperative. The air traffic flow control problem is investigated from the point of view of the airlines, whose costs are directly affected by resource allocation decisions made by the Federal Aviation Administration in order to mitigate traffic disruptions caused by weather. Airlines are first modeled as cooperative, and a distributed algorithm is presented with various global cost metrics which balance efficient and equitable use of resources differently. Next, a competitive airline model is assumed and two market mechanisms are developed for allocating contested airspace resources. The resource market mechanism provides a solution for which convergence to an efficient solution can be guaranteed, and each airline will improve on the solution that would occur without its inclusion in the decision process. A lump-sum market is then introduced as an alternative mechanism, for which efficiency loss bounds exist if airlines attempt to manipulate prices. Initial convergence results for lump-sum markets are presented for simplified problems with a single resource. To validate these algorithms, two air traffic flow models are developed which extend previous techniques, the first a convenient convex model made possible by assuming constant velocity flow, and the second a more complex flow model with full inflow, velocity and rerouting control. 
Autonomous vehicle teams are envisaged for many applications including mobile sensing and search and rescue. To enable these high-level applications, multi-vehicle collision avoidance is solved using a cooperative, decentralized algorithm. For the development of coordination algorithms for autonomous vehicles, the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) is presented. This testbed provides significant advantages over other aerial testbeds due to its small size and low maintenance requirements.

  14. Fission and fusion interaction phenomena of mixed lump kink solutions for a generalized (3+1)-dimensional B-type Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Liu, Yaqing; Wen, Xiaoyong

    2018-05-01

    In this paper, a generalized (3+1)-dimensional B-type Kadomtsev-Petviashvili (gBKP) equation is investigated by using the Hirota bilinear method. With the aid of symbolic computation, some new lump, mixed lump-kink and periodic lump solutions are derived. Based on the derived solutions, some novel interaction phenomena, such as the fission and fusion interactions between one lump soliton and one kink soliton, the fission and fusion interactions between one lump soliton and a pair of kink solitons, and the interactions between two periodic lump solitons, are discussed graphically. These results might be helpful for understanding the propagation of shallow water waves.

  15. 20 CFR 234.11 - 1974 Act lump-sum death payment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false 1974 Act lump-sum death payment. 234.11... LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.11 1974 Act lump-sum death payment. (a) The total amount... household” as the employee at the time of the employee's death. (Refer to § 234.21 for an explanation of...

  16. 20 CFR 234.11 - 1974 Act lump-sum death payment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true 1974 Act lump-sum death payment. 234.11... LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.11 1974 Act lump-sum death payment. (a) The total amount... household” as the employee at the time of the employee's death. (Refer to § 234.21 for an explanation of...

  17. 20 CFR 234.11 - 1974 Act lump-sum death payment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false 1974 Act lump-sum death payment. 234.11... LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.11 1974 Act lump-sum death payment. (a) The total amount... household” as the employee at the time of the employee's death. (Refer to § 234.21 for an explanation of...

  18. 20 CFR 234.11 - 1974 Act lump-sum death payment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false 1974 Act lump-sum death payment. 234.11... LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.11 1974 Act lump-sum death payment. (a) The total amount... household” as the employee at the time of the employee's death. (Refer to § 234.21 for an explanation of...

  19. 20 CFR 234.11 - 1974 Act lump-sum death payment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true 1974 Act lump-sum death payment. 234.11... LUMP-SUM PAYMENTS Lump-Sum Death Payment § 234.11 1974 Act lump-sum death payment. (a) The total amount... household” as the employee at the time of the employee's death. (Refer to § 234.21 for an explanation of...

  20. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    NASA Astrophysics Data System (ADS)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy and finite element analysis. A lumped-parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, thereby providing both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit (GPU), is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.

  1. Lump and lump-soliton solutions to the (2+1) -dimensional Ito equation

    NASA Astrophysics Data System (ADS)

    Yang, Jin-Yun; Ma, Wen-Xiu; Qin, Zhenyun

    2017-06-01

    Based on the Hirota bilinear form of the (2+1)-dimensional Ito equation, one class of lump solutions and two classes of interaction solutions between lumps and line solitons are generated through analysis and symbolic computations with Maple. Analyticity is naturally guaranteed for the presented lump and interaction solutions, and the interaction solutions reduce to lumps (or line solitons) when the hyperbolic-cosine (or the quadratic function) disappears. Three-dimensional plots and contour plots are made for two specific examples of the resulting interaction solutions.

  2. Fast Multilevel Solvers for a Class of Discrete Fourth Order Parabolic Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Bin; Chen, Luoping; Hu, Xiaozhe

    2016-03-05

    In this paper, we study fast iterative solvers for the solution of fourth order parabolic equations discretized by mixed finite element methods. We propose to use the consistent mass matrix in the discretization and use the lumped mass matrix to construct efficient preconditioners. We provide eigenvalue analysis for the preconditioned system and estimate the convergence rate of the preconditioned GMRes method. Furthermore, we show that these preconditioners only need to be solved inexactly by optimal multigrid algorithms. Our numerical examples indicate that the proposed preconditioners are very efficient and robust with respect to both discretization parameters and diffusion coefficients. We also investigate the performance of multigrid algorithms with either collective smoothers or distributive smoothers when solving the preconditioner systems.
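
    The idea of preconditioning a consistent-mass system with its lumped counterpart can be checked on the simplest case. The sketch below is illustrative, not the paper's mixed discretization: for 1D linear finite elements it builds both matrices and confirms that the generalized eigenvalues of the lumped-preconditioned mass matrix fall in [1/3, 1] independent of mesh size, the kind of mesh-independent spectral bound that makes such preconditioners effective.

```python
import numpy as np

def mass_matrices(n):
    """Consistent and row-sum-lumped mass matrices for 1D linear elements
    on a uniform mesh of the unit interval with n interior nodes."""
    h = 1.0 / (n + 1)
    M = (h / 6.0) * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    M_lumped = np.diag(M.sum(axis=1))   # row-sum lumping -> diagonal matrix
    return M, M_lumped
```

    Because the lumped matrix is diagonal, applying the preconditioner costs only a pointwise scaling, while the bounded spectrum keeps the iteration count of a Krylov solver flat as the mesh is refined.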

  3. Flood predictions using the parallel version of distributed numerical physical rainfall-runoff model TOPKAPI

    NASA Astrophysics Data System (ADS)

    Boyko, Oleksiy; Zheleznyak, Mark

    2015-04-01

    The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems: multicore/multiprocessor PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across all processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated on case studies of flood prediction for mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.

  4. Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)

    NASA Astrophysics Data System (ADS)

    Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.

    2018-05-01

    A method is proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices (the law of lumped Markov chains) depends linearly on the minimal degree of the polynomials over the field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.

  5. Nonuniformity correction for an infrared focal plane array based on diamond search block matching.

    PubMed

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring severely degrade the correction quality. In this paper, an improved algorithm based on the diamond search block matching algorithm and an adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between corresponding transform pairs, the gradient descent algorithm is applied to update the correction parameters. During gradient descent, the local standard deviation and a threshold are used to control the learning rate and avoid the accumulation of matching error. Finally, nonuniformity correction is realized by a linear model with the updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm reduces nonuniformity with fewer ghosting artifacts in moving areas and also overcomes image blurring in static areas.
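The per-pixel update described above can be sketched as a least-mean-squares step on the linear correction model. This is an illustrative reconstruction, not the authors' implementation; the names and constants (base_lr, std_thresh) are hypothetical:

```python
# One gradient-descent step for a single pixel's gain g and offset o in the
# linear NUC model y = g*x + o. The learning rate is damped when the local
# standard deviation is low (flat region), limiting matching-error buildup.

def nuc_update(g, o, x_cur, y_ref, local_std, base_lr=0.05, std_thresh=2.0):
    """Return updated (gain, offset) for one matched pixel pair."""
    lr = base_lr if local_std > std_thresh else base_lr * local_std / std_thresh
    err = (g * x_cur + o) - y_ref   # error vs. the matched reference pixel
    g -= lr * err * x_cur           # gradient of err^2/2 w.r.t. the gain
    o -= lr * err                   # gradient of err^2/2 w.r.t. the offset
    return g, o

g, o = 1.0, 0.0                     # start from the identity correction
for _ in range(300):                # repeated matched-pair updates
    g, o = nuc_update(g, o, x_cur=0.8, y_ref=0.5, local_std=5.0)
print(round(g * 0.8 + o, 3))        # corrected value converges to 0.5
```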

  6. Impact of model structure on flow simulation and hydrological realism: from a lumped to a semi-distributed approach

    NASA Astrophysics Data System (ADS)

    Garavaglia, Federico; Le Lay, Matthieu; Gottardi, Fréderic; Garçon, Rémy; Gailhard, Joël; Paquet, Emmanuel; Mathevet, Thibault

    2017-08-01

    Model intercomparison experiments are widely used to investigate and improve hydrological model performance. However, a study based only on runoff simulation is not sufficient to discriminate between different model structures. Hence, there is a need to improve hydrological models for specific streamflow signatures (e.g., low and high flow) and multi-variable predictions (e.g., soil moisture, snow and groundwater). This study assesses the impact of model structure on flow simulation and hydrological realism using three versions of a hydrological model called MORDOR: the historical lumped structure and a revisited formulation available in both lumped and semi-distributed structures. In particular, the main goal of this paper is to investigate the relative impact of model equations and spatial discretization on flow simulation, snowpack representation and evapotranspiration estimation. Comparison of the models is based on an extensive dataset composed of 50 catchments located in French mountainous regions. The evaluation framework is founded on a multi-criterion split-sample strategy. All models were calibrated using an automatic optimization method based on an efficient genetic algorithm. The evaluation framework is enriched by the assessment of snow and evapotranspiration modeling against in situ and satellite data. The results showed that the new model formulations perform significantly better than the initial one in terms of the various streamflow signatures, snow and evapotranspiration predictions. The semi-distributed approach provides better calibration-validation performance for the snow cover area, snow water equivalent and runoff simulation, especially for nival catchments.

  7. Lump solutions of the BKP equation

    NASA Astrophysics Data System (ADS)

    Gilson, C. R.; Nimmo, J. J. C.

    1990-07-01

    Rational solutions of the BKP equation which decay to zero in all directions in the plane are obtained. These solutions are analogous to the lump solutions of the KPI equation. Properties of the single lump solution are described and the form of the N-lump solution is given. It is shown that single lump solutions are only non-singular for spectral parameters lying in certain regions of the complex plane.

  8. Interaction Solutions for Lump-line Solitons and Lump-kink Waves of the Dimensionally Reduced Generalised KP Equation

    NASA Astrophysics Data System (ADS)

    Ahmed, Iftikhar

    2017-09-01

    In this work, we investigate the dimensionally reduced generalised Kadomtsev-Petviashvili equation, which can describe many nonlinear phenomena in fluid dynamics. Based on the bilinear formalism, direct Maple symbolic computations with an ansatz function are used to construct three classes of interaction solutions between lumps and line solitons. The dynamics of the interaction phenomena are illustrated with 3D plots and 2D contour plots. For the first class of interaction solutions, the lump appeared at t=0, and there was a normal interaction between the lump and line solitons at t=1, 2, 5, and 10. For the second class, the lump appeared from one side of the line soliton at t=0 and moved downward at t=1, 2, and 5; finally, at t=10, the lump was completely swallowed by the other side. By contrast, for the third class, the lump appeared from one side of the line soliton at t=0 and moved upward at t=1, 2, and 5; finally, at t=10, the lump was completely swallowed by the other side. Interaction solutions between lump solutions and a kink wave are also investigated. These results may help in understanding the propagation of nonlinear waves in fluid mechanics.

  9. Estimating Daily Evapotranspiration Based on A Model of Evapotranspiration Fraction (EF) for Mixed Pixels

    NASA Astrophysics Data System (ADS)

    Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.

    2017-12-01

    Land surface heterogeneities significantly affect the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens for lower-resolution data. At the same time, temporal-scale extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration that accounts for the heterogeneity of mixed pixels. To that end, an equation to calculate the evapotranspiration fraction (EF) of mixed pixels was derived based on two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal, within an acceptable margin of bias, to that of any other sub-pixel in the same mixed pixel, and to the AE of the mixed pixel. This assumption only simplifies the equation, and its uncertainties and the resulting errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. This equation is intended to correct the spatial-scale error of the mixed-pixel EF and can be used to calculate daily ET from daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at a scale of 300 m after resampling the 30-m resolution datasets to 300 m resolution, which carried out the key step of the model. The results before and after correction were compared with each other and validated using site data from eddy-correlation systems. The results indicated that the new model improves the accuracy of daily ET estimation relative to the lumped method.
    Validations at 12 sites of eddy-correlation systems for 9 days of HJ-1B overpasses showed that the R² increased from 0.62 to 0.82; the RMSE decreased from 2.47 MJ/m² to 1.60 MJ/m²; and the MBE decreased from 1.92 MJ/m² to 1.18 MJ/m², a quite significant enhancement. The model is easy to apply, and the module for inhomogeneous surfaces is independent and easy to embed in traditional remote sensing algorithms for heat fluxes to obtain daily ET; those algorithms were mainly designed to calculate LE or ET under unsaturated conditions and did not consider land surface heterogeneities.
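The two assumptions above amount to a very small computation per mixed pixel. The following sketch is hypothetical (helper names, land-cover labels, and numbers are illustrative, not from the paper):

```python
# Under uniform available energy (AE) inside a mixed pixel, the pixel EF is
# the area-weighted mean of sub-pixel EFs, each taken from the nearest pure
# pixel of the same land-cover type; daily ET is then EF times daily AE.

def mixed_pixel_ef(fractions, ef_by_cover):
    """Area-weighted EF of a mixed pixel; fractions maps cover -> area share."""
    return sum(share * ef_by_cover[cover] for cover, share in fractions.items())

def daily_et(ef, daily_ae):
    """Daily evapotranspiration, in the same units as daily_ae (e.g. MJ/m^2)."""
    return ef * daily_ae

fractions = {"cropland": 0.6, "bare_soil": 0.4}      # sub-pixel area shares
ef_by_cover = {"cropland": 0.75, "bare_soil": 0.30}  # EFs of nearest pure pixels
ef = mixed_pixel_ef(fractions, ef_by_cover)          # 0.6*0.75 + 0.4*0.30 = 0.57
print(daily_et(ef, 12.0))                            # ~6.84 MJ/m^2 per day
```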

  10. Lump Solutions for the (3+1)-Dimensional Kadomtsev-Petviashvili Equation

    NASA Astrophysics Data System (ADS)

    Liu, De-Yin; Tian, Bo; Xie, Xi-Yang

    2016-12-01

    In this article, we investigate the lump solutions for the Kadomtsev-Petviashvili equation in (3+1) dimensions that describe the dynamics of plasmas or fluids. Via the symbolic computation, lump solutions for the (3+1)-dimensional Kadomtsev-Petviashvili equation are derived based on the bilinear forms. The conditions to guarantee analyticity and rational localisation of the lump solutions are presented. The lump solutions contain eight parameters, two of which are totally free, and the other six of which need to satisfy the presented conditions. Plots with particular choices of the involved parameters are made to show the lump solutions and their energy distributions.

  11. Temporal disconnectivity of the energy landscape in glassy systems

    NASA Astrophysics Data System (ADS)

    Lempesis, Nikolaos; Boulougouris, Georgios C.; Theodorou, Doros N.

    2013-03-01

    An alternative graphical representation of the potential energy landscape (PEL) has been developed and applied to a binary Lennard-Jones glassy system, providing insight into the unique topology of the system's potential energy hypersurface. With the help of this representation one is able to monitor the different explored basins of the PEL, as well as how - and mainly when - subsets of basins communicate with each other via transitions in such a way that details of the prior temporal history have been erased, i.e., local equilibration between the basins in each subset has been achieved. In this way, apart from detailed information about the structure of the PEL, the system's temporal evolution on the PEL is described. In order to gather all necessary information about the identities of two or more basins that are connected with each other, we consider two different approaches. The first one is based on consideration of the time needed for two basins to mutually equilibrate their populations according to the transition rate between them, in the absence of any effect induced by the rest of the landscape. The second approach is based on an analytical solution of the master equation that explicitly takes into account the entire explored landscape. It is shown that both approaches lead to the same result concerning the topology of the PEL and dynamical evolution on it. Moreover, a "temporal disconnectivity graph" is introduced to represent a lumped system stemming from the initial one. The lumped system is obtained via a specially designed algorithm [N. Lempesis, D. G. Tsalikis, G. C. Boulougouris, and D. N. Theodorou, J. Chem. Phys. 135, 204507 (2011), 10.1063/1.3663207]. The temporal disconnectivity graph provides useful information about both the lumped and the initial systems, including the definition of "metabasins" as collections of basins that communicate with each other via transitions that are fast relative to the observation time. 
Finally, the two examined approaches are compared to an "on the fly" molecular dynamics-based algorithm [D. G. Tsalikis, N. Lempesis, G. C. Boulougouris, and D. N. Theodorou, J. Chem. Theory Comput. 6, 1307 (2010), 10.1021/ct9004245].

  12. Improved artificial bee colony algorithm for wavefront sensor-less system in free space optical communication

    NASA Astrophysics Data System (ADS)

    Niu, Chaojun; Han, Xiang'e.

    2015-10-01

    Adaptive optics (AO) technology is an effective way to mitigate the effect of turbulence on free-space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of a honeybee swarm; it is simple, converges quickly, is robust, and requires few parameter settings. In this paper, we simulate the application of the improved ABC algorithm to correct the distorted wavefront and demonstrate its effectiveness. We then simulate the application of the ABC, differential evolution (DE), and stochastic parallel gradient descent (SPGD) algorithms to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate, and the intensity fluctuation in different turbulence conditions before and after correction. The results show that the ABC algorithm corrects much faster than the DE algorithm and corrects strong turbulence better than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less so in weak turbulence.
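For reference, a compact generic ABC loop (not the paper's improved variant) minimizing a toy objective that stands in for the wavefront-quality metric; all parameter names and defaults are illustrative:

```python
# Generic artificial bee colony (ABC) minimizer: employed bees perturb food
# sources, onlookers re-search the better sources, scouts reset stagnant ones.

import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food

    def try_neighbor(i):
        # Perturb one coordinate of source i toward a random partner source.
        k, d = rng.randrange(n_food), rng.randrange(dim)
        cand = foods[i][:]
        cand[d] += rng.uniform(-1.0, 1.0) * (foods[i][d] - foods[k][d])
        cand[d] = min(max(cand[d], lo), hi)
        fc = f(cand)
        if fc < fit[i]:                      # greedy selection
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):              # employed-bee phase
            try_neighbor(i)
        weights = [1.0 / (1.0 + v) for v in fit]
        total = sum(weights)
        for _ in range(n_food):              # onlooker phase: prefer better sources
            r, acc = rng.random() * total, 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    break
            try_neighbor(i)
        for i in range(n_food):              # scout phase: reset stagnant sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=fit.__getitem__)
    return fit[best], foods[best]

# Toy stand-in objective: a 2-D sphere function with its optimum at the origin.
best_f, best_x = abc_minimize(lambda x: sum(v * v for v in x),
                              dim=2, bounds=(-5.0, 5.0))
print(best_f)   # small residual near zero
```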

  13. 42 CFR 411.46 - Lump-sum payments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Lump-sum payments. 411.46 Section 411.46 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM... Covered Under Workers' Compensation § 411.46 Lump-sum payments. (a) Lump-sum commutation of future...

  14. 20 CFR 225.26 - Residual Lump-Sum PIA.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., except that social security earnings are not used to compute the RLS PIA. ... INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Survivor Annuities and the Amount of the Residual Lump-Sum Payable § 225.26 Residual Lump-Sum PIA. The Residual Lump-Sum PIA (RLS PIA) is used to compute...

  15. Implementation of a numerical holding furnace model in foundry and construction of a reduced model

    NASA Astrophysics Data System (ADS)

    Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane

    2016-09-01

    Vacuum holding induction furnaces are used for manufacturing turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor in manufacturing these parts in accordance with geometrical and structural expectations. The definition of a reduced heat transfer model, with experimental identification through estimation of its parameters, is required here. In a further stage this model will be used to characterize heat exchanges using internal sensors through inverse techniques, in order to optimize the furnace command and its design. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. A detailed model allows the calculation of the internal induction heat source as well as transient radiative transfer inside the furnace. A reduced lumped-body model has been defined to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body were carried out using a Levenberg-Marquardt least squares minimization algorithm in Matlab, using two synthetic temperature signals, with a further validation test.
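The identification step can be illustrated on a one-parameter lumped body (first-order cooling with time constant tau) fitted to synthetic temperatures. The paper uses Levenberg-Marquardt in Matlab; here a damped Gauss-Newton step stands in for it, and all names and numbers are hypothetical:

```python
# Fit the time constant of a lumped-capacitance cooling model
# T(t) = T_amb + (T0 - T_amb) * exp(-t / tau) to synthetic data.

import math

def model(t, tau, T0=1000.0, T_amb=20.0):
    return T_amb + (T0 - T_amb) * math.exp(-t / tau)

def fit_tau(times, temps, tau0=50.0, iters=50):
    tau = tau0
    for _ in range(iters):
        r = [model(t, tau) - y for t, y in zip(times, temps)]   # residuals
        # d(model)/d(tau) = (model - T_amb) * t / tau^2
        J = [(model(t, tau) - 20.0) * t / tau**2 for t in times]
        num = sum(j * ri for j, ri in zip(J, r))
        den = sum(j * j for j in J) + 1e-9    # small LM-style damping term
        tau -= num / den                      # Gauss-Newton step
    return tau

times = [0, 20, 40, 60, 80, 100]
temps = [model(t, 130.0) for t in times]      # synthetic data, true tau = 130 s
print(round(fit_tau(times, temps), 1))        # ~130.0
```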

  16. Simulation of adaptive semi-active magnetorheological seat damper for vehicle occupant blast protection

    NASA Astrophysics Data System (ADS)

    Yoo, Jin-Hyeong; Murugan, Muthuvel; Wereley, Norman M.

    2013-04-01

    This study investigates a lumped-parameter human body model, which includes the lower legs in a seated posture, within a quarter-car model for blast injury assessment simulation. To simulate the shock acceleration of the vehicle, a mine blast analysis was conducted on a generic land vehicle crew compartment (sand box) structure. To simulate human body dynamics with non-linear parameters, a physical model of the lumped-parameter human body within a quarter-car model was implemented using multi-body dynamic simulation software. For the control scheme, a skyhook algorithm was coupled to the multi-body dynamic model by running a co-simulation with the control scheme software plug-in. The injury criteria and tolerance levels for the biomechanical effects are discussed for each of the identified vulnerable body regions, such as the relative head displacement and the neck bending moment. The objective of this analytical model development is to study the performance of an adaptive semi-active magnetorheological damper that can be used in vehicle-occupant protection technology enhancements to the seat design of a mine-resistant military vehicle.
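A common on-off form of the skyhook law referenced above can be sketched as follows (illustrative only; the paper's controller runs inside a multi-body co-simulation, and the damping coefficients here are hypothetical):

```python
# On-off skyhook control: command the "hard" damping state only when the
# achievable damper force opposes the absolute seat velocity, i.e. when
# v_seat * (v_seat - v_base) > 0; otherwise fall back to the soft state.

def skyhook_damping(v_seat, v_base, c_on=2000.0, c_off=300.0):
    """Return the commanded damping coefficient (N*s/m)."""
    v_rel = v_seat - v_base
    return c_on if v_seat * v_rel > 0 else c_off

print(skyhook_damping(0.5, 0.1))   # seat moving up faster than base -> 2000.0
print(skyhook_damping(0.5, 0.9))   # damper force cannot help here  -> 300.0
```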

  17. A Bayesian Uncertainty Framework for Conceptual Snowmelt and Hydrologic Models Applied to the Tenderfoot Creek Experimental Forest

    NASA Astrophysics Data System (ADS)

    Smith, T.; Marshall, L.

    2007-12-01

    In many mountainous regions, the single most important parameter in forecasting the controls on regional water resources is snowpack (Williams et al., 1999). In an effort to bridge the gap between theoretical understanding and functional modeling of snow-driven watersheds, a flexible hydrologic modeling framework is being developed. The aim is to create a suite of models that move from parsimonious structures, concentrated on aggregated watershed response, to those focused on representing finer-scale processes and distributed response. This framework will operate as a tool to investigate the link between hydrologic model predictive performance, uncertainty, model complexity, and observable hydrologic processes. Bayesian methods, and particularly Markov chain Monte Carlo (MCMC) techniques, are extremely useful for uncertainty assessment and parameter estimation in hydrologic models. However, these methods have some difficulties in implementation. In a traditional Bayesian setting, it can be difficult to reconcile multiple data types, particularly those offering different spatial and temporal coverage, depending on the model type. These difficulties are exacerbated by the sensitivity of MCMC algorithms to model initialization and by complex parameter interdependencies. As a way of circumventing some of these computational complications, adaptive MCMC algorithms have been developed that take advantage of the information gained from each successive iteration. Two adaptive algorithms are compared in this study: the Adaptive Metropolis (AM) algorithm, developed by Haario et al. (2001), and the Delayed Rejection Adaptive Metropolis (DRAM) algorithm, developed by Haario et al. (2006). While neither algorithm is truly Markovian, each has been proven to satisfy the desired ergodicity and stationarity properties of Markov chains.
    Both algorithms were implemented as the uncertainty and parameter estimation framework for a conceptual rainfall-runoff model based on the Probability Distributed Model (PDM), developed by Moore (1985). We implement the modeling framework in the Stringer Creek watershed in the Tenderfoot Creek Experimental Forest (TCEF), Montana. The snowmelt-driven watershed offers the additional challenge of modeling snow accumulation and melt, and current efforts are aimed at developing a temperature- and radiation-index snowmelt model. Auxiliary data available from within TCEF's watersheds are used to support understanding of information value as it relates to predictive performance. Because the model is based on lumped parameters, auxiliary data are hard to incorporate directly. However, these additional data offer benefits through their ability to inform the prior distributions of the lumped model parameters. By incorporating data offering different information into the uncertainty assessment process, a cross-validation technique is engaged to better ensure that modeled results reflect real process complexity.
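The AM idea can be sketched in one dimension: the proposal width is periodically re-tuned from the variance of the chain itself rather than fixed in advance. The target below is a standard normal standing in for a hydrologic posterior; all names are illustrative:

```python
# 1-D Adaptive Metropolis sketch (after Haario et al., 2001): a random-walk
# Metropolis sampler whose proposal scale is refreshed from the empirical
# variance of the chain generated so far.

import math
import random
import statistics

def log_target(x):
    return -0.5 * x * x              # log density of N(0, 1), up to a constant

def adaptive_metropolis(n=5000, adapt_every=100, seed=7):
    rng = random.Random(seed)
    x, scale, chain = 0.0, 1.0, []
    for i in range(n):
        prop = x + rng.gauss(0.0, scale)
        accept_lp = min(0.0, log_target(prop) - log_target(x))
        if rng.random() < math.exp(accept_lp):
            x = prop                                   # accept the proposal
        chain.append(x)
        if i and i % adapt_every == 0:
            # AM step: re-tune the proposal width from the chain's variance
            # (2.38 is the classic near-optimal 1-D scaling factor).
            scale = 2.38 * math.sqrt(statistics.pvariance(chain) + 1e-6)
    return chain

chain = adaptive_metropolis()
print(round(statistics.mean(chain[1000:]), 1))        # close to 0.0
```

DRAM extends this by attempting a second, smaller proposal after each rejection before moving on.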

  18. 28 CFR 523.16 - Lump sum awards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.16 Lump sum awards. Any staff member may recommend to the Warden the approval of an inmate for a lump sum award of extra good time. Such recommendations... make lump sum awards of extra good time not to exceed thirty days. If the recommendation is for an...

  19. 28 CFR 523.16 - Lump sum awards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.16 Lump sum awards. Any staff member may recommend to the Warden the approval of an inmate for a lump sum award of extra good time. Such recommendations... make lump sum awards of extra good time not to exceed thirty days. If the recommendation is for an...

  20. 28 CFR 523.16 - Lump sum awards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.16 Lump sum awards. Any staff member may recommend to the Warden the approval of an inmate for a lump sum award of extra good time. Such recommendations... make lump sum awards of extra good time not to exceed thirty days. If the recommendation is for an...

  1. 28 CFR 523.16 - Lump sum awards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.16 Lump sum awards. Any staff member may recommend to the Warden the approval of an inmate for a lump sum award of extra good time. Such recommendations... make lump sum awards of extra good time not to exceed thirty days. If the recommendation is for an...

  2. 28 CFR 523.16 - Lump sum awards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.16 Lump sum awards. Any staff member may recommend to the Warden the approval of an inmate for a lump sum award of extra good time. Such recommendations... make lump sum awards of extra good time not to exceed thirty days. If the recommendation is for an...

  3. Some Interaction Solutions of a Reduced Generalised (3+1)-Dimensional Shallow Water Wave Equation for Lump Solutions and a Pair of Resonance Solitons

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Chen, Mei-Dan; Li, Xian; Li, Biao

    2017-05-01

    Through the Hirota bilinear transformation and symbolic computation with Maple, a class of lump solutions, rationally localised in all directions in space, to a reduced generalised (3+1)-dimensional shallow water wave (SWW) equation is presented. The resulting lump solutions all contain six parameters, two of which are free due to the translation invariance of the SWW equation, and the other four of which must satisfy a nonzero determinant condition guaranteeing analyticity and rational localisation of the solutions. We then derive the interaction solutions for lump solutions and a one-stripe soliton; the result shows that particular lump solutions with specific values of the involved parameters are drowned or swallowed by the stripe soliton. Furthermore, we extend this method to a more general combination of a positive quadratic function and hyperbolic functions. It is especially interesting that a rogue wave is found to be aroused by the interaction between lump solutions and a pair of resonance stripe solitons. By choosing the values of the parameters, the dynamic properties of the lump solutions, of the interaction solutions between lump solutions and one stripe soliton, and of the interaction solutions between lump solutions and a pair of resonance solitons are shown in dynamic graphs.

  4. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which agrees much better with human vision. The algorithm is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed algorithm has low computational complexity, and the architecture for dynamic panoramic image processing has low hardware cost and power consumption. The proposed algorithm is valid.
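CORDIC suits the FPGA implementation because the polar-to-rectangular mapping needed for unwrapping reduces to shifts, adds, and a small arctangent table. A generic floating-point CORDIC rotation sketch (not the paper's fixed-point VHDL):

```python
# CORDIC in rotation mode: rotate the pre-scaled vector (K, 0) through an
# angle theta using only halvings (shifts in hardware) and additions,
# yielding (sin(theta), cos(theta)).

import math

N_ITERS = 24
ATAN_TABLE = [math.atan(2.0 ** -i) for i in range(N_ITERS)]
K = 1.0
for i in range(N_ITERS):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # CORDIC gain compensation

def cordic_sincos(theta, n=N_ITERS):
    """Valid for |theta| < ~1.74 rad (the CORDIC convergence range)."""
    x, y, z = K, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0      # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN_TABLE[i]
    return y, x                          # (sin(theta), cos(theta))

s, c = cordic_sincos(math.pi / 6)
print(round(s, 4), round(c, 4))          # 0.5 0.866
```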

  5. Qualitative and quantitative evaluation of six algorithms for correcting intensity nonuniformity effects.

    PubMed

    Arnold, J B; Liow, J S; Schaper, K A; Stern, J J; Sled, J G; Shattuck, D W; Worth, A J; Cohen, M S; Leahy, R M; Mazziotta, J C; Rottenberg, D A

    2001-05-01

    The desire to correct intensity nonuniformity in magnetic resonance images has led to the proliferation of nonuniformity-correction (NUC) algorithms with different theoretical underpinnings. In order to provide end users with a rational basis for selecting a given algorithm for a specific neuroscientific application, we evaluated the performance of six NUC algorithms. We used simulated and real MRI data volumes, including six repeat scans of the same subject, in order to rank the accuracy, precision, and stability of the nonuniformity corrections. We also compared algorithms using data volumes from different subjects and different (1.5T and 3.0T) MRI scanners in order to relate differences in algorithmic performance to intersubject variability and/or differences in scanner performance. In phantom studies, the correlation of the extracted with the applied nonuniformity was highest in the transaxial (left-to-right) direction and lowest in the axial (top-to-bottom) direction. Two of the six algorithms demonstrated a high degree of stability, as measured by the iterative application of the algorithm to its corrected output. While none of the algorithms performed ideally under all circumstances, locally adaptive methods generally outperformed nonadaptive methods.

  6. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  7. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays.

    PubMed

    Orżanowski, Tomasz

    2016-01-01

    This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm that is easy to implement in hardware. The proposed NUC algorithm is based on a linear correction scheme with a useful method for updating the pixel offset correction coefficients. The new approach uses the pixel response change, determined at the actual operating conditions relative to reference conditions by means of a shutter, to compensate for the temporal drift of the pixel offsets. Moreover, it also removes any optical shading effect in the output image. Test results for a microbolometer IRFPA demonstrate the efficiency of the proposed NUC algorithm.
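The shutter-based offset update described above can be sketched as follows (variable names are hypothetical; gains stay at their calibrated values while the offsets absorb the drift):

```python
# Shutter-based offset refresh for a linear NUC model y_i = g_i * x_i + o_i:
# each offset is shifted by the gain-scaled change in that pixel's response
# to the (uniform) shutter between reference and current conditions.

def update_offsets(offset_ref, shutter_ref, shutter_now, gain):
    """New offsets: reference offsets minus gain-scaled pixel response drift."""
    return [o - g * (s_now - s_ref)
            for o, g, s_now, s_ref in zip(offset_ref, gain, shutter_now, shutter_ref)]

def correct(raw, gain, offset):
    """Apply the linear NUC model per pixel."""
    return [g * x + o for g, x, o in zip(gain, raw, offset)]

gain        = [1.00, 0.95, 1.05]
offset_ref  = [0.0, 5.0, -4.0]
shutter_ref = [100.0, 104.0, 98.0]   # uniform shutter scene at calibration
shutter_now = [102.0, 107.0, 99.0]   # same scene at current (drifted) conditions
offset_now  = update_offsets(offset_ref, shutter_ref, shutter_now, gain)

# A uniform scene now corrects to exactly the values it had at calibration:
print(correct(shutter_now, gain, offset_now))
print(correct(shutter_ref, gain, offset_ref))
```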

  8. The generation of symmetric and asymmetric lump solitons by a bottom topography

    NASA Astrophysics Data System (ADS)

    Lu, Zhiming

    2016-11-01

    A group of lump solutions to the (2+1)-dimensional Kadomtsev-Petviashvili (KP) equation is obtained analytically by means of the Hirota bilinear transform method. The generation of symmetric and asymmetric lump solitons by an obliquely placed three-dimensional bottom topography is then numerically investigated using the forced Kadomtsev-Petviashvili-I (fKP-I) equation. The numerical method is based on the third-order Runge-Kutta method and the Crank-Nicolson scheme. The main result is the asymmetric generation of asymmetric lump-type solitons downstream of the obstacle. The lump soliton with the smaller amplitude is generated with a longer period and moves at a larger angle with respect to the positive x-axis than the one with the larger amplitude. The amplitude of the lump solitons depends strongly on the volume of the obstacle rather than on its shape. Finally, the effects of the detuning parameter on the generation of lump solitons are also studied. Project supported by NSFC with No. 11272196.

  9. Lump waves and breather waves for a (3+1)-dimensional generalized Kadomtsev-Petviashvili Benjamin-Bona-Mahony equation for an offshore structure

    NASA Astrophysics Data System (ADS)

    Yin, Ying; Tian, Bo; Wu, Xiao-Yu; Yin, Hui-Min; Zhang, Chen-Rong

    2018-04-01

    In this paper, we investigate a (3+1)-dimensional generalized Kadomtsev-Petviashvili Benjamin-Bona-Mahony equation, which describes the fluid flow in the case of an offshore structure. By virtue of the Hirota method and symbolic computation, bilinear forms, the lump-wave and breather-wave solutions are derived. Propagation characteristics and interaction of lump waves and breather waves are graphically discussed. Amplitudes and locations of the lump waves, amplitudes and periods of the breather waves all vary with the wavelengths in the three spatial directions, ratio of the wave amplitude to the depth of water, or product of the depth of water and the relative wavelength along the main direction of propagation. Of the interactions between the lump waves and solitons, there exist two different cases: (i) the energy is transferred from the lump wave to the soliton; (ii) the energy is transferred from the soliton to the lump wave.

  10. Testicle lump

    MedlinePlus

    ... A testicle lump is swelling or a growth (mass) in one or both testicles. Considerations A testicle ... ages. Causes Possible causes of a painful scrotal mass include: A cyst-like lump in the scrotum ...

  11. Diagnostic Performance of a Novel Coronary CT Angiography Algorithm: Prospective Multicenter Validation of an Intracycle CT Motion Correction Algorithm for Diagnostic Accuracy.

    PubMed

    Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K

    2018-06-01

    Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.

  12. Adaptation of a Hyperspectral Atmospheric Correction Algorithm for Multi-spectral Ocean Color Data in Coastal Waters. Chapter 3

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.

    2003-01-01

    This SIMBIOS contract supports several activities over its three-year time span. These include computational aspects of atmospheric correction, in particular the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are performing model calculations to incorporate various absorbing-aerosol models into the tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed algorithms that use MODIS data to characterize thin cirrus effects on aerosol retrieval.

  13. Filtering method of star control points for geometric correction of remote sensing image based on RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Xiangli; Yang, Jungang; Deng, Xinpu

    2018-04-01

    In the process of geometric correction of remote sensing images, a large number of redundant control points can occasionally result in low correction accuracy. To solve this problem, a control-point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest possible data set to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
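
    The filtering idea can be sketched as follows (an illustrative numpy-only sketch, not the paper's code; the affine model, tolerance, and iteration count are assumptions): RANSAC fits a transform from a minimal sample, enlarges the consensus set, and the resulting inlier mask filters out erroneous control points.

```python
import numpy as np

def ransac_affine(src, dst, n_iter=500, tol=1.0, seed=0):
    """Estimate a 2D affine transform dst ~ [x, y, 1] @ A with RANSAC.

    src, dst: (N, 2) arrays of matched control points (e.g. star positions).
    Returns the (3, 2) affine matrix refit on the consensus set, plus an
    inlier mask that filters out erroneous control points.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])       # homogeneous coordinates
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)  # minimal sample
        A, *_ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)
        resid = np.linalg.norm(src_h @ A - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the enlarged consensus set
    A, *_ = np.linalg.lstsq(src_h[best_inliers], dst[best_inliers], rcond=None)
    return A, best_inliers
```

Points rejected by the mask would be dropped before the final geometric correction is computed.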

  14. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity, and calibration often has to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal is to determine and compare nonlinear non-uniformity correction algorithms, ideally providing better NUC performance with less residual non-uniformity and a reduced need for recalibration. New approaches to nonlinear NUC, such as higher-order polynomials and exponentials, are considered; more specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms are compared with common linear non-uniformity correction algorithms. Performance is compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction, and is improved by identifying and replacing bad pixels prior to correction; two bad-pixel identification and replacement techniques are investigated and compared. Performance is presented in the form of simulation results as well as before-and-after images taken with short-wave infrared cameras.
The initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.
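
    As a sketch of the higher-order approach (illustrative and numpy-only, not the dissertation's actual pipeline), a per-pixel polynomial can be fitted from calibration frames taken at several known uniform illumination levels and then evaluated on new frames:

```python
import numpy as np

def fit_nuc_polynomials(responses, targets, order=3):
    """Fit a per-pixel polynomial mapping raw response -> corrected flux.

    responses: (K, H, W) raw frames at K known uniform illumination levels.
    targets:   (K,) true flux at each level.
    Returns coefficients of shape (order+1, H, W), highest power first.
    """
    K, H, W = responses.shape
    flat = responses.reshape(K, -1)
    coeffs = np.empty((order + 1, H * W))
    for p in range(H * W):                     # independent fit per pixel
        coeffs[:, p] = np.polyfit(flat[:, p], targets, order)
    return coeffs.reshape(order + 1, H, W)

def apply_nuc(frame, coeffs):
    """Evaluate each pixel's polynomial on a raw frame (Horner's scheme)."""
    out = np.zeros_like(frame, dtype=float)
    for c in coeffs:
        out = out * frame + c
    return out
```

A one- or two-point correction is the special case of fitting only a gain and an offset.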

  15. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of hydrological simulation at large scales and high precision has refined the spatial description of hydrological behavior. Meanwhile, this trend is accompanied by increasing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples Monte Carlo sampling with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
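
    In outline (a toy sketch, not the study's implementation; the likelihood measure, threshold, and toy model are illustrative assumptions), GLUE keeps "behavioral" parameter sets whose likelihood exceeds a threshold and weights predictions by likelihood. Any sampler, including the heuristic optimizers named above, can supply the candidate sets:

```python
import numpy as np

def glue(model, obs, samples, threshold=0.5):
    """Minimal GLUE: keep 'behavioral' parameter sets whose Nash-Sutcliffe
    efficiency exceeds a threshold, and weight predictions by likelihood.

    model:   callable params -> simulated series
    samples: (N, P) candidate parameter sets (from any sampler, e.g. a GA)
    Returns behavioral params, normalized weights, weighted mean prediction.
    """
    sims = np.array([model(p) for p in samples])
    sse = ((sims - obs) ** 2).sum(axis=1)
    nse = 1.0 - sse / ((obs - obs.mean()) ** 2).sum()   # likelihood measure
    keep = nse > threshold
    w = nse[keep] - threshold
    w = w / w.sum()                                     # normalized weights
    return samples[keep], w, (w[:, None] * sims[keep]).sum(axis=0)
```

The weighted behavioral ensemble would then be used to form prediction bounds.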

  16. On the stability of lumps and wave collapse in water waves.

    PubMed

    Akylas, T R; Cho, Yeunwoo

    2008-08-13

    In the classical water-wave problem, fully localized nonlinear waves of permanent form, commonly referred to as lumps, are possible only if both gravity and surface tension are present. While much attention has been paid to shallow-water lumps, which are generalizations of Korteweg-de Vries solitary waves, the present study is concerned with a distinct class of gravity-capillary lumps recently found on water of finite or infinite depth. In the near linear limit, these lumps resemble locally confined wave packets with envelope and wave crests moving at the same speed, and they can be approximated in terms of a particular steady solution (ground state) of an elliptic equation system of the Benney-Roskes-Davey-Stewartson (BRDS) type, which governs the coupled evolution of the envelope along with the induced mean flow. According to the BRDS equations, however, initial conditions above a certain threshold develop a singularity in finite time, known as wave collapse, due to nonlinear focusing; the ground state, in fact, being exactly at the threshold for collapse suggests that the newly discovered lumps are unstable. In an effort to understand the role of this singularity in the dynamics of lumps, here we consider the fifth-order Kadomtsev-Petviashvili equation, a model for weakly nonlinear gravity-capillary waves on water of finite depth when the Bond number is close to one-third, which also admits lumps of the wave packet type. It is found that an exchange of stability occurs at a certain finite wave steepness, lumps being unstable below but stable above this critical value. As a result, a small-amplitude lump, which is linearly unstable and according to the BRDS equations would be prone to wave collapse, depending on the perturbation, either decays into dispersive waves or evolves into an oscillatory state near a finite-amplitude stable lump.

  17. Spiking Neural Classifier with Lumped Dendritic Nonlinearity and Binary Synapses: A Current Mode VLSI Implementation and Analysis.

    PubMed

    Bhaduri, Aritra; Banerjee, Amitava; Roy, Subhrajit; Kar, Sougata; Basu, Arindam

    2018-03-01

    We present a neuromorphic current-mode implementation of a spiking neural classifier with lumped square-law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve comparable classification accuracy with fewer synaptic resources than conventional algorithms. We show that even in real analog systems with manufacturing imperfections (CV of 23.5% and 14.4% for dendritic branch gains and leaks, respectively), this network is able to produce comparable results with fewer synaptic resources. The chip, fabricated in [Formula: see text]m complementary metal oxide semiconductor, has eight dendrites per cell and uses two opposing cells per class to cancel common-mode inputs. The chip can operate down to [Formula: see text] V and dissipates 19 nW of static power per neuronal cell and [Formula: see text] 125 pJ/spike. For two-class classification problems of high-dimensional rate-encoded binary patterns, the hardware achieves performance comparable to a software implementation of the same network, with only about a 0.5% reduction in accuracy. On two UCI data sets, the integrated circuit has classification accuracy comparable to standard machine learners like support vector machines and extreme learning machines while using two to five times fewer binary synapses. We also show that the system can operate on mean-rate-encoded spike patterns, as well as short bursts of spikes. To the best of our knowledge, this is the first attempt in hardware to perform classification exploiting dendritic properties and binary synapses.

  18. Effect of ambient temperature on species lumping for total organic gases in gasoline exhaust emissions

    NASA Astrophysics Data System (ADS)

    Roy, Anirban; Choi, Yunsoo

    2017-03-01

    Volatile organic compound (VOC) emissions from sources often need to be compressed or "lumped" into species classes for use in emissions inventories intended for air quality modeling, to ensure computational efficiency. Lumped profiles are usually reported for a single ambient temperature. However, temperature-specific detailed profiles have been constructed in the recent past; the current study investigates how the lumping of species from those profiles into different atmospheric chemistry mechanisms is affected by temperature, considering three temperatures (-18 °C, -7 °C and 24 °C). The mechanisms considered differ in the assumptions used for lumping: CB05 (carbon bond type), SAPRC (ozone formation potential) and RACM2 (molecular surrogate and reactivity weighting); four sub-mechanisms for SAPRC were also considered. Scaling factors were developed for each lumped model species and mechanism in terms of moles of lumped species per unit mass. Species that showed a direct one-to-one mapping (SAPRC/RACM2) had scaling factors that were unchanged across mechanisms. However, CB05 showed different trends, since one compound is often mapped onto multiple model species, of which the paraffin carbon bond (PAR) is predominant. Temperature-dependent parameterizations for emission factors pertaining to each lumped species class and mechanism were developed as part of the study. Here, the same kind of model species showed varying lumping parameters across the different mechanisms; these differences could be attributed to the differing lumping approaches. The scaling factors and temperature-dependent parameterizations could be used to update emissions inventories such as MOVES or SMOKE for use in chemical transport modeling.
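
    Mechanically, a scaling factor converts grams of a speciated compound into moles of lumped model species. A minimal sketch (the molecular weights are real, but the mappings below are simplified carbon-count illustrations, not the actual CB05 assignments):

```python
# Molecular weights (g/mol)
MW = {"ethane": 30.07, "propane": 44.10, "toluene": 92.14}

# Moles of each lumped model species per mole of compound.
# NOTE: illustrative stoichiometries only, not the published CB05 mapping.
CB_MAP = {
    "ethane":  {"PAR": 2},
    "propane": {"PAR": 3},
    "toluene": {"TOL": 1},
}

def lump(mass_emissions):
    """Convert speciated mass emissions (g) to moles of lumped species."""
    lumped = {}
    for species, grams in mass_emissions.items():
        moles = grams / MW[species]                     # scaling: mol per g
        for model_species, stoich in CB_MAP[species].items():
            lumped[model_species] = lumped.get(model_species, 0.0) + stoich * moles
    return lumped
```

A temperature-dependent profile would simply swap in different `mass_emissions` (and hence different lumped totals) per temperature.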

  19. Wafer hotspot prevention using etch aware OPC correction

    NASA Astrophysics Data System (ADS)

    Hamouda, Ayman; Power, Dave; Salama, Mohamed; Chen, Ao

    2016-03-01

    As technology development advances into deep-sub-wavelength nodes, multiple patterning is becoming essential to achieve the technology shrink requirements. Recently, Optical Proximity Correction (OPC) technology has introduced simultaneous correction of multiple mask patterns to enable multiple-patterning awareness during OPC correction. This is essential to prevent inter-layer hot-spots during the final pattern transfer. In the state-of-the-art literature, multi-layer awareness is achieved using simultaneous resist-contour simulations to predict and correct hot-spots during mask generation. However, this approach assumes a uniform etch shrink response for all patterns independent of their proximity, which is not sufficient for the full prevention of inter-exposure hot-spots, for example color-space violations or via coverage/enclosure failures post etch. In this paper, we explain the need to include the etch component during multiple-patterning OPC. We also introduce a novel approach for etch-aware simultaneous multiple-patterning OPC, in which we calibrate and verify a lumped model that combines the resist and etch responses. Adding this extra simulation condition during OPC is suitable for full-chip processing from a computational-intensity point of view. Using this model during OPC to predict and correct inter-exposure hot-spots is similar to previously proposed multiple-patterning OPC, yet our approach also corrects post-etch defects more accurately.

  20. Solution for the nonuniformity correction of infrared focal plane arrays.

    PubMed

    Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao

    2005-05-20

    Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm can effectively overcome the influence of the nonlinearity of the detector's response, and it improves the correction precision and enlarges the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs, based on a digital signal processor and field-programmable gate arrays, is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging with a 128 x 128 pixel IRFPA camera prototype.
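
    A sketch of the improved two-point scheme (the logistic S-curve and its inverse are illustrative stand-ins for the paper's response model, and all parameter values are assumptions): linearize first, then apply the standard two-point gain/offset correction.

```python
import numpy as np

def s_curve(x, a=1.0, b=6.0):
    """Assumed S-shaped detector response (logistic); a, b are illustrative."""
    return a / (1.0 + np.exp(-b * (x - 0.5)))

def inv_s_curve(y, a=1.0, b=6.0):
    """Inverse of the assumed S-curve: maps raw data back to a linear scale."""
    return 0.5 - np.log(a / y - 1.0) / b

def two_point_nuc(raw, low, high, linearize):
    """Improved two-point NUC: first undo the nonlinear response, then apply
    the normal two-point gain/offset correction on the linearized data.

    raw: (H, W) frame; low/high: calibration frames at two flux levels.
    """
    lin, lo, hi = linearize(raw), linearize(low), linearize(high)
    gain = (hi - lo).mean() / (hi - lo)     # equalize per-pixel gain
    offset = lo.mean() - gain * lo
    return gain * lin + offset
```

With a linear detector, `linearize` is the identity and this reduces to the normal two-point algorithm.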

  1. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    PubMed

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%).
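
    The baseline-removal half of the idea can be sketched by projecting each spectrum onto the orthogonal complement of a low-order polynomial space (a simplification: BCC-PLS embeds this constraint in the PLS weight selection itself rather than pre-correcting, and the variable names here are assumptions):

```python
import numpy as np

def remove_baseline(spectra, wavenumbers, order=2):
    """Project out polynomial baselines up to `order` from each spectrum row.

    spectra: (n_samples, n_points) array; wavenumbers: (n_points,) axis.
    """
    V = np.vander(wavenumbers, order + 1)    # polynomial basis, columns x^k
    Q, _ = np.linalg.qr(V)                   # orthonormal basis of that space
    return spectra - (spectra @ Q) @ Q.T     # orthogonal complement projection
```

Conventional PLS would then be calibrated on the baseline-free spectra.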

  2. Health monitoring system for transmission shafts based on adaptive parameter identification

    NASA Astrophysics Data System (ADS)

    Souflas, I.; Pezouvanis, A.; Ebrahimi, K. M.

    2018-05-01

    A health monitoring system for a transmission shaft is proposed. The solution is based on real-time identification of the physical characteristics of the transmission shaft, i.e. its stiffness and damping coefficients, using a physically oriented model and linear recursive identification. The efficacy of the suggested condition monitoring system is demonstrated on a prototype transient engine testing facility equipped with a transmission shaft capable of varying its physical properties. Simulation studies reveal that coupling-shaft faults can be detected and isolated using the proposed condition monitoring system. In addition, the performance of various recursive identification algorithms is assessed. The results of this work suggest that the health status of engine dynamometer shafts can be monitored using a simple lumped-parameter shaft model and a linear recursive identification algorithm, which makes the concept practically viable.
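
    A generic recursive least-squares identifier of the kind alluded to can be sketched as follows (the shaft torque model T = k·Δθ + c·Δω and all names are assumptions, not the paper's code):

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain RLS estimator for a model y = phi @ theta, e.g. shaft torque as
    a linear function of twist angle and twist rate: T = k*dtheta + c*domega.
    Drifts in the estimated (k, c) would flag a degrading coupling."""

    def __init__(self, n_params, forgetting=1.0):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e6      # large initial covariance
        self.lam = forgetting                # forgetting factor (1.0 = none)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta
```

A forgetting factor slightly below 1 lets the estimate track slowly varying physical properties.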

  3. Digital algorithm for dispersion correction in optical coherence tomography for homogeneous and stratified media.

    PubMed

    Marks, Daniel L; Oldenburg, Amy L; Reynolds, J Joshua; Boppart, Stephen A

    2003-01-10

    The resolution of optical coherence tomography (OCT) often suffers from blurring caused by material dispersion. We present a numerical algorithm for computationally correcting the effect of material dispersion on OCT reflectance data for homogeneous and stratified media. This is experimentally demonstrated by correcting the image of a polydimethylsiloxane microfluidic structure and of glass slides. The algorithm can be implemented using the fast Fourier transform. With broad spectral bandwidths and highly dispersive media or thick objects, dispersion correction becomes increasingly important.
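
    In outline (an illustrative numpy sketch, not the authors' code; the phase parameterization is an assumption), the correction multiplies the spectral interferogram by the conjugate of the accumulated dispersion phase before the inverse FFT:

```python
import numpy as np

def correct_dispersion(spec, k, phase_coeffs):
    """Compensate material dispersion in OCT spectral data.

    spec: complex spectrum sampled at wavenumbers k.
    phase_coeffs: (a2, a3, ...) coefficients of the nonlinear phase
        a2*(k-k0)**2 + a3*(k-k0)**3 + ... accumulated in the medium
        (constant and linear terms only shift depth, so they are omitted).
    Returns the dispersion-compensated depth profile via inverse FFT.
    """
    k0 = k.mean()
    phase = sum(a * (k - k0) ** (n + 2) for n, a in enumerate(phase_coeffs))
    return np.fft.ifft(spec * np.exp(-1j * phase))
```

For stratified media the phase term would be applied depth-segment by depth-segment rather than globally.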

  4. Digital Algorithm for Dispersion Correction in Optical Coherence Tomography for Homogeneous and Stratified Media

    NASA Astrophysics Data System (ADS)

    Marks, Daniel L.; Oldenburg, Amy L.; Reynolds, J. Joshua; Boppart, Stephen A.

    2003-01-01

    The resolution of optical coherence tomography (OCT) often suffers from blurring caused by material dispersion. We present a numerical algorithm for computationally correcting the effect of material dispersion on OCT reflectance data for homogeneous and stratified media. This is experimentally demonstrated by correcting the image of a polydimethylsiloxane microfluidic structure and of glass slides. The algorithm can be implemented using the fast Fourier transform. With broad spectral bandwidths and highly dispersive media or thick objects, dispersion correction becomes increasingly important.

  5. Lump and rogue waves for the variable-coefficient Kadomtsev-Petviashvili equation in a fluid

    NASA Astrophysics Data System (ADS)

    Jia, Xiao-Yue; Tian, Bo; Du, Zhong; Sun, Yan; Liu, Lei

    2018-04-01

    Under investigation in this paper is the variable-coefficient Kadomtsev-Petviashvili equation, which describes long waves with small amplitude and slow dependence on the transverse coordinate in a single-layer shallow fluid. Employing the bilinear form and symbolic computation, we obtain the lump, mixed lump-stripe soliton and mixed rogue wave-stripe soliton solutions. Discussion indicates that the variable coefficients are related to both the lump soliton’s velocity and its amplitude. Mixed lump-stripe soliton solutions display two different behaviors, fusion and fission. Mixed rogue wave-stripe soliton solutions show that a rogue wave arises from one of the stripe solitons and disappears into the other. As the time approaches 0, the rogue wave’s energy reaches its maximum. Interactions between a lump soliton and a one-stripe soliton, and between a rogue wave and a pair of stripe solitons, are shown graphically.

  6. Rogue waves and lump solutions for a (3+1)-dimensional generalized B-type Kadomtsev-Petviashvili equation in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Wu, Xiao-Yu; Tian, Bo; Chai, Han-Peng; Sun, Yan

    2017-08-01

    Under investigation in this letter is a (3+1)-dimensional generalized B-type Kadomtsev-Petviashvili equation, which describes the weakly dispersive waves propagating in a fluid. Employing the Hirota method and symbolic computation, we obtain the lump, breather-wave and rogue-wave solutions under certain constraints. We graphically study the lump waves under the influence of the parameters h1, h3 and h5, which are all real constants: when h1 increases, the amplitude of the lump wave increases and the location of the peak moves; when h3 increases, the lump wave’s amplitude decreases but the location of the peak remains unchanged; when h5 changes, the lump wave’s peak location moves but the amplitude remains unchanged. Breather waves and rogue waves are also displayed: rogue waves emerge when the periods of the breather waves go to infinity.

  7. "ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SANTHI, NANDAKISHORE

    We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓq^(m−1)/n). This is an improvement over the proof using the one-point Algebraic-Geometric decoding method given previously. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.

  8. An algorithm developed in Matlab for the automatic selection of cut-off frequencies, in the correction of strong motion data

    NASA Astrophysics Data System (ADS)

    Sakkas, Georgios; Sakellariou, Nikolaos

    2018-05-01

    Strong motion recordings are key to many earthquake engineering applications and are fundamental for seismic design. The present study focuses on the automated correction of accelerograms, both analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency bandwidth, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, Peak Ground Acceleration, Peak Ground Velocity and Peak Ground Displacement values are calculated, along with the corrected Fourier spectra and the response spectra. The algorithm is written in the Matlab environment, is fast, and can be used for batch processing or in real-time applications. In addition, an optional signal-to-noise-ratio check is provided, as well as causal or acausal filtering. The algorithm has been tested on six significant earthquakes (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014) of the Greek territory, with analog and digital accelerograms.
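
    The baseline-correction and filtering steps can be sketched as follows (a numpy-only illustration using a polynomial baseline and an acausal brick-wall FFT filter; the study's instrument correction, filter design, and cut-off selection rule are not reproduced):

```python
import numpy as np

def correct_accelerogram(acc, dt, f_lo=0.1, f_hi=25.0, base_order=1):
    """Baseline-correct and band-pass filter a raw accelerogram.

    acc: acceleration samples; dt: sampling interval (s).
    Removes a low-order polynomial baseline, then zeroes spectral content
    outside [f_lo, f_hi] Hz (acausal, zero-phase by construction).
    """
    t = np.arange(len(acc)) * dt
    baseline = np.polyval(np.polyfit(t, acc, base_order), t)
    detrended = acc - baseline
    spec = np.fft.rfft(detrended)
    freqs = np.fft.rfftfreq(len(acc), dt)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # brick-wall band-pass
    return np.fft.irfft(spec, n=len(acc))
```

PGA is then simply the maximum absolute value of the corrected series, and velocity/displacement follow by integration.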

  9. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to prevent it from falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy was 10% higher than that obtained with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
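
    A toy version of the idea (the convergence indicator, inertia schedule, and coefficients below are illustrative assumptions, not the paper's): the inertia weight is scheduled from a spread-based indicator rather than fixed.

```python
import numpy as np

def adaptive_pso(f, bounds, n_particles=30, n_iter=100, seed=0):
    """PSO whose inertia weight adapts to a premature-convergence indicator:
    high fitness spread -> larger inertia (explore), low spread -> smaller
    inertia (refine). Indicator and schedule are illustrative only."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        spread = pval.std() / (abs(pval.mean()) + 1e-12)  # convergence indicator
        w = 0.9 - 0.5 * np.exp(-spread)                   # adaptive inertia
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

In the bias-field setting, `f` would score Legendre-polynomial coefficients by an image-quality criterion such as entropy.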

  10. Development of a transient, lumped hydrologic model for geomorphologic units in a geomorphology based rainfall-runoff modelling framework

    NASA Astrophysics Data System (ADS)

    Vannametee, E.; Karssenberg, D.; Hendriks, M. R.; de Jong, S. M.; Bierkens, M. F. P.

    2010-05-01

    We propose a modelling framework for distributed hydrological modelling of 10³-10⁵ km² catchments by discretizing the catchment in geomorphologic units. Each of these units is modelled using a lumped model representative for the processes in the unit. Here, we focus on the development and parameterization of this lumped model as a component of our framework. The development of the lumped model requires rainfall-runoff data for an extensive set of geomorphological units. Because such large observational data sets do not exist, we create artificial data. With a high-resolution, physically-based, rainfall-runoff model, we create artificial rainfall events and resulting hydrographs for an extensive set of different geomorphological units. This data set is used to identify the lumped model of geomorphologic units. The advantage of this approach is that it results in a lumped model with a physical basis, with representative parameters that can be derived from point-scale measurable physical parameters. The approach starts with the development of the high-resolution rainfall-runoff model that generates an artificial discharge dataset from rainfall inputs as a surrogate of a real-world dataset. The model is run for approximately 10⁵ scenarios that describe different characteristics of rainfall, properties of the geomorphologic units (i.e. slope gradient, unit length and regolith properties), antecedent moisture conditions and flow patterns. For each scenario-run, the results of the high-resolution model (i.e. runoff and state variables) at selected simulation time steps are stored in a database. The second step is to develop the lumped model of a geomorphological unit. This forward model consists of a set of simple equations that calculate Hortonian runoff and state variables of the geomorphologic unit over time. The lumped model contains only three parameters: a ponding factor, a linear reservoir parameter, and a lag time. 
The model is capable of giving an appropriate representation of the transient rainfall-runoff relations that exist in the artificial data set generated with the high-resolution model. The third step is to find the values of empirical parameters in the lumped forward model using the artificial dataset. For each scenario of the high-resolution model run, a set of lumped model parameters is determined with a fitting method using the corresponding time series of state variables and outputs retrieved from the database. Thus, the parameters in the lumped model can be estimated by using the artificial data set. The fourth step is to develop an approach to assign lumped model parameters based upon the properties of the geomorphological unit. This is done by finding relationships between the measurable physical properties of geomorphologic units (i.e. slope gradient, unit length, and regolith properties) and the lumped forward model parameters using multiple regression techniques. In this way, a set of lumped forward model parameters can be estimated as a function of morphology and physical properties of the geomorphologic units. The lumped forward model can then be applied to different geomorphologic units. Finally, the performance of the lumped forward model is evaluated; the outputs of the lumped forward model are compared with the results of the high-resolution model. Our results show that the lumped forward model gives the best estimates of total discharge volumes and peak discharges when rain intensities are not significantly larger than the infiltration capacities of the units and when the units are small with a flat gradient. Hydrograph shapes are fairly well reproduced for most cases except for flat and elongated units with large runoff volumes. The results of this study provide a first step towards developing low-dimensional models for large ungauged basins.
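
    The three-parameter lumped forward model described above can be caricatured in a few lines (an illustrative sketch under assumptions; the framework's actual equations and infiltration treatment are not reproduced):

```python
import numpy as np

def lumped_runoff(rain, dt, pond_factor, k, lag_steps, infil_cap):
    """Lumped Hortonian-runoff sketch with three parameters: a ponding
    factor, a linear-reservoir constant k (Q = S / k), and a pure time lag.
    Requires dt < k for a stable explicit update.

    rain: rainfall intensities per step.
    Returns (discharge series, final reservoir storage).
    """
    storage = 0.0
    out = np.zeros(len(rain) + lag_steps)
    for i, r in enumerate(rain):
        excess = max(r - infil_cap, 0.0) * pond_factor  # infiltration-excess input
        storage += excess * dt
        q = storage / k                                  # linear reservoir outflow
        storage -= q * dt
        out[i + lag_steps] = q                           # delayed by the lag
    return out, storage
```

Regression from unit properties (slope, length, regolith) would then supply `pond_factor`, `k`, and the lag for ungauged units.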

  11. The influence of and the identification of nonlinearity in flexible structures

    NASA Technical Reports Server (NTRS)

    Zavodney, Lawrence D.

    1988-01-01

    Several models were built at NASA Langley and used to demonstrate the following nonlinear behavior: internal resonance in a free response, principal parametric resonance and subcritical instability in a cantilever beam-lumped mass structure, combination resonance in a parametrically excited flexible beam, autoparametric interaction in a two-degree-of-freedom system, instability of the linear solution, saturation of the excited mode, subharmonic bifurcation, and chaotic responses. A video tape documenting these phenomena was made. An attempt to identify a simple structure consisting of two light-weight beams and two lumped masses using the Eigensystem Realization Algorithm showed the inherent difficulty of using a linear-based theory to identify a particular nonlinearity. Preliminary results show the technique requires novel interpretation, and hence may not be useful for structural modes that are coupled by a quadratic nonlinearity. A literature survey was also completed on recent work in parametrically excited nonlinear systems. In summary, nonlinear systems may possess unique behaviors that require nonlinear identification techniques based on an understanding of how nonlinearity affects the dynamic response of structures. In this way, the unique behaviors of nonlinear systems may be properly identified. Moreover, more accurate quantifiable estimates can be made once the qualitative model has been determined.

  12. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera.

    PubMed

    de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell

    2007-01-10

    We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system, together with the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction (the dark image, the base correction image, and the reference level) and the range of application of the correction, using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved with a nonzero dark image, with a base correction image whose mean digital level lies in the linear response range of the camera, and with the mean digital level of the image taken as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
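
    The linear correction itself amounts to a flat-field-style rescaling (a sketch with assumed variable names; the paper's choices of dark image, base image, and reference level are summarized above):

```python
import numpy as np

def correct_nonuniformity(image, dark, base, ref_level=None):
    """Linear spatial nonuniformity correction: subtract the dark image,
    then rescale each pixel by the (dark-subtracted) base correction image
    relative to a reference digital level.

    base: image of a uniform radiance field (the 'base correction image').
    ref_level: defaults to the mean digital level of the base image.
    """
    flat = base.astype(float) - dark
    if ref_level is None:
        ref_level = flat.mean()              # reference digital level
    return (image.astype(float) - dark) * ref_level / flat
```

Each color channel of the camera would be corrected independently with its own dark and base images.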

  13. Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Comparisons are presented here between MC results, experimental electron-probe microanalysis (EPMA) measurements, and phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental and phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductor and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors, using the certified compositions of the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; the derived x-ray intensities have a built-in atomic number correction and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
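As a rough illustration of the alpha-factor bookkeeping, the Ziebold-Ogilvie binary relation C/(1-C) = alpha · K/(1-K) is assumed here, where K is the measured k-ratio against a pure standard; the function name and numbers are hypothetical:

```python
def alpha_factor(c_a, k_a):
    """Ziebold-Ogilvie alpha-factor for component A of a binary A-B alloy:
    C_A/(1 - C_A) = alpha * K_A/(1 - K_A), with K_A = I_sample / I_standard."""
    return (c_a / (1.0 - c_a)) * ((1.0 - k_a) / k_a)

# Ideal case: measured k-ratio equals the true mass fraction, so alpha = 1.
print(alpha_factor(0.4, 0.4))               # → 1.0
# A k-ratio suppressed by absorption gives alpha > 1.
print(alpha_factor(0.4, 0.35) > 1.0)        # → True
```

Comparing alpha-factors computed from measured intensities against those from MC-calculated intensities is what lets systematic errors in either be isolated.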

  14. Improving operational flood ensemble prediction by the assimilation of satellite soil moisture: comparison between lumped and semi-distributed schemes

    NASA Astrophysics Data System (ADS)

    Alvarez-Garreton, C.; Ryu, D.; Western, A. W.; Su, C.-H.; Crow, W. T.; Robertson, D. E.; Leahy, C.

    2014-09-01

    Assimilation of remotely sensed soil moisture data (SM-DA) to correct soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. In the case of large and sparsely monitored catchments, SM-DA is a particularly attractive tool. Within this context, we assimilate active and passive satellite soil moisture (SSM) retrievals using an ensemble Kalman filter to improve operational flood prediction within a large semi-arid catchment in Australia (>40 000 km2). We assess the importance of accounting for channel routing and the spatial distribution of forcing data by applying SM-DA to a lumped and a semi-distributed scheme of the probability distributed model (PDM). Our scheme also accounts for model error representation and seasonal biases and errors in the satellite data. Before assimilation, the semi-distributed model provided more accurate streamflow prediction (Nash-Sutcliffe efficiency, NS = 0.77) than the lumped model (NS = 0.67) at the catchment outlet. However, this did not ensure good performance at the "ungauged" inner catchments. After SM-DA, the streamflow ensemble prediction at the outlet was improved in both the lumped and the semi-distributed schemes: the root mean square error of the ensemble was reduced by 27 and 31%, respectively; the NS of the ensemble mean increased by 7 and 38%, respectively; the false alarm ratio was reduced by 15 and 25%, respectively; and the ensemble prediction spread was reduced while its reliability was maintained. Our findings imply that even when rainfall is the main driver of flooding in semi-arid catchments, adequately processed SSM can be used to reduce errors in the model soil moisture, which in turn provides better streamflow ensemble prediction. We demonstrate that SM-DA efficacy is enhanced when the spatial distribution in forcing data and routing processes are accounted for. 
At ungauged locations, SM-DA is effective at improving streamflow ensemble prediction; however, the updated prediction remains poor because SM-DA does not address systematic errors in the model.
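A minimal sketch of the ensemble Kalman filter update that SM-DA relies on, assuming a scalar satellite observation of one model soil store; all names and numbers are illustrative, not from the study:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, h):
    """One EnKF analysis step. ensemble is (n_members, n_states); h maps a
    state vector to the (scalar) observed quantity."""
    rng = np.random.default_rng(0)
    hx = np.array([h(x) for x in ensemble])           # model-predicted observations
    x_mean, hx_mean = ensemble.mean(0), hx.mean(0)
    # Sample covariances between states and the predicted observation
    p_xh = ((ensemble - x_mean) * (hx - hx_mean)[:, None]).sum(0) / (len(ensemble) - 1)
    p_hh = ((hx - hx_mean) ** 2).sum() / (len(ensemble) - 1)
    gain = p_xh / (p_hh + obs_err_var)                # Kalman gain
    perturbed = obs + rng.normal(0, np.sqrt(obs_err_var), len(ensemble))
    return ensemble + gain[None, :] * (perturbed - hx)[:, None]

# An ensemble of soil-wetness members is pulled toward the satellite retrieval.
ens = np.random.default_rng(1).normal(0.2, 0.05, (100, 1))
updated = enkf_update(ens, obs=0.35, obs_err_var=0.001, h=lambda x: x[0])
print(abs(updated.mean() - 0.35) < abs(ens.mean() - 0.35))  # → True
```

Perturbing the observation per member keeps the analysis ensemble spread statistically consistent, which is what preserves the reliability of the streamflow ensemble.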

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwai, P; Lins, L Nadler

    Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to those cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses with and without heterogeneity correction. The aim of this study was to evaluate the influence of the algorithms Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB), as well as heterogeneity correction, on risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (by 29% and 4%, respectively). The maximum difference between doses calculated by the algorithms was about 1 Gy, with or without heterogeneity correction. Maximum doses calculated with heterogeneity correction were at least equal to or higher than those obtained without it in 84% of the cases with PBC, 77% with AAA and 67% with AXB. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict dose at the CIED, with heterogeneity correction.

  16. Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.

    PubMed

    Latha, Indu; Reichenbach, Stephen E; Tao, Qingping

    2011-09-23

    Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method.
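A toy stand-in for two-dimensional peak detection (not the published two-step or watershed codes): label each point above a threshold as a peak apex when it is a strict local maximum over its 3×3 neighbourhood:

```python
import numpy as np

def detect_peaks(img, threshold):
    """Return apex coordinates: points >= threshold that are the unique
    maximum of their 3x3 neighbourhood (a minimal aggregation rule)."""
    padded = np.pad(img, 1, constant_values=-np.inf)
    apexes = []
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            if (img[i, j] >= threshold and img[i, j] == window.max()
                    and (window == img[i, j]).sum() == 1):
                apexes.append((i, j))
    return apexes

# Two isolated analyte peaks in a small retention-time grid.
img = np.zeros((5, 7))
img[1, 1] = 5.0
img[3, 5] = 3.0
print(detect_peaks(img, threshold=1.0))  # → [(1, 1), (3, 5)]
```

Retention-time shifts in the second column smear a true apex across neighbouring rows, which is why shift correction must precede any such aggregation rule for a fair comparison.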

  17. Optimal spiral phase modulation in Gerchberg-Saxton algorithm for wavefront reconstruction and correction

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Běhal, J.; Bouchal, Z.

    2018-01-01

    In phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used because of its simplicity of implementation. This iterative process can advantageously be deployed in combination with a spatial light modulator (SLM), enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of aberration correction using the GS algorithm can be significantly enhanced by using a vortex image spot as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
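The basic GS iteration (without the spiral phase modulation the paper optimizes) can be sketched with FFTs; this is a generic error-reduction loop under assumed plane geometry, not the authors' code:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=50):
    """Alternate between pupil and focal planes, imposing the known amplitude
    in each plane while keeping only the retrieved phase."""
    field = source_amp.astype(complex)
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = target_amp * np.exp(1j * np.angle(focal))   # impose target amplitude
        field = np.fft.ifft2(focal)
        field = source_amp * np.exp(1j * np.angle(field))   # impose source amplitude
    return np.angle(field)

# Build a target from a known phase screen, then retrieve a phase that
# reproduces it better than the flat-phase starting guess.
rng = np.random.default_rng(0)
amp = np.ones((32, 32))
true_phase = rng.uniform(-0.5, 0.5, (32, 32))
target = np.abs(np.fft.fft2(amp * np.exp(1j * true_phase)))
phase = gerchberg_saxton(amp, target)
err0 = np.abs(np.abs(np.fft.fft2(amp)) - target).mean()
err = np.abs(np.abs(np.fft.fft2(amp * np.exp(1j * phase))) - target).mean()
print(err < err0)  # → True
```

The paper's idea slots in at the target: replacing the plain focal spot with a vortex (spiral-phase) image spot reshapes the intensity pattern the loop converges toward.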

  18. 29 CFR 4050.8 - Automatic lump sum.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... present value (determined as of the deemed distribution date under the missing participant lump sum... Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION PLAN TERMINATIONS MISSING PARTICIPANTS § 4050.8 Automatic lump sum. This section applies to a missing participant whose designated benefit was...

  19. Lump-type solutions for the (4+1)-dimensional Fokas equation via symbolic computations

    NASA Astrophysics Data System (ADS)

    Cheng, Li; Zhang, Yi

    2017-09-01

    Based on the Hirota bilinear form, two classes of lump-type solutions of the (4+1)-dimensional nonlinear Fokas equation, rationally localized in almost all directions in space, are obtained through a direct symbolic computation with Maple. The resulting lump-type solutions contain free parameters. To guarantee the analyticity and rational localization of the solutions, the involved parameters need to satisfy certain constraints. A few particular lump-type solutions, with special choices of the involved parameters, are given.

  20. VETERANS BENEFITS: Veterans Have Mixed Views on a Lump Sum Disability Payment Option

    DTIC Science & Technology

    2000-12-01

    These advantages and disadvantages generally weigh the benefit of financial flexibility against the risk of financial loss. If the lump sum is... (GAO-01-172, December 2000)

  1. Implementation and performance of shutterless uncooled micro-bolometer cameras

    NASA Astrophysics Data System (ADS)

    Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.

    2015-06-01

    A shutterless algorithm is implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated on board the camera. The limited resources in the camera require a compact algorithm, so the efficiency of the coding is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) of the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very attractive for thermal infrared applications where small weight and size and continuous operation are important.
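A minimal sketch of a shutterless two-point correction in which the per-pixel offset is extrapolated from a calibration reference by a single global temperature coefficient, as the abstract describes; the linear model and all names are assumptions:

```python
import numpy as np

def shutterless_nuc(raw, gain, offset_ref, temp_coeff, t_fpa, t_ref):
    """Two-point NUC whose per-pixel offset is predicted from the calibration
    set via a global temperature coefficient instead of a shutter reference."""
    offset = offset_ref + temp_coeff * (t_fpa - t_ref)   # predicted fixed-pattern offset
    return gain * (raw - offset)

# Simulate a flat 100-count scene viewed at a warmer FPA temperature.
rng = np.random.default_rng(2)
gain = rng.uniform(0.9, 1.1, (8, 8))
offset_ref = rng.uniform(-5, 5, (8, 8))
t_ref, t_fpa, coeff = 20.0, 30.0, 0.4
raw = offset_ref + coeff * (t_fpa - t_ref) + 100.0 / gain
corrected = shutterless_nuc(raw, gain, offset_ref, coeff, t_fpa, t_ref)
print(np.allclose(corrected, 100.0))  # → True
```

The RNU penalty of the shutterless approach comes from how well this single global coefficient tracks the true per-pixel drift between calibration points.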

  2. Efficient error correction for next-generation sequencing of viral amplicons

    PubMed Central

    2012-01-01

    Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430

  3. Efficient error correction for next-generation sequencing of viral amplicons.

    PubMed

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
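A toy version of the k-mer idea behind KEC (greatly simplified relative to the published algorithm): count k-mers across all reads, then replace bases whose k-mers are rare with the substitution that makes them frequent ("solid"):

```python
from collections import Counter
from itertools import product

def kmer_counts(reads, k):
    """Count every k-mer occurring in the read set."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, min_count=3):
    """Greedy single-base correction: where a k-mer is rare, try substituting
    each base so the k-mer becomes solid (count >= min_count)."""
    read = list(read)
    for i in range(len(read) - k + 1):
        kmer = "".join(read[i:i + k])
        if counts[kmer] >= min_count:
            continue
        for pos, alt in product(range(k), "ACGT"):
            cand = kmer[:pos] + alt + kmer[pos + 1:]
            if counts[cand] >= min_count:
                read[i:i + k] = list(cand)
                break
    return "".join(read)

# Ten clean amplicon reads plus one read with a single sequencing error.
reads = ["ACGTACGT"] * 10 + ["ACGTACCT"]
counts = kmer_counts(reads, k=4)
print(correct_read("ACGTACCT", counts, k=4))  # → ACGTACGT
```

The published KEC additionally calibrates the solidity threshold to homopolymer context and position, precisely because 454 errors are not randomly distributed.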

  4. A micro-hydrology computation ordering algorithm

    NASA Astrophysics Data System (ADS)

    Croley, Thomas E.

    1980-11-01

    Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
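The node-ordering problem amounts to a topological sort of the drainage network: each sub-area can only be computed after all of its upstream contributors. A minimal sketch using Kahn's algorithm (not the paper's node numbering and coding scheme):

```python
from collections import defaultdict, deque

def order_computations(links):
    """Kahn's algorithm. links is a list of (upstream, downstream) node pairs;
    the returned order computes every node after all its upstream inputs."""
    indegree = defaultdict(int)
    downstream = defaultdict(list)
    nodes = set()
    for up, down in links:
        downstream[up].append(down)
        indegree[down] += 1
        nodes |= {up, down}
    ready = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for d in downstream[n]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    return order

# Two headwater sub-areas drain through node C to the outlet D.
print(order_computations([("A", "C"), ("B", "C"), ("C", "D")]))  # → ['A', 'B', 'C', 'D']
```

Changing the watershed resolution then only changes the link list, not the ordering logic, which is the adaptability the manual approach lacks.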

  5. Hybrid soliton solutions in the (2+1)-dimensional nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Chen, Meidan; Li, Biao

    2017-11-01

    Rational solutions and hybrid solutions from N-solitons are obtained by using the bilinear method and a long wave limit method. Line rogue waves and lumps in the (2+1)-dimensional nonlinear Schrödinger (NLS) equation are derived from two-solitons. Then from three-solitons, hybrid solutions between kink soliton with breathers, periodic line waves and lumps are derived. Interestingly, after the collision, the breathers are kept invariant, but the amplitudes of the periodic line waves and lumps change greatly. For the four-solitons, the solutions describe as breathers with breathers, line rogue waves or lumps. After the collision, breathers and lumps are kept invariant, but the line rogue wave has a great change.

  6. Meteorological correction of optical beam refraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukin, V.P.; Melamud, A.E.; Mironov, V.L.

    1986-02-01

    At the present time, laser reference systems (LRSs) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRSs constantly increase, so a study of error sources and of means of accounting for and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm is compared to regression equations obtained by processing an experimental database. It is shown that, within admissible accuracy limits, the refraction correction algorithm permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.

  7. Management of palpable breast lumps. Consensus guideline for family physicians.

    PubMed Central

    Heisey, R.; Mahoney, L.; Watson, B.

    1999-01-01

    OBJECTIVE: To describe an approach to managing women who present with palpable breast lumps. QUALITY OF EVIDENCE: Databases were searched from 1990 to 1998 using the search terms breast lumps, breast diseases, and breast cysts. Bibliographies of the articles obtained were searched for further relevant titles. Most evidence on management of breast cysts was obtained from cohort studies. Evidence on family physicians' approach to managing breast lumps is based on a review of the 1998 Canadian consensus guidelines and a review of a 1998 consensus guideline by 12 University of Toronto surgical oncologists (U of T guidelines). MAIN MESSAGE: Family physicians can manage women presenting with breast lumps if they have skill in breast cyst aspiration. Most breast cysts can be cured in minutes, thus avoiding unwarranted anxiety and eliminating unnecessary additional investigations and referrals. Women presenting with solid lesions should be referred to a surgeon. CONCLUSIONS: Breast cyst aspiration is a simple technique family physicians can use to either cure breast lumps or define appropriate cases for referral. PMID:10463093

  8. Dynamics of lumps and dark-dark solitons in the multi-component long-wave-short-wave resonance interaction system.

    PubMed

    Rao, Jiguang; Porsezian, Kuppuswamy; He, Jingsong; Kanna, Thambithurai

    2018-01-01

    General semi-rational solutions of an integrable multi-component (2+1)-dimensional long-wave-short-wave resonance interaction system comprising multiple short waves and a single long wave are obtained by employing the bilinear method. These solutions describe the interactions between various types of solutions, including line rogue waves, lumps, breathers and dark solitons. We only focus on the dynamical behaviours of the interactions between lumps and dark solitons in this paper. Our detailed study reveals two different types of excitation phenomena: fusion and fission. It is shown that the fundamental (simplest) semi-rational solutions can exhibit fission of a dark soliton into a lump and a dark soliton or fusion of one lump and one dark soliton into a dark soliton. The non-fundamental semi-rational solutions are further classified into three subclasses: higher-order, multi- and mixed-type semi-rational solutions. The higher-order semi-rational solutions show the process of annihilation (production) of two or more lumps into (from) one dark soliton. The multi-semi-rational solutions describe N ( N ≥2) lumps annihilating into or producing from N -dark solitons. The mixed-type semi-rational solutions are a hybrid of higher-order semi-rational solutions and multi-semi-rational solutions. For the mixed-type semi-rational solutions, we demonstrate an interesting dynamical behaviour that is characterized by partial suppression or creation of lumps from the dark solitons.

  9. Dynamics of lumps and dark-dark solitons in the multi-component long-wave-short-wave resonance interaction system

    NASA Astrophysics Data System (ADS)

    Rao, Jiguang; Porsezian, Kuppuswamy; He, Jingsong; Kanna, Thambithurai

    2018-01-01

    General semi-rational solutions of an integrable multi-component (2+1)-dimensional long-wave-short-wave resonance interaction system comprising multiple short waves and a single long wave are obtained by employing the bilinear method. These solutions describe the interactions between various types of solutions, including line rogue waves, lumps, breathers and dark solitons. We only focus on the dynamical behaviours of the interactions between lumps and dark solitons in this paper. Our detailed study reveals two different types of excitation phenomena: fusion and fission. It is shown that the fundamental (simplest) semi-rational solutions can exhibit fission of a dark soliton into a lump and a dark soliton or fusion of one lump and one dark soliton into a dark soliton. The non-fundamental semi-rational solutions are further classified into three subclasses: higher-order, multi- and mixed-type semi-rational solutions. The higher-order semi-rational solutions show the process of annihilation (production) of two or more lumps into (from) one dark soliton. The multi-semi-rational solutions describe N(N≥2) lumps annihilating into or producing from N-dark solitons. The mixed-type semi-rational solutions are a hybrid of higher-order semi-rational solutions and multi-semi-rational solutions. For the mixed-type semi-rational solutions, we demonstrate an interesting dynamical behaviour that is characterized by partial suppression or creation of lumps from the dark solitons.

  10. Modelling of deformation process for the layer of elastoviscoplastic media under surface action of periodic force of arbitrary type

    NASA Astrophysics Data System (ADS)

    Mikheyev, V. V.; Saveliev, S. V.

    2018-01-01

    Description of the deflected mode of different types of materials under external force plays a special role in a wide variety of applications, from construction mechanics to circuit engineering. This article considers the problem of plastic deformation of a layer of elastoviscoplastic soil under a periodic surface force. The problem was solved using a modified lumped-parameters approach that takes into account a close-to-real distribution of normal stress through the depth of the layer, along with the changes in local mechanical properties of the material that take place during plastic deformation. A special numerical algorithm was developed for computer modelling of the process. As an example application, the suggested algorithm was applied to the deformation of a layer of elastoviscoplastic material by a source of external lateral force with the parameters of a real soil-compaction process.
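A minimal lumped-parameters sketch: a single mass on a Kelvin-Voigt (spring-damper) element driven by a harmonic surface force, stepped with semi-implicit Euler. Parameters and names are illustrative, not from the paper:

```python
import numpy as np

def simulate_layer(m, c, k, force, t_end, dt=1e-3):
    """Explicit time stepping of one lumped mass on a spring-damper
    (Kelvin-Voigt) element under a time-varying surface force."""
    n = int(t_end / dt)
    x, v = 0.0, 0.0
    xs = np.empty(n)
    for i in range(n):
        t = i * dt
        a = (force(t) - c * v - k * x) / m     # Newton's second law
        v += a * dt                            # semi-implicit (symplectic) Euler
        x += v * dt
        xs[i] = x
    return xs

# Harmonic load below resonance: the damper keeps the response bounded.
xs = simulate_layer(m=100.0, c=2000.0, k=5e5,
                    force=lambda t: 1e3 * np.sin(30 * t), t_end=2.0)
print(np.isfinite(xs).all())  # → True
```

The paper's model replaces this single element with a depth-varying stack whose local stiffness and viscosity evolve during plastic deformation, but the stepping scheme is of this kind.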

  11. Evaluation of a spatially-distributed Thornthwaite water-balance model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lough, J.A.

    1993-03-01

    A small watershed of low relief in coastal New Hampshire was divided into hydrologic sub-areas in a geographic information system on the basis of soils, sub-basins and remotely-sensed landcover. Three variables were spatially modeled for input to 49 individual water-balances: available water content of the root zone, water input and potential evapotranspiration (PET). The individual balances were weight-summed to generate the aggregate watershed balance, which saw 9% (48-50 mm) less annual actual evapotranspiration (AET) compared to a lumped approach. Analysis of streamflow coefficients suggests that the spatially-distributed approach is more representative of the basin dynamics. Variation of PET by landcover accounted for the majority of the 9% AET reduction. Variation of soils played a near-negligible role. As a consequence, estimates of landcover proportions and annual PET by landcover are sufficient to correct a lumped water-balance in the Northeast. If remote sensing is used to estimate the landcover area, a sensor with a high spatial resolution is required. Finally, while the lower Thornthwaite model has conceptual limitations for distributed application, the upper Thornthwaite model is highly adaptable to distributed problems and may prove useful in many earth-system models.
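For reference, the standard unadjusted Thornthwaite PET formula that this model family is built on can be computed as follows (the day-length correction applied in practice is omitted):

```python
def thornthwaite_pet(monthly_temps_c):
    """Unadjusted Thornthwaite PET (mm/month): PET = 16 * (10*T/I)**a, with
    heat index I summed over months with T > 0."""
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0
            for t in monthly_temps_c]

# A temperate annual cycle: PET peaks in the warmest month.
temps = [-2, 0, 4, 9, 15, 20, 23, 22, 17, 11, 5, 0]
pet = thornthwaite_pet(temps)
print(pet.index(max(pet)) == temps.index(23))  # → True
```

In a distributed application this monthly PET is simply evaluated per sub-area, with landcover-dependent adjustment, before the balances are weight-summed.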

  12. Mapping Asian anthropogenic emissions of non-methane volatile organic compounds to multiple chemical mechanisms

    NASA Astrophysics Data System (ADS)

    Li, M.; Zhang, Q.; Streets, D. G.; He, K. B.; Cheng, Y. F.; Emmons, L. K.; Huo, H.; Kang, S. C.; Lu, Z.; Shao, M.; Su, H.; Yu, X.; Zhang, Y.

    2014-06-01

    An accurate speciation mapping of non-methane volatile organic compound (NMVOC) emissions has an important impact on the performance of chemical transport models (CTMs) in simulating ozone mixing ratios and secondary organic aerosols. Taking the INTEX-B Asian NMVOC emission inventory as a case study, we developed an improved speciation framework to generate model-ready anthropogenic NMVOC emissions for various gas-phase chemical mechanisms commonly used in CTMs, by using an explicit assignment approach and updated NMVOC profiles. NMVOC profiles were selected and aggregated from a wide range of new measurements and the SPECIATE database v.4.2. To reduce potential uncertainty from individual measurements, composite profiles were developed by grouping and averaging source profiles from the same category. The fractions of oxygenated volatile organic compounds (OVOC) were corrected during the compositing process for those profiles which used improper sampling and analyzing methods. Emissions of individual species were then lumped into species in the different chemical mechanisms used in CTMs by applying mechanism-dependent species mapping tables, which overcomes the weakness of inaccurate mapping in previous studies. Emission estimates for individual NMVOC species differ by between one and three orders of magnitude for some species when different sets of profiles are used, indicating that the source profile is the most important source of uncertainty in individual species emissions. However, those differences are diminished in lumped species as a result of the lumping in the chemical mechanisms. Gridded emissions for eight chemical mechanisms at 30 min × 30 min resolution, as well as the auxiliary data, are available at http://mic.greenresource.cn/intex-b2006. The framework proposed in this work can also be used to develop speciated NMVOC emissions for other regions.

  13. Mapping Asian anthropogenic emissions of non-methane volatile organic compounds to multiple chemical mechanisms

    NASA Astrophysics Data System (ADS)

    Li, M.; Zhang, Q.; Streets, D. G.; He, K. B.; Cheng, Y. F.; Emmons, L. K.; Huo, H.; Kang, S. C.; Lu, Z.; Shao, M.; Su, H.; Yu, X.; Zhang, Y.

    2013-12-01

    An accurate speciation mapping of non-methane volatile organic compound (NMVOC) emissions has an important impact on the performance of chemical transport models (CTMs) in simulating ozone mixing ratios and secondary organic aerosols. In this work, we developed an improved speciation framework to generate model-ready anthropogenic Asian NMVOC emissions for various gas-phase chemical mechanisms commonly used in CTMs by using an explicit assignment approach and updated NMVOC profiles, based on the total NMVOC emissions in the INTEX-B Asian inventory for the year 2006. NMVOC profiles were selected and aggregated from a wide range of new measurements and the SPECIATE database. To reduce potential uncertainty from individual measurements, composite profiles were developed by grouping and averaging source profiles from the same category. The fractions of oxygenated volatile organic compounds (OVOC) were corrected during the compositing process for those profiles which used improper sampling and analyzing methods. Emissions of individual species were then lumped into species in the different chemical mechanisms used in CTMs by applying mechanism-dependent species mapping tables, which overcomes the weakness of inaccurate mapping in previous studies. Gridded emissions for eight chemical mechanisms are developed at 30 min × 30 min resolution using various spatial proxies and are provided through the website http://mic.greenresource.cn/intex-b2006. Emission estimates for individual NMVOC species differ by between one and three orders of magnitude for some species when different sets of profiles are used, indicating that the source profile is the most important source of uncertainty in individual species emissions. However, those differences are diminished in lumped species as a result of the lumping in the chemical mechanisms.
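The species-to-mechanism lumping step can be sketched as a mapping-table aggregation; the species names, lumped classes and factors below are hypothetical illustrations, not entries from the inventory:

```python
def lump_emissions(species_emissions, mapping):
    """Aggregate per-species NMVOC emissions into mechanism species using a
    mechanism-dependent mapping table: species -> (lumped class, factor)."""
    lumped = {}
    for species, amount in species_emissions.items():
        target, factor = mapping.get(species, ("UNASSIGNED", 1.0))
        lumped[target] = lumped.get(target, 0.0) + factor * amount
    return lumped

# Hypothetical mapping into SAPRC-style lumped classes.
emis = {"n-butane": 10.0, "isopentane": 5.0, "ethene": 3.0}
mapping = {"n-butane": ("ALK4", 1.0),
           "isopentane": ("ALK4", 1.0),
           "ethene": ("ETHE", 1.0)}
print(lump_emissions(emis, mapping))  # → {'ALK4': 15.0, 'ETHE': 3.0}
```

Because many individual species fold into one lumped class, profile-level uncertainties partially cancel in the sum, which is why the lumped totals are far less sensitive to the profile choice than the individual species.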

  14. 23 CFR 140.920 - Lump sum payments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Lump sum payments. 140.920 Section 140.920 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PAYMENT PROCEDURES REIMBURSEMENT Reimbursement for Railroad Work § 140.920 Lump sum payments. Where approved by FHWA, pursuant to 23 CFR 646.216...

  15. 23 CFR 140.920 - Lump sum payments.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Lump sum payments. 140.920 Section 140.920 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PAYMENT PROCEDURES REIMBURSEMENT Reimbursement for Railroad Work § 140.920 Lump sum payments. Where approved by FHWA, pursuant to 23 CFR 646.216...

  16. 23 CFR 140.920 - Lump sum payments.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 23 Highways 1 2013-04-01 2013-04-01 false Lump sum payments. 140.920 Section 140.920 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PAYMENT PROCEDURES REIMBURSEMENT Reimbursement for Railroad Work § 140.920 Lump sum payments. Where approved by FHWA, pursuant to 23 CFR 646.216...

  17. 23 CFR 140.920 - Lump sum payments.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 23 Highways 1 2014-04-01 2014-04-01 false Lump sum payments. 140.920 Section 140.920 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PAYMENT PROCEDURES REIMBURSEMENT Reimbursement for Railroad Work § 140.920 Lump sum payments. Where approved by FHWA, pursuant to 23 CFR 646.216...

  18. Development of a novel three-dimensional deformable mirror with removable influence functions for high precision wavefront correction in adaptive optics system

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Zhou, Chenlu; Gong, Mali; Ma, Xingkun; Bian, Qi

    2016-07-01

    The deformable mirror (DM) is a widely used wavefront corrector in adaptive optics systems, especially in astronomical, imaging, and laser optics. A new DM structure, the 3D DM, is proposed; it has removable actuators and can correct different aberrations with different actuator arrangements. A 3D DM consists of several reflection mirrors, each with a single actuator and independent of the others. Two actuator arrangement algorithms are compared: the random disturbance algorithm (RDA) and the global arrangement algorithm (GAA). The correction effects of the two algorithms are analyzed and compared through numerical simulation. The simulation results show that a 3D DM with removable actuators can markedly improve the correction effect.

  19. A survey of provably correct fault-tolerant clock synchronization techniques

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1988-01-01

    Six provably correct fault-tolerant clock synchronization algorithms are examined. These algorithms are all presented in the same notation to permit easier comprehension and comparison. The advantages and disadvantages of the different techniques are examined and issues related to the implementation of these algorithms are discussed. The paper argues for the use of such algorithms in life-critical applications.
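One widely cited member of the family such surveys cover is the fault-tolerant midpoint convergence function (the Welch-Lynch style of averaging). A minimal sketch follows; the function name and the toy readings are ours, and a real implementation must also read remote clocks and bound read errors.

```python
def fault_tolerant_midpoint(readings, f):
    """Fault-tolerant midpoint: sort the clock readings, discard the f
    lowest and f highest (faulty values are either discarded or bracketed
    by correct ones), and return the midpoint of the remaining extremes.
    Tolerating f faults requires at least 3f + 1 clocks."""
    if len(readings) < 3 * f + 1:
        raise ValueError("need at least 3f + 1 clock readings")
    s = sorted(readings)
    trimmed = s[f:len(s) - f] if f > 0 else s
    return (trimmed[0] + trimmed[-1]) / 2.0

# One faulty clock (reading 50.0) among four: its value cannot drag the
# corrected time outside the range of the good clocks.
corrected = fault_tolerant_midpoint([10.0, 10.1, 9.9, 50.0], f=1)
```

The provable property is precisely this containment: the returned value always lies within the range of the correct clocks' readings.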

  20. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD method avoids direct detection of the co-phase error. This paper analyzed the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduced the basic principle of the SPGD algorithm, and discussed the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can resolve this trade-off appropriately. These results can provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
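The SPGD update is simple enough to sketch directly: perturb all control channels at once, measure the performance metric for the positive and negative perturbation, and step along the perturbation weighted by the metric difference. The toy quadratic metric and all gain/amplitude values below are illustrative, not the paper's.

```python
import random

def spgd_maximize(J, u, gain=0.5, amp=0.05, iters=2000, seed=0):
    """Stochastic parallel gradient descent (here: ascent) sketch.
    J: scalar performance metric (e.g. image sharpness);
    u: list of control values (e.g. piston/tilt actuator commands)."""
    rng = random.Random(seed)
    u = list(u)
    for _ in range(iters):
        # Bernoulli +/-amp perturbation applied to all channels in parallel
        delta = [amp * rng.choice((-1.0, 1.0)) for _ in u]
        j_plus = J([ui + di for ui, di in zip(u, delta)])
        j_minus = J([ui - di for ui, di in zip(u, delta)])
        dJ = j_plus - j_minus
        # step along the perturbation, weighted by the metric difference
        u = [ui + gain * dJ * di for ui, di in zip(u, delta)]
    return u

# Toy metric: peaks when the co-phase errors reach the (made-up) target.
target = [0.3, -0.2, 0.1]
metric = lambda u: -sum((ui - ti) ** 2 for ui, ti in zip(u, target))
u_opt = spgd_maximize(metric, [0.0, 0.0, 0.0])
```

Note that only the two metric evaluations per iteration are needed, which is why SPGD avoids measuring the co-phase error itself; the gain/amplitude trade-off the abstract describes is visible here as larger `gain` and `amp` giving bigger but noisier steps.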

  1. 20 CFR 222.44 - Other relationship determinations for lump-sum payments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... THE RAILROAD RETIREMENT ACT FAMILY RELATIONSHIPS Relationship as Parent, Grandchild, Brother or Sister § 222.44 Other relationship determinations for lump-sum payments. Other claimants will be considered to... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Other relationship determinations for lump...

  2. 29 CFR 4050.9 - Annuity or elective lump sum-living missing participant.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Annuity or elective lump sum-living missing participant... CORPORATION PLAN TERMINATIONS MISSING PARTICIPANTS § 4050.9 Annuity or elective lump sum—living missing participant. This section applies to a missing participant whose designated benefit was determined under...

  3. 29 CFR 4050.9 - Annuity or elective lump sum-living missing participant.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Annuity or elective lump sum-living missing participant... CORPORATION PLAN TERMINATIONS MISSING PARTICIPANTS § 4050.9 Annuity or elective lump sum—living missing participant. This section applies to a missing participant whose designated benefit was determined under...

  4. 29 CFR 4050.9 - Annuity or elective lump sum-living missing participant.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Annuity or elective lump sum-living missing participant... CORPORATION PLAN TERMINATIONS MISSING PARTICIPANTS § 4050.9 Annuity or elective lump sum—living missing participant. This section applies to a missing participant whose designated benefit was determined under...

  5. 29 CFR 4050.9 - Annuity or elective lump sum-living missing participant.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Annuity or elective lump sum-living missing participant... CORPORATION PLAN TERMINATIONS MISSING PARTICIPANTS § 4050.9 Annuity or elective lump sum—living missing participant. This section applies to a missing participant whose designated benefit was determined under...

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, L.; Zaslawsky, M.

    Two approaches for calculating soil-structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. The total lack of suitable guidelines for implementing the lumped mass method of calculating SSI leads to the conclusion that the finite element method is generally superior for these calculations.
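For reference, the lumped mass approach replaces the soil with discrete springs and dashpots. A minimal sketch using the classical static vertical spring constant for a rigid circular footing on an elastic half-space, k = 4Gr/(1 - nu), follows; all numeric values are illustrative, and the abstract's point is precisely that real applications need frequency-dependent springs and dashpots rather than this constant-k simplification.

```python
import math

def soil_spring_vertical(G, r, nu):
    """Classical static vertical spring constant for a rigid circular
    footing of radius r on a half-space with shear modulus G and
    Poisson ratio nu: k = 4*G*r / (1 - nu)."""
    return 4.0 * G * r / (1.0 - nu)

def natural_frequency_hz(m, k):
    """Undamped natural frequency of the lumped mass on the soil spring."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# Illustrative soil/footing/structure values (not from the report).
G, r, nu = 60e6, 5.0, 0.33   # Pa, m, dimensionless
m = 2.0e6                    # kg, lumped structure + footing mass
k = soil_spring_vertical(G, r, nu)
f_n = natural_frequency_hz(m, k)
```

A frequency-dependent treatment would make `k` (and an accompanying dashpot `c`) functions of the excitation frequency, which is the refinement the abstract says brings the lumped mass results closer to the finite element ones.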

  7. Lumped element filters for electronic warfare systems

    NASA Astrophysics Data System (ADS)

    Morgan, D.; Ragland, R.

    1986-02-01

    Among the increasing demands that future generations of electronic warfare (EW) systems must satisfy is a reduction in the size of the equipment. The present paper is concerned with lumped element filters, which can make a significant contribution to the downsizing of advanced EW systems. Lumped element filter design makes it possible to obtain very small package sizes by utilizing classical low-frequency inductive and capacitive components that are small compared to a wavelength. Cost-effective, temperature-stable devices can be obtained on the basis of new design techniques. Attention is given to aspects of design flexibility, an interdigital filter equivalent circuit diagram, conditions under which the use of lumped element filters can be recommended, construction techniques, a design example, and questions regarding the application of lumped element filters to EW processing systems.

  8. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanke, Monika, E-mail: monika@fizyka.umk.pl; Palikot, Ewa, E-mail: epalikot@doktorant.umk.pl; Adamowicz, Ludwik, E-mail: ludwik@email.arizona.edu

    2016-05-07

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H{sub 2} and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  9. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm.

    PubMed

    Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib

    2008-10-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, segmentation of high-CT-number objects using combined region- and boundary-based segmentation; and second, classification of the objects as bone or contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to regions classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing to match the resolution of the PET images. A piecewise calibration curve is then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions, depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in a clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed a substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregularly shaped regions containing contrast medium was developed to widen the applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in a clinical setting.
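The piecewise calibration curve the abstract mentions is commonly a "bilinear" mapping from CT numbers to 511 keV linear attenuation coefficients, with a shallower slope above a soft-tissue/bone breakpoint (bone attenuation rises more slowly at 511 keV than at CT energies). A sketch with illustrative, not scanner-calibrated, coefficients:

```python
def mu_511_from_hu(hu, mu_water=0.096, breakpoint=0.0, bone_slope=0.64):
    """Piecewise-linear ('bilinear') conversion of CT numbers (HU) to
    linear attenuation coefficients (cm^-1) at 511 keV.  A sketch of the
    kind of calibration curve used in CTAC; the coefficients here are
    illustrative, not the scanner-specific values a clinical system uses."""
    if hu <= breakpoint:
        # air/soft-tissue segment: scale water's mu linearly, clamp at 0
        return max(0.0, mu_water * (1.0 + hu / 1000.0))
    # bone segment: shallower slope at 511 keV
    return mu_water * (1.0 + bone_slope * hu / 1000.0)
```

This is also why misclassified contrast matters: pixels left on the bone segment get too large a μ at 511 keV, which is the overcorrection the SCC substitution step is designed to remove.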

  10. Cardiac MRI in mice at 9.4 Tesla with a transmit-receive surface coil and a cardiac-tailored intensity-correction algorithm.

    PubMed

    Sosnovik, David E; Dai, Guangping; Nahrendorf, Matthias; Rosen, Bruce R; Seethamraju, Ravi

    2007-08-01

    To evaluate the use of a transmit-receive surface (TRS) coil and a cardiac-tailored intensity-correction algorithm for cardiac MRI in mice at 9.4 Tesla (9.4T). Fast low-angle shot (FLASH) cines, with and without delays alternating with nutations for tailored excitation (DANTE) tagging, were acquired in 13 mice. An intensity-correction algorithm was developed to compensate for the sensitivity profile of the surface coil, and was tailored to account for the unique distribution of noise and flow artifacts in cardiac MR images. Image quality was extremely high and allowed fine structures such as trabeculations, valve cusps, and coronary arteries to be clearly visualized. The tag lines created with the surface coil were also sharp and clearly visible. Application of the intensity-correction algorithm improved signal intensity, tissue contrast, and image quality even further. Importantly, the cardiac-tailored properties of the correction algorithm prevented noise and flow artifacts from being significantly amplified. The feasibility and value of cardiac MRI in mice with a TRS coil has been demonstrated. In addition, a cardiac-tailored intensity-correction algorithm has been developed and shown to improve image quality even further. The use of these techniques could produce significant potential benefits over a broad range of scanners, coil configurations, and field strengths. (c) 2007 Wiley-Liss, Inc.
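The surface-coil bias field is smooth compared with anatomy, so a generic (non-cardiac-tailored) intensity correction can be sketched as dividing the image by a low-pass estimate of the sensitivity profile, with a floor on the estimate so dark background noise is not amplified; the floor is our simplified stand-in for the paper's noise/flow-aware handling, and all parameters are illustrative.

```python
import numpy as np

def smooth(img, k=15):
    """Separable box-filter low-pass: a crude estimate of the smooth
    coil-sensitivity profile underlying the image."""
    ker = np.ones(k) / k
    f = lambda x: np.convolve(x, ker, mode="same")
    return np.apply_along_axis(f, 1, np.apply_along_axis(f, 0, img))

def intensity_correct(img, k=15, floor=0.05):
    """Divide out the estimated sensitivity; the floor caps the gain so
    dark background regions are not blown up."""
    sens = smooth(img, k)
    sens = np.maximum(sens, floor * sens.max())
    out = img / sens
    return out / out.max()

# Uniform "tissue" shaded by a linear coil falloff: correction flattens it.
falloff = np.linspace(1.0, 0.3, 64)
shaded = np.outer(falloff, np.ones(64))
flat = intensity_correct(shaded)
```

The cardiac-tailored algorithm in the abstract goes further by treating noise and flow artifacts explicitly, which a plain division like this would amplify.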

  11. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
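The first, coarse step the abstract describes (a spatial 2-D cross-correlation that brings the misalignment down to a pixel or two) can be sketched with FFTs. The toy images and the circular-shift assumption are ours; real PD images would also need windowing and the subpixel refinement the algorithm replaces with tip-tilt terms in the OTF.

```python
import numpy as np

def integer_offset(ref, img):
    """Integer-pixel shift of `img` relative to `ref`, from the peak of
    their circular cross-correlation computed via FFTs."""
    xc = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # fold peaks in the upper half of each axis back to negative shifts
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, xc.shape))

# Toy in-focus/out-of-focus frames: one is the other rolled by (3, -2).
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -2), axis=(0, 1))
```

Recovering the offset and rolling one frame back is exactly the "revise the offset to 1 or 2 pixels" stage; the remaining subpixel residual is what the tip-tilt terms absorb.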

  12. Hormones, Women and Breast Cancer

    MedlinePlus

    ... sure that it is benign (not cancer). These tests can include • A mammogram • A breast ultrasound • A sample of cells from the lump (called a fine needle aspirate) • A sample of a piece of tissue from the lump (called a core biopsy) Possible Symptoms of Breast Cancer • A lump • ...

  13. Lump Solutions and Interaction Phenomenon for (2+1)-Dimensional Sawada-Kotera Equation

    NASA Astrophysics Data System (ADS)

    Huang, Li-Li; Chen, Yong

    2017-05-01

    In this paper, a class of lump solutions to the (2+1)-dimensional Sawada-Kotera equation is studied by searching for positive quadratic function solutions to the associated bilinear equation. To guarantee rational localization and analyticity of the lumps, some sufficient and necessary conditions are presented on the parameters involved in the solutions. Then, a completely non-elastic interaction between a lump and a stripe of the (2+1)-dimensional Sawada-Kotera equation is obtained, which shows a lump solution is drowned or swallowed by a stripe soliton. Finally, 2-dimensional curves, 3-dimensional plots and density plots with particular choices of the involved parameters are presented to show the dynamic characteristics of the obtained lump and interaction solutions. Supported by the Global Change Research Program of China under Grant No. 2015CB953904, National Natural Science Foundation of China under Grant Nos. 11675054 and 11435005, Outstanding Doctoral Dissertation Cultivation Plan of Action under Grant No. YB2016039, and Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things under Grant No. ZF1213

  14. Algorithm Updates for the Fourth SeaWiFS Data Reprocessing

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford, B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Robinson, Wayne D.; Feldman, Gene Carl; Bailey, Sean W.

    2003-01-01

    The efforts to improve the data quality for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data products have continued, following the third reprocessing of the global data set in May 2000. Analyses have been ongoing to address all aspects of the processing algorithms, particularly the calibration methodologies, atmospheric correction, and data flagging and masking. All proposed changes were subjected to rigorous testing, evaluation, and validation. The results of these activities culminated in the fourth reprocessing, which was completed in July 2002. The algorithm changes implemented for this reprocessing are described in the chapters of this volume. Chapter 1 presents an overview of the activities leading up to the fourth reprocessing and summarizes the effects of the changes. Chapter 2 describes the modifications to the on-orbit calibration, specifically the focal plane temperature correction and the temporal dependence. Chapter 3 describes the changes to the vicarious calibration, including the stray light correction to the Marine Optical Buoy (MOBY) data and improved data screening procedures. Chapter 4 describes improvements to the near-infrared (NIR) band correction algorithm. Chapter 5 describes changes to the atmospheric correction and the oceanic property retrieval algorithms, including out-of-band corrections, NIR noise reduction, and handling of unusual conditions. Chapter 6 describes various changes to the flags and masks, to increase the number of valid retrievals, improve the detection of the flag conditions, and add new flags. Chapter 7 describes modifications to the level-1a and level-3 algorithms, to improve the navigation accuracy, correct certain types of spacecraft time anomalies, and correct a binning logic error. Chapter 8 describes the algorithm used to generate the SeaWiFS photosynthetically available radiation (PAR) product. Chapter 9 describes a coupled ocean-atmosphere model, which is used in one of the changes described in Chapter 4. Finally, Chapter 10 describes a comparison of results from the third and fourth reprocessings along the U.S. Northeast coast.

  15. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments was conducted in the Great Lakes to evaluate the application of the Nimbus-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite-measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric correction algorithm developed compared favorably with existing algorithms and was the only one found to adequately predict the radiance variations in the 670 nm band; it was designed to extract the needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  16. Age group estimation in free-ranging African elephants based on acoustic cues of low-frequency rumbles

    PubMed Central

    Stoeger, Angela S.; Zeppelzauer, Matthias; Baotic, Anton

    2015-01-01

    Animal vocal signals are increasingly used to monitor wildlife populations and to obtain estimates of species occurrence and abundance. In the future, acoustic monitoring should function not only to detect animals, but also to extract detailed information about populations by discriminating sexes, age groups, social or kin groups, and potentially individuals. Here we show that it is possible to estimate age groups of African elephants (Loxodonta africana) based on acoustic parameters extracted from rumbles recorded under field conditions in a National Park in South Africa. Statistical models reached up to 70 % correct classification to four age groups (infants, calves, juveniles, adults) and 95 % correct classification when categorising into two groups (infants/calves lumped into one group versus adults). The models revealed that parameters representing absolute frequency values have the most discriminative power. Comparable classification results were obtained by fully automated classification of rumbles by high-dimensional features that represent the entire spectral envelope, such as MFCC (75 % correct classification) and GFCC (74 % correct classification). The reported results and methods provide the scientific foundation for a future system that could potentially automatically estimate the demography of an acoustically monitored elephant group or population. PMID:25821348

  17. The SEASAT altimeter wet tropospheric range correction revisited

    NASA Technical Reports Server (NTRS)

    Tapley, D. B.; Lundberg, J. B.; Born, G. H.

    1984-01-01

    An expanded set of radiosonde observations was used to calculate the wet tropospheric range correction for the brightness temperature measurements of the SEASAT scanning multichannel microwave radiometer (SMMR). The accuracy of the conventional algorithm for wet tropospheric range correction was evaluated. On the basis of the expanded observational data set, the algorithm was found to have a bias of about 1.0 cm and a standard deviation of 2.8 cm. In order to improve the algorithm, the exact linear, quadratic, and logarithmic relationships between brightness temperatures and range corrections were determined. Various combinations of measurement parameters were used to reduce the standard deviation between SEASAT SMMR and radiosonde observations to about 2.1 cm. The performance of various range correction formulas is compared in a table.
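The refitting exercise described above amounts to regressing radiosonde-derived corrections on radiometer brightness temperatures and checking the residual standard deviation. A minimal sketch with synthetic stand-in data follows; the channel set, coefficients, and noise level are illustrative, not the SMMR values.

```python
import numpy as np

# Synthetic stand-ins: brightness temperatures (K) for two channels and
# the "radiosonde truth" wet tropospheric corrections (cm).
rng = np.random.default_rng(1)
t18 = rng.uniform(150.0, 220.0, 200)    # 18 GHz channel
t21 = rng.uniform(160.0, 250.0, 200)    # 21 GHz water-vapour channel
truth = -40.0 + 0.05 * t18 + 0.20 * t21 + rng.normal(0.0, 2.0, 200)

# Linear model dr = a0 + a1*T18 + a2*T21, fitted by least squares.
A = np.column_stack([np.ones_like(t18), t18, t21])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
resid_std = np.std(truth - A @ coef)
```

Adding quadratic or logarithmic terms to the design matrix `A` is how the alternative formulas the abstract compares would be fitted and scored on the same residual statistic.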

  18. Evaluation and analysis of Seasat-A scanning multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    The brightness temperature data produced by the SMMR final Antenna Pattern Correction (APC) algorithm are discussed. The evaluation consisted of: (1) a direct comparison of the outputs of the final and interim APC algorithms; and (2) an analysis of a possible relationship between observed cross-track gradients in the interim brightness temperatures and the asymmetry in the antenna temperature data. Results indicate a bias between the brightness temperatures produced by the final and interim APC algorithms.

  19. Atmospheric correction of the ocean color observations of the medium resolution imaging spectrometer (MERIS)

    NASA Astrophysics Data System (ADS)

    Antoine, David; Morel, Andre

    1997-02-01

    An algorithm is proposed for the atmospheric correction of the ocean color observations by the MERIS instrument. The principle of the algorithm, which accounts for all multiple scattering effects, is presented. The algorithm is then tested, and its accuracy assessed in terms of errors in the retrieved marine reflectances.

  20. 45 CFR 158.241 - Form of rebate.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... check or lump-sum reimbursement using the same method that was used for payment, such as credit card or... in the form of a premium credit, lump-sum check, or, if an enrollee paid the premium using a credit card or direct debit, by lump-sum reimbursement to the account used to pay the premium. (2) Any rebate...

  1. Affects of Anxiety and Depression on Health-Related Quality of Life among Patients with Benign Breast Lumps Diagnosed via Ultrasonography in China.

    PubMed

    Lou, Zhe; Li, Yinyan; Yang, Yilong; Wang, Lie; Yang, Jun

    2015-08-28

    There is a high incidence of benign breast lumps among women, and these lumps may lead to physical and psychological problems. This study aims to evaluate anxiety and depressive symptoms among patients with benign breast lumps diagnosed via ultrasonography and investigate their impacts on health-related quality of life (HRQOL). A cross-sectional survey was conducted in Shenyang, China, from January to November 2013. Data were collected with self-administered questionnaires, including the Zung Self-Rating Anxiety Scale (SAS), the Center for Epidemiologic Studies Depression Scale (CES-D), and the 36-item Short-Form Health Survey (SF-36), together with demographic characteristics, from patients of the Department of Breast Surgery of the First Affiliated Hospital of China Medical University. Hierarchical multiple regression (HMR) analysis was performed to explore the effects of anxiety and depression on HRQOL. The overall prevalences of anxiety (SAS score ≥ 40) and depression (CES-D score ≥ 16) were 40.2% and 62.0%, respectively, and 37.5% of the participants had both of these psychological symptoms. The means and standard deviations of the SF-36 physical component summary (PCS) and mental component summary (MCS) scores were 75.42 (15.22) and 68.70 (17.71), respectively. Anxiety and depressive symptoms were significantly negatively associated with the HRQOL of patients with benign breast lumps diagnosed via ultrasonography. Women with benign breast lumps diagnosed via ultrasonography in China experienced relatively high levels of anxiety and depressive symptoms, and these symptoms had significant negative impacts on both the mental and physical quality of life (QOL) of these women. Beyond the necessary clinical treatment procedures, psychological guidance and detailed explanations of the disease should be offered to alleviate the anxiety and depressive symptoms and enhance the HRQOL of patients with benign breast lumps.
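The hierarchical multiple regression step can be sketched as fitting nested linear models and examining the increment in R² when the psychological scores are entered after the covariates. The simulated scores and effect sizes below are stand-ins, not the study's data.

```python
import numpy as np

# Simulated stand-in data: one demographic covariate plus anxiety and
# depression scores predicting an HRQOL score.
rng = np.random.default_rng(7)
n = 300
age = rng.uniform(20, 60, n)
anxiety = rng.normal(45, 10, n)
depress = rng.normal(18, 8, n)
hrqol = 90 - 0.1 * age - 0.4 * anxiety - 0.5 * depress + rng.normal(0, 5, n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_step1 = r_squared([age], hrqol)                    # covariates only
r2_step2 = r_squared([age, anxiety, depress], hrqol)  # + psychological scores
delta_r2 = r2_step2 - r2_step1
```

The quantity of interest in HMR is `delta_r2`: the variance in HRQOL explained by anxiety and depression over and above the covariates entered in the first block.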

  2. A model for flexi-bar to evaluate intervertebral disc and muscle forces in exercises.

    PubMed

    Abdollahi, Masoud; Nikkhoo, Mohammad; Ashouri, Sajad; Asghari, Mohsen; Parnianpour, Mohamad; Khalaf, Kinda

    2016-10-01

    This study developed and validated a lumped parameter model for the FLEXI-BAR, a popular training instrument that provides vibration stimulation. The model, which can be used in conjunction with musculoskeletal modeling software for quantitative biomechanical analyses, consists of 3 rigid segments, 2 torsional springs, and 2 torsional dashpots. Two different sets of experiments were conducted to determine the model's key parameters, including the stiffness of the springs and the damping ratio of the dashpots. In the first set of experiments, the free vibration of the FLEXI-BAR with an initial displacement at its end was considered, while in the second set, forced oscillations of the bar were studied. The properties of the mechanical elements in the lumped parameter model were derived using a nonlinear optimization algorithm that minimized the difference between the model's predictions and the experimental data. The results showed that the model is valid (8% error) and can be used for simulating exercises with the FLEXI-BAR for excitations in the range of the natural frequency. The model was then validated in combination with the AnyBody musculoskeletal modeling software, where lumbar disc, spinal muscle, and hand muscle forces were determined during different FLEXI-BAR exercise simulations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
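The parameter-identification idea (choose stiffness and damping so the model's predicted vibration matches the measurement) can be sketched on a single mass-spring-damper. The paper's model has 3 segments, 2 springs, and 2 dashpots and uses a proper nonlinear optimizer; this stand-in uses a coarse grid search and made-up numbers.

```python
# Minimal sketch of fitting lumped model parameters to "measured" free
# vibration.  All values are illustrative, not the FLEXI-BAR's.
def simulate(k, c, m=1.0, x0=1.0, dt=0.01, n=400):
    """Free vibration of a mass-spring-damper, semi-implicit Euler."""
    x, v, out = x0, 0.0, []
    for _ in range(n):
        v += dt * (-(k * x + c * v) / m)
        x += dt * v
        out.append(x)
    return out

# Synthetic "experiment": the true parameters are k = 25, c = 0.8.
measured = simulate(k=25.0, c=0.8)

# Coarse grid search minimizing the sum of squared prediction errors,
# standing in for the paper's nonlinear optimization algorithm.
best = min(
    ((k, c) for k in [15.0, 20.0, 25.0, 30.0] for c in [0.4, 0.8, 1.2]),
    key=lambda p: sum((a - b) ** 2 for a, b in zip(simulate(*p), measured)),
)
```

A real identification would replace the grid with a gradient-based or simplex optimizer and match both the free-vibration and forced-oscillation data sets, as the abstract describes.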

  3. 29 CFR 4044.73 - Lump sums and other alternative forms of distribution in lieu of annuities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distribution is the present value of the normal form of benefit provided by the plan payable at normal... 29 Labor 9 2010-07-01 2010-07-01 false Lump sums and other alternative forms of distribution in... Benefits and Assets Non-Trusteed Plans § 4044.73 Lump sums and other alternative forms of distribution in...

  4. 20 CFR 222.31 - Relationship as child for annuity and lump-sum payment purposes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... that person is— (1) The natural or legally adopted child of the employee (see § 222.33); or (2) The... equitably adopted child of the employee. (b) Lump-sum payment claimant. A claimant for a lump-sum payment... of the employee; (2) A child legally adopted by the employee (this does not include any child adopted...

  5. Development of the atmospheric correction algorithm for the next generation geostationary ocean color sensor data

    NASA Astrophysics Data System (ADS)

    Lee, Kwon-Ho; Kim, Wonkook

    2017-04-01

    The Geostationary Ocean Color Imager-II (GOCI-II) is designed to focus on ocean environmental monitoring with better spatial resolution (250 m for local coverage and 1 km for the full disk) and spectral resolution (13 bands) than the current operational GOCI-I mission, and will be launched in 2018. This study presents the algorithm currently being developed for atmospheric correction and retrieval of surface reflectance over land, optimized for the sensor's characteristics. We first derived top-of-atmosphere radiances in the 13 GOCI-II bands as proxy data from a parameterized radiative transfer code. Based on the proxy data, the algorithm performs cloud masking, gas absorption correction, aerosol inversion, and computation of the aerosol extinction correction. The retrieved surface reflectances are evaluated against the MODIS level 2 surface reflectance product (MOD09). For the initial test period, the algorithm gave errors within 0.05 compared to MOD09. Further work will fully implement the algorithm in the GOCI-II Ground Segment system (G2GS) algorithm development environment. The atmospherically corrected surface reflectance product will be a standard GOCI-II product after launch.

  6. Decay of Kadomtsev-Petviashvili lumps in dissipative media

    NASA Astrophysics Data System (ADS)

    Clarke, S.; Gorshkov, K.; Grimshaw, R.; Stepanyants, Y.

    2018-03-01

    The decay of Kadomtsev-Petviashvili lumps is considered for several typical forms of dissipation: Rayleigh dissipation, Reynolds dissipation, Landau damping, Chezy bottom friction, viscous dissipation in the laminar boundary layer, and radiative losses caused by large-scale dispersion. It is shown that the straight-line motion of lumps is unstable under the influence of dissipation. The lump trajectories are calculated for the two most typical models of dissipation, the Rayleigh and Reynolds dissipations. A comparison of analytical results obtained within the framework of asymptotic theory with direct numerical calculations of the Kadomtsev-Petviashvili equation is presented. Good agreement between the theoretical and numerical results is obtained.

  7. Analysis and synthesis of distributed-lumped-active networks by digital computer

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.

  8. Single lump breast surface stress assessment study

    NASA Astrophysics Data System (ADS)

    Vairavan, R.; Ong, N. R.; Sauli, Z.; Kirtsaeng, S.; Sakuntasathien, S.; Paitong, P.; Alcain, J. B.; Lai, S. L.; Retnasamy, V.

    2017-09-01

    Breast cancer is one of the commonest cancers diagnosed among women around the world. Simulation approaches have been utilized to study, characterize, and improve detection methods for breast cancer. However, minimal simulation work has been done to evaluate the surface stress of the breast with lumps. Thus, in this work, simulation analysis was utilized to evaluate and assess the breast surface stress due to the presence of a lump within the internal structure of the breast. The simulation was conducted using the Elmer software. Simulation results have confirmed that the presence of a lump within the breast causes stress on the skin surface of the breast.

  9. Ultrathin lightweight plate-type acoustic metamaterials with positive lumped coupling resonant

    NASA Astrophysics Data System (ADS)

    Ma, Fuyin; Huang, Meng; Wu, Jiu Hui

    2017-01-01

    The experimental realization and theoretical understanding of two-dimensional, multiple-cell lumped ultrathin lightweight plate-type acoustic metamaterial structures are presented, wherein excellent broadband sound attenuation at low frequencies is realized by employing a lumped-element coupling resonant effect. The basic unit cell of the metamaterial consists of an ultrathin stiff nylon plate clamped by two elastic ethylene-vinyl acetate copolymer or acrylonitrile butadiene styrene frames. Strong sound attenuation (up to nearly 99%) at low frequencies is experimentally revealed by the precisely designed metamaterials, and the physical mechanism of the attenuation can be explicitly understood using finite element simulations. For the designed samples, the lumped effect of the frame compliance leads to a coupling flexural resonance at designable low frequencies. As a result, the whole composite structure becomes strongly anti-resonant with the incident sound waves, yielding higher sound attenuation; i.e., the lumped resonant effect has been effectively reversed from negative to positive for sound attenuation, and the acoustic metamaterial design can be extended to lumped elements containing multiple cells rather than being confined to a single cell.

  10. Application of Biologically Based Lumping To Investigate the Toxicokinetic Interactions of a Complex Gasoline Mixture.

    PubMed

    Jasper, Micah N; Martin, Sheppard A; Oshiro, Wendy M; Ford, Jermaine; Bushnell, Philip J; El-Masri, Hisham

    2016-03-15

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate the performance of our PBPK model and chemical lumping method. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course toxicokinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 nontarget chemicals. The same biologically based lumping approach can be used to simplify any complex mixture with tens, hundreds, or thousands of constituents.
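    The abstract does not give the lumping criteria in detail, so the following is only a minimal sketch of the general idea: standardize a few biologically relevant PBPK parameters per chemical and group similar chemicals with a small, deterministic k-means. The parameter names and values are hypothetical illustrations, not data from the study.

    ```python
    import numpy as np

    def lump_chemicals(params, n_lumps, n_iter=50):
        """Sketch of parameter-based chemical lumping: standardize each
        PBPK-relevant parameter, then cluster the chemicals with a small
        k-means using a deterministic farthest-point initialization."""
        X = (params - params.mean(axis=0)) / params.std(axis=0)
        centers = [X[0]]
        for _ in range(n_lumps - 1):  # farthest-point seeding
            d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
            centers.append(X[np.argmax(d)])
        centers = np.array(centers)
        for _ in range(n_iter):
            labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for k in range(n_lumps):
                if np.any(labels == k):
                    centers[k] = X[labels == k].mean(axis=0)
        return labels

    # hypothetical (blood:air partition coefficient, Vmax/Km) per chemical
    params = np.array([[18.0, 0.9], [20.0, 1.1], [2.0, 0.1],
                       [2.5, 0.2], [55.0, 3.0], [60.0, 3.2]])
    labels = lump_chemicals(params, n_lumps=3)
    ```

    Chemicals whose kinetic parameters are close end up in the same lump and can then share a compartment in the PBPK model.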

  11. Lens correction algorithm based on the see-saw diagram to correct Seidel aberrations employing aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Rosete-Aguilar, Martha

    2000-06-01

    In this paper a lens correction algorithm based on the see-saw diagram developed by Burch is described. The see-saw diagram describes image correction in rotationally symmetric systems over a finite field of view by means of aspheric surfaces. The algorithm is applied to the design of some basic telescopic configurations, such as the classical Cassegrain, the Dall-Kirkham, the Pressman-Camichel, and the Ritchey-Chretien telescopes, in order to present a physically visualizable concept of image correction for optical systems that employ aspheric surfaces. By using the see-saw method the student can visualize the different possible configurations of such telescopes as well as their performance, and can also understand that it is not always possible to correct more primary aberrations by aspherizing more surfaces.

  12. Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter.

    PubMed

    Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo

    2017-02-01

    In uncooled long-wave IR camera systems, the temperature of the focal plane array (FPA) varies with the environmental temperature as well as with operating time. The spatial nonuniformity of the FPA, which is partly determined by the FPA temperature, changes noticeably as well, degrading image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are used to update the current offset parameters via temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms [the minimizing-the-sum-of-the-squares-of-errors algorithm (MSSE) and the template-based solution algorithm (TBS)] using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method can quickly track the response drift of the detector units when the FPA temperature changes. The correction quality of the proposed algorithm is as good as that of MSSE, while its processing time is as short as that of TBS, making it well suited for real-time operation while maintaining a high correction effect.
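    The correction described above can be sketched as follows: a per-pixel gain from a two-point calibration, plus a per-pixel offset linearly interpolated between offset tables stored at discrete FPA temperatures. All array sizes, temperatures, and values here are illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def correct_frame(raw, gain, offset_tables, table_temps, t_fpa):
        """Shutterless NUC sketch: apply per-pixel gains and an offset
        interpolated between tables recorded at known FPA temperatures."""
        temps = np.asarray(table_temps, dtype=float)
        # clamp the current FPA temperature to the calibrated range
        t = float(np.clip(t_fpa, temps[0], temps[-1]))
        i = int(np.clip(np.searchsorted(temps, t), 1, len(temps) - 1))
        w = (t - temps[i - 1]) / (temps[i] - temps[i - 1])
        offset = (1.0 - w) * offset_tables[i - 1] + w * offset_tables[i]
        return gain * raw + offset

    # hypothetical 2x2 detector with offset tables measured at 20 C and 30 C
    gain = np.ones((2, 2))
    tables = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
    frame = correct_frame(np.zeros((2, 2)), gain, tables, [20.0, 30.0], 25.0)
    ```

    At an FPA temperature midway between two tables, each pixel receives the midpoint of its two stored offsets, which is the interpolation behavior the abstract describes.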

  13. Modeling and new equipment definition for the vibration isolation box equipment system

    NASA Technical Reports Server (NTRS)

    Sani, Robert L.

    1993-01-01

    Our MSAD-funded research project provides numerical modeling support for VIBES (Vibration Isolation Box Experiment System), an IML2 flight experiment being built by the Japanese research team of Dr. H. Azuma of the Japanese National Aerospace Laboratory. During this reporting period, the following was accomplished. A semi-consistent mass finite element projection algorithm for 2D and 3D Boussinesq flows was implemented on Sun, HP, and Cray platforms. The algorithm has better phase-speed accuracy than similar finite difference or lumped-mass finite element algorithms, an attribute which is essential for addressing realistic g-jitter effects as well as convectively dominated transient systems. The projection algorithm was benchmarked against solutions generated with the commercial code FIDAP and appears to be accurate as well as computationally efficient. Optimization and potential parallelization studies are underway; our implementation to date has focused on execution of the basic algorithm with at most a concern for vectorization. The initial time-varying gravity Boussinesq flow simulation is being set up: the mesh is being designed and the input file is being generated. Some preliminary 'small mesh' cases will be attempted on our HP9000/735 while our request to MSAD for supercomputing resources is being addressed. The Japanese research team for VIBES was visited, the current setup and status of the physical experiment were obtained, and an ongoing e-mail communication link was established.

  14. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA to address the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation results showed that the proposed algorithms realized real-time display of 1280x720@60 Hz HD video and, using the X-rite color checker as the standard colors, reduced the average color difference by about 30% compared with that before color correction.
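    The polynomial-regression color correction can be sketched in a few lines: expand each measured RGB triple into polynomial terms and solve a least-squares system mapping them onto reference colors from a color checker. This is a generic sketch of the technique, not the paper's FPGA design; the 0.85/0.05 distortion below is a synthetic example.

    ```python
    import numpy as np

    def _poly_terms(rgb):
        """Second-order polynomial expansion of RGB values."""
        r, g, b = rgb.T
        return np.stack([np.ones_like(r), r, g, b, r * g, r * b, g * b,
                         r * r, g * g, b * b], axis=1)

    def fit_color_correction(measured, reference):
        """Least-squares fit of a matrix mapping expanded measured
        colors onto the reference (ground-truth) colors."""
        M, *_ = np.linalg.lstsq(_poly_terms(measured), reference, rcond=None)
        return M

    def apply_color_correction(M, rgb):
        return _poly_terms(rgb) @ M

    # synthetic 24-patch color checker with a known affine distortion
    rng = np.random.default_rng(1)
    measured = rng.random((24, 3))
    reference = 0.85 * measured + 0.05
    M = fit_color_correction(measured, reference)
    ```

    Once `M` is fitted offline, applying the correction is a fixed matrix multiply per pixel, which is why this style of correction maps well onto FPGA hardware.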

  15. Energy shadowing correction of ultrasonic pulse-echo records by digital signal processing

    NASA Technical Reports Server (NTRS)

    Kishoni, D.; Heyman, J. S.

    1986-01-01

    Attention is given to a numerical algorithm that, via signal processing, enables the dynamic correction of the shadowing effect of reflections on ultrasonic displays. The algorithm was applied to experimental data from graphite-epoxy composite material immersed in a water bath. It is concluded that images of material defects with the shadowing corrections allow for a more quantitative interpretation of the material state. It is noted that the proposed algorithm is fast and simple enough to be adopted for real time applications in industry.
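    The abstract does not spell out the algorithm, so the following is only a minimal sketch of the energy-shadowing idea it describes: each interface that produces an echo removes energy from the transmitted pulse, so echoes from deeper reflectors appear weaker than their true reflection coefficients. Compensating means dividing each echo by the accumulated transmission and updating that transmission as echoes are processed. The two-way amplitude model `(1 - r^2)` per interface is an assumption for illustration.

    ```python
    def correct_shadowing(echo_amplitudes):
        """Illustrative energy-shadowing compensation: recover reflection
        coefficients from echo amplitudes shadowed by earlier reflectors."""
        corrected = []
        transmission = 1.0  # accumulated two-way amplitude transmission
        for a in echo_amplitudes:
            r = a / transmission            # estimated true reflection coeff.
            corrected.append(r)
            transmission *= (1.0 - r * r)   # energy removed by this echo
        return corrected

    # two equal interfaces: the second echo is measured weaker (0.375)
    # because the first reflection already removed energy
    recovered = correct_shadowing([0.5, 0.375])
    ```

    The sequential update is cheap (one multiply-divide per echo), consistent with the abstract's remark that the algorithm is fast enough for real-time industrial use.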

  16. Characterization of high order spatial discretizations and lumping techniques for discontinuous finite element SN transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, P. G.; Ragusa, J. C.; Morel, J. E.

    2013-07-01

    We examine several possible methods of mass matrix lumping for discontinuous finite element discrete ordinates transport using a Lagrange interpolatory polynomial trial space. Though positive outflow angular flux is guaranteed with traditional mass matrix lumping in a purely absorbing 1-D slab cell for the linear discontinuous approximation, we show that when used with higher-degree interpolatory polynomial trial spaces, traditional lumping does not yield strictly positive outflows and does not increase in accuracy with an increase in trial space polynomial degree. As an alternative, we examine methods which are 'self-lumping'. Self-lumping methods yield diagonal mass matrices by using numerical quadrature restricted to the Lagrange interpolatory points. Using equally-spaced interpolatory points, self-lumping is achieved through the use of closed Newton-Cotes formulas, resulting in strictly positive outflows in pure absorbers for odd-degree polynomials in 1-D slab geometry. By changing the interpolatory points from the traditional equally-spaced points to the quadrature points of the Gauss-Legendre or Lobatto-Gauss-Legendre quadratures, it is possible to generate solution representations with a diagonal mass matrix and a strictly positive outflow for any degree of polynomial solution representation in a pure absorber medium in 1-D slab geometry. Further, there is no inherent limit to the local truncation error order of accuracy when using interpolatory points that correspond to the quadrature points of high-order numerical quadrature schemes.
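    The self-lumping idea can be demonstrated numerically. If the quadrature points coincide with the Lagrange interpolatory points, then `l_i(x_q) = delta_iq` and the mass matrix is diagonal by construction; and because an n-point Gauss-Legendre rule integrates the degree-2(n-1) basis products exactly, choosing Gauss-Legendre points as the interpolatory points makes the lumped matrix equal to the exactly integrated one. A sketch for a degree-3 (4-node) trial space on [-1, 1]:

    ```python
    import numpy as np

    def lagrange_basis(nodes, i, x):
        """i-th Lagrange interpolatory basis polynomial evaluated at x."""
        v = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                v *= (x - xj) / (nodes[i] - xj)
        return v

    def mass_matrix(nodes, quad_x, quad_w):
        """M_ij = sum_q w_q l_i(x_q) l_j(x_q), quadrature on [-1, 1]."""
        n = len(nodes)
        M = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                M[i, j] = sum(w * lagrange_basis(nodes, i, x)
                                * lagrange_basis(nodes, j, x)
                              for x, w in zip(quad_x, quad_w))
        return M

    # degree-3 trial space interpolating at the 4 Gauss-Legendre points
    gl_x, gl_w = np.polynomial.legendre.leggauss(4)
    # self-lumping: quadrature restricted to the interpolatory points
    M_lumped = mass_matrix(gl_x, gl_x, gl_w)
    # exact integration (an 8-point rule is exact for the degree-6 integrand)
    M_exact = mass_matrix(gl_x, *np.polynomial.legendre.leggauss(8))
    ```

    `M_lumped` is diagonal with the quadrature weights on the diagonal, and for Gauss-Legendre interpolatory points it equals `M_exact`, illustrating why this choice incurs no inherent accuracy penalty; with equally-spaced nodes, by contrast, the exactly integrated mass matrix is not diagonal.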

  17. Analysing grouping of nucleotides in DNA sequences using lumped processes constructed from Markov chains.

    PubMed

    Guédon, Yann; d'Aubenton-Carafa, Yves; Thermes, Claude

    2006-03-01

    The most commonly used models for analysing local dependencies in DNA sequences are (high-order) Markov chains. Incorporating knowledge about the possible grouping of the nucleotides makes it possible to define dedicated sub-classes of Markov chains. The problem of formulating lumpability hypotheses for a Markov chain is therefore addressed. In the classical approach to lumpability, this problem can be formulated as the determination of an appropriate state space (smaller than the original state space) such that the lumped chain defined on this state space retains the Markov property. We propose a different perspective on lumpability where the state space is fixed and the partitioning of this state space is represented by a one-to-many probabilistic function within a two-level stochastic process. Three nested classes of lumped processes can be defined in this way as sub-classes of first-order Markov chains. These lumped processes enable parsimonious reparameterizations of Markov chains that help to reveal relevant partitions of the state space. Characterizations of the lumped processes on the original transition probability matrix are derived. Different model selection methods, relying either on hypothesis testing or on penalized log-likelihood criteria, are presented, as are extensions to lumped processes constructed from high-order Markov chains. The relevance of the proposed approach to lumpability is illustrated by the analysis of DNA sequences. In particular, the use of lumped processes makes it possible to highlight differences between intronic sequences and gene untranslated region sequences.
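    For the classical notion of lumpability mentioned above, a concrete test exists: a partition of the state space yields a Markov lumped chain (strong lumpability) iff, within each block, every state has the same total transition probability into each block. A small sketch of that classical criterion (not the paper's two-level probabilistic construction):

    ```python
    import numpy as np

    def is_lumpable(P, partition, tol=1e-10):
        """Strong-lumpability test: within each block, all states must
        have equal total transition probability into every block."""
        blocks = [np.array(b) for b in partition]
        for b in blocks:
            for c in blocks:
                row_sums = P[np.ix_(b, c)].sum(axis=1)
                if not np.allclose(row_sums, row_sums[0], atol=tol):
                    return False
        return True

    def lump(P, partition):
        """Transition matrix of the lumped chain (valid when lumpable)."""
        blocks = [np.array(b) for b in partition]
        return np.array([[P[np.ix_(b, c)].sum(axis=1)[0] for c in blocks]
                         for b in blocks])

    # 4-state chain (e.g. A, C, G, T) lumpable into two 2-state blocks
    P = np.array([[0.10, 0.20, 0.30, 0.40],
                  [0.20, 0.10, 0.40, 0.30],
                  [0.25, 0.25, 0.25, 0.25],
                  [0.40, 0.10, 0.30, 0.20]])
    partition = [[0, 1], [2, 3]]
    ok = is_lumpable(P, partition)
    Q = lump(P, partition)
    ```

    The paper's contribution is precisely to relax this rigid criterion: instead of requiring exact equality of block-wise row sums, the partition is represented probabilistically inside a two-level process.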

  18. Paul Drude's Prediction of Nonreciprocal Mutual Inductance for Tesla Transformers

    PubMed Central

    McGuyer, Bart

    2014-01-01

    Inductors, transmission lines, and Tesla transformers have been modeled with lumped-element equivalent circuits for over a century. In a well-known paper from 1904, Paul Drude predicts that the mutual inductance for an unloaded Tesla transformer should be nonreciprocal. This historical curiosity is mostly forgotten today, perhaps because it appears incorrect. However, Drude's prediction is shown to be correct for the conditions treated, demonstrating the importance of constraints in deriving equivalent circuits for distributed systems. The predicted nonreciprocity is not fundamental, but instead is an artifact of the misrepresentation of energy by an equivalent circuit. The application to modern equivalent circuits is discussed. PMID:25542040

  20. An Enhanced MWR-Based Wet Tropospheric Correction for Sentinel-3: Inheritance from Past ESA Altimetry Missions

    NASA Astrophysics Data System (ADS)

    Lazaro, Clara; Fernandes, Joanna M.

    2015-12-01

    The GNSS-derived Path Delay (GPD) and the Data Combination (DComb) algorithms were developed by the University of Porto (U.Porto), in the scope of different projects funded by ESA, to compute a continuous and improved wet tropospheric correction (WTC) for use in satellite altimetry. Both algorithms are mission independent and are based on a linear space-time objective analysis procedure that combines various wet path delay data sources. A new algorithm that takes the best of each aforementioned algorithm (GNSS-derived Path Delay Plus, GPD+) has been developed at U.Porto in the scope of the SL_cci project, where the use of consistent and temporally stable datasets is of major importance. The algorithm has been applied to the eight main altimetric missions (TOPEX/Poseidon, Jason-1, Jason-2, ERS-1, ERS-2, Envisat, CryoSat-2 and SARAL). The upcoming Sentinel-3 carries a two-channel on-board radiometer similar to those deployed on ERS-1/2 and Envisat. Consequently, the fine-tuning of the GPD+ algorithm to these missions' datasets shall enrich it by increasing its capability to quickly deal with Sentinel-3 data. Foreseeing that the computation of an improved MWR-based WTC for use with Sentinel-3 data will be required, this study focuses on the results obtained for the ERS-1/2 and Envisat missions, which are expected to give insight into the computation of this correction for the upcoming ESA altimetric mission. The various WTC corrections available for each mission (in general, the original correction derived from the on-board MWR, the model correction, and the one derived from GPD+) are inter-compared either directly or through various sea level anomaly variance statistical analyses. Results show that the GPD+ algorithm is efficient in generating global and continuous datasets, corrected for land and ice contamination and for spurious measurements of instrumental origin, with significant impact on all ESA missions.
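    The "linear space-time objective analysis" named above is, in general form, optimal interpolation: a covariance-weighted combination of scattered observations of differing accuracy. A minimal 1-D spatial sketch under an assumed Gaussian signal covariance; the covariance model, scales, and values are illustrative, not GPD+ parameters.

    ```python
    import numpy as np

    def objective_analysis(x_obs, y_obs, obs_err_var, x_out,
                           length_scale=1.0, signal_var=1.0):
        """Optimal-interpolation sketch: estimate the field at x_out from
        observations y_obs at x_obs, each with its own error variance."""
        def cov(a, b):
            return signal_var * np.exp(-((a[:, None] - b[None, :]) ** 2)
                                       / (2.0 * length_scale ** 2))
        B = cov(x_obs, x_obs) + np.diag(obs_err_var)  # obs-obs covariance
        c = cov(x_out, x_obs)                         # target-obs covariance
        weights = np.linalg.solve(B, c.T)             # solve B w = c
        return weights.T @ y_obs

    # three WTC "observations" (e.g. GNSS-, MWR-, model-derived) with
    # hypothetical locations, values and per-source error variances
    x_obs = np.array([0.0, 1.0, 2.0])
    y_obs = np.array([0.0, 1.0, 2.0])
    err_var = np.array([1e-6, 1e-6, 1e-6])
    wtc = objective_analysis(x_obs, y_obs, err_var, np.array([1.0]))
    ```

    Sources with larger error variance automatically receive smaller weights, which is how such schemes merge radiometer, GNSS, and model data into one continuous correction.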

  1. An urban runoff model designed to inform stormwater management decisions.

    PubMed

    Beck, Nicole G; Conley, Gary; Kanner, Lisa; Mathias, Margaret

    2017-05-15

    We present an urban runoff model designed for stormwater managers to quantify runoff reduction benefits of mitigation actions that has lower input data and user expertise requirements than most commonly used models. The stormwater tool to estimate load reductions (TELR) employs a semi-distributed approach, where landscape characteristics and process representation are spatially-lumped within urban catchments on the order of 100 acres (40 ha). Hydrologic computations use a set of metrics that describe a 30-year rainfall distribution, combined with well-tested algorithms for rainfall-runoff transformation and routing to generate average annual runoff estimates for each catchment. User inputs include the locations and specifications for a range of structural best management practice (BMP) types. The model was tested in a set of urban catchments within the Lake Tahoe Basin of California, USA, where modeled annual flows matched that of the observed flows within 18% relative error for 5 of the 6 catchments and had good regional performance for a suite of performance metrics. Comparisons with continuous simulation models showed an average of 3% difference from TELR predicted runoff for a range of hypothetical urban catchments. The model usually identified the dominant BMP outflow components within 5% relative error of event-based measured flow data and simulated the correct proportionality between outflow components. TELR has been implemented as a web-based platform for use by municipal stormwater managers to inform prioritization, report program benefits and meet regulatory reporting requirements (www.swtelr.com). Copyright © 2017. Published by Elsevier Ltd.
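    TELR's exact rainfall-runoff algorithms are not given in the abstract, so the following sketches one standard, well-tested transformation of the kind used for spatially-lumped urban catchments, the SCS curve-number method; treating it as TELR's method is an assumption for illustration only.

    ```python
    def scs_runoff(rain_mm, curve_number):
        """SCS curve-number rainfall-runoff transformation (illustrative):
        runoff depth in mm for a storm of rain_mm over a catchment whose
        land cover is summarized by a single curve number (30-100)."""
        S = 25400.0 / curve_number - 254.0   # potential retention, mm
        Ia = 0.2 * S                         # initial abstraction
        if rain_mm <= Ia:
            return 0.0                       # all rainfall is retained
        return (rain_mm - Ia) ** 2 / (rain_mm - Ia + S)
    ```

    Applied to each 30-year rainfall-distribution metric and summed over catchments, a transformation like this yields the average annual runoff estimates the abstract describes; BMPs would then be modeled as reductions applied to these volumes.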

  2. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system is proposed. The algorithm, based on compressive sensing (CS) of a single image, overcomes the disadvantages of "ghost artifacts" and the heavy computational cost of traditional NUC algorithms. A point-sampling matrix was designed to acquire valid CS measurements in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were recovered with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image from only 25% of the pixels. Only a small difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.
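    The recovery step named above belongs to the matching-pursuit family. A sketch of plain orthogonal matching pursuit (the paper uses the regularized variant; this shows only the core greedy idea of CS recovery): repeatedly pick the dictionary column most correlated with the residual, then re-fit on the chosen support. The dictionary and signal below are synthetic.

    ```python
    import numpy as np

    def omp(A, y, sparsity):
        """Plain orthogonal matching pursuit: recover a sparse vector x
        with A @ x = y by greedy column selection and least squares."""
        residual = y.copy()
        support = []
        for _ in range(sparsity):
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # synthetic demo: dictionary with orthonormal columns, 3-sparse signal
    rng = np.random.default_rng(0)
    A, _ = np.linalg.qr(rng.standard_normal((30, 10)))
    x_true = np.zeros(10)
    x_true[[2, 5, 7]] = [1.5, -2.0, 0.7]
    x_rec = omp(A, A @ x_true, sparsity=3)
    ```

    In the NUC setting, `y` would hold the sampled pixels and `A` a sensing matrix over a basis in which the image is sparse, so the non-sampled pixels follow from the recovered coefficients.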

  3. Modeling Wind Wave Evolution from Deep to Shallow Water

    DTIC Science & Technology

    2011-09-30

    validation and calibration of new model developments. WORK COMPLETED: Development of a Lumped Quadruplet Approximation (LQA). To make evaluation of the ... interactions based on the WRT method. This Lumped Quadruplet Approximation (LQA) clusters (lumps) contributions to the integrations over the ... total transfer rate. A procedure has been developed to test the implementation (of LQA and other reduced versions of the WRT) where 1) the non

  4. Geometric and shading correction for images of printed materials using boundary.

    PubMed

    Brown, Michael S; Tsoi, Yau-Chat

    2006-06-01

    A novel technique that uses boundary interpolation to correct geometric distortion and shading artifacts present in images of printed materials is presented. Unlike existing techniques, our algorithm can simultaneously correct a variety of geometric distortions, including skew, fold distortion, binder curl, and combinations of these. In addition, the same interpolation framework can be used to estimate the intrinsic illumination component of the distorted image to correct shading artifacts. We detail our algorithm for geometric and shading correction and demonstrate its usefulness on real-world and synthetic data.

  5. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durodié, Frédéric, E-mail: frederic.durodie@rma.ac.be; Křivská, Alena; Dumortier, Pierre

    2015-12-10

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low-impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd-stage phase-shifter-stub matching circuit allowing the conjugate-T working impedance to be corrected or chosen. Toroidally adjacent RDLs are fed from a 3 dB hybrid splitter. The antenna was operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49 MHz. At the time of the design (2001-2004), as well as during the experiments, the circuit models of the ILA were quite basic. The Topica model of the ILA front face and strap array was relatively crude and failed to correctly represent the poloidal central septum, the Faraday screen attachment, and the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and service stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full-array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd-stage matching, and tighter calibrations of the RF measurements. The paper presents the progress in modelling of the ILA, comprising a more detailed Topica model of the front face for various plasma scrape-off layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including the vacuum ceramic window and service stub, and a transmission line model of the 2nd-stage matching circuit and main transmission lines including the 3 dB hybrid splitters.
    A time-evolving simulation using the improved circuit model made it possible to design and evaluate the effectiveness of a feedback control algorithm for the 2nd-stage matching, and demonstrates the simultaneous matching and control of the 4 RDLs: 11 feedback loops control 21 actuators (8 capacitors; 4 phase shifters and 4 stubs for the 2nd-stage matching; 4 main phase shifters controlling the toroidal phasing; and the electronically controlled phase between the RF sources feeding the top and bottom parts of the array, which determines the poloidal phasing of the array and is solved explicitly at each time step) on (simulated) ELMy plasmas.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique to improve the detectability of calcifications, since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in three DE calcification images: without scatter correction, with scatter correction using the pinhole-array interpolation method, and with scatter correction using the authors' algorithmic method. Results: The results show that the background DE calcification signal can be reduced. The root-mean-square background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method.
    The range of background DE calcification signals with scatter-uncorrected data was reduced by 58% with scatter-corrected data using the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size was reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method for scatter correction in DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.

  7. Optical Kerr Spatiotemporal Dark-Lump Dynamics of Hydrodynamic Origin

    NASA Astrophysics Data System (ADS)

    Baronio, Fabio; Wabnitz, Stefan; Kodama, Yuji

    2016-04-01

    There is considerable fundamental and applicative interest in obtaining nondiffractive and nondispersive spatiotemporal localized wave packets propagating in optical cubic nonlinear or Kerr media. Here, we analytically predict the existence of a novel family of spatiotemporal dark lump solitary wave solutions of the (2 +1 )D nonlinear Schrödinger equation. Dark lumps represent multidimensional holes of light on a continuous wave background. We analytically derive the dark lumps from the hydrodynamic exact soliton solutions of the (2 +1 )D shallow water Kadomtsev-Petviashvili model, inheriting their complex interaction properties. This finding opens a novel path for the excitation and control of optical spatiotemporal waveforms of hydrodynamic footprint and multidimensional optical extreme wave phenomena.

  9. Lump solutions with interaction phenomena in the (2+1)-dimensional Ito equation

    NASA Astrophysics Data System (ADS)

    Zou, Li; Yu, Zong-Bing; Tian, Shou-Fu; Feng, Lian-Li; Li, Jin

    2018-03-01

    In this paper, we consider the (2+1)-dimensional Ito equation. By applying Hirota's bilinear method and using a positive quadratic function, we obtain some lump solutions of the Ito equation. In order to ensure rational localization and analyticity of these lump solutions, some sufficient and necessary conditions are provided on the parameters that appear in the solutions. Furthermore, the interaction solutions between lump solutions and stripe solitons are discussed by combining the positive quadratic function with an exponential function. Finally, the dynamic properties of these solutions are shown graphically by selecting appropriate values of the parameters.

  10. The Role of Community Education in Increasing Knowledge of Breast Health and Cancer: Findings from the Asian Breast Cancer Project in Boston, Massachusetts.

    PubMed

    Berger, Samantha; Huang, Chien-Chi; Rubin, Carolyn L

    2017-03-01

    In the past decade, cancer rates have significantly decreased in the USA, but breast cancer survival is lower in Asian American women, likely due to lower rates of screening behaviors in Asian Americans compared to other ethnicities, which could lead to later-stage cancer diagnosis and increased mortality. This paper reports on the Asian Breast Cancer (ABC) Project, a three-phase peer-led community program designed to promote cancer prevention by improving breast cancer screening rates among Chinese and Vietnamese women in the Greater Boston area. The three phases of planning and coalition building, community health worker training, and the community workshop intervention are described. The workshop intervention was evaluated by comparing pre- and post-workshop questionnaires assessing knowledge about breast cancer screening and prevention. Two hundred fifty-two women participated in the program across 14 workshops. Each participant completed questionnaires about demographics, access to health care, and a five-item self-administered questionnaire about breast cancer knowledge. Results showed that the majority of the women had received a clinical breast exam or mammogram in the past 12 months (69% and 59%, respectively), and older women were more likely to get a mammogram (85%) or clinical breast exam (74%) compared to younger women. Eighty-one percent of women were interested in reminder systems. Baseline knowledge was high for three survey questions about mammograms and breast cancer risk (88-97%). For questions with fewer correct answers at baseline, knowledge about the meaning of lumps in the breast significantly increased (69% to 80% correct, p < 0.0001), as did knowledge about the recommended frequency of clinical breast exams (48% to 67% correct, p < 0.0001). This pilot project indicated partial effectiveness of the community workshop in a population with high baseline knowledge.
    The education workshop increased knowledge about breast lumps and clinical exam frequency. We also identified that reminder systems and appointment assistance are desired by this population. Our findings inform future cancer screening strategies for Asian Americans.

  11. Ultrabroadband Microwave Metamaterial Absorber Based on Electric SRR Loaded with Lumped Resistors

    NASA Astrophysics Data System (ADS)

    Zhao, Jingcheng; Cheng, Yongzhi

    2016-10-01

    An ultrabroadband microwave metamaterial absorber (MMA) based on an electric split-ring resonator (ESRR) loaded with lumped resistors is presented. Compared with an ESRR MMA, the composite MMA (CMMA) loaded with lumped resistors offers stronger absorption over an extremely extended bandwidth. The reflectance simulated under different substrate loss conditions indicates that incident electromagnetic (EM) wave energy is mainly consumed by the lumped resistors. The simulated surface current and power loss density distributions further illustrate the mechanism underlying the observed absorption. Further simulation results indicate that the performance of the CMMA can be tuned by adjusting structural parameters of the ESRR and lumped resistor parameters. We fabricated and measured MMA and CMMA samples. The CMMA yielded below -10 dB reflectance from 4.4 GHz to 18 GHz experimentally, with absorption bandwidth and relative bandwidth of 13.6 GHz and 121.4%, respectively. This ultrabroadband microwave absorber has potential applications in the electromagnetic energy harvesting and stealth fields.
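As a quick arithmetic check, the quoted absolute and relative bandwidths follow directly from the -10 dB band edges reported above:

```python
# -10 dB reflectance band edges reported for the CMMA sample, in Hz
f_low, f_high = 4.4e9, 18.0e9

bandwidth = f_high - f_low                  # absolute absorption bandwidth
f_center = (f_high + f_low) / 2             # band-centre frequency
relative_bw = 100.0 * bandwidth / f_center  # relative bandwidth, percent

print(bandwidth / 1e9)        # → 13.6 (GHz)
print(round(relative_bw, 1))  # → 121.4 (%)
```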

  12. Comparative study of contrast-enhanced ultrasound qualitative and quantitative analysis for identifying benign and malignant breast tumor lumps.

    PubMed

    Liu, Jian; Gao, Yun-Hua; Li, Ding-Dong; Gao, Yan-Chun; Hou, Ling-Mi; Xie, Ting

    2014-01-01

To compare the value of contrast-enhanced ultrasound (CEUS) qualitative and quantitative analysis in the identification of breast tumor lumps. Qualitative and quantitative indicators of CEUS for 73 cases of breast tumor lumps were retrospectively analyzed by univariate and multivariate approaches. Logistic regression was applied and ROC curves were drawn for evaluation and comparison. The CEUS qualitative indicator-generated regression equation contained three indicators, namely enhanced homogeneity, diameter line expansion and peak intensity grading, which demonstrated prediction accuracy for benign and malignant breast tumor lumps of 91.8%; the quantitative indicator-generated regression equation contained only one indicator, namely the relative peak intensity, and its prediction accuracy was 61.5%. The corresponding areas under the ROC curve for qualitative and quantitative analyses were 91.3% and 75.7%, respectively, which exhibited a statistically significant difference by the Z test (P<0.05). The ability of CEUS qualitative analysis to identify breast tumor lumps is better than that of quantitative analysis.
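The reported areas under the ROC curve are empirical AUCs, which are equivalent to a normalized Mann-Whitney statistic; a minimal sketch of how such an AUC is computed (toy scores for illustration, not the study's data):

```python
def auc(scores_benign, scores_malignant):
    """Empirical ROC area: the probability that a malignant case scores
    higher than a benign one, counting ties as half a win. This equals
    the Mann-Whitney U statistic divided by n*m."""
    wins = 0.0
    for m in scores_malignant:
        for b in scores_benign:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(scores_malignant) * len(scores_benign))

# hypothetical model outputs (higher score = more suspicious)
print(auc([0.1, 0.2, 0.3], [0.25, 0.6, 0.9]))
```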

  13. Breast Lumps

    MedlinePlus

    ... 2015. Raftery AT, et al. Breast lumps. In: Churchill's Pocketbook of Differential Diagnosis. 4th ed. Philadelphia, Pa.: Churchill Livingston Elsevier; 2014. http://www.clinicalkey.com. Accessed ...

  14. Apparent resistivity for transient electromagnetic induction logging and its correction in radial layer identification

    NASA Astrophysics Data System (ADS)

    Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei

    2018-04-01

    We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
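The correction based on the series relationship of layered resistance can be caricatured as follows: treat the apparent resistivity at diffusion depth d as a thickness-weighted series combination of the inner and outer layers, then solve for the outer resistivity. This is an illustrative sketch with geometry factors lumped into the depths; the function name and parameterization are ours, not the paper's:

```python
def corrected_outer_resistivity(rho_a, d, rho_inner, r_inner):
    """Series-model correction sketch: rho_a * d ~ rho_inner * r_inner
    + rho_outer * (d - r_inner), solved for rho_outer. rho_a is the
    all-time apparent resistivity of a time channel, d its diffusion
    depth, r_inner the inner-layer thickness (all in consistent units)."""
    if d <= r_inner:
        return rho_a  # field still confined to the inner layer
    return (rho_a * d - rho_inner * r_inner) / (d - r_inner)

# a late-time channel blending a 10 ohm-m inner layer with the outer layer
print(corrected_outer_resistivity(rho_a=55.0, d=2.0, rho_inner=10.0, r_inner=1.0))
```

With these invented numbers the correction recovers an outer-layer resistivity of 100 ohm-m, illustrating how the low-resistance inner layer's contribution is stripped from the larger time channels.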

  15. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, together with satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the water-leaving signals derived from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) with the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
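The single-scattering epsilon in item (d) is the ratio of aerosol reflectances in two NIR bands, extrapolated to visible wavelengths under an exponential spectral assumption, as in the Gordon and Wang (1994) scheme. A hedged sketch — the band choices and numerical values below are illustrative only:

```python
from math import log, exp

def epsilon(rho_as_short, rho_as_long):
    """Single-scattering epsilon: ratio of aerosol reflectances in two NIR bands."""
    return rho_as_short / rho_as_long

def extrapolate_aerosol(rho_as_nir, eps, lam_nir, lam_short, lam_vis):
    """Extrapolate the aerosol reflectance from the long NIR band (lam_nir)
    to a visible band (lam_vis), assuming epsilon varies exponentially with
    wavelength between the two NIR bands. Illustrative form, not SeaWiFS code."""
    c = log(eps) / (lam_short - lam_nir)
    return rho_as_nir * exp(c * (lam_vis - lam_nir))

# epsilon measured between 765 and 865 nm bands (hypothetical values)
eps = epsilon(0.012, 0.010)
print(extrapolate_aerosol(0.010, eps, 865.0, 765.0, 443.0))
```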

  16. 29 CFR Appendix C to Part 4022 - Lump Sum Interest Rates for Private-Sector Payments

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Lump Sum Interest Rates for Private-Sector Payments C... Appendix C to Part 4022—Lump Sum Interest Rates for Private-Sector Payments [In using this table: (1) For... (where y is an integer and 0 n 1 + n 2), interest rate i 3 shall apply from the valuation date for a...

  17. 29 CFR Appendix B to Part 4022 - Lump Sum Interest Rates for PBGC Payments

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Lump Sum Interest Rates for PBGC Payments B Appendix B to... 4022—Lump Sum Interest Rates for PBGC Payments [In using this table: (1) For benefits for which the... + n2), interest rate i3 shall apply from the valuation date for a period of y−n1−n2 years; interest...

  18. An improved non-uniformity correction algorithm and its GPU parallel implementation

    NASA Astrophysics Data System (ADS)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which always leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint was proposed. Here we put forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image were obtained, respectively, by minimizing the local Gaussian curvature and mean curvature of the image surface. Then, a guided filter was utilized to combine these two parts and obtain the estimate of the spatial low-frequency component. Finally, we brought this SLP component into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified by several real and simulated infrared image sequences. The experimental results indicated that the proposed algorithm can reduce the non-uniformity without losing detail. After that, a GPU-based parallel implementation that runs 150 times faster than the CPU version was presented, showing that the proposed algorithm has great potential for real-time application.
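A minimal sketch of the SLP-THP idea the paper builds on: the fixed-pattern offset is estimated by temporally low-passing the difference between each raw frame and its spatial low-pass. Here `global_mean_smooth` is a deliberately crude stand-in for the paper's curvature-plus-guided-filter SLP estimate, and the update rate `alpha` is invented:

```python
def global_mean_smooth(frame):
    """Crude spatial low-pass stand-in: every pixel gets the frame mean."""
    flat = [v for row in frame for v in row]
    m = sum(flat) / len(flat)
    return [[m] * len(frame[0]) for _ in frame]

def slp_thp_correct(frames, smooth, alpha=0.05):
    """SLP-THP sketch: run a temporal running average of (raw - spatial
    low-pass) as the fixed-pattern offset estimate, and subtract it.
    Returns the corrected version of the last frame."""
    h, w = len(frames[0]), len(frames[0][0])
    offset = [[0.0] * w for _ in range(h)]
    out = None
    for f in frames:
        low = smooth(f)
        for i in range(h):
            for j in range(w):
                offset[i][j] += alpha * ((f[i][j] - low[i][j]) - offset[i][j])
        out = [[f[i][j] - offset[i][j] for j in range(w)] for i in range(h)]
    return out

# constant scene (10) with a +4 fixed-pattern defect on one pixel
frames = [[[10.0, 10.0], [10.0, 14.0]]] * 300
last = slp_thp_correct(frames, global_mean_smooth)
vals = [v for row in last for v in row]
print(max(vals) - min(vals))  # pixel-to-pixel spread shrinks toward zero
```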

  19. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets

    PubMed Central

    Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.

    2013-01-01

    New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
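The core single-isotope correction that the new algorithm generalizes can be sketched as a lower-triangular binomial system, inverted by forward substitution. This is the textbook version; the paper's algorithm additionally handles multiple isotopes and FT-MS resolution effects:

```python
from math import comb

def correct_natural_abundance(observed, n_atoms, p=0.0107):
    """Single-isotope (13C) natural-abundance correction sketch.
    observed[i] is the intensity of the isotopologue carrying i heavy atoms.
    Each truly labeled species j spreads to i >= j with binomial weight
    C(n-j, i-j) * p**(i-j) * (1-p)**(n-i); the resulting lower-triangular
    system is inverted by forward substitution."""
    corrected = []
    for i in range(len(observed)):
        contrib = sum(
            corrected[j] * comb(n_atoms - j, i - j)
            * p ** (i - j) * (1 - p) ** (n_atoms - i)
            for j in range(i)
        )
        corrected.append((observed[i] - contrib) / (1 - p) ** (n_atoms - i))
    return corrected

# sanity check: a pure unlabeled 6-carbon compound should correct to all
# intensity in the 0-label species and none in the heavier isotopologues
p = 0.0107
observed = [comb(6, k) * p ** k * (1 - p) ** (6 - k) for k in range(4)]
print([round(x, 4) for x in correct_natural_abundance(observed, 6)])
```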

  20. Rogue Waves and Lump Solitons of the (3+1)-Dimensional Generalized B-type Kadomtsev-Petviashvili Equation for Water Waves

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Tian, Bo; Liu, Lei; Chai, Han-Peng; Yuan, Yu-Qiang

    2017-12-01

    In this paper, the (3+1)-dimensional generalized B-type Kadomtsev-Petviashvili equation for water waves is investigated. Through the Hirota method and Kadomtsev-Petviashvili hierarchy reduction, we obtain the first-order, higher-order, multiple rogue waves and lump solitons based on the solutions in terms of the Gramian. The first-order rogue waves are the line rogue waves which arise from the constant background and then disappear into the constant background again, while the first-order lump solitons propagate stably. Interactions among several first-order rogue waves which are described by the multiple rogue waves are presented. Elastic interactions of several first-order lump solitons are also presented. We find that the higher-order rogue waves and lump solitons can be treated as the superpositions of several first-order ones, while the interaction between the second-order lump solitons is inelastic. Supported by the National Natural Science Foundation of China under Grant Nos. 11772017, 11272023, and 11471050, by the Open Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China (IPOC: 2017ZZ05), and by the Fundamental Research Funds for the Central Universities of China under Grant No. 2011BUPTYB02

  1. Axillary silicone lymphadenopathy presenting with a lump and altered sensation in the breast: a case report

    PubMed Central

    2009-01-01

Introduction Silicone lymphadenopathy is a rare but recognised complication of procedures involving the use of silicone. It has a poorly understood mechanism but is thought to occur following the transportation of silicone particles from silicone-containing prostheses to lymph nodes by macrophages. Case presentation We report a case involving a 35-year-old woman who presented to the breast clinic with a breast lump and altered sensation below her left nipple 5 years after bilateral cosmetic breast augmentations. A small lump was detected inferior to the nipple but clinical examination and initial ultrasound investigation showed both implants to be intact. However, mammography and magnetic resonance imaging of both breasts revealed both intracapsular and extracapsular rupture of the left breast prosthesis. The patient went on to develop a flu-like illness and tender lumps in the left axilla and right mastoid regions. An excision biopsy of the left axillary lesion and replacement of the ruptured implant was performed. Subsequent histological analysis showed that the axillary lump was a lymph node containing large amounts of silicone. Conclusion The exclusion of malignancy remains the priority when dealing with lumps in the breast or axilla. Silicone lymphadenopathy should however be considered as a differential diagnosis in patients in whom silicone prostheses are present. PMID:19830102

  2. Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture.

    PubMed

    Narayanan, Balaji; Hardie, Russell C; Muse, Robert A

    2005-06-10

Spatial fixed-pattern noise is a common and major problem in modern infrared imagers owing to the nonuniform response of the photodiodes in the focal plane array of the imaging system. In addition, the nonuniform response of the readout and digitization electronics, which are involved in multiplexing the signals from the photodiodes, causes further nonuniformity. We describe a novel scene-based nonuniformity correction algorithm that treats the aggregate nonuniformity in separate stages. First, the nonuniformity from the readout amplifiers is corrected by use of knowledge of the readout architecture of the imaging system. Second, the nonuniformity resulting from the individual detectors is corrected with a nonlinear filter-based method. We demonstrate the performance of the proposed algorithm by applying it to simulated imagery and real infrared data. Quantitative results in terms of the mean absolute error and the signal-to-noise ratio are also presented to demonstrate the efficacy of the proposed algorithm. One advantage of the proposed algorithm is that it requires only a few frames to obtain high-quality corrections.
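The first, architecture-aware stage might look like the following sketch, which assumes columns are interleaved across readout amplifiers (one common architecture; the paper's actual layout, and its correction beyond a mean offset, may differ):

```python
def correct_readout_offsets(frame, n_amplifiers):
    """First-stage sketch: columns c, c + n_amplifiers, ... are assumed to
    share readout channel c % n_amplifiers. Estimate each channel's mean
    offset and remove it relative to the overall mean."""
    w = len(frame[0])
    means = []
    for a in range(n_amplifiers):
        vals = [row[c] for row in frame for c in range(a, w, n_amplifiers)]
        means.append(sum(vals) / len(vals))
    overall = sum(means) / len(means)
    return [[row[c] - (means[c % n_amplifiers] - overall) for c in range(w)]
            for row in frame]

# even columns read out through a channel carrying a +5 offset
frame = [[5.0, 0.0, 5.0, 0.0],
         [5.0, 0.0, 5.0, 0.0]]
print(correct_readout_offsets(frame, 2))  # channel striping removed
```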

  3. Application and assessment of a robust elastic motion correction algorithm to dynamic MRI.

    PubMed

    Herrmann, K-H; Wurdinger, S; Fischer, D R; Krumbein, I; Schmitt, M; Hermosillo, G; Chaudhuri, K; Krishnan, A; Salganicoff, M; Kaiser, W A; Reichenbach, J R

    2007-01-01

The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets which were affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the best possible set of motion parameters that maximizes the similarity measures across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions or creation of artifactual lesions. It was found that the correction improves image quality (76% for MRM and 96% for MRA) and diagnosability (60% for MRM and 96% for MRA).

  4. Characterization of the pharmacokinetics of gasoline using PBPK modeling with a complex mixtures chemical lumping approach.

    PubMed

    Dennison, James E; Andersen, Melvin E; Yang, Raymond S H

    2003-09-01

Gasoline consists of a few toxicologically significant components and a large number of other hydrocarbons in a complex mixture. By using an integrated, physiologically based pharmacokinetic (PBPK) modeling and lumping approach, we have developed a method for characterizing the pharmacokinetics (PKs) of gasoline in rats. The PBPK model tracks selected target components (benzene, toluene, ethylbenzene, o-xylene [BTEX], and n-hexane) and a lumped chemical group representing all nontarget components, with competitive metabolic inhibition between all target compounds and the lumped chemical. PK data were acquired by performing gas uptake PK studies with male F344 rats in a closed chamber. Chamber air samples were analyzed every 10-20 min by gas chromatography/flame ionization detection and all nontarget chemicals were co-integrated. A four-compartment PBPK model with metabolic interactions was constructed using the BTEX, n-hexane, and lumped chemical data. Target chemical kinetic parameters were refined by studies with either the single chemical alone or all five chemicals together. o-Xylene, at high concentrations, decreased alveolar ventilation, consistent with respiratory irritation. A six-chemical interaction model with the lumped chemical group was used to estimate lumped chemical partitioning and metabolic parameters for a winter blend of gasoline with methyl t-butyl ether and a summer blend without any oxygenate. Computer simulation results from this model matched well with experimental data from single chemical, five-chemical mixture, and the two blends of gasoline. The PBPK model analysis indicated that metabolism of individual components was inhibited up to 27% during the 6-h gas uptake experiments of gasoline exposures.
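The competitive metabolic inhibition between the target compounds and the lumped chemical typically takes the Michaelis-Menten form sketched below, in which co-occurring substrates inflate the apparent Km. The function and parameter values are a generic illustration, not the paper's fitted model:

```python
def metabolism_rate(c, vmax, km, others):
    """Michaelis-Menten rate with competitive inhibition by co-occurring
    substrates: v = Vmax*C / (Km*(1 + sum(Ci/Kmi)) + C).
    `others` is a list of (concentration, Km) pairs for the inhibitors."""
    inhibition = 1.0 + sum(ci / kmi for ci, kmi in others)
    return vmax * c / (km * inhibition + c)

# an inhibitor raises the apparent Km and slows clearance of the target
alone = metabolism_rate(1.0, vmax=10.0, km=2.0, others=[])
mixture = metabolism_rate(1.0, vmax=10.0, km=2.0, others=[(4.0, 2.0)])
print(alone, mixture)  # the mixture rate is well below the solo rate
```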

  5. Validation of the Thematic Mapper radiometric and geometric correction algorithms

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1984-01-01

The radiometric and geometric correction algorithms for Thematic Mapper are critical to subsequent successful information extraction. Earlier Landsat scanners, known as Multispectral Scanners, produce imagery which exhibits striping due to mismatching of detector gains and biases. Thematic Mapper exhibits the same phenomenon at three levels: detector-to-detector, scan-to-scan, and multiscan striping. The cause of these variations has been traced to variations in the dark current of the detectors. An alternative formulation has been tested and shown to be very satisfactory. Unfortunately, the Thematic Mapper detectors exhibit saturation effects while viewing extensive cloud areas, and these are not easily correctable. The geometric correction algorithm has been shown to be remarkably reliable. Only minor and modest improvements are indicated and shown to be effective.

  6. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    PubMed

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. 
Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  7. Rutting resistance of asphalt mixture with cup lumps modified binder

    NASA Astrophysics Data System (ADS)

    Shaffie, E.; Hanif, W. M. M. Wan; Arshad, A. K.; Hashim, W.

    2017-11-01

Rutting is the most common pavement distress in pavement structures; it occurs mainly due to factors such as increasing traffic volume, climatic conditions, and construction design errors. This failure reduces the service life of the pavement, reduces driver safety, and increases maintenance cost. Polymer modified binders have long been studied as a means of improving asphalt pavement performance. Research shows that the use of polymer in a bituminous mix not only improves resistance to rutting but also increases the life span of the pavement. This research evaluates the physical properties and rutting performance of dense graded Superpave-designed HMA mixes. Two different types of dense graded Superpave HMA mix were developed: an unmodified binder mix (UMB) and a cup lumps rubber (liquid form) modified binder mix (CLMB). The natural rubber polymer modified binder was prepared by adding 8 percent cup lumps to the binder. Results showed that all the mixes passed the Superpave volumetric properties criteria, indicating that these mixtures were good with respect to durability and flexibility. Furthermore, the rutting performance of these mixtures was evaluated with the APA rutting test. The CLMB mix demonstrated better resistance to rutting than the UMB mix. The addition of cup lumps rubber to the asphalt mixture was found to be significant: it improved the binder properties and enhanced rutting resistance owing to the greater elasticity of the cup lumps rubber particles. The use of cup lumps rubber reduced the rut depth of the asphalt mixture by 41% compared with the UMB mix. It can therefore be concluded that cup lumps rubber is suitable as a binder modifier, enhancing binder properties and thus improving the performance of asphalt mixes.

  8. An adaptive optics approach for laser beam correction in turbulence utilizing a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2015-09-01

Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (Cn2 > 10^-13 m^-2/3). An intelligent correction algorithm can then be developed to reconstruct the perturbed wavefront and use this information to drive a deformable mirror capable of correcting the major distortions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine-tune the wavefront correction. This two-stage algorithm can find use in free-space optical communication systems, in directed energy applications, as well as for image correction purposes.

  9. Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana

    1989-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  10. Algorithm for atmospheric corrections of aircraft and satellite imagery

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Ferrare, R. A.; Kaufman, Y. J.; Markham, B. L.; Mattoo, S.

    1992-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 microns. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
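The table-based inversion at the heart of the algorithm can be sketched as a one-dimensional interpolation from measured radiance back to surface reflectance. The table values below are invented for illustration; the real tables span view and illumination geometry, wavelength, and aerosol optical thickness:

```python
from bisect import bisect_left

# hypothetical slice of a correction lookup table for one fixed geometry:
# top-of-atmosphere radiance (arbitrary units) vs. surface reflectance
surface_reflectance = [0.0, 0.1, 0.2, 0.3, 0.4]
toa_radiance = [12.0, 19.5, 27.4, 35.6, 44.1]  # monotone in reflectance

def retrieve_reflectance(measured):
    """Invert the table: linear interpolation from radiance to reflectance,
    clamped at the table ends."""
    i = bisect_left(toa_radiance, measured)
    if i == 0:
        return surface_reflectance[0]
    if i == len(toa_radiance):
        return surface_reflectance[-1]
    x0, x1 = toa_radiance[i - 1], toa_radiance[i]
    y0, y1 = surface_reflectance[i - 1], surface_reflectance[i]
    return y0 + (y1 - y0) * (measured - x0) / (x1 - x0)

print(retrieve_reflectance(27.4))  # → 0.2 (an exact table node)
```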

  11. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
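Of the methods compared, the generic edit distance is the simplest to sketch; a minimal dictionary matcher (toy dictionary and OCR output, not NLM's data or the Bayesian variant that performed best):

```python
def edit_distance(a, b):
    """Generic (Levenshtein) edit distance with unit-cost operations,
    computed row by row in O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def best_match(word, dictionary):
    """Pick the reference-dictionary entry closest to the OCR output."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

print(best_match("medlcine", ["medicine", "machine", "madeline"]))  # → medicine
```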

  12. ICAP - An Interactive Cluster Analysis Procedure for analyzing remotely sensed data

    NASA Technical Reports Server (NTRS)

    Wharton, S. W.; Turner, B. J.

    1981-01-01

    An Interactive Cluster Analysis Procedure (ICAP) was developed to derive classifier training statistics from remotely sensed data. ICAP differs from conventional clustering algorithms by allowing the analyst to optimize the cluster configuration by inspection, rather than by manipulating process parameters. Control of the clustering process alternates between the algorithm, which creates new centroids and forms clusters, and the analyst, who can evaluate and elect to modify the cluster structure. Clusters can be deleted, or lumped together pairwise, or new centroids can be added. A summary of the cluster statistics can be requested to facilitate cluster manipulation. The principal advantage of this approach is that it allows prior information (when available) to be used directly in the analysis, since the analyst interacts with ICAP in a straightforward manner, using basic terms with which he is more likely to be familiar. Results from testing ICAP showed that an informed use of ICAP can improve classification, as compared to an existing cluster analysis procedure.
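The pairwise "lumping" operation the analyst can request might look like the following sketch: replace two clusters with their member-count-weighted mean centroid. Function and variable names are ours, not ICAP's:

```python
def lump_clusters(centroids, counts, i, j):
    """Merge clusters i and j into one: the new centroid is the
    member-weighted mean, and the merged member count is the sum.
    Returns the updated (centroids, counts) lists."""
    n = counts[i] + counts[j]
    merged = [(counts[i] * a + counts[j] * b) / n
              for a, b in zip(centroids[i], centroids[j])]
    keep = [k for k in range(len(centroids)) if k not in (i, j)]
    return ([centroids[k] for k in keep] + [merged],
            [counts[k] for k in keep] + [n])

# lump the two nearby clusters; the distant one is untouched
cents, cnts = lump_clusters([[0.0, 0.0], [2.0, 2.0], [9.0, 9.0]], [3, 1, 5], 0, 1)
print(cents, cnts)  # merged centroid at [0.5, 0.5] with 4 members
```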

  13. Adaptive twisting sliding mode algorithm for hypersonic reentry vehicle attitude control based on finite-time observer.

    PubMed

    Guo, Zongyi; Chang, Jing; Guo, Jianguo; Zhou, Jun

    2018-06-01

    This paper focuses on adaptive twisting sliding mode control for the Hypersonic Reentry Vehicle (HRV) attitude tracking problem. The HRV attitude tracking model is transformed into error dynamics in matched structure, while an unmeasurable state is redefined by lumping the existing unmatched disturbance with the angular rate. An adaptive finite-time observer is then used to estimate the unknown state, and an adaptive twisting algorithm is proposed for systems subject to disturbances with unknown bounds. The stability of the proposed observer-based adaptive twisting approach is guaranteed, and the case of noisy measurement is analyzed. The developed control law also avoids the aggressive chattering of existing adaptive twisting approaches, because the adaptive gains decrease toward the disturbance level once the trajectories reach the sliding surface. Finally, numerical simulations of HRV attitude control verify the effectiveness and benefit of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
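
    For orientation, a twisting law with fixed gains can be simulated on a double-integrator sliding variable (a sketch only: the gains, disturbance, and dynamics are assumptions, and the paper's contribution, online gain adaptation, is not implemented):

```python
import math

def simulate_twisting(r1=8.0, r2=4.0, dt=1e-4, steps=100_000):
    """Twisting law u = -r1*sign(s) - r2*sign(s') applied to sliding
    dynamics s'' = d(t) + u with a bounded unknown disturbance d."""
    sign = lambda v: (v > 0) - (v < 0)
    s, sdot = 1.0, 0.0                              # initial sliding variable
    for k in range(steps):
        d = 0.5 * math.sin(2 * math.pi * k * dt)    # bounded disturbance
        u = -r1 * sign(s) - r2 * sign(sdot)
        sdot += (d + u) * dt                        # explicit Euler step
        s += sdot * dt
    return s, sdot                                  # chatter near zero
```

    With r1 > r2 > 0 and r1 - r2 exceeding the disturbance bound, the trajectory spirals to the origin in finite time; the adaptive variant shrinks r1, r2 toward the disturbance level to reduce the residual chattering.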

  14. An improved non-uniformity correction algorithm and its hardware implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong

    2017-09-01

    The non-uniformity of Infrared Focal Plane Arrays (IRFPAs) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting, and few effective hardware platforms have been proposed to implement them. This paper therefore proposes an improved neural-network-based NUC algorithm built on a guided image filter and a projection-based motion detection algorithm. First, the guided image filter is utilized to obtain an accurate desired image, which decreases artificial ghosting. Then the projection-based motion detection algorithm determines whether the correction coefficients should be updated, which overcomes image blurring. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. Experimental results indicate that it effectively eliminates fixed pattern noise with less image blurring and artificial ghosting, while the hardware design uses fewer FPGA logic elements and fewer clock cycles per image frame.
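
    The projection-based motion test can be sketched by collapsing each frame to row and column mean profiles and comparing consecutive frames (the threshold value and normalized-difference measure below are assumptions):

```python
import numpy as np

def motion_detected(prev_frame, frame, thresh=0.02):
    """Flag inter-frame motion when either the row or the column mean
    profile changes by more than a fraction `thresh` of its mean level."""
    for axis in (0, 1):
        p0 = prev_frame.mean(axis=axis)
        p1 = frame.mean(axis=axis)
        if np.abs(p1 - p0).mean() > thresh * p0.mean():
            return True
    return False
```

    In a scene-based NUC loop, the gain/offset update would run only on frames where motion is detected, which is what suppresses ghosting: on a static scene the fixed-pattern noise cannot be separated from scene content, so updating there burns the scene into the coefficients.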

  15. ICAP: An Interactive Cluster Analysis Procedure for analyzing remotely sensed data. [to classify the radiance data to produce a thematic map

    NASA Technical Reports Server (NTRS)

    Wharton, S. W.

    1980-01-01

    An Interactive Cluster Analysis Procedure (ICAP) was developed to derive classifier training statistics from remotely sensed data. The algorithm interfaces the rapid numerical processing capacity of a computer with the human ability to integrate qualitative information. Control of the clustering process alternates between the algorithm, which creates new centroids and forms clusters, and the analyst, who evaluates and may elect to modify the cluster structure. Clusters can be deleted or lumped pairwise, or new centroids can be added. A summary of the cluster statistics can be requested to facilitate cluster manipulation. ICAP was implemented in APL (A Programming Language), an interactive computer language. The flexibility of the algorithm was evaluated using data from different LANDSAT scenes to simulate two situations: one in which the analyst is assumed to have no prior knowledge about the data and wishes to have the clusters formed more or less automatically, and another in which the analyst is assumed to have some knowledge about the data structure and wishes to use that information to closely supervise the clustering process. For comparison, an existing clustering method was also applied to the two data sets.

  16. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE PAGES

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    2016-04-12

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). The corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of the two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and Arctic sites. Although MILO generally adds more moisture than WANG to the original radiosonde profile in the upper troposphere, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities, done for clear-sky scenes, use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles for comparison against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. the MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. Lastly, this statistical difference is likely explained by the fact that the WANG correction also accounts for cloud cover, a condition not represented in the radiance closure experiments.

  17. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). The corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of the two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and Arctic sites. Although MILO generally adds more moisture than WANG to the original radiosonde profile in the upper troposphere, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities, done for clear-sky scenes, use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles for comparison against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. the MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. Lastly, this statistical difference is likely explained by the fact that the WANG correction also accounts for cloud cover, a condition not represented in the radiance closure experiments.

  18. Emergence and space-time structure of lump solution to the (2+1)-dimensional generalized KP equation

    NASA Astrophysics Data System (ADS)

    Tan, Wei; Dai, Houping; Dai, Zhengde; Zhong, Wenyong

    2017-11-01

    A periodic breather-wave solution of the (2+1)-dimensional generalized Kadomtsev-Petviashvili equation is obtained using the homoclinic test approach and Hirota's bilinear method with a small perturbation parameter u0. A lump solution then emerges from the periodic breather-wave in a limiting procedure. Finally, three different forms of the space-time structure of the lump solution are investigated and discussed using extreme value theory.

  19. 41 CFR 302-6.203 - May I retain any balance left over from my TQSE lump sum payment if such payment is more than...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 41 Public Contracts and Property Management 4 2013-07-01 2012-07-01 true May I retain any balance left over from my TQSE lump sum payment if such payment is more than adequate? 302-6.203 Section 302-6... TEMPORARY QUARTERS SUBSISTENCE EXPENSES Lump Sum Payment § 302-6.203 May I retain any balance left over from...

  20. 41 CFR 302-5.18 - May I retain any balance left over from my househunting reimbursement if my lump sum is more than...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 4 2012-07-01 2012-07-01 false May I retain any balance left over from my househunting reimbursement if my lump sum is more than adequate to cover my... Expenses § 302-5.18 May I retain any balance left over from my househunting reimbursement if my lump sum is...

  1. 41 CFR 302-6.203 - May I retain any balance left over from my TQSE lump sum payment if such payment is more than...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 41 Public Contracts and Property Management 4 2014-07-01 2014-07-01 false May I retain any balance left over from my TQSE lump sum payment if such payment is more than adequate? 302-6.203 Section 302-6... TEMPORARY QUARTERS SUBSISTENCE EXPENSES Lump Sum Payment § 302-6.203 May I retain any balance left over from...

  2. 41 CFR 302-5.18 - May I retain any balance left over from my househunting reimbursement if my lump sum is more than...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 41 Public Contracts and Property Management 4 2014-07-01 2014-07-01 false May I retain any balance left over from my househunting reimbursement if my lump sum is more than adequate to cover my... Expenses § 302-5.18 May I retain any balance left over from my househunting reimbursement if my lump sum is...

  3. 41 CFR 302-6.203 - May I retain any balance left over from my TQSE lump sum payment if such payment is more than...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 4 2012-07-01 2012-07-01 false May I retain any balance left over from my TQSE lump sum payment if such payment is more than adequate? 302-6.203 Section 302-6... TEMPORARY QUARTERS SUBSISTENCE EXPENSES Lump Sum Payment § 302-6.203 May I retain any balance left over from...

  4. 41 CFR 302-5.18 - May I retain any balance left over from my househunting reimbursement if my lump sum is more than...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 41 Public Contracts and Property Management 4 2013-07-01 2012-07-01 true May I retain any balance left over from my househunting reimbursement if my lump sum is more than adequate to cover my... Expenses § 302-5.18 May I retain any balance left over from my househunting reimbursement if my lump sum is...

  5. Design Document for Differential GPS Ground Reference Station Pseudorange Correction Generation Algorithm

    DOT National Transportation Integrated Search

    1986-12-01

    The algorithms described in this report determine the differential corrections to be broadcast to users of the Global Positioning System (GPS) who require higher accuracy navigation or position information than the 30 to 100 meters that GPS normally ...

  6. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  7. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
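
    The weighting step can be sketched as a linear least-squares fit of the basis images to the prior (a sketch of the weighting only: in the paper the basis images come from filtered-backprojection reconstructions of the raw projection data raised to different powers, which is not reproduced here, so the synthetic basis images in the test are placeholders):

```python
import numpy as np

def scatter_correction_weights(basis_images, prior):
    """Least-squares weights w minimizing || sum_k w_k * B_k - prior ||_2,
    with each basis image flattened into one column of the design matrix."""
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    w, *_ = np.linalg.lstsq(A, prior.ravel(), rcond=None)
    return w

def corrected_image(basis_images, w):
    """Weighted summation of the basis images with the fitted weights."""
    return sum(wk * b for wk, b in zip(w, basis_images))
```

    The prior only steers the weights; the corrected image itself is built entirely from the current (high-scatter) acquisition, which is why patient-specific detail is preserved.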

  8. Design of the OMPS limb sensor correction algorithm

    NASA Astrophysics Data System (ADS)

    Jaross, Glen; McPeters, Richard; Seftor, Colin; Kowitt, Mark

    The Sensor Data Records (SDRs) for the Ozone Mapping and Profiler Suite (OMPS) on NPOESS (National Polar-orbiting Operational Environmental Satellite System) contain geolocated and calibrated radiances, similar to the Level 1 data of the NASA Earth Observing System and other programs. The SDR algorithms (one for each of the three OMPS focal planes) are the processes by which Raw Data Records (RDRs) from the OMPS sensors are converted into records that contain all data necessary for ozone retrievals. Consequently, the algorithms must correct and calibrate Earth signals, geolocate the data, and identify and ingest collocated ancillary data. As with other limb sensors, ozone profile retrievals are relatively insensitive to calibration errors due to the use of altitude normalization and wavelength pairing, but the profile retrievals as they pertain to OMPS are not immune to sensor changes. In particular, the OMPS limb sensor images an altitude range of > 100 km and a spectral range of 290-1000 nm on its detector, and uncorrected sensor degradation and spectral registration drifts can lead to changes in the measured radiance profile, which in turn affect the ozone trend measurement. Since OMPS is intended for long-term monitoring, sensor calibration is a specific concern. The calibration is maintained via the ground data processing: all sensor calibration data, including direct solar measurements, are brought down in the raw data and processed separately by the SDR algorithms. One of the sensor corrections performed by the algorithm is the correction for stray light. The imaging spectrometer and the unique focal plane design of OMPS make this correction particularly challenging and important. Following an overview of the algorithm flow, we briefly describe the sensor stray-light characterization and the correction approach used in the code.

  9. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws.

    PubMed

    Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman

    2014-12-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. 
This may facilitate intra- and postoperative follow-up imaging.

  10. Assessment of a Bidirectional Reflectance Distribution Correction of Above-Water and Satellite Water-Leaving Radiance in Coastal Waters

    NASA Technical Reports Server (NTRS)

    Hlaing, Soe; Gilerson, Alexander; Harmal, Tristan; Tonizzo, Alberto; Weidemann, Alan; Arnone, Robert; Ahmed, Samir

    2012-01-01

    Water-leaving radiances, retrieved from in situ or satellite measurements, need to be corrected for the bidirectional properties of the measured light in order to standardize the data and make them comparable with each other. The current operational algorithm for the correction of bidirectional effects from satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms specifically tuned for typical coastal waters and other case 2 conditions are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a case 2 water focused remote sensing reflectance model is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one-year time series of in situ above-water measurements acquired by collocated multispectral and hyperspectral radiometers with different viewing geometries, installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths, with an average improvement of 2.4% over the spectral range. LISCO's time series data have also been used to evaluate improvements in match-up comparisons of Moderate Resolution Imaging Spectroradiometer satellite data when the proposed BRDF correction is used in lieu of the current algorithm. It is shown that the discrepancies between coincident in situ sea-based and satellite data decreased by 3.15% with the use of the proposed algorithm.

  11. Segmented Gamma Scanner for Small Containers of Uranium Processing Waste- 12295

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, K.E.; Smith, S.K.; Gailey, S.

    2012-07-01

    The Segmented Gamma Scanner (SGS) is commonly utilized in the assay of 55-gallon drums containing radioactive waste. Successfully deployed calibration methods include measurement of vertical line source standards in representative matrices and mathematical efficiency calibrations. The SGS technique can also be utilized to assay smaller containers, such as those used for criticality safety in uranium processing facilities. For such an application, a Can SGS System is aptly suited for the identification and quantification of radionuclides present in fuel processing wastes. Additionally, since the significant presence of uranium lumping can confound even a simple 'pass/fail' measurement regimen, high-resolution gamma spectroscopy allows for the use of lump-detection techniques. In this application a lump correction is not required, but a differential peak approach is applied simply to identify the presence of U-235 lumps. The Can SGS is similar to current drum SGSs but differs in the methodology for vertical segmentation. In a drum SGS, the drum is placed on a rotator at a fixed vertical position while the detector, collimator, and transmission source are moved vertically to effect vertical segmentation. For the Can SGS, segmentation is done more efficiently by raising and lowering the rotator platform upon which the small container is positioned, which also reduces the complexity of the system mechanism. The application of the Can SGS introduces new challenges to traditional calibration and verification approaches. In this paper, we revisit SGS calibration methodology in the context of smaller waste containers, as applied to fuel processing wastes. Specifically, we discuss solutions to the challenges introduced by requiring source standards to fit within the confines of the small containers and by the unavailability of high-enriched uranium source standards. We also discuss the implementation of a previously used technique for identifying the presence of uranium lumping. The SGS technique is a well-accepted NDA technique applicable to containers of almost any size. It assumes a homogeneous matrix and activity distribution throughout the entire container, an assumption that is at odds with the detection of lumps within the assay item typical of uranium-processing waste. This fact, in addition to the difficulty of constructing small reference standards of uranium-bearing materials, required the methodology used for performing an efficiency curve calibration to be altered. The solution discussed in this paper is demonstrated to provide good results for both the segment activity and the full container activity when measuring heterogeneous source distributions. The application of this approach will need to be based on process knowledge of the assay items, as biases can be introduced if it is used with homogeneous, or nearly homogeneous, activity distributions. The bias will need to be quantified for each combination of container geometry and SGS scanning settings. One recommended approach for using the heterogeneous calibration discussed here is to assay each item using a homogeneous calibration initially. Review of the segment activities compared to the full container activity will signal the presence of a non-uniform activity distribution, as the segment activity will be grossly disproportionate to the full container activity. Upon seeing this result, the assay should be reanalyzed or repeated using the heterogeneous calibration. (authors)

  12. Predicting nitrate discharge dynamics in mesoscale catchments using the lumped StreamGEM model and Bayesian parameter inference

    NASA Astrophysics Data System (ADS)

    Woodward, Simon James Roy; Wöhling, Thomas; Rode, Michael; Stenger, Roland

    2017-09-01

    The common practice of infrequent (e.g., monthly) stream water quality sampling for state-of-the-environment monitoring may, when combined with high-resolution stream flow data, provide sufficient information to accurately characterise the dominant nutrient transfer pathways and predict annual catchment yields. In the proposed approach, we use the spatially lumped catchment model StreamGEM to predict daily stream flow and nitrate concentration (mg L-1 NO3-N) in four contrasting mesoscale headwater catchments based on four years of daily rainfall, potential evapotranspiration, and stream flow measurements, and monthly or daily nitrate concentrations. Posterior model parameter distributions were estimated using the Markov chain Monte Carlo sampling code DREAMZS and a log-likelihood function assuming heteroscedastic, t-distributed residuals. Despite high uncertainty in some model parameters, the flow and nitrate calibration data were well reproduced across all catchments (Nash-Sutcliffe efficiency of log-transformed data, NSL, in the range 0.62-0.83 for daily flow and 0.17-0.88 for nitrate concentration). The slight increase in the size of the residuals for a separate validation period was considered acceptable (NSL in the range 0.60-0.89 for daily flow and 0.10-0.74 for nitrate concentration, excluding one data set with limited validation data). Proportions of flow and nitrate discharge attributed to near-surface, fast seasonal groundwater, and slow deeper groundwater pathways were consistent with expectations based on catchment geology. The results for the Weida Stream in Thuringia, Germany, using monthly as opposed to daily nitrate data were, for all intents and purposes, identical, suggesting that four years of monthly nitrate sampling provides sufficient information for calibration of the StreamGEM model and prediction of catchment dynamics.
This study highlights the remarkable effectiveness of process based, spatially lumped modelling with commonly available monthly stream sample data, to elucidate high resolution catchment function, when appropriate calibration methods are used that correctly handle the inherent uncertainties.
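
    The NSL goodness-of-fit figures quoted above can be sketched as a Nash-Sutcliffe efficiency computed on log-transformed series (the epsilon guard against log(0) and the exact transform are assumptions):

```python
import numpy as np

def nash_sutcliffe_log(obs, sim, eps=1e-6):
    """Nash-Sutcliffe efficiency on log-transformed observed/simulated
    series: 1 minus the ratio of residual to observed log-variance.
    1.0 is a perfect fit; 0.0 is no better than the (log-)mean."""
    lo = np.log(np.asarray(obs, dtype=float) + eps)
    ls = np.log(np.asarray(sim, dtype=float) + eps)
    return 1.0 - np.sum((lo - ls) ** 2) / np.sum((lo - lo.mean()) ** 2)
```

    The log transform weights low-flow (and low-concentration) periods more heavily than the plain Nash-Sutcliffe efficiency, which is dominated by peaks.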

  13. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
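
    The Euclidean key-equation step can be sketched with polynomials over a small prime field (GF(7) here purely for readability; actual RS decoders work over GF(2^m), and the erasure seeding described above, replacing the initial condition with the erasure locator and Forney syndrome polynomials, is omitted):

```python
P = 7  # small prime field for illustration; real RS decoders use GF(2^m)

def trim(a):
    # Drop trailing zero coefficients (lists are lowest-order first).
    while a and a[-1] == 0:
        a.pop()
    return a

def padd(a, b, sub=False):
    s = -1 if sub else 1
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0)
                  + s * (b[i] if i < len(b) else 0)) % P for i in range(n)])

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def pdivmod(a, b):
    # Polynomial long division over GF(P).
    a = a[:]
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], P - 2, P)        # inverse of the leading coefficient
    for k in range(len(a) - len(b), -1, -1):
        c = a[k + len(b) - 1] * inv % P
        q[k] = c
        for i, bc in enumerate(b):
            a[k + i] = (a[k + i] - c * bc) % P
    return trim(q), trim(a)

def solve_key_equation(S, two_t):
    """Run the Euclidean algorithm on x^(2t) and the syndrome S(x),
    stopping once the remainder degree drops below t.  The final
    remainder acts as the evaluator polynomial and the Bezout
    multiplier of S as the locator polynomial."""
    r0, r1 = [0] * two_t + [1], trim(S[:])
    v0, v1 = [], [1]
    while r1 and len(r1) - 1 >= two_t // 2:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        v0, v1 = v1, padd(v0, pmul(q, v1), sub=True)
    return v1, r1  # locator, evaluator (lowest-order first)
```

    By construction every remainder satisfies r = u * x^(2t) + v * S, so the returned pair obeys the key equation locator * S = evaluator (mod x^(2t)). For the (15, 9) RS code of the example, two_t = 6 and the loop stops once the remainder degree falls below 3.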

  14. Atmospheric correction of SeaWiFS imagery for turbid coastal and inland waters.

    PubMed

    Ruddick, K G; Ovidio, F; Rijkeboer, M

    2000-02-20

    The standard SeaWiFS atmospheric correction algorithm, designed for open ocean water, has been extended for use over turbid coastal and inland waters. Failure of the standard algorithm over turbid waters can be attributed to invalid assumptions of zero water-leaving radiance for the near-infrared bands at 765 and 865 nm. In the present study these assumptions are replaced by the assumptions of spatial homogeneity of the 765:865-nm ratios for aerosol reflectance and for water-leaving reflectance. These two ratios are imposed as calibration parameters after inspection of the Rayleigh-corrected reflectance scatterplot. The performance of the new algorithm is demonstrated for imagery of Belgian coastal waters and yields physically realistic water-leaving radiance spectra. A preliminary comparison with in situ radiance spectra for the Dutch Lake Markermeer shows significant improvement over the standard atmospheric correction algorithm. An analysis is made of the sensitivity of results to the choice of calibration parameters, and perspectives for application of the method to other sensors are briefly discussed.
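
    The two homogeneity assumptions reduce the per-pixel correction to solving two linear equations in two unknowns, which can be sketched as follows (the symbol names and the algebra-only treatment are mine):

```python
def split_aerosol_water(rho_765, rho_865, eps_a, alpha_w):
    """Separate Rayleigh-corrected reflectance at 765 and 865 nm into
    aerosol and water-leaving parts, assuming spatially uniform band
    ratios eps_a = rho_a(765)/rho_a(865) and alpha_w = rho_w(765)/rho_w(865):
        rho(765) = eps_a  * rho_a(865) + alpha_w * rho_w(865)
        rho(865) =          rho_a(865) +           rho_w(865)
    """
    rho_a_865 = (rho_765 - alpha_w * rho_865) / (eps_a - alpha_w)
    rho_w_865 = rho_865 - rho_a_865
    return rho_a_865, rho_w_865
```

    With NumPy arrays in place of scalars the same two lines apply per pixel across a whole scene; the aerosol part at 865 nm is then extrapolated to the visible bands as in the standard correction.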

  15. A robust in-situ warp-correction algorithm for VISAR streak camera data at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-02-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
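
    The TPS machinery the abstract refers to can be sketched generically. This is textbook thin-plate spline interpolation, not the NIF production code: fiducial (comb) positions are mapped to known values, and the fitted spline is then evaluated at arbitrary coordinates.

```python
# Minimal thin-plate spline (TPS) interpolation sketch, the building block
# of a warp correction: solve for weights w and affine terms a so that
# f(x, y) = a0 + a1*x + a2*y + sum_i w_i * U(|p - p_i|) interpolates the
# fiducial displacements, with kernel U(r) = r^2 log r^2 (U(0) = 0).
import numpy as np

def tps_kernel(r2):
    out = np.zeros_like(r2)
    nz = r2 > 0
    out[nz] = r2[nz] * np.log(r2[nz])
    return out

def tps_fit(points, values):
    """Solve the standard TPS linear system [[K, P], [P.T, 0]]."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_kernel(d2)
    A[:n, n:] = np.hstack([np.ones((n, 1)), points])
    A[n:, :n] = A[:n, n:].T
    rhs = np.concatenate([values, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]          # radial weights, affine terms

def tps_eval(points, w, a, xy):
    d2 = ((xy[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return tps_kernel(d2) @ w + a[0] + a[1] * xy[:, 0] + a[2] * xy[:, 1]

# Sanity check: a TPS must reproduce the fiducial displacements exactly.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(12, 2))
disp = 0.5 * pts[:, 0] - 0.2 * pts[:, 1] + rng.normal(0, 1, 12)
w, a = tps_fit(pts, disp)
recovered = tps_eval(pts, w, a, pts)
```

    In a warp correction, one such spline would be fitted per image axis and evaluated over the full pixel grid to resample the data image.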

  16. A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors

    PubMed Central

    Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca

    2012-01-01

    Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. 
Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
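
    The LTI part of the lag model above can be sketched as a running deconvolution: the measured frame sequence is the true exposure sequence convolved with a causal impulse response function, so correction inverts that convolution frame by frame. The multi-exponential coefficients below are illustrative; in practice they come from step-response calibration (and, in the NLCSC method, depend on exposure).

```python
# Sketch of LTI lag correction: y = x * h (causal convolution with the
# IRF), corrected by direct recursive deconvolution.  IRF parameters here
# are invented placeholders, not measured detector values.
import math

def irf(n_frames, amps=(0.02, 0.01), taus=(3.0, 30.0)):
    """Causal IRF: prompt delta plus decaying exponentials, normalized to
    unit area so steady-state signal levels are preserved."""
    h = [1.0] + [sum(a * math.exp(-k / t) for a, t in zip(amps, taus))
                 for k in range(1, n_frames)]
    s = sum(h)
    return [v / s for v in h]

def convolve(x, h):
    return [sum(x[j] * h[k - j] for j in range(k + 1)) for k in range(len(x))]

def deconvolve(y, h):
    """Invert y = x * h frame by frame: x[k] = (y[k] - tail) / h[0]."""
    x = []
    for k in range(len(y)):
        tail = sum(x[j] * h[k - j] for j in range(k))
        x.append((y[k] - tail) / h[0])
    return x

# Step exposure: lag shows up as residual signal after the step falls.
h = irf(100)
x_true = [1.0] * 50 + [0.0] * 50
y = convolve(x_true, h)            # y[50:] > 0: residual "lag" frames
x_corr = deconvolve(y, h)          # recovers the true step
```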

  17. Evaluation of atmospheric correction algorithms for processing SeaWiFS data

    NASA Astrophysics Data System (ADS)

    Ransibrahmanakul, Varis; Stumpf, Richard; Ramachandran, Sathyadev; Hughes, Kent

    2005-08-01

    To enable the production of the best chlorophyll products from SeaWiFS data, NOAA (Coastwatch and NOS) evaluated the various atmospheric correction algorithms by comparing the satellite-derived water reflectance for each algorithm with in situ data. Gordon and Wang (1994) introduced a method to correct for Rayleigh and aerosol scattering in the atmosphere so that water reflectance may be derived from the radiance measured at the top of the atmosphere. However, since the correction assumed near-infrared scattering to be negligible, an assumption that is invalid in coastal waters, the method overestimates the atmospheric contribution and consequently underestimates water reflectance for the lower-wavelength bands on extrapolation. Several improved methods to estimate the near-infrared correction exist: Siegel et al. (2000); Ruddick et al. (2000); Stumpf et al. (2002); and Stumpf et al. (2003), where an absorbing aerosol correction is also applied along with an additional 1.01% calibration adjustment for the 412 nm band. The evaluation shows that the near-infrared correction developed by Stumpf et al. (2003) results in an overall minimum error for U.S. waters. As of July 2004, NASA (SEADAS) has selected this as the default method for the atmospheric correction used to produce chlorophyll products.

  18. Automated general temperature correction method for dielectric soil moisture sensors

    NASA Astrophysics Data System (ADS)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks rely extensively on highly temperature-sensitive dielectric sensors because of their low cost, ease of use, and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and for those covering diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors that can be used regardless of differences in sensor type, climatic conditions, and soil type, and that does not require rainfall data. An automated general temperature correction method was developed by adapting previously developed temperature correction algorithms based on time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II, and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. 
    Furthermore, temperature effects were found to shift the actual daily average SWC by an amount comparable to the manufacturer's stated ±1% accuracy.
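
    The abstract does not spell out the correction formula, so the sketch below only illustrates the general idea: over a rain-free window the true SWC is assumed to vary slowly, so any component of the reading that co-varies with sensor temperature is treated as artifact, estimated by least squares, and removed. The function name and synthetic data are hypothetical.

```python
# Generic temperature-detrending sketch (an assumption, not the paper's
# published algorithm): fit SWC = a + b*T over a rain-free window and
# subtract the temperature-coupled component around the mean temperature.
import math

def temp_correct(swc, temp):
    n = len(swc)
    mt = sum(temp) / n
    ms = sum(swc) / n
    b = sum((t - mt) * (s - ms) for t, s in zip(temp, swc)) / \
        sum((t - mt) ** 2 for t in temp)
    return [s - b * (t - mt) for s, t in zip(swc, temp)]

# Synthetic diurnal cycle: constant true SWC of 0.25 plus a purely
# temperature-coupled artifact of 0.002 per degree.
temp = [20 + 8 * math.sin(2 * math.pi * k / 24) for k in range(72)]
swc_meas = [0.25 + 0.002 * (t - 20) for t in temp]
swc_corr = temp_correct(swc_meas, temp)
```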

  19. Assessment of a bidirectional reflectance distribution correction of above-water and satellite water-leaving radiance in coastal waters.

    PubMed

    Hlaing, Soe; Gilerson, Alexander; Harmel, Tristan; Tonizzo, Alberto; Weidemann, Alan; Arnone, Robert; Ahmed, Samir

    2012-01-10

    Water-leaving radiances, retrieved from in situ or satellite measurements, need to be corrected for the bidirectional properties of the measured light in order to standardize the data and make them comparable with each other. The current operational algorithm for the correction of bidirectional effects in satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms specifically tuned for typical coastal waters and other case 2 conditions are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a case 2 water focused remote sensing reflectance model is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one-year time series of in situ above-water measurements acquired by collocated multispectral and hyperspectral radiometers with different viewing geometries, installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths, with an average improvement of 2.4% over the spectral range. LISCO's time series data have also been used to evaluate improvements in match-up comparisons of Moderate Resolution Imaging Spectroradiometer satellite data when the proposed BRDF correction is used in lieu of the current algorithm. It is shown that the discrepancies between coincident in-situ sea-based and satellite data decreased by 3.15% with the use of the proposed algorithm. 
This confirms the advantages of the proposed model over the current one, demonstrating the need for a specific case 2 water BRDF correction algorithm as well as the feasibility of enhancing performance of current and future satellite ocean color remote sensing missions for monitoring of typical coastal waters. © 2012 Optical Society of America

  20. Potassium-based algorithm allows correction for the hematocrit bias in quantitative analysis of caffeine and its major metabolite in dried blood spots.

    PubMed

    De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P

    2014-10-01

    Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K(+)) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K(+) concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K(+) concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K(+)-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
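
    The regression coefficients of the paper are not given in the abstract, so the sketch below only illustrates the two-step idea: predict hematocrit from the potassium level of the spot, then rescale the measured concentration with a linear Hct-bias model fitted on the reference subset. All coefficients, function names, and numbers are hypothetical placeholders.

```python
# Illustrative (hypothetical) two-step K+-based Hct correction: all slope
# and intercept values below are invented for demonstration only.

def predict_hct(k_mmol_l, slope=0.065, intercept=-0.05):
    """Hypothetical linear K+ -> hematocrit calibration."""
    return slope * k_mmol_l + intercept

def correct_for_hct(conc_dbs, hct, hct_ref=0.36, bias_slope=1.3):
    """Rescale a DBS concentration measured at hematocrit `hct` to a
    reference hematocrit, assuming relative bias linear in (hct - ref)."""
    return conc_dbs / (1.0 + bias_slope * (hct - hct_ref))

# A spot with high K+ (implying high Hct) reads high; the correction
# pulls the concentration back toward the whole-blood value.
hct = predict_hct(7.0)
corrected = correct_for_hct(5.5, hct)
```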

  1. Awareness that early cancer lump is painless could decrease breast cancer mortality in developing countries.

    PubMed

    Garg, Pankaj

    2016-06-10

    There are several factors which contribute to patients' reporting late to a healthcare facility even after detecting a breast lump (patient delay). Amongst these, one of the important factors in low- and middle-income countries is lack of awareness that an early cancer lump is painless (ECLIPs). Pain is often taken as a danger sign, and absence of pain is often not taken seriously. Studies have shown that up to 98% of women in low-income countries are unaware that a painless lump could be a warning sign of early breast cancer. This is significant because it may be one of the prime reasons why women who discover a painless lump in the breast, accidentally or by breast self-examination, presume it to be harmless and do not report early to a healthcare facility. Therefore, creating awareness about ECLIPs could be an effective strategy to reduce mortality due to breast cancer in low- and middle-income countries. Moreover, unlike modifying risk factors, which requires long-term behavior modification, creating awareness about ECLIPs is easy and cost-effective.

  2. An empirical method to correct for temperature-dependent variations in the overlap function of CHM15k ceilometers

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Poltera, Yann; Haefele, Alexander

    2016-07-01

    Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. 
The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
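
    The final modeling step described above can be sketched as a per-range linear fit: once overlap corrections have been collected under favourable conditions at various internal temperatures, a model correction(r, T) = c0(r) + c1(r)·T is fitted and applied to any profile. The synthetic data below stand in for real ceilometer overlap corrections.

```python
# Per-range linear temperature model of the overlap correction, fitted by
# least squares.  Shapes, scales, and decay lengths are illustrative only.
import numpy as np

def fit_temperature_model(temps, corrections):
    """corrections: shape (n_cases, n_ranges); returns c0(r), c1(r)."""
    T = np.column_stack([np.ones_like(temps), temps])
    coef, *_ = np.linalg.lstsq(T, corrections, rcond=None)
    return coef[0], coef[1]

def apply_model(c0, c1, temperature):
    return c0 + c1 * temperature

# Synthetic "truth": correction largest at short range, growing with T.
rng = np.random.default_rng(1)
ranges = np.linspace(15, 300, 20)
c0_true = 0.5 * np.exp(-ranges / 100.0)
c1_true = 0.01 * np.exp(-ranges / 150.0)
temps = rng.uniform(10, 40, size=30)
data = c0_true + np.outer(temps, c1_true)       # one row per case
c0, c1 = fit_temperature_model(temps, data)
```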

  3. Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data

    NASA Technical Reports Server (NTRS)

    Song, S.; Moore, R. K.

    1996-01-01

    The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms to use with the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured, i.e., the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this, one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results will occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm: the method based on the scattering coefficient needs the wind direction from the preceding cell, and the method using brightness temperature alone needs the wind speed. If neither is available, the algorithm can still work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.
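
    The iterative structure of such a correction can be illustrated with a toy model. Everything below is invented for illustration: the linear wind-to-brightness-temperature and wind-to-cross-section relations and the effective layer temperature are stand-ins, not the real AMSR/SeaWinds geophysical model functions.

```python
# Toy iterative attenuation correction in the spirit of the abstract:
# estimate surface emission from a wind guess, infer the layer
# transmittance from the excess brightness temperature, correct the
# two-way-attenuated cross section, and re-estimate the wind.
T_LAYER = 275.0                       # effective rain-layer temperature, K

def tb_surface(wind):                 # toy wind -> surface brightness temp
    return 110.0 + 0.7 * wind

def sigma0_model(wind):               # toy wind -> radar cross section
    return 0.004 + 0.0012 * wind

def wind_from_sigma0(s0):
    return (s0 - 0.004) / 0.0012

def correct_sigma0(tb_meas, sigma0_meas, n_iter=20):
    wind = wind_from_sigma0(sigma0_meas)       # first guess: no attenuation
    for _ in range(n_iter):
        t_surf = tb_surface(wind)
        trans = 1.0 - (tb_meas - t_surf) / (T_LAYER - t_surf)  # one-way
        sigma0_corr = sigma0_meas / trans**2                   # two-way
        wind = wind_from_sigma0(sigma0_corr)
    return sigma0_corr, wind

# Synthetic truth: wind 10 m/s, one-way transmittance 0.9.
trans_true = 0.9
tb = tb_surface(10.0) * trans_true + T_LAYER * (1 - trans_true)
s0_meas = sigma0_model(10.0) * trans_true**2
s0_corr, wind_est = correct_sigma0(tb, s0_meas)
```

    The fixed point of the loop recovers the unattenuated cross section and the true wind; the oscillating iteration converges because each pass shrinks the wind error substantially.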

  4. Improved forest change detection with terrain illumination corrected landsat images

    USDA-ARS?s Scientific Manuscript database

    An illumination correction algorithm has been developed to improve the accuracy of forest change detection from Landsat reflectance data. This algorithm is based on an empirical rotation model and was tested on the Landsat imagery pair over Cherokee National Forest, Tennessee, Uinta-Wasatch-Cache N...

  5. Mastery Multiplied

    ERIC Educational Resources Information Center

    Shumway, Jessica F.; Kyriopoulos, Joan

    2014-01-01

    Being able to find the correct answer to a math problem does not always indicate solid mathematics mastery. A student who knows how to apply the basic algorithms can correctly solve problems without understanding the relationships between numbers or why the algorithms work. The Common Core standards require that students actually understand…

  6. Distributed Sensing and Shape Control of Piezoelectric Bimorph Mirrors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redmond, James M.; Barney, Patrick S.; Henson, Tammy D.

    1999-07-28

    As part of a collaborative effort between Sandia National Laboratories and the University of Kentucky to develop a deployable mirror for remote sensing applications, research in shape sensing and control algorithms that leverage the distributed nature of electron gun excitation for piezoelectric bimorph mirrors is summarized. A coarse shape sensing technique is developed that uses reflected light rays from the sample surface to provide discrete slope measurements. Estimates of surface profiles are obtained with a cubic spline curve fitting algorithm. Experiments on a PZT bimorph illustrate appropriate deformation trends as a function of excitation voltage. A parallel effort to effect desired shape changes through electron gun excitation is also summarized. A one-dimensional model-based algorithm is developed to correct profile errors in bimorph beams. A more useful two-dimensional algorithm is also developed that relies on measured voltage-curvature sensitivities to provide corrective excitation profiles for the top and bottom surfaces of bimorph plates. The two algorithms are illustrated using finite element models of PZT bimorph structures subjected to arbitrary disturbances. Corrective excitation profiles that yield desired parabolic forms are computed, and are shown to provide the necessary corrective action.
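
    The sensitivity-based idea can be sketched generically: if a matrix S maps electrode excitation voltages to measured profile (or curvature) changes, a corrective voltage pattern for a measured profile error e is the least-squares solution of S·v = −e. The matrix below is synthetic; in the work described it would be measured.

```python
# Least-squares corrective excitation from a measured sensitivity matrix.
# A synthetic random S stands in for measured voltage-curvature
# sensitivities; the disturbance is likewise synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_points, n_electrodes = 40, 8
S = rng.normal(size=(n_points, n_electrodes))   # sensitivities (synthetic)
v_disturb = rng.normal(size=n_electrodes)
error = S @ v_disturb                           # profile error from disturbance

# Corrective voltages: minimize || S v + error ||.
v_corr, *_ = np.linalg.lstsq(S, -error, rcond=None)
residual = error + S @ v_corr                   # profile after correction
```

    Because the synthetic error lies in the range of S, the correction here cancels it exactly; with a real measured profile the residual would simply be minimized in the least-squares sense.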

  7. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  8. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique with no additional cost, avoiding extra field work and indoor processing, and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors. The 3D inversion can therefore run on a general PC with high efficiency and accuracy, and all MT data from surface stations, seabed stations, and underground stations can be used in the inversion algorithm.
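
    The NLCG loop underlying such inversion codes can be sketched generically. This is a textbook Polak-Ribière conjugate gradient on a toy quadratic misfit, not the MT code itself, which adds regularization, 3D forward modelling, and parallel execution.

```python
# Generic nonlinear conjugate gradient (Polak-Ribiere+) with a secant-type
# line search that is exact for quadratic misfits.  Toy objective:
# 0.5 * (x - m)^T A (x - m), gradient A (x - m), minimum at m.
import numpy as np

def nlcg(grad_fn, x0, n_iter=50):
    x = np.asarray(x0, dtype=float)
    g = grad_fn(x)
    d = -g
    for _ in range(n_iter):
        if g @ g < 1e-24:                      # converged
            break
        curv = d @ (grad_fn(x + d) - g)        # = d^T H d for quadratics
        alpha = -(g @ d) / curv                # exact step along d
        x = x + alpha * d
        g_new = grad_fn(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ restart rule
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])         # SPD toy Hessian
m = np.array([1.0, -2.0])                      # "true model"
x_min = nlcg(lambda x: A @ (x - m), np.zeros(2))
```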

  9. Optimizing wavefront-guided corrections for highly aberrated eyes in the presence of registration uncertainty

    PubMed Central

    Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.

    2013-01-01

    Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line improvement of average visual acuity over the full magnitude and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
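
    A minimal SPGD loop of the kind named above can be sketched as follows: all coefficients are perturbed simultaneously by random ±δ, the change in the quality metric between the two perturbed states is measured, and each coefficient is nudged in the direction that improved the metric. The quadratic metric here is a toy stand-in for log visual Strehl; gains and perturbation sizes are illustrative.

```python
# Stochastic parallel gradient descent (SPGD): two metric evaluations per
# iteration, no explicit gradient.  Metric and parameters are toy values.
import random

def spgd_maximize(metric, u, delta=0.05, gain=1.0, n_iter=2000, seed=0):
    rng = random.Random(seed)
    u = list(u)
    for _ in range(n_iter):
        p = [delta if rng.random() < 0.5 else -delta for _ in u]
        dj = metric([a + b for a, b in zip(u, p)]) - \
             metric([a - b for a, b in zip(u, p)])
        u = [a + gain * dj * b for a, b in zip(u, p)]   # follow improvement
    return u

# Toy metric peaked at target coefficients unknown to the optimizer.
target = [0.3, -0.7, 0.1]
metric = lambda u: -sum((a - b) ** 2 for a, b in zip(u, target))
u_opt = spgd_maximize(metric, [0.0, 0.0, 0.0])
```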

  10. Effect of shape on bone cement polymerization time in knee joint replacement surgery

    PubMed Central

    Yoon, Jung-Ro; Ko, Young-Rok; Shin, Young-Soo

    2018-01-01

    Abstract Background: Although many factors are known to influence the polymerization time of bone cement, it remains unclear which bone cement shape predicts the precise polymerization time. The purpose of this study was to investigate whether different cement shapes influenced polymerization time and to identify the relationship between cement shape and ambient operating theater temperature, relative humidity, and equilibration time. Methods: Samples were gathered prospectively from 237 patients undergoing primary total knee arthroplasty. The cement components were made into 2 different shapes: lump and pan. The time at which no macroscopic indentation of both cement models was possible was recorded as the polymerization time. Results: There was no significant difference between hand mixing (lump shape: 789.3 ± 128.4 seconds, P = .591; pan shape: 899.3 ± 152.2 seconds, P = .584) and vacuum mixing (lump shape: 780.2 ± 131.1 seconds, P = .591; pan shape: 909.9 ± 143.3 seconds, P = .584) in terms of polymerization time. Conversely, the polymerization time was significantly shorter for Antibiotic Simplex (lump shape: 757.4 ± 114.9 seconds, P = .001; pan shape: 879.5 ± 125.0 seconds, P < .001) when compared with Palacos R+G (lump shape: 829.0 ± 139.3 seconds, P = .001; pan shape: 942.9 ± 172.0 seconds, P < .001). Polymerization time was also significantly longer (P < .001) for the pan shape model (904 ± 148.0 seconds) when compared with the lump shape model (785.2 ± 129.4 seconds). In addition, the polymerization time decreased with increasing temperature (lump shape: R2 = 0.334, P < .001; pan shape: R2 = 0.375, P < .001), humidity (lump shape: R2 = 0.091, P < .001; pan shape: R2 = 0.106, P < .001), and equilibration time (lump shape: R2 = 0.073, P < .001; pan shape: R2 = 0.044, P < .001). Conclusions: The polymerization time was equally affected by temperature, relative humidity, and equilibration time regardless of bone cement shape. 
Furthermore, the pan shape model better reflected the cement polymerization time between implant and bone compared with the lump shape model. The current findings suggest that, clinically, constant pressure with the knee in <45° of flexion needs to be applied until remaining pan shaped cement is completely polymerized. PMID:29703041

  11. 7 CFR 1726.205 - Multiparty lump sum quotations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., DEPARTMENT OF AGRICULTURE ELECTRIC SYSTEM CONSTRUCTION POLICIES AND PROCEDURES Procurement Procedures § 1726.205 Multiparty lump sum quotations. The borrower or its engineer must contact a sufficient number of...

  12. 7 CFR 1726.205 - Multiparty lump sum quotations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., DEPARTMENT OF AGRICULTURE ELECTRIC SYSTEM CONSTRUCTION POLICIES AND PROCEDURES Procurement Procedures § 1726.205 Multiparty lump sum quotations. The borrower or its engineer must contact a sufficient number of...

  13. 7 CFR 1726.205 - Multiparty lump sum quotations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., DEPARTMENT OF AGRICULTURE ELECTRIC SYSTEM CONSTRUCTION POLICIES AND PROCEDURES Procurement Procedures § 1726.205 Multiparty lump sum quotations. The borrower or its engineer must contact a sufficient number of...

  14. 7 CFR 1726.205 - Multiparty lump sum quotations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., DEPARTMENT OF AGRICULTURE ELECTRIC SYSTEM CONSTRUCTION POLICIES AND PROCEDURES Procurement Procedures § 1726.205 Multiparty lump sum quotations. The borrower or its engineer must contact a sufficient number of...

  15. 7 CFR 1726.205 - Multiparty lump sum quotations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., DEPARTMENT OF AGRICULTURE ELECTRIC SYSTEM CONSTRUCTION POLICIES AND PROCEDURES Procurement Procedures § 1726.205 Multiparty lump sum quotations. The borrower or its engineer must contact a sufficient number of...

  16. Coastal Zone Color Scanner atmospheric correction - Influence of El Chichon

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1988-01-01

    The addition of an El Chichon-like aerosol layer in the stratosphere is shown to have very little effect on the basic CZCS atmospheric correction algorithm. The additional stratospheric aerosol is found to increase the total radiance exiting the atmosphere, thereby increasing the probability that the sensor will saturate. It is suggested that in the absence of saturation the correction algorithm should perform as well as in the absence of the stratospheric layer.

  17. Breast Abscess Mimicking Breast Carcinoma in Male.

    PubMed

    Gochhait, Debasis; Dehuri, Priyadarshini; Umamahesweran, Sandyya; Kamat, Rohan

    2018-01-01

    The male breast can show almost all pathological entities described in the female breast. Inflammatory conditions of the breast in males are not common; however, occasionally, they can be encountered in the form of an abscess. Clinically, gynecomastia always presents as a symmetric unilateral or bilateral lump in the retroareolar region, and any irregular asymmetric lump raises a possibility of malignancy. Radiology should be used as part of the triple assessment protocol for a breast lump, along with fine-needle aspiration cytology, for definite diagnosis and proper management.

  18. Gravity controlled anti-reverse rotation device

    DOEpatents

    Dickinson, Robert J.; Wetherill, Todd M.

    1983-01-01

    A gravity-assisted anti-reverse rotation device for preventing reverse rotation of pumps and the like. A horizontally mounted pawl is disposed to mesh with a fixed ratchet, preventing reverse rotation when the pawl is advanced into engagement with the ratchet by a vertically mounted lever carrying a lumped mass. Gravitational action on the lumped mass urges the pawl into mesh with the ratchet, while centrifugal force on the lumped mass during forward (allowed) rotation retracts the pawl away from the ratchet.

  19. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    NASA Astrophysics Data System (ADS)

    Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
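    The exact inversion is not given in the abstract, but the underlying geometry can be sketched with the standard 2D image-charge model of a cylindrical BPM. The sketch below (all values illustrative; `pue_signal`, `linear_position`, and the 30 mm pipe radius are assumptions, not the paper's code) shows why the common first-order difference-over-sum estimate is accurate for small offsets yet degrades badly for large ones, which is exactly the regime the new algorithm targets:

```python
import numpy as np

def pue_signal(theta, x0, y0, b=30.0):
    """Image-charge signal density on an infinitesimal-width electrode at
    azimuth theta, for a pencil beam at (x0, y0) in a pipe of radius b
    (standard 2D electrostatic result)."""
    r2 = x0**2 + y0**2
    phi = np.arctan2(y0, x0)
    return (b**2 - r2) / (b**2 + r2 - 2.0 * b * np.sqrt(r2) * np.cos(theta - phi))

def linear_position(x0, y0, b=30.0):
    """First-order difference-over-sum estimate from four PUEs at
    0, 90, 180, 270 degrees (right, top, left, bottom)."""
    R, T, L, B = (pue_signal(t, x0, y0, b) for t in
                  (0.0, np.pi / 2, np.pi, 3 * np.pi / 2))
    s = R + T + L + B
    return b * (R - L) / s, b * (T - B) / s

# Small offset: the linear estimate is nearly exact.
x_est, y_est = linear_position(1.0, 0.5)
# Large offset: the linear estimate degrades, motivating exact algorithms.
x_big, y_big = linear_position(15.0, 0.0)
print(x_est, y_est, x_big, y_big)
```

    For the 15 mm offset the linear estimate overshoots by roughly 18%, illustrating why an analytically correct inversion matters for bunches with large offsets.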

  20. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE PAGES

    Thieberger, Peter; Gassner, D.; Hulsart, R.; ...

    2018-04-25

    Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, an FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  1. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thieberger, Peter; Gassner, D.; Hulsart, R.

    Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, an FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  2. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets.

    PubMed

    Thieberger, P; Gassner, D; Hulsart, R; Michnoff, R; Miller, T; Minty, M; Sorrell, Z; Bartnik, A

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.

  3. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    PubMed

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular (dense) tissue. A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect segmentation quality. The proposed algorithm applies N3 first, followed by FCM; the generated bias field is then smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM-corrected images were compared to images corrected by N3 alone, by FCM alone, and by another method, coherent local intensity clustering (CLIC). The segmentation quality achieved with the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks of the fibroglandular and fatty tissues, allowing an accurate segmentation between them.
    In the first reading session, the radiologist found the ranking (N3+FCM > N3 > FCM) in 17 breasts, (N3+FCM > N3 = FCM) in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3+FCM > FCM) in 2 breasts. The results of the second reading session were similar. Each pairwise Wilcoxon signed-rank test was significant, showing N3+FCM superior to both N3 and FCM, and N3 superior to FCM. The performance of the new N3+FCM algorithm was comparable to that of CLIC, with equivalent quality in 57/60 breasts. Choosing an appropriate bias field correction method is a very important preprocessing step for accurate segmentation of fibroglandular tissue on breast MRI for quantitative measurement of breast density. Both the proposed N3+FCM algorithm and CLIC yield satisfactory results.
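    The FCM step can be illustrated in one dimension. The sketch below is a generic fuzzy C-means iteration on synthetic intensities, not the authors' implementation (which operates on images and couples the memberships to a bias-field estimate); it shows how memberships and centroids are alternately updated:

```python
import numpy as np

def fcm_1d(x, k=2, m=2.0, iters=50):
    """Plain fuzzy C-means on 1D intensities: alternately update the
    memberships u and the centroids c, minimizing sum_ij u_ij^m (x_i - c_j)^2."""
    c = np.percentile(x, np.linspace(25, 75, k))      # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-12   # distances to centroids
        u = 1.0 / d ** (2.0 / (m - 1.0))              # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)             # normalize memberships
        c = (u**m * x[:, None]).sum(axis=0) / (u**m).sum(axis=0)
    return np.sort(c), u

# Two intensity populations, e.g. fatty vs fibroglandular voxels.
x = np.concatenate([np.full(200, 50.0), np.full(200, 200.0)])
x += np.tile([-2.0, 2.0], 200)    # small deterministic spread
centers, u = fcm_1d(x, k=2)
print(centers)                    # close to the two population means
```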

  4. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity, which results from dark current and amplifier mismatch as well as the individual photoresponse of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance while allowing a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order rather than a second-order polynomial correction algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
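    The core of a polynomial NUC scheme can be sketched as follows: calibrate each pixel against flat fields at known irradiance levels, then apply a per-pixel third-order polynomial that maps raw counts back to scene irradiance. Everything below (the synthetic 4x4 detector, the chosen calibration levels) is an illustrative assumption, not the study's data:

```python
import numpy as np

# Synthetic 4x4 focal plane with nonuniform gain/offset and a mild
# per-pixel quadratic nonlinearity (illustrative values, not real SWIR data).
rng = np.random.default_rng(1)
gain = rng.uniform(0.8, 1.2, (4, 4))
offset = rng.uniform(-5.0, 5.0, (4, 4))
quad = rng.uniform(-1e-4, 1e-4, (4, 4))

def detector(L):
    """Raw counts for a uniform scene irradiance L."""
    return offset + gain * L + quad * L**2

# Calibration: flat fields at known levels; fit a per-pixel 3rd-order
# polynomial mapping raw counts back to scene irradiance.
levels = np.array([100.0, 500.0, 900.0, 1300.0])
raw = np.stack([detector(L) for L in levels])          # shape (4, 4, 4)
coeffs = np.empty((4,) + raw.shape[1:])
for i in range(4):
    for j in range(4):
        coeffs[:, i, j] = np.polyfit(raw[:, i, j], levels, 3)

def correct(frame):
    c3, c2, c1, c0 = coeffs
    return ((c3 * frame + c2) * frame + c1) * frame + c0   # Horner form

frame = detector(700.0)                  # level between calibration points
print(frame.std(), correct(frame).std()) # residual non-uniformity drops
```

    At the calibration levels the cubic reproduces the irradiance almost exactly; between levels the residual stays small because the assumed nonlinearity is mild.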

  5. The atmospheric correction algorithm for HY-1B/COCTS

    NASA Astrophysics Data System (ADS)

    He, Xianqiang; Bai, Yan; Pan, Delu; Zhu, Qiankun

    2008-10-01

    China launched its second ocean color satellite, HY-1B, on 11 April 2007, carrying two remote sensors. The Chinese Ocean Color and Temperature Scanner (COCTS) is the main sensor on HY-1B; it has not only eight visible and near-infrared bands similar to SeaWiFS but also two thermal infrared bands to measure sea surface temperature. COCTS therefore has broad application potential, including fishery resource protection and development, coastal monitoring and management, and marine pollution monitoring. Atmospheric correction is the key to quantitative ocean color remote sensing. In this paper, the operational atmospheric correction algorithm for HY-1B/COCTS is developed. First, based on PCOART, a vector radiative transfer numerical model for the coupled ocean-atmosphere system, exact Rayleigh scattering, aerosol scattering, and atmospheric diffuse transmission look-up tables (LUTs) were generated for HY-1B/COCTS. Second, using the generated LUTs, the operational atmospheric correction algorithm for HY-1B/COCTS was developed. The algorithm was validated using simulated spectral data generated by PCOART, and the results show that the error of the retrieved water-leaving reflectance is less than 0.0005, meeting the requirement for exact atmospheric correction in ocean color remote sensing. Finally, the algorithm was applied to HY-1B/COCTS remote sensing data; the retrieved water-leaving radiances are consistent with Aqua/MODIS results, and the corresponding ocean color products, including chlorophyll concentration and total suspended particulate matter concentration, were generated.

  6. Optomechanical oscillator pumped and probed by optically two isolated photonic crystal cavity systems.

    PubMed

    Tian, Feng; Sumikura, Hisashi; Kuramochi, Eiichi; Taniyama, Hideaki; Takiguchi, Masato; Notomi, Masaya

    2016-11-28

    Optomechanical control of on-chip emitters is an important topic for integrated all-optical circuits. However, neither a realization nor a suitable optomechanical structure for this control has been reported. The biggest obstacle is that the emission signal can hardly be distinguished from the pump light because of a power difference of several orders of magnitude. In this study, we designed and experimentally verified an optomechanical oscillation system in which a lumped mechanical oscillator connects two optically isolated pairs of coupled one-dimensional photonic crystal cavities. As a functional device, the two pairs of coupled cavities were used, respectively, as an optomechanical pump for the lumped oscillator (cavity pair II, with wavelengths designed to be within the 1.5 μm band) and a modulation target of the lumped oscillator (cavity pair I, with wavelengths designed to be within the 1.2 μm band). Finite element method simulations showed that the lumped-oscillator-supported configurations of both cavity pairs enhance the optomechanical interactions, especially for higher-order optical modes, compared with their respective conventional side-clamped configurations. Besides the desired first-order in-plane antiphase mechanical mode, other mechanical modes of the lumped oscillator were investigated and found to have possible optomechanical applications with a versatile degree of freedom. In experiments, the oscillator's RF spectra were probed using both cavity pairs I and II, and the results matched the simulations. Dynamic detuning of the optical spectrum of cavity pair I was then implemented with a pumped lumped oscillator. This is the first demonstration of an optomechanical lumped oscillator connecting two optically isolated pairs of coupled cavities, whose biggest advantage is that one cavity pair can be modulated with the lumped oscillator without interference from the pump light in the other cavity pair.
    The oscillator is thus a suitable platform for optomechanical control of integrated lasers, cavity quantum electrodynamics, and spontaneous emission. Furthermore, this device may open the door to the study of interactions between photons, phonons, and excitons in the quantum regime.

  7. See Something, Say Something: Correction of Global Health Misinformation on Social Media.

    PubMed

    Bode, Leticia; Vraga, Emily K

    2018-09-01

    Social media are often criticized for being a conduit for misinformation on global health issues, but may also serve as a corrective to false information. To investigate this possibility, an experiment was conducted exposing users to a simulated Facebook News Feed featuring misinformation and different correction mechanisms (one in which news stories featuring correct information were produced by an algorithm and another where the corrective news stories were posted by other Facebook users) about the Zika virus, a current global health threat. Results show that algorithmic and social corrections are equally effective in limiting misperceptions, and correction occurs for both high and low conspiracy belief individuals. Recommendations for social media campaigns to correct global health misinformation, including encouraging users to refute false or misleading health information, and providing them appropriate sources to accompany their refutation, are discussed.

  8. FACET - a "Flexible Artifact Correction and Evaluation Toolbox" for concurrently recorded EEG/fMRI data.

    PubMed

    Glaser, Johann; Beisteiner, Roland; Bauer, Herbert; Fischmeister, Florian Ph S

    2013-11-09

    In concurrent EEG/fMRI recordings, EEG data are impaired by fMRI gradient artifacts, which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, they lack the flexibility to leave out or add processing steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts in concurrently recorded EEG datasets. It consists of an Analysis, a Correction, and an Evaluation framework, allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared across different settings. FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS; Allen et al., NeuroImage 12(2):230-239, 2000) and FMRI Artifact Slice Template Removal (FASTR; Niazy et al., NeuroImage 28(3):720-737, 2005). The results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient-artifact-relevant performance indices. The FACET toolbox not only provides facilities for all three modalities (data analysis, artifact correction, and evaluation and documentation of the results) but also offers an easily extendable framework for the development and evaluation of new approaches.
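    The AAS approach used in the evaluation can be sketched in a few lines: average the artifact epochs to form a template and subtract it from each epoch. This is a minimal NumPy illustration of the principle (known epoch length and exact artifact periodicity assumed), not FACET's MATLAB implementation:

```python
import numpy as np

def aas(eeg, period):
    """Averaged Artifact Subtraction, minimal form: cut the recording into
    artifact epochs of known length, average them into a template, and
    subtract the template from every epoch."""
    n_epochs = len(eeg) // period
    epochs = eeg[:n_epochs * period].reshape(n_epochs, period)
    template = epochs.mean(axis=0)
    cleaned = (epochs - template).ravel()
    return np.concatenate([cleaned, eeg[n_epochs * period:]])

# Synthetic example: a slow "neural" sine buried under a periodic
# gradient artifact two orders of magnitude larger.
t = np.arange(5000) / 1000.0
neural = 10.0 * np.sin(2 * np.pi * 3.0 * t)
artifact = np.tile(1000.0 * np.sin(2 * np.pi * np.arange(250) / 250.0), 20)
cleaned = aas(neural + artifact, period=250)
print(np.abs(cleaned - neural).max())   # artifact removed almost exactly
```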

  9. FACET – a “Flexible Artifact Correction and Evaluation Toolbox” for concurrently recorded EEG/fMRI data

    PubMed Central

    2013-01-01

    Background In concurrent EEG/fMRI recordings, EEG data are impaired by fMRI gradient artifacts, which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, they lack the flexibility to leave out or add processing steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts in concurrently recorded EEG datasets. It consists of an Analysis, a Correction, and an Evaluation framework, allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared across different settings. Results FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS; Allen et al., NeuroImage 12(2):230–239, 2000) and FMRI Artifact Slice Template Removal (FASTR; Niazy et al., NeuroImage 28(3):720–737, 2005). The results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient-artifact-relevant performance indices. Conclusion The FACET toolbox not only provides facilities for all three modalities (data analysis, artifact correction, and evaluation and documentation of the results) but also offers an easily extendable framework for the development and evaluation of new approaches. PMID:24206927

  10. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
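    The univariate building block that MBCn generalizes, empirical quantile mapping, can be sketched as follows (the `quantile_map` helper and the synthetic gamma-distributed "precipitation" data are illustrative assumptions, not the MBCn code itself):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Univariate empirical quantile mapping: replace each model value by
    the observed value at the same empirical quantile. This is the
    one-variable transfer that MBCn iterates over random rotations of the
    variable space to match the full multivariate distribution."""
    quantiles = np.interp(model_future,
                          np.sort(model_hist),
                          np.linspace(0.0, 1.0, len(model_hist)))
    return np.quantile(obs_hist, quantiles)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, 10000)                  # "observed" climate
model = 0.5 * rng.gamma(2.0, 5.0, 10000) + 3.0    # biased model climate
corrected = quantile_map(model, obs, model)
print(model.mean(), corrected.mean(), obs.mean()) # bias is removed
```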

  11. Beam hardening correction in CT myocardial perfusion measurement

    NASA Astrophysics Data System (ADS)

    So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim

    2009-05-01

    This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
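    The key idea, modelling the beam-hardening error as a polynomial function of the projection value, can be illustrated on a one-dimensional toy problem. The two-energy spectrum below is an assumption chosen for illustration; the paper's method additionally segments the reconstructed image, forward-projects it, and back-projects the estimated errors:

```python
import numpy as np

# Two-energy toy spectrum: low-energy photons are absorbed first, so the
# measured polychromatic projection grows sublinearly with thickness
# (the beam-hardening effect). Weights and attenuations are illustrative.
weights = np.array([0.5, 0.5])       # spectral weights
mu = np.array([0.30, 0.15])          # attenuation at the two energies

thickness = np.linspace(0.0, 20.0, 50)
poly = -np.log((weights * np.exp(-mu[None, :] * thickness[:, None])).sum(1))
mono = mu.mean() * thickness         # ideal monochromatic projection

# Polynomial model of the BH error as a function of the raw projection,
# analogous to the per-projection polynomial error estimate in the paper.
coeffs = np.polyfit(poly, mono - poly, 3)
corrected = poly + np.polyval(coeffs, poly)
print(np.abs(poly - mono).max(), np.abs(corrected - mono).max())
```

    The cubic error model removes almost all of the hardening-induced cupping in this toy setting; in the paper the same polynomial idea is applied per projection of the segmented image.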

  12. A method of measuring and correcting tilt of anti - vibration wind turbines based on screening algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Zhongxiu

    2018-04-01

    A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First, we design a device whose core is the ADXL203 acceleration sensor; the inclination is measured by installing it on the tower and the nacelle of the wind turbine. Next, a Kalman filter is applied to filter the signal effectively by establishing a state-space model for the signal and noise, and MATLAB is used for simulation. Considering the impact of tower and nacelle vibration on the collected data, the raw data and the filtered data are classified and stored by the screening algorithm, and the filtered data are then filtered again to make the output more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method has high precision, low cost, and good anti-vibration performance, giving it a wide range of applications and promotion value.
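    The Kalman filtering step can be sketched for the simplest case, a scalar random-walk model of the inclination angle observed through vibration-like noise; the q and r values below are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

def kalman_tilt(z, q=1e-4, r=0.25):
    """Scalar Kalman filter for a slowly varying inclination angle:
    random-walk state model with process variance q, noisy
    accelerometer-derived measurements with variance r."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                 # predict: state variance grows
        g = p / (p + r)           # Kalman gain
        x = x + g * (zk - x)      # update with the measurement
        p = (1.0 - g) * p
        out[k] = x
    return out

# Constant 2-degree tilt observed through vibration-like noise.
rng = np.random.default_rng(0)
z = 2.0 + 0.5 * rng.standard_normal(2000)
est = kalman_tilt(z)
print(est[-1])   # settles near the true 2-degree tilt
```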

  13. Analysis of L-band Multi-Channel Sea Clutter

    DTIC Science & Technology

    2010-08-01

    Some researchers found that the use of a hybrid algorithm of PS and GA could accelerate the convergence for array beamforming designs (Yeo and Lu... to be shown is array failure correction using the PS algorithm. Assume element 5 of a 32 half-wavelength-spacing linear array is in failure. The goal... algorithm. The blue one is the 20 dB Chebyshev pattern and the template in red is the goal pattern to achieve. Two corrected beam patterns are

  14. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
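    The adaptive robust element of an ARKF can be sketched in scalar form: inflate the measurement variance whenever the normalized innovation is implausibly large, so that outlying measurements barely move the state. The scheme below is one common variant shown for illustration only; the paper's filter (grid-frame SINS/DVL error states with a unified ellipsoid Earth model) is far richer:

```python
import numpy as np

def arkf(z, q=1e-3, r=0.1, k0=3.0):
    """Scalar Kalman filter with a simple adaptive/robust tweak: if the
    innovation exceeds k0 standard deviations, the measurement variance is
    inflated quadratically so the outlier is effectively down-weighted."""
    x, p = np.median(z[:20]), 1.0     # robust initialization
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q
        nu = zk - x                   # innovation
        s = np.sqrt(p + r)            # predicted innovation std
        r_eff = r if abs(nu) <= k0 * s else r * (nu / (k0 * s)) ** 2
        g = p / (p + r_eff)
        x += g * nu
        p *= (1.0 - g)
        out[k] = x
    return out

rng = np.random.default_rng(2)
z = 5.0 + 0.3 * rng.standard_normal(500)
z[::50] += 20.0                       # occasional gross measurement outliers
est = arkf(z)
print(est[-1])                        # stays near the true value despite outliers
```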

  15. Lumped Model Generation and Evaluation: Sensitivity and Lie Algebraic Techniques with Applications to Combustion

    DTIC Science & Technology

    1989-03-03

    address global parameter space mapping issues for first-order differential equations. Rigorous criteria for the existence of exact lumping by linear projective transformations were also established.

  16. Male Breast Cancer

    MedlinePlus

    Although breast cancer is much more common in women, men can get it too. It happens most often to men between ... 60 and 70. Breast lumps usually aren't cancer. However, most men with breast cancer have lumps. ...

  17. Lump solutions to nonlinear partial differential equations via Hirota bilinear forms

    NASA Astrophysics Data System (ADS)

    Ma, Wen-Xiu; Zhou, Yuan

    2018-02-01

    Lump solutions are analytical rational function solutions localized in all directions in space. We analyze a class of lump solutions, generated from quadratic functions, to nonlinear partial differential equations. The basis of success is the Hirota bilinear formulation and the primary object is the class of positive multivariate quadratic functions. A complete determination of quadratic functions positive in space and time is given, and positive quadratic functions are characterized as sums of squares of linear functions. Necessary and sufficient conditions for positive quadratic functions to solve Hirota bilinear equations are presented, and such polynomial solutions yield lump solutions to nonlinear partial differential equations under the dependent variable transformations u = 2(ln f)_x and u = 2(ln f)_{xx}, where x is one spatial variable. Applications are made for a few generalized KP and BKP equations.
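    The positivity characterization can be checked numerically. The sketch below tests whether a multivariate quadratic f(x) = x^T A x + 2 b^T x + c is positive everywhere, using the standard criterion (A positive semidefinite, linear term in the range of A, positive value at the minimizer); it illustrates the condition rather than reproducing the paper's symbolic analysis:

```python
import numpy as np

def is_positive_quadratic(A, b, c, tol=1e-9):
    """Check whether f(x) = x^T A x + 2 b^T x + c > 0 for all x.
    This holds iff A is positive semidefinite, b lies in the range of A,
    and c - b^T A^+ b > 0 (A^+ the pseudoinverse), i.e. f is a sum of
    squares of linear forms plus a positive constant."""
    A = 0.5 * (A + A.T)
    if np.linalg.eigvalsh(A).min() < -tol:
        return False                    # indefinite: f takes negative values
    x_star = np.linalg.pinv(A) @ b
    if np.linalg.norm(A @ x_star - b) > tol * (1.0 + np.linalg.norm(b)):
        return False                    # b not in range(A): f unbounded below
    return bool(c - b @ x_star > tol)   # minimum value of f must be positive

# (x - 1)^2 + (y + 2)^2 + 3 > 0 everywhere:
A = np.eye(2); b = np.array([-1.0, 2.0]); c = 1 + 4 + 3
print(is_positive_quadratic(A, b, c))       # True
# x^2 + 2y is unbounded below (linear term outside range of A):
A2 = np.diag([1.0, 0.0]); b2 = np.array([0.0, 1.0])
print(is_positive_quadratic(A2, b2, 0.0))   # False
```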

  18. Note: A calibration method to determine the lumped-circuit parameters of a magnetic probe.

    PubMed

    Li, Fuming; Chen, Zhipeng; Zhu, Lizhi; Liu, Hai; Wang, Zhijiang; Zhuang, Ge

    2016-06-01

    This paper describes a novel method to determine the lumped-circuit parameters of a magnetic inductive probe for calibration, using Helmholtz coils with a high-frequency power supply (frequency range: 10 kHz-400 kHz). The whole calibration circuit can be separated into two parts: a "generator" circuit and a "receiver" circuit. By applying the Fourier transform, two analytical lumped-circuit models of these separated circuits are constructed to obtain the transfer function between them. The precise lumped-circuit parameters of the magnetic probe (resistance, inductance, and capacitance) can then be determined by fitting the experimental data to the transfer function. From the fitting results, the finite impedance of the magnetic probe can be used to analyze the transmission of high-frequency signals between magnetic probes, cables, and the acquisition system.
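    The fitting step can be illustrated with a deliberately simplified probe model, assumed here for the sketch: a pickup loop with mutual inductance M to the Helmholtz field, series resistance R, and self-inductance L (the paper's model also includes capacitance). Under this assumed model the fit even linearizes:

```python
import numpy as np

# Assumed model: |H(w)| = w*M / sqrt(R^2 + (w*L)^2), so 1/|H|^2 is linear
# in 1/w^2:  1/|H|^2 = (R/M)^2 * (1/w^2) + (L/M)^2.
# With M known from the coil geometry, R and L follow from a line fit,
# a simplified stand-in for the paper's transfer-function fitting.
M, R_true, L_true = 1e-3, 50.0, 2e-4
f = np.linspace(10e3, 400e3, 40)      # calibration band from the note
w = 2.0 * np.pi * f
H = w * M / np.sqrt(R_true**2 + (w * L_true)**2)   # "measured" response

slope, intercept = np.polyfit(1.0 / w**2, 1.0 / H**2, 1)
R_fit, L_fit = M * np.sqrt(slope), M * np.sqrt(intercept)
print(R_fit, L_fit)                   # recovers R and L
```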

  19. Artificial food lump from porous neoprene and the method of its use for the evaluation of adaptation patients to the dental constructions

    NASA Astrophysics Data System (ADS)

    Reshetnikov, A.; Urakov, A.; Kasatkin, A.; Soiher, M. G.; Kopylov, M.

    2016-04-01

    A new dental product called the artificial food lump is offered for dental practice. In size and shape it is similar to the natural food bolus formed in an adult's mouth when chewing white bread. This innovative product resembles an inedible, non-swallowable chewing gum. The artificial lump is made of porous neoprene; it is elastic, has a food flavor, is not destroyed by chewing, and retains stable elasticity during chewing. In addition, the artificial lump is manufactured so that it can be attached to the patient's clothes with a braid line. The new medical device is intended to create masticatory loading in the patient's mouth in order to evaluate the quality of mounted dental restorations, as well as the patient's adaptation to them during chewing.

  20. Outbreak of primary inoculation tuberculosis in an acupuncture clinic in southeastern China.

    PubMed

    Wang, J; Zhu, M Y; Li, C; Zhang, H B; Zuo, G B; Wang, M H; Teng, H L

    2015-04-01

    An outbreak of Mycobacterium tuberculosis infections associated with acupuncture had not previously been reported. Thirteen patients with a painful swollen lump were referred to our hospital. The index patient received acupuncture and paraspinal muscular injection at a local acupuncture clinic in April 2011 and was diagnosed with M. tuberculosis 1 month later. From May 2011 to August 2011, 12 more patients with a swollen lump on the nuchal region or in the lower back or the buttocks region were referred to our hospital. Tuberculin skin test (TST), T-SPOT.TB, acid-fast staining, M. tuberculosis culture, chest radiography, and MRI of the lump were performed, and the patients were diagnosed with tuberculous abscess of the lump. All 13 patients had received intramuscular injection at the paraspinal muscle by two acupuncturists at a local clinic and reported a swollen lump at the injection site. The needles and syringes were reused after autoclave sterilization. The TST was positive in all patients. Twelve patients had positive acid-fast stains. Mycobacterial cultures of abscess specimens were positive in all 13 patients. T-SPOT.TB tests were positive in all patients who underwent the test. The lesions and biopsies were subjected to polymerase chain reaction (PCR) and gene sequencing by the Disease Control Center of Zhejiang Province, China, and the causative agent was identified as M. tuberculosis, Beijing type. In conclusion, physicians should consider the possibility of mycobacterial infections, apart from other bacterial agents, in patients with a swollen paraspinal lump following intramuscular injection.

  1. High-order flux correction/finite difference schemes for strand grids

    NASA Astrophysics Data System (ADS)

    Katz, Aaron; Work, Dalon

    2015-02-01

    A novel high-order method combining unstructured flux correction along body surfaces and high-order finite differences normal to surfaces is formulated for unsteady viscous flows on strand grids. The flux correction algorithm is applied in each unstructured layer of the strand grid, and the layers are then coupled together via a source term containing derivatives in the strand direction. Strand-direction derivatives are approximated to high-order via summation-by-parts operators for first derivatives and second derivatives with variable coefficients. We show how this procedure allows for the proper truncation error canceling properties required for the flux correction scheme. The resulting scheme possesses third-order design accuracy, but often exhibits fourth-order accuracy when higher-order derivatives are employed in the strand direction, especially for highly viscous flows. We prove discrete conservation for the new scheme and time stability in the absence of the flux correction terms. Results in two dimensions are presented that demonstrate improvements in accuracy with minimal computational and algorithmic overhead over traditional second-order algorithms.
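    The summation-by-parts structure the scheme relies on can be demonstrated with the classical second-order SBP first-derivative operator, checking the discrete integration-by-parts identity Q + Q^T = B (a generic SBP illustration, not the paper's higher-order operators):

```python
import numpy as np

def sbp_d1(n, h):
    """Second-order summation-by-parts first-derivative operator on n nodes
    with spacing h: central differences inside, one-sided at the boundaries,
    with norm matrix H = h * diag(1/2, 1, ..., 1, 1/2)."""
    D = np.zeros((n, n))
    D[0, :2] = [-1.0, 1.0]
    D[-1, -2:] = [-1.0, 1.0]
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    D /= h
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
    return D, H

n, h = 21, 0.05
D, H = sbp_d1(n, h)
x = np.arange(n) * h

# SBP property: Q + Q^T = B with Q = H D and B = diag(-1, 0, ..., 0, 1),
# the discrete analogue of integration by parts.
Q = H @ D
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.abs(Q + Q.T - B).max())              # ~0: identity holds
print(np.abs(D @ x**2 - 2 * x)[1:-1].max())   # exact on x^2 in the interior
```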

  2. Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Jain, S. C.

    1984-01-01

    Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance, which can dominate radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudo-optical depth method; and (4) the residual component algorithm.

  3. Patient-specific biomechanical model of hypoplastic left heart to predict post-operative cardio-circulatory behaviour.

    PubMed

    Cutrì, Elena; Meoli, Alessio; Dubini, Gabriele; Migliavacca, Francesco; Hsia, Tain-Yen; Pennati, Giancarlo

    2017-09-01

    Hypoplastic left heart syndrome is a complex congenital heart disease characterised by the underdevelopment of the left ventricle, normally treated with a three-stage surgical repair. In this study, a multiscale closed-loop cardio-circulatory model is created to reproduce the pre-operative condition of a patient suffering from this pathology, and virtual surgery is performed. Firstly, cardio-circulatory parameters are estimated using a fully closed-loop cardio-circulatory lumped parameter model. Secondly, a 3D standalone FEA model is built to obtain active and passive ventricular characteristics and the unloaded reference state. Lastly, the 3D model of the single ventricle is coupled to the lumped parameter model of the circulation, obtaining a multiscale closed-loop pre-operative model. Lacking any information on the fibre orientation, two cases were simulated: (i) fibres distributed as in the physiological right ventricle and (ii) fibres distributed as in the physiological left ventricle. Once the pre-operative condition was satisfactorily simulated for the two cases, virtual surgery was performed. The post-operative results in the two cases highlighted similar hemodynamic behaviour but different local mechanics. This finding suggests that knowledge of the patient-specific fibre arrangement is important to correctly estimate the single ventricle's working condition and consequently can be valuable in supporting clinical decision-making. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
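    The core TPS idea (fit a smooth map from distorted fiducial positions back to their known true positions, then apply it to data) can be sketched with SciPy's thin-plate-spline radial basis interpolator. The grid, distortion function, and amplitudes below are hypothetical stand-ins for the comb calibration, not NIF parameters:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical stand-in for the comb calibration: a regular grid of fiducial
# points (the "true" comb positions) and their nonlinearly distorted images.
gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
true_pts = np.column_stack([gx.ravel(), gy.ravel()])

def distort(p):
    # Smooth nonlinear distortion standing in for the streak-camera warp.
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([x + 0.05 * np.sin(3 * y), y + 0.04 * x * y])

warped_pts = distort(true_pts)

# Thin-plate-spline map from warped coordinates back to true coordinates.
unwarp = RBFInterpolator(warped_pts, true_pts, kernel='thin_plate_spline')

# Apply the warp correction to new distorted points.
test_true = rng.uniform(0.1, 0.9, size=(20, 2))
corrected = unwarp(distort(test_true))
print(np.max(np.abs(corrected - test_true)))   # small residual
```

TPS interpolation exactly reproduces the calibration points while minimizing bending energy between them, which is why it copes well with smooth but complex camera distortions.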

  5. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses leading to underestimation of myocardial activity. A PV correction method was developed to restore accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities, which were separated into three compartments by the endocardial radius and myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma-counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data and image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
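    The modelling idea (convolve a compartmental LV model with the scanner PSF and solve for the compartment activities) can be sketched in 1D. This is a simplified, fixed-geometry analogue of the five-parameter profile: the endocardial radius and wall thickness are taken as known, leaving a linear solve for the blood, myocardial, and background activities (all values below are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical 1D radial profile through the LV wall: blood pool, myocardium
# and background compartments, separated by the endocardial radius and the
# wall thickness (a simplified, fixed-geometry version of the model).
x = np.arange(200) * 0.1                  # position, mm
r_endo, thickness = 8.0, 1.2              # assumed known geometry, mm
blood_in = x < r_endo
myo_in = (x >= r_endo) & (x < r_endo + thickness)
bkg_in = x >= r_endo + thickness

true_act = np.array([0.3, 1.0, 0.1])      # blood, myocardium, background
sigma_pix = 0.8 / 0.1                     # assumed scanner PSF sigma in pixels

ideal = true_act[0]*blood_in + true_act[1]*myo_in + true_act[2]*bkg_in
blurred = gaussian_filter1d(ideal, sigma_pix)

# Partial-volume loss: the blurred peak recovers well under the true activity.
print(blurred.max() / true_act[1])

# PV correction: blur each compartment's indicator with the same PSF and
# solve linearly for the compartment activities.
basis = np.column_stack([gaussian_filter1d(ind.astype(float), sigma_pix)
                         for ind in (blood_in, myo_in, bkg_in)])
est, *_ = np.linalg.lstsq(basis, blurred, rcond=None)
print(est)   # recovers the true activities
```

Because convolution is linear, the blurred image is an exact linear combination of the blurred compartment indicators, so the activities are recovered exactly once the geometry and PSF are known; in practice the geometry parameters are fitted too.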

  6. Model-based sensor-less wavefront aberration correction in optical coherence tomography.

    PubMed

    Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel

    2015-12-15

    Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known optimization algorithm (NEWUOA) and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.

  7. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC. We then present a convergence speed factor, which can adaptively change the convergence speed in response to changes in the scene dynamic range. In effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The spatial relativity characteristic of the nonuniformity was summarized from extensive experimental statistics and used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.
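    A generic scene-based NUC loop with an adaptive step size can be sketched as follows. This is a minimal LMS-style offset correction with the step shrunk when the scene dynamic range is large, in the spirit of a convergence speed factor; the filter, step sizes, and scene are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)

# Hypothetical scene sequence: a smooth drifting pattern observed through a
# detector with fixed per-pixel offset (bias) nonuniformity.
H = W = 32
bias = rng.normal(0, 0.2, (H, W))
xx, yy = np.meshgrid(np.arange(W), np.arange(H))

b_hat = np.zeros((H, W))          # running bias estimate
base_eta = 0.05
for t in range(400):
    scene = np.sin(0.2 * xx + 0.03 * t) + np.cos(0.15 * yy - 0.02 * t)
    frame = scene + bias
    corrected = frame - b_hat
    # Desired value: local spatial mean (neighbourhood expectation).
    desired = uniform_filter(corrected, size=5)
    err = corrected - desired
    # Adaptive convergence: shrink the step when the scene dynamic range is
    # large, so strong scene structure is not absorbed into the bias estimate
    # (which is what produces ghosting artifacts).
    eta = base_eta / (1.0 + (frame.max() - frame.min()))
    b_hat += eta * err

print(np.std(bias - b_hat))  # residual nonuniformity after adaptation
```

Scene motion decorrelates the scene term over time while the fixed-pattern bias accumulates coherently, which is why the estimate converges to the high-frequency part of the bias.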

  8. Statistical simplex approach to primary and secondary color correction in thick lens assemblies

    NASA Astrophysics Data System (ADS)

    Ament, Shelby D. V.; Pfisterer, Richard

    2017-11-01

    A glass selection optimization algorithm is developed for primary and secondary color correction in thick lens systems. The approach is based on the downhill simplex method and requires manipulation of the surface color equations to obtain a single glass-dependent parameter for each lens element. Linear correlation is used to relate this parameter to all other glass-dependent variables. The algorithm provides a statistical distribution of Abbe numbers for each element in the system. Several example lenses, from 2-element to 6-element systems, are analyzed to verify this approach. The proposed optimization algorithm is capable of finding glass solutions with high color correction without requiring an exhaustive search of the glass catalog.
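    The downhill simplex (Nelder-Mead) approach to color correction can be sketched on the classic thin-lens version of the problem, which has a known analytic answer to check against. This is a toy achromatic doublet, not the paper's thick-lens surface color equations; the Abbe numbers are an illustrative crown/flint pair:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative thin-lens doublet: choose element powers to hit a total power
# of 1.0 while cancelling the primary axial color term sum(phi_i / V_i),
# via the downhill simplex method.
V1, V2 = 64.2, 36.4          # Abbe numbers of a hypothetical crown/flint pair
phi_total = 1.0

def merit(p):
    phi1, phi2 = p
    power_err = (phi1 + phi2 - phi_total) ** 2
    color_err = (phi1 / V1 + phi2 / V2) ** 2
    return power_err + color_err

res = minimize(merit, x0=[0.5, 0.5], method='Nelder-Mead',
               options={'xatol': 1e-10, 'fatol': 1e-12,
                        'maxiter': 2000, 'maxfev': 4000})

# Analytic achromat solution for comparison:
phi1_exact = phi_total * V1 / (V1 - V2)
phi2_exact = -phi_total * V2 / (V1 - V2)
print(res.x, (phi1_exact, phi2_exact))
```

The simplex method needs only merit-function evaluations, no derivatives, which is what makes it attractive when the color equations are reduced to a single glass-dependent parameter per element.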

  9. Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.

    2017-09-01

    It is well known that passive image correction of turbulence distortions often involves geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach is possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iterations than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.

  10. A distributed lumped active all-pass network configuration.

    NASA Technical Reports Server (NTRS)

    Huelsman, L. P.; Raghunath, S.

    1972-01-01

    In this correspondence a new and interesting distributed lumped active network configuration that realizes an all-pass network function is described. A design chart for determining the values of the network elements is included.

  11. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.
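    The key observation behind attitude-independent calibration is that the field *magnitude* does not depend on attitude, so the bias enters through the scalar measurement |B_meas|² − |B_ref|² = 2 B_meas·b − |b|². A minimal sketch of the resulting iterated linear least-squares solve (not the statistically rigorous treatment of the paper; field and noise values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated magnetometer data: the true field in body axes (attitude unknown)
# plus a constant bias b_true and small measurement noise. |B_ref| would come
# from a geomagnetic reference model in practice.
b_true = np.array([0.12, -0.05, 0.08])
B_ref = rng.normal(0, 1.0, (200, 3))
B_meas = B_ref + b_true + rng.normal(0, 1e-3, (200, 3))

# Attitude-independent scalar measurement:
#   z_k = |B_meas,k|^2 - |B_ref,k|^2 = 2 B_meas,k . b - |b|^2  (+ noise)
z = np.sum(B_meas**2, axis=1) - np.sum(B_ref**2, axis=1)

b = np.zeros(3)
for _ in range(10):
    rhs = z + np.dot(b, b)                 # move the |b|^2 term to the data side
    b, *_ = np.linalg.lstsq(2.0 * B_meas, rhs, rcond=None)
print(b)   # converges to within noise of b_true
```

Each pass is a plain linear solve; because |b|² is small relative to the field, the fixed-point iteration converges in a few steps, which mirrors the fast convergence of the heuristic algorithm the paper builds on.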

  12. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and correction of pointing offsets and changes in antenna shape.

  14. Analyte quantification with comprehensive two-dimensional gas chromatography: assessment of methods for baseline correction, peak delineation, and matrix effect elimination for real samples.

    PubMed

    Samanipour, Saer; Dimitriou-Christidis, Petros; Gros, Jonas; Grange, Aureline; Samuel Arey, J

    2015-01-02

    Comprehensive two-dimensional gas chromatography (GC×GC) is used widely to separate and measure organic chemicals in complex mixtures. However, approaches to quantify analytes in real, complex samples have not been critically assessed. We quantified 7 PAHs in a certified diesel fuel using GC×GC coupled to flame ionization detector (FID), and we quantified 11 target chlorinated hydrocarbons in a lake water extract using GC×GC with electron capture detector (μECD), further confirmed qualitatively by GC×GC with electron capture negative chemical ionization time-of-flight mass spectrometer (ENCI-TOFMS). Target analyte peak volumes were determined using several existing baseline correction algorithms and peak delineation algorithms. Analyte quantifications were conducted using external standards and also using standard additions, enabling us to diagnose matrix effects. We then applied several chemometric tests to these data. We find that the choice of baseline correction algorithm and peak delineation algorithm strongly influence the reproducibility of analyte signal, error of the calibration offset, proportionality of integrated signal response, and accuracy of quantifications. Additionally, the choice of baseline correction and the peak delineation algorithm are essential for correctly discriminating analyte signal from unresolved complex mixture signal, and this is the chief consideration for controlling matrix effects during quantification. The diagnostic approaches presented here provide guidance for analyte quantification using GC×GC. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
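    The sensitivity of quantification to the baseline algorithm can be illustrated with a generic 1D stand-in (a smoothed rolling-minimum baseline), which is not one of the published GC×GC algorithms the paper assesses; the drift, peak shape, and window sizes are illustrative:

```python
import numpy as np
from scipy.ndimage import minimum_filter1d, uniform_filter1d

# Synthetic chromatogram: a narrow analyte peak riding on a slow drift.
t = np.linspace(0, 10, 2000)
baseline = 0.5 + 0.01 * t                              # slow instrumental drift
peak = 2.0 * np.exp(-0.5 * ((t - 5.0) / 0.05) ** 2)    # narrow analyte peak
signal = baseline + peak

# Rolling minimum with a window much wider than the peak rides under the
# peak; smoothing then removes the staircase artefacts of the min filter.
est_base = uniform_filter1d(minimum_filter1d(signal, size=401), size=401)
corrected = signal - est_base

dt = t[1] - t[0]
window = (t > 4.5) & (t < 5.5)                         # delineated peak region
area = corrected[window].sum() * dt
true_area = peak[window].sum() * dt
print(area, true_area)
```

Even in this benign case the residual baseline inside the peak window biases the integrated area by a few percent, which is exactly the kind of error the paper's diagnostic tests (calibration offset, proportionality, reproducibility) are designed to expose.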

  15. Modeling of two-hot-arm horizontal thermal actuator

    NASA Astrophysics Data System (ADS)

    Yan, Dong; Khajepour, Amir; Mansour, Raafat

    2003-03-01

    Electrothermal actuators have a very promising future in MEMS applications since they can generate large deflection and force with low actuating voltages and small device areas. In this study, a lumped model of a two-hot-arm horizontal thermal actuator is presented. In order to prove the accuracy of the lumped model, finite element analysis (FEA) and experimental results are provided. The two-hot-arm thermal actuator has been fabricated using the MUMPs process. Both the experimental and FEA results are in good agreement with the results of lumped modeling.

  16. Electrical Lumped Model Examination for Load Variation of Circulation System

    NASA Astrophysics Data System (ADS)

    Koya, Yoshiharu; Ito, Mitsuyo; Mizoshiri, Isao

    Modeling and analysis of the circulation system enable its characteristics in the body to be determined, and many models of the circulation system have accordingly been proposed. However, they are complicated because they include a large number of elements. We therefore proposed a comparatively simple complete circulation model in the form of a lumped electrical circuit. In this paper, we examine the effectiveness of this complete circulation model. We use normal, angina pectoris, dilated cardiomyopathy, and myocardial infarction cases to evaluate the ventricular contraction function.
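    The lumped electrical analogy can be sketched with the simplest such circuit, the two-element Windkessel, where compliance plays the role of a capacitor and peripheral resistance of a resistor: C dP/dt = Q_in(t) − P/R. This is only an illustration of the modelling idea; the paper's complete circulation model contains many more elements, and the parameter values below are illustrative:

```python
import numpy as np

R = 1.0      # peripheral resistance [mmHg s/mL]
C = 1.5      # arterial compliance   [mL/mmHg]
T = 0.8      # cardiac period [s]
dt = 1e-4    # integration step [s]

def q_in(t):
    """Half-sine ejection during systole (0.3 s), zero flow in diastole."""
    tc = t % T
    return 450.0 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0

P = 80.0                                  # initial pressure [mmHg]
trace = []
for n in range(int(12 * T / dt)):         # integrate to a periodic steady state
    P += dt * (q_in(n * dt) - P / R) / C  # forward-Euler step of C dP/dt = Q - P/R
    trace.append(P)

beat = np.array(trace[-int(T / dt):])
print(beat.max(), beat.min(), beat.mean())   # systolic/diastolic-like excursion
```

In periodic steady state the mean pressure settles at R times the mean inflow, the circuit-theory result that makes such lumped models easy to sanity-check against physiology.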

  17. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

    Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before a rigid and a deformable registration were applied to map the pCTs onto the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two projections were subtracted from each other, Gaussian and median filtered, and then subtracted from the raw projections and finally reconstructed to the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single beam spot scanning proton plans (0-360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1-9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening for CBCT-based image/dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.

  18. Seasonal and Inter-Annual Patterns of Phytoplankton Community Structure in Monterey Bay, CA Derived from AVIRIS Data During the 2013-2015 HyspIRI Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.

    2015-12-01

    There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, to track energy flow through ecosystems, and to identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species, evaluating iron stress of phytoplankton, and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. As a consequence, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. However, the coastal marine environment has special atmospheric correction needs due to error that may be introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals for use in estimating chlorophyll (OC3 algorithm) and phytoplankton functional type (PHYDOTax algorithm) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons - upwelling and the warm, stratified oceanic period for 2013 and 2014. 
These two periods are dominated by either diatom blooms (occasionally toxic) or red tides. Results presented include chlorophyll and phytoplankton community structure and in-water validation data for these dates during these two seasons.

  19. An algebraic algorithm for nonuniformity correction in focal-plane arrays.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C

    2002-09-01

    A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on the use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.
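    The algebraic idea can be shown in its whole-pixel special case (the paper handles subpixel shifts with a linear-interpolation model): comparing two frames of the same scene shifted by one pixel cancels the scene exactly and leaves bias *differences*, which cumulative summation integrates back into the bias up to an unobservable constant. A 1D sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two frames of the same 1D scene, shifted by one pixel between frames,
# observed through a fixed per-detector bias.
n = 256
scene = np.cumsum(rng.normal(0, 1, n + 1))   # arbitrary smooth-ish scene
bias = rng.normal(0, 0.5, n)

frame1 = scene[1:] + bias        # frame1[i] = s[i+1] + b[i]
frame2 = scene[:-1] + bias       # frame2[i] = s[i]   + b[i]  (shifted scene)

# frame1[i] - frame2[i+1] = b[i] - b[i+1]: the scene cancels algebraically.
db = frame1[:-1] - frame2[1:]
b_hat = np.concatenate([[0.0], -np.cumsum(db)])

# Fix the unobservable constant offset (here, for comparison against truth).
b_hat += bias.mean() - b_hat.mean()
print(np.max(np.abs(b_hat - bias)))   # recovered to numerical precision
```

Because the scene cancels exactly rather than being estimated statistically, no irradiance diversity is needed, which is the robustness property the abstract highlights.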

  20. RAPID COMMUNICATION: Optical distortion correction for liquid droplet visualization using the ray tracing method: further considerations

    NASA Astrophysics Data System (ADS)

    Minor, G.; Oshkai, P.; Djilali, N.

    2007-11-01

    The original work of Kang et al (2004 Meas. Sci. Technol. 15 1104-12) presents a scheme for correcting optical distortion caused by the curved surface of a droplet, and illustrates its application in PIV measurements of the velocity field inside evaporating liquid droplets. In this work we re-derive the correction algorithm and show that several terms in the original algorithm proposed by Kang et al are erroneous. This was not evident in the original work because the erroneous terms are negligible for droplets with approximately hemispherical shapes. However, for the more general situation of droplets that have shapes closer to that of a sphere, with heights much larger than their contact-line radii, these errors become quite significant. The corrected algorithm is presented and its application illustrated in comparison with that of Kang et al.

  1. The evaluation of correction algorithms of intensity nonuniformity in breast MRI images: a phantom study

    NASA Astrophysics Data System (ADS)

    Borys, Damian; Serafin, Wojciech; Gorczewski, Kamil; Kijonka, Marek; Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    The aim of this work was to test the most popular and essential algorithms for intensity nonuniformity correction in breast MRI. In this type of imaging, especially in the proximity of the coil, the signal is strong but can also exhibit inhomogeneities. The evaluated correction methods were N3, N3FCM, N4, Nonparametric, and SPM. For testing purposes, a uniform phantom was imaged with a breast MRI coil. To quantify the results, two measures were used: integral uniformity and standard deviation. For each algorithm, the minimum, average, and maximum values of both evaluation measures were calculated within a binary mask created for the phantom. As a result, two methods, N3FCM and N4, obtained the lowest values of these measures; after correction with the latter, the phantom also appeared visually most uniform.
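    The two evaluation measures are straightforward to compute on the masked phantom; integral uniformity is commonly defined as IU = (max − min)/(max + min). A sketch on an illustrative image (the shading model and values are assumptions, not the study's data):

```python
import numpy as np

def uniformity_measures(image, mask):
    """Integral uniformity and standard deviation over a binary mask."""
    vals = image[mask]
    iu = (vals.max() - vals.min()) / (vals.max() + vals.min())
    return iu, vals.std()

rng = np.random.default_rng(4)
yy, xx = np.mgrid[0:64, 0:64]
mask = (xx - 32) ** 2 + (yy - 32) ** 2 < 28 ** 2      # circular phantom mask

flat = np.full((64, 64), 100.0) + rng.normal(0, 1.0, (64, 64))
shaded = flat * (1.0 + 0.3 * yy / 63.0)               # coil-proximity-like shading

iu_flat, sd_flat = uniformity_measures(flat, mask)
iu_shaded, sd_shaded = uniformity_measures(shaded, mask)
print(iu_flat, iu_shaded)    # nonuniformity raises both measures
```

Running both measures before and after each correction algorithm, within the same mask, is what allows the algorithms to be ranked as in the study.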

  2. The urban energy balance of a lightweight low-rise neighborhood in Andacollo, Chile

    NASA Astrophysics Data System (ADS)

    Crawford, Ben; Krayenhoff, E. Scott; Cordy, Paul

    2018-01-01

    Worldwide, the majority of rapidly growing neighborhoods are found in the Global South. They often exhibit different building construction and development patterns than the Global North, and urban climate research in many such neighborhoods has to date been sparse. This study presents local-scale observations of net radiation (Q*) and sensible heat flux (Q_H) from a lightweight low-rise neighborhood in the desert climate of Andacollo, Chile, and compares observations with results from a process-based urban energy-balance model (TUF3D) and a local-scale empirical model (LUMPS) for a 14-day period in autumn 2009. This is a unique neighborhood-climate combination in the urban energy-balance literature, and results show good agreement between observations and models for Q* and Q_H. The unmeasured latent heat flux (Q_E) is modeled with an updated version of TUF3D and two versions of LUMPS (a forward and an inverse application). Both LUMPS implementations predict slightly higher Q_E than TUF3D, which may indicate a bias in LUMPS parameters towards mid-latitude, non-desert climates. Overall, the energy balance is dominated by sensible and storage heat fluxes, with mean daytime Bowen ratios of 2.57 (observed Q_H/LUMPS Q_E) to 3.46 (TUF3D). Storage heat flux (ΔQ_S) is modeled with TUF3D, the empirical objective hysteresis model (OHM), and the inverse LUMPS implementation. Agreement between models is generally good; the OHM-predicted diurnal cycle deviates somewhat relative to the other two models, likely because OHM coefficients are not specified for the roof and wall construction materials found in this neighborhood. New facet-scale and local-scale OHM coefficients are developed based on modeled ΔQ_S and observed Q*. Coefficients in the empirical models OHM and LUMPS are derived from observations in primarily non-desert climates in European/North American neighborhoods and must be updated as measurements in lightweight low-rise (and other) neighborhoods in various climates become available.
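    The objective hysteresis model is a simple weighted sum over surface facets, ΔQ_S = Σᵢ fᵢ (a1ᵢ Q* + a2ᵢ dQ*/dt + a3ᵢ), with the a2 term producing the characteristic hysteresis between storage and net radiation. A sketch with placeholder coefficients (not values fitted for the Andacollo neighborhood) and an idealized diurnal Q* forcing:

```python
import numpy as np

# Idealized diurnal net radiation cycle [W m^-2], half-hourly.
hours = np.arange(0, 24, 0.5)
Q = 500.0 * np.maximum(np.sin(np.pi * (hours - 6) / 12), 0.0) - 60.0
dQdt = np.gradient(Q, 0.5)                 # dQ*/dt [W m^-2 h^-1]

# OHM facets: (plan-area fraction f, a1 [-], a2 [h], a3 [W m^-2]).
# Placeholder coefficients for illustration only.
facets = [
    (0.4, 0.35, 0.30, -30.0),   # e.g. roofs
    (0.4, 0.30, 0.25, -25.0),   # e.g. roads/ground
    (0.2, 0.10, 0.20, -15.0),   # e.g. vegetation
]

# dQs = sum_i f_i * (a1_i*Q + a2_i*dQ/dt + a3_i)
dQs = sum(f * (a1 * Q + a2 * dQdt + a3) for f, a1, a2, a3 in facets)
print(dQs.max(), dQs.min())    # daytime storage gain, nighttime release
```

Because the coefficients are purely empirical, deriving facet-scale values for lightweight construction materials (as the study does) is the only way to make OHM transferable to such neighborhoods.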

  3. Evaluation and analysis of SEASAT-A Scanning Multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    An evaluation of the versions of the SEASAT-A SMMR antenna pattern correction (APC) algorithm is presented. Two efforts are focused upon in the APC evaluation: the intercomparison of the interim, box, cross, and nominal APC modes; and the development of software to facilitate the creation of matched spacecraft and surface truth data sets which are located together in time and space. The problems discovered in earlier versions of the APC, now corrected, are discussed.

  4. Lumped Parameter Model (LPM) for Light-Duty Vehicles

    EPA Pesticide Factsheets

    EPA’s Lumped Parameter Model (LPM) is a free, desktop computer application that estimates the effectiveness (CO2 Reduction) of various technology combinations or “packages,” in a manner that accounts for synergies between technologies.

  5. Rogue waves and lump solitons for a ?-dimensional B-type Kadomtsev-Petviashvili equation in fluid dynamics

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Tian, Bo; Xie, Xi-Yang; Chai, Jun; Yin, Hui-Min

    2018-07-01

    Under investigation is a ?-dimensional B-type Kadomtsev-Petviashvili equation, which has applications in the propagation of non-linear waves in fluid dynamics. Through the Hirota method and the extended homoclinic test technique, we obtain the breather-type kink soliton solutions and breather rational soliton solutions. Rogue-wave solutions are derived from the breather rational solitons with respect to x. Amplitudes of the breather-type kink solitons and rogue waves decrease as a non-zero parameter in the equation, ?, increases when ?. In addition, dark rogue waves are derived when ?. Furthermore, with the aid of the Hirota method and symbolic computation, two types of lump solitons are obtained with different choices of the parameters. We graphically study the lump solitons related to the parameter ?; the amplitude of the lump soliton is negatively correlated with ? when ?.

  6. Performance analysis of FET microwave devices by use of extended spectral-element time-domain method

    NASA Astrophysics Data System (ADS)

    Sheng, Yijun; Xu, Kan; Wang, Daoxiang; Chen, Rushan

    2013-05-01

    The extended spectral-element time-domain (SETD) method is employed to analyse field-effect transistor (FET) microwave devices. To incorporate the contribution of the FET devices into the electromagnetic simulation, the SETD method is extended by introducing a lumped current term into the vector Helmholtz equation. The change of current through each lumped component is expressed in terms of the change of voltage via the corresponding equivalent-circuit model; the electric fields around a lumped component are in turn influenced by the voltage change across it, and vice versa, so a global electromagnetic-circuit coupling is built directly. The fully explicit solving scheme is maintained in the extended SETD method, which saves CPU time accordingly. Three practical FET microwave devices are analysed in this article, and the numerical results demonstrate the capability and accuracy of the method.

  7. Model-based diagnosis through Structural Analysis and Causal Computation for automotive Polymer Electrolyte Membrane Fuel Cell systems

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare

    2017-07-01

    The present paper proposes an advanced approach for fault detection and isolation in Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations, and a set of faults which may occur in the system (related to both auxiliary-component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The results prove that the algorithm can theoretically detect and isolate almost all of the considered faults using only stack voltage and temperature sensors, a significant advantage from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves univocal fault isolation.

  8. Bio-Inspired Genetic Algorithms with Formalized Crossover Operators for Robotic Applications.

    PubMed

    Zhang, Jie; Kang, Man; Li, Xiaojuan; Liu, Geng-Yang

    2017-01-01

    Genetic algorithms are widely adopted to solve optimization problems in robotic applications. In such safety-critical systems, it is vitally important to formally prove correctness when genetic algorithms are applied. This paper focuses on the formal modeling of crossover operations, which are among the most important operations in genetic algorithms. Specifically, we formalize crossover operations for the first time with higher-order logic based on HOL4, which is easy to deploy thanks to its user-friendly programming environment. With correctness-guaranteed formalized crossover operations, we can safely apply them in robotic applications. We implement our technique to solve a path-planning problem using a genetic algorithm with our formalized crossover operations, and the results show the effectiveness of our technique.

  9. Algorithmic detectability threshold of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  10. Experimental Evaluation of a Deformable Registration Algorithm for Motion Correction in PET-CT Guided Biopsy.

    PubMed

    Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet

    2013-01-01

    Positron emission tomography computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to artifacts from respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation-correction errors. To address these problems, we present a method for motion correction that relies on respiratory-gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are BSpline and symmetric-forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point; these deformations are then composed to find the deformation with respect to a reference time point. We evaluate the two algorithms and optimization approaches using respiratory-gated CT images obtained from 7 patients. Our results show that, overall, the BSpline registration algorithm with the reference optimization approach gives the best results.

  11. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    NASA Astrophysics Data System (ADS)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with the development of small or large holes within the lung. These areas have Hounsfield unit (HU) values approaching -1000, and it is possible to detect and quantify them using a simple density-mask technique. However, the edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects for them. The first step applies inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The second step computes the antero-posterior density gradient caused by gravity and corrects for it. In a third step, motion artefacts are corrected by normalized averaging, thresholding, and region growing. Twenty healthy volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density-mask technique it was not possible to separate persons with disease from those without; our algorithm improved the separation of the two groups considerably. The algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
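
    The simple density-mask technique that this record takes as its starting point can be sketched in a few lines; the -950 HU cutoff and the toy image are illustrative assumptions (the abstract only says emphysematous holes approach -1000 HU).

```python
def emphysema_index(hu_rows, threshold=-950.0):
    """Fraction of voxels below a density threshold (simple density mask).

    `hu_rows` is a 2-D grid of Hounsfield values; the threshold is an
    illustrative choice, not a value taken from the paper.
    """
    vals = [v for row in hu_rows for v in row]
    return sum(1 for v in vals if v < threshold) / len(vals)

# Toy 10x10 'lung': mostly -800 HU tissue with one 2x2 low-density hole.
img = [[-800.0] * 10 for _ in range(10)]
for r in range(2, 4):
    for c in range(2, 4):
        img[r][c] = -980.0
print(emphysema_index(img))  # 4 of 100 voxels -> 0.04
```

    The paper's point is that this raw fraction is contaminated by reconstruction, gravity, and motion artefacts, which is why the three correction steps are applied before thresholding.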

  12. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison with several previously published methods. The algorithm not only corrects common FPN such as stripes effectively, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we present an architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The FPGA-based hardware implementation has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
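
    The temporal high-pass idea can be sketched as follows. This is a minimal illustration, not the paper's THP&GM algorithm: the first-frame offset initialisation, the fixed update rate, and the fixed threshold are all assumptions standing in for the adaptive time-domain threshold described above.

```python
def temporal_highpass_nuc(frames, alpha=0.05, threshold=20.0):
    """Per-pixel temporal high-pass nonuniformity correction (sketch).

    The fixed-pattern offset is tracked as a slow temporal low-pass of
    each pixel; subtracting it leaves the high-pass component. The offset
    update is frozen when the frame-to-frame change exceeds `threshold`,
    a crude stand-in for adaptive-threshold ghosting control.
    """
    offset = [row[:] for row in frames[0]]  # initialise from first frame
    corrected = []
    for frame in frames:
        out = []
        for i, row in enumerate(frame):
            out_row = []
            for j, v in enumerate(row):
                diff = v - offset[i][j]
                if abs(diff) < threshold:      # judged static: keep adapting
                    offset[i][j] += alpha * diff
                out_row.append(diff)
            out.append(out_row)
        corrected.append(out)
    return corrected

# A fully static input is indistinguishable from fixed pattern,
# so it is absorbed into the offset map and the output is zero.
static = [[[100.0, 110.0], [90.0, 100.0]]] * 3
print(temporal_highpass_nuc(static)[-1])  # -> [[0.0, 0.0], [0.0, 0.0]]
```

    That last property is exactly why purely temporal methods need motion in the scene and ghosting control: anything stationary, including scene detail, is at risk of being filtered out along with the FPN.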

  13. The E-Step of the MGROUP EM Algorithm. Program Statistics Research Technical Report No. 93-37.

    ERIC Educational Resources Information Center

    Thomas, Neal

    Mislevy (1984, 1985) introduced an EM algorithm for estimating the parameters of a latent distribution model that is used extensively by the National Assessment of Educational Progress. Second order asymptotic corrections are derived and applied along with more common first order asymptotic corrections to approximate the expectations required by…

  14. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added in ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on the beam radius and the spot size. We also deduce the expected post-surgical corneal radius and asphericity when this factor is considered. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  15. Research on correction algorithm of laser positioning system based on four quadrant detector

    NASA Astrophysics Data System (ADS)

    Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia

    2018-02-01

    This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experiment system is built based on it. In practical applications, a four-quadrant laser positioning system suffers not only from interference by background light and detector dark-current noise, but also from random noise, limited system stability, and spot-equivalent error, so system calibration and correction are very important. This paper analyses the various factors contributing to system positioning error and then proposes an algorithm for correcting it. Simulation and experimental results show that the corrected algorithm reduces the effect of system error on positioning and improves the positioning accuracy.
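
    The basic four-quadrant position estimate that such a system corrects can be sketched as below. The quadrant labelling convention is an assumption (it varies between devices), and the linear relation only holds near the detector centre, which is part of why the calibration described above is needed.

```python
def quad_position(a, b, c, d):
    """Normalised spot position from four-quadrant photocurrents.

    Assumed convention: A upper-right, B upper-left, C lower-left,
    D lower-right. Returns (x, y) in [-1, 1]; approximately linear
    only for a spot near the detector centre.
    """
    s = a + b + c + d
    if s == 0:
        raise ValueError("no incident power")
    x = ((a + d) - (b + c)) / s   # right half minus left half
    y = ((a + b) - (c + d)) / s   # top half minus bottom half
    return x, y

# Spot centred: all quadrants see equal power.
print(quad_position(1.0, 1.0, 1.0, 1.0))  # -> (0.0, 0.0)
```

    Background light adds a common-mode term to all four currents, which biases the normalising sum `s`; dark-current subtraction and the error-correction algorithm of the paper address exactly such effects.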

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
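
    A toy scalar instance of the prediction-correction template may make the two steps concrete. Everything here is an illustrative assumption, not the paper's algorithm: the cost is an unconstrained quadratic 0.5*(x - r_k)^2, the prediction is a finite difference of past optima rather than a Hessian-based step, and the correction is plain gradient descent.

```python
def track_minimizer(r, steps_per_sample=1, gamma=0.5):
    """Prediction-correction tracking of argmin_x 0.5*(x - r_k)^2 (toy).

    Prediction: extrapolate the drift of the optimum from the observed
    change in past targets. Correction: gradient steps on the newly
    revealed cost. Returns the per-sample tracking errors.
    """
    x = r[0]
    prev = r[0]
    errs = []
    for k in range(1, len(r)):
        x = x + (r[k - 1] - prev)          # prediction from observed drift
        prev = r[k - 1]
        for _ in range(steps_per_sample):  # correction: grad of 0.5*(x-r_k)^2
            x = x - gamma * (x - r[k])
        errs.append(abs(x - r[k]))
    return errs

# For a steadily ramping target the tracking error shrinks geometrically.
errs = track_minimizer([0.1 * k for k in range(20)])
```

    With the prediction step the error contracts by the correction factor at every sample; dropping the prediction leaves a persistent lag proportional to the drift, which is the qualitative gap the paper's tracking-error bounds quantify.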

  17. 46 CFR 148.245 - Direct reduced iron (DRI); lumps, pellets, and cold-molded briquettes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... another during periods of rain or snow. (e) DRI lumps, pellets, or cold-molded briquettes may not be... percent hydrogen, by volume, is maintained throughout the voyage in any hold containing these materials...

  18. Algorithms and applications of aberration correction and American standard-based digital evaluation in surface defects evaluating system

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-11-01

    The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging together with sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion in the SDES badly affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting, and bilinear interpolation, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically with high accuracy. Subsequently, in order to digitally evaluate surface defects against the American military standard MIL-PRF-13830B, an American-standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. The algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which makes it well suited for high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.

  19. Construct validation of an interactive digital algorithm for ostomy care.

    PubMed

    Beitz, Janice M; Gerlach, Mary A; Schafer, Vickie

    2014-01-01

    The purpose of this study was to evaluate construct validity for a previously face and content validated Ostomy Algorithm using digital real-life clinical scenarios. A cross-sectional, mixed-methods Web-based survey design study was conducted. Two hundred ninety-seven English-speaking RNs completed the study; participants practiced in both acute care and postacute settings, with 1 expert ostomy nurse (WOC nurse) and 2 nonexpert nurses. Following written consent, respondents answered demographic questions and completed a brief algorithm tutorial. Participants were then presented with 7 ostomy-related digital scenarios consisting of real-life photos and pertinent clinical information. Respondents used the 11 assessment components of the digital algorithm to choose management options. Participant written comments about the scenarios and the research process were collected. The mean overall percentage of correct responses was 84.23%. Mean percentage of correct responses for respondents with a self-reported basic ostomy knowledge was 87.7%; for those with a self-reported intermediate ostomy knowledge was 85.88% and those who were self-reported experts in ostomy care achieved 82.77% correct response rate. Five respondents reported having no prior ostomy care knowledge at screening and achieved an overall 45.71% correct response rate. No negative comments regarding the algorithm were recorded by participants. The new standardized Ostomy Algorithm remains the only face, content, and construct validated digital clinical decision instrument currently available. Further research on application at the bedside while tracking patient outcomes is warranted.

  20. Fermion number of twisted kinks in the NJL2 model revisited

    NASA Astrophysics Data System (ADS)

    Thies, Michael

    2018-03-01

    As a consequence of axial current conservation, fermions cannot be bound in localized lumps in the massless Nambu-Jona-Lasinio model. In the case of twisted kinks, this manifests itself in a cancellation between the valence fermion density and the fermion density induced in the Dirac sea. To attribute the correct fermion number to these bound states requires an infrared regularization. Recently, this has been achieved by introducing a bare fermion mass, at least in the nonrelativistic regime of small twist angles and fermion numbers. Here, we propose a simpler regularization using a finite box which preserves integrability and can be applied at any twist angle. A consistent and physically plausible assignment of fermion number to all twisted kinks emerges.

  1. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    NASA Astrophysics Data System (ADS)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    A subset of the attributes of hydrologic feature data in the national geographic census are unclear; the current solution to this problem is manual filling, which is inefficient and error-prone. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness, and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method; the results show that it is reasonable and efficient.

  2. Correction of rotational distortion for catheter-based en face OCT and OCT angiography

    PubMed Central

    Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.

    2015-01-01

    We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133

  3. Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.

    PubMed

    Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris

    2010-07-15

    The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
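
    The core idea of consensus-based reference correction can be sketched as below. This is an iCORN-style illustration under stated assumptions, not the published implementation: real iCORN realigns reads each iteration and handles indels and base qualities, while this sketch only corrects substitutions from a precomputed pileup with hypothetical depth and consensus cutoffs.

```python
from collections import Counter

def consensus_correct(reference, pileup, min_depth=4, min_frac=0.8):
    """One round of read-consensus correction of a reference sequence.

    `pileup[i]` is the list of read bases aligned over position i.
    A base is corrected only when coverage and consensus are strong;
    `min_depth` and `min_frac` are illustrative thresholds.
    """
    corrected = list(reference)
    n_changes = 0
    for i, bases in enumerate(pileup):
        if len(bases) < min_depth:
            continue                       # too shallow to trust
        base, count = Counter(bases).most_common(1)[0]
        if base != corrected[i] and count / len(bases) >= min_frac:
            corrected[i] = base
            n_changes += 1
    return "".join(corrected), n_changes

ref = "ACGTACGT"
pile = [list("AAAAA"), list("CCCCC"), list("TTTTT"), list("TTTTT"),
        list("AA"),    list("CCCCC"), list("GGGGG"), list("TTTTT")]
print(consensus_correct(ref, pile))  # -> ('ACTTACGT', 1)
```

    Iterating rounds like this until `n_changes` reaches zero mirrors the "iterative" aspect of the method: each correction can improve the next round's alignments.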

  4. Bandwidth correction for LED chromaticity based on Levenberg-Marquardt algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Chan; Jin, Shiqun; Xia, Guo

    2017-10-01

    Light emitting diode (LED) sources are widely employed in industrial applications and scientific research. With a spectrometer, the chromaticity of an LED can be measured; however, a chromaticity shift will occur due to the broadening effects of the spectrometer. In this paper, an approach to bandwidth correction for LED chromaticity based on the Levenberg-Marquardt algorithm is put forward. We compare the chromaticity of simulated LED spectra using the proposed method and the differential-operator method for bandwidth correction. The experimental results show that the proposed approach achieves excellent bandwidth-correction performance, which proves its effectiveness. The method has also been tested on true blue LED spectra.
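
    A minimal Levenberg-Marquardt loop may clarify the optimisation machinery the abstract invokes. This is a generic one-parameter illustration, not the paper's bandwidth-correction model: the Gaussian model, the starting value, and the damping schedule are all assumptions.

```python
import math

def lm_fit_sigma(xs, ys, sigma0=1.0, iters=50):
    """Levenberg-Marquardt fit of the width s of g(x; s) = exp(-x^2/(2 s^2)).

    Toy 1-parameter example of the LM step: a Gauss-Newton update with
    adaptive damping `lam` that grows when a step fails to reduce the
    sum of squared residuals and shrinks when it succeeds.
    """
    s, lam = sigma0, 1e-3

    def residuals(s):
        return [y - math.exp(-x * x / (2 * s * s)) for x, y in zip(xs, ys)]

    def sse(r):
        return sum(v * v for v in r)

    r = residuals(s)
    for _ in range(iters):
        # Model Jacobian w.r.t. s: dg/ds = g * x^2 / s^3
        J = [math.exp(-x * x / (2 * s * s)) * x * x / s ** 3 for x in xs]
        g = sum(Ji * ri for Ji, ri in zip(J, r))   # J^T r
        h = sum(Ji * Ji for Ji in J) + lam         # damped J^T J
        s_new = s + g / h
        r_new = residuals(s_new)
        if sse(r_new) < sse(r):                    # accept step, relax damping
            s, r, lam = s_new, r_new, lam / 10
        else:                                      # reject step, damp harder
            lam *= 10
    return s

xs = [-2, -1, 0, 1, 2]
ys = [math.exp(-x * x / (2 * 1.5 ** 2)) for x in xs]  # data with width 1.5
print(round(lm_fit_sigma(xs, ys), 3))
```

    The damping term is what distinguishes LM from plain Gauss-Newton: far from the optimum it behaves like small-step gradient descent, near the optimum it approaches the fast Gauss-Newton update.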

  5. Bidirectional Contrast agent leakage correction of dynamic susceptibility contrast (DSC)-MRI improves cerebral blood volume estimation and survival prediction in recurrent glioblastoma treated with bevacizumab.

    PubMed

    Leu, Kevin; Boxerman, Jerrold L; Lai, Albert; Nghiemphu, Phioanh L; Pope, Whitney B; Cloughesy, Timothy F; Ellingson, Benjamin M

    2016-11-01

    To evaluate a leakage correction algorithm for T1 and T2* artifacts arising from contrast agent extravasation in dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI) that accounts for bidirectional contrast agent flux, and to compare relative cerebral blood volume (CBV) estimates and overall survival (OS) stratification from this model to those made with the unidirectional and uncorrected models in patients with recurrent glioblastoma (GBM). We determined median rCBV within contrast-enhancing tumor before and after bevacizumab treatment in patients (75 scans on 1.5T, 19 scans on 3.0T) with recurrent GBM without leakage correction and with application of the unidirectional and bidirectional leakage correction algorithms to determine whether rCBV stratifies OS. Decreased post-bevacizumab rCBV from baseline using the bidirectional leakage correction algorithm significantly correlated with longer OS (Cox, P = 0.01), whereas rCBV change using the unidirectional model (P = 0.43) or the uncorrected rCBV values (P = 0.28) did not. Estimates of rCBV computed with the two leakage correction algorithms differed on average by 14.9%. Accounting for T1 and T2* leakage contamination in DSC-MRI using a two-compartment, bidirectional rather than unidirectional exchange model might improve post-bevacizumab survival stratification in patients with recurrent GBM. J. Magn. Reson. Imaging 2016;44:1229-1237. © 2016 International Society for Magnetic Resonance in Medicine.

  6. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation.

    PubMed

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-05-08

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky-scene observations. The ripple residual nonuniformity seriously affects the imaging quality, especially for small-target detection, and is difficult to eliminate using calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail preservation and convergence speed for ripple RNU correction. Furthermore, we present an architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.

  7. Reduction of chemical reaction models

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael

    1991-01-01

    An attempt is made to reconcile the different terminologies pertaining to reduction of chemical reaction models. The approaches considered include global modeling, response modeling, detailed reduction, chemical lumping, and statistical lumping. The advantages and drawbacks of each of these methods are pointed out.

  8. Lump Solitons in Surface Tension Dominated Flows

    NASA Astrophysics Data System (ADS)

    Milewski, Paul; Berger, Kurt

    1999-11-01

    The Kadomtsev-Petviashvili I equation (KPI), which models small-amplitude, weakly three-dimensional, surface-tension-dominated long waves, is integrable and allows for algebraically decaying lump solitary waves. It is not known (theoretically or numerically) whether the full free-surface Euler equations support such solutions. We consider an intermediate model, the generalised Benney-Luke equation (gBL), which is isotropic (not weakly three-dimensional) and contains KPI as a limit. We show numerically that: 1. gBL supports lump solitary waves; 2. these waves collide elastically and are stable; 3. they are generated by resonant flow over an obstacle.
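
    For reference, KP-I in one common normalisation (sign conventions differ between authors; this form is an assumption, chosen so that the surface-tension-dominated case carries the minus sign):

```latex
\left(u_t + 6\,u\,u_x + u_{xxx}\right)_x - 3\,u_{yy} = 0
```

    Its lump solitary waves decay algebraically, $u = O\!\left((x^2+y^2)^{-1}\right)$, in contrast to the exponentially localised line solitons; the abstract's question is whether such algebraically decaying lumps survive in the isotropic gBL model and in the full Euler equations.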

  9. Application of Biologically-Based Lumping To Investigate the ...

    EPA Pesticide Factsheets

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these experiments, though this simplification provides little insight into the impact of a mixture's chemical composition on toxicologically-relevant metabolic interactions that may occur among its constituents. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically-based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate performance of our PBPK model. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course kinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for the 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 non-target chemicals. Application of this biologic

  10. Poster - Thur Eve - 68: Evaluation and analytical comparison of different 2D and 3D treatment planning systems using dosimetry in anthropomorphic phantom.

    PubMed

    Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S

    2012-07-01

    The aim of this study was to evaluate and analytically compare the different calculation algorithms applied in our country's radiotherapy centers, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA-TECDOC-1583). A thorax anthropomorphic phantom (CIRS 002LFC) was used to perform 7 tests that simulate the whole chain of external-beam TPS planning. Doses were measured with ion chambers, and the deviation between measured and TPS-calculated dose was reported. This methodology, which employs the same phantom and the same setup test cases, was tested in 4 different hospitals using 5 different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based algorithms. A total of 84 clinical test-case datasets for different energies and calculation algorithms were produced; the differences at inhomogeneity points with low density (lung) and high density (bone) decreased meaningfully with advanced algorithms. The number of deviations outside the agreement criteria increased with beam energy and decreased with the advancement of the TPS calculation algorithm. Large deviations were seen in some correction-based algorithms, so sophisticated algorithms would be preferred in clinical practice, especially for calculation in inhomogeneous media. Use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits on TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.

  11. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  12. A systematic review of lumped-parameter equivalent circuit models for real-time estimation of lithium-ion battery states

    NASA Astrophysics Data System (ADS)

    Nejad, S.; Gladwin, D. T.; Stone, D. A.

    2016-06-01

    This paper presents a systematic review of the most commonly used lumped-parameter equivalent circuit model structures in lithium-ion battery energy storage applications. These models include the Combined model, the Rint model, two hysteresis models, Randles' model, a modified Randles' model, and two resistor-capacitor (RC) network models with and without hysteresis included. Two variations of lithium-ion cell chemistry, namely lithium iron phosphate (LiFePO4) and lithium nickel-manganese-cobalt oxide (LiNMC), are used for testing purposes. The model parameters and states are recursively estimated using a nonlinear system identification technique based on the dual Extended Kalman Filter (dual-EKF) algorithm. The dynamic performance of the model structures is verified using the results obtained from a self-designed pulsed-current test and an electric vehicle (EV) drive cycle based on the New European Drive Cycle (NEDC) profile over a range of operating temperatures. Analyses of the ten model structures are conducted with respect to state-of-charge (SOC) and state-of-power (SOP) estimation with erroneous initial conditions. Comparatively, both RC model structures provide the best dynamic performance, with outstanding SOC estimation accuracy. For those cell chemistries with large inherent hysteresis levels (e.g. LiFePO4), the RC model with only one time constant is combined with a dynamic hysteresis model to further enhance the performance of the SOC estimator.
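
    The one-time-constant RC structure reviewed above can be simulated in a few lines. This is a forward-simulation sketch only (the dual-EKF state estimation is omitted), and the linear OCV curve and parameter values are placeholder assumptions, not values from the paper.

```python
import math

def simulate_1rc(current, dt, q_as, r0, r1, c1,
                 soc0=1.0, ocv=lambda s: 3.0 + 1.2 * s):
    """Discrete-time simulation of a first-order RC equivalent-circuit cell.

    current: discharge-positive current samples [A]; dt: step [s];
    q_as: capacity [A s]. Terminal voltage v = OCV(soc) - i*R0 - v1,
    where v1 is the polarisation voltage across the R1||C1 branch.
    """
    tau = r1 * c1
    a = math.exp(-dt / tau)                 # exact discretisation of the RC branch
    soc, v1 = soc0, 0.0
    terminal = []
    for i in current:
        v = ocv(soc) - i * r0 - v1          # terminal voltage this step
        terminal.append(v)
        v1 = a * v1 + r1 * (1 - a) * i      # polarisation voltage update
        soc = soc - i * dt / q_as           # coulomb counting
    return terminal, soc

# 10 s of 1 A discharge from a full 1 Ah cell.
v, soc = simulate_1rc([1.0] * 10, dt=1.0, q_as=3600.0,
                      r0=0.05, r1=0.02, c1=1000.0)
```

    In the estimation setting of the paper, `soc` and `v1` become the hidden states of the dual-EKF, with the measured terminal voltage driving the update.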

  13. A study of redundancy management strategy for tetrad strap-down inertial systems. [error detection codes

    NASA Technical Reports Server (NTRS)

    Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.

    1979-01-01

    Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.

  14. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    NASA Astrophysics Data System (ADS)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm. The design of a DSP-based NUC development platform for IRFPAs is described. The DSP hardware platform has low power consumption, with the 32-bit fixed-point DSP TMS320DM643 as its kernel processor. The dependability and extensibility of the software are improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration-parameter update task is assigned a lower priority than video input and output in DSP/BIOS, so that updating the calibration parameters does not affect the video streams. The workflow of the system and the strategy for real-time realization are introduced. Experiments on real infrared image sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
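
    A minimal sketch of the neural-network SBNUC idea (per-pixel gain and offset adapted by LMS toward a local-mean "desired" image, in the spirit of Scribner's classic scheme). The learning rate and frame model are illustrative assumptions, and none of the DSP-specific machinery is represented.

```python
def nn_nuc(frames, lr=0.05):
    """Scene-based neural-network NUC sketch (LMS, local-mean target).

    Each pixel has gain g and offset o; corrected output y = g*x + o.
    The desired value d is the mean of the 4-neighbour corrected
    outputs; LMS nudges (g, o) to shrink (y - d)^2. Illustrative only.
    """
    h, w = len(frames[0]), len(frames[0][0])
    g = [[1.0] * w for _ in range(h)]
    o = [[0.0] * w for _ in range(h)]
    for x in frames:
        y = [[g[i][j] * x[i][j] + o[i][j] for j in range(w)] for i in range(h)]
        for i in range(h):
            for j in range(w):
                nbrs = [y[a][b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < h and 0 <= b < w]
                e = y[i][j] - sum(nbrs) / len(nbrs)   # error vs local mean
                g[i][j] -= lr * e * x[i][j]           # LMS gain update
                o[i][j] -= lr * e                     # LMS offset update
    return g, o
```

    Scene motion (here, a varying global level) is what lets the network separate fixed-pattern noise from true scene content.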

  15. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps; however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.

  16. Individual pore and interconnection size analysis of macroporous ceramic scaffolds using high-resolution X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerban, Saeed, E-mail: saeed.jerban@usherbrooke.ca

    2016-08-15

    The pore interconnection size of β-tricalcium phosphate scaffolds plays an essential role in the bone repair process. Although the μCT technique is widely used in the biomaterial community, it is rarely used to measure the interconnection size because of the lack of algorithms. In addition, the discrete nature of μCT introduces large systematic errors due to the convex geometry of interconnections. We proposed, verified and validated a novel pore-level algorithm to accurately characterize individual pores and interconnections. Specifically, pores and interconnections were isolated, labeled, and individually analyzed with high accuracy. The technique was verified thoroughly by visually inspecting and verifying over 3474 properties of randomly selected pores. This extensive verification process passed a one-percent accuracy criterion. Scanning errors inherent in the discretization, which lead to both dummy and significantly overestimated interconnections, were examined using computer-based simulations and additional high-resolution scanning. Accurate correction charts were then developed and used to reduce the scanning errors. Only after these corrections did the μCT and SEM-based results converge, validating the novel algorithm. Using the novel algorithm, material scientists with access to all geometrical properties of individual pores and interconnections will have a more detailed and accurate description of the substitute architecture and a potentially deeper understanding of the link between geometric and biological interaction. - Highlights: •An algorithm is developed to individually analyze all pores and interconnections. •After pore isolation, the discretization errors in interconnections were corrected. •Dummy interconnections and overestimated sizes were due to thin material walls. •The isolating algorithm was verified through visual inspection (99% accurate). •After correcting for the systematic errors, the algorithm was validated successfully.

  17. Atmospheric correction over coastal waters using multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Li, W.; Charles, G.; Jamet, C.; Zibordi, G.; Schroeder, T.; Stamnes, K. H.

    2017-12-01

    Standard atmospheric correction (AC) algorithms work well in open ocean areas where the water inherent optical properties (IOPs) are correlated with pigmented particles. However, the IOPs of turbid coastal waters may independently vary with pigmented particles, suspended inorganic particles, and colored dissolved organic matter (CDOM). In turbid coastal waters standard AC algorithms often exhibit large inaccuracies that may lead to negative water-leaving radiances (Lw) or remote sensing reflectance (Rrs). We introduce a new atmospheric correction algorithm for coastal waters based on a multilayer neural network (MLNN) machine learning method. We use a coupled atmosphere-ocean radiative transfer model to simulate the Rayleigh-corrected radiance (Lrc) at the top of the atmosphere (TOA) and the Rrs just above the surface simultaneously, and train a MLNN to derive the aerosol optical depth (AOD) and Rrs directly from the TOA Lrc. The SeaDAS NIR algorithm, the SeaDAS NIR/SWIR algorithm, and the MODIS version of the Case 2 regional water - CoastColour (C2RCC) algorithm are included in the comparison with AERONET-OC measurements. The results show that the MLNN algorithm significantly improves retrieval of normalized Lw in blue bands (412 nm and 443 nm) and yields minor improvements in green and red bands. These results indicate that the MLNN algorithm is suitable for application in turbid coastal waters. Application of the MLNN algorithm to MODIS Aqua images in several coastal areas also shows that it is robust and resilient to contamination due to sunglint or adjacency effects of land and cloud edges. The MLNN algorithm is very fast once the neural network has been properly trained and is therefore suitable for operational use. A significant advantage of the MLNN algorithm is that it does not need SWIR bands, which implies significant cost reduction for dedicated OC missions. 
A recent effort has been made to extend the MLNN AC algorithm to extreme atmospheric conditions (i.e., heavily polluted continental aerosols) over coastal areas by including additional aerosol and ocean models to generate the training dataset. Preliminary tests show very good results. Results of applying the extended MLNN algorithm to VIIRS images over the Yellow Sea and East China Sea, areas with extreme atmospheric and marine conditions, will be provided.

  18. Adaptive optics compensation of orbital angular momentum beams with a modified Gerchberg-Saxton-based phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun

    2017-12-01

    Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
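
    The classic Gerchberg-Saxton loop that the paper's SPP-GS variant builds on can be sketched as follows. This is the textbook two-plane algorithm with a naive DFT (to stay dependency-free), not the authors' modified version; the amplitudes and sizes are illustrative.

```python
import cmath, random

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (forward or inverse)."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def gerchberg_saxton(src_amp, far_amp, iters=50, seed=1):
    """Classic GS: alternate between source and Fourier planes,
    enforcing the measured amplitude in each plane while keeping
    the current phase estimate. Returns field and Fourier-plane
    magnitude error."""
    random.seed(seed)
    field = [a * cmath.exp(2j * cmath.pi * random.random()) for a in src_amp]
    for _ in range(iters):
        far = dft(field)
        far = [far_amp[k] * cmath.exp(1j * cmath.phase(far[k]))
               for k in range(len(far))]                    # Fourier constraint
        near = dft(far, inverse=True)
        field = [src_amp[k] * cmath.exp(1j * cmath.phase(near[k]))
                 for k in range(len(near))]                 # source constraint
    err = sum((abs(v) - a) ** 2 for v, a in zip(dft(field), far_amp))
    return field, err
```

    The Fourier-magnitude error of this error-reduction loop is non-increasing with iteration, which is what makes the phase estimate usable as a correction mask.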

  19. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the fixed pattern noise (FPN) caused by nonuniformity using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with a newly presented variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves the calibration precision. The proposed NUC method achieves high correction performance, which is validated quantitatively by experimental results on a simulated test sequence and a real infrared image sequence.

  20. Generalized algebraic scene-based nonuniformity correction algorithm.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2005-02-01

    A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.

  1. Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems

    NASA Technical Reports Server (NTRS)

    Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy

    2007-01-01

    High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10(exp -10) for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed-loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region of the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.

  2. Unweighted least squares phase unwrapping by means of multigrid techniques

    NASA Astrophysics Data System (ADS)

    Pritt, Mark D.

    1995-11-01

    We present a multigrid algorithm for unweighted least squares phase unwrapping. This algorithm applies Gauss-Seidel relaxation schemes to solve the Poisson equation on smaller, coarser grids and transfers the intermediate results to the finer grids. This approach forms the basis of our multigrid algorithm for weighted least squares phase unwrapping, which is described in a separate paper. The key idea of our multigrid approach is to maintain the partial derivatives of the phase data in separate arrays and to correct these derivatives at the boundaries of the coarser grids. This maintains the boundary conditions necessary for rapid convergence to the correct solution. Although the multigrid algorithm is an iterative algorithm, we demonstrate that it is nearly as fast as the direct Fourier-based method. We also describe how to parallelize the algorithm for execution on a distributed-memory parallel processor computer or a network-cluster of workstations.
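
    The Poisson step at the heart of the method can be sketched on a single grid. The sketch below shows only the Gauss-Seidel relaxation and the wrapped-gradient driving term, without the multigrid hierarchy that gives the paper its speed; grid size and iteration count are illustrative.

```python
import math

def wrap(a):
    """Wrap an angle difference into (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def unwrap_ls(psi, iters=3000):
    """Unweighted least-squares unwrap: solve the discrete Poisson
    equation whose source is the divergence of the wrapped phase
    gradients, via Gauss-Seidel relaxation with Neumann boundaries
    (single grid; the paper accelerates this with multigrid)."""
    h, w = len(psi), len(psi[0])
    dx = [[wrap(psi[i][j+1] - psi[i][j]) if j < w - 1 else 0.0
           for j in range(w)] for i in range(h)]
    dy = [[wrap(psi[i+1][j] - psi[i][j]) if i < h - 1 else 0.0
           for j in range(w)] for i in range(h)]
    rho = [[(dx[i][j] - (dx[i][j-1] if j > 0 else 0.0)) +
            (dy[i][j] - (dy[i-1][j] if i > 0 else 0.0))
            for j in range(w)] for i in range(h)]
    phi = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                nbrs = [(a, b) for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < h and 0 <= b < w]
                phi[i][j] = (sum(phi[a][b] for a, b in nbrs) - rho[i][j]) / len(nbrs)
    return phi
```

    For smooth phase (all true gradients under pi in magnitude) the wrapped gradients equal the true ones, so the solution matches the original phase up to an additive constant.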

  3. A 2-DOF model of an elastic rocket structure excited by a follower force

    NASA Astrophysics Data System (ADS)

    Brejão, Leandro F.; da Fonseca Brasil, Reyolando Manoel L. R.

    2017-10-01

    We present a two-degree-of-freedom model of an elastic rocket structure excited by the follower force given by the motor thrust, which is assumed to always act in the direction of the tangent to the deformed shape of the device at its lower tip. The model comprises two massless rigid pinned bars, initially in the vertical position, connected by rotational springs. Lumped masses and dampers are considered at the connections. The generalized coordinates are the angular displacements of the bars with respect to the vertical. We derive the equations of motion via Lagrange's equations and simulate their time evolution using a fourth-order Runge-Kutta time-stepping numerical integration scheme. Results indicate the possible occurrence of stable and unstable vibrations, such as limit cycles.
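
    The time integration can be sketched generically. Below is a classical fourth-order Runge-Kutta step applied, for brevity, to a single undamped oscillator standing in for the paper's two-DOF angular equations of motion; the frequency and state are illustrative.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    with the state y given as a list of floats."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Stand-in dynamics: theta'' = -omega^2 * theta, state y = [theta, theta_dot].
omega = 2.0
f = lambda t, y: [y[1], -omega ** 2 * y[0]]
```

    Integrating over one period (T = pi for omega = 2) should return the state close to its initial condition, a quick sanity check on the integrator.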

  4. Clustering of Variables for Mixed Data

    NASA Astrophysics Data System (ADS)

    Saracco, J.; Chavent, M.

    2016-05-01

    This chapter presents clustering of variables, the aim of which is to lump together strongly related variables. The proposed approach works on a mixed data set, i.e. a data set which contains both numerical and categorical variables. Two variable-clustering algorithms are described: a hierarchical clustering and a k-means type clustering. A brief description of the PCAmix method (a principal component analysis for mixed data) is provided, since the calculation of the synthetic variables summarizing the obtained clusters of variables is based on this multivariate method. Finally, the R packages ClustOfVar and PCAmixdata are illustrated on real mixed data. The PCAmix and ClustOfVar approaches are first used for dimension reduction (step 1) before applying a standard clustering method in step 2 to obtain groups of individuals.
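
    A simplified sketch of the k-means-type scheme for the purely numerical case. As a simplification, the cluster's synthetic variable is taken here to be the standardized mean of its member variables rather than the first principal component used by ClustOfVar, and squared correlation serves as the similarity (so sign-flipped variables still cluster together); categorical variables are not handled.

```python
import math, random

def standardize(col):
    """Center and scale a column to mean 0, variance 1."""
    m = sum(col) / len(col)
    s = math.sqrt(sum((x - m) ** 2 for x in col) / len(col)) or 1.0
    return [(x - m) / s for x in col]

def corr2(a, b):
    """Squared correlation of two already-standardized columns."""
    r = sum(x * y for x, y in zip(a, b)) / len(a)
    return r * r

def cluster_variables(columns, k, iters=20, seed=0):
    """k-means-type clustering of numerical variables.

    Each variable joins the cluster whose synthetic variable it is
    most correlated with (squared correlation); the synthetic variable
    is the standardized mean of member variables, a stand-in for the
    first principal component used by ClustOfVar."""
    random.seed(seed)
    cols = [standardize(c) for c in columns]
    labels = [random.randrange(k) for _ in cols]
    for _ in range(iters):
        centers = []
        for g in range(k):
            members = [c for c, l in zip(cols, labels) if l == g]
            if not members:                       # re-seed an empty cluster
                members = [random.choice(cols)]
            centers.append(standardize([sum(v) / len(members)
                                        for v in zip(*members)]))
        labels = [max(range(k), key=lambda g: corr2(c, centers[g]))
                  for c in cols]
    return labels
```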

  5. Simulating an underwater vehicle self-correcting guidance system with Simulink

    NASA Astrophysics Data System (ADS)

    Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe

    2008-09-01

    Underwater vehicles have already adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, without waiting for research to determine the most effective algorithms. The main challenges facing research on these guidance systems have been effective modeling of the guidance algorithm and a means of analyzing the simulation results. A simulation structure based on Simulink that deals with both issues is proposed. Initially, a mathematical model of the relative motion between the vehicle and the target was developed, which was then encapsulated as a subsystem. Next, steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML, and by processing the mathematical results, the model was shown moving in a visual environment. This process gives more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. The simulation program makes heavy use of modularization and encapsulation, so it has broad applicability to simulations of other dynamic systems.

  6. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive-order-of-scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear-sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.

  7. Lumped parametric model of the human ear for sound transmission.

    PubMed

    Feng, Bin; Gan, Rong Z

    2004-09-01

    A lumped parametric model of the human auditory periphery consisting of six masses suspended with six springs and ten dashpots was proposed. This model will provide the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data, and then determined through a parameter optimization process. The middle-ear transfer function obtained from human temporal bone experiments with laser Doppler interferometers was used to create the target function during the optimization process. It was found that, among the 14 spring and dashpot parameters, five had pronounced effects on the dynamic behavior of the model. A detailed discussion of the sensitivity of those parameters is provided, with appropriate applications to sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of ear function and construction of the physical ear model.
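
    How a lumped mass-spring-dashpot element yields a frequency response can be sketched for a single mass; the six-mass ear model is a coupled version of the same idea. The parameter values below are illustrative placeholders, not the paper's fitted middle-ear values.

```python
import math

def freq_response(m, k, c, freqs_hz):
    """Displacement-per-force magnitude of one mass-spring-dashpot
    element: |H(w)| = 1 / |k - m*w^2 + j*c*w|."""
    out = []
    for f in freqs_hz:
        w = 2.0 * math.pi * f
        out.append(1.0 / abs(complex(k - m * w * w, c * w)))
    return out

# Illustrative parameters (kg, N/m, N*s/m), not fitted ear values:
m, k, c = 1e-6, 1e3, 5e-4
freqs = [10.0 * i for i in range(1, 1000)]
H = freq_response(m, k, c, freqs)
```

    The response peaks near the undamped resonance sqrt(k/m)/(2*pi), about 5 kHz for these placeholder values; fitting several such elements to a measured transfer function is the optimization step the abstract describes.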

  8. Experimental Verification of Guided-Wave Lumped Circuits Using Waveguide Metamaterials

    NASA Astrophysics Data System (ADS)

    Li, Yue; Zhang, Zhijun

    2018-04-01

    Through construction and characterization at microwave frequencies, we experimentally demonstrate our recently developed theory of waveguide lumped circuits, i.e., waveguide metatronics [Sci. Adv. 2, e1501790 (2016), 10.1126/sciadv.1501790], as a method to design subwavelength-scaled analog circuits. In the paradigm of waveguide metatronics, a number of lumped inductors and capacitors can easily be functionally integrated inside the waveguide, which is an irreplaceable transmission line in millimeter-wave and terahertz systems thanks to its low radiation loss and low crosstalk. An example of a multiple-order metatronic filter with layered structures is fabricated using the technique of substrate-integrated waveguides, which can be easily constructed by the printed-circuit-board process. The materials used in the construction are typical microwave materials with positive permittivity, low loss, and negligible dispersion, imitating the plasmonic materials with negative permittivity in the optical domain. The results verify the theory of waveguide metatronics, which provides an efficient platform for functional lumped circuit design for guided-wave processing.

  9. Analytic study of solutions for a (3 + 1) -dimensional generalized KP equation

    NASA Astrophysics Data System (ADS)

    Gao, Hui; Cheng, Wenguang; Xu, Tianzhou; Wang, Gangwei

    2018-03-01

    The (3 + 1)-dimensional generalized KP (gKP) equation is an important nonlinear partial differential equation in theoretical and mathematical physics which can be used to describe nonlinear wave motion. Through the Hirota bilinear method, one-soliton, two-soliton and N-soliton solutions are derived via symbolic computation. Two classes of lump solutions, rationally localized in all directions in space, to the dimensionally reduced cases in (2 + 1) dimensions, are constructed by using a direct method based on the Hirota bilinear form of the equation. This implies that we can derive the lump solutions of the reduced gKP equation from positive quadratic function solutions of the aforementioned bilinear equation. Meanwhile, we obtain interaction solutions between a lump and a kink of the gKP equation. The lump appears from a kink and is swallowed by it as time changes. This work offers a possibility to enrich the variety of dynamical features of solutions for higher-dimensional nonlinear evolution equations.

  10. Spatio-temporal colour correction of strongly degraded movies

    NASA Astrophysics Data System (ADS)

    Islam, A. B. M. Tariqul; Farup, Ivar

    2011-01-01

    The archives of motion pictures represent an important part of precious cultural heritage. Unfortunately, these cinematography collections are vulnerable to distortions such as colour fading, which is beyond the capability of the photochemical restoration process. Spatial colour algorithms such as Retinex and ACE provide helpful tools for restoring strongly degraded colour films, but there are some challenges associated with these algorithms. We present an automatic colour correction technique for digital colour restoration of strongly degraded movie material. The method is based upon the existing STRESS algorithm. In order to cope with the problem of highly correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour correction algorithm. Although the STRESS algorithm is already in itself more efficient than traditional spatial colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial domain of the frames and extended the algorithm to the temporal domain. This way, we were able to achieve an 80 percent reduction in computational time compared to processing every single frame individually. We performed two user experiments and found that the visual quality of the resulting frames was significantly better than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality and computational efficiency.

  11. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

    By utilizing the Gerchberg-Saxton algorithm's ability to rapidly solve for the phase, we experimentally demonstrate a scheme to correct, with good performance, distorted Bessel-Gauss beams resulting from inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is used; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is then computed by feeding the probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.

  12. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-01

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857

  13. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction.

    PubMed

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-13

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods.

  14. Scene-based nonuniformity correction technique for infrared focal-plane arrays.

    PubMed

    Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong

    2009-04-20

    A scene-based nonuniformity correction algorithm is presented to compensate for gain and bias nonuniformity in infrared focal-plane array sensors; it consists of three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem in which both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by the detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and its ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
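
    The line-fitting step can be sketched in isolation: given estimated scene values and the observed outputs of one detector over time, the gain and bias follow from an ordinary least-squares line fit. The interframe scene prediction is not modeled here, and the data used below are synthetic.

```python
def fit_gain_bias(scene, observed):
    """Least-squares fit of observed = gain * scene + bias for one
    detector, from paired samples over time (standard normal
    equations for a straight line)."""
    n = len(scene)
    sx, sy = sum(scene), sum(observed)
    sxx = sum(x * x for x in scene)
    sxy = sum(x * y for x, y in zip(scene, observed))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    bias = (sy - gain * sx) / n
    return gain, bias

def correct(x, gain, bias):
    """Invert the detector response to recover the scene value."""
    return (x - bias) / gain
```

    The fit needs the scene estimate to vary over the samples (otherwise gain and bias are not separable), which is why the interframe prediction step precedes it.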

  15. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
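
    The prediction-correction idea can be sketched on a scalar, unconstrained time-varying quadratic. The prediction follows the drift of the optimality condition (the Hessian is 1 here, so no inversion issue arises) and the correction is a plain gradient step; the paper's constrained, first-order machinery is not represented, and the target trajectory below is illustrative.

```python
import math

def track(T=10.0, h=0.05, alpha=0.8, predict=True):
    """Track argmin_x f(x, t) = 0.5 * (x - sin t)^2 as t advances.

    Prediction: step along the known drift of the minimizer
    (d/dt sin t = cos t; Hessian = 1). Correction: one gradient
    descent step at the new time. Returns steady-state tracking error.
    """
    x, t, errs = 0.0, 0.0, []
    while t < T:
        if predict:
            x += h * math.cos(t)            # prediction: follow the drift
        t += h
        x -= alpha * (x - math.sin(t))      # correction: gradient step
        errs.append(abs(x - math.sin(t)))
    return max(errs[20:])                   # skip initial transient
```

    Running with and without the prediction step shows the expected gap: correction-only lags the moving optimum by O(h), while prediction-correction tracks it to O(h^2).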

  16. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.

  17. A hydrological emulator for global applications - HE v1.0.0

    NASA Astrophysics Data System (ADS)

    Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong

    2018-03-01

    While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computation efficiency of the lumped scheme is 2 orders of magnitude higher than the distributed one and 7 orders more efficient than the VIC model. 
A case study of uncertainty analysis for the world's 16 basins with top annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
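
    For readers unfamiliar with the abcd model underlying the emulator, a minimal one-bucket sketch of the classic monthly formulation (Thomas, 1981) is given below. This is not the authors' revised HE code; the parameter values and inputs are assumptions chosen only to make the example run.

```python
import math

# One monthly time step of the classic abcd water-balance model.
# a, b, c, d are the four calibration parameters; P is precipitation,
# PET potential evapotranspiration, S soil storage, G groundwater storage
# (all in consistent units, e.g. mm/month).
def abcd_step(P, PET, S_prev, G_prev, a=0.98, b=400.0, c=0.3, d=0.1):
    W = P + S_prev                              # available water
    w = (W + b) / (2.0 * a)
    Y = w - math.sqrt(w * w - W * b / a)        # evapotranspiration opportunity
    S = Y * math.exp(-PET / b)                  # end-of-month soil storage
    G = (G_prev + c * (W - Y)) / (1.0 + d)      # groundwater storage
    Q = (1.0 - c) * (W - Y) + d * G             # direct runoff + baseflow
    return Q, S, G

Q, S, G = abcd_step(P=120.0, PET=60.0, S_prev=200.0, G_prev=50.0)
```

    Running one such bucket per basin gives the lumped scheme; running one per grid cell and aggregating gives the distributed scheme, which explains the cost difference reported above.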

  18. Multi-model analysis in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system, so the ensemble hydrological predictions obtained from it do not present the full range of possible streamflow outcomes, producing ensembles that exhibit variance errors such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble that may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions.
The under-dispersion has been largely corrected for short-term predictions. For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain comes simply from adding a member or from the multi-model member itself.
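
    A common way to build the combined hydrograph member is least-squares multi-model averaging (Granger-Ramanathan style). The sketch below is an assumption for illustration, not the study's code; the synthetic data stand in for three lumped-model simulations.

```python
import numpy as np

# Least-squares multi-model averaging: find weights that best combine
# k model simulations into a single hydrograph against observations.
def averaging_weights(sims, obs):
    """sims: (T, k) array of k simulated series; obs: (T,) observed series."""
    w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
    return w

rng = np.random.default_rng(0)
obs = rng.random(50) * 10.0                    # synthetic "observed" flows
# three synthetic models with increasing error levels
sims = np.column_stack([obs + rng.normal(0.0, s, 50) for s in (0.5, 1.0, 2.0)])
w = averaging_weights(sims, obs)
combined = sims @ w                            # the multi-model member
```

    In-sample, the least-squares combination can do no worse than any individual model, which is why the combined hydrograph is a useful extra ensemble member.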

  19. Asian dust aerosol: Optical effect on satellite ocean color signal and a scheme of its correction

    NASA Astrophysics Data System (ADS)

    Fukushima, H.; Toratani, M.

    1997-07-01

    The paper first demonstrates the influence of the Asian dust aerosol (KOSA) on a coastal zone color scanner (CZCS) image which records erroneously low or negative satellite-derived water-leaving radiance, especially in the shorter wavelengths. This suggests the presence of spectrally dependent absorption which was disregarded in past atmospheric correction algorithms. On the basis of the analysis of the scene, a semiempirical optical model of the Asian dust aerosol that relates aerosol single scattering albedo (ωA) to the spectral ratio of aerosol optical thickness between 550 nm and 670 nm is developed. Then, as a modification to a standard CZCS atmospheric correction algorithm (NASA standard algorithm), a scheme which estimates pixel-wise aerosol optical thickness, and in turn ωA, is proposed. The assumption of constant normalized water-leaving radiance at 550 nm is adopted together with a model of the aerosol scattering phase function. The scheme is combined with the standard algorithm, performing atmospheric correction exactly as the standard version does, with a fixed Angstrom coefficient, except where the presence of Asian dust aerosol is detected by a lowered satellite-derived Angstrom exponent. Some of the model parameter values are determined so that the scheme does not produce any spatial discontinuity with the standard scheme. The algorithm was tested against the Japanese Asian dust CZCS scene with parameter values for the spectral dependency of ωA first statistically determined and then optimized for selected pixels. Analysis suggests that the parameter values depend on the Angstrom coefficient assumed for the standard algorithm, which at the same time defines the spatial extent of the area to which the Asian dust scheme is applied. The algorithm was also tested on a Saharan dust scene, showing the relevance of the scheme but with a different parameter setting. 
Finally, the algorithm was applied to a data set of 25 CZCS scenes to produce a monthly composite of pigment concentration for April 1981. Through these analyses, the modified algorithm is considered robust in the sense that it operates most compatibly with the standard algorithm yet performs adaptively in response to the magnitude of the dust effect.

  20. A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.

    PubMed

    Ahn, C B; Cho, Z H

    1987-01-01

    A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-sensitive NMR imaging techniques, including inversion recovery imaging, quadrature-modulated imaging, spectroscopic imaging, and flow imaging. Experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
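
    The first-order step can be illustrated on a 1-D toy signal: for a linear phase ramp exp(i·alpha·n), the lag-1 autocorrelation has phase exactly alpha. This sketch is an assumption for illustration; the paper works on 2-D images and adds the histogram-based zero-order step, omitted here.

```python
import numpy as np

# Estimate a linear phase ramp from the phase of the lag-1 autocorrelation.
def linear_phase_slope(x):
    """x: complex 1-D signal; returns estimated per-sample phase slope."""
    return float(np.angle(np.sum(x[1:] * np.conj(x[:-1]))))

n = np.arange(256)
mag = np.exp(-((n - 128) / 40.0) ** 2)          # smooth magnitude profile
alpha = 0.03                                     # true first-order phase error
x = mag * np.exp(1j * (alpha * n + 0.7))         # ramp plus constant phase
est = linear_phase_slope(x)
corrected = x * np.exp(-1j * est * n)            # inverse-multiplication step
```

    After this step only the constant (zero-order) phase remains, which the paper then removes via the phase histogram.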

  1. 3D segmentations of neuronal nuclei from confocal microscope image stacks

    PubMed Central

    LaTorre, Antonio; Alonso-Nanclares, Lidia; Muelas, Santiago; Peña, José-María; DeFelipe, Javier

    2013-01-01

    In this paper, we present an algorithm to create 3D segmentations of neuronal cells from stacks of previously segmented 2D images. The idea behind this proposal is to provide a general method to reconstruct 3D structures from 2D stacks, regardless of how these 2D stacks have been obtained. The algorithm not only reuses the information obtained in the 2D segmentation, but also attempts to correct some typical mistakes made by the 2D segmentation algorithms (for example, under-segmentation of tightly-coupled clusters of cells). We have tested our algorithm in a real scenario—the segmentation of the neuronal nuclei in different layers of the rat cerebral cortex. Several representative images from different layers of the cerebral cortex have been considered and several 2D segmentation algorithms have been compared. Furthermore, the algorithm has also been compared with the traditional 3D Watershed algorithm, and the results obtained here show better performance in terms of correctly identified neuronal nuclei. PMID:24409123

  3. Performance of fusion algorithms for computer-aided detection and classification of mines in very shallow water obtained from testing in navy Fleet Battle Exercise-Hotel 2000

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Kerfoot, Ian

    2001-10-01

    The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
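
    The 2-of-3 voting rule can be sketched directly from the description above. The clustering radius, coordinates and algorithm labels below are hypothetical, not values from the exercise.

```python
import math

# Cluster contacts by Euclidean distance to running cluster centroids,
# then declare a target when a cluster holds detections from at least
# two of the three CAD/CAC algorithms.
def fuse_contacts(contacts, radius=5.0, votes_needed=2):
    """contacts: list of (algorithm_id, x, y); returns fused (x, y) targets."""
    clusters = []                       # each: {"pts": [...], "algos": set()}
    for algo, x, y in contacts:
        for c in clusters:
            cx = sum(p[0] for p in c["pts"]) / len(c["pts"])
            cy = sum(p[1] for p in c["pts"]) / len(c["pts"])
            if math.hypot(x - cx, y - cy) <= radius:
                c["pts"].append((x, y))
                c["algos"].add(algo)
                break
        else:
            clusters.append({"pts": [(x, y)], "algos": {algo}})
    return [(sum(p[0] for p in c["pts"]) / len(c["pts"]),
             sum(p[1] for p in c["pts"]) / len(c["pts"]))
            for c in clusters if len(c["algos"]) >= votes_needed]

# Algorithms A and B agree near (10, 10); C's lone contact is rejected.
targets = fuse_contacts([("A", 10.0, 10.0), ("B", 12.0, 11.0), ("C", 80.0, 5.0)])
```

    The fused target is reported at the cluster centroid, which is one simple choice; the fielded system may localize differently.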

  4. Correcting Satellite Image Derived Surface Model for Atmospheric Effects

    NASA Technical Reports Server (NTRS)

    Emery, William; Baldwin, Daniel

    1998-01-01

    This project was a continuation of the project entitled "Resolution Earth Surface Features from Repeat Moderate Resolution Satellite Imagery". In the previous study, a Bayesian Maximum Posterior Estimate (BMPE) algorithm was used to obtain a composite series of repeat imagery from the Advanced Very High Resolution Radiometer (AVHRR). The spatial resolution of the resulting composite was significantly greater than the 1 km resolution of the individual AVHRR images. The BMPE algorithm utilized a simple, no-atmosphere geometrical model for the short-wave radiation budget at the Earth's surface. A necessary assumption of the algorithm is that all non-geometrical parameters remain static over the compositing period. This assumption is of course violated by temporal variations in both the surface albedo and the atmospheric medium. The effect of the albedo variations is expected to be minimal since the variations are on a fairly long time scale compared to the compositing period; however, the atmospheric variability occurs on a relatively short time scale and can be expected to cause significant errors in the surface reconstruction. The current project proposed to incorporate an atmospheric correction into the BMPE algorithm for the purpose of investigating the effects of a variable atmosphere on the surface reconstructions. Once the atmospheric effects were determined, the investigation could be extended to include corrections for various cloud effects, including short-wave radiation transmitted through thin cirrus clouds. The original proposal was written for a three-year project, funded one year at a time. The first year of the project focused on developing an understanding of atmospheric corrections and choosing an appropriate correction model. Several models were considered and the list was narrowed to the two best suited. These were the 5S and 6S shortwave radiation models developed at NASA/GODDARD and tested extensively with data from the AVHRR instrument. 
Although the 6S model was a successor to the 5S and slightly more advanced, the 5S was selected because outputs from the individual components comprising the short-wave radiation budget were more easily separated. The separation was necessary since neither the 5S nor the 6S included geometrical corrections for terrain, a fundamental constituent of the BMPE algorithm. The 5S correction code was incorporated into the BMPE algorithm and many sensitivity studies were performed.

  5. 5 CFR 550.1203 - Eligibility.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... on active duty in the armed forces may elect to receive a lump-sum payment for accumulated and... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL... Department of Defense (DOD) must make a lump-sum payment to an employee who has unused annual leave that was...

  6. 5 CFR 550.1203 - Eligibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... on active duty in the armed forces may elect to receive a lump-sum payment for accumulated and... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL... Department of Defense (DOD) must make a lump-sum payment to an employee who has unused annual leave that was...

  7. 29 CFR 4050.8 - Automatic lump sum.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION PLAN TERMINATIONS MISSING PARTICIPANTS § 4050.8 Automatic lump sum. This section applies to a missing participant whose designated benefit was... PBGC pays the benefit. (2) Payee. Payment will be made— (i) To the missing participant, if located; (ii...

  8. Application of Biologically-Based Lumping To Investigate the Toxicological Interactions of a Complex Gasoline Mixture

    EPA Science Inventory

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these exp...

  9. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (MathWorks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. The proposed algorithm was then applied, and the results were compared with the original image (with metal artifact, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
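
    Of the six steps, the interpolation across the segmented metal trace is the easiest to sketch in isolation. The snippet below shows only that step on a toy sinogram (an assumption for illustration; reconstruction and the edge-preserving filter are omitted).

```python
import numpy as np

# Replace metal-trace bins in each projection row by linear interpolation
# across the surrounding, uncorrupted detector bins.
def inpaint_metal_trace(sinogram, metal_mask):
    """sinogram: (views, bins) array; metal_mask: boolean array, same shape."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and (~bad).any():
            out[v, bad] = np.interp(bins[bad], bins[~bad], out[v, ~bad])
    return out

# Toy sinogram: a linear ramp per view, corrupted by +5 where metal projects.
sino = np.tile(np.linspace(1.0, 2.0, 8), (3, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 3:5] = True
clean = inpaint_metal_trace(sino + 5.0 * mask, mask)
```

    Because the toy background is linear, interpolation recovers it exactly; on real projections this step leaves residual errors that the edge-preserving smoothing stage then addresses.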

  10. Turning and Radius Deviation Correction for a Hexapod Walking Robot Based on an Ant-Inspired Sensory Strategy

    PubMed Central

    Zhu, Yaguang; Guo, Tong; Liu, Qiong; Zhu, Qianwei; Zhao, Xiangmo; Jin, Bo

    2017-01-01

    In order to find a common approach to planning the turning of a bio-inspired hexapod robot, a locomotion strategy for turning and deviation correction of a hexapod walking robot is proposed based on the biological behavior and sensory strategy of ants. A series of experiments using ants was carried out in which the gait and the movement form of ants were studied. Taking the results of the ant experiments as inspiration and imitating the behavior of ants during turning, an extended turning algorithm based on arbitrary gaits was proposed. Furthermore, after observation of the radius adjustment of ants during turning, a radius correction algorithm based on the arbitrary gait of the hexapod robot was developed. The radius correction surface function was generated by fitting the correction data, which made it possible for the robot to move in an outdoor environment without a positioning system or an environment model. The proposed algorithm was verified on the hexapod robot experimental platform. Turning and radius correction experiments with several gaits were carried out. The results indicated that the robot could follow the ideal radius and maintain stability, and the proposed ant-inspired turning strategy could easily make free turns with an arbitrary gait. PMID:29168742
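
    The radius correction surface idea reduces to fitting a low-order surface over correction data. The sketch below fits a quadratic surface with least squares; the feature names and synthetic error field are assumptions, not the robot's calibration data.

```python
import numpy as np

# Fit err ~ f(g, r) with a full quadratic surface via least squares,
# where g is a gait parameter and r the commanded turning radius.
def fit_correction_surface(g, r, err):
    A = np.column_stack([np.ones_like(g), g, r, g * r, g**2, r**2])
    coeffs, *_ = np.linalg.lstsq(A, err, rcond=None)
    return coeffs

def correction(coeffs, g, r):
    return float(coeffs @ np.array([1.0, g, r, g * r, g**2, r**2]))

rng = np.random.default_rng(1)
g = rng.uniform(0.0, 1.0, 40)
r = rng.uniform(0.5, 2.0, 40)
err = 0.1 + 0.05 * g - 0.02 * r          # synthetic planar error field
coeffs = fit_correction_surface(g, r, err)
pred = correction(coeffs, 0.5, 1.0)      # query the fitted surface
```

    At run time the controller would evaluate the fitted surface and pre-compensate the commanded radius, which is what lets the robot turn without a positioning system.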

  12. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red-eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, making photos more pleasant for the observer, is an important task. A novel, efficient technique for automatic correction of red eyes aimed at photo printers is proposed. This algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several versions of the implementation are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion of automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.

  13. Characterization of an air jet haptic lump display.

    PubMed

    Bianchi, Matteo; Gwilliam, James C; Degirmenci, Alperen; Okamura, Allison M

    2011-01-01

    During manual palpation, clinicians rely on distributed tactile information to identify and localize hard lumps embedded in soft tissue. The development of tactile feedback systems to enhance palpation using robot-assisted minimally invasive surgery (RMIS) systems is challenging due to size and weight constraints, motivating a pneumatic actuation strategy. Recently, an air jet approach has been proposed for generating a lump percept. We use this technique to direct a thin stream of air through an aperture directly on the finger pad, which indents the skin in a hemispherical manner, producing a compelling lump percept. We hypothesize that the perceived parameters of the lump (e.g. size and stiffness) can be controlled by jointly adjusting air pressure and the aperture size through which air escapes. In this work, we investigate how these control variables interact to affect perceived pressure on the finger pad. First, we used a capacitive tactile sensor array to measure the effect of aperture size on output pressure, and found that peak output pressure increases with aperture size. Second, we performed a psychophysical experiment for each aperture size to determine the just noticeable difference (JND) of air pressure on the finger pad. Subject-averaged pressure JND values ranged from 19.4-24.7 kPa, with no statistical differences observed between aperture sizes. The aperture-pressure relationship and the pressure JND values will be fundamental for future display control.

  14. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    NASA Astrophysics Data System (ADS)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Badarinath, K. V. S.; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosols generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction that removes the effects of molecular and aerosol scattering. In the present study, we have implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and a simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false color composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover types, i.e., red soil and chick pea, groundnut and pigeon pea crops, were conducted to validate the algorithm, and a very good match was found between measured surface reflectance and atmospherically corrected reflectance in all spectral bands. 
Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with the aggregated ground measurements, which showed a very good correlation of 0.96 in all four spectral bands (green, red, NIR and SWIR). To quantify the accuracy of the proposed method in estimating surface reflectance, the root mean square error (RMSE) associated with the method was evaluated; the analysis of ground-measured versus retrieved AWiFS reflectance yielded small RMSE values for all four spectral bands. EOS TERRA/AQUA MODIS-derived AOD exhibited a very good correlation of 0.92, and these data sets provide an effective means of carrying out atmospheric corrections in an operational way.
Keywords: Atmospheric correction, 6S code, MODIS, Spectroradiometer, Sun-Photometer

  15. Network-level accident-mapping: Distance based pattern matching using artificial neural network.

    PubMed

    Deka, Lipika; Quddus, Mohammed

    2014-04-01

    The objective of an accident-mapping algorithm is to snap traffic accidents onto the correct road segments. Assigning accidents to the correct segments facilitates robust execution of some key analyses in accident research, including the identification of accident hot-spots, network-level risk mapping and segment-level accident risk modelling. Existing accident-mapping algorithms have some severe limitations: (i) they are not easily 'transferable', as the algorithms are specific to given accident datasets; (ii) they do not perform well in all road-network environments, such as areas of dense road network; and (iii) the methods used do not perform well in addressing the inaccuracies inherent in the recorded data across different types of road environment. The purpose of this paper is to develop a new accident-mapping algorithm based on the common variables observed in most accident databases (e.g. road name and type, direction of vehicle movement before the accident and recorded accident location). The challenges here are to: (i) develop a method that takes into account uncertainties inherent in the recorded traffic accident data and the underlying digital road network data, (ii) accurately determine the type and proportion of inaccuracies, and (iii) develop a robust algorithm that can be adapted to any accident dataset and road network of varying complexity. In order to overcome these challenges, a distance-based pattern-matching approach is used to identify the correct road segment. This is based on vectors containing feature values that are common to the accident data and the network data. Since each feature does not contribute equally towards the identification of the correct road segment, an ANN approach using a single-layer perceptron is used to assist in "learning" the relative importance of each feature in the distance calculation and hence the correct link identification. 
The performance of the developed algorithm was evaluated against a reference accident dataset from the UK, confirming that its accuracy is much better than that of other methods. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
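
    The distance-based matching step can be sketched as a weighted distance over feature differences, with per-feature weights standing in for the perceptron-learned importances. Feature names, values and weights below are hypothetical.

```python
import numpy as np

# Score candidate road segments by a weighted L1 distance between the
# accident's feature vector and each segment's; the lowest score wins.
def best_segment(accident_feats, segment_feats, weights):
    diffs = np.abs(segment_feats - accident_feats)  # (n_segments, n_features)
    scores = diffs @ weights                        # weighted L1 distance
    return int(np.argmin(scores))

accident = np.array([0.2, 1.0, 0.0])   # e.g. offset, road type, heading diff
segments = np.array([[0.1, 1.0, 0.1],
                     [0.9, 0.0, 0.2],
                     [0.3, 1.0, 0.9]])
weights = np.array([0.6, 0.3, 0.1])    # stand-in for learned importances
idx = best_segment(accident, segments, weights)
```

    In the paper the weights are learned from data rather than hand-set, which is what makes the method transferable across accident datasets.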

  16. Atmospheric correction using near-infrared bands for satellite ocean color data processing in the turbid western Pacific region.

    PubMed

    Wang, Menghua; Shi, Wei; Jiang, Lide

    2012-01-16

    A regional near-infrared (NIR) ocean normalized water-leaving radiance (nL(w)(λ)) model is proposed for atmospheric correction in ocean color data processing over the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard the South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have the shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in turbid ocean regions. Based on a regional empirical relationship between the NIR nL(w)(λ) and the diffuse attenuation coefficient at 490 nm (K(d)(490)), which is derived from long-term measurements with the Moderate-resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the newly proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed new atmospheric correction method provides an alternative for ocean color data processing with GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.

  17. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media have been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of the optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor for the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method proves especially useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential in applications such as high-resolution imaging in deep tissue.
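
    The interleaving and group-by-group optimization described above can be sketched as follows; note this is a toy: a random-mutation hill climb stands in for the genetic algorithm, and the phasor-sum intensity model is an idealized stand-in for the measured focus intensity:

```python
import cmath
import random

def interleaved_groups(n_segments, n_groups):
    """Segment i goes to group i % n_groups, so groups interleave across the SLM."""
    return [list(range(g, n_segments, n_groups)) for g in range(n_groups)]

def toy_intensity(phases):
    """Toy focus intensity: |sum of unit phasors|^2, maximal when all phases align."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) ** 2

def optimize_groups(phases, intensity, groups, steps=200, seed=0):
    """Optimize each interleaved group in turn (hill climb in place of GA);
    the final mask combines the correction phases of all groups."""
    rng = random.Random(seed)
    best = list(phases)
    for group in groups:
        for _ in range(steps):
            trial = list(best)
            trial[rng.choice(group)] = rng.uniform(0.0, 2.0 * cmath.pi)
            if intensity(trial) > intensity(best):
                best = trial
    return best

rng = random.Random(1)
start = [rng.uniform(0.0, 2.0 * cmath.pi) for _ in range(16)]
final = optimize_groups(start, toy_intensity, interleaved_groups(16, 4))
```
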

  18. DNA-based watermarks using the DNA-Crypt algorithm.

    PubMed

    Heider, Dominik; Barnekow, Angelika

    2007-05-29

    The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Even infrequent mutations may destroy the encrypted information, so an integrated fuzzy controller decides, on a set of heuristics based on three input dimensions, whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
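
    The least-significant-base principle can be illustrated with a toy encoder that rewrites the third (wobble) base of successive codons, hiding two watermark bits per codon under an arbitrary base-to-bits mapping. This sketches the hiding principle only, not the DNA-Crypt program itself (which adds encryption and the mutation-correction codes discussed above):

```python
BASE2BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS2BASE = {bits: base for base, bits in BASE2BITS.items()}

def embed_watermark(dna, bits):
    """Hide a bit string (length a multiple of 2) in the least significant
    (third) base of successive codons."""
    assert len(bits) % 2 == 0 and 3 * (len(bits) // 2) <= len(dna)
    seq = list(dna)
    for k in range(len(bits) // 2):
        seq[3 * k + 2] = BITS2BASE[bits[2 * k: 2 * k + 2]]
    return "".join(seq)

def extract_watermark(dna, n_bits):
    return "".join(BASE2BITS[dna[3 * k + 2]] for k in range(n_bits // 2))

# In the example below the second codon changes GCT -> GCC; both code for
# alanine, which is why the wobble position is the "least significant" base.
marked = embed_watermark("ATGGCTAAAGGT", "1001")
```
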

  19. DNA-based watermarks using the DNA-Crypt algorithm

    PubMed Central

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Even infrequent mutations may destroy the encrypted information, so an integrated fuzzy controller decides, on a set of heuristics based on three input dimensions, whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434

  20. A fast and pragmatic approach for scatter correction in flat-detector CT using elliptic modeling and iterative optimization

    NASA Astrophysics Data System (ADS)

    Meyer, Michael; Kalender, Willi A.; Kyriakou, Yiannis

    2010-01-01

    Scattered radiation is a major source of artifacts in flat detector computed tomography (FDCT) due to the increased irradiated volumes. We propose a fast projection-based algorithm for correction of scatter artifacts. The presented algorithm combines a convolution method to determine the spatial distribution of the scatter intensity distribution with an object-size-dependent scaling of the scatter intensity distributions using a priori information generated by Monte Carlo simulations. A projection-based (PBSE) and an image-based (IBSE) strategy for size estimation of the scanned object are presented. Both strategies provide good correction and comparable results; the faster PBSE strategy is recommended. Even with such a fast and simple algorithm that in the PBSE variant does not rely on reconstructed volumes or scatter measurements, it is possible to provide a reasonable scatter correction even for truncated scans. For both simulations and measurements, scatter artifacts were significantly reduced and the algorithm showed stable behavior in the z-direction. For simulated voxelized head, hip and thorax phantoms, a figure of merit Q of 0.82, 0.76 and 0.77 was reached, respectively (Q = 0 for uncorrected, Q = 1 for ideal). For a water phantom with 15 cm diameter, for example, a cupping reduction from 10.8% down to 2.1% was achieved. The performance of the correction method has limitations in the case of measurements using non-ideal detectors, intensity calibration, etc. An iterative approach to overcome most of these limitations was proposed. This approach is based on root finding of a cupping metric and may be useful for other scatter correction methods as well. By this optimization, cupping of the measured water phantom was further reduced down to 0.9%. The algorithm was evaluated on a commercial system including truncated and non-homogeneous clinically relevant objects.
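
    The iterative optimization can be pictured as one-dimensional root finding on a cupping metric: bisection adjusts the scatter scaling factor until a homogeneous phantom's profile comes out flat. A minimal sketch on synthetic numbers (the metric definition and the profiles are illustrative, not the paper's):

```python
def cupping(profile):
    """Edge mean minus centre mean; zero for a flat (cupping-free) profile."""
    n = len(profile)
    centre = profile[n // 3: 2 * n // 3]
    return (profile[0] + profile[-1]) / 2 - sum(centre) / len(centre)

def find_scatter_scale(measured, scatter_est, lo=0.0, hi=2.0, iters=60):
    """Bisect on the scatter scaling factor until the cupping metric vanishes."""
    metric = lambda s: cupping([m - s * e for m, e in zip(measured, scatter_est)])
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if metric(lo) * metric(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# Synthetic homogeneous phantom: flat truth plus edge-heavy scatter at scale 0.7.
scatter = [1.0, 0.8, 0.5, 0.3, 0.2, 0.3, 0.5, 0.8, 1.0]
measured = [1.0 + 0.7 * e for e in scatter]
scale = find_scatter_scale(measured, scatter)
```
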

  1. Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination

    NASA Technical Reports Server (NTRS)

    Ryne, Mark S.; Wang, Tseng-Chan

    1991-01-01

    An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
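
    The update portion can be sketched with NumPy: an SVD least-squares correction whose size is clamped by a bound, standing in for the a-priori-covariance- and user-bound-limited partial step described above (the simple norm clamp here is a simplification of the actual constraint):

```python
import numpy as np

def svd_partial_step(A, residual, bound):
    """SVD least-squares correction, scaled down if it exceeds `bound`."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    dx = Vt.T @ ((U.T @ residual) / s)
    norm = np.linalg.norm(dx)
    return dx * (bound / norm) if norm > bound else dx

def estimate(A, b, x0, bound, iters=50):
    """Iterate partial steps; in the linear case this reaches the full-rank
    least-squares solution, as the abstract notes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + svd_partial_step(A, b - A @ x, bound)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
x_hat = estimate(A, A @ x_true, x0=[0.0, 0.0], bound=0.5)
```
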

  2. Correction of partial volume effect in (18)F-FDG PET brain studies using coregistered MR volumes: voxel based analysis of tracer uptake in the white matter.

    PubMed

    Coello, Christopher; Willoch, Frode; Selnes, Per; Gjerstad, Leif; Fladby, Tormod; Skretting, Arne

    2013-05-15

    A voxel-based algorithm to correct for partial volume effect in PET brain volumes is presented. This method (named LoReAn) is based on MRI-based segmentation of anatomical regions and accurate measurement of the effective point spread function of the PET imaging process. The objective is to correct for the spill-out of activity from high-uptake anatomical structures (e.g. grey matter) into low-uptake anatomical structures (e.g. white matter) in order to quantify physiological uptake in the white matter. The new algorithm is presented and validated against the state-of-the-art region-based geometric transfer matrix (GTM) method with synthetic and clinical data. Using synthetic data, both bias and coefficient of variation were improved in the white matter region using LoReAn compared to GTM. An increased number of anatomical regions does not affect the bias (<5%), and misregistration affects the LoReAn and GTM algorithms equally. The LoReAn algorithm appears to be a simple and promising voxel-based algorithm for studying metabolism in white matter regions. Copyright © 2013 Elsevier Inc. All rights reserved.
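
    The GTM baseline against which LoReAn is validated is compact enough to sketch: observed regional means are modelled as a mixing matrix applied to the true means, and correction is a linear solve. The matrix values below are made up for illustration; in practice G[i, j] is the mean, over region i, of region j's mask blurred with the measured point spread function:

```python
import numpy as np

# G[i, j]: fraction of region j's activity observed in region i.
# Rows/columns ordered (grey matter, white matter); illustrative values.
G = np.array([[0.80, 0.10],
              [0.20, 0.90]])

def gtm_correct(observed_means, G):
    """Region-based GTM partial-volume correction: solve G t = observed."""
    return np.linalg.solve(G, observed_means)

true_uptake = np.array([4.0, 1.0])   # high grey-matter, low white-matter uptake
observed = G @ true_uptake           # spill-out depresses GM, inflates WM
recovered = gtm_correct(observed, G)
```
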

  3. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-01-01

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple RNU seriously affects the imaging quality, especially for small-target detection, and is difficult to eliminate using calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system. PMID:28481320

  4. SeaWiFS technical report series. Volume 13: Case studies for SeaWiFS calibration and validation, part 1

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Mcclain, Charles R.; Comiso, Josefino C.; Fraser, Robert S.; Firestone, James K.; Schieber, Brian D.; Yeh, Eueng-Nan; Arrigo, Kevin R.; Sullivan, Cornelius W.

    1994-01-01

    Although the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Calibration and Validation Program relies on the scientific community for the collection of bio-optical and atmospheric correction data as well as for algorithm development, it does have the responsibility for evaluating and comparing the algorithms and for ensuring that the algorithms are properly implemented within the SeaWiFS Data Processing System. This report consists of a series of sensitivity and algorithm (bio-optical, atmospheric correction, and quality control) studies based on Coastal Zone Color Scanner (CZCS) and historical ancillary data undertaken to assist in the development of SeaWiFS specific applications needed for the proper execution of that responsibility. The topics presented are as follows: (1) CZCS bio-optical algorithm comparison, (2) SeaWiFS ozone data analysis study, (3) SeaWiFS pressure and oxygen absorption study, (4) pixel-by-pixel pressure and ozone correction study for ocean color imagery, (5) CZCS overlapping scenes study, (6) a comparison of CZCS and in situ pigment concentrations in the Southern Ocean, (7) the generation of ancillary data climatologies, (8) CZCS sensor ringing mask comparison, and (9) sun glint flag sensitivity study.

  5. Automated interferometric synthetic aperture microscopy and computational adaptive optics for improved optical coherence tomography.

    PubMed

    Xu, Yang; Liu, Yuan-Zhi; Boppart, Stephen A; Carney, P Scott

    2016-03-10

    In this paper, we introduce an algorithm framework for the automation of interferometric synthetic aperture microscopy (ISAM). Under this framework, common processing steps such as dispersion correction, Fourier domain resampling, and computational adaptive optics aberration correction are carried out as metrics-assisted parameter search problems. We further present the results of this algorithm applied to phantom and biological tissue samples and compare with manually adjusted results. With the automated algorithm, near-optimal ISAM reconstruction can be achieved without manual adjustment. At the same time, the technical barrier for the nonexpert using ISAM imaging is also significantly lowered.

  6. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altube, Patricia; Bech, Joan; Argemí, Oriol

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.
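
    The circular-statistics idea can be sketched by folding velocities onto the Nyquist circle, so that aliased values are compared by angular rather than linear distance, and replacing an outlier with the circular mean of its neighbours. This is a toy single-gate version, not the published estimator (the threshold and neighbourhood handling are simplified):

```python
import cmath
import math

def circular_mean_velocity(velocities, v_ny):
    """Circular mean of velocities mapped onto the Nyquist circle."""
    z = sum(cmath.exp(1j * math.pi * v / v_ny) for v in velocities)
    return v_ny * cmath.phase(z) / math.pi

def correct_outlier(v, neighbours, v_ny, max_dev=0.25):
    """Replace v with the neighbourhood circular mean when its angular
    distance from that mean exceeds max_dev half-turns."""
    ref = circular_mean_velocity(neighbours, v_ny)
    dev = cmath.phase(cmath.exp(1j * math.pi * (v - ref) / v_ny))
    return ref if abs(dev) > max_dev * math.pi else v

neighbours = [5.0, 5.5, 4.8, 5.2]                       # m/s, consistent gates
fixed = correct_outlier(-14.0, neighbours, v_ny=16.0)   # dual-PRF outlier
```
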

  7. Correction of Dual-PRF Doppler Velocity Outliers in the Presence of Aliasing

    DOE PAGES

    Altube, Patricia; Bech, Joan; Argemí, Oriol; ...

    2017-07-18

    In Doppler weather radars, the presence of unfolding errors or outliers is a well-known quality issue for radial velocity fields estimated using the dual-pulse repetition frequency (PRF) technique. Postprocessing methods have been developed to correct dual-PRF outliers, but these need prior application of a dealiasing algorithm for an adequate correction. Our paper presents an alternative procedure based on circular statistics that corrects dual-PRF errors in the presence of extended Nyquist aliasing. The correction potential of the proposed method is quantitatively tested by means of velocity field simulations and is exemplified in the application to real cases, including severe storm events. The comparison with two other existing correction methods indicates an improved performance in the correction of clustered outliers. The technique we propose is well suited for real-time applications requiring high-quality Doppler radar velocity fields, such as wind shear and mesocyclone detection algorithms, or assimilation in numerical weather prediction models.

  8. Pitch-Learning Algorithm For Speech Encoders

    NASA Technical Reports Server (NTRS)

    Bhaskar, B. R. Udaya

    1988-01-01

    Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.
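
    A common form of such pitch-track correction compares each pitch-period estimate with a running median of its neighbours and repairs halving/doubling errors; the sketch below is a generic illustration of that idea, not the specific NASA algorithm:

```python
import statistics

def correct_pitch_track(periods, window=3, tol=0.25):
    """Fix isolated halving/doubling errors in a pitch-period sequence.

    Each estimate is compared with the median of its neighbours; if the raw
    value, its double, or its half lies within `tol` (relative) of that
    median, the first such candidate is kept, otherwise the median itself
    is substituted."""
    out = list(periods)
    for i, p in enumerate(out):
        lo, hi = max(0, i - window), min(len(out), i + window + 1)
        ref = statistics.median(out[lo:i] + out[i + 1:hi])
        for cand in (p, 2 * p, p / 2):
            if abs(cand - ref) / ref <= tol:
                out[i] = cand
                break
        else:
            out[i] = ref
    return out

track = [100, 102, 50, 101, 99, 200, 100]   # samples; 50 and 200 are octave errors
fixed = correct_pitch_track(track)
```
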

  9. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data: the static shift is detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The method is an automatic computer processing technique at essentially zero cost, avoiding additional field work and indoor processing, and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy, and all MT data from surface, seabed and underground stations can be used. The verification and application example of the 3D inversion algorithm is shown in Figure 1. As the comparison in Figure 1 shows, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for terrain inversion and is therefore useful for the study of continental shelves with continuous exploration of land, marine and underground data. The three-dimensional electrical model of the ore zone reflects the basic information of stratum, rock and structure. Although it cannot indicate the ore body position directly, important clues for prospecting are provided by the delineation of the diorite pluton uplift range. The test results show that high-quality data processing and an efficient inversion method for electromagnetic data are an important guarantee for porphyry ore exploration.
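
    The NLCG core that the improved inversion builds on is standard and can be sketched generically. The sketch below is Fletcher-Reeves nonlinear conjugate gradients with an Armijo backtracking line search applied to a toy objective; the actual MT inversion minimises a regularised data misfit with model-specific gradients, which this does not attempt to reproduce:

```python
import numpy as np

def nlcg_minimize(f, grad, x0, iters=200, tol=1e-10):
    """Fletcher-Reeves nonlinear conjugate gradients with Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if g @ d >= 0:        # safeguard: restart if d is not a descent direction
            d = -g
        t = 1.0               # backtracking (Armijo) line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        d = -g_new + ((g_new @ g_new) / (g @ g)) * d   # Fletcher-Reeves beta
        x, g = x_new, g_new
    return x

# Toy misfit with minimum at (3, -1), standing in for the regularised objective.
f = lambda x: (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])
x_min = nlcg_minimize(f, grad, [0.0, 0.0])
```
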

  10. Energy shadowing correction of ultrasonic pulse-echo records by digital signal processing

    NASA Technical Reports Server (NTRS)

    Kishonio, D.; Heyman, J. S.

    1985-01-01

    A numerical algorithm is described that enables the correction of energy shadowing during the ultrasonic testing of bulk materials. In the conventional method, an ultrasonic transducer transmits sound waves into a material that is immersed in water so that discontinuities such as defects can be revealed when the waves are reflected and then detected and displayed graphically. Since a defect that lies behind another defect is shadowed in that it receives less energy, the conventional method has a major drawback. The algorithm normalizes the energy of the incoming wave by measuring the energy of the waves reflected off the water/air interface. The algorithm is fast and simple enough to be adopted for real time applications in industry. Images of material defects with the shadowing corrections permit more quantitative interpretation of the material state.

  11. Lumped transmission line avalanche pulser

    DOEpatents

    Booth, R.

    1995-07-18

    A lumped linear avalanche transistor pulse generator utilizes stacked transistors in parallel within a stage and couples a plurality of said stages, in series with increasing zener diode limited voltages per stage and decreasing balanced capacitance load per stage to yield a high voltage, high and constant current, very short pulse. 8 figs.

  12. 45 CFR 158.241 - Form of rebate.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... method that was used for payment, such as credit card or direct debit. ... in the form of a premium credit, lump-sum check, or, if an enrollee paid the premium using a credit card or direct debit, by lump-sum reimbursement to the account used to pay the premium. (2) Any rebate...

  13. A Laboratory Simulation of Urban Runoff and the Potential for Hydrograph Prediction with Curve Numbers

    USDA-ARS?s Scientific Manuscript database

    Urban drainages are mosaics of pervious and impervious surfaces, and prediction of runoff hydrology with a lumped modeling approach using the NRCS curve number may be appropriate. However, the prognostic capability of such a lumped approach is complicated by routing and connectivity amongst infiltra...

  14. Lumped transmission line avalanche pulser

    DOEpatents

    Booth, Rex

    1995-01-01

    A lumped linear avalanche transistor pulse generator utilizes stacked transistors in parallel within a stage and couples a plurality of said stages, in series with increasing zener diode limited voltages per stage and decreasing balanced capacitance load per stage to yield a high voltage, high and constant current, very short pulse.

  15. Improved Algorithm For Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1989-01-01

    Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.

  16. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    NASA Technical Reports Server (NTRS)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large aperture telescope commonly features segment mirrors and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique and its alteration is currently being used for JWST.An Advanced Dispersed Fringe Sensing (ADFS) algorithm is recently developed to improve the performance and robustness of previous DFS algorithms with better accuracy and unique solution. The first part of the paper introduces the basic ideas and the essential features of the ADFS algorithm and presents the some algorithm sensitivity study results. The second part of the paper describes the full details of algorithm validation process through the advanced wavefront sensing and correction testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure the data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally the fiducial calibration using Range-Gate-Metrology technique is carried out and a <10nm or <1% algorithm accuracy is demonstrated.

  17. Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction were optimised for effective, good-quality motion correction and then compared with each other. The first (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupted projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
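
    The mutual information cost that performed best is easy to sketch: estimate the joint histogram of the measured and reprojected images, score their statistical dependence, and pick the motion parameter that maximises it. The integer-shift search below is a toy stand-in for the full reconstruction-reprojection loop:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information of two same-sized images via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def best_vertical_shift(reference, moving, max_shift=5):
    """Integer row shift of `moving` that maximises mutual information."""
    shifts = range(-max_shift, max_shift + 1)
    return max(shifts,
               key=lambda s: mutual_information(reference, np.roll(moving, s, axis=0)))

rng = np.random.default_rng(0)
reference = rng.random((32, 32))
moving = np.roll(reference, -3, axis=0)   # simulated 3-row motion
```
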

  18. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling.

    PubMed

    Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng

    2016-07-14

    Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and broader coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath.

  19. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling

    PubMed Central

    Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng

    2016-01-01

    Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and broader coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974

  20. Migration of dispersive GPR data

    USGS Publications Warehouse

    Powers, M.H.; Oden, C.P.

    2004-01-01

    Electrical conductivity and dielectric and magnetic relaxation phenomena cause electromagnetic propagation to be dispersive in earth materials. Both velocity and attenuation may vary with frequency, depending on the frequency content of the propagating energy and the nature of the relaxation phenomena. A minor amount of velocity dispersion is associated with high attenuation. For this reason, measuring effects of velocity dispersion in ground penetrating radar (GPR) data is difficult. With a dispersive forward model, GPR responses to propagation through materials with known frequency-dependent properties have been created. These responses are used as test data for migration algorithms that have been modified to handle specific aspects of dispersive media. When either Stolt or Gazdag migration methods are modified to correct for just velocity dispersion, the results are little changed from standard migration. For nondispersive propagating wavefield data, like deep seismic, ensuring correct phase summation in a migration algorithm is more important than correctly handling amplitude. However, the results of migrating model responses to dispersive media with modified algorithms indicate that, in this case, correcting for frequency-dependent amplitude loss has a much greater effect on the result than correcting for proper phase summation. A modified migration is only effective when it includes attenuation recovery, performing deconvolution and migration simultaneously.
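
    The attenuation-recovery step that the authors find dominant can be sketched as a gain-limited inverse filter: amplify each frequency by the inverse of its modelled loss, with a cap to keep noise bounded. A minimal sketch with a linear-in-frequency loss model (the constants are illustrative, and real dispersive migration applies such recovery per depth step inside the migration itself):

```python
import numpy as np

def attenuate(trace, dt, alpha_dist):
    """Apply frequency-proportional amplitude loss exp(-alpha_dist * f)."""
    f = np.fft.rfftfreq(len(trace), dt)
    return np.fft.irfft(np.fft.rfft(trace) * np.exp(-alpha_dist * f), len(trace))

def recover(trace, dt, alpha_dist, max_gain=1e3):
    """Inverse attenuation filter with a gain cap (the amplitude-recovery step)."""
    f = np.fft.rfftfreq(len(trace), dt)
    gain = np.minimum(np.exp(alpha_dist * f), max_gain)
    return np.fft.irfft(np.fft.rfft(trace) * gain, len(trace))

rng = np.random.default_rng(1)
trace = rng.standard_normal(64)
dt = 1e-3                                   # 1 ms sampling -> 500 Hz Nyquist
lossy = attenuate(trace, dt, alpha_dist=0.01)
restored = recover(lossy, dt, alpha_dist=0.01)
```
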

  1. Comparison of 3-D Multi-Lag Cross-Correlation and Speckle Brightness Aberration Correction Algorithms on Static and Moving Targets

    PubMed Central

    Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.

    2010-01-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503
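
    The cross-correlation half of the comparison estimates per-element arrival-time errors from the lag of the correlation peak between neighbouring channels. The single-lag sketch below illustrates that step on synthetic pulses; the actual multi-lag method averages estimates over several lags on speckle signals:

```python
import numpy as np

def estimate_delays(signals):
    """Per-element delay estimates (in samples), first element as reference:
    cross-correlate neighbouring channels, take the peak lag, accumulate."""
    delays = [0]
    for a, b in zip(signals, signals[1:]):
        xc = np.correlate(b, a, mode="full")
        delays.append(delays[-1] + int(np.argmax(xc)) - (len(a) - 1))
    return delays

# Three channels receiving the same pulse 0, 2, and 5 samples late.
pulse = np.hanning(10)
channels = []
for start in (20, 22, 25):
    ch = np.zeros(64)
    ch[start:start + 10] = pulse
    channels.append(ch)
```
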

  2. Comparison of 3-D multi-lag cross- correlation and speckle brightness aberration correction algorithms on static and moving targets.

    PubMed

    Ivancevich, Nikolas M; Dahl, Jeremy J; Smith, Stephen W

    2009-10-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively.
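    The multi-lag cross-correlation method estimates per-channel arrival-time errors from correlations between receive elements. A single-lag sketch (the published method combines several neighbor lags and the channel layout is 2-D for 3-D imaging; the names below are illustrative):

```python
import numpy as np

def estimate_aberration(channels, dt):
    """Estimate per-channel arrival-time errors by cross-correlating
    each channel with its nearest neighbor (lag-1 only)."""
    nchan, nsamp = channels.shape
    diffs = []
    for i in range(nchan - 1):
        c = np.correlate(channels[i + 1], channels[i], mode="full")
        lag = np.argmax(c) - (nsamp - 1)   # sample shift between neighbors
        diffs.append(lag * dt)
    delays = np.concatenate([[0.0], np.cumsum(diffs)])  # integrate pairwise shifts
    return delays - delays.mean()          # remove the bulk (steering) delay
```

    The recovered profile is applied as per-element transmit/receive delays; speckle-brightness methods instead search delays that maximize an image-quality factor.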

  3. Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.

    PubMed

    Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick

    2009-08-17

    In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS and constant-statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts but has a slightly longer convergence time. (c) 2009 Optical Society of America
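    A minimal sketch of the gated-LMS idea: per-pixel gain and offset are adapted toward a spatially smoothed "desired" image, and the update is simply skipped where the scene has not changed. The box-blur desired image and the gating threshold are illustrative assumptions; the published gating rule is more elaborate:

```python
import numpy as np

def gated_lms_nuc(frames, mu=0.05, gate_thresh=0.02):
    """Scene-based NUC sketch: LMS-adapted per-pixel gain/offset with
    updates gated off where temporal variation is lacking."""
    g = np.ones_like(frames[0])
    o = np.zeros_like(frames[0])
    prev = frames[0]
    for x in frames:
        y = g * x + o                          # corrected frame
        d = np.pad(y, 1, mode="edge")          # desired output: 3x3 box blur
        desired = sum(d[i:i + y.shape[0], j:j + y.shape[1]]
                      for i in range(3) for j in range(3)) / 9.0
        e = desired - y                        # LMS error
        gate = np.abs(x - prev) > gate_thresh  # update only where scene moved
        g += mu * e * x * gate
        o += mu * e * gate
        prev = x
    return g, o
```

    On a static scene the gate stays closed, so the correction coefficients are frozen instead of "burning in" the scene content, which is exactly the ghosting mechanism the gating suppresses.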

  4. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field on breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. The breast densities measured with the three methods were then compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fit for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, Pearson's r increased from 0.86 to 0.92 with the bias field correction.
Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
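    The first of the two algorithms, standard fuzzy c-means, can be sketched for a 1-D vector of voxel intensities (two clusters, e.g. fibroglandular vs. fatty tissue; the bias-field-aware CLIC extension adds an inhomogeneity term that is omitted here):

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=0):
    """Standard FCM on a 1-D intensity vector: alternate weighted-center
    and membership updates. Returns centers and fuzzy memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per voxel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)     # fuzziness-weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))        # closer center -> larger membership
        u /= u.sum(axis=0)
    return centers, u
```

    With an uncorrected bias field the intensity histogram of each tissue class is smeared, which is why FCM alone misclassifies voxels that CLIC's bias-field estimate rescues.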

  5. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    PubMed

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field on breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. The breast densities measured with the three methods were then compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fit for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, Pearson's r increased from 0.86 to 0.92 with the bias field correction.
The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.

  6. High resolution time interval meter

    DOEpatents

    Martin, A.D.

    1986-05-09

    Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped-constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiating START pulse is input to generate first high resolution data. A terminating STOP pulse is input to generate second high resolution data. Internal counters count at the low-frequency internal clock pulse rate between the START and STOP pulses. The first and second high resolution data are logically combined to directly provide high resolution data to one counter and to correct the count in the low resolution counter to obtain a high resolution time interval measurement.
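    A rough arithmetic sketch of how the coarse count and the delay-line fine data might combine. The clock period, tap resolution, and sign conventions below are illustrative assumptions for a generic coarse/fine interpolator, not values taken from the patent:

```python
def time_interval(coarse_counts, start_fine, stop_fine,
                  clock_ns=100.0, tap_ns=6.25):
    """Combine a low-frequency coarse count with delay-line tap codes
    locating START and STOP within their respective clock periods."""
    coarse = coarse_counts * clock_ns
    # START occurring late in its clock period shortens the measured
    # interval; STOP occurring late lengthens it.
    return coarse - start_fine * tap_ns + stop_fine * tap_ns
```

    The point of the scheme is that only the delay line needs fine resolution; the counters run at an ordinary, reliable clock rate.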

  7. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    PubMed

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
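    The validation compares analyzer readings against chemical reference values. A simple slope-plus-intercept correction of the kind such calibrations commonly use can be fit by least squares; the linear form is an assumption, since the study's exact model is not given in this abstract:

```python
import numpy as np

def fit_correction(nir, reference):
    """Least-squares linear correction mapping Near-IR analyzer
    readings (e.g. fat, g/dL) onto chemical reference values."""
    A = np.column_stack([nir, np.ones_like(nir)])
    (slope, intercept), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return slope, intercept

def apply_correction(nir, slope, intercept):
    """Apply the fitted correction to new analyzer readings."""
    return slope * nir + intercept
```

    Validating such a correction on an independent sample set, as done here, guards against the fit merely memorizing the calibration samples.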

  8. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    PubMed Central

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  9. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance, it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.

  10. Histogram-driven cupping correction (HDCC) in CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30×40 cm2 detector, a kilovoltage on-board imager (radiation therapy simulator), and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the need for pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. The method can also be combined with other cupping correction algorithms or used in a calibration mode.
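    The optimization loop can be sketched in miniature. The forward/backprojection-derived basis images are assumed given, and a one-coefficient grid search stands in for the paper's simplex minimizer; only the joint-entropy cost is faithful to the description above:

```python
import numpy as np

def joint_entropy(img, bins=32):
    """Joint entropy of an image and its gradient magnitude (HDCC cost)."""
    grad = np.hypot(*np.gradient(img))
    h, _, _ = np.histogram2d(img.ravel(), grad.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def hdcc_1term(img, basis_img, coeffs=np.linspace(-1.0, 1.0, 41)):
    """Pick the basis-image coefficient minimizing joint entropy of the
    corrected image (grid search; the paper uses simplex minimization)."""
    costs = [joint_entropy(img + c * basis_img) for c in coeffs]
    best = coeffs[int(np.argmin(costs))]
    return img + best * basis_img, best
```

    The intuition: cupping smears each tissue's intensity over many histogram bins, so removing it sharpens the histogram and lowers the joint entropy, with no calibration data needed.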

  11. A comparison of five partial volume correction methods for Tau and Amyloid PET imaging with [18F]THK5351 and [11C]PIB.

    PubMed

    Shidahara, Miho; Thomas, Benjamin A; Okamura, Nobuyuki; Ibaraki, Masanobu; Matsubara, Keisuke; Oyama, Senri; Ishikawa, Yoichi; Watanuki, Shoichi; Iwata, Ren; Furumoto, Shozo; Tashiro, Manabu; Yanai, Kazuhiko; Gonda, Kohsuke; Watabe, Hiroshi

    2017-08-01

    Many algorithms have been proposed to suppress the partial volume effect (PVE) in brain PET. However, each method behaves differently owing to its underlying assumptions and algorithm. The aim of this study was to investigate the differences among partial volume correction (PVC) methods for tau and amyloid PET studies. We investigated two of the most commonly used PVC methods, Müller-Gärtner (MG) and geometric transfer matrix (GTM), as well as three other methods, for clinical tau and amyloid PET imaging. One healthy control (HC) and one Alzheimer's disease (AD) PET study of both [18F]THK5351 and [11C]PIB were performed using an Eminence STARGATE scanner (Shimadzu Inc., Kyoto, Japan). All PET images were corrected for PVE by the MG, GTM, Labbé (LABBE), regional voxel-based (RBV), and iterative Yang (IY) methods, with segmented or parcellated anatomical information processed by FreeSurfer, derived from individual MR images. The PVC results of the five algorithms were compared with the uncorrected data. In regions of high uptake of [18F]THK5351 and [11C]PIB, different PVCs demonstrated different SUVRs. The degree of difference between PVE-uncorrected and -corrected data depends not only on the PVC algorithm but also on the tracer and the subject's condition. The presented PVC methods are straightforward to implement, but the corrected images require careful interpretation, as different methods result in different levels of recovery.
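    Of the methods compared, GTM admits a compact sketch: element (i, j) of the transfer matrix is the fraction of the PSF-smoothed ROI j that falls inside ROI i, and the true regional means solve a small linear system. The 1-D masks and blur operator in the test are illustrative stand-ins for 3-D ROIs and the scanner PSF:

```python
import numpy as np

def gtm_correct(observed_means, rois, psf_blur):
    """Geometric transfer matrix PVC: build GTM from PSF-smoothed ROI
    masks, then solve GTM @ true_means = observed_means."""
    n = len(rois)
    gtm = np.zeros((n, n))
    for j, roi_j in enumerate(rois):
        spread = psf_blur(roi_j.astype(float))   # where ROI j's signal lands
        for i, roi_i in enumerate(rois):
            gtm[i, j] = spread[roi_i].mean()     # fraction recovered in ROI i
    return np.linalg.solve(gtm, observed_means)
```

    GTM yields only regional means; voxelwise methods such as MG, RBV, and IY produce corrected images instead, which is one source of the differing recovery levels noted above.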

  12. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low-data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
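    The ordered-subsets acceleration mentioned above can be sketched for a generic emission-type model y ≈ Ax. This is the textbook OS-EM update (each sub-iteration uses only a subset of measurement rows), not the authors' alternating-minimization algorithm:

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_iter=5):
    """Ordered-subsets EM sketch: multiplicative updates from interleaved
    row subsets trade guaranteed monotone convergence for speed."""
    m, n = A.shape
    x = np.ones(n)                                   # positive initial image
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / np.maximum(As @ x, 1e-12)   # measured / predicted
            sens = np.maximum(As.T @ np.ones(len(idx)), 1e-12)
            x *= (As.T @ ratio) / sens               # subset sensitivity norm.
    return x
```

    Each pass over all subsets costs roughly one full EM iteration but applies n_subsets updates, which is the source of the severalfold speedups reported for such schemes.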

  13. Energy-state formulation of lumped volume dynamic equations with application to a simplified free piston Stirling engine

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1979-01-01

    Lumped volume dynamic equations are derived using an energy state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.

  14. Energy-state formulation of lumped volume dynamic equations with application to a simplified free piston Stirling engine

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1979-01-01

    Lumped volume dynamic equations are derived using an energy-state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is also formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free-piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.
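    For a single lumped element, the recipe above (kinetic and potential energy functions plus a Rayleigh dissipation function fed into Lagrange's equation) reduces to a familiar second-order ODE: T = m v²/2, V = k x²/2, R = c v²/2 give m x'' = -k x - c x'. A numerical sketch with illustrative parameters (a mass-spring-damper analogue, not the Stirling engine model itself):

```python
import numpy as np

def simulate_lumped(m=1.0, k=4.0, c=0.2, x0=1.0, v0=0.0, dt=1e-3, steps=5000):
    """Integrate m x'' = -k x - c x', the equation of motion produced by
    Lagrange's equation with a Rayleigh dissipation term.
    Semi-implicit Euler keeps the oscillation stable."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        a = (-k * x - c * v) / m   # Lagrange's equation with dissipation
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)
```

    The Rayleigh term is what lets a purely energy-based formulation capture losses; without it the lumped volume would oscillate forever.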

  15. Dark lump excitations in superfluid Fermi gases

    NASA Astrophysics Data System (ADS)

    Xu, Yan-Xia; Duan, Wen-Shan

    2012-11-01

    We study the linear and nonlinear properties of two-dimensional matter-wave pulses in disk-shaped superfluid Fermi gases. A Kadomtsev-Petviashvili I (KPI) solitary wave has been realized for superfluid Fermi gases in the limiting cases of the Bardeen-Cooper-Schrieffer (BCS) regime, the Bose-Einstein condensate (BEC) regime, and the unitarity regime. A one-lump solution as well as one-line soliton solutions for the KPI equation are obtained, and two-line soliton solutions with the same amplitude are also studied in the limiting cases. The dependence of the lump propagation velocity and the sound speed of two-dimensional superfluid Fermi gases on the interaction parameter is investigated for the limiting cases of BEC and unitarity.

  16. Sensitivity of Lumped Constraints Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.

    1999-01-01

    Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.
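    The Kreisselmeier-Steinhauser lump used above can be written directly. The KS function aggregates many constraint values g_i ≤ 0 into one smooth, conservative upper bound on max(g); the shifted form below is the standard numerically stable evaluation:

```python
import numpy as np

def ks_lump(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of constraint values g:
    KS(g) = max(g) + ln(sum exp(rho*(g - max(g)))) / rho.
    Always >= max(g), and -> max(g) as rho -> infinity."""
    gmax = np.max(g)
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho
```

    Because one KS value replaces many constraints, the adjoint method needs only one adjoint solve per lump instead of one per constraint, which is exactly the efficiency argument made in the abstract.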

  17. Algorithms in Learning, Teaching, and Instructional Design. Studies in Systematic Instruction and Training Technical Report 51201.

    ERIC Educational Resources Information Center

    Gerlach, Vernon S.; And Others

    An algorithm is defined here as an unambiguous procedure which will always produce the correct result when applied to any problem of a given class of problems. This paper gives an extended discussion of the definition of an algorithm. It also explores in detail the elements of an algorithm, the representation of algorithms in standard prose, flow…

  18. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm is proposed for evaluating the integral line intensity used to infer the correct temperature of a hot zone in combustion diagnostics by diode laser absorption spectroscopy. The algorithm is based not on fitting the baseline (BL) but on expanding the experimental and simulated spectra in a series of orthogonal polynomials, subtracting the first three components of the expansion from both the experimental and simulated spectra, and fitting the spectra thus modified. The algorithm is tested in a numerical experiment by simulating absorption spectra using a spectroscopic database, with added white noise and a parabolic BL. The spectra thus constructed are treated as experimental in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulation of the experimental data. Both spectra were then expanded in the series of orthogonal polynomials, and the first components were subtracted from each. The correct integral line intensities, and hence a correct temperature evaluation, were obtained by fitting the modified experimental and simulated spectra. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and the number of subtracted components (the first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm-1. The algorithm thus allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
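    The core subtraction step can be sketched with Legendre polynomials as the orthogonal set (the abstract does not name the family used, so Legendre is an assumption). Applying the same projection to both experimental and simulated spectra removes the slowly varying baseline from each before fitting:

```python
import numpy as np
from numpy.polynomial import legendre

def strip_low_orders(spectrum, n_remove=3):
    """Expand a spectrum in Legendre polynomials on [-1, 1] and subtract
    the first n_remove components (the baseline's slow part)."""
    x = np.linspace(-1, 1, spectrum.size)
    coef = legendre.legfit(x, spectrum, deg=n_remove - 1)
    return spectrum - legendre.legval(x, coef)
```

    A narrow absorption line projects only weakly onto the low-order polynomials, so its integral intensity largely survives the subtraction while the baseline does not.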

  19. Fast conjugate phase image reconstruction based on a Chebyshev approximation to correct for B0 field inhomogeneity and concomitant gradients.

    PubMed

    Chen, Weitian; Sica, Christopher T; Meyer, Craig H

    2008-11-01

    Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
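    The Chebyshev step can be illustrated in one dimension: the phase factor exp(i·ω·t) over the off-resonance range is fit by a low-order Chebyshev polynomial in ω, so conjugate phase reconstruction can replace per-pixel phase evaluation with a few coefficient-weighted basis images. The interval, readout time, and order below are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import chebyshev

def chebyshev_phase(omega_range, t, order=8):
    """Fit exp(i*omega*t) over [lo, hi] rad/s by a Chebyshev polynomial
    in the rescaled variable s in [-1, 1]; returns the coefficients."""
    lo, hi = omega_range
    nodes = chebyshev.chebpts1(4 * order)              # sample points in (-1, 1)
    omega = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)  # map onto [lo, hi]
    return chebyshev.chebfit(nodes, np.exp(1j * omega * t), order)

def eval_phase(coef, omega, omega_range):
    """Evaluate the fitted phase factor at an arbitrary off-resonance."""
    lo, hi = omega_range
    s = (2 * omega - (lo + hi)) / (hi - lo)
    return chebyshev.chebval(s, coef)
```

    Because the polynomial order needed is small, the cost scales with a handful of FFT-sized operations rather than with the number of distinct off-resonance values in the field map.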

  20. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no intensity characteristic specific to bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
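    Metrics of the true-positive / false-negative kind described above can be computed directly from binary masks. A per-frame sketch (rate names are the usual conventions, not necessarily the framework's exact labels):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compare a binary bone segmentation against manual ground truth:
    fraction of bone correctly segmented (tpr), fraction of boneless
    region correctly left unsegmented (tnr), and missed bone (fnr)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()     # bone found
    fn = np.logical_and(~pred, truth).sum()    # bone missed
    tn = np.logical_and(~pred, ~truth).sum()   # boneless left alone
    bone = truth.sum()
    boneless = (~truth).sum()
    return {"tpr": tp / bone, "tnr": tn / boneless, "fnr": fn / bone}
```

    Evaluating these per frame along the volume, then summarizing with a mean and standard deviation, gives exactly the per-slice profile the framework uses to localize weak spots in an acquisition.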

  1. Characterization of the Photon Counting CHASE Jr. Chip Built in a 40-nm CMOS Process With a Charge Sharing Correction Algorithm Using a Collimated X-Ray Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krzyżanowska, A.; Deptuch, G. W.; Maj, P.

    This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a CMOS 40-nm process, operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates charge-sharing-related uncertainties, namely, the dependence of the number of registered photons on the discriminator's threshold set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely, an offset correction for two discriminators independently, two-stage gain correction, and different operation modes of the digital blocks. The results of tests of C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to the 3.5-μm-wide pencil beam of 8-keV photons of synchrotron radiation. We studied how sensitive the algorithm performance is to the chip settings, as well as to the uniformity of parameters of the analog front-end blocks. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector between readout channels and retrieving the actual photon energy.

  2. Pentacam Scheimpflug quantitative imaging of the crystalline lens and intraocular lens.

    PubMed

    Rosales, Patricia; Marcos, Susana

    2009-05-01

    To implement geometrical and optical distortion correction methods for anterior segment Scheimpflug images obtained with a commercially available system (Pentacam, Oculus Optikgeräte GmbH). Ray tracing algorithms were implemented to obtain corrected ocular surface geometry from the original images captured by the Pentacam's CCD camera. As details of the optical layout were not fully provided by the manufacturer, an iterative procedure (based on imaging of calibrated spheres) was developed to estimate the camera lens specifications. The correction procedure was tested on Scheimpflug images of a physical water-cell model eye (with a polymethylmethacrylate cornea and a commercial IOL of known dimensions) and of a normal human eye previously measured with a Scheimpflug camera corrected for optical and geometrical distortion (Topcon SL-45 [Topcon Medical Systems Inc] from the Vrije University, Amsterdam, Holland). Uncorrected Scheimpflug images show flatter surfaces and thinner lenses than in reality. The application of geometrical and optical distortion correction algorithms improves the accuracy of the estimated anterior lens radii of curvature by 30% to 40% and of the estimated posterior lens radii by 50% to 100%. The average error in the retrieved radii was 0.37 and 0.46 mm for the anterior and posterior lens radii of curvature, respectively, and 0.048 mm for lens thickness. The Pentacam Scheimpflug system can be used to obtain quantitative information on the geometry of the crystalline lens, provided that geometrical and optical distortion correction algorithms are applied, within the accuracy of state-of-the-art phakometry and biometry. The techniques could improve with exact knowledge of the technical specifications of the instrument, improved edge detection algorithms, consideration of aspheric and non-rotationally symmetrical surfaces, and introduction of a crystalline lens gradient index.

  3. Seasonal and Inter-Annual Patterns of Chlorophyll and Phytoplankton Community Structure in Monterey Bay, CA Derived from AVIRIS Data During the 2013-2015 HyspIRI Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.

    2016-02-01

    There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, track energy flow through ecosystems, and identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable the use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. Consequently, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. The coastal marine environment has special atmospheric correction needs due to error introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals to estimate chlorophyll (OC3) and phytoplankton functional type (PHYDOTax) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons in 2013 and 2014. These two periods are dominated by either diatom blooms or red tides. Results to be presented include chlorophyll and phytoplankton community structure and in-water validation data for these dates during the two seasons.

  4. A rapid algorithm for realistic human reaching and its use in a virtual reality system

    NASA Technical Reports Server (NTRS)

    Aldridge, Ann; Pandya, Abhilash; Goldsby, Michael; Maida, James

    1994-01-01

    The Graphics Analysis Facility (GRAF) at JSC has developed a rapid algorithm for computing realistic human reaching. The algorithm was applied to GRAF's anthropometrically correct human model and used in a 3D computer graphics system and a virtual reality system. The nature of the algorithm and its uses are discussed.

  5. Chlorophyll-a concentration estimation with three bio-optical algorithms: correction for the low concentration range for the Yiam Reservoir, Korea

    USDA-ARS?s Scientific Manuscript database

    Bio-optical algorithms have been applied to monitor water quality in surface water systems. Empirical algorithms, such as Ritchie (2008), Gons (2008), and Gilerson (2010), have been applied to estimate the chlorophyll-a (chl-a) concentrations. However, the performance of each algorithm severely degr...

  6. 24 CFR 84.82 - Provisions applicable only to lump sum grants.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Use of Lump Sum Grants § 84.82... to performance and unit cost data. (4) Where HUD guarantees or insures the repayment of money... acceptable sureties, as prescribed in 31 CFR part 223, “Surety Companies Doing Business with the United...

  7. 24 CFR 84.82 - Provisions applicable only to lump sum grants.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Use of Lump Sum Grants § 84.82... to performance and unit cost data. (4) Where HUD guarantees or insures the repayment of money... acceptable sureties, as prescribed in 31 CFR part 223, “Surety Companies Doing Business with the United...

  8. 24 CFR 84.82 - Provisions applicable only to lump sum grants.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Use of Lump Sum Grants § 84.82... to performance and unit cost data. (4) Where HUD guarantees or insures the repayment of money... acceptable sureties, as prescribed in 31 CFR part 223, “Surety Companies Doing Business with the United...

  9. 24 CFR 570.513 - Lump sum drawdown for financing of property rehabilitation activities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... recipient shall execute a written agreement with one or more private financial institutions for the..., the anticipated level of rehabilitation activities by the financial institution, the rate of interest and other benefits to be provided by the financial institution in return for the lump sum deposit, and...

  10. 5 CFR 831.2005 - Designation of beneficiary for lump-sum payment.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Designation of beneficiary for lump-sum payment. 831.2005 Section 831.2005 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED.... (e) A change of beneficiary may be made at any time and without the knowledge or consent of the...

  11. 5 CFR 831.2005 - Designation of beneficiary for lump-sum payment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Designation of beneficiary for lump-sum payment. 831.2005 Section 831.2005 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED.... (e) A change of beneficiary may be made at any time and without the knowledge or consent of the...

  12. 5 CFR 831.2005 - Designation of beneficiary for lump-sum payment.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Designation of beneficiary for lump-sum payment. 831.2005 Section 831.2005 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED.... (e) A change of beneficiary may be made at any time and without the knowledge or consent of the...

  13. 5 CFR 831.2005 - Designation of beneficiary for lump-sum payment.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Designation of beneficiary for lump-sum payment. 831.2005 Section 831.2005 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED.... (e) A change of beneficiary may be made at any time and without the knowledge or consent of the...

  14. 20 CFR 217.10 - Application filed after death.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Application filed after death. 217.10 Section... APPLICATION FOR ANNUITY OR LUMP SUM Applications § 217.10 Application filed after death. (a) A survivor... expenses dies before applying for the lump-sum death payment under part 234 of this chapter. The...

  15. 5 CFR 550.1205 - Calculating a lump-sum payment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... rate employee for all regularly scheduled periods of night shift duty covered by the unused annual... day and night shifts, the night differential is payable for that portion of the lump-sum period that would have occurred when the employee was scheduled to work night shifts. (ii) Premium pay under 5 U.S.C...

  16. 26 CFR 1.411(a)(13)-1 - Statutory hybrid plans.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... another benefit amount) and at least one of those formulas is a statutory hybrid benefit formula, the... certain statutory hybrid plans that determine benefits under a lump sum-based benefit formula. Paragraph... current balance or current value under a lump sum-based benefit formula. Pursuant to section 411(a)(13)(A...

  17. 26 CFR 1.411(a)(13)-1 - Statutory hybrid plans.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... another benefit amount) and at least one of those formulas is a statutory hybrid benefit formula, the... certain statutory hybrid plans that determine benefits under a lump sum-based benefit formula. Paragraph... current balance or current value under a lump sum-based benefit formula. Pursuant to section 411(a)(13)(A...

  18. 26 CFR 1.411(a)(13)-1 - Statutory hybrid plans.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... another benefit amount) and at least one of those formulas is a statutory hybrid benefit formula, the... certain statutory hybrid plans that determine benefits under a lump sum-based benefit formula. Paragraph... current balance or current value under a lump sum-based benefit formula. Pursuant to section 411(a)(13)(A...

  19. 26 CFR 1.411(a)(13)-1 - Statutory hybrid plans.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... another benefit amount) and at least one of those formulas is a statutory hybrid benefit formula, the... certain statutory hybrid plans that determine benefits under a lump sum-based benefit formula. Paragraph... current balance or current value under a lump sum-based benefit formula. Pursuant to section 411(a)(13)(A...

  20. 20 CFR 10.422 - May compensation payments be issued in a lump sum?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-sum payments for wage-loss benefits, OWCP will not exercise further discretion in the matter. This... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false May compensation payments be issued in a lump sum? 10.422 Section 10.422 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS, DEPARTMENT OF...

  1. 5 CFR 831.2005 - Designation of beneficiary for lump-sum payment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Designation of beneficiary for lump-sum payment. 831.2005 Section 831.2005 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED.... (e) A change of beneficiary may be made at any time and without the knowledge or consent of the...

  2. 29 CFR 1450.12 - Collection in installments.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the debtor is financially unable to pay the indebtedness in one lump sum, payment may be accepted in... unable to pay the debt in one lump sum. If FMCS agrees to accept payment in regular installments it will... the debtor's ability to pay. If possible, the installment payments should be sufficient in size and...

  3. 29 CFR 1450.12 - Collection in installments.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the debtor is financially unable to pay the indebtedness in one lump sum, payment may be accepted in... unable to pay the debt in one lump sum. If FMCS agrees to accept payment in regular installments it will... the debtor's ability to pay. If possible, the installment payments should be sufficient in size and...

  4. 29 CFR 1450.12 - Collection in installments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the debtor is financially unable to pay the indebtedness in one lump sum, payment may be accepted in... unable to pay the debt in one lump sum. If FMCS agrees to accept payment in regular installments it will... the debtor's ability to pay. If possible, the installment payments should be sufficient in size and...

  5. 29 CFR 1450.12 - Collection in installments.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the debtor is financially unable to pay the indebtedness in one lump sum, payment may be accepted in... unable to pay the debt in one lump sum. If FMCS agrees to accept payment in regular installments it will... the debtor's ability to pay. If possible, the installment payments should be sufficient in size and...

  6. 20 CFR 217.10 - Application filed after death.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Application filed after death. 217.10 Section... APPLICATION FOR ANNUITY OR LUMP SUM Applications § 217.10 Application filed after death. (a) A survivor... expenses dies before applying for the lump-sum death payment under part 234 of this chapter. The...

  7. 20 CFR 217.10 - Application filed after death.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Application filed after death. 217.10 Section... APPLICATION FOR ANNUITY OR LUMP SUM Applications § 217.10 Application filed after death. (a) A survivor... expenses dies before applying for the lump-sum death payment under part 234 of this chapter. The...

  8. 20 CFR 217.10 - Application filed after death.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Application filed after death. 217.10 Section... APPLICATION FOR ANNUITY OR LUMP SUM Applications § 217.10 Application filed after death. (a) A survivor... expenses dies before applying for the lump-sum death payment under part 234 of this chapter. The...

  9. 20 CFR 217.10 - Application filed after death.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Application filed after death. 217.10 Section... APPLICATION FOR ANNUITY OR LUMP SUM Applications § 217.10 Application filed after death. (a) A survivor... expenses dies before applying for the lump-sum death payment under part 234 of this chapter. The...

  10. A lumped-circuit model for the radiation impedance of a circular piston in a rigid baffle.

    PubMed

    Bozkurt, Ayhan

    2008-09-01

    The radiation impedance of a piston transducer mounted in a rigid baffle has been widely addressed in the literature. The real and imaginary parts of the impedance are described by the first order Bessel and Struve functions, respectively. Although there are power series expansions for both functions, the analytic formulation of a lumped circuit is not trivial. In this paper, we present an empirical approach to the derivation of a lumped-circuit model for the radiation impedance expression, based on observations on the near-field behavior of stored kinetic and elastic energy. The field analysis is carried out using a finite element method model of the piston and surrounding fluid medium. We show that fluctuations in the real and imaginary components of the impedance can be modeled by series and shunt tank circuits, each of which shape a certain section of the impedance curve. Because the model is composed of lumped-circuit elements, it can be used in circuit simulators. Consequently, the proposed model is useful for the analysis of transducer front-end circuits.
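The Bessel/Struve form of the normalized radiation impedance mentioned above is standard and easy to evaluate directly with SciPy; a minimal sketch (the function name and normalization by the piston area impedance rho*c*pi*a^2 are ours):

```python
import numpy as np
from scipy.special import j1, struve

def piston_radiation_impedance(ka):
    """Normalized radiation impedance Z / (rho * c * pi * a^2) of a circular
    piston of radius a in a rigid baffle, as a function of ka.
    Real part uses the first-order Bessel function J1, imaginary part the
    first-order Struve function H1."""
    x = 2.0 * ka
    R = 1.0 - 2.0 * j1(x) / x       # radiation resistance (real part)
    X = 2.0 * struve(1, x) / x      # radiation reactance (imaginary part)
    return R, X
```

In the high-frequency limit (large ka) the resistance approaches 1, i.e., the piston radiates into the plane-wave impedance of the medium, while at low ka the resistance falls off as (ka)^2 / 2; the lumped series/shunt tank circuits of the paper are fitted to shape these curves.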

  11. Multi-frame knowledge based text enhancement for mobile phone captured videos

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-02-01

    In this study, we explore automated text recognition and enhancement using mobile phone captured videos of store receipts. We propose a method that combines Optical Character Recognition (OCR) with our proposed Row Based Multiple Frame Integration (RB-MFI) and Knowledge Based Correction (KBC) algorithms. In this method, the trained OCR engine is first used for recognition; then, the RB-MFI algorithm is applied to the output of the OCR. The RB-MFI algorithm determines and combines the most accurate rows of the text outputs extracted by OCR from multiple frames of the video. After RB-MFI, the KBC algorithm is applied to these rows to correct erroneous characters. Results of the experiments show that the proposed video-based approach, which includes the RB-MFI and KBC algorithms, increases the word recognition rate to 95% and the character recognition rate to 98%.
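The abstract does not give RB-MFI's exact row-scoring rule; as a hedged sketch, the row-combination idea can be illustrated by picking, for each row index, the highest-confidence OCR output across frames (the function name and the per-row confidence field are hypothetical stand-ins for the paper's scoring):

```python
def combine_rows(frames):
    """Row-level integration across video frames, illustrating the RB-MFI idea.

    frames: list of per-frame OCR outputs, each a list of
            (row_text, confidence) tuples in reading order.
    Returns one text line per row, taking the most confident candidate
    among all frames that contain that row."""
    n_rows = max(len(f) for f in frames)
    combined = []
    for i in range(n_rows):
        # Gather the i-th row from every frame that has one
        candidates = [f[i] for f in frames if i < len(f)]
        text, _conf = max(candidates, key=lambda rc: rc[1])
        combined.append(text)
    return combined
```

A knowledge-based correction stage (the paper's KBC) would then post-process these rows, e.g., matching tokens against a receipt-domain lexicon.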

  12. Infrared traffic image enhancement algorithm based on dark channel prior and gamma correction

    NASA Astrophysics Data System (ADS)

    Zheng, Lintao; Shi, Hengliang; Gu, Ming

    2017-07-01

    Infrared traffic images acquired by intelligent traffic surveillance equipment have low contrast, little hierarchical difference in perceived detail, and a blurred visual effect. Infrared traffic image enhancement is therefore an indispensable step in nearly all infrared-imaging-based traffic engineering applications. In this paper, we propose an infrared traffic image enhancement algorithm based on the dark channel prior and gamma correction. The dark channel prior, well known as an image dehazing method, is here applied to infrared image enhancement for the first time. In the proposed algorithm, the original degraded infrared traffic image is first transformed with the dark channel prior to produce an initial enhanced result. A further adjustment based on the gamma curve is then applied because the initial enhanced result has low brightness. Comprehensive validation experiments reveal that the proposed algorithm outperforms the current state-of-the-art algorithms.
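The abstract names the two stages but not the exact pipeline; a minimal single-channel sketch of dark-channel-prior dehazing followed by a gamma curve might look as follows (patch size, omega, gamma, and the simplistic atmospheric-light estimate are all assumptions, not values from the paper):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def enhance_infrared(img, patch=15, omega=0.95, gamma=0.6):
    """Sketch of dark-channel-prior enhancement plus gamma correction.
    img: float array in [0, 1], shape (H, W), single-channel infrared."""
    # Dark channel of a single-channel image: local minimum over a patch
    dark = minimum_filter(img, size=patch)
    # Crude atmospheric-light estimate: intensity at the brightest dark-channel pixel
    A = img.flat[np.argmax(dark)]
    # Transmission map, clipped away from zero to avoid over-amplification
    t = np.clip(1.0 - omega * dark / max(A, 1e-6), 0.1, 1.0)
    # Recover scene radiance via the haze imaging model I = J*t + A*(1 - t)
    J = (img - A) / t + A
    # Gamma curve lifts the low brightness of the dehazed result
    return np.clip(J, 0.0, 1.0) ** gamma
```

With gamma < 1 the curve brightens mid-tones, which is why it pairs naturally with the darker output of the dehazing step.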

  13. Formal verification of a fault tolerant clock synchronization algorithm

    NASA Technical Reports Server (NTRS)

    Rushby, John; Vonhenke, Frieder

    1989-01-01

    A formal specification and mechanically assisted verification of the interactive convergence clock synchronization algorithm of Lamport and Melliar-Smith is described. Several technical flaws in the analysis given by Lamport and Melliar-Smith were discovered, even though their presentation is unusually precise and detailed. It seems that these flaws were not detected by informal peer scrutiny. The flaws are discussed and a revised presentation of the analysis is given that not only corrects the flaws but is also more precise and easier to follow. Some of the corrections to the flaws require slight modifications to the original assumptions underlying the algorithm and to the constraints on its parameters, and thus change the external specifications of the algorithm. The formal analysis of the interactive convergence clock synchronization algorithm was performed using the Enhanced Hierarchical Development Methodology (EHDM) formal specification and verification environment. This application of EHDM provides a demonstration of some of the capabilities of the system.

  14. Verification of Numerical Programs: From Real Numbers to Floating Point Numbers

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec

    2013-01-01

    Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Program Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.

  15. Sculling Compensation Algorithm for SINS Based on Two-Time Scale Perturbation Model of Inertial Measurements

    PubMed Central

    Wang, Lingling; Fu, Li

    2018-01-01

    In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation system (SINS) using angular rate and specific force measurements as inputs is proposed in this paper. First, the sculling error formula in incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived and a gravitational search optimization method is used to determine the parameters in the two-time scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, which is different from the conventional algorithms simulated in a pure sculling circumstance. A series of test results demonstrate that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during velocity update with the advantage of less computation load compared with conventional algorithms. PMID:29346323

  16. Motion artifact detection and correction in functional near-infrared spectroscopy: a new hybrid method based on spline interpolation method and Savitzky-Golay filtering.

    PubMed

    Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A

    2018-01-01

    Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods, such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error ([Formula: see text]), Pearson's correlation ([Formula: see text]), and the area under the receiver operator characteristic curve. We found that spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
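As a rough illustration of the hybrid idea (not the authors' implementation), baseline shifts at already-identified indices can be levelled first, then residual spikes smoothed with a Savitzky-Golay filter; the simple step-removal below stands in for the paper's spline interpolation or targeted PCA stage, and the shift indices are assumed to come from a prior detection step:

```python
import numpy as np
from scipy.signal import savgol_filter

def hybrid_correct(signal, shift_idx=None, window=31, poly=3):
    """Two-stage motion artifact correction sketch: level baseline shifts,
    then Savitzky-Golay smoothing for remaining high-frequency spikes.

    signal: 1-D NIRS time series; shift_idx: indices of detected baseline
    shifts (detection itself is outside this sketch)."""
    corrected = signal.copy()
    if shift_idx:
        for i in shift_idx:
            # Remove the jump by re-levelling everything after the shift
            corrected[i:] -= corrected[i] - corrected[i - 1]
    # SG filtering suppresses residual sharp spikes while preserving shape
    return savgol_filter(corrected, window_length=window, polyorder=poly)
```

The appeal of the combination, as the abstract notes, is that each stage handles the artifact class the other leaves behind: level shifts survive smoothing, and spikes survive baseline correction.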

  17. Scene-based nonuniformity correction with video sequences and registration.

    PubMed

    Hardie, R C; Hayat, M M; Armstrong, E; Yasuda, B

    2000-03-10

    We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
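The curve-fitting stage can be illustrated with a linear detector model, observed = gain * true + offset, fitted per pixel by least squares against the scene estimates obtained from the registered, averaged frames (a simplification of the paper's procedure; the function and variable names are ours):

```python
import numpy as np

def fit_detector_response(observed, true_est):
    """Per-detector linear response fit for nonuniformity correction.

    observed: (n_frames, n_pix) raw detector outputs across registered frames.
    true_est: (n_frames, n_pix) estimated true irradiance seen by each pixel
              (e.g., averages over detectors that imaged the same scene point).
    Returns per-pixel (gain, offset) arrays."""
    n_frames, n_pix = observed.shape
    gain = np.empty(n_pix)
    offset = np.empty(n_pix)
    for p in range(n_pix):
        # Solve observed = gain * true + offset in the least-squares sense
        A = np.stack([true_est[:, p], np.ones(n_frames)], axis=1)
        gain[p], offset[p] = np.linalg.lstsq(A, observed[:, p], rcond=None)[0]
    return gain, offset
```

Correction then inverts the fit, (observed - offset) / gain, removing the fixed-pattern noise.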

  18. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function, are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.

  19. Bidirectional reflectance function in coastal waters: modeling and validation

    NASA Astrophysics Data System (ADS)

    Gilerson, Alex; Hlaing, Soe; Harmel, Tristan; Tonizzo, Alberto; Arnone, Robert; Weidemann, Alan; Ahmed, Samir

    2011-11-01

    The current operational algorithm for the correction of bidirectional effects from the satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms, specifically tuned for typical coastal waters and other case 2 conditions, are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a case 2 water focused remote sensing reflectance model is proposed to correct above-water and satellite water leaving radiance data for bidirectional effects. The proposed model is first validated with a one-year time series of in situ above-water measurements acquired by collocated multi- and hyperspectral radiometers with different viewing geometries, installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths.

  20. Atmospheric correction over case 2 waters with an iterative fitting algorithm: relative humidity effects.

    PubMed

    Land, P E; Haigh, J D

    1997-12-20

    In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Angström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type and to the attribution to aerosol spectral variations of spectral effects actually caused by the water contents. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot), rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, although the retrieval of relative humidity was not successful.
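The Angström power law the abstract refers to scales aerosol optical depth across wavelength as tau(lambda) = tau(lambda0) * (lambda / lambda0) ** (-alpha), where alpha is the Angström exponent; a one-line sketch (the default reference wavelength and exponent are illustrative, not from the paper):

```python
def angstrom_optical_depth(tau0, lam, lam0=550.0, alpha=1.3):
    """Aerosol optical depth at wavelength lam (nm), scaled from the value
    tau0 at reference wavelength lam0 via the Angstrom power law."""
    return tau0 * (lam / lam0) ** (-alpha)
```

The paper's criticism is precisely that a single alpha cannot capture aerosol-type and humidity effects, which is why it replaces this fixed spectral dependence with one computed from aerosol components and relative humidity.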

  1. Retrieval of Aerosol Optical Depth Under Thin Cirrus from MODIS: Application to an Ocean Algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Jaehwa; Hsu, Nai-Yung Christina; Sayer, Andrew Mark; Bettenhausen, Corey

    2013-01-01

    A strategy for retrieving aerosol optical depth (AOD) under conditions of thin cirrus coverage from the Moderate Resolution Imaging Spectroradiometer (MODIS) is presented. We adopt an empirical method that derives the cirrus contribution to measured reflectance in seven bands from the visible to shortwave infrared (0.47, 0.55, 0.65, 0.86, 1.24, 1.63, and 2.12 µm, commonly used for AOD retrievals) by using the correlations between the top-of-atmosphere (TOA) reflectance at 1.38 micron and these bands. The 1.38 micron band is used due to its strong absorption by water vapor and allows us to extract the contribution of cirrus clouds to TOA reflectance and create cirrus-corrected TOA reflectances in the seven bands of interest. These cirrus-corrected TOA reflectances are then used in the aerosol retrieval algorithm to determine cirrus-corrected AOD. The cirrus correction algorithm reduces the cirrus contamination in the AOD data as shown by a decrease in both magnitude and spatial variability of AOD over areas contaminated by thin cirrus. Comparisons of retrieved AOD against Aerosol Robotic Network observations at Nauru in the equatorial Pacific reveal that the cirrus correction procedure improves the data quality: the percentage of data within the expected error +/-(0.03 + 0.05 ×AOD) increases from 40% to 80% for cirrus-corrected points only and from 80% to 86% for all points (i.e., both corrected and uncorrected retrievals). Statistical comparisons with Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) retrievals are also carried out. A high correlation (R = 0.89) between the CALIOP cirrus optical depth and AOD correction magnitude suggests potential applicability of the cirrus correction procedure to other MODIS-like sensors.
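The empirical correction can be illustrated, in simplified form, by regressing a band's top-of-atmosphere reflectance on the 1.38 micron reflectance over the scene and subtracting the fitted cirrus term (a sketch of the general idea with hypothetical names, not the authors' exact procedure):

```python
import numpy as np

def cirrus_correct(rho_band, rho_1380):
    """Empirical cirrus removal sketch for one band.

    rho_band: 1-D array of TOA reflectances in the band of interest.
    rho_1380: 1-D array of TOA reflectances at 1.38 um for the same pixels,
              which respond almost only to cirrus (water vapor masks the surface).
    The regression slope converts 1.38 um cirrus signal into the band's
    cirrus contribution, which is then subtracted."""
    slope = np.polyfit(rho_1380, rho_band, 1)[0]
    return rho_band - slope * rho_1380
```

The corrected reflectances would then feed the standard AOD retrieval, as in the paper's cirrus-corrected TOA reflectance step.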

  2. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics

    PubMed Central

    Zou, Weiyao; Burns, Stephen A.

    2012-01-01

    A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
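Damped least-squares control of this kind computes the mirror commands as x = (A^T A + lambda^2 I)^{-1} A^T b, where A is the influence matrix and b the measured wavefront. The sketch below uses the median singular value of A as the damping factor, echoing (but not claiming to reproduce exactly) the paper's median-of-the-eigenvalue-spectrum choice:

```python
import numpy as np

def damped_least_squares(A, b, damping=None):
    """Damped (Tikhonov-regularized) least-squares solve for DM commands.

    A: (n_meas, n_act) influence matrix; b: (n_meas,) wavefront measurements.
    If damping is None, use the median singular value of A."""
    if damping is None:
        damping = np.median(np.linalg.svd(A, compute_uv=False))
    n = A.shape[1]
    # Normal equations with a damping term on the diagonal
    return np.linalg.solve(A.T @ A + damping**2 * np.eye(n), A.T @ b)
```

The damping trades fitting accuracy against actuator stroke: larger lambda yields smaller-norm commands, which is how the woofer and tweeter can be kept in their respective low-order/high-stroke and high-order/low-stroke regimes.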

  3. Automatic cortical segmentation in the developing brain.

    PubMed

    Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V

    2007-01-01

    The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).

  4. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server

    PubMed Central

    Ho, Qirong; Cipar, James; Cui, Henggang; Kim, Jin Kyu; Lee, Seunghak; Gibbons, Phillip B.; Gibson, Garth A.; Ganger, Gregory R.; Xing, Eric P.

    2014-01-01

    We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model’s values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes. PMID:25400488
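
The bounded-staleness rule at the heart of SSP can be sketched as a small clock-coordination class. The class and method names below are hypothetical illustrations, not the paper's parameter server API:

```python
import threading

class SSPClock:
    """Minimal sketch of SSP coordination. Each worker advances its own
    clock; a read issued at clock t may proceed once the slowest worker
    has reached at least t - staleness, which bounds how old a cached
    parameter value can be while letting fast workers run ahead."""

    def __init__(self, n_workers, staleness):
        self.clocks = [0] * n_workers
        self.staleness = staleness
        self.cond = threading.Condition()

    def tick(self, worker):
        with self.cond:
            self.clocks[worker] += 1
            self.cond.notify_all()

    def wait_to_read(self, reader_clock):
        with self.cond:
            # Block until min(clocks) >= reader_clock - staleness.
            while min(self.clocks) < reader_clock - self.staleness:
                self.cond.wait()

clock = SSPClock(n_workers=2, staleness=1)
clock.tick(0)           # worker 0 finishes iteration 1
clock.wait_to_read(1)   # min clock is 0 >= 1 - 1, so the read proceeds
```

With staleness 0 this degenerates to fully synchronous (bulk-synchronous) execution; a large staleness approaches the purely asynchronous regime that forfeits the correctness guarantee.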

  5. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics.

    PubMed

    Zou, Weiyao; Burns, Stephen A

    2012-03-20

    A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. © 2012 Optical Society of America

  6. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, with an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
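
The projection-correction step can be illustrated with a regularized (Wiener-style) deconvolution. This is a generic sketch; the Gaussian transfer function and the frequency-domain interface are assumptions for the demonstration, not the patented method:

```python
import numpy as np

def wiener_deconvolve(projection, H, eps=1e-3):
    """Correct a 2-D projection given the (frequency-domain) transfer
    function H of the blur; eps regularizes near-zeros of H and would be
    matched to the noise level in practice."""
    P = np.fft.fft2(projection)
    return np.fft.ifft2(P * np.conj(H) / (np.abs(H) ** 2 + eps)).real

# Demonstration: blur a point-like object with a known Gaussian transfer
# function, then recover it from the blurred projection.
n = 32
axis = np.arange(n)
wrap = np.minimum(axis, n - axis).astype(float)    # wrap-around coordinates
psf = np.exp(-(wrap[:, None] ** 2 + wrap[None, :] ** 2) / 2.0)
psf /= psf.sum()
H = np.fft.fft2(psf)

obj = np.zeros((n, n))
obj[16, 16] = 1.0
blurred = np.fft.ifft2(np.fft.fft2(obj) * H).real
recovered = wiener_deconvolve(blurred, H, eps=1e-9)  # noise-free, tiny eps
```

Each corrected projection would then feed the cone-beam reconstruction in place of the raw data.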

  7. A Formal Framework for the Analysis of Algorithms That Recover From Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Cesar A.

    2008-01-01

    We present a mathematical framework for the specification and verification of state-based conflict resolution algorithms that recover from loss of separation. In particular, we propose rigorous definitions of horizontal and vertical maneuver correctness that yield horizontal and vertical separation, respectively, in a bounded amount of time. We also provide sufficient conditions for independent correctness, i.e., separation under the assumption that only one aircraft maneuvers, and for implicitly coordinated correctness, i.e., separation under the assumption that both aircraft maneuver. An important benefit of this approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).

  8. Optimal wavefront estimation of incoherent sources

    NASA Astrophysics Data System (ADS)

    Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler

    2014-08-01

    Direct imaging is in general necessary to characterize exoplanets and disks. A coronagraph is an instrument used to create a dim (high-contrast) region in a star's PSF where faint companions can be detected. All coronagraphic high-contrast imaging systems use one or more deformable mirrors (DMs) to correct quasi-static aberrations and recover contrast in the focal plane. Simulations show that existing wavefront control algorithms can correct for diffracted starlight in just a few iterations, but in practice tens or hundreds of control iterations are needed to achieve high contrast. The discrepancy largely arises from the fact that simulations have perfect knowledge of the wavefront and DM actuation. Thus, wavefront correction algorithms are currently limited by the quality and speed of wavefront estimates. Exposures in space will take orders of magnitude more time than any calculations, so a nonlinear estimation method that needs fewer images but more computational time would be advantageous. In addition, current wavefront correction routines seek only to reduce diffracted starlight. Here we present nonlinear estimation algorithms that include optimal estimation of sources incoherent with a star such as exoplanets and debris disks.

  9. Assessment of health risks due to arsenic from iron ore lumps in a beach setting.

    PubMed

    Swartjes, Frank A; Janssen, Paul J C M

    2016-09-01

    In 2011, an artificial hook-shaped peninsula of 128 ha beach area was created along the Dutch coast, containing thousands of iron ore lumps, which include arsenic of natural origin. Elemental arsenic and inorganic arsenic induce a range of toxicological effects and have been classified as proven human carcinogens. The combination of easy access to the beach and the presence of arsenic raised concern among the local authorities about possible human health effects. The objective of this study is therefore to investigate human health risks from the presence of arsenic-containing iron ore lumps in a beach setting. The exposure scenarios underlying the human health-based risk limits for contaminated land in The Netherlands, based on soil material ingestion and a residential setting, are not appropriate here. Two specific exposure scenarios related to playing with iron ore lumps on the beach ('sandcastle building') are developed on the basis of expert judgement, relating to children aged 2 to 12 years: a worst-case exposure scenario and a precautionary scenario. Subsequently, exposure is calculated by quantifying the following factors: hand loading, soil-mouth transfer efficiency, hand-mouth contact frequency, contact surface, body weight and the relative oral bioavailability factor. In the absence of consensus on a universal reference dose for arsenic for use in the risk characterization stage, three different types of assessment have been evaluated: on the basis of the current Provisional Tolerable Weekly Intake (PTWI), on the basis of the Benchmark Dose Lower limit (BMDL), and by a comparison of exposure from the iron ore lumps with background exposure. It is concluded, certainly from the perspective of the conservative exposure assessment, that unacceptable human health risks due to exposure to arsenic from the iron ore lumps are unlikely and there is no need for risk management actions. Copyright © 2016 Elsevier B.V. All rights reserved.
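
The exposure quantification described above multiplies the listed factors into an average daily intake. A hypothetical worked example follows; every parameter value is an illustrative placeholder, not a value from the study:

```python
def daily_arsenic_intake(hand_loading_mg, transfer_eff, contacts_per_day,
                         contact_area_frac, as_mass_fraction, rel_bioavail,
                         body_weight_kg):
    """Average daily arsenic intake in mg per kg body weight per day,
    as a simple product of the factors listed in the abstract."""
    ingested_mg = (hand_loading_mg * transfer_eff
                   * contacts_per_day * contact_area_frac)
    return ingested_mg * as_mass_fraction * rel_bioavail / body_weight_kg

intake = daily_arsenic_intake(
    hand_loading_mg=50.0,     # material on the hands per loading event
    transfer_eff=0.2,         # fraction transferred per hand-mouth contact
    contacts_per_day=5,
    contact_area_frac=0.1,    # fraction of the hand surface mouthed
    as_mass_fraction=1e-4,    # arsenic mass fraction of the ingested material
    rel_bioavail=0.5,         # relative oral bioavailability
    body_weight_kg=15.0,      # young child
)
```

The resulting intake would then be compared against the chosen reference dose (PTWI-derived, BMDL-derived, or background exposure) in the risk characterization stage.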

  10. An adaptive multi-level simulation algorithm for stochastic biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
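
The paired-path correction estimator can be sketched for the simplest network, a single decay reaction X → 0, using Anderson-Higham style Poisson splitting to couple a coarse and a fine tau-leap path. Fixed τ is used here for brevity; the paper's adaptive time-stepping is omitted:

```python
import numpy as np

def coupled_decay_paths(x0, k, T, tau, rng):
    """One paired (coarse, fine) tau-leap path for the decay reaction X -> 0.

    At each fine substep the firing count is split into a common Poisson
    component shared by both levels plus an independent remainder, which
    correlates the paths and shrinks the variance of the correction term
    E[fine - coarse]."""
    xc, xf = float(x0), float(x0)
    for _ in range(int(round(T / tau))):
        rate_c = k * xc                 # coarse propensity, frozen over tau
        for _ in range(2):              # two fine substeps of length tau/2
            rate_f = k * xf             # fine propensity, frozen over tau/2
            common = rng.poisson(min(rate_c, rate_f) * tau / 2)
            extra = rng.poisson(abs(rate_c - rate_f) * tau / 2)
            dc = common + (extra if rate_c > rate_f else 0)
            df = common + (extra if rate_f > rate_c else 0)
            xc = max(xc - dc, 0.0)
            xf = max(xf - df, 0.0)
    return xc, xf

rng = np.random.default_rng(42)
pairs = np.array([coupled_decay_paths(1000, 1.0, 1.0, 0.1, rng)
                  for _ in range(200)])
correction = (pairs[:, 1] - pairs[:, 0]).mean()  # estimates E[fine] - E[coarse]
```

Because the two paths share the common Poisson counts, relatively few pairs suffice to estimate the correction term accurately, which is the source of the method's efficiency.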

  11. Application of a novel metal artifact correction algorithm in flat-panel CT after coil embolization of brain aneurysms: intraindividual comparison.

    PubMed

    Buhk, J-H; Groth, M; Sehner, S; Fiehler, J; Schmidt, N O; Grzyska, U

    2013-09-01

    To evaluate a novel algorithm for correcting beam hardening artifacts caused by metal implants in computed tomography performed on a C-arm angiography system equipped with a flat panel (FP-CT). 16 datasets of cerebral FP-CT acquisitions after coil embolization of brain aneurysms in the context of acute subarachnoid hemorrhage have been reconstructed by applying a soft tissue kernel with and without a novel reconstruction filter for metal artifact correction. Image reading was performed in multiplanar reformations (MPR) in average mode on a dedicated radiological workplace in comparison to the preinterventional native multisection CT (MS-CT) scan serving as the anatomic gold standard. Two independent radiologists performed image scoring following a defined scale in direct comparison of the image data with and without artifact correction. For statistical analysis, a random intercept model was calculated. The inter-rater agreement was very high (ICC = 86.3 %). The soft tissue image quality and visualization of the CSF spaces at the level of the implants was substantially improved. The additional metal artifact correction algorithm did not induce impairment of the subjective image quality in any other brain regions. Adding metal artifact correction to FP-CT in an acute postinterventional setting helps to visualize the close vicinity of the aneurysm at a generally consistent image quality. © Georg Thieme Verlag KG Stuttgart · New York.

  12. A hydrological emulator for global applications – HE v1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yaling; Hejazi, Mohamad; Li, Hongyi

    While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling–Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computation efficiency of the lumped scheme is 2 orders of magnitude higher than the distributed one and 7 orders more efficient than the VIC model. A case study of uncertainty analysis for the world's 16 basins with top annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Lastly, our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
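
For reference, one monthly time step of the lumped abcd water balance can be sketched as below. This follows the standard Thomas-style formulation as commonly published; the exact variant used in HE v1.0.0 may differ, and the parameter values in the example are illustrative:

```python
import numpy as np

def abcd_step(P, PET, S_prev, G_prev, a, b, c, d):
    """One monthly step of the lumped abcd model (Thomas-style formulation).

    a: runoff propensity before saturation (0 < a <= 1)
    b: upper limit on soil moisture plus actual evapotranspiration
    c: fraction of the water yield recharging groundwater
    d: groundwater recession coefficient
    """
    W = P + S_prev                            # available water
    wb = (W + b) / (2.0 * a)
    Y = wb - np.sqrt(wb * wb - W * b / a)     # evapotranspiration opportunity
    S = Y * np.exp(-PET / b)                  # end-of-month soil moisture
    ET = Y - S                                # actual evapotranspiration
    avail = W - Y                             # water yield for runoff/recharge
    G = (G_prev + c * avail) / (1.0 + d)      # groundwater storage
    Q = (1.0 - c) * avail + d * G             # direct runoff plus baseflow
    return Q, ET, S, G

Q, ET, S, G = abcd_step(P=100.0, PET=60.0, S_prev=100.0, G_prev=50.0,
                        a=0.98, b=250.0, c=0.3, d=0.1)
balance = 100.0 - (Q + ET + (S - 100.0) + (G - 50.0))   # closes to ~0
```

The lumped scheme applies this step once per basin per month, which is why it is orders of magnitude cheaper than gridded alternatives.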

  13. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of the model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.
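
The interface-position correction rests on refracting each ray at the detected interfaces. A minimal vector form of Snell's law is sketched below; this is a generic building block, not the paper's full correction algorithm, and the refractive index in the example is an illustrative value:

```python
import numpy as np

def refract(d, nrm, n1, n2):
    """Refract unit direction d at a surface with unit normal nrm
    (oriented against the incident ray), going from index n1 to n2.
    Returns None on total internal reflection."""
    r = n1 / n2
    cos_i = -float(np.dot(nrm, d))
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * nrm

# 45 degree incidence from air (n = 1.0) into a medium with n = 1.336
# (roughly aqueous humour); Snell's law fixes the transmitted direction.
s = np.sin(np.pi / 4)
t = refract(np.array([s, 0.0, s]), np.array([0.0, 0.0, -1.0]), 1.0, 1.336)
```

Tracing each A-scan ray through successive refractions like this repositions the apparent interfaces at their true depths.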

  14. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent, spontaneous emission, as tests of the shot noise algorithm's correctness.

  15. A formally verified algorithm for interactive consistency under a hybrid fault model

    NASA Technical Reports Server (NTRS)

    Lincoln, Patrick; Rushby, John

    1993-01-01

    Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguished three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
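
The resilience condition can be captured in a one-line checker, a straightforward transcription of the stated bound:

```python
def om_hybrid_tolerates(n, m, a, s, b):
    """True iff n nodes running m+1 rounds withstand a asymmetric,
    s symmetric, and b benign faults: n > 2a + 2s + b + m and m >= a."""
    return n > 2 * a + 2 * s + b + m and m >= a

# Classic Byzantine special case (s = b = 0, m = a): the condition
# reduces to n > 3a, the familiar "3a + 1 nodes tolerate a faults".
byzantine_ok = om_hybrid_tolerates(4, 1, 1, 0, 0)
```

Note that benign faults cost only one node of headroom each, while asymmetric and symmetric faults each cost two, which is the payoff of distinguishing the fault modes.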

  16. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms that are currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the measured dose using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of different algorithms on intensity-modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended for treatment planning in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes within the proximity of inhomogeneous structures.

  17. Full self-consistency in the Fermi-orbital self-interaction correction

    NASA Astrophysics Data System (ADS)

    Yang, Zeng-hui; Pederson, Mark R.; Perdew, John P.

    2017-05-01

    The Perdew-Zunger self-interaction correction cures many common problems associated with semilocal density functionals, but suffers from a size-extensivity problem when Kohn-Sham orbitals are used in the correction. Fermi-Löwdin-orbital self-interaction correction (FLOSIC) solves the size-extensivity problem, allowing its use in periodic systems and resulting in better accuracy in finite systems. Although the previously published FLOSIC algorithm [Pederson et al., J. Chem. Phys. 140, 121103 (2014); doi:10.1063/1.4869581] appears to work well in many cases, it is not fully self-consistent. This would be particularly problematic for systems where the occupied manifold is strongly changed by the correction. In this paper, we demonstrate a different algorithm for FLOSIC to achieve full self-consistency with only a marginal increase of computational cost. The resulting total energies are found to be lower than previously reported non-self-consistent results.

  18. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
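
The classic instance of ABFT, checksum-protected matrix multiplication, illustrates the concurrent error detection idea. This is a textbook sketch, not the matrix-based model developed in the thesis:

```python
import numpy as np

def checksum_ok(C, tol=1e-8):
    """Verify the row/column checksums of a checksum-augmented product."""
    data = C[:-1, :-1]
    return (np.allclose(C[-1, :-1], data.sum(axis=0), atol=tol)
            and np.allclose(C[:-1, -1], data.sum(axis=1), atol=tol))

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(3, 5))

Ac = np.vstack([A, A.sum(axis=0)])                  # append column-checksum row
Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # append row-checksum column
C = Ac @ Br                # the product carries its own checksums "for free"

faulty = C.copy()
faulty[1, 2] += 1.0        # inject a transient error in one result element
```

A single corrupted element violates exactly one row checksum and one column checksum, so the error can be located (and corrected) as well as detected, which is what makes the scheme cost-effective compared with full duplication.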

  19. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  20. Flexible hydrological modeling - Disaggregation from lumped catchment scale to higher spatial resolutions

    NASA Astrophysics Data System (ADS)

    Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas

    2015-04-01

    Based on an international literature review on model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results, when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, but keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff total and sub-flows (at downstream and internal gauging stations). 
For the distributed models, additional validation on spatial results was done for the groundwater head values at observation wells. To ensure that the lumped model can produce results as accurate as the spatially distributed models or close regardless to the number of parameters and implemented physical processes, it was checked whether the structure of the lumped models had to be adjusted. The concept has been implemented in a PCRaster - Python platform and tested for two Belgian case studies (catchments of the rivers Dijle and Grote Nete). So far, use is made of existing model structures (NAM, PDM, VHM and HBV). Acknowledgement: These results were obtained within the scope of research activities for the Flemish Environment Agency (VMM) - division Operational Water Management on "Next Generation hydrological modeling", in cooperation with IMDC consultants, and for Flanders Hydraulics Research (Waterbouwkundig Laboratorium) on "Effect of climate change on the hydrological regime of navigable watercourses in Belgium".

  1. On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish

    2016-04-01

    A merging of a lumped conceptual hydrological model with two conceptual dynamic vegetation models is presented to assess the performance of these models for simultaneous simulations of streamflow and leaf area index (LAI). Two conceptual dynamic vegetation models with differing representations of ecological processes are merged with a lumped conceptual hydrological model (HYMOD) to predict catchment-scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km² located in the Murray-Darling Basin in Australia are used. Our results illustrate that when single-objective optimisation was focussed on maximizing the objective function for streamflow or LAI, the other un-calibrated predicted outcome (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimization cannot take into account the essence of all processes in the conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than RR-LAI-I due to better representation of physical processes such as net primary productivity (NPP) in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm proves to be an attractive tool for improved streamflow predictions.

  2. Ultrasound of pediatric breast masses: what to do with lumps and bumps.

    PubMed

    Valeur, Natalie S; Rahbar, Habib; Chapman, Teresa

    2015-10-01

    The approach to breast masses in children differs from that in adults in many ways, including the differential diagnostic considerations, imaging algorithm and appropriateness of biopsy as a means of further characterization. Most pediatric breast masses are benign, either related to breast development or benign neoplastic processes. Biopsy is rarely needed and can damage the developing breast; thus radiologists must be familiar with the imaging appearance of common entities so that biopsies are judiciously recommended. The purpose of this article is to describe the imaging appearances of the normally developing pediatric breast as well as illustrate the imaging findings of a spectrum of diseases, including those that are benign (fibroadenoma, juvenile papillomatosis, pseudoangiomatous stromal hyperplasia, gynecomastia, abscess and fat necrosis), malignant (breast carcinoma and metastases), and have variable malignant potential (phyllodes tumor).

  3. Spectral-Based Volume Sensor Prototype, Post-VS4 Test Series Algorithm Development

    DTIC Science & Technology

    2009-04-30

    Pcorr: Probability / Percentage of Correct Classification (# Correct / # Total)
    PD: PhotoDiode
    Pd: Probability / Percentage of Detection (# Correct Detections / Total # of Sources)
    Pfa: Probability / Percentage of False Alarm (# FAs / Total # of Sources)
    SBVS: Spectral-Based Volume Sensor
    SFA: Smoke and

  4. Description of algorithms for processing Coastal Zone Color Scanner (CZCS) data

    NASA Technical Reports Server (NTRS)

    Zion, P. M.

    1983-01-01

    The algorithms for processing coastal zone color scanner (CZCS) data to geophysical units (pigment concentration) are described. Current public domain information for processing these data is summarized. Calibration, atmospheric correction, and bio-optical algorithms are presented. Three CZCS data processing implementations are compared.

  5. Fast correction approach for wavefront sensorless adaptive optics based on a linear phase diversity technique.

    PubMed

    Yue, Dan; Nie, Haitao; Li, Ye; Ying, Changsheng

    2018-03-01

    Wavefront sensorless (WFSless) adaptive optics (AO) systems have been widely studied in recent years. To reach optimum results, such systems require an efficient correction method. This paper presents a fast wavefront correction approach for a WFSless AO system based mainly on the linear phase diversity (PD) technique. The fast closed-loop control algorithm is set up based on the linear relationship between the drive voltage of the deformable mirror (DM) and the far-field images of the system, which is obtained through the linear PD algorithm combined with the influence function of the DM. A large number of phase screens under different turbulence strengths are simulated to test the performance of the proposed method. The numerical simulation results show that the method has a fast convergence rate and strong correction ability: a few correction iterations achieve good results, effectively improving the imaging quality of the system while requiring fewer CCD measurements.
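
The closed-loop idea, iterating DM voltages against a calibrated linear map from voltages to image-derived measurements, can be sketched as follows. The pseudoinverse update, the gain, and the random calibration matrix are illustrative assumptions, not the paper's exact PD formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(50, 8))     # assumed calibrated voltage-to-measurement map
v_flat = rng.normal(size=8)      # DM voltages that would flatten the wavefront

def measure(v):
    """Stand-in for deriving measurements from the far-field images."""
    return J @ (v - v_flat)

v = np.zeros(8)                  # start from an uncorrected DM
Jp = np.linalg.pinv(J)
for _ in range(5):               # a few correction iterations
    v = v - 0.8 * (Jp @ measure(v))

residual = np.linalg.norm(v - v_flat)
```

Because the loop is linear, the residual contracts geometrically with the loop gain, which is why only a few iterations are needed once the linear map is known.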

  6. Influence of dose calculation algorithms on the predicted dose distribution and NTCP values for NSCLC patients.

    PubMed

    Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten

    2011-05-01

    To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA; Oncentra MasterPlan Collapsed Cone and Pencil Beam; Pinnacle Collapsed Cone; and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated at the same accelerator in the period 2001-2006 were included, and that accelerator was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both with and without density corrections. Statistically significant differences were found between the calculated doses to heart, lung, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to the choice among the investigated dose calculation algorithms. However, the PTV dose levels averaged over the patient population vary by up to 11% between algorithms. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24, or between 0.35 and 0.48, across the investigated dose algorithms, depending on the chosen model parameter set. The influence of density correction on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set; for fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important, when working with NTCP planning, to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
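The LKB lung model referenced above reduces a dose-volume histogram to a generalized EUD and maps it through a probit link. A minimal sketch, assuming a differential DVH; the parameter defaults are one published pneumonitis set used purely as placeholders, and the DVHs are invented to mimic the ~10% inter-algorithm dose spread reported above.

```python
import math

def lkb_ntcp(doses, volumes, td50=30.8, m=0.37, n=0.99):
    """NTCP via the Lyman-Kutcher-Burman model.

    doses   : dose (Gy) of each differential-DVH bin
    volumes : fractional volume of each bin (sums to 1)
    td50, m, n : model parameters (illustrative defaults only).
    """
    # Generalised EUD reduces the DVH to a single effective uniform dose.
    geud = sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (geud - td50) / (m * td50)
    # Probit (cumulative normal) link function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Two hypothetical lung DVHs differing by ~10% in dose (numbers invented).
doses_a = [5.0, 10.0, 15.0, 20.0, 25.0]
doses_b = [d * 1.10 for d in doses_a]
vols = [0.40, 0.30, 0.15, 0.10, 0.05]
print(lkb_ntcp(doses_a, vols), lkb_ntcp(doses_b, vols))
```

The higher-dose DVH yields the higher NTCP, and changing `td50`, `m`, `n` shifts both values, mirroring the parameter-set sensitivity discussed in the abstract.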

  7. Methods of harmonic synthesis for global geopotential models and their first-, second- and third-order gradients

    NASA Astrophysics Data System (ADS)

    Fantino, E.; Casotto, S.

    2009-07-01

    Four widely used algorithms for the computation of the Earth’s gravitational potential and its first-, second- and third-order gradients are examined: the traditional increasing-degree recursion in associated Legendre functions and its variant based on the Clenshaw summation, plus the methods of Pines and Cunningham-Metris, which are free from the singularities that affect the first two methods at the geographic poles. All four methods are reorganized with the lumped-coefficients approach, which in the cases of Pines and Cunningham-Metris requires a complete revision of the algorithms. The characteristics of the four methods are studied and described, and numerical tests are performed to assess and compare their precision, accuracy, and efficiency. In general, the performance levels of all four codes exhibit large improvements over previously published versions. In terms of numerical precision, Clenshaw and Legendre offer overall better quality away from the geographic poles. Furthermore, Pines and Cunningham-Metris are affected by an intrinsic loss of precision at the equator and suffer additional deterioration when the gravity gradient components are rotated into the East-North-Up topocentric reference system.
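The lumped-coefficients reorganization mentioned above collapses the sum over degree before applying the longitude-dependent trigonometric factors, so each cos mλ and sin mλ is evaluated only once per order. A minimal sketch with invented coefficients and unnormalized Legendre functions; production geopotential codes use fully normalized functions and recursions stable to high degree.

```python
import math
import random

def assoc_legendre(nmax, x):
    """Unnormalised associated Legendre functions P[n][m](x) via the
    standard column recursion (adequate for low degree)."""
    P = [[0.0] * (nmax + 1) for _ in range(nmax + 1)]
    s = math.sqrt(max(0.0, 1.0 - x * x))
    P[0][0] = 1.0
    for m in range(1, nmax + 1):                    # sectorials P[m][m]
        P[m][m] = (2 * m - 1) * s * P[m - 1][m - 1]
    for m in range(nmax):                           # first off-sectorials
        P[m + 1][m] = (2 * m + 1) * x * P[m][m]
    for m in range(nmax + 1):                       # remaining degrees
        for n in range(m + 2, nmax + 1):
            P[n][m] = ((2 * n - 1) * x * P[n - 1][m]
                       - (n + m - 1) * P[n - 2][m]) / (n - m)
    return P

random.seed(1)
nmax = 8                                            # toy maximum degree
C = [[random.uniform(-1, 1) for _ in range(n + 1)] for n in range(nmax + 1)]
S = [[random.uniform(-1, 1) for _ in range(n + 1)] for n in range(nmax + 1)]

phi, lam = math.radians(35.0), math.radians(120.0)  # arbitrary point
P = assoc_legendre(nmax, math.sin(phi))

# Direct double sum over degree n and order m.
direct = sum(P[n][m] * (C[n][m] * math.cos(m * lam)
                        + S[n][m] * math.sin(m * lam))
             for n in range(nmax + 1) for m in range(n + 1))

# Lumped-coefficients form: collapse the degree sum first, then apply the
# longitude-dependent factors once per order.
lumped = 0.0
for m in range(nmax + 1):
    A = sum(P[n][m] * C[n][m] for n in range(m, nmax + 1))
    B = sum(P[n][m] * S[n][m] for n in range(m, nmax + 1))
    lumped += A * math.cos(m * lam) + B * math.sin(m * lam)

print(abs(direct - lumped))   # agrees to rounding error
```

The two evaluations are algebraically identical; the lumped form simply trades O(n²) trigonometric evaluations for O(n), which is where the efficiency gain comes from.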

  8. A robust method for removal of glint effects from satellite ocean colour imagery

    NASA Astrophysics Data System (ADS)

    Singh, R. K.; Shanmugam, P.

    2014-12-01

    Removal of glint effects from satellite imagery for accurate retrieval of water-leaving radiances is a complicated problem, since the glint contribution to the measured signal depends on many factors such as viewing geometry, sun elevation and azimuth, illumination conditions, wind speed and direction, and the water refractive index. To simplify the situation, existing glint correction models describe the extent of the glint-contaminated region and its contribution to the radiance essentially as a function of wind speed and sea surface slope, which often leads to a tremendous loss of information with considerable scientific and financial impact. Even with the glint-avoiding tilt capability of modern sensors, glint contamination is severe in satellite-derived ocean colour products in the equatorial and sub-tropical regions. To rescue a significant portion of the data presently discarded as "glint contaminated" and to improve the accuracy of water-leaving radiances in glint-contaminated regions, we developed a glint correction algorithm that depends only on the satellite-derived Rayleigh-corrected radiance and absorption by clear waters. The new algorithm achieves meaningful retrievals of ocean radiances from glint-contaminated pixels unless they are saturated by strong glint in any of the wavebands. It combines the background absorption of radiance by water with the spectral glint function to accurately minimize glint contamination effects and produce robust ocean colour products. The new algorithm is implemented along with an aerosol correction method, and its performance is demonstrated for many MODIS-Aqua images over the Arabian Sea, a region heavily affected by sunglint due to its geographical location. The results with and without sunglint correction are compared, indicating major improvements in the derived products with sunglint correction. When compared to an existing model in the SeaDAS processing system, the new algorithm performs best in terms of yielding physically realistic water-leaving radiance spectra and improving the accuracy of the ocean colour products. Validation of MODIS-Aqua-derived water-leaving radiances against in-situ data also corroborates these results. Unlike the standard models, the new algorithm performs well under variable illumination and wind conditions and does not require any auxiliary data besides the Rayleigh-corrected radiance itself. Exploitation of signals observed by sensors looking within regions affected by bright white sunglint is possible with the present algorithm, provided the requirement of a stable response over a wide dynamic range for these sensors is fulfilled.

  9. Quantitative Microplate-Based Respirometry with Correction for Oxygen Diffusion

    PubMed Central

    2009-01-01

    Respirometry using modified cell culture microplates offers an increase in throughput and a decrease in the biological material required for each assay. Plate-based respirometers are susceptible to a range of diffusion phenomena: as O2 is consumed by the specimen, atmospheric O2 leaks into the measurement volume. Oxygen also dissolves in, and diffuses passively through, the polystyrene commonly used as a microplate material. Consequently, the walls of such respirometer chambers are not just permeable to O2 but also store substantial amounts of gas. O2 flux between the walls and the measurement volume biases the measured oxygen consumption rate, depending on the actual [O2] gradient. We describe a compartment model-based correction algorithm to deconvolute the biological oxygen consumption rate from the measured [O2]. We optimize the algorithm to work with the Seahorse XF24 extracellular flux analyzer. The correction algorithm is biologically validated using mouse cortical synaptosomes and liver mitochondria attached to XF24 V7 cell culture microplates, and by comparison to classical Clark electrode oxygraph measurements. The algorithm increases the useful range of oxygen consumption rates, the temporal resolution, and the duration of measurements. The algorithm is presented in a general format and is therefore applicable to other respirometer systems. PMID:19555051
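A compartment-model correction of this general kind can be sketched with a toy two-compartment model: the chamber exchanges O2 with the atmosphere and with a gas-storing wall, and the biological rate is recovered by inverting that model against the measured trace. All rate constants, volumes, and the Euler integration are invented for illustration and are far simpler than the calibrated, device-specific algorithm described in the paper.

```python
# Toy two-compartment sketch of diffusion-corrected respirometry.
dt = 0.5            # s, sampling interval (invented)
V = 7.0             # uL, measurement volume (invented)
k_atm = 0.002       # 1/s, leak from atmosphere (invented)
k_wall = 0.01       # 1/s, exchange with plastic walls (invented)
k_store = 0.001     # 1/s, wall re-equilibration rate (invented)
o2_atm = 200.0      # uM, ambient oxygen concentration

def simulate(ocr_true, steps=2400):
    """Forward model: the [O2] trace a sensor would see for a constant OCR."""
    o2, o2_w = o2_atm, o2_atm
    trace = [o2]
    for _ in range(steps):
        flux = (-ocr_true / V + k_atm * (o2_atm - o2)
                + k_wall * (o2_w - o2))
        o2_w += k_store * (o2 - o2_w) * dt   # wall compartment stores O2
        o2 += flux * dt
        trace.append(o2)
    return trace

def correct(trace):
    """Invert the same model: recover OCR from the measured [O2] trace."""
    o2_w = o2_atm
    ocr = []
    for i in range(1, len(trace)):
        o2 = trace[i - 1]
        d_o2 = (trace[i] - trace[i - 1]) / dt
        # Subtract the modelled atmospheric and wall fluxes from the
        # observed rate of change to isolate the biological rate.
        ocr.append(V * (-d_o2 + k_atm * (o2_atm - o2)
                        + k_wall * (o2_w - o2)))
        o2_w += k_store * (o2 - o2_w) * dt
    return ocr

true_ocr = 0.5                     # uM*uL/s, constant biological rate
est = correct(simulate(true_ocr))
print(sum(est[-100:]) / 100)       # ≈ 0.5: the true rate is recovered
```

Because the same model generates and corrects the trace here, recovery is essentially exact; with real data, mismatch between the model and the device limits the correction.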

  10. Near-infrared spectroscopy determined cerebral oxygenation with eliminated skin blood flow in young males.

    PubMed

    Hirasawa, Ai; Kaneko, Takahito; Tanaka, Naoki; Funane, Tsukasa; Kiguchi, Masashi; Sørensen, Henrik; Secher, Niels H; Ogoh, Shigehiko

    2016-04-01

    We estimated cerebral oxygenation during handgrip exercise and a cognitive task using an algorithm that eliminates the influence of skin blood flow (SkBF) on the near-infrared spectroscopy (NIRS) signal. The algorithm involves a subtraction method to develop a correction factor for each subject. For twelve male volunteers (age 21 ± 1 yrs), +80 mmHg pressure was applied over the left temporal artery for 30 s by a custom-made headband cuff to calculate an individual correction factor. From the NIRS-determined ipsilateral cerebral oxyhemoglobin concentration (O2Hb) at two source-detector distances (15 and 30 mm), with the algorithm using the individual correction factor, we expressed cerebral oxygenation without influence from scalp and skull blood flow. Validity of the estimated cerebral oxygenation was verified during cerebral neural activation (handgrip exercise and cognitive task). With both source-detector distances, handgrip exercise and the cognitive task increased O2Hb (P < 0.01), but O2Hb was reduced when SkBF was eliminated by pressure on the temporal artery for 5 s. However, when the estimation of cerebral oxygenation was based on the algorithm developed while pressure was applied to the temporal artery, the estimated O2Hb was not affected by elimination of SkBF during handgrip exercise (P = 0.666) or the cognitive task (P = 0.105). These findings suggest that the algorithm with the individual correction factor allows NIRS applied to the forehead to accurately evaluate changes in cerebral oxygenation without influence from extracranial blood flow.
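The subtraction method above can be illustrated with a toy two-channel model: the 30 mm channel mixes cerebral and scalp components, the 15 mm channel sees (mostly) scalp, and the correction factor is estimated during the cuff epoch, when only SkBF is assumed to change. All waveforms, the mixing weight, and the epoch timings are invented; real data would require averaging over the pressure epoch rather than picking single samples.

```python
n = 300                          # samples at a notional 1 Hz

def scalp(t):                    # SkBF: cuff pressure 60-90 s, handgrip from 150 s
    if 60 <= t < 90:
        return 0.2               # SkBF suppressed under the cuff
    return 1.5 if t >= 150 else 1.0

def cerebral(t):                 # cortical O2Hb: activation from 150 s
    return 0.5 if t >= 150 else 0.0

k_true = 0.6                     # scalp contamination weight of the deep channel
shallow = [scalp(t) for t in range(n)]                      # 15 mm channel
deep = [cerebral(t) + k_true * scalp(t) for t in range(n)]  # 30 mm channel

# Individual correction factor from the cuff epoch: the cerebral component
# is assumed unchanged there, so the deep/shallow change ratio isolates k.
k_est = (deep[75] - deep[30]) / (shallow[75] - shallow[30])

# Subtraction method: remove the scaled shallow signal from the deep one.
corrected = [d - k_est * s for d, s in zip(deep, shallow)]

print(k_est)                           # ≈ 0.6: recovered contamination weight
print(deep[200] - deep[30])            # ≈ 0.8: raw deep channel overestimates
print(corrected[200] - corrected[30])  # ≈ 0.5: true cerebral activation
```

The raw deep channel overstates the activation because SkBF also rises during the task; the corrected trace reproduces the cerebral step alone, which is the effect the individual correction factor is designed to achieve.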

  11. QCCM Center for Quantum Algorithms

    DTIC Science & Technology

    2008-10-17

    algorithms (e.g., quantum walks and adiabatic computing), as well as theoretical advances relating algorithms to physical implementations (e.g...Park, NC 27709-2211 15. SUBJECT TERMS Quantum algorithms, quantum computing, fault-tolerant error correction Richard Cleve MITACS East Academic...0511200 Algebraic results on quantum automata A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer, D. Thérien, Theory of Computing Systems 39 (2006

  12. 29 CFR 4041.24 - Notices of plan benefits.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... lump sum and the age at which, or form in which, the plan benefits will be paid differs from the normal retirement benefit— (i) The age or form stated in the plan; and (ii) The age or form adjustment factors; and... for the third month before the month in which the lump sum is distributed), a reference to the...

  13. A painful perineal lump: an unusual case of ectopic breast tissue

    PubMed Central

    Yongue, G; Leff, D; Lamb, BW; Karim, S; Aref, F; Vashisht, R

    2011-01-01

    We report the case of a 40-year-old lady who presented with an episodically painful perineal lump. Clinical and radiological investigations were inconclusive. Excision biopsy confirmed an ectopic breast mass. Ectopic breast tissue is difficult to diagnose but close attention to clinical findings can help to guide further investigation and diagnosis. PMID:22004627

  14. 29 CFR 4044.75 - Other lump sum benefits.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sum benefits. The value of a lump sum benefit which is not covered under § 4044.73 or § 4044.74 is equal to— (a) The value under the qualifying bid, if an insurer provides the benefit; or (b) The present value of the benefit as of the date of distribution, determined using reasonable actuarial assumptions...

  15. Nonlinear Wave Propagation

    DTIC Science & Technology

    2000-03-17

    scattering problem has intrinsic interest in its own right. A new class of lump-type solutions of the multidimensional Kadomtsev-Petviashvili (KP) equation ...solutions associated with the Kadomtsev-Petviashvili equation have more complicated interaction properties than the previously known lump...B-3. New Solutions of the Nonstationary Schrödinger and Kadomtsev-Petviashvili Equations, M.J. Ablowitz and J. Villarroel, in Symmetries and

  16. 5 CFR 839.1115 - What is an actuarial reduction?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...? An actuarial reduction allows you to receive benefits without having to pay an amount due in a lump sum. OPM reduces your annuity in a way that, on average, allows the Fund to recover the amount of the... have to pay at that time. To compute an actuarial reduction, OPM divides the lump sum amount by the...

  17. 5 CFR 839.1115 - What is an actuarial reduction?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...? An actuarial reduction allows you to receive benefits without having to pay an amount due in a lump sum. OPM reduces your annuity in a way that, on average, allows the Fund to recover the amount of the... have to pay at that time. To compute an actuarial reduction, OPM divides the lump sum amount by the...

  18. 20 CFR 404.1059 - Deemed wages for certain individuals interned during World War II.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the amount of, any lump-sum death payment in the case of a death after December 1972, and for... for a monthly benefit, a recalculation of benefits by reason of this section, or a lump-sum death...) The highest actual hourly rate of pay received for any employment before internment, multiplied by 40...

  19. 20 CFR 404.1059 - Deemed wages for certain individuals interned during World War II.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... the amount of, any lump-sum death payment in the case of a death after December 1972, and for... for a monthly benefit, a recalculation of benefits by reason of this section, or a lump-sum death...) The highest actual hourly rate of pay received for any employment before internment, multiplied by 40...

  20. 20 CFR 404.1059 - Deemed wages for certain individuals interned during World War II.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the amount of, any lump-sum death payment in the case of a death after December 1972, and for... for a monthly benefit, a recalculation of benefits by reason of this section, or a lump-sum death...) The highest actual hourly rate of pay received for any employment before internment, multiplied by 40...
