NASA Astrophysics Data System (ADS)
Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.
2015-05-01
Nature-inspired metaheuristic techniques provide derivative-free solutions to complex optimization problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the goal of the training process is to find an optimal weight set for the network. Traditional training algorithms have limitations such as getting trapped in local minima and a slow convergence rate. This study proposes a new technique, CSLM, combining the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and to avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
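For context, the damped Gauss-Newton update at the heart of Levenberg-Marquardt training can be sketched as follows. This is a minimal illustration on a toy least-squares curve fit, not the CSLM method itself; the damping schedule, iteration count, and toy model are assumptions:

```python
import numpy as np

def lm_step(params, residual, jacobian, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lam*I) delta = -J^T r."""
    r = residual(params)
    J = jacobian(params)
    A = J.T @ J + lam * np.eye(params.size)
    return params + np.linalg.solve(A, -J.T @ r)

# Toy problem: fit y = a*exp(b*x) to noiseless data with a=2, b=-1.5
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
residual = lambda p: p[0] * np.exp(p[1] * x) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * x),
                                      p[0] * x * np.exp(p[1] * x)])

p, lam = np.array([1.0, -1.0]), 1e-2
for _ in range(50):
    p_new = lm_step(p, residual, jacobian, lam)
    if np.sum(residual(p_new)**2) < np.sum(residual(p)**2):
        p, lam = p_new, lam * 0.5   # step accepted: trust the model more
    else:
        lam *= 2.0                  # step rejected: increase damping
```

The damping parameter interpolates between gradient descent (large lam) and Gauss-Newton (small lam), which is what gives LM its fast convergence near a minimum while remaining robust far from it.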
Toushmalani, Reza
2013-01-01
The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization method based on swarm intelligence; it originates from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt (LM) algorithm, is an approximation to Newton's method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present the application of both to solving the inverse problem of a fault. The parameters used by the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, conventional inverse modeling methods can be computationally expensive because the observed data sets are often large and the model parameters are numerous. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed for the first damping parameter and recycle it for all the following damping parameters. These computational techniques significantly improve the efficiency of our new inverse modeling algorithm. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
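The recycling trick exploits the shift-invariance of Krylov subspaces: the subspace generated from J^T J is the same as that of J^T J + lam*I, so one basis built once can serve every damping parameter. A minimal sketch with an assumed small dense test matrix follows (the MADS implementation is matrix-free, parallel, and far more elaborate):

```python
import numpy as np

def krylov_basis(A, b, k):
    """Orthonormal basis of the Krylov space span{b, Ab, ..., A^(k-1) b}."""
    V = np.zeros((b.size, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # orthogonalize against the basis
        V[:, j] = w / np.linalg.norm(w)
    return V

# Damped normal equations (J^T J + lam*I) x = J^T r for several damping
# parameters lam, all solved in one shared (recycled) Krylov subspace.
rng = np.random.default_rng(0)
J = rng.standard_normal((200, 50))
r = rng.standard_normal(200)
A, b = J.T @ J, J.T @ r

V = krylov_basis(A, b, k=30)
T, g = V.T @ A @ V, V.T @ b          # project once, reuse for every lam
solutions = {lam: V @ np.linalg.solve(T + lam * np.eye(30), g)
             for lam in (1.0, 0.1, 0.01)}
```

Each damping parameter now costs only a 30-by-30 solve instead of a fresh 50-by-50 (or, in the paper's setting, a much larger) factorization, which is where the reported speed-up comes from.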
Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo
2015-09-01
This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
Bandwidth correction for LED chromaticity based on Levenberg-Marquardt algorithm
NASA Astrophysics Data System (ADS)
Huang, Chan; Jin, Shiqun; Xia, Guo
2017-10-01
Light emitting diodes (LEDs) are widely employed in industrial applications and scientific research. With a spectrometer, the chromaticity of an LED can be measured. However, chromaticity shift will occur due to the broadening effects of the spectrometer. In this paper, an approach to bandwidth correction for LED chromaticity, based on the Levenberg-Marquardt algorithm, is put forward. We compare the chromaticity of simulated LED spectra after bandwidth correction by the proposed method and by the differential operator method. The experimental results show that the proposed approach achieves excellent performance in bandwidth correction, which proves the effectiveness of the approach. The method has also been tested on real blue LED spectra.
Modified Levenberg-Marquardt Method for Rössler Chaotic System Fuzzy Modeling Training
NASA Astrophysics Data System (ADS)
Wang, Yu-Hui; Wu, Qing-Xian; Jiang, Chang-Sheng; Xue, Ya-Li; Fang, Wei
Generally, fuzzy approximation models require some human knowledge and experience. The operator's experience is involved in the mathematics of fuzzy theory as a collection of heuristic rules. The main goal of this paper is to present a new method for identifying unknown nonlinear dynamics, such as the Rössler system, without any human knowledge. Instead of heuristic rules, the presented method uses input-output data pairs to identify the Rössler chaotic system. The training algorithm is a modified Levenberg-Marquardt (L-M) method, which can adjust the parameters of each linear polynomial and the fuzzy membership functions online, and does not rely excessively on experts' experience. Finally, it is applied to training a fuzzy identification of the Rössler chaotic system. Compared with the standard L-M method, the convergence speed is accelerated. The simulation results demonstrate the effectiveness of the proposed method.
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion, which requires a large number of events, is not easy to satisfy in practice. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood
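One way to sketch the idea is to recast the Poisson MLE merit function as a sum of squared "deviance residuals," which lets a standard Levenberg-Marquardt least-squares driver minimize it. Here SciPy's MINPACK L-M and a single-exponential decay model stand in for the authors' implementation, and the histogram is synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

def deviance_residuals(p, t, n):
    """Signed sqrt of per-bin Poisson deviance; the sum of squares equals the
    MLE merit function 2*sum(m - n + n*ln(n/m)), with n*ln(n/m) = 0 at n = 0."""
    m = np.clip(p[0] * np.exp(-t / p[1]), 1e-12, None)   # model: A*exp(-t/tau)
    with np.errstate(divide='ignore', invalid='ignore'):
        term = np.where(n > 0, n * np.log(n / m), 0.0)
    d = 2.0 * (m - n + term)                             # per-bin deviance, >= 0
    return np.sign(n - m) * np.sqrt(np.maximum(d, 0.0))

# Synthetic decay histogram (e.g. a fluorescence lifetime trace) with
# Poisson counting noise, true A = 200, true tau = 2.5
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 100)
counts = rng.poisson(200.0 * np.exp(-t / 2.5)).astype(float)

fit = least_squares(deviance_residuals, x0=[100.0, 1.0],
                    args=(t, counts), method='lm')
A_hat, tau_hat = fit.x
```

Because the deviance residuals reduce to ordinary weighted residuals in the large-count limit, this fit reproduces least squares where it is valid and corrects its bias where counts are low.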
NASA Astrophysics Data System (ADS)
Korkin, S.; Lyapustin, A.
2012-12-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over the space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide, monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines and the ratio of the coarse and fine fractions, the atmosphere optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal on Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7-29. [4]. Mishchenko MI, Travis LD
NASA Technical Reports Server (NTRS)
Korkin, S.; Lyapustin, A.
2012-01-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over the space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide, monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines and the ratio of the coarse and fine fractions, the atmosphere optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request.
Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra
NASA Astrophysics Data System (ADS)
Khamatnurova, Marina; Gribanov, Konstantin
2016-04-01
The Levenberg-Marquardt method [1] with an iteratively adjusted parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, is developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. Retrieved methane vertical profiles are then used for calculation of the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and the temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval for each selected spectrum. The modified software package FIRE-ARMS [3] was used for numerical experiments. To adjust parameters and validate the method we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. The comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V.184. P.1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its Application to IMG/ADEOS data // JQSRT. 2001. V.68. № 4. P.435-451. 4. http://www.ecmwf.int
NASA Astrophysics Data System (ADS)
Kwon, Sung-il; Lynch, M.; Prokop, M.
2005-02-01
This paper addresses the system identification and the decoupling PI controller design for a normal conducting RF cavity. Based on open-loop measurement data of an SNS DTL cavity, the open-loop system's bandwidths and loop time delays are estimated by using batched least squares. With the identified system, a PI controller is designed in such a way that it suppresses the time-varying klystron droop and decouples the in-phase and quadrature components of the cavity field. The Levenberg-Marquardt algorithm is applied to the nonlinear least squares problem to obtain the optimal PI controller parameters. The tuned PI controller gains are downloaded to the low-level RF system by using channel access. The experiment on the closed-loop system is performed and the performance is investigated. The proposed tuning method runs automatically in a real-time interface between a host computer and the controller hardware through ActiveX Channel Access.
Tra, Viet; Kim, Jaeyoung; Kim, Jong-Myon
2017-01-01
This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that the variation of a bearing's speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds. PMID:29211025
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, was examined to estimate the hourly dew point temperature. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration, and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, the mean absolute error, and the absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model, and the best performance in terms of the different evaluation criteria was obtained by considering all potential input variables.
Zhu, Xiang; Zhang, Dianwen
2013-01-01
We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on a graphics processing unit for high-performance, scalable, parallel model-fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification (MUSIC) method is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of estimating the source height. The further application of the Levenberg-Marquardt method, with the results from MUSIC as the initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
NASA Astrophysics Data System (ADS)
Pahlavani, P.; Gholami, A.; Azimi, S.
2017-09-01
This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected for all reference points in four directions and two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. The RSS readings at all reference points and the known positions of these reference points were prepared for the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network, using 30% check and validation data, was computed to be approximately 2.20 meters.
Location of the Sinabung volcano magma chamber in 2013 using a Levenberg-Marquardt inversion scheme
NASA Astrophysics Data System (ADS)
Kumalasari, R.; Srigutomo, W.; Djamal, M.; Meilano, I.; Gunawan, H.
2018-05-01
Sinabung Volcano has been monitored using GPS since its eruption in August 2010. We applied a Levenberg-Marquardt inversion scheme to GPS data from 2013, because the deformation of Sinabung Volcano in that year showed both inflation and deflation: first we applied the Levenberg-Marquardt scheme to velocity data from 23 January 2013, then to data from 31 December 2013. From our analysis, the depth of the pressure source in the modeling results indicates the possibility that Sinabung has a deep magma chamber at about 15 km and also a shallow magma chamber at about 1 km below the surface.
NASA Astrophysics Data System (ADS)
Lapierre, J. L.; Sonnenfeld, R. G.; Hager, W. W.; Morris, K.
2011-12-01
Researchers have long studied the copious and complex electric field waveforms caused by lightning. By combining electric-field measurements taken simultaneously at many different locations on the ground [Krehbiel et al., 1979], we hope to learn more about charge sources for lightning flashes. The Langmuir Electric Field Array (LEFA) is a network of nine field-change measurement stations (slow antennas) arranged around Langmuir Laboratory near Magdalena, New Mexico. Using a mathematical method called the Levenberg-Marquardt (LM) method, we can invert the electric field data to determine the magnitude and position of the charge centroid removed from the cloud. We analyzed three return strokes (RS) following a dart leader from a storm occurring on October 21st, 2011. RS 'A' occurred at 07:17:00.63 UT. The altitude of the charge centroid was estimated to be 5 km via LMA data. Because the LM method requires an initial prediction, the code was run with a wide range of values to verify the robustness of the method. Predictions varied over ±3 C for the charge magnitude and ±20 km N-S and E-W for the position (with the coordinate origin being the Langmuir Laboratory Annex). The LM method converged to a charge magnitude of -5.5 C and a centroid position of 3.3 km E-W and 12 km N-S for that RS. RS 'B' occurred at 07:20:05.9 UT. With an altitude of 4 km, the predictions were again varied: ±3 C, ±15 km N-S and E-W. Most runs converged to -27.5 C, 4 km E-W, and 10.9 km N-S. Finally, while results seem best for events right over the array, we also succeeded in locating more distant events. RS 'C' occurred at 02:42:46.8 UT. Assuming an altitude of 5 km and varying the predictions as with RS 'A', the results converged to -9.2 C, 35.5 km E-W, and 9 km N-S. All of these results are broadly consistent with the LMA and the NLDN. By continuing this type of analysis, we hope to learn more about how lightning channels propagate and how the charges in the cloud respond to the sudden change in
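The forward model behind this kind of inversion is simple enough to sketch: a point charge Q at height h over a perfectly conducting ground produces a vertical field change at each station given by the image-charge formula, and L-M fits (Q, x, y) to the station readings. The station layout, starting guess, and noiseless synthetic data below are illustrative assumptions, with SciPy's MINPACK L-M standing in for the authors' code:

```python
import numpy as np
from scipy.optimize import least_squares

K = 8.99e9  # Coulomb constant, N*m^2/C^2

def delta_ez(p, stations, h):
    """Vertical E-field change at ground stations from removing charge Q
    at (x, y, h), using the image-charge model over conducting ground."""
    Q, x, y = p
    r2 = (stations[:, 0] - x)**2 + (stations[:, 1] - y)**2
    return 2.0 * K * Q * h / (r2 + h**2)**1.5

# Nine synthetic ground stations within +/-5 km of the origin (units: m)
rng = np.random.default_rng(2)
stations = rng.uniform(-5e3, 5e3, size=(9, 2))
h = 5e3                                   # assumed centroid altitude
true = np.array([-5.5, 3.3e3, 1.2e4])     # Q [C], x (E-W), y (N-S) [m]
data = delta_ez(true, stations, h)

fit = least_squares(lambda p: delta_ez(p, stations, h) - data,
                    x0=[-3.0, 0.0, 5e3], method='lm')
```

With nine stations and three unknowns the problem is overdetermined, which is what lets the fit both recover the centroid and, with noisy data, assess its robustness by varying the initial prediction as described in the abstract.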
Manifold absolute pressure estimation using neural network with hybrid training algorithm
Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli
2017-01-01
In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP by using only the measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM), Bayesian Regularization (BR) and Particle Swarm Optimization (PSO) algorithms. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant of the hybrid algorithm, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions by showing a closer MAP estimation to the actual value. PMID:29190779
A Fast Deep Learning System Using GPU
2014-06-01
…widely used in data modeling until three decades later, when an efficient training algorithm for RBMs was invented by Hinton [3] and the computing power…be trained using most optimization algorithms, such as BP, conjugate gradient descent (CGD), or Levenberg-Marquardt (LM). The advantage of this
NASA Astrophysics Data System (ADS)
Lilichenko, Mark; Kelley, Anne Myers
2001-04-01
A novel approach is presented for finding the vibrational frequencies, Franck-Condon factors, and vibronic linewidths that best reproduce typical, poorly resolved electronic absorption (or fluorescence) spectra of molecules in condensed phases. While calculation of the theoretical spectrum from the molecular parameters is straightforward within the harmonic oscillator approximation for the vibrations, "inversion" of an experimental spectrum to deduce these parameters is not. Standard nonlinear least-squares fitting methods such as Levenberg-Marquardt are highly susceptible to becoming trapped in local minima in the error function unless very good initial guesses for the molecular parameters are made. Here we employ a genetic algorithm to force a broad search through parameter space and couple it with the Levenberg-Marquardt method to speed convergence to each local minimum. In addition, a neural network trained on a large set of synthetic spectra is used to provide an initial guess for the fitting parameters and to narrow the range searched by the genetic algorithm. The combined algorithm provides excellent fits to a variety of single-mode absorption spectra with experimentally negligible errors in the parameters. It converges more rapidly than the genetic algorithm alone and more reliably than the Levenberg-Marquardt method alone, and is robust in the presence of spectral noise. Extensions to multimode systems, and/or to include other spectroscopic data such as resonance Raman intensities, are straightforward.
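A minimal sketch of the GA-plus-LM coupling described above, assuming a generic two-Gaussian "spectrum" in place of the actual vibronic band-shape model (band positions, widths, bounds, and GA settings below are invented for illustration): the GA forces a broad search through parameter space, and an LM polish then speeds convergence to the nearest minimum.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)

def model(p, x):
    a1, m1, w1, a2, m2, w2 = p
    return (a1 * np.exp(-0.5 * ((x - m1) / w1)**2)
            + a2 * np.exp(-0.5 * ((x - m2) / w2)**2))

p_true = np.array([1.0, 3.0, 0.5, 0.6, 6.5, 0.8])
y = model(p_true, x)                      # synthetic "spectrum"

def sse(p):
    return np.sum((model(p, x) - y)**2)

# --- tiny GA: uniform init within bounds, tournament selection,
# blend crossover, Gaussian mutation, one-elite survival ---
lo = np.array([0.1, 0.0, 0.1, 0.1, 0.0, 0.1])
hi = np.array([2.0, 10.0, 2.0, 2.0, 10.0, 2.0])
pop = rng.uniform(lo, hi, size=(60, 6))
for gen in range(40):
    fit_vals = np.array([sse(p) for p in pop])
    new = [pop[np.argmin(fit_vals)]]       # elitism
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit_vals[i] < fit_vals[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if fit_vals[i] < fit_vals[j] else pop[j]
        u = rng.uniform(size=6)
        child = u * a + (1 - u) * b + rng.normal(0.0, 0.05, 6)
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)

best = pop[np.argmin([sse(p) for p in pop])]

# --- LM polish from the GA's best individual ---
res = least_squares(lambda p: model(p, x) - y, best, method='lm')
```

The GA alone rarely reaches machine-precision residuals; the LM step supplies the fast local convergence, which is the division of labor the abstract describes.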
Ma, Changxi; Hao, Wei; Pan, Fuquan; Xiang, Wang
2018-01-01
Route optimization of hazardous materials transportation is one of the basic steps in ensuring the safety of hazardous materials transportation. The optimization scheme may pose a safety risk if road screening is not completed before the distribution route is optimized. For the road screening problem of hazardous materials transportation, a screening algorithm is built based on a genetic algorithm and a Levenberg-Marquardt neural network (GA-LM-NN), analyzing 15 attributes of each road network section. A multi-objective robust optimization model with adjustable robustness is constructed for the hazardous materials transportation problem of a single distribution center to minimize transportation risk and time. A multi-objective genetic algorithm is designed to solve the problem according to the characteristics of the model. The algorithm uses an improved strategy to complete the selection operation, applies partially matched crossover and single ortho swap methods to complete the crossover and mutation operations, and employs an exclusive method to construct Pareto optimal solutions. Studies show that the sets of hazardous materials transportation roads can be found quickly through the proposed road screening algorithm based on GA-LM-NN, whereas distribution route Pareto solutions with different levels of robustness can be found rapidly through the proposed multi-objective robust optimization model and algorithm.
[Model and analysis of spectropolarimetric BRDF of painted target based on GA-LM method].
Chen, Chao; Zhao, Yong-Qiang; Luo, Li; Pan, Quan; Cheng, Yong-Mei; Wang, Kai
2010-03-01
Models based on microfacet theory were used to describe the spectropolarimetric BRDF (bidirectional reflectance distribution function) with experimental data. The spectropolarimetric BRDF values of targets were measured by comparison with a standard whiteboard, which was considered Lambertian with a uniform reflectance of up to 98% at an arbitrary angle of view. The relationships between the measured spectropolarimetric BRDF values and the angles of view, as well as wavelengths in the range of 400-720 nm, were then analyzed in detail. The initial value required by the LM optimization method was difficult to obtain and strongly affected the results. Therefore, an optimization approach combining a genetic algorithm (GA) and Levenberg-Marquardt (LM) was used to retrieve the parameters of the nonlinear models, with the initial values obtained using the GA. Simulated experiments were used to test the efficiency of the adopted optimization method and confirmed that it performs well and can retrieve the parameters of the nonlinear model efficiently. The correctness of the models was validated with real outdoor sampled data. The parameter retrieved from the DoP model is the refractive index of the measured targets. The refractive index of targets painted the same color but made of different materials was also obtained. The conclusion is that the refractive indices of these two targets are very close, and the slight difference can be attributed to differences in the condition of the painted surfaces, not the material of the targets.
NASA Astrophysics Data System (ADS)
Fajriani; Srigutomo, Wahyu; Pratomo, Prihandhanu M.
2017-04-01
The self-potential (SP) method is frequently used to identify subsurface structures based on their electrical properties. For fixed-geometry problems, the SP method is related to simple geometrical shapes of causative bodies such as a sphere, a cylinder, or a sheet. This approach is implemented to determine parameters such as shape, depth, polarization angle, and electric dipole moment. In this study, the technique was applied to the investigation of a fault, which is treated as resembling the shape of a sheet representing a dike or fault. The investigated fault is located at Pinggirsari village, Bandung regency, West Java, Indonesia. The observed SP anomalies, measured allegedly above the fault, were inverted to estimate all the fault parameters through an inverse modeling scheme using the Levenberg-Marquardt method. The inversion scheme was first tested on a synthetic model, where close agreement between the test parameters and the calculated parameters was achieved. Finally, the scheme was applied to invert the real observed SP anomalies. The results show that the fault was detected beneath the surface with electric dipole moment K = 41.5 mV, half-fault dimension a = 34 m, depth of the sheet's center h = 14.6 m, location of the fault's center xo = 478.25 m, and polarization angle to the horizontal plane θ = 334.52° in a clockwise direction.
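A sketch of such an inversion, using one common closed-form expression for the SP anomaly of an inclined sheet; the synthetic parameter values and starting guess below are illustrative, not the Pinggirsari results, and the formula is a textbook approximation rather than the authors' exact forward model.

```python
import numpy as np
from scipy.optimize import least_squares

def sp_sheet(p, x):
    """SP anomaly (mV) of an inclined sheet: coefficient K, half-width a (m),
    depth to centre h (m), centre position x0 (m), inclination alpha (rad)."""
    K, a, h, x0, alpha = p
    num = (x - x0 - a * np.cos(alpha))**2 + (h - a * np.sin(alpha))**2
    den = (x - x0 + a * np.cos(alpha))**2 + (h + a * np.sin(alpha))**2
    return K * np.log(num / den)

x = np.linspace(300.0, 650.0, 120)               # profile positions (m)
p_true = np.array([41.5, 10.0, 15.0, 478.0, np.deg2rad(40.0)])
v_obs = sp_sheet(p_true, x)                      # noise-free synthetic anomaly

# LM needs a starting model; begin with a deliberately rough guess.
p0 = np.array([30.0, 8.0, 10.0, 470.0, np.deg2rad(60.0)])
res = least_squares(lambda p: sp_sheet(p, x) - v_obs, p0, method='lm')
```

Testing on a synthetic model first, as the abstract does, is what makes the field inversion trustworthy: if the scheme cannot recover known parameters from clean data, it cannot be expected to work on real anomalies.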
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS), and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights for designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel full Newton-type algorithms for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percentage of relative error in estimating the trace and a lower reduced chi-squared value than the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion-weighted signals is orientation dependent when the signal-to-noise ratio (SNR) is low (
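The Hessian treatment that distinguishes these methods can be made concrete: Levenberg-Marquardt replaces the full Hessian by the damped Gauss-Newton approximation JᵀJ + λI. A minimal hand-rolled LM loop, applied to an invented exponential test problem (not diffusion tensor data), looks like this:

```python
import numpy as np

def lm_fit(r, J, x0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: r(x) residual vector, J(x) Jacobian.
    Solves (J^T J + lam*I) dx = -J^T r, shrinking lam on accepted steps
    (Gauss-Newton-like) and growing it on rejected ones (gradient-like)."""
    x = np.asarray(x0, float)
    cost = 0.5 * np.sum(r(x)**2)
    for _ in range(n_iter):
        Jx, rx = J(x), r(x)
        A = Jx.T @ Jx + lam * np.eye(len(x))    # damped GN Hessian approx.
        dx = np.linalg.solve(A, -Jx.T @ rx)
        new_cost = 0.5 * np.sum(r(x + dx)**2)
        if new_cost < cost:
            x, cost, lam = x + dx, new_cost, lam * 0.3
        else:
            lam *= 10.0
    return x

# Example: fit y = c1 * exp(c2 * t) to noise-free synthetic data.
t = np.linspace(0.0, 2.0, 30)
c_true = np.array([2.0, -1.3])
y = c_true[0] * np.exp(c_true[1] * t)

r = lambda c: c[0] * np.exp(c[1] * t) - y
J = lambda c: np.column_stack([np.exp(c[1] * t),
                               c[0] * t * np.exp(c[1] * t)])
c_fit = lm_fit(r, J, x0=[1.0, 0.0])
```

A full Newton-type method of the kind proposed in the abstract would add the second-derivative term Σᵢ rᵢ ∇²rᵢ that this approximation drops, which is exactly why the two families can differ in accuracy.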
NASA Astrophysics Data System (ADS)
Arabzadeh, Vida; Niaki, S. T. A.; Arabzadeh, Vahid
2017-10-01
One of the most important processes in the early stages of construction projects is estimating the cost involved. This process involves a wide range of uncertainties, which make it a challenging task. Because of these unknowns, relying on the experience of experts or looking for similar cases are the conventional approaches to cost estimation. The current study presents data-driven methods for cost estimation based on artificial neural network (ANN) and regression models. The learning algorithms of the ANN are Levenberg-Marquardt and Bayesian regularization. Moreover, the regression models are hybridized with a genetic algorithm to obtain better estimates of the coefficients. The methods are applied to a real case, where the input parameters of the models are assigned based on the key issues involved in a spherical tank construction. The results reveal that, while a high correlation between the estimated cost and the real cost exists, both ANNs perform better than the hybridized regression models. In addition, the ANN with the Levenberg-Marquardt learning algorithm (LMNN) obtains a better estimation than the ANN with the Bayesian regularization learning algorithm (BRNN). The correlation between real data and estimated values is over 90%, while the mean square error is around 0.4. The proposed LMNN model can be effective in reducing uncertainty and complexity in the early stages of a construction project.
Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan
2015-01-01
Kinetic analysis of biomolecular interactions is widely used to quantify binding kinetic constants, determining the amount of complex formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach to the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interaction processes. The binding affinity of the antibody to the antigen (or the receptor to the ligand) reflects the biological activities of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and dissociation rate of the receptor to the ligand are substantial parameters for the study of signal transmission between cells. Experimental data may yield complicated real-time curves that do not fit the kinetic model well. This paper presents an analysis approach to biomolecular interactions based on the Marquardt algorithm, which was implemented in a homemade bioanalyzer to perform nonlinear curve-fitting of the association and dissociation processes of the receptor to the ligand. Compared with results from the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial value, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants ka and kd and the affinity parameters KA and KD for the biomolecular interaction were experimentally determined to be 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹, and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of HBsAg solution at a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly using the data obtained from the curve-fitting results. PMID:26147997
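A hedged sketch of this kind of kinetic curve-fitting, assuming a standard 1:1 Langmuir binding model and illustrative magnitudes for the rate constants, injection time, and concentration (not the paper's data); scipy's curve_fit dispatches to Levenberg-Marquardt when no bounds are given.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 16e-9          # analyte concentration, illustrative scale
t_off = 500.0      # end of injection (s), assumed

def spr(t, ka, kd, Rmax):
    """1:1 Langmuir model: association phase up to t_off, then dissociation."""
    kobs = ka * C + kd
    Req = Rmax * ka * C / kobs
    R_end = Req * (1.0 - np.exp(-kobs * t_off))
    assoc = Req * (1.0 - np.exp(-kobs * np.minimum(t, t_off)))
    return np.where(t < t_off, assoc, R_end * np.exp(-kd * (t - t_off)))

t = np.linspace(0.0, 2000.0, 400)
true = (7e5, 7.3e-4, 100.0)            # ka, kd, Rmax; illustrative magnitudes
R = spr(t, *true)                      # noise-free synthetic sensorgram

# With no bounds, curve_fit uses Levenberg-Marquardt; a sensible p0
# reduces the dependence on the initial value noted in the abstract.
popt, _ = curve_fit(spr, t, R, p0=(1e5, 1e-3, 80.0))
```

Note that fitting the association phase alone cannot separate ka, kd, and Rmax (a single exponential only fixes a rate and an amplitude); including the dissociation phase, as above, is what makes all three identifiable.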
Metaheuristic and Machine Learning Models for TFE-731-2, PW4056, and JT8D-9 Cruise Thrust
NASA Astrophysics Data System (ADS)
Baklacioglu, Tolga
2017-08-01
The requirement for an accurate engine thrust model is of major importance in airline fuel-saving programs, assessment of the environmental effects of fuel consumption, emissions-reduction studies, and air traffic management applications. In this study, utilizing engine manufacturers' real data, a metaheuristic model based on genetic algorithms (GAs) and a machine learning model based on neural networks (NNs) trained with the Levenberg-Marquardt (LM), delta-bar-delta (DBD), and conjugate gradient (CG) algorithms were developed to incorporate the effect of both flight altitude and Mach number in the estimation of thrust. For the GA model, the impact of population size on the model's accuracy and the effect of the number of data points on the model coefficients were also analyzed. For the NN model, the optimum topology was sought for one- and two-hidden-layer networks. Predicted thrust values presented close agreement with real thrust data for both models, among which the LM-trained NNs gave the best accuracies.
NASA Astrophysics Data System (ADS)
Pelicano, Christian Mark; Rapadas, Nick; Cagatan, Gerard; Magdaluyo, Eduardo
2017-12-01
Herein, the crystallite size and band gap energy of zinc oxide (ZnO) quantum dots were predicted using an artificial neural network (ANN). Three input factors, namely reagent ratio, growth time, and growth temperature, were examined with respect to crystallite size and band gap energy as response factors. The results generated by the neural network model were then compared with the experimental results. Experimental crystallite size and band gap energy of the ZnO quantum dots were measured from TEM images and absorbance spectra, respectively. The Levenberg-Marquardt (LM) algorithm was used as the learning algorithm for the ANN model. The performance of the ANN model was then assessed through the mean square error (MSE) and regression values. Based on the results, the ANN modelling results are in good agreement with the experimental data.
Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi
2018-01-01
When the meshless method is used to establish a mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by the tissues as the problem domain and the boundary of the domain as their surface. Nodes should be distributed both in the problem domain and on its boundaries. Under an external force, the displacement of each node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method is too time-consuming for the real-time simulation of tissue deformation required in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacements at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can then be obtained quickly from this relationship. The analysis and discussion show that the model equations improved with Marquardt's algorithm not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry
2018-06-01
This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery, with the diffusion phenomenon related to this modeling represented using a fractional-order method. The battery model is thus reformulated into a transfer function that can be identified with the Levenberg-Marquardt algorithm. An initialization method is proposed that takes into account previously acquired information about the static and dynamic system behavior, so as to ensure the algorithm's convergence to the physical parameters. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using the Monte Carlo method.
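A rough sketch of such an identification, assuming a simplified Randles-type model with a constant-phase element of fractional exponent α (circuit values and frequency range below are invented, not battery data). Real and imaginary parts of the complex impedance residual are stacked so that a real-valued LM solver can be used.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(p, w):
    """Simplified Randles-type impedance: series resistance Rs plus a
    charge-transfer resistance Rct in parallel with a CPE of exponent alpha."""
    Rs, Rct, Q, alpha = p
    return Rs + Rct / (1.0 + Rct * Q * (1j * w)**alpha)

def residuals(p, w, z_obs):
    d = z_model(p, w) - z_obs                   # complex residual ...
    return np.concatenate([d.real, d.imag])     # ... stacked for a real LM solver

w = np.logspace(-2, 4, 60)                      # angular frequency (rad/s)
p_true = np.array([0.05, 0.8, 2.0, 0.85])       # illustrative, not cell data
z_obs = z_model(p_true, w)

res = least_squares(residuals, x0=[0.1, 0.5, 1.0, 0.9],
                    args=(w, z_obs), method='lm')
```

The abstract's point is precisely that the starting vector x0 matters: an initialization built from the static and dynamic behavior of the cell keeps LM in the basin of the physical parameters.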
Shahsavari, Shadab; Rezaie Shirmard, Leila; Amini, Mohsen; Abedin Dokoosh, Farid
2017-01-01
The formulation of a nanoparticulate Fingolimod delivery system based on biodegradable poly(3-hydroxybutyrate-co-3-hydroxyvalerate) was optimized using artificial neural networks (ANNs). The concentrations of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) and PVA and the amount of Fingolimod were considered as input values, and the particle size, polydispersity index, loading capacity, and entrapment efficiency as output data in the experimental design study. An in vitro release study was carried out for the best formulation according to the statistical analysis. ANNs were employed to generate the best model for determining the relationships between the various values. In order to specify the model with the best accuracy and proficiency for in vitro release, multilayer perceptrons with different training algorithms were examined. Three training algorithms, Levenberg-Marquardt (LM), gradient descent, and Bayesian regularization, were employed for training the ANN models. The predictive ability of each training algorithm was found to be in the order LM > gradient descent > Bayesian regularization. The optimum formulation was achieved by the LM training function with 15 hidden layers and 20 neurons; the transfer functions of the hidden layers and the output layer were tansig and purelin, respectively. The optimization process was developed by minimizing the error between the predicted and observed values of the training algorithm (about 0.0341). Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration
NASA Astrophysics Data System (ADS)
Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut
2017-04-01
Numerical modelling requires calibration to predict future states. A standard approach is inverse calibration, in which multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. PA-DDS is a recently developed multi-objective optimization algorithm that combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance was evaluated using observed and simulated soil moisture at two soil layers in Southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors, while the VGM parameters were similar to those of previous studies. Both methods are equally computationally efficient, though a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and can be a potential tool for the calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, more soil layers, and measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
Hernández-Melchor, Dulce Jazmín; López-Pérez, Pablo A; Carrillo-Vargas, Sergio; Alberto-Murrieta, Alvaro; González-Gómez, Evanibaldo; Camacho-Pérez, Beni
2017-09-06
This work presents an experimental-theoretical strategy for a batch process for lead removal by a photosynthetic consortium composed of algae and bacteria. The consortium, isolated from a wastewater treatment plant in Tecamac (Mexico), was used as the inoculum in bubble-column photobioreactors. It was used to evaluate the kinetics of lead removal at different initial metal concentrations (15, 30, 40, 50, and 60 mg L⁻¹), carried out in batch culture with a hydraulic residence time of 14 days using Bold's Basal mineral medium. The photobioreactor was operated under the following conditions: aeration of 0.5 vvm, a photon flux density of 80 μmol m⁻² s⁻¹, and a 12:12 light/dark photoperiod. After determining the best growth kinetics of biomass and metal removal, the cultures were tested under different wastewater-culture medium ratios (30 and 60%). Additionally, biomass growth (X), nitrogen consumption (N), chemical oxygen demand (COD), and metal removal (Pb) were quantified. Lead removal of 97.4% was achieved when the initial lead concentration was up to 50 mg L⁻¹ using 60% wastewater. Furthermore, an unstructured mathematical model was developed to simulate COD, X, N, and lead removal, and a comparison between the Levenberg-Marquardt (L-M) optimization approach and genetic algorithms (GA) was carried out for parameter estimation. It was concluded that GA performs slightly better and possesses better convergence and computational time than L-M. Hence, the proposed method might be applied for parameter estimation in biological models and used for process monitoring and control.
A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field
Gao, Xiang; Yan, Shenggang; Li, Bin
2017-01-01
Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal, and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the localization of moving objects with alternating magnetic fields and localization with a static magnetic field is rarely studied. A novel method of target localization based on coherent demodulation is proposed in this paper, transforming the problem of localizing moving objects with an alternating magnetic field into localization with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm is applied to calculate the position of the target from magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:28430153
Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang
2016-03-01
Analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded species. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparisons and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for a water-ethanol solution and an ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM obtain, for both absorbance prediction and concentration prediction, a smaller root mean square error of prediction than CLS. They can also greatly enhance the accuracy of the estimated pure-component spectra. However, from the viewpoint of concentration prediction, the Wilcoxon signed rank test shows no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.
NASA Astrophysics Data System (ADS)
Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza
2017-07-01
In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, were used for the determination of antihistamine-decongestant contents. In the first step, one type of network (feed-forward back-propagation) from the artificial neural network family was employed with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), and their performance was evaluated; the LM algorithm performed better than GDX. In the second step, a radial basis network was utilized and the results were compared with those of the previous network. In the last step, another intelligent method, the least squares support vector machine, was proposed to construct the antihistamine-decongestant prediction model, and the results were compared with the two aforementioned networks. The values of the statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), and also mean recovery (%) and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them.
Marchand, A J; Hitti, E; Monge, F; Saint-Jalmes, H; Guillin, R; Duvauferrier, R; Gambarota, G
2014-11-01
To assess the feasibility of measuring diffusion and perfusion fraction in vertebral bone marrow using the intravoxel incoherent motion (IVIM) approach and to compare two fitting methods, i.e., the non-negative least squares (NNLS) algorithm and the more commonly used Levenberg-Marquardt (LM) non-linear least squares algorithm, for the analysis of IVIM data. MRI experiments were performed on fifteen healthy volunteers, with a diffusion-weighted echo-planar imaging (EPI) sequence at five different b-values (0, 50, 100, 200, 600 s/mm²), in combination with an STIR module to suppress the lipid signal. Diffusion signal decays in the first lumbar vertebra (L1) were fitted to a bi-exponential function using the LM algorithm and further analyzed with the NNLS algorithm to calculate the values of the apparent diffusion coefficient (ADC), pseudo-diffusion coefficient (D*) and perfusion fraction. The NNLS analysis revealed two diffusion components in only seven out of fifteen volunteers, with ADC = 0.60±0.09 ×10⁻³ mm²/s, D* = 28±9 ×10⁻³ mm²/s and perfusion fraction = 14%±6%. The values obtained by the LM bi-exponential fit were: ADC = 0.45±0.27 ×10⁻³ mm²/s, D* = 63±145 ×10⁻³ mm²/s and perfusion fraction = 27%±17%. Furthermore, the LM algorithm yielded values of perfusion fraction in cases where the decay was not bi-exponential, as assessed by NNLS analysis. The IVIM approach allows for measuring diffusion and perfusion fraction in vertebral bone marrow; its reliability can be improved by using the NNLS, which identifies the diffusion decays that display a bi-exponential behavior. Copyright © 2014 Elsevier Inc. All rights reserved.
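The two fitting approaches can be sketched side by side with the abstract's five b-values and illustrative IVIM parameters close to the reported means; this is a generic sketch, not the study's processing pipeline, and the D* grid for NNLS is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit, nnls

b = np.array([0.0, 50.0, 100.0, 200.0, 600.0])   # b-values (s/mm^2)

def ivim(b, s0, f, D, Dstar):
    """Bi-exponential IVIM signal: tissue diffusion D plus pseudo-diffusion
    Dstar weighted by the perfusion fraction f."""
    return s0 * ((1 - f) * np.exp(-b * D) + f * np.exp(-b * Dstar))

true = (1.0, 0.14, 0.60e-3, 28e-3)               # near the abstract's means
s = ivim(b, *true)                               # noise-free synthetic decay

# (a) LM bi-exponential fit (curve_fit without bounds uses LM)
popt, _ = curve_fit(ivim, b, s, p0=(1.0, 0.2, 1e-3, 20e-3))

# (b) NNLS over a fixed dictionary of decay rates: s = A @ x with x >= 0;
# a genuinely bi-exponential decay shows two separated peaks in x.
Dgrid = np.logspace(-4, -0.5, 80)
A = np.exp(-np.outer(b, Dgrid))
x, _ = nnls(A, s)
```

The NNLS spectrum x makes the paper's point visible: when the decay is mono-exponential, only one cluster of nonzero amplitudes appears, flagging cases where a forced bi-exponential LM fit would return meaningless D* and f.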
On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.
Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long
2017-10-03
For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem involving only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth uses only the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm and may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm and the Levenberg-Marquardt method is more effective than the combination of the VP algorithm and the Gauss-Newton method.
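The variable projection idea (eliminate the linear coefficients in closed form for each trial of the nonlinear parameters, then minimize the reduced residual) can be sketched on a two-exponential model. This toy version lets LM differentiate the reduced residual numerically rather than using any of the analytic Jacobians compared in the paper; the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 50)
lam_true = np.array([0.7, 3.0])            # nonlinear parameters (rates)
c_true = np.array([1.5, 0.8])              # linear parameters (amplitudes)
y = np.exp(-np.outer(t, lam_true)) @ c_true

def projected_residual(lam):
    """VP reduction: for fixed nonlinear lam, the optimal linear c is a
    linear least squares solve, so the residual depends on lam only."""
    Phi = np.exp(-np.outer(t, lam))
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

# LM on the reduced (projected) problem; only 2 nonlinear unknowns remain.
res = least_squares(projected_residual, x0=[0.5, 2.0], method='lm')
lam_fit = np.sort(res.x)

# Recover the linear amplitudes once, from the converged rates.
Phi = np.exp(-np.outer(t, lam_fit))
c_fit, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

Eliminating c halves the dimension of the nonlinear search here, which is the practical payoff of separation that the paper quantifies across Jacobian variants.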
NASA Astrophysics Data System (ADS)
Kobayashi, Kiyoshi; Suzuki, Tohru S.
2018-03-01
A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial impedance spectra presented by a resistor, an inductor, a resistor connected in parallel with a capacitor, and a resistor connected in parallel with an inductor. The adequacy of the model is determined by a simple artificial-intelligence function applied to the output of the Levenberg-Marquardt module. Through iterative model modification, the program finds an adequate equivalent-circuit model without any user input to the model.
Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D
2008-05-01
Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters, increasing the computation speed up to four times per iteration in most under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form, and it is proven to be equivalent not only analytically but also numerically. Equivalent alternative forms for other minimization methods, such as Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, the GLS reconstruction method reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of the noise of the detector photomultiplier tubes has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. These alternative forms become effective when the number of imaging parameters exceeds the number of measurements by a factor greater than 2.
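The measurement-space alternative form rests on a push-through/Woodbury identity: (L + JᵀWJ)⁻¹JᵀW = L⁻¹Jᵀ(W⁻¹ + JL⁻¹Jᵀ)⁻¹, so the solve can be done on a matrix sized by the measurements rather than the parameters. A numerical check of the equivalence, with arbitrary illustrative weight matrices rather than the paper's actual covariances:

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_par = 8, 40                       # few measurements, many parameters

J = rng.normal(size=(n_meas, n_par))        # Jacobian (sensitivity matrix)
W = np.diag(rng.uniform(0.5, 2.0, n_meas))  # data-weight matrix
L = 0.1 * np.eye(n_par)                     # parameter-weight (regularization)
g = np.ones(n_meas)                         # some data-misfit vector

# Standard parameter-space form: solve an n_par x n_par system.
A_big = J.T @ W @ J + L
step_big = np.linalg.solve(A_big, J.T @ W @ g)

# Woodbury-equivalent measurement-space form: solve n_meas x n_meas instead.
Linv = np.linalg.inv(L)
A_small = np.linalg.inv(W) + J @ Linv @ J.T
step_small = Linv @ J.T @ np.linalg.solve(A_small, g)
```

For an under-determined problem like the one above (40 parameters, 8 measurements) the small system is what makes each iteration cheap, exactly the speed-up the abstract reports.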
NASA Astrophysics Data System (ADS)
Rajora, M.; Zou, P.; Xu, W.; Jin, L.; Chen, W.; Liang, S. Y.
2017-12-01
With the rapidly changing demands of the manufacturing market, intelligent techniques are being used to solve engineering problems due to their ability to handle nonlinear complex problems. For example, the conventional production of stator cores relies upon experienced engineers to make an initial plan on the number of compensation sheets to be added to achieve uniform pressure distribution throughout the laminations. Additionally, these engineers must use their experience to revise the initial plans based upon the measurements made during the production of the stator core. However, this method yields inconsistent results, as humans are incapable of storing and analysing large amounts of data. In this article, first, a Neural Network (NN), trained using a hybrid Levenberg-Marquardt (LM) - Genetic Algorithm (GA), is developed to assist the engineers with the decision-making process. Next, the trained NN is used as a fitness function in an optimization algorithm to find the optimal values of the initial compensation sheet plan with the aim of minimizing the required revisions during the production of the stator core.
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms
Vázquez, Roberto A.
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
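As a hedged illustration of the weight-evolution idea described above, the following sketch applies basic PSO to the synaptic weights of a fixed 2-2-1 network on the XOR problem. The paper also evolves architecture and transfer functions; here the architecture is fixed, and the swarm size, inertia and acceleration coefficients are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])        # XOR targets

def mse(w):
    # 2-2-1 network with tanh hidden layer; w packs all 9 weights/biases
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((out - y) ** 2)

# Basic PSO over the weight vector (illustrative hyperparameters)
n_part, dim, iters = 30, 9, 300
pos = rng.uniform(-1, 1, (n_part, dim))
vel = np.zeros((n_part, dim))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(mse(gbest))   # final training MSE; small values mean XOR was learned
```

A fitness below 0.25 already beats the best constant predictor, so the swarm has genuinely fitted the nonlinear mapping.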
NASA Astrophysics Data System (ADS)
Osmanoglu, B.; Ozkan, C.; Sunar, F.
2013-10-01
After air strikes on July 14 and 15, 2006, the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut, and the slick covered about 170 km of coastline, threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately, resulting in 12 000 to 15 000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of the atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available. Furthermore, wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, effect of wind and dampening value. The semi-automatic algorithm is based on supervised classification. An Artificial Neural Network Multilayer Perceptron (ANN MLP) is used as the classifier, since it is more flexible and efficient than the conventional maximum likelihood classifier for multisource and multi-temporal data. Levenberg-Marquardt (LM) is chosen as the learning algorithm for the ANN MLP. Training and test data for supervised classification are composed from the textural information extracted from the SAR images. This approach is semi-automatic because tuning the classifier parameters and composing the training data require human interaction. We point out the similarities and differences between the two methods and their results, as well as underlining their advantages and disadvantages. Due to the lack of ground truth data, we compare the obtained results to each other, as well as to other published oil slick area assessments.
NASA Astrophysics Data System (ADS)
Khamatnurova, M. Yu.; Gribanov, K. G.; Zakharov, V. I.; Rokotyan, N. V.; Imasu, R.
2017-11-01
An algorithm for retrieving the atmospheric methane distribution from IASI spectra has been developed. The feasibility of the Levenberg-Marquardt method for retrieving the atmospheric methane total column amount from spectra measured by IASI/METOP, modified for the case where a priori covariance matrices of methane vertical profiles are lacking, is studied in this paper. The method and algorithm were implemented in a software package together with iterative estimation of a posteriori covariance matrices and averaging kernels for each individual retrieval. This allows retrieval quality selection using the properties of both types of matrices. Methane (XCH4) retrieval by the Levenberg-Marquardt method from IASI/METOP spectra is presented in this work. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) were taken as the initial guess. Surface temperature, air temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval. Data retrieved from ground-based measurements at the Ural Atmospheric Station and the L2/IASI standard product were used for verification of the method and of the results of methane retrieval from IASI/METOP spectra.
NASA Astrophysics Data System (ADS)
Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai
2016-09-01
The time-consuming experimental method for handling qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling qualities research, the model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field to guide astronauts' operations and evaluate handling qualities more effectively. Therefore, this paper establishes MPC-IS for manual-controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system was obtained by replacing the inverse model of MPC-IS with an artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm, and finally determined by the Levenberg-Marquardt algorithm. In order to validate MPC-IS and ANN-IS, manual-controlled RVD experiments were carried out on the simulator. The comparisons between simulation results and experimental data demonstrated the validity of the two systems and the high computational efficiency of ANN-IS.
Using L-M BP Algorithm to Forecast the 305-Day Production of First-Breed Dairy
NASA Astrophysics Data System (ADS)
Wei, Xiaoli; Qi, Guoqiang; Shen, Weizheng; Jian, Sun
Aiming at the shortcomings of the conventional BP algorithm, a BP neural network improved by the L-M algorithm is put forward. On the basis of this network, a prediction model for 305-day milk production was set up. Traditional methods must spend at least 305 days gathering these data, but this model can forecast a first-breed dairy cow's 305-day milk production 215 days ahead. The validity of the improved BP neural network predictive model was validated through experiments.
Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli
2015-01-01
In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained, which are optimized by an arithmetic means method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, this proposed method improves the accuracy of diameter estimation of trees significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that this method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
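The two-stage idea above (an algebraic circle fit refined by Levenberg-Marquardt) can be sketched as follows. This uses a Kasa-style linear fit in Cartesian form rather than the paper's polar-form algorithm, and synthetic points in place of laser data; all values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
# Synthetic trunk cross-section: noisy points on an arc (scanner sees one side)
cx, cy, r = 1.0, 2.0, 0.35                      # "true" center and radius (m)
t = rng.uniform(0, np.pi, 200)
pts = np.c_[cx + r * np.cos(t), cy + r * np.sin(t)] + rng.normal(0, 0.005, (200, 2))

# Step 1: algebraic (Kasa) fit -- a linear least-squares problem
# x^2 + y^2 = 2*u*x + 2*v*y + c, with c = r^2 - u^2 - v^2
A = np.c_[2 * pts, np.ones(len(pts))]
b = (pts ** 2).sum(axis=1)
(u, v, c), *_ = np.linalg.lstsq(A, b, rcond=None)
r0 = np.sqrt(c + u**2 + v**2)

# Step 2: Levenberg-Marquardt refinement of the geometric residuals
def resid(p):
    return np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1]) - p[2]

sol = least_squares(resid, x0=[u, v, r0], method='lm')
print(sol.x)   # refined center (x, y) and radius; DBH = 2 * radius
```

The algebraic fit supplies a cheap, reliable starting point, which is what keeps the LM refinement fast and out of poor local minima.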
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
NASA Astrophysics Data System (ADS)
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm tries to get the best spectral simulation with minimum fitness error toward the target spectrum, correlated color temperature (CCT) the same as the target spectrum, high color rendering index (CRI), and luminous flux as required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of M-GEO evolutionary algorithm with the Levenberg-Marquardt conventional deterministic algorithm is also presented.
Implementation of neural network for color properties of polycarbonates
NASA Astrophysics Data System (ADS)
Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.
2014-05-01
In the present paper, the applicability of artificial neural networks (ANN) is investigated for color properties of plastics. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18 and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM) in the feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives and pigments, while three tristimulus color values L*, a* and b* were used as the output layer. Statistical analysis in terms of root-mean-squared (RMS) error, absolute fraction of variance (R squared), as well as mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons on the hidden layer in the feed-forward back-propagation ANN model has shown the best results in the present study. The degree of accuracy of the ANN model in reduction of errors is proven acceptable in all statistical analyses and shown in the results. However, it was concluded that ANN provides a feasible method for error reduction in specific color tristimulus values.
Fast algorithm for spectral processing with application to on-line welding quality assurance
NASA Astrophysics Data System (ADS)
Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.
2006-10-01
A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
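The LPO operator itself is specific to the paper, but the general idea of sub-pixel peak localization can be illustrated with the common three-point parabolic estimator. This is a hedged sketch on a synthetic spectral line, not the authors' algorithm:

```python
import numpy as np

def subpixel_peak(y):
    """Sub-pixel peak location via three-point parabolic interpolation."""
    i = int(np.argmax(y))
    if i == 0 or i == len(y) - 1:
        return float(i)                  # peak at the edge: no interpolation
    ym, y0, yp = y[i - 1], y[i], y[i + 1]
    # Vertex of the parabola through the three samples around the maximum
    return i + 0.5 * (ym - yp) / (ym - 2 * y0 + yp)

# Gaussian emission line sampled on an integer pixel grid, true centre at 10.3
x = np.arange(21)
line = np.exp(-0.5 * ((x - 10.3) / 1.5) ** 2)
print(subpixel_peak(line))   # close to 10.3, well below one-pixel resolution
```

Estimators of this kind recover the central wavelength far faster than iterative Voigt fitting, which is what makes real-time electron-temperature monitoring feasible.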
Simulation of river stage using artificial neural network and MIKE 11 hydrodynamic model
NASA Astrophysics Data System (ADS)
Panda, Rabindra K.; Pramanik, Niranjan; Bala, Biplab
2010-06-01
Simulation of water levels at different sections of a river using physically based flood routing models is quite cumbersome, because it requires many types of data, such as hydrologic time series, river geometry, hydraulics of existing control structures and channel roughness coefficients. Normally, in developing countries like India it is not easy to collect these data because of poor monitoring and record keeping. Therefore, an artificial neural network (ANN) technique is used as an effective alternative in hydrologic simulation studies. The present study aims at comparing the performance of the ANN technique with a widely used physically based hydrodynamic model in the MIKE 11 environment. The MIKE 11 hydrodynamic model was calibrated and validated for the monsoon periods (June-September) of the years 2006 and 2001, respectively. A feed-forward neural network architecture with the Levenberg-Marquardt (LM) back-propagation training algorithm was used to train the neural network model using hourly water level data for the period June-September 2006. The trained ANN model was tested using data for the same period of the year 2001. Water levels simulated by MIKE 11HD were compared with the corresponding water levels predicted by the ANN model. The results obtained from the ANN model were found to be much better than those of MIKE 11HD, as indicated by the values of the goodness-of-fit indices used in the study. The Nash-Sutcliffe index (E) and root mean square error (RMSE) obtained for the ANN model were 0.8419 and 0.8939 m, respectively, during model testing, whereas for MIKE 11HD the values of E and RMSE were 0.7836 and 1.00 m, respectively, during model validation. The difference between the observed and simulated peak water levels obtained from the ANN model was found to be much lower than that of MIKE 11HD. The study reveals that the use of the Levenberg-Marquardt algorithm with eight hidden neurons in the hidden layer
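The two goodness-of-fit indices used in the study are standard and can be sketched directly; the stage values below are illustrative, not the study's data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency E: 1 is a perfect fit, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error in the units of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Illustrative hourly river stage values (m)
obs = [2.1, 2.4, 3.0, 3.8, 3.5, 2.9]
sim = [2.0, 2.5, 3.1, 3.6, 3.4, 3.0]
print(nash_sutcliffe(obs, sim), rmse(obs, sim))
```

E compares the model against a mean-value predictor, so E = 0.84 (ANN) vs E = 0.78 (MIKE 11HD) is a meaningful gap even though both beat the mean.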
The fatigue life prediction of aluminium alloy using genetic algorithm and neural network
NASA Astrophysics Data System (ADS)
Susmikanti, Mike
2013-09-01
The fatigue life behavior of industrial materials is very important. In many cases fatigue cannot be avoided; however, there are many ways to control its behavior. Many investigations of the fatigue life phenomena of alloys have been done, but they involve high cost and time-consuming computation. This paper reports modeling and simulation approaches to predict the fatigue life behavior of aluminum alloys and resolves some computational problems. First, a simulation using a genetic algorithm was utilized to optimize the load to obtain the stress values. These results can be used to provide the N-cycle fatigue life of the material. Furthermore, the experimental data were applied as input data for the neural network learning, while sample data were used for testing against the training data. Finally, the multilayer perceptron algorithm is applied to predict whether the given data sets accord with the fatigue life of the alloy. To achieve rapid convergence, the Levenberg-Marquardt algorithm was also employed. The simulation results show that the fatigue behavior of aluminum under pressure can be predicted. In addition, the implementation of neural networks successfully identified a model for material fatigue life.
Zheng, Zi-Yi; Guo, Xiao-Na; Zhu, Ke-Xue; Peng, Wei; Zhou, Hui-Ming
2017-07-15
Methoxy-p-benzoquinone (MBQ) and 2,6-dimethoxy-p-benzoquinone (DMBQ) are two potential anticancer compounds in fermented wheat germ. In the present study, modeling and optimization of added macronutrients, microelements and vitamins for producing MBQ and DMBQ were investigated using an artificial neural network (ANN) combined with a genetic algorithm (GA). A configuration of a 16-11-1 ANN model with the Levenberg-Marquardt training algorithm was applied for modeling the complicated nonlinear interactions among 16 nutrients in the fermentation process. Under the guidance of the optimized scheme, the total content of MBQ and DMBQ was improved by 117% compared with that in the control group. Further, by evaluating the relative importance of each nutrient in terms of the two benzoquinones' yield, macronutrients and microelements were found to have a greater influence than most of the vitamins. It was also observed that a number of interactions between nutrients affected the yield of MBQ and DMBQ remarkably. Copyright © 2017 Elsevier Ltd. All rights reserved.
ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm
NASA Astrophysics Data System (ADS)
Kora, Padmavathi; Sri Rama Krishna, K.
2016-12-01
Atrial fibrillation (AF) is a type of heart abnormality: during AF, the electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to abnormalities in the heart. This paper consists of three major steps for the detection of heart diseases: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality. Most ECG detection systems depend on time-domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as the features of the ECG signal. These features are optimized using the Bat algorithm. The Levenberg-Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier can be improved with the optimized features.
An Improved Calibration Method for a Rotating 2D LIDAR System.
Zeng, Yadan; Yu, Heng; Dai, Houde; Song, Shuang; Lin, Mingqiang; Sun, Bo; Jiang, Wei; Meng, Max Q-H
2018-02-07
This paper presents an improved calibration method for a rotating two-dimensional light detection and ranging (R2D-LIDAR) system, which can obtain a 3D scanning map of the surroundings. The proposed R2D-LIDAR system, composed of a 2D LIDAR and a rotating unit, is pervasively used in the field of robotics owing to its low cost and dense scanning data. Nevertheless, the R2D-LIDAR system must be calibrated before building the geometric model because there are assembly deviation and abrasion between the 2D LIDAR and the rotating unit. Hence, the calibration procedures should address both the adjustment between the two devices and the bias of the 2D LIDAR itself. The main purpose of this work is to resolve the 2D LIDAR bias issue with a flat plane based on the Levenberg-Marquardt (LM) algorithm. Experimental results for the calibration of the R2D-LIDAR system prove the reliability of this strategy to accurately estimate sensor offsets, with errors in the range from -15 mm to 15 mm for the captured scans.
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing manner based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes a deep quantum entanglement. Also, a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), is proposed based on the CQN. CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide a memory function for processing time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series predictions. Two application studies are presented in this paper: chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Applications of Monte Carlo method to nonlinear regression of rheological data
NASA Astrophysics Data System (ADS)
Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo
2018-02-01
In rheological studies, one often needs to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on a logarithmic scale and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. Gradient-based methods such as LM are apt to be caught in local minima, which give unphysical parameter values whenever the initial guess is far from the global optimum. Although this problem can be addressed by simulated annealing (SA), this Monte Carlo (MC) method needs adjustable parameters, which are often determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective values of the parameters of the most complicated rheological models, such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and zero-shear viscosity as a function of concentration and molecular weight.
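A minimal sketch of annealing-style random search in log-parameter space, fitting a Carreau-Yasuda curve eta(g) = eta0 * (1 + (lambda*g)^a)^((n-1)/a) (with eta_inf taken as 0) to synthetic viscosity data. The greedy acceptance rule, step schedule and all numeric values are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(3)

def carreau_yasuda(gdot, eta0, lam, a, n):
    # Steady shear viscosity; eta_inf is taken as 0 for simplicity
    return eta0 * (1.0 + (lam * gdot) ** a) ** ((n - 1.0) / a)

# Synthetic "measured" data, log-spaced in shear rate as in rheometry
true_p = np.array([1e3, 10.0, 2.0, 0.4])        # eta0, lambda, a, n
gdot = np.logspace(-3, 3, 40)
eta = carreau_yasuda(gdot, *true_p)

def log_misfit(p):
    # Work in log viscosity, matching the logarithmic spread of the data
    return np.mean((np.log(carreau_yasuda(gdot, *p)) - np.log(eta)) ** 2)

# Simplified annealing: random log-space steps with a shrinking step size,
# accepting only improvements (a zero-temperature limit of SA)
logp = np.log(np.array([1e2, 1.0, 1.0, 0.6]))   # deliberately poor guess
best = log_misfit(np.exp(logp))
for k in range(20000):
    step = 0.5 * (1.0 - k / 20000)              # annealed step size
    trial = logp + rng.normal(0, step, 4)
    f = log_misfit(np.exp(trial))
    if f < best:
        logp, best = trial, f
print(np.exp(logp), best)
```

Sampling in log space keeps all parameters positive automatically and makes a single step size meaningful across parameters spanning several decades.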
Cota-Ruiz, Juan; Rosiles, Jose-Gerardo; Sifuentes, Ernesto; Rivas-Perea, Pablo
2012-01-01
This research presents a distributed and formula-based bilateration algorithm that can be used to provide an initial set of locations. In this scheme, each node uses distance estimates to anchors to solve a set of circle-circle intersection (CCI) problems through a purely geometric formulation. The resulting CCIs are processed to pick those that cluster together, which are then averaged to produce an initial node location. The algorithm is compared in terms of accuracy and computational complexity with a least-squares localization algorithm based on the Levenberg-Marquardt methodology. Results on accuracy vs. computational performance show that the bilateration algorithm is competitive with well-known optimized localization algorithms.
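The core geometric step, intersecting the two range circles of a pair of anchors, can be sketched as follows (a standard closed-form construction; the coordinates and ranges are illustrative):

```python
import math

def circle_intersections(x0, y0, r0, x1, y1, r1):
    """Intersection points of two circles; [] if disjoint or nested."""
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []
    a = (r0**2 - r1**2 + d**2) / (2 * d)        # distance to the chord midpoint
    h = math.sqrt(max(r0**2 - a**2, 0.0))       # half-chord length
    mx, my = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    ox, oy = h * (y1 - y0) / d, h * (x1 - x0) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

# Node with range 5 to anchors at (0, 0) and (6, 0): candidates (3, 4), (3, -4)
pts = circle_intersections(0, 0, 5.0, 6, 0, 5.0)
print(pts)
```

Each anchor pair yields up to two candidates; clustering the candidates from several pairs and averaging the dominant cluster, as in the paper, resolves the two-fold ambiguity.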
Power plant fault detection using artificial neural network
NASA Astrophysics Data System (ADS)
Thanakodi, Suresh; Nazar, Nazatul Shiema Moh; Joini, Nur Fazriana; Hidzir, Hidzrin Dayana Mohd; Awira, Mohammad Zulfikar Khairul
2018-02-01
Faults in power plants commonly occur due to various factors that affect the system, causing outages. There are many types of faults in power plants, such as single line to ground faults, double line to ground faults, and line to line faults. The primary aim of this paper is to diagnose faults in a 14-bus power plant by using an Artificial Neural Network (ANN). A Multilayer Perceptron (MLP) network is trained for fault detection using offline training methods such as Gradient Descent Backpropagation (GDBP), Levenberg-Marquardt (LM), and Bayesian Regularization (BR). The best-performing method is used to build a Graphical User Interface (GUI). The modelling of the 14-bus power plant, the network training, and the GUI were implemented in MATLAB.
Improving CMD Areal Density Analysis: Algorithms and Strategies
NASA Astrophysics Data System (ADS)
Wilson, R. E.
2014-06-01
Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A are reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.
An information geometric approach to least squares minimization
NASA Astrophysics Data System (ADS)
Transtrum, Mark; Machta, Benjamin; Sethna, James
2009-03-01
Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
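For reference, the standard Levenberg-Marquardt iteration interpolates between Gauss-Newton and gradient descent through a damping parameter. A minimal sketch on a one-parameter exponential fit; the damping schedule (divide or multiply by 3) is one common illustrative choice:

```python
import numpy as np

def levenberg_marquardt(resid, jac, x0, lam=1e-3, iters=100):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton steps with an
    adaptive damping parameter lam (illustrative update schedule)."""
    x = np.asarray(x0, float)
    cost = 0.5 * np.sum(resid(x) ** 2)
    for _ in range(iters):
        r, J = resid(x), jac(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        x_new = x + step
        cost_new = 0.5 * np.sum(resid(x_new) ** 2)
        if cost_new < cost:            # accept: move toward Gauss-Newton
            x, cost, lam = x_new, cost_new, lam / 3
        else:                          # reject: move toward gradient descent
            lam *= 3
    return x

# Fit y = exp(-a*t) to synthetic data with a_true = 1.5
t = np.linspace(0, 2, 30)
y = np.exp(-1.5 * t)
resid = lambda p: np.exp(-p[0] * t) - y
jac = lambda p: (-t * np.exp(-p[0] * t)).reshape(-1, 1)
a_fit = levenberg_marquardt(resid, jac, [0.3])
print(a_fit)
```

Small lam approximates Gauss-Newton (fast near the solution); large lam shortens the step along the negative gradient (robust far from it), which is exactly the interpolation the geometric picture above reinterprets.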
Determination of elastic moduli from measured acoustic velocities.
Brown, J Michael
2018-06-01
Methods are evaluated in solution of the inverse problem associated with determination of elastic moduli for crystals of arbitrary symmetry from elastic wave velocities measured in many crystallographic directions. A package of MATLAB functions provides a robust and flexible environment for analysis of ultrasonic, Brillouin, or Impulsive Stimulated Light Scattering datasets. Three inverse algorithms are considered: the gradient-based methods of Levenberg-Marquardt and Backus-Gilbert, and a non-gradient-based (Nelder-Mead) simplex approach. Several data types are considered: body wave velocities alone, surface wave velocities plus a side constraint on X-ray-diffraction-based axes compressibilities, or joint body and surface wave velocities. The numerical algorithms are validated through comparisons with prior published results and through analysis of synthetic datasets. Although all approaches succeed in finding low-misfit solutions, the Levenberg-Marquardt method consistently demonstrates effectiveness and computational efficiency. However, linearized gradient-based methods, when applied to a strongly non-linear problem, may not adequately converge to the global minimum. The simplex method, while slower, is less susceptible to being trapped in local misfit minima. A "multi-start" strategy (initiate searches from more than one initial guess) provides better assurance that global minima have been located. Numerical estimates of parameter uncertainties based on Monte Carlo simulations are compared to formal uncertainties based on covariance calculations. Copyright © 2018 Elsevier B.V. All rights reserved.
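The "multi-start" strategy is straightforward to sketch with SciPy's Nelder-Mead simplex on a toy misfit surface with many local minima. The surface below is only a stand-in for the acoustic-velocity misfit, with its global minimum placed at (0, 0); all values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Toy misfit: a quadratic bowl with oscillatory ripples creating local minima
def misfit(p):
    x, y = p
    return x**2 + y**2 + 2 * np.sin(3 * x) ** 2 + 2 * np.sin(3 * y) ** 2

# Multi-start: launch an independent simplex search from each random guess
starts = rng.uniform(-3.0, 3.0, size=(20, 2))
results = [minimize(misfit, s, method='Nelder-Mead') for s in starts]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)   # best of the 20 local searches
```

A single simplex run settles into whatever basin its start lands in; keeping the best of many starts gives the assurance, discussed above, that the global minimum has actually been located.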
High-speed polarization sensitive optical coherence tomography for retinal diagnostics
NASA Astrophysics Data System (ADS)
Yin, Biwei; Wang, Bingqing; Vemishetty, Kalyanramu; Nagle, Jim; Liu, Shuang; Wang, Tianyi; Rylander, Henry G., III; Milner, Thomas E.
2012-01-01
We report design and construction of an FPGA-based high-speed swept-source polarization-sensitive optical coherence tomography (SS-PS-OCT) system for clinical retinal imaging. Clinical application of the SS-PS-OCT system is accurate measurement and display of thickness, phase retardation and birefringence maps of the retinal nerve fiber layer (RNFL) in human subjects for early detection of glaucoma. The FPGA-based SS-PS-OCT system provides three incident polarization states on the eye and uses a bulk-optic polarization sensitive balanced detection module to record two orthogonal interference fringe signals. Interference fringe signals and relative phase retardation between two orthogonal polarization states are used to obtain Stokes vectors of light returning from each RNFL depth. We implement a Levenberg-Marquardt algorithm on a Field Programmable Gate Array (FPGA) to compute accurate phase retardation and birefringence maps. For each retinal scan, a three-state Levenberg-Marquardt nonlinear algorithm is applied to 360 clusters each consisting of 100 A-scans to determine accurate maps of phase retardation and birefringence in less than 1 second after patient measurement, allowing real-time clinical imaging, a speedup of more than 300 times over previous implementations. We report application of the FPGA-based SS-PS-OCT system for real-time clinical imaging of patients enrolled in a clinical study at the Eye Institute of Austin and Duke Eye Center.
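Building Stokes vectors from two orthogonal field components follows the standard definitions; a minimal sketch (sign conventions for S3 vary between texts, and the complex amplitudes here are illustrative, not detector data):

```python
import numpy as np

def stokes(Ex, Ey):
    """Stokes parameters from two orthogonal complex field amplitudes."""
    S0 = np.abs(Ex) ** 2 + np.abs(Ey) ** 2    # total intensity
    S1 = np.abs(Ex) ** 2 - np.abs(Ey) ** 2    # horizontal/vertical preference
    S2 = 2.0 * np.real(Ex * np.conj(Ey))      # +45/-45 preference
    S3 = -2.0 * np.imag(Ex * np.conj(Ey))     # circular preference
    return np.array([S0, S1, S2, S3])

# Equal amplitudes with a 90-degree relative phase: circular polarization
print(stokes(1 / np.sqrt(2), 1j / np.sqrt(2)))   # [1, 0, 0, 1] here
```

Tracking how this vector rotates with depth is what yields the phase retardation, and hence birefringence, of the RNFL.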
Non-intrusive reduced order modeling of nonlinear problems using neural networks
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Ubbiali, S.
2018-06-01
We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum amount of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of the Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
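The POD step at the heart of the method above can be sketched with a plain SVD of a snapshot matrix. The snapshots below are synthetic stand-ins for high-fidelity PDE solutions, and the energy threshold is an illustrative choice:

```python
import numpy as np

# Snapshot matrix: each column is a "high-fidelity solution" at one
# parameter value; u(x; mu) = sin(pi * mu * x) is a synthetic stand-in
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 50)
S = np.column_stack([np.sin(np.pi * mu * x) for mu in params])

# POD: truncated SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
L = int(np.searchsorted(energy, 0.9999)) + 1    # modes keeping 99.99% energy

# Reduced coefficients per snapshot; POD-NN then trains an MLP mapping
# mu -> coefficients, so the online stage never solves the PDE
coeffs = U[:, :L].T @ S
recon = U[:, :L] @ coeffs
err = np.linalg.norm(S - recon) / np.linalg.norm(S)
print(L, err)
```

The number of retained modes L is typically far smaller than the number of snapshots, which is what makes the regression target for the network low-dimensional.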
Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold
2016-12-01
In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp) and syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and cross validation is also performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple-output and single-output prediction paradigms using the available experimental datasets. A model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN-based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.
Early aerospaceplane propulsion research: Marquardt Corporation: ca 1956-1963
NASA Technical Reports Server (NTRS)
Lindley, Charles A.
1992-01-01
A brief summary is presented of the very early days of aerospaceplane propulsion and concept research, from a viewpoint based in the Astro Division of Marquardt Aircraft Co. in the years listed, with some view into later times that were on Bill Escher's watch and others. Other groups who were pursuing the same goals by various routes are discussed by some following speakers. The chief purpose is to bring out background information that may be of value to members of the workshop and future workers in the field. The state of engine and airframe technology at those times must be understood to make sense of the effort. Operational kerosene fueled ramjets were routinely flying Mach 2 to 3 in the Bomarc and Talos interceptors. One Marquardt ramjet had accelerated a Lockheed X-7 test vehicle to about Mach 4.7 in an all-out test, holding it at nearly 1 'G' until the fuel ran out. Development of further research at Marquardt is outlined.
Pole-zero form fractional model identification in frequency domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansouri, R.; Djamah, T.; Djennoune, S.
2009-03-05
This paper deals with system identification in the frequency domain using non-integer order models given in pole-zero form. The usual identification techniques cannot be used in this case because the non-integer orders of differentiation make the problem strongly nonlinear. A general identification method based on the Levenberg-Marquardt algorithm is developed that allows estimation of the (2n+2m+1) parameters of the model. Its application to identifying the "skin effect" in modeling a squirrel cage induction machine is then presented.
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging by inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT). It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.
Predicting the survival of diabetes using neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Data mining techniques are at present used in predicting diseases in health care industries. The neural network is one of the prevailing data mining methods in the intelligent-systems field for predicting diseases in health care industries. This paper presents a study on the prediction of the survival of diabetes patients using different supervised learning algorithms for neural networks. Three learning algorithms are considered in this study: (i) the Levenberg-Marquardt learning algorithm, (ii) the Bayesian regularization learning algorithm and (iii) the scaled conjugate gradient learning algorithm. The network is trained using the Pima Indian Diabetes Dataset with the help of MATLAB R2014(a) software. The performance of each algorithm is further discussed through regression analysis. The prediction accuracy of the best algorithm is further computed to validate the accuracy of the prediction.
An efficient variable projection formulation for separable nonlinear least squares problems.
Gan, Min; Li, Han-Xiong
2014-05-01
We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous formulations. The Levenberg-Marquardt algorithm using the finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
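A minimal sketch of the variable projection idea, assuming a hypothetical two-exponential model: the linear coefficients are eliminated by a least-squares solve inside the residual, and only the nonlinear decay rates are passed to SciPy's Levenberg-Marquardt solver. The paper's decomposition-based functional is not reproduced here:

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model: y(t) = c1*exp(-a1*t) + c2*exp(-a2*t).
# The linear coefficients c are projected out; only the nonlinear
# rates a are optimized (all values below are hypothetical).
t = np.linspace(0, 4, 200)
a_true, c_true = np.array([0.5, 2.0]), np.array([1.0, 3.0])
y = np.exp(-np.outer(t, a_true)) @ c_true

def vp_residual(a):
    Phi = np.exp(-np.outer(t, a))                # basis matrix Phi(alpha)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # optimal linear part
    return Phi @ c - y                           # projected residual

sol = least_squares(vp_residual, x0=[0.3, 1.0], method='lm')
a_hat = np.sort(sol.x)
```

Only two parameters are searched instead of four, which is the source of the efficiency gain the abstract describes.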
Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves
Xia, J.; Miller, R.D.; Park, C.B.
1999-01-01
The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in the high-frequency range (>5 Hz), followed by layer thickness. An iterative solution technique applied to the weighted equation proved very effective in the high-frequency range when using the Levenberg-Marquardt and singular-value decomposition techniques. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure. The inverse results of the real example are verified by borehole S-wave velocity measurements.
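For readers unfamiliar with the role of the damping factor mentioned above, a bare-bones Levenberg-Marquardt iteration might look as follows. This is a generic sketch on a hypothetical exponential-fit problem, not the authors' weighted inversion code:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, tol=1e-10, max_iter=100):
    """Minimal LM sketch: the damping factor `lam` interpolates between
    Gauss-Newton (small lam) and steepest descent (large lam)."""
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)       # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        new_cost = 0.5 * np.sum(residual(x + step) ** 2)
        if new_cost < cost:                      # accept step: relax damping
            x, cost, lam = x + step, new_cost, lam * 0.5
        else:                                    # reject step: increase damping
            lam *= 2.0
        if np.linalg.norm(step) < tol:
            break
    return x

# Fit y = a * exp(b * t) to synthetic data (hypothetical example).
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, [1.0, 0.0])
```

Increasing the damping when a step is rejected is what gives the stability guarantee the abstract alludes to.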
Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.
Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S
2004-01-01
MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along orthogonal directions (r, phi, z), and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three-axis torque-balanced gradient coil for rat imaging was developed based on this method, with an efficiency of 2.13, 2.08, and 4.12 mT m⁻¹ A⁻¹ along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. This method has also been applied to the design of a gradient coil for the human brain, employing remote current return paths. The benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths. Copyright 2003 Wiley-Liss, Inc.
Holland, E
2008-03-01
Stephen Marquardt has derived a mask from the golden ratio that he claims represents the "ideal" facial archetype. Many have found his mask convincing, including cosmetic surgeons. However, Marquardt's mask is associated with numerous problems. The method used to examine goodness of fit with the proportions in the mask is faulty. The mask is ill-suited for non-European populations, especially sub-Saharan Africans and East Asians. The mask also appears to approximate the face shape of masculinized European women. Given that the general public strongly and overwhelmingly prefers above average facial femininity in women, white women seeking aesthetic facial surgery would be ill-advised to aim toward a better fit with Marquardt's mask. This article aims to show the proper way of assessing goodness of fit with Marquardt's mask, to address the shape of the mask as it pertains to masculinity-femininity, and to discuss the broader issue of an objective assessment of facial attractiveness. Generalized Procrustes analysis is used to show how goodness of fit with Marquardt's mask can be assessed. Thin-plate spline analysis is used to illustrate visually how sample faces, including northwestern European averages, differ from Marquardt's mask. Marquardt's mask best describes the facial proportions of masculinized white women as seen in fashion models. Marquardt's mask does not appear to describe "ideal" face shape even for white women because its proportions are inconsistent with the optimal preferences of most people, especially with regard to femininity.
Yang, Wanan; Li, Yan; Qin, Fengqing
2015-01-01
To actively maneuver a robotic capsule for interactive diagnosis in the gastrointestinal tract, it is essential to visualize the accurate position and orientation of the capsule as it moves through the tract. A method is proposed in which the circuits, batteries, imaging device, etc. are enclosed in a capsule encircled by an axially magnetized permanent-magnet ring. Based on the expression for the magnetic fields of the axially magnetized permanent-magnet ring, a localization and orientation model was established. An improved hybrid strategy that combines the advantages of particle swarm optimization, the clone algorithm, and the Levenberg-Marquardt algorithm was found to solve the model. Experiments showed that the hybrid strategy has good accuracy, convergence, and real-time performance.
Dynamic Capacity Allocation Algorithms for iNET Link Manager
2014-05-01
...capacity allocation algorithm that can better cope with severe congestion and misbehaving users and traffic flows. We compare the E-LM with the LM baseline algorithm (B-LM...
Computation of transmitted and received B1 fields in magnetic resonance imaging.
Milles, Julien; Zhu, Yue Min; Chen, Nan-Kuei; Panych, Lawrence P; Gimenez, Gérard; Guttmann, Charles R G
2006-05-01
Computation of B1 fields is a key issue for determination and correction of intensity nonuniformity in magnetic resonance images. This paper presents a new method for computing transmitted and received B1 fields. Our method combines a modified MRI acquisition protocol and an estimation technique based on the Levenberg-Marquardt algorithm and spatial filtering. It enables accurate estimation of transmitted and received B1 fields for both homogeneous and heterogeneous objects. The method is validated using numerical simulations and experimental data from phantom and human scans. The experimental results are in agreement with theoretical expectations.
MC3: Multi-core Markov-chain Monte Carlo code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan
2016-10-01
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
Description of bioremediation of soils using the model of a multistep system of microorganisms
NASA Astrophysics Data System (ADS)
Lubysheva, A. I.; Potashev, K. A.; Sofinskaya, O. A.
2018-01-01
The paper deals with the development of a mathematical model describing the interaction of a multi-step system of microorganisms in soil polluted with oil products. Each step in this system feeds on the products of the vital activity of the previous step. Six different models of the multi-step system are considered. The models were equipped with coefficients by minimizing the residual between the calculated and experimental data using an original algorithm based on the Levenberg-Marquardt method, in combination with the Monte Carlo method for finding the initial approximation.
NASA Astrophysics Data System (ADS)
Habarulema, J. B.; McKinnell, L.-A.
2012-05-01
In this work, results obtained by investigating the application of different neural network backpropagation training algorithms are presented. This was done to assess the performance accuracy of each training algorithm in total electron content (TEC) estimation using identical datasets in the model development and verification processes. The investigated training algorithms are standard backpropagation (SBP), backpropagation with weight delay (BPWD), backpropagation with momentum (BPM) term, backpropagation with chunkwise weight update (BPC) and backpropagation for batch (BPB) training. These five algorithms are inbuilt functions within the Stuttgart Neural Network Simulator (SNNS), and the main objective was to find the training algorithm that generates the minimum error between the TEC derived from Global Positioning System (GPS) observations and the modelled TEC data. Another investigated algorithm is the MATLAB-based Levenberg-Marquardt backpropagation (L-MBP), which achieves convergence after the least number of iterations during training. In this paper, neural network (NN) models were developed using hourly TEC data (for 8 years: 2000-2007) derived from GPS observations over a receiver station located at Sutherland (SUTH) (32.38° S, 20.81° E), South Africa. Verification of the NN models for all algorithms considered was performed on both "seen" and "unseen" data. Hourly TEC values over SUTH for 2003 formed the "seen" dataset. The "unseen" dataset consisted of hourly TEC data for 2002 and 2008 over Cape Town (CPTN) (33.95° S, 18.47° E) and SUTH, respectively. The models' verification showed that all algorithms investigated provide statistically comparable results, but differ significantly in the time required to achieve convergence during input-output data training/learning. This paper therefore provides a guide for neural network users in choosing appropriate algorithms based on the computational capabilities available for research.
Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie
2018-01-01
As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced in this paper on the basis of the global positioning system (GPS) principle. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that requires no selection of an initial iterative value in the calculation. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. In order to overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result, depending on the condition number of the coefficient matrix, is analyzed.
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search to improve computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched the assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100% of cases. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent to the LM-based method in accuracy and robustness to noise, while being reliably (100%) convergent and computationally about 3× (TM) and 2× (ETM) faster. Copyright © 2017 Elsevier Inc. All rights reserved.
A point cloud modeling method based on geometric constraints mixing the robust least squares method
NASA Astrophysics Data System (ADS)
Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan
2016-10-01
The appearance of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information. It has been widely used in the field of surveying and mapping engineering owing to its automation and high precision. The 3D laser scanning data processing workflow mainly includes field laser data acquisition, in-office laser data splicing, and later 3D modeling and data integration. For point cloud modeling, domestic and foreign researchers have done a lot of research. Surface reconstruction techniques mainly include the point-shape model, the triangle model, the triangular Bezier surface model, the rectangular surface model and so on; neural networks and the alpha-shape method are also used in curved surface reconstruction. These methods, however, often focus on single-surface fitting or automatic or manual block fitting, which ignores the model's integrity. This leads to serious problems in the model after stitching: the surfaces fitted separately often do not satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Research on special modeling theory such as dimension constraints and position constraints, however, is not widely applied. One traditional modeling method that adds geometric constraints combines the penalty function method and the Levenberg-Marquardt algorithm (L-M algorithm), and its stability is good. But in the research process, it was found that this method is greatly influenced by the initial value. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial value, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve the problem using the L-M algorithm. The experimental results
S-Genius, a universal software platform with versatile inverse problem resolution for scatterometry
NASA Astrophysics Data System (ADS)
Fuard, David; Troscompt, Nicolas; El Kalyoubi, Ismael; Soulan, Sébastien; Besacier, Maxime
2013-05-01
S-Genius is a new universal scatterometry platform, which gathers all the LTM-CNRS know-how regarding rigorous electromagnetic computation and several inverse problem solver solutions. This software platform is built to be a user-friendly, light, swift, accurate, user-oriented scatterometry tool, compatible with any ellipsometric measurements to fit and any type of pattern. It aims to combine a set of inverse problem solver capabilities — via adapted Levenberg-Marquardt optimization, Kriging, and neural network solutions — that greatly improve the reliability and the velocity of the solution determination. Furthermore, as the model solution is mainly vulnerable to material optical properties, S-Genius may be coupled with an innovative determination of material refractive indices. This paper focuses a little more on the modified Levenberg-Marquardt optimization, one of the indirect-method solvers built up in parallel with the overall S-Genius software coding by yours truly. This modified Levenberg-Marquardt optimization corresponds to a Newton algorithm with a damping parameter adapted to the definition domains of the optimized parameters. Currently, S-Genius is technically ready for scientific collaboration: python-powered, multi-platform (Windows/Linux/macOS), multi-core, ready for 2D (infinite features along the direction perpendicular to the incidence plane), conical, and 3D-feature computation, compatible with all kinds of input data from any possible ellipsometers (angle- or wavelength-resolved) or reflectometers, and widely used in our laboratory for resist trimming studies, etching feature characterization (such as complex stacks) and nano-imprint lithography measurements, for instance. The work on the kriging solver, the neural network solver and material refractive index determination is done (or about to be) by other LTM members and is about to be integrated into the S-Genius platform.
On the VHF Source Retrieval Errors Associated with Lightning Mapping Arrays (LMAs)
NASA Technical Reports Server (NTRS)
Koshak, W.
2016-01-01
This presentation examines in detail the standard retrieval method: that of retrieving the (x, y, z, t) parameters of a lightning VHF point source from multiple ground-based Lightning Mapping Array (LMA) time-of-arrival (TOA) observations. The solution is found by minimizing a chi-squared function via the Levenberg-Marquardt algorithm. The associated forward problem is examined to illustrate the importance of signal-to-noise ratio (SNR). Monte Carlo simulated retrievals are used to assess the benefits of changing various LMA network properties. A generalized retrieval method is also introduced that, in addition to TOA data, uses LMA electric field amplitude measurements to retrieve a transient VHF dipole moment source.
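The standard TOA retrieval described above can be sketched as a chi-squared minimization over (x, y, z, t). The example below is a hypothetical six-station geometry with noise-free arrival times, not an operational LMA network; the emission time is carried in units of distance (tau = c·t) so that all four parameters are similarly scaled, and SciPy's Levenberg-Marquardt solver stands in for the production code:

```python
import numpy as np
from scipy.optimize import least_squares

C = 2.998e8  # speed of light, m/s

# Hypothetical LMA station coordinates (m) on a few-tens-of-km baseline.
stations = np.array([[0, 0, 0], [40e3, 0, 100], [0, 40e3, 50],
                     [-30e3, 10e3, 0], [10e3, -35e3, 200], [25e3, 30e3, 80]])
src = np.array([5e3, 8e3, 7e3])   # true VHF source position (m)
t0 = 1e-3                          # true emission time (s)
toa = t0 + np.linalg.norm(stations - src, axis=1) / C  # simulated arrivals

def residual(p):
    """Residuals (in meters): predicted minus measured arrival distances."""
    x, tau = p[:3], p[3]           # tau = C * emission time
    return tau + np.linalg.norm(stations - x, axis=1) - C * toa

fit = least_squares(residual, x0=[0.0, 0.0, 5e3, 0.0], method='lm')
x_hat, t_hat = fit.x[:3], fit.x[3] / C
```

With noise added to `toa`, the scatter of repeated retrievals gives the kind of error assessment the Monte Carlo simulations in the presentation quantify.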
Redundant interferometric calibration as a complex optimization problem
NASA Astrophysics Data System (ADS)
Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.
2018-05-01
Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Shiri, Jalal
2012-06-01
Estimating the sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration in rivers using hydro-meteorological data. Daily rainfall, streamflow and suspended sediment concentration data from the Eel River near Dos Rios, California, USA are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. The Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models. Of the three algorithms, the conjugate gradient algorithm was found to be better than the others.
Recursive Bayesian recurrent neural networks for time-series modeling.
Mirikitani, Derrick T; Nikolaev, Nikolay
2010-02-01
This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
NASA Astrophysics Data System (ADS)
Uca; Toriman, Ekhwan; Jaafar, Othman; Maru, Rosmini; Arfan, Amal; Saleh Ahmar, Ansari
2018-01-01
Prediction of suspended sediment discharge in a catchment area is very important because it can be used to evaluate erosion hazard, manage water resources, water quality and hydrology projects (dams, reservoirs, and irrigation), and determine the extent of damage that has occurred in the catchment. Multiple linear regression analysis and artificial neural networks can be used to predict the amount of daily suspended sediment discharge. The regression analysis uses the least squares method, whereas the artificial neural networks use Radial Basis Function (RBF) networks and feedforward multilayer perceptrons with three learning algorithms, namely Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton. The number of neurons in the hidden layer ranges from three to sixteen, while the output layer has only one neuron because there is only one output target. Among the multiple linear regression (MLRg) models, Model 2 (6 independent input variables) has the lowest mean absolute error (MAE) and root mean square error (RMSE) (0.0000002 and 13.6039) and the highest coefficient of determination (R²) and coefficient of efficiency (CE) (0.9971 and 0.9971). Compared with LM, SCG and RBF, the BFGS model with structure 3-7-1 is the more accurate for predicting suspended sediment discharge in the Jenderam catchment. Its performance in the testing process is the best: MAE and RMSE (13.5769 and 17.9011) are the smallest, while R² and CE (0.9999 and 0.9998) are the highest compared with the other BFGS Quasi-Newton models (6-3-1, 9-10-1 and 12-12-1). Based on the performance statistics, MLRg, LM, SCG, BFGS and RBF are suitable and accurate for prediction by modeling the non-linear complex behavior of suspended sediment responses to rainfall, water depth and discharge. In the comparison between the artificial neural networks (ANN) and MLRg, the MLRg Model 2 is accurate for predicting suspended sediment discharge (kg
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computation of flow injection analysis (FIA) responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
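The parameter-extraction step described above can be sketched with a Levenberg-Marquardt fit of a diffusion coefficient to a simulated response. This is a minimal illustration, not the paper's dispersion model: the Gaussian-shaped response function, the time grid and the value of D are all invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical dispersion model: a Gaussian response whose width grows
# with the diffusion coefficient D (illustrative, not the paper's model).
def response(t, D, t0=10.0):
    sigma = np.sqrt(2.0 * D * t0)
    return np.exp(-(t - t0) ** 2 / (2.0 * sigma ** 2))

t = np.linspace(0.0, 20.0, 200)
D_true = 0.5
observed = response(t, D_true)          # synthetic "measured" FIA response

# Levenberg-Marquardt fit of D from the simulated response
fit = least_squares(lambda p: response(t, p[0]) - observed,
                    x0=[0.1], method='lm')
print(round(fit.x[0], 3))               # recovers D_true
```

With noiseless synthetic data the fit recovers the generating value of D; with real FIA responses one would compare against tabulated diffusion coefficients as the paper does.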
NASA Astrophysics Data System (ADS)
Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano
2014-08-01
Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting a substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, a sphere and a cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and a cylinder is the most relevant geometry for a pin-hole relation as an assembly feature in a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization of the non-linear function fitting the three geometries. The results show that, with this combination, higher-quality fitting results, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an 'incomplete point cloud', a situation where the point cloud does not cover a complete feature, e.g. only half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
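The circle case can be sketched as a non-linear least-squares fit with LM. As a stand-in for the paper's chaos-optimization initialization, the sketch below seeds the fit with a simple centroid heuristic; the data, the heuristic and all constants are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Sample points on a circle (center (2, -1), radius 3) with small noise
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
x = 2 + 3 * np.cos(theta) + rng.normal(0, 0.01, theta.size)
y = -1 + 3 * np.sin(theta) + rng.normal(0, 0.01, theta.size)

# Residuals: signed distance of each point from the candidate circle
def residuals(p):
    cx, cy, r = p
    return np.hypot(x - cx, y - cy) - r

# Initial guess from the centroid and mean radius (a simple heuristic
# standing in for the chaos-optimization initialization of the paper)
cx0, cy0 = x.mean(), y.mean()
r0 = np.hypot(x - cx0, y - cy0).mean()

fit = least_squares(residuals, x0=[cx0, cy0, r0], method='lm')
print(np.round(fit.x, 2))   # center and radius close to (2, -1, 3)
```

Sphere and cylinder fits follow the same pattern with four and five parameters respectively; only the residual function changes.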
Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.
2008-12-01
To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately, the amount of damping is not known a priori and can significantly extend the number of calls to the computationally expensive ray tracer and the least-squares matrix solver. If the damping term is too small, the solution step size produces either an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multi-variate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least-squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest descent method, but transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution. We show typical linear solution techniques and how they can lead to local minima if the
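The dynamic damping behavior described above can be sketched in a minimal LM loop. This is a toy illustration, not the tomography code: the one-parameter exponential test problem, the damping factors of 10, and the iteration count are all invented for the example.

```python
import numpy as np

# Minimal Levenberg-Marquardt loop showing the dynamic damping update:
# damping grows when a step fails and shrinks when it succeeds, moving
# the method between steepest-descent and Gauss-Newton behavior.
def levenberg_marquardt(residuals, jacobian, x0, iters=50):
    x, lam = np.asarray(x0, float), 1e-3
    for _ in range(iters):
        r, J = residuals(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residuals(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam / 10.0   # accept: toward Gauss-Newton
        else:
            lam *= 10.0                     # reject: toward steepest descent
    return x

# Toy problem: fit y = exp(a*t) to data generated with a = -1.5
t = np.linspace(0, 2, 30)
y = np.exp(-1.5 * t)
res = lambda p: np.exp(p[0] * t) - y
jac = lambda p: (t * np.exp(p[0] * t)).reshape(-1, 1)
print(round(levenberg_marquardt(res, jac, [0.0])[0], 3))   # close to -1.5
```

Production codes (MINPACK and its descendants) add trust-region safeguards and parameter scaling on top of this basic accept/reject damping scheme.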
Modelling and simulation of a moving interface problem: freeze drying of black tea extract
NASA Astrophysics Data System (ADS)
Aydin, Ebubekir Sıddık; Yucel, Ozgun; Sadikoglu, Hasan
2017-06-01
The moving interface separates the material subjected to freeze drying into dried and frozen regions. Accurate modeling of the moving interface therefore reduces process time and energy consumption by improving the heat and mass transfer predictions during the process. To describe the dynamic behavior of the drying stages of freeze-drying, a case study of brewed black tea extract in storage trays, including the moving interface, was modeled; the heat and mass transfer equations were solved using an orthogonal collocation method based on Jacobi polynomial approximation. Transport parameters and physical properties describing the freeze drying of black tea extract were evaluated by fitting the experimental data using the Levenberg-Marquardt algorithm. Experimental results showed good agreement with the theoretical predictions.
NASA Astrophysics Data System (ADS)
Sarkar, A.; Chakravartty, J. K.
2013-10-01
A model is developed to predict the constitutive flow behavior of cadmium during compression tests using an artificial neural network (ANN). The inputs of the neural network are strain, strain rate, and temperature, whereas flow stress is the output. Experimental data obtained from compression tests in the temperature range -30 to 70 °C, strain range 0.1 to 0.6, and strain rate range 10⁻³ to 1 s⁻¹ are employed to develop the model. A three-layer feed-forward ANN is trained with the Levenberg-Marquardt training algorithm. It has been shown that the developed ANN model can efficiently and accurately predict the deformation behavior of cadmium. This trained network could predict the flow stress better than a constitutive equation of the type.
Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia
NASA Astrophysics Data System (ADS)
Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg
2013-03-01
Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). The multi-linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level. The input combination comprising the current sea level as well as five previous values was found to be optimal. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-Gaussian membership functions, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely, Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose for all the prediction intervals.
Hildebrandt, P; Greinert, R; Stier, A; Taniguchi, H
1989-12-08
The isozymes 2 and 4 of rabbit microsomal cytochrome P-450 (LM2, LM4) have been studied by resonance Raman spectroscopy. Based on high-quality spectra, a vibrational assignment of the porphyrin modes in the frequency range 100-1700 cm⁻¹ is presented for different ferric states of cytochrome P-450 LM2 and LM4. The resonance Raman spectra are interpreted in terms of the spin and ligation state of the heme iron and of heme-protein interactions. While in cytochrome P-450 LM2 the six-coordinated low-spin configuration is predominantly occupied, in the isozyme LM4 the five-coordinated high-spin form is the most stable state. The different stability of these two spin configurations in LM2 and LM4 can be attributed to the structures of the active sites. In the low-spin form of the isozyme LM4 the protein matrix forces the heme into a more rigid conformation than in LM2. These steric constraints are removed upon dissociation of the sixth ligand, leading to a more flexible structure of the active site in the high-spin form of the isozyme LM4. The vibrational modes of the vinyl groups were found to be characteristic markers for the specific structures of the heme pockets in both isozymes. They also respond sensitively to type-I substrate binding. While in cytochrome P-450 LM4 the occupation of the substrate-binding pocket induces conformational changes of the vinyl groups, as reflected by frequency shifts of the vinyl modes, in the LM2 isozyme the ground-state conformation of these substituents remains unaffected, suggesting that the more flexible heme pocket can accommodate substrates without imposing steric constraints on the porphyrin. The resonance Raman technique makes visible structural changes which are induced by substrate binding in addition to, and independently of, the changes associated with the shift of the spin-state equilibrium: the high-spin states in the substrate-bound and substrate-free enzyme are structurally different. The formation of the inactive form
The Best of LM_Net Select, 2001.
ERIC Educational Resources Information Center
Milbury, Peter; Eisenberg, Michael B.; Walker, Michelle
LM_NET, the most successful educational listserv in the world, has approximately 15,000 subscribed members from every state in the United States and from over 65 countries. LM_NET covers a wide range of interests, all related to library and information work in education, and interactions on LM_NET result in in-depth treatments of the major…
Apollo 9 Mission image - View of the Lunar Module (LM) 3 and Service Module (SM) LM Adapter
1969-03-03
View of the Lunar Module (LM) 3 and Service Module (SM) LM Adapter. The film magazine was A; the film type was SO-368 Ektachrome with 0.460-0.710 micrometer film/filter transmittance response and a haze filter, 80mm lens.
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Şahin, Mehmet
2015-02-01
The prediction of future drought is an effective mitigation tool for assessing adverse consequences of drought events on vital water resources, agriculture, ecosystems and hydrology. Data-driven model predictions using machine learning algorithms are promising for these purposes as they require less development time and minimal inputs, and are relatively less complex than dynamic or physical models. This paper authenticates a computationally simple, fast and efficient non-linear algorithm known as the extreme learning machine (ELM) for the prediction of the Effective Drought Index (EDI) in eastern Australia, using input data for training from 1957-2008 and the monthly EDI predicted over the period 2009-2011. The predictive variables for the ELM model were the rainfall and the mean, minimum and maximum air temperatures, supplemented by the large-scale climate mode indices of interest as regression covariates, namely the Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and the Indian Ocean Dipole moment. To demonstrate the effectiveness of the proposed data-driven model, a performance comparison in terms of prediction capability and learning speed was conducted between the proposed ELM algorithm and a conventional artificial neural network (ANN) algorithm trained with Levenberg-Marquardt back-propagation. The prediction metrics certified an excellent performance of the ELM over the ANN model for the overall test sites, yielding Mean Absolute Errors, Root-Mean-Square Errors, Coefficients of Determination and Willmott's Indices of Agreement of 0.277, 0.008, 0.892 and 0.93 (for the ELM) and 0.602, 0.172, 0.578 and 0.92 (for the ANN) models. Moreover, the ELM model executed with a learning speed 32 times faster and a training speed 6.1 times faster than the ANN model. An improvement in the prediction capability for drought duration and severity by the ELM model was achieved. Based on these results we aver that out of the two machine learning
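The speed advantage of the ELM over iteratively trained networks comes from its structure: input weights are drawn at random and only the output weights are solved, in a single linear least-squares step. A minimal sketch, using an invented synthetic regression target rather than the EDI data:

```python
import numpy as np

# Minimal extreme learning machine (ELM): random hidden layer, output
# weights solved in one least-squares step -- no iterative training.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))            # stand-in predictor matrix
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2    # stand-in target (not EDI data)

W = rng.normal(size=(3, 50))                # random, untrained input weights
b = rng.normal(size=50)
H = np.tanh(X @ W + b)                      # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # single linear solve

rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
print(rmse)                                 # small training error
```

Because there is no back-propagation loop at all, "training" is one matrix factorization, which is why learning-speed ratios like the 32x reported above are plausible.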
NASA Astrophysics Data System (ADS)
Haddout, Soufiane
2016-06-01
In Newtonian mechanics, the use of non-inertial reference frames generalizes Newton's laws to any reference frame. While this approach simplifies some problems, there is often little physical insight into the motion, in particular into the effects of the Coriolis force. The fictitious Coriolis force can be used by anyone in that frame of reference to explain why objects follow curved paths. In this paper, a mathematical solution based on differential equations in a non-inertial reference frame is used to study different types of motion in a rotating system. In addition, experimental data measured on a turntable device, using a video camera in a mechanics laboratory, were compared with the mathematical solution for the parabolically curved case by solving non-linear least-squares problems based on the Levenberg-Marquardt and Gauss-Newton algorithms.
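The curve-fitting step can be sketched with a parabolic trajectory model; `scipy.optimize.curve_fit` uses Levenberg-Marquardt by default for unconstrained problems. The trajectory coefficients and time grid below are invented synthetic data, not the turntable measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Parabolic trajectory model y = a*t^2 + b*t + c
def parabola(t, a, b, c):
    return a * t ** 2 + b * t + c

t = np.linspace(0, 1, 25)
y = parabola(t, -4.9, 2.0, 0.1)    # synthetic "tracked" positions

# Levenberg-Marquardt fit (the default method when no bounds are given)
popt, _ = curve_fit(parabola, t, y, p0=(1.0, 1.0, 1.0))
print(np.round(popt, 2))           # recovers a, b, c
```

For a model that is linear in its parameters, as here, LM and Gauss-Newton converge to the same solution in a handful of iterations; the distinction matters for genuinely non-linear models.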
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnett, J. L.; Britton, R. E.; Abrecht, D. G.
The acquisition of time-stamped list (TLIST) data provides additional information useful for gamma-spectrometry analysis. A novel technique is described that uses non-linear least-squares fitting and the Levenberg-Marquardt algorithm to simultaneously determine parent and daughter atoms from time-sequence measurements of only the daughter radionuclide. This has been demonstrated for the radioactive decay of short-lived radon progeny (214Pb/214Bi, 212Pb/212Bi) described using the Bateman first-order differential equation. The calculated atoms are in excellent agreement with measured atoms, with a difference of 1.3-4.8% for parent atoms and 2.4-10.4% for daughter atoms. Measurements are also reported with reduced uncertainty. The technique has potential to redefine gamma-spectrometry analysis.
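The idea of recovering parent and daughter inventories from the daughter time sequence alone can be sketched with the Bateman solution and an LM fit. The initial atom counts and the time grid below are invented; the half-lives are approximate literature values for the 214Pb/214Bi pair.

```python
import numpy as np
from scipy.optimize import curve_fit

# Decay constants from approximate half-lives (minutes) of 214Pb and 214Bi
l1, l2 = np.log(2) / 26.8, np.log(2) / 19.9

# Bateman solution for the number of daughter atoms at time t,
# given the initial parent (n1_0) and daughter (n2_0) inventories
def daughter(t, n1_0, n2_0):
    return (n1_0 * l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
            + n2_0 * np.exp(-l2 * t))

t = np.linspace(0, 120, 40)
obs = daughter(t, 5000.0, 800.0)    # synthetic "measured" daughter atoms

# Levenberg-Marquardt fit recovers both initial inventories from the
# daughter time sequence alone
(n1, n2), _ = curve_fit(daughter, t, obs, p0=(1000.0, 1000.0))
print(round(n1), round(n2))         # close to 5000 and 800
```

Because the Bateman expression is linear in the two initial inventories, the fit is well conditioned as long as the two decay constants are distinct.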
MC ray-tracing optimization of lobster-eye focusing devices with RESTRAX
NASA Astrophysics Data System (ADS)
Šaroun, Jan; Kulda, Jiří
2006-11-01
The enhanced functionalities of the latest version of the RESTRAX software, providing a high-speed Monte Carlo (MC) ray-tracing code to represent a virtual three-axis neutron spectrometer, include representation of parabolic and elliptic guide profiles and facilities for numerical optimization of parameter values, characterizing the instrument components. As examples, we present simulations of a doubly focusing monochromator in combination with cold neutron guides and lobster-eye supermirror devices, concentrating a monochromatic beam to small sample volumes. A Levenberg-Marquardt minimization algorithm is used to optimize simultaneously several parameters of the monochromator and lobster-eye guides. We compare the performance of optimized configurations in terms of monochromatic neutron flux and energy spread and demonstrate the effect of lobster-eye optics on beam transformations in real and momentum subspaces.
An interactive program for pharmacokinetic modeling.
Lu, D R; Mao, F
1993-05-01
A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C language on the Macintosh operating system, with a high-level user interface. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the χ² criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
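The stripping-then-LM workflow can be sketched for a biexponential (two-compartment) concentration model. This is a hedged illustration, not PharmK's code: the concentrations are synthetic, and the crude terminal-slope initial guess merely stands in for the interactive stripping estimator.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-compartment (biexponential) concentration-time model
def biexp(t, A, alpha, B, beta):
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

t = np.linspace(0.25, 24, 30)
c = biexp(t, 10.0, 1.2, 2.0, 0.1)       # synthetic concentrations

# Crude "stripping"-style initial guess: back-extrapolate the terminal
# phase for B, take the first sample for A (stands in for the
# interactive exponential-stripping step)
p0 = (c[0], 1.0, c[-1] * np.exp(0.1 * t[-1]), 0.05)

# Levenberg-Marquardt refinement of all four parameters
popt, _ = curve_fit(biexp, t, c, p0=p0)
print(np.round(popt, 2))                # close to (10, 1.2, 2, 0.1)
```

A reasonable initial guess matters here: biexponential fits have a symmetric solution with the two phases swapped, and poor starting values can land LM in the wrong basin.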
Gpufit: An open-source toolkit for GPU-accelerated curve fitting.
Przybylski, Adrian; Thiel, Björn; Keller-Findeisen, Jan; Stock, Bernd; Bates, Mark
2017-11-16
We present a general purpose, open-source software library for estimation of non-linear parameters by the Levenberg-Marquardt algorithm. The software, Gpufit, runs on a Graphics Processing Unit (GPU) and executes computations in parallel, resulting in a significant gain in performance. We measured a speed increase of up to 42 times when comparing Gpufit with an identical CPU-based algorithm, with no loss of precision or accuracy. Gpufit is designed such that it is easily incorporated into existing applications or adapted for new ones. Multiple software interfaces, including to C, Python, and Matlab, ensure that Gpufit is accessible from most programming environments. The full source code is published as an open source software repository, making its function transparent to the user and facilitating future improvements and extensions. As a demonstration, we used Gpufit to accelerate an existing scientific image analysis package, yielding significantly improved processing times for super-resolution fluorescence microscopy datasets.
NASA Astrophysics Data System (ADS)
Ma, Suodong; Pan, Qiao; Shen, Weimin
2016-09-01
As one kind of light-source simulation device, spectrally tunable light sources are able to generate outputs with specific spectral shapes and radiant intensities according to different application requirements, and they are in urgent demand in many fields of the national economy and the national defense industry. Compared with LED-type spectrally tunable light sources, one based on a DMD-convex-grating Offner configuration has the advantages of high spectral resolution, strong digital controllability, high spectrum synthesis accuracy, etc. As a key step in achieving target spectrum outputs with this type of light source, the spectrum synthesis algorithm based on spectrum matching is therefore very important. An improved spectrum synthesis algorithm based on linear least-squares initialization and Levenberg-Marquardt iterative optimization is proposed in this paper on the basis of an in-depth study of the spectrum matching principle. The effectiveness of the proposed method is verified by a series of simulations and experimental work.
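The two-stage idea, a linear least-squares initialization followed by LM refinement of the channel weights, can be sketched with invented Gaussian "channel" basis spectra and an invented target spectrum (not the device's measured spectra):

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative basis: 12 Gaussian "channel" spectra across wavelength
wl = np.linspace(400, 700, 150)
centers = np.linspace(420, 680, 12)
basis = np.exp(-(wl[:, None] - centers) ** 2 / (2 * 20.0 ** 2))

# Stand-in target spectrum to be synthesized
target = np.exp(-(wl - 550) ** 2 / (2 * 60.0 ** 2))

# Stage 1: linear least-squares initialization of the channel weights
w0, *_ = np.linalg.lstsq(basis, target, rcond=None)

# Stage 2: Levenberg-Marquardt refinement from that starting point
fit = least_squares(lambda w: basis @ w - target, w0, method='lm')

err = np.max(np.abs(basis @ fit.x - target))
print(err)   # small spectrum-matching error
```

In this linear illustration stage 2 simply confirms the stage-1 optimum; the refinement stage earns its keep when the channel responses depend non-linearly on the control parameters, as in the real device.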
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sozen, A.; Arcaklioglu, E.
The main goal of this study is to develop energy source estimation equations in order to estimate future projections and make correct investments in Turkey using an artificial neural network (ANN) approach. It is also expected that this study will be helpful in demonstrating the energy situation of Turkey relative to the EU countries. Basic energy indicators such as population, gross generation, installed capacity, net energy consumption, import and export are used in the input layer of the ANN. Basic energy sources such as coal, lignite, fuel-oil, natural gas and hydro are in the output layer. Data from 1975 to 2003 are used for training. Three years (1981, 1994 and 2003) are used only as test data to confirm the method. Also, in this study, the best approach was investigated for each energy source by using different learning algorithms (scaled conjugate gradient (SCG) and Levenberg-Marquardt (LM)) and a logistic sigmoid transfer function in the ANN with the developed software. The statistical coefficients of multiple determination (R²-values) for the training data are equal to 0.99802, 0.99918, 0.997134, 0.998831 and 0.995681 for natural gas, lignite, coal, hydraulic, and fuel-oil, respectively. Similarly, these values for the testing data are equal to 0.995623, 0.999456, 0.998545, 0.999236, and 0.99002. The best approach was found for lignite by the SCG algorithm with seven neurons, with a mean absolute percentage error (MAPE) equal to 1.646753 for lignite. According to the results, the future projections of energy indicators using the ANN technique have been predicted within acceptable errors. Apart from reducing the overall time required, the importance of the ANN approach is that it makes it possible to find solutions that make energy applications more viable and thus more attractive to potential users.
Screening Adhesively Bonded Single-Lap-Joint Testing Results Using Nonlinear Calculation Parameters
2012-03-01
versus displacement response for single-lap-joints bonded with damage-tolerant adhesives, such as the polyurea adhesive plotted in Figure 2, is much...displacement response for a single-lap-joint bonded with a polyurea adhesive. Complex x-y plots are commonly fitted using the Levenberg-Marquardt...expected decrease in maximum strength for the polyurea in comparison to the epoxy, which could have been obtained using a traditional analysis approach
Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.
Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E
2007-02-15
Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
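The analytical linear-algebra approach described above amounts to exponentiating the first-order rate matrix. A minimal sketch for an invented A <-> B -> C scheme (illustrative rate constants, not VisKin's interface):

```python
import numpy as np
from scipy.linalg import expm

# First-order kinetic scheme A <-> B -> C encoded as a rate matrix K,
# so that dc/dt = K @ c has the analytical solution c(t) = expm(K*t) @ c0.
k1, k_1, k2 = 2.0, 0.5, 1.0
K = np.array([[-k1,         k_1,  0.0],
              [ k1, -(k_1 + k2), 0.0],
              [0.0,         k2,   0.0]])

c0 = np.array([1.0, 0.0, 0.0])    # start with pure A
c = expm(K * 5.0) @ c0            # concentrations at t = 5

print(np.round(c, 3))             # nearly everything has flowed into C
```

Each column of K sums to zero, so total concentration is conserved; thermodynamic-cycle constraints of the kind VisKin checks with graph algorithms show up here as conditions on ratios of the off-diagonal rate constants.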
Learning LM Specificity for Ganglion Cells
NASA Technical Reports Server (NTRS)
Ahumada, Albert J.
2015-01-01
Unsupervised learning models based on experience have been proposed (Ahumada and Mulligan, 1990; Wachtler, Doi, Lee and Sejnowski, 2007) that allow the cortex to develop units with LM-specific color-opponent receptive fields, like the blob cells reported by Hubel and Wiesel, on the basis of visual experience. These models used ganglion cells with LM-indiscriminate wiring as inputs to the learning mechanism, which was presumed to occur at the cortical level.
Zhang, Ping; Hong, Bo; He, Liang; Cheng, Fei; Zhao, Peng; Wei, Cailiang; Liu, Yunhui
2015-01-01
PM2.5 pollution has become of increasing public concern because of its relative importance and sensitivity to population health risks. Accurate predictions of PM2.5 pollution and population exposure risks are crucial to developing effective air pollution control strategies. We simulated and predicted the temporal and spatial changes of PM2.5 concentration and population exposure risks, by coupling optimization algorithms of the Back Propagation-Artificial Neural Network (BP-ANN) model and a geographical information system (GIS) in Xi’an, China, for 2013, 2020, and 2025. Results indicated that PM2.5 concentration was positively correlated with GDP, SO2, and NO2, while it was negatively correlated with population density, average temperature, precipitation, and wind speed. Principal component analysis of the PM2.5 concentration and its influencing factors’ variables extracted four components that accounted for 86.39% of the total variance. Correlation coefficients of the Levenberg-Marquardt (trainlm) and elastic (trainrp) algorithms were more than 0.8, the index of agreement (IA) ranged from 0.541 to 0.863 and from 0.502 to 0.803 by trainrp and trainlm algorithms, respectively; mean bias error (MBE) and Root Mean Square Error (RMSE) indicated that the predicted values were very close to the observed values, and the accuracy of trainlm algorithm was better than the trainrp. Compared to 2013, temporal and spatial variation of PM2.5 concentration and risk of population exposure to pollution decreased in 2020 and 2025. The high-risk areas of population exposure to PM2.5 were mainly distributed in the northern region, where there is downtown traffic, abundant commercial activity, and more exhaust emissions. A moderate risk zone was located in the southern region associated with some industrial pollution sources, and there were mainly low-risk areas in the western and eastern regions, which are predominantly residential and educational areas. PMID:26426030
NASA Astrophysics Data System (ADS)
Li, C.; Lu, H.; Wen, X.
2015-12-01
Land surface models (LSMs), which simulate energy, water and momentum exchanges between land and atmosphere, are an important component of Earth System Models (ESMs). As shown in CMIP5, different ESMs usually use different LSMs and represent land surface status differently. In order to select a land surface model that could be embedded into the ESM developed at Tsinghua University, we first evaluate the performance of three LSMs: the Community Land Model (CLM4.5) and two versions of the Common Land Model (CoLM2005 and CoLM2014). All three models were driven by CRUNCEP data, and simulation results from 1980 to 2010 were used in this study. Diagnostic data provided by NCAR, the global latent and sensible heat flux map estimated by Jung, net radiation from SRB, and in situ observations collected from FluxNet were used as reference data. Two variables, surface runoff and snow depth, were used for evaluating the model performance in water budget simulation, while three variables, net radiation, sensible heat, and latent heat, were used for assessing energy budget simulation. For the 30-year averaged runoff, the global average value of CoLM2014 is 0.44 mm/day, close to the diagnostic value of 0.75 mm/day, while that of CoLM2005 is 0.44 mm/day and that of CLM is 0.20 mm/day. For snow depth simulation, all three models overestimate in the Northern Hemisphere and underestimate in the Southern Hemisphere compared to the diagnostic data. For the 30-year energy budget simulation at the global scale, CoLM2005 performs best in latent heat estimation, CoLM2014 performs best in sensible heat simulation, and CoLM2005 and CoLM2014 perform similarly in net radiation estimation but still better than CLM. At the regional and local scale, compared to the four-year average of flux tower observations, the RMSE of CoLM2005 is the smallest for latent heat (9.717 W/m2), and for sensible heat simulation, the RMSE of CoLM2005 (13.048 W/m2) is slightly greater than that of CLM (10.767 W/m2) but still better
On representations of the filiform Lie superalgebra Lm,n
NASA Astrophysics Data System (ADS)
Wang, Qi; Chen, Hongjia; Liu, Wende
2015-11-01
In this paper, we study the representations for the filiform Lie superalgebras Lm,n, a particular class of nilpotent Lie superalgebras. We determine the minimal dimension of a faithful module over Lm,n using the theory of linear algebra. In addition, using the method of Feingold and Frenkel (1985), we construct some finite and infinite dimensional modules over Lm,n on the Grassmann algebra and the mixed Clifford-Weyl algebra.
Kaveh, Mohammad; Chayjan, Reza Amiri
2014-01-01
Drying of terebinth fruit was conducted to provide microbiological stability, reduce product deterioration due to chemical reactions, facilitate storage and lower transportation costs. Because terebinth fruit is susceptible to heat, the selection of a suitable drying technology is a challenging task. Artificial neural networks (ANNs) are used as nonlinear mapping structures for modelling and prediction of some physical and drying properties of terebinth fruit. The drying characteristics of terebinth fruit with an initial moisture content of 1.16 (d.b.) were studied in an infrared fluidized bed dryer. Different levels of air temperature (40, 55 and 70°C), air velocity (0.93, 1.76 and 2.6 m/s) and infrared (IR) radiation power (500, 1000 and 1500 W) were applied. In the present study, the application of an ANN for predicting the drying moisture diffusivity, energy consumption, shrinkage, drying rate and moisture ratio (the output parameters for ANN modelling) was investigated. Air temperature, air velocity, IR radiation and drying time were considered as input parameters. The results revealed that, for predicting drying rate and moisture ratio, a network with the TANSIG-LOGSIG-TANSIG transfer functions and the Levenberg-Marquardt (LM) training algorithm made the most accurate predictions for terebinth fruit drying. The best ANN prediction results were R2 = 0.9678 for drying rate, R2 = 0.9945 for moisture ratio, R2 = 0.9857 for moisture diffusivity and R2 = 0.9893 for energy consumption. The results indicated that an artificial neural network can be used as an alternative approach for modelling and predicting terebinth fruit drying parameters with high correlation. ANNs can also be used in optimization of the process.
Forecasting Zakat collection using artificial neural network
NASA Astrophysics Data System (ADS)
Sy Ahmad Ubaidillah, Sh. Hafizah; Sallehuddin, Roselina
2013-04-01
'Zakat', "that which purifies" or "alms", is the giving of a fixed portion of one's wealth to charity, generally to the poor and needy. It is one of the five pillars of Islam, and must be paid by all practicing Muslims who have the financial means (nisab). 'Nisab' is the minimum level to determine whether there is a 'zakat' to be paid on the assets. Today, in most Muslim countries, 'zakat' is collected through a decentralized and voluntary system. Under this voluntary system, 'zakat' committees are established, which are tasked with the collection and distribution of 'zakat' funds. 'Zakat' promotes a more equitable redistribution of wealth, and fosters a sense of solidarity amongst members of the 'Ummah'. The Malaysian government has established a 'zakat' center at every state to facilitate the management of 'zakat'. The center has to have a good 'zakat' management system to effectively execute its functions especially in the collection and distribution of 'zakat'. Therefore, a good forecasting model is needed. The purpose of this study is to develop a forecasting model for Pusat Zakat Pahang (PZP) to predict the total amount of collection from 'zakat' of assets more precisely. In this study, two different Artificial Neural Network (ANN) models using two different learning algorithms are developed; Back Propagation (BP) and Levenberg-Marquardt (LM). Both models are developed and compared in terms of their accuracy performance. The best model is determined based on the lowest mean square error and the highest correlations values. Based on the results obtained from the study, BP neural network is recommended as the forecasting model to forecast the collection from 'zakat' of assets for PZP.
Meneses, Anderson Alvarenga de Moura; Palheta, Dayara Bastos; Pinheiro, Christiano Jorge Gomes; Barroso, Regina Cely Rodrigues
2018-03-01
X-ray Synchrotron Radiation Micro-Computed Tomography (SR-µCT) allows better visualization in three dimensions with a higher spatial resolution, contributing to the discovery of aspects that could not be observed through conventional radiography. The automatic segmentation of SR-µCT scans is highly valuable due to its numerous applications in the geological sciences, especially for the morphology, typology, and characterization of rocks. For a great number of µCT scan slices, a manual process of segmentation would be impractical, both for the time required and for the accuracy of the results. Aiming at the automatic segmentation of SR-µCT geological sample images, we applied and compared Energy Minimization via Graph Cuts (GC) algorithms and Artificial Neural Networks (ANNs), as well as the well-known K-means and Fuzzy C-Means algorithms. The Dice Similarity Coefficient (DSC), Sensitivity and Precision were the metrics used for comparison. Kruskal-Wallis and Dunn's tests were applied, and the best methods were the GC algorithms and ANNs (with Levenberg-Marquardt and Bayesian Regularization). For those algorithms, an approximate Dice Similarity Coefficient of 95% was achieved. Our results confirm that those algorithms can be used for segmentation and posterior quantification of porosity of an igneous rock sample SR-µCT scan. Copyright © 2017 Elsevier Ltd. All rights reserved.
Antwi, Philip; Li, Jianzheng; Meng, Jia; Deng, Kaiwen; Koblah Quashie, Frank; Li, Jiuling; Opoku Boadi, Portia
2018-06-01
In this study, a three-layered feedforward-backpropagation artificial neural network (BPANN) model was developed and employed to evaluate COD removal in an upflow anaerobic sludge blanket (UASB) reactor treating industrial starch processing wastewater. At the end of UASB operation, microbial community characterization revealed a satisfactory composition of microbes, whereas morphology depicted rod-shaped archaea. pH, COD, NH4+, VFA, OLR and biogas yield were selected by principal component analysis and used as input variables. While the tangent sigmoid function (tansig) and linear function (purelin) were assigned as activation functions at the hidden layer and output layer, respectively, the optimum BPANN architecture was achieved with the Levenberg-Marquardt algorithm (trainlm) after eleven training algorithms had been tested. Based on performance indicators such as the mean squared error, fractional variance, index of agreement and coefficient of determination (R2), the BPANN model demonstrated significant performance, with R2 reaching 87%. The study revealed that control and optimization of an anaerobic digestion process with a BPANN model is feasible. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Şahin, Mehmet
2015-07-01
The forecasting of drought based on cumulative influence of rainfall, temperature and evaporation is greatly beneficial for mitigating adverse consequences on water-sensitive sectors such as agriculture, ecosystems, wildlife, tourism, recreation, crop health and hydrologic engineering. Predictive models of drought indices help in assessing water scarcity situations, drought identification and severity characterization. In this paper, we tested the feasibility of the Artificial Neural Network (ANN) as a data-driven model for predicting the monthly Standardized Precipitation and Evapotranspiration Index (SPEI) for eight candidate stations in eastern Australia using predictive variable data from 1915 to 2005 (training) and simulated data for the period 2006-2012. The predictive variables were: monthly rainfall totals, mean temperature, minimum temperature, maximum temperature and evapotranspiration, which were supplemented by large-scale climate indices (Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and Indian Ocean Dipole) and the Sea Surface Temperatures (Nino 3.0, 3.4 and 4.0). A total of 30 ANN models were developed with 3-layer ANN networks. To determine the best combination of learning algorithms, hidden transfer and output functions of the optimum model, the Levenberg-Marquardt and Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton backpropagation algorithms were utilized to train the network, tangent and logarithmic sigmoid equations used as the activation functions and the linear, logarithmic and tangent sigmoid equations used as the output function. The best ANN architecture had 18 input neurons, 43 hidden neurons and 1 output neuron, trained using the Levenberg-Marquardt learning algorithm using tangent sigmoid equation as the activation and output functions. An evaluation of the model performance based on statistical rules yielded time-averaged Coefficient of Determination, Root Mean Squared Error and the Mean Absolute
Camera-pose estimation via projective Newton optimization on the manifold.
Sarkis, Michel; Diepold, Klaus
2012-04-01
Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.
Neural Network Based Intrusion Detection System for Critical Infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd Vollmer; Ondrej Linda; Milos Manic
2009-07-01
Resiliency and security in control systems such as SCADA and nuclear plants are a relevant concern in today's world of hackers and malware. Computer systems used within critical infrastructures to control physical functions are not immune to the threat of cyber attacks and may be potentially vulnerable. Tailoring an intrusion detection system to the specifics of critical infrastructures can significantly improve the security of such systems. The IDS-NNM – Intrusion Detection System using Neural Network based Modeling – is presented in this paper. The main contributions of this work are: 1) the use and analysis of real network data (data recorded from an existing critical infrastructure); 2) the development of a specific window-based feature extraction technique; 3) the construction of training datasets using randomly generated intrusion vectors; 4) the use of a combination of two neural network learning algorithms – the Error-Back Propagation and Levenberg-Marquardt – for normal behavior modeling. The presented algorithm was evaluated on previously unseen network data. The IDS-NNM algorithm proved capable of capturing all intrusion attempts presented in the network communication while not generating any false alerts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bekar, Kursat B; Miller, Thomas Martin; Patton, Bruce W
The characteristic X-rays produced by the interactions of the electron beam with the sample in a scanning electron microscope (SEM) are usually captured with a variable-energy detector, a process termed energy dispersive spectrometry (EDS). The purpose of this work is to exploit inverse simulations of SEM-EDS spectra to enable rapid determination of sample properties, particularly elemental composition. This is accomplished using penORNL, a modified version of PENELOPE, and a modified version of the traditional Levenberg-Marquardt nonlinear optimization algorithm, which together are referred to as MOZAIK-SEM. The overall conclusion of this work is that MOZAIK-SEM is a promising method for performing inverse analysis of X-ray spectra generated within a SEM. As this methodology exists now, MOZAIK-SEM has been shown to calculate the elemental composition of an unknown sample within a few percent of the actual composition.
Usage of Neural Network to Predict Aluminium Oxide Layer Thickness
Michal, Peter; Vagaská, Alena; Gombár, Miroslav; Kmec, Ján; Spišák, Emil; Kučerka, Daniel
2015-01-01
This paper shows an influence of chemical composition of used electrolyte, such as amount of sulphuric acid in electrolyte, amount of aluminium cations in electrolyte and amount of oxalic acid in electrolyte, and operating parameters of process of anodic oxidation of aluminium such as the temperature of electrolyte, anodizing time, and voltage applied during anodizing process. The paper shows the influence of those parameters on the resulting thickness of aluminium oxide layer. The impact of these variables is shown by using central composite design of experiment for six factors (amount of sulphuric acid, amount of oxalic acid, amount of aluminium cations, electrolyte temperature, anodizing time, and applied voltage) and by usage of the cubic neural unit with Levenberg-Marquardt algorithm during the results evaluation. The paper also deals with current densities of 1 A·dm−2 and 3 A·dm−2 for creating aluminium oxide layer. PMID:25922850
Evaluation of infiltration models in contaminated landscape.
Sadegh Zadeh, Kouroush; Shirmohammadi, Adel; Montas, Hubert J; Felton, Gary
2007-06-01
The infiltration models of Kostiakov, Green-Ampt, and Philip (two and three terms equations) were used, calibrated, and evaluated to simulate in-situ infiltration in nine different soil types. The Osborne-Moré modified version of the Levenberg-Marquardt optimization algorithm was coupled with the experimental data obtained by the double ring infiltrometers and the infiltration equations, to estimate the model parameters. Comparison of the model outputs with the experimental data indicates that the models can successfully describe cumulative infiltration in different soil types. However, since Kostiakov's equation fails to accurately simulate the infiltration rate as time approaches infinity, Philip's two-term equation, in some cases, produces negative values for the saturated hydraulic conductivity of soils, and the Green-Ampt model uses piston flow assumptions, we suggest using Philip's three-term equation to simulate infiltration and to estimate the saturated hydraulic conductivity of soils.
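The kind of fit described above can be sketched generically: below, SciPy's Levenberg-Marquardt least-squares routine recovers the parameters of Philip's two-term equation I(t) = S·√t + K·t from cumulative-infiltration data. This is an illustration with synthetic values, not the paper's Osborne-Moré implementation or its field measurements.

```python
# Illustrative only: data and parameter values are assumed, not taken
# from the paper's double-ring infiltrometer measurements.
import numpy as np
from scipy.optimize import least_squares

def philip_two_term(params, t):
    # Philip's two-term equation: I(t) = S*sqrt(t) + K*t
    # S: sorptivity, K: parameter related to saturated conductivity
    S, K = params
    return S * np.sqrt(t) + K * t

def residuals(params, t, I_obs):
    return philip_two_term(params, t) - I_obs

# synthetic cumulative-infiltration data (S = 1.2, K = 0.35, small noise)
t = np.linspace(0.1, 6.0, 40)                      # time, hours
rng = np.random.default_rng(0)
I_obs = philip_two_term([1.2, 0.35], t) + rng.normal(0.0, 0.01, t.size)

# Levenberg-Marquardt least squares (MINPACK's lmdif under the hood)
fit = least_squares(residuals, x0=[0.5, 0.1], args=(t, I_obs), method="lm")
S_hat, K_hat = fit.x
print(f"S = {S_hat:.3f}, K = {K_hat:.3f}")
```

The same pattern extends to Philip's three-term equation or the Kostiakov model by swapping the residual function.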
Estimation of the left ventricular shape and motion with a limited number of slices
NASA Astrophysics Data System (ADS)
Robert, Anne; Schmitt, Francis J. M.; Mousseaux, Elie
1996-04-01
In this paper, we describe a method for the reconstruction of the surface of the left ventricle from a set of lacunary data (that is, an incomplete, unevenly sampled and unstructured data set). Global models, because they compress the properties of a surface into a small set of parameters, have a strong regularizing power and are therefore very well suited to lacunary data. Globally deformable superquadrics are particularly attractive because of their simplicity. This model can be fitted to the data using the Levenberg-Marquardt algorithm for non-linear optimization. However, the difficulties we experienced in getting temporally consistent solutions, as well as the intrinsic 4D character of the data, led us to generalize the classical 3D superquadric model to 4D. We present results on a 4D sequence from the Dynamic Spatial Reconstructor of the Mayo Clinic, and on a 4D MRI sequence.
NASA Astrophysics Data System (ADS)
ul Amin, Rooh; Aijun, Li; Khan, Muhammad Umer; Shamshirband, Shahaboddin; Kamsin, Amirrudin
2017-01-01
In this paper, an adaptive trajectory tracking controller based on extended normalized radial basis function network (ENRBFN) is proposed for 3-degree-of-freedom four rotor hover vehicle subjected to external disturbance i.e. wind turbulence. Mathematical model of four rotor hover system is developed using equations of motions and a new computational intelligence based technique ENRBFN is introduced to approximate the unmodeled dynamics of the hover vehicle. The adaptive controller based on the Lyapunov stability approach is designed to achieve tracking of the desired attitude angles of four rotor hover vehicle in the presence of wind turbulence. The adaptive weight update based on the Levenberg-Marquardt algorithm is used to avoid weight drift in case the system is exposed to external disturbances. The closed-loop system stability is also analyzed using Lyapunov stability theory. Simulations and experimental results are included to validate the effectiveness of the proposed control scheme.
Local atomic structure of Fe/Cr multilayers: Depth-resolved method
NASA Astrophysics Data System (ADS)
Babanov, Yu. A.; Ponomarev, D. A.; Devyaterikov, D. I.; Salamatov, Yu. A.; Romashev, L. N.; Ustinov, V. V.; Vasin, V. V.; Ageev, A. L.
2017-10-01
A depth-resolved method for the investigation of the local atomic structure by combining data of X-ray reflectivity and angle-resolved EXAFS is proposed. The solution of the problem can be divided into three stages: 1) determination of the element concentration profile with depth z from X-ray reflectivity data; 2) determination of the absorption coefficient μi(z, E) of element i as a function of depth and photon energy E using the angle-resolved X-ray fluorescence EXAFS data Ii(E, ϑl); 3) determination of the partial correlation functions gij(z, r) as a function of depth from μi(z, E). All stages of the proposed method are demonstrated on a model example of a multilayer nanoheterostructure Cr/Fe/Cr/Al2O3. Three partial pair correlation functions are obtained. A modified Levenberg-Marquardt algorithm and a regularization method are applied.
10 CFR Appendixes L-M to Part 50 - [Reserved
Code of Federal Regulations, 2011 CFR
2011-01-01
10 CFR Part 50, Appendixes L-M [Reserved]: Nuclear Regulatory Commission, Domestic Licensing of Production and Utilization Facilities.
Star centroiding error compensation for intensified star sensors.
Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun
2016-12-26
A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. Regarding the intensified star sensor, the image intensifier is utilized to improve the sensitivity, thereby further improving the dynamic performance of the star sensor. However, the introduction of image intensifier results in star centroiding accuracy decrease, further influencing the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce the influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on the orthographic projection through the analysis of errors introduced by the image intensifier. Thereafter, the position errors at the target points based on the model are obtained by using the Levenberg-Marquardt (LM) optimization method. Last, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error of the image plane. Laboratory calibration result and night sky experiment result show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of the intensified star sensors.
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency are used to derive a MLE discriminator function. The optimal value of the cost function is searched by an efficient Levenberg-Marquardt (LM) method iteratively. Its performance including Cramér-Rao bound (CRB), dynamic characteristics and computation burden are analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations both in pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and conventional method is designed to achieve the optimal performance both in weak and strong signal circumstances.
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) optimization technique based on the steepest descent algorithm is known to perform poorly and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RB networks using 2-D synthetic resistivity data and then applied them to actual field vertical electrical sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are in good agreement with the results of existing inversion approaches. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
Accelerated Training for Large Feedforward Neural Networks
NASA Technical Reports Server (NTRS)
Stepniewski, Slawomir W.; Jorgensen, Charles C.
1998-01-01
In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (the Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We conclude by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
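The Hessian-vector product at the heart of RBackprop can be illustrated in miniature. The sketch below uses a central finite difference of the gradient in place of the exact R-operator, on an assumed quadratic test function; it shows only the idea of forming H·v without ever building H.

```python
# The quadratic test function and all values below are assumed for
# illustration; RBackprop itself computes H·v exactly via an R-operator.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])         # Hessian of f(x) = 0.5 * x^T A x

def grad_f(x):
    return A @ x                   # gradient of the quadratic

def hessian_vector(grad, x, v, eps=1e-5):
    # central difference of the gradient: H v ≈ (g(x+εv) - g(x-εv)) / (2ε)
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

x = np.array([1.0, -2.0])
v = np.array([0.5, 1.0])
print(hessian_vector(grad_f, x, v))   # ≈ A @ v = [2.5, 2.5]
```

For a quadratic the finite difference is essentially exact; for a neural network loss, an exact R-operator (or automatic differentiation) avoids the truncation error.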
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method not infrequently diverges if the starting point is far from the root; however, in these regions the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
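The behavior the comparison above relies on can be sketched as a minimal Levenberg-Marquardt root finder: a large damping factor λ pushes the step toward steepest descent (robust far from the root), while a small λ approaches the Newton step (fast near it). The system solved here is an arbitrary illustration, not from the paper.

```python
# Generic illustration; the test system below is assumed, not from the paper.
import numpy as np

def F(x):
    # example system: a circle (x0^2 + x1^2 = 4) intersected with x0 = x1
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def J(x):
    # analytic Jacobian of F
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

def levenberg_marquardt(F, J, x0, lam=1e-2, tol=1e-12, max_iter=200):
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * F(x) @ F(x)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        # damped normal equations: (J^T J + lam*I) step = -J^T F
        step = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ Fx)
        x_trial = x + step
        trial_cost = 0.5 * F(x_trial) @ F(x_trial)
        if trial_cost < cost:      # accept step, relax toward Newton
            x, cost, lam = x_trial, trial_cost, lam * 0.5
        else:                      # reject step, fall back toward descent
            lam *= 2.0
        if cost < tol:
            break
    return x

root = levenberg_marquardt(F, J, x0=[3.0, 1.0])
print(root)    # converges to the root (sqrt(2), sqrt(2))
```

Note the space cost the abstract mentions: each iteration stores and solves a small linear system, which the steepest-descent variant avoids at the price of extra function evaluations.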
ERIC Educational Resources Information Center
Peacock, Christopher
2012-01-01
The purpose of this research effort was to develop a model that provides repeatable Location Management (LM) testing using a network simulation tool, QualNet version 5.1 (2011). The model will provide current and future protocol developers a framework to simulate stable protocol environments for development. This study used the Design Science…
Apollo LM guidance computer software for the final lunar descent.
NASA Technical Reports Server (NTRS)
Eyles, D.
1973-01-01
In all manned lunar landings to date, the lunar module Commander has taken partial manual control of the spacecraft during the final stage of the descent, below roughly 500 ft altitude. This report describes programs developed at the Charles Stark Draper Laboratory, MIT, for use in the LM's guidance computer during the final descent. At this time computational demands on the on-board computer are at a maximum, and particularly close interaction with the crew is necessary. The emphasis is on the design of the computer software rather than on justification of the particular guidance algorithms employed. After the computer and the mission have been introduced, the current configuration of the final landing programs and an advanced version developed experimentally by the author are described.
ISOT_Calc: A versatile tool for parameter estimation in sorption isotherms
NASA Astrophysics Data System (ADS)
Beltrán, José L.; Pignatello, Joseph J.; Teixidó, Marc
2016-09-01
Geochemists and soil chemists commonly use parametrized sorption data to assess the transport and impact of pollutants in the environment. However, this evaluation is often hampered by a lack of detailed sorption data analysis, which leads to inaccurate transport modeling. To this end, we present a novel software tool to precisely analyze and interpret sorption isotherm data. Our tool, coded in Visual Basic for Applications (VBA), operates embedded within the Microsoft Excel™ environment. It consists of a user-defined function named ISOT_Calc, followed by a supplementary optimization Excel macro (Ref_GN_LM). The ISOT_Calc function estimates the solute equilibrium concentration in the aqueous and solid phases (Ce and q, respectively). Hence, it offers great flexibility in the optimization of the sorption isotherm parameters, which can be carried out over the residuals of q, Ce, or both simultaneously (i.e., orthogonal distance regression). The function includes the most common sorption isotherm models as predefined equations, as well as the possibility to easily introduce custom-defined ones. The Ref_GN_LM macro performs the parameter optimization using a Levenberg-Marquardt modified Gauss-Newton iterative procedure. In order to evaluate the performance of the presented tool, both the function and the optimization macro have been applied to different sorption data examples described in the literature. Results showed that the optimization of the isotherm parameters was successfully achieved in all cases, indicating the robustness and reliability of the tool. Thus, the presented software tool, available to researchers and students for free, has proven to be a user-friendly and interesting alternative to conventional fitting tools used in sorption data analysis.
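The class of fit the Ref_GN_LM macro performs can be sketched outside Excel as well. The snippet below, with synthetic data and an assumed Langmuir model, uses SciPy's curve_fit, which defaults to a Levenberg-Marquardt solver when no bounds are given; it is a stand-in, not the VBA tool itself.

```python
# Synthetic data and an assumed Langmuir isotherm, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K):
    # Langmuir isotherm: q = q_max * K * Ce / (1 + K * Ce)
    return q_max * K * Ce / (1.0 + K * Ce)

Ce = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])  # equilibrium conc.
noise = np.array([0.2, -0.1, 0.3, -0.2, 0.1, -0.3, 0.2, -0.1])
q_obs = langmuir(Ce, 30.0, 1.5) + noise                    # sorbed amounts

# with no bounds, curve_fit defaults to a Levenberg-Marquardt solver
popt, pcov = curve_fit(langmuir, Ce, q_obs, p0=[10.0, 0.5])
q_max_hat, K_hat = popt
print(f"q_max = {q_max_hat:.2f}, K = {K_hat:.3f}")
```

Fitting over the residuals of Ce instead of q, or both (orthogonal distance regression), would require reformulating the residual function, which is exactly the flexibility ISOT_Calc exposes.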
Comparison of Conceptual and Neural Network Rainfall-Runoff Models
NASA Astrophysics Data System (ADS)
Vidyarthi, V. K.; Jain, A.
2014-12-01
Rainfall-runoff (RR) models are a key component of any water resource application. Two types of techniques are usually employed for RR modeling: physics-based and data-driven techniques. Although physics-based models have been used for operational purposes for a very long time, they provide only reasonable accuracy in modeling and forecasting. On the other hand, Artificial Neural Networks (ANNs) have been reported to provide superior modeling performance; however, they have not been accepted by practitioners, decision makers and water resources engineers as operational tools. ANNs, one of the data-driven techniques, became popular for efficient modeling of complex natural systems in the last couple of decades. In this paper, comparative results for conceptual and ANN models in RR modeling are presented. The conceptual models were developed using the rainfall-runoff library (RRL), and a genetic algorithm (GA) was used for their calibration. A feed-forward neural network structure trained by the Levenberg-Marquardt (LM) algorithm was adopted to develop all the ANN models. The daily rainfall, runoff and various climatic data derived from the Bird Creek basin, Oklahoma, USA were employed to develop all the models included here. Daily potential evapotranspiration (PET), which was used in conceptual model development, was calculated using the Penman equation. The input variables were selected on the basis of correlation analysis. Performance evaluation statistics such as average absolute relative error (AARE), Pearson's correlation coefficient (R) and threshold statistics (TS) were used to assess the performance of all the models developed here. The results obtained in this study show that the ANN models outperform the conventional conceptual models due to their ability to learn the non-linearity and complexity inherent in data of the rainfall-runoff process in a more efficient manner. There is a strong need to
Machine learning modelling for predicting soil liquefaction susceptibility
NASA Astrophysics Data System (ADS)
Samui, P.; Sitharam, T. G.
2011-01-01
This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second uses the Support Vector Machine (SVM), which is firmly grounded in statistical learning theory and uses a classification technique. The ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT value [(N1)60] and cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models, requiring only two parameters [(N1)60 and peak ground acceleration (amax/g)], for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.
SPIN: An Inversion Code for the Photospheric Spectral Line
NASA Astrophysics Data System (ADS)
Yadav, Rahul; Mathew, Shibu K.; Tiwary, Alok Ranjan
2017-08-01
Inversion codes are the most useful tools to infer the physical properties of the solar atmosphere from the interpretation of Stokes profiles. In this paper, we present the details of a new Stokes Profile INversion code (SPIN) developed specifically to invert the spectro-polarimetric data of the Multi-Application Solar Telescope (MAST) at Udaipur Solar Observatory. The SPIN code adopts Milne-Eddington approximations to solve the polarized radiative transfer equation (RTE), and a modified Levenberg-Marquardt algorithm is employed for the fitting. We describe the details and utilization of the SPIN code for inverting spectro-polarimetric data, and we present tests performed to validate it by comparing its results with those of other widely used inversion codes (VFISV and SIR). The inverted results of the SPIN code after its application to Hinode/SP data have also been compared with the results from other inversion codes.
Nonlinear Schrödinger approach to European option pricing
NASA Astrophysics Data System (ADS)
Wróblewski, Marcin
2017-05-01
This paper deals with numerical option pricing methods based on a Schrödinger model rather than the Black-Scholes model. Nonlinear Schrödinger boundary value problems seem to be alternatives to linear models that better reflect the complexity and behavior of real markets. Therefore, building on the nonlinear Schrödinger option pricing model proposed in the literature, a model augmented by external atomic potentials is proposed and numerically tested in this paper. In terms of statistical physics, the developed model describes the option in analogy to a pair of two identical quantum particles occupying the same state. The proposed model is used to price European call options on a stock index. The model is calibrated to market data using the Levenberg-Marquardt algorithm, and a Runge-Kutta method is used to solve the discretized boundary value problem numerically. Numerical results are provided and discussed. It seems that our proposal models phenomena observed in the real market more accurately than linear models do.
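The calibration step, adjusting model parameters so that model prices reproduce market quotes by Levenberg-Marquardt, can be sketched generically. The simple two-parameter "model price" function below is a hedged stand-in for the Schrödinger pricing model, and the synthetic quotes replace market data:

```python
# Hedged sketch: LM calibration of pricing-model parameters to observed
# option quotes. The model_price function is an illustrative stand-in,
# not the nonlinear Schrodinger model of the paper.
import numpy as np
from scipy.optimize import least_squares

strikes = np.linspace(80.0, 120.0, 9)

def model_price(theta, K):
    # assumed toy pricing function with parameters a, b
    a, b = theta
    return a * np.maximum(100.0 - K, 0.0) + b * np.exp(-((K - 100.0) / 20.0) ** 2)

true_theta = np.array([0.9, 5.0])
market = model_price(true_theta, strikes)        # synthetic "market" quotes

# Levenberg-Marquardt: minimize squared price residuals over theta
fit = least_squares(lambda th: model_price(th, strikes) - market,
                    x0=[0.5, 1.0], method="lm")
print("calibrated parameters:", fit.x)
```

With noiseless quotes the calibrated parameters recover the generating values; with real market data the residual norm at convergence measures the calibration error.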
Decomposing the permeability spectra of nanocrystalline finemet core
NASA Astrophysics Data System (ADS)
Varga, Lajos K.; Kovac, Jozef
2018-04-01
In this paper we present a theoretical and experimental investigation of the magnetization contributions to the permeability spectra of a normally annealed Finemet core with a round-type hysteresis curve. Real and imaginary parts of the permeability were determined as a function of the exciting magnetic field (HAC) between 40 Hz and 110 MHz using an Agilent 4294A Precision Impedance Analyzer. The amplitude of the exciting field was below and around the coercive field of the sample. The spectra were decomposed, using the Levenberg-Marquardt algorithm running under Origin 9 software, into four contributions: (i) eddy current; (ii) Debye relaxation of magnetization rotation; (iii) Debye relaxation of damped domain wall (DW) motion; and (iv) resonant-type DW motion. For small exciting amplitudes the first two components dominate. The last two contributions, connected to DW motion, appear only for relatively large HAC, around the coercive force. All the contributions are discussed in detail, accentuating the role of eddy currents, which are not negligible even for the smallest applied exciting field.
Model reduction for experimental thermal characterization of a holding furnace
NASA Astrophysics Data System (ADS)
Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane
2017-09-01
Vacuum holding induction furnaces are used in the manufacturing of turbine blades by the lost-wax casting process, where control of the solidification parameters is a key factor. What is required here is the definition of the structure of a reduced heat transfer model, identified experimentally through the estimation of its parameters. Internal sensor outputs, together with this model, can be used to assess the thermal state of the furnace through an inverse approach, for better control. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. The internal induction heat source as well as the transient radiative transfer inside the furnace are calculated through this detailed model. A reduced lumped-body model has been constructed to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been carried out with a Levenberg-Marquardt least-squares minimization algorithm, using two synthetic temperature signals, with a further validation test.
Terrain Model Registration for Single Cycle Instrument Placement
NASA Technical Reports Server (NTRS)
Deans, Matthew; Kunz, Clay; Sargent, Randy; Pedersen, Liam
2003-01-01
This paper presents an efficient and robust method for registration of terrain models created using stereo vision on a planetary rover. Our approach projects two surface models into a virtual depth map, rendering the models as they would be seen from a single range sensor. Correspondence is established based on which points project to the same location in the virtual range sensor. A robust norm of the deviations in observed depth is used as the objective function, and the algorithm searches for the rigid transformation which minimizes the norm. An initial coarse search is done using rover pose information from odometry and orientation sensing. A fine search is done using Levenberg-Marquardt. Our method enables a planetary rover to keep track of designated science targets as it moves, and to hand off targets from one set of stereo cameras to another. These capabilities are essential for the rover to autonomously approach a science target and place an instrument in contact in a single command cycle.
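The fine-search step above, Levenberg-Marquardt over a rigid transformation, can be illustrated in a reduced setting. This toy 2D version (assumed point correspondences, no virtual depth map or robust norm) keeps only the LM-over-rigid-motion idea:

```python
# Hedged toy version of the fine-search step: LM over a 2-D rigid
# transform (theta, tx, ty) minimizing point-position deviations.
# The real system works on projected range images.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
pts = rng.uniform(-1.0, 1.0, (30, 2))            # "terrain" points, model A

def transform(p, x):
    theta, tx, ty = p
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return x @ R.T + [tx, ty]

true_p = [0.2, 0.5, -0.3]                        # assumed inter-model motion
obs = transform(true_p, pts)                     # same points in model B

fit = least_squares(lambda p: (transform(p, pts) - obs).ravel(),
                    x0=[0.0, 0.0, 0.0], method="lm")
print("recovered (theta, tx, ty):", fit.x)
```

In the paper's setting the coarse search from odometry and orientation sensing would supply `x0`, and a robust norm would replace the plain squared residuals to resist outliers.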
Size distribution of Portuguese firms between 2006 and 2012
NASA Astrophysics Data System (ADS)
Pascoal, Rui; Augusto, Mário; Monteiro, A. M.
2016-09-01
This study aims to describe the size distribution of Portuguese firms, as measured by annual sales and total assets, between 2006 and 2012, giving an economic interpretation for the evolution of the distribution over time. Three distributions are fitted to the data: the lognormal, the Pareto (with Zipf as a particular case) and the Simplified Canonical Law (SCL). We present the main arguments found in the literature to justify the use of these distributions and emphasize the interpretation of the SCL coefficients. Estimation methods include maximum likelihood, modified ordinary least squares in log-log scale, and nonlinear least squares using the Levenberg-Marquardt algorithm. Applying these approaches to the Portuguese firm data, we analyze whether the evolution of the estimated parameters of the lognormal, power-law and SCL distributions accords with the known recession period after 2008. This is confirmed for sales but not for assets, leading to the conclusion that the former variable is a better proxy for firm size.
A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BiCGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, as many compute nodes as there are adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the central difference is used), are used to reduce the calibration time from days or weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analyses where thousands of compute nodes can be efficiently utilized.
NASA Astrophysics Data System (ADS)
Vinodhini, K.; Divya Bharathi, R.; Srinivasan, K.
2018-02-01
Lactose is an optically active substance. As one of the reducing sugars, it exhibits mutarotation when dissolved in any solvent. In solution, lactose exists in two isomeric forms, alpha-lactose (α-L) and beta-lactose (β-L), through the mutarotation reaction. Mutarotation produces a dynamic equilibrium between the two isomers in solution, and the kinetics of this process determines the growth rate of alpha-lactose monohydrate (α-LM) crystals. Since no data were available on the specific rotation of aqueous α-LM solutions at different concentrations at 33 °C, initial experiments were carried out to measure it. The specific rotations of the solutions decreased with time through the mutarotation reaction. The initial and final (equilibrium) specific rotations of the solutions were determined using an automatic digital polarimeter. The compositions of α-L and β-L in all prepared solutions were calculated from the initial and final optical rotations by the method of Sharp and Doob. The composition of α-L decreased, whereas that of β-L increased, with increasing concentration of α-LM at 33 °C. The experimental results revealed that this method can be easily and safely employed to study the dependence of the specific rotation of solutions on their concentration. The effect of β-lactose on the morphology of nucleated α-LM single crystals has also been studied under different experimental conditions.
Comparison of methods for accurate end-point detection of potentiometric titrations
NASA Astrophysics Data System (ADS)
Villela, R. L. A.; Borges, P. P.; Vyskočil, L.
2015-01-01
Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data, and the consequent error analysis of the end-point values, were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.
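The comparison can be illustrated on simulated titration data: locate the end point (a) at the steepest part of the curve, a simple stand-in for the traditional derivative construction, and (b) from a Levenberg-Marquardt fit of a sigmoid to the whole curve. The curve shape, noise level and tanh model below are assumptions for illustration:

```python
# Hedged sketch of the comparison on a simulated potentiometric curve:
# derivative-based end point vs. LM sigmoid fit. All values illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
v = np.linspace(0.0, 20.0, 201)                  # titrant volume, mL
v_eq = 10.37                                     # "true" end point
emf = 300.0 * np.tanh((v - v_eq) / 0.8) + rng.normal(0.0, 0.5, v.size)

# (a) derivative technique (first-derivative maximum as a simple stand-in
# for the second-derivative zero crossing): steepest part of E(V)
ep_deriv = v[np.argmax(np.gradient(emf, v))]

# (b) LM fit of E(V) = a*tanh((V - Veq)/w) + c over the whole curve
def resid(p):
    a, veq, w, c = p
    return a * np.tanh((v - veq) / w) + c - emf

fit = least_squares(resid, x0=[250.0, 9.0, 1.0, 0.0], method="lm")
ep_fit = fit.x[1]
print(f"derivative estimate: {ep_deriv:.2f} mL, LM fit: {ep_fit:.3f} mL")
```

The derivative estimate is limited by the volume grid and local noise, while the LM fit pools all points, which is the mechanism behind the accuracy advantage the paper reports.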
Gompertzian stochastic model with delay effect to cervical cancer growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah
2015-02-03
In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt non-linear least-squares optimization method. We apply the Milstein scheme to solve the stochastic model numerically. The adequacy of the mathematical model is measured by comparing the simulated results with clinical data on cervical cancer growth. Low Mean-Square Error (MSE) values of the Gompertzian stochastic model with delay effect indicate good fits.
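The Milstein discretization used above can be sketched for a plain (delay-free) Gompertzian SDE dX = X(a − b ln X) dt + σX dW; the delay term and the clinically estimated parameter values of the paper are omitted, and a, b, σ below are illustrative:

```python
# Hedged sketch of the numerical step only: Milstein scheme for
# dX = X (a - b ln X) dt + sigma X dW. Parameters are illustrative.
import numpy as np

def milstein_gompertz(x0, a, b, sigma, t_end, n_steps, rng):
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = x[i] * (a - b * np.log(x[i]))
        # Milstein correction 0.5*g*g'*(dW^2 - dt) with g(x) = sigma*x
        x[i + 1] = (x[i] + drift * dt + sigma * x[i] * dw
                    + 0.5 * sigma**2 * x[i] * (dw**2 - dt))
    return x

rng = np.random.default_rng(3)
path = milstein_gompertz(x0=1.0, a=0.8, b=0.3, sigma=0.05,
                         t_end=20.0, n_steps=2000, rng=rng)
print("final size:", path[-1])
```

For multiplicative noise g(x) = σx the Milstein correction is available in closed form, which is why the scheme improves on Euler-Maruyama here at negligible extra cost. The deterministic part settles near exp(a/b), so the simulated path fluctuates around that plateau.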
Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng
2009-11-01
The conventional phase-difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented for a multi-echo gradient echo (GRE) sequence that uses the fat signal as an internal reference to overcome these problems. The internal-reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat, which contain the temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of the various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map with thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of the sample water:fat signal ratio on the accuracy of the temperature estimate was evaluated in a water-fat mixed phantom experiment, with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.
Identification and control of plasma vertical position using neural network in Damavand tokamak.
Rasouli, H; Rasouli, C; Koohi, A
2013-02-01
In this work, a nonlinear model is introduced to determine the vertical position of the plasma column in the Damavand tokamak. Using this model as a simulator, a nonlinear neural network controller has been designed. In the first stage, the electronic drive and sensory circuits of the Damavand tokamak were modified; these circuits control the vertical position of the plasma column inside the vacuum vessel. Since the vertical position of the plasma is an unstable parameter, a direct closed-loop system identification algorithm is performed. In the second stage, a nonlinear model of the plasma vertical position is identified, based on the multilayer perceptron (MLP) neural network (NN) structure. Estimation of the simulator parameters is performed by the back-propagation error algorithm using the Levenberg-Marquardt optimization technique. The model is verified through simulation of the whole closed-loop system using both the simulator and the actual plant under similar conditions. In the final stage, an MLP neural network controller is designed for the simulator model, and online training is performed to tune the controller parameters. Simulation results justify the use of the NN controller for the actual plant.
Global Search Methods for Stellarator Design
NASA Astrophysics Data System (ADS)
Mynick, H. E.; Pomphrey, N.
2001-10-01
We have implemented a new variant, Stellopt-DE, of the stellarator optimizer Stellopt used by the NCSX team (A. Reiman, G. Fu, S. Hirshman, D. Monticello, et al., EPS Meeting on Controlled Fusion and Plasma Physics Research, Maastricht, the Netherlands, June 14-18, 1999 (European Physical Society, Petit-Lancy, 1999)). It is based on the "differential evolution" (DE) algorithm (R. Storn, K. Price, U.C. Berkeley Technical Report TR-95-012, ICSI, March 1995), a global search method that is far less prone to becoming trapped in suboptimal local minima of the cost function χ than local algorithms such as the Levenberg-Marquardt method presently used in Stellopt. Explorations of the stellarator configuration space z to which the DE method has been applied will be presented. Additionally, an accompanying effort to understand the results of this more global exploration has found that a wide range of previously studied Quasi-Axisymmetric Stellarators (QAS) fall into a small number of classes, and we obtain maps of χ(z) from which one can see the relative positions of these QAS and the reasons for the classes into which they fall.
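The contrast between the two search strategies can be demonstrated on a one-dimensional multimodal cost function (a Rastrigin-like toy, not the stellarator χ): differential evolution searches the whole interval, while an LM-style local solver only refines from its starting point and can stall in a nearby basin:

```python
# Hedged illustration of the paper's point, not the Stellopt code:
# global differential evolution vs. a local LM-style solver on a
# multimodal toy cost with its global minimum at z = 0.
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def residual(z):
    # multimodal 1-D cost: global minimum 0 at z = 0, local minima near
    # the other integers
    return np.atleast_1d(z[0] ** 2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * z[0])))

x_lm = least_squares(residual, x0=[2.5], method="lm").x[0]   # local refine
x_de = differential_evolution(lambda z: residual(z)[0],
                              bounds=[(-5.0, 5.0)], seed=4).x[0]
print(f"LM from x0=2.5 -> {x_lm:.3f}, DE -> {x_de:.3f}")
```

Which minimum the local solver lands in depends on its starting point and damping history, whereas DE's population-based search locates the global basin regardless of initialization, at the cost of many more function evaluations.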
NASA Technical Reports Server (NTRS)
Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak
2012-01-01
A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is explained in full, from the initialization of the unknowns to the retrievals. A sensitivity analysis is also performed in which the initial values of the inversion vary randomly. The results show that the inversion process is not very sensitive to initial values: a major part of the retrievals has a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, a roughness of 3 cm and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2005-02-01
This paper deals with the application of the artificial neural network (ANN) technique to a case history using 1-D inversion of vertical electrical resistivity sounding (VES) data from the Puga valley, Kashmir, India. The study area is important for its rich geothermal resources as well as from a tectonic point of view, as it is located near the collision boundary of the Indo-Asian crustal plates. In order to understand the resistivity structure and layer thicknesses, we used three-layer feedforward neural networks to model and predict the measured VES data. Three algorithms, back-propagation (BP), adaptive back-propagation (ABP) and the Levenberg-Marquardt algorithm (LMA), were applied to synthetic as well as real VES field data, and the efficiency of the supervised training networks is compared. The analyses suggest that the LMA is computationally faster and gives results that are more accurate and consistent than BP and ABP. The results obtained using the ANN inversions correlate remarkably well with the available borehole litho-logs. This feasibility study suggests that ANN methods offer an excellent complementary tool for the direct detection of layered resistivity structure.
Yoon, Ye-Eun; Im, Byung Gee; Kim, Jung-Suk; Jang, Jae-Hyung
2017-01-09
Tissue adhesives, which inherently serve as wound sealants or hemostatic agents, can be further augmented to acquire crucial functions as scaffolds, thereby accelerating wound healing or elevating the efficacy of tissue regeneration. Herein, multifunctional adherent fibrous matrices, acting as self-adhesive scaffolds capable of cell/gene delivery, were devised by coaxially electrospinning poly(caprolactone) (PCL) and poly(vinylpyrrolidone) (PVP). Wrapping the building-block PCL fibers with the adherent PVP layers formed film-like fibrous matrices that could rapidly adhere to wet biological surfaces, referred to as fibrous layered matrix (FiLM) adhesives. The inclusion of ionic salts (i.e., dopamine hydrochloride) in the sheath layers generated spontaneously multilayered fibrous adhesives, whose partial layers could be manually peeled off, termed derivative FiLM (d-FiLM). In the context of scaffolds/tissue adhesives, both FiLM and d-FiLM demonstrated almost identical characteristics (i.e., sticky and mechanical properties, and performance as cell/gene carriers). Importantly, a single FiLM process can yield multiple sets of d-FiLM for the same processing time, materials, and labor required to form a single conventional adhesive fibrous mat, thereby highlighting the economic advantages of the process. The FiLM/d-FiLM can contribute substantially to many biomedical applications, especially fields that require urgent aid (e.g., endoscopic surgeries, implantation in wet environments, severe wounds).
Open-source Software for Exoplanet Atmospheric Modeling
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph
2018-01-01
I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra, implemented in Python. These include: (1) a Bayesian-statistics package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or ExoMol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.
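The posterior sampling in item (1) can be sketched with a minimal Metropolis sampler; the Gaussian log-posterior and all settings below are illustrative assumptions, not the package's actual API:

```python
# Hedged sketch: minimal Metropolis MCMC posterior sampler of the kind
# such a Bayesian package provides. Target and settings are illustrative.
import numpy as np

def metropolis(log_post, x0, n_samples, step, rng):
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    out = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject
            x, lp = prop, lp_prop
        out[i] = x                                      # store current state
    return out

rng = np.random.default_rng(7)
# assumed target: Gaussian posterior with mean 3.0 and std 0.5
samples = metropolis(lambda t: -0.5 * np.sum((t - 3.0) ** 2 / 0.25),
                     x0=[0.0], n_samples=20000, step=0.5, rng=rng)
print("posterior mean ~", samples[5000:].mean())
```

In a retrieval setting, the Levenberg-Marquardt fit typically supplies the starting point and a rough scale for the proposal step, and the MCMC chain then maps the full posterior around it.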
On the adequacy of identified Cole-Cole models
NASA Astrophysics Data System (ADS)
Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.
2003-06-01
The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency-domain complex impedance data, and a simple error estimate is obtained from the squared difference between the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the "optimal" estimation of the Cole-Cole parameters; it differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations, with no need for an initial guess. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests, and we give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ2 technique. The second is a parameter-accuracy-based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to assess the adequacy of the resulting Cole-Cole model.
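A minimal version of the standard Levenberg-Marquardt inversion that the new direct algorithm is compared against might look as follows, fitting the four Cole-Cole parameters (ρ0, m, τ, c) of ρ(ω) = ρ0[1 − m(1 − 1/(1 + (iωτ)^c))] to complex data; the frequency range and parameter values are illustrative:

```python
# Hedged sketch: LM fit of the Cole-Cole complex-resistivity model
#   rho(w) = rho0 * (1 - m * (1 - 1/(1 + (i*w*tau)^c)))
# to noiseless synthetic data. Values are illustrative.
import numpy as np
from scipy.optimize import least_squares

def cole_cole(p, w):
    rho0, m, tau, c = p
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

w = np.logspace(-2, 4, 40)                       # angular frequencies
true_p = np.array([100.0, 0.5, 0.1, 0.6])        # rho0, m, tau, c
data = cole_cole(true_p, w)

def resid(p):
    d = cole_cole(p, w) - data
    return np.concatenate([d.real, d.imag])      # stack real/imag residuals

fit = least_squares(resid, x0=[80.0, 0.3, 0.05, 0.5], method="lm")
print("estimated [rho0, m, tau, c]:", fit.x)
```

Splitting the complex misfit into stacked real and imaginary parts is the standard device for handing complex-valued data to a real-valued least-squares solver; note that, unlike the direct algorithm the paper discusses, this route does require an initial guess.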
Level-set techniques for facies identification in reservoir modeling
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.; McLaughlin, Dennis
2011-03-01
In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
Application of COMSOL to Acoustic Imaging
2010-10-01
The neural network, implemented with Matlab's Neural Network Toolbox, was trained with Levenberg-Marquardt (LM) (2 epochs), followed by Broyden-Fletcher-Goldfarb-Shanno (BFGS) (2 epochs), followed by scaled conjugate gradient (SCG) (100 epochs). Among the optimization techniques considered, scaled conjugate gradient ("SCG") was fast.
Optimization Methods in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.
2009-09-01
Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain the best-fit model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually tuned to specific classes of functions. Sherpa, however, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: a Levenberg-Marquardt algorithm, obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and a Nelder-Mead simplex method implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We present the methods in Sherpa and discuss their use cases, focusing on the application to Chandra data with both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
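The Cash variant can be sketched directly (this is not Sherpa's code): minimize C = 2 Σ (m_i − d_i ln m_i) over the model parameters with the Nelder-Mead simplex, one of the local methods described above. The power-law model and synthetic counts are assumptions:

```python
# Hedged sketch (not Sherpa): forward fitting of Poisson counts by
# minimizing the Cash statistic C = 2*sum(m - d*ln m) with Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
e = np.linspace(1.0, 10.0, 50)                   # "energy" grid

def model(p):
    norm, slope = p
    return norm * e ** (-slope)                  # assumed power-law model

counts = rng.poisson(model([50.0, 1.5]))         # synthetic Poisson data

def cash(p):
    m = model(p)
    if np.any(m <= 0):                           # guard against bad params
        return np.inf
    return 2.0 * np.sum(m - counts * np.log(m))

fit = minimize(cash, x0=[30.0, 1.0], method="Nelder-Mead")
print("best-fit [norm, slope]:", fit.x)
```

Unlike chi^2, the Cash statistic needs no per-bin Gaussian error estimate, which is why it is preferred in the low-count regime the abstract mentions; the simplex method is used here because C is not a sum of squares, so LM does not apply directly.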
Evaluation of Laser Based Alignment Algorithms Under Additive Random and Diffraction Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClay, W A; Awwal, A; Wilhelmsen, K
2004-09-30
The purpose of the automatic alignment algorithm at the National Ignition Facility (NIF) is to determine the position of a laser beam based on the positions of beam features in video images. The position information obtained is used to command motors and attenuators to adjust the beam lines to the desired position, which facilitates the alignment of all 192 beams. One of the goals of the algorithm development effort is to ascertain the performance, reliability, and uncertainty of the position measurement. This paper describes a method of evaluating the performance of algorithms using Monte Carlo simulation. In particular we show the application of this technique to the LM1_LM3 algorithm, which determines the position of a series of two-beam light sources. The performance of the algorithm was evaluated on an ensemble of over 900 simulated images with varying image intensities and noise counts, as well as varying diffraction noise amplitude and frequency. The performance of the algorithm on the image data set had a tolerance well beneath the 0.5-pixel system requirement.
Turbulence profiling for adaptive optics tomographic reconstructors
NASA Astrophysics Data System (ADS)
Laidlaw, Douglas J.; Osborn, James; Wilson, Richard W.; Morris, Timothy J.; Butterley, Timothy; Reeves, Andrew P.; Townson, Matthew J.; Gendron, Éric; Vidal, Fabrice; Morel, Carine
2016-07-01
To approach optimal performance, advanced Adaptive Optics (AO) systems deployed on ground-based telescopes must have accurate knowledge of atmospheric turbulence as a function of altitude. Stereo-SCIDAR is a high-resolution stereoscopic instrument dedicated to this measurement. Here, its profiles are directly compared to internal AO telemetry atmospheric profiling techniques for CANARY (Vidal et al. 2014), a Multi-Object AO (MOAO) pathfinder on the William Herschel Telescope (WHT), La Palma. In total twenty datasets from July and October of 2014 are analysed. Levenberg-Marquardt fitting algorithms dubbed Direct Fitting and Learn 2 Step (L2S; Martin 2014) are used to recover profile information via covariance matrices, respectively attaining average Pearson product-moment correlation coefficients with stereo-SCIDAR of 0.2 and 0.74. By excluding the measure of covariance between orthogonal Wavefront Sensor (WFS) slopes, these results have revised values of 0.65 and 0.2. A data analysis technique that combines L2S and SLODAR is subsequently introduced that achieves a correlation coefficient of 0.76.
Artificial neural networks in knee injury risk evaluation among professional football players
NASA Astrophysics Data System (ADS)
Martyna, Michałowska; Tomasz, Walczak; Krzysztof, Grabski Jakub; Monika, Grygorowicz
2018-01-01
Lower limb injury risk assessment was proposed based on isokinetic examination, which is part of a standard athlete's biomechanical evaluation performed mainly twice a year. Information about non-contact knee injuries (or lack thereof) sustained within twelve months after the isokinetic test, confirmed by USG, was verified. The three most common types of football injuries were taken into consideration: anterior cruciate ligament (ACL) rupture, and hamstring and quadriceps muscle injuries. 22 parameters obtained from the isokinetic tests were divided into 4 groups and used as input parameters of five feedforward artificial neural networks (ANNs); the 5th group consisted of all considered parameters. The networks were trained with the Levenberg-Marquardt backpropagation algorithm to return a value close to 1 for parameter sets corresponding to an injury event and close to 0 for parameters with no injury recorded within 6-12 months after the isokinetic test. The results of this study show that ANNs might be useful tools that simplify the simultaneous interpretation of many numerical parameters, but the most important factor significantly influencing the results is the database used for ANN training.
Hemmat, Abbas; Kafashan, Jalal; Huang, Hongying
2017-01-01
To study the optimum process conditions for pretreatments and anaerobic co-digestion of oil refinery wastewater (ORWW) with chicken manure, an L9 (3^4) Taguchi orthogonal array was applied. The biogas production (BGP), biomethane content (BMP), and chemical oxygen demand solubilization (CODS) stabilization rate were evaluated as the process outputs. The optimum conditions were obtained using Design Expert software (version 7.0.0). The results indicated that the optimum conditions could be achieved with 44% ORWW, 36°C temperature, 30 min sonication, and 6% TS in the digester. Under these conditions, the optimum BGP, BMP, and CODS removal rates were 294.76 mL/gVS, 151.95 mL/gVS, and 70.22%, respectively. In addition, the artificial neural network (ANN) technique was implemented to develop a model for predicting BGP yield and BMP content. The Levenberg-Marquardt algorithm was utilized to train the ANN, and an architecture of 9-19-2 was obtained for the ANN model. PMID:29441352
Prediction of Film Cooling Effectiveness on a Gas Turbine Blade Leading Edge Using ANN and CFD
NASA Astrophysics Data System (ADS)
Dávalos, J. O.; García, J. C.; Urquiza, G.; Huicochea, A.; De Santiago, O.
2018-05-01
In this work, the area-averaged film cooling effectiveness (AAFCE) on a gas turbine blade leading edge was predicted by employing an artificial neural network (ANN) with hole diameter, injection angle, blowing ratio, and hole and column pitch as input variables. The database used to train the network was built using computational fluid dynamics (CFD) based on a two-level full factorial design of experiments. The CFD numerical model was validated against an experimental rig, where a first-stage blade of a gas turbine was represented by a cylindrical specimen. The ANN architecture was composed of three layers with four neurons in the hidden layer, and Levenberg-Marquardt was selected as the ANN optimization algorithm. The AAFCE was successfully predicted by the ANN with a regression coefficient R2>0.99 and a root mean square error RMSE=0.0038. The ANN weight coefficients were used to estimate the relative importance of the input parameters. Blowing ratio was the most influential parameter, with a relative importance of 40.36%, followed by hole diameter. Additionally, the ANN model was used to analyze the relationships between the input parameters.
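Estimating input importance from ANN weights, as done above, is commonly implemented with Garson's algorithm; the abstract does not specify the exact method, so the following is a hedged sketch. The 5x4x1 shapes mirror a network with five inputs and four hidden neurons, but the weight values are random placeholders:

```python
import numpy as np

# Garson-style relative importance from the weights of a
# single-hidden-layer network (illustrative weights, not the paper's).
rng = np.random.default_rng(0)
W_ih = rng.normal(size=(5, 4))   # input -> hidden weights
W_ho = rng.normal(size=(4, 1))   # hidden -> output weights

def garson_importance(W_ih, W_ho):
    # Contribution of each input routed through each hidden neuron.
    c = np.abs(W_ih) * np.abs(W_ho).T          # shape (n_in, n_hidden)
    c = c / c.sum(axis=0, keepdims=True)       # normalize per hidden neuron
    imp = c.sum(axis=1)
    return imp / imp.sum()                     # relative importance, sums to 1

importance = garson_importance(W_ih, W_ho)
```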
Imfit: A Fast, Flexible Program for Astronomical Image Fitting
NASA Astrophysics Data System (ADS)
Erwin, Peter
2014-08-01
Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources; it is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian profiles for galaxy decompositions, along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
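Profile fitting of the kind Imfit performs can be illustrated with SciPy's Levenberg-Marquardt fitter on a 1D Sersic profile. This is a sketch, not Imfit's implementation; the b_n approximation (b_n ≈ 2n − 1/3) and the synthetic data are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, I_e, r_e, n):
    # 1D Sersic profile; b_n uses the common approximation 2n - 1/3.
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 200)
I_obs = sersic(r, 100.0, 2.0, 4.0)        # noise-free synthetic profile
# curve_fit uses Levenberg-Marquardt for unbounded problems.
popt, pcov = curve_fit(sersic, r, I_obs, p0=[90.0, 1.8, 3.5])
```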
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Coelho, Lúcia H G; Gutz, Ivano G R
2006-03-15
A chemometric method for the analysis of conductometric titration data was introduced to extend its applicability to lower concentrations and more complex acid-base systems. Auxiliary pH measurements were made during the titration to assist the calculation of the distribution of protonatable species on the basis of known or guessed equilibrium constants. Conductivity values of each ionized or ionizable species possibly present in the sample were introduced into a general equation in which the only unknown parameters were the total concentrations of (conjugated) bases and of strong electrolytes not involved in acid-base equilibria. All these concentrations were adjusted by a multiparametric nonlinear regression (NLR) method based on the Levenberg-Marquardt algorithm. This first conductometric titration method with NLR analysis (CT-NLR) was successfully applied to simulated conductometric titration data and to synthetic samples with multiple components at concentrations as low as those found in rainwater (approximately 10 micromol L(-1)). It was possible to resolve and quantify mixtures containing a strong acid, formic acid, acetic acid, ammonium ion, bicarbonate and inert electrolyte with an accuracy of 5% or better.
Broiler weight estimation based on machine vision and artificial neural network.
Amraei, S; Abdanan Mehdizadeh, S; Salari, S
2017-04-01
1. Machine vision and artificial neural network (ANN) procedures were used to estimate the live body weight of broiler chickens in 30 1-d-old broiler chickens reared for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse-fitting algorithm was used and the chickens' heads and tails were removed using the Chan-Vese method. 3. The correlations between body weight and 6 physical extracted features indicated that there were strong correlations between body weight and 5 features, including area, perimeter, convex area, and major and minor axis length. 5. According to statistical analysis, there was no significant difference between morning and afternoon data over the 42 d. 6. In an attempt to improve the accuracy of live weight approximation, different ANN training techniques, including Bayesian regularisation, Levenberg-Marquardt, scaled conjugate gradient and gradient descent, were used. Bayesian regularisation, with an R2 value of 0.98, was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.
Eslamizadeh, Gholamhossein; Barati, Ramin
2017-05-01
Early recognition of heart disease plays a vital role in saving lives. Heart murmurs are one of the common heart problems. In this study, an Artificial Neural Network (ANN) is trained with Modified Neighbor Annealing (MNA) to classify heart cycles into normal and murmur classes. Heart cycles are separated from heart sounds using a wavelet transformer. The network inputs are features extracted from individual heart cycles, and the network has two classification outputs. The classification accuracy of the proposed model is compared with that of five multilayer perceptrons trained with the Levenberg-Marquardt, extreme-learning-machine, back-propagation, simulated-annealing, and neighbor-annealing algorithms. It is also compared with a Self-Organizing Map (SOM) ANN. The proposed model is trained and tested using real heart sounds available in the Pascal database to show the applicability of the proposed scheme. In addition, a device to record real heart sounds has been developed and used for comparison purposes. Based on the results of this study, MNA can be used to produce considerable results as a heart cycle classifier. Copyright © 2017 Elsevier B.V. All rights reserved.
Cooperative photometric redshift estimation
NASA Astrophysics Data System (ADS)
Cavuoti, S.; Tortora, C.; Brescia, M.; Longo, G.; Radovich, M.; Napolitano, N. R.; Amaro, V.; Vellucci, C.
2017-06-01
In modern galaxy surveys, photometric redshifts play a central role in a broad range of studies, from gravitational lensing and dark matter distribution to galaxy evolution. Using a dataset of ~25,000 galaxies from the second data release of the Kilo Degree Survey (KiDS), we obtain photometric redshifts with five different methods: (i) Random Forest, (ii) Multi Layer Perceptron with Quasi Newton Algorithm, (iii) Multi Layer Perceptron with an optimization network based on the Levenberg-Marquardt learning rule, (iv) the Bayesian Photometric Redshift model (or BPZ) and (v) a classical SED template fitting procedure (Le Phare). We show how SED fitting techniques can provide useful information on the galaxy spectral type, which can be used to improve the capability of machine learning methods, constraining systematic errors and reducing the occurrence of catastrophic outliers. We use such a classification to train specialized regression estimators, demonstrating that such a hybrid approach, involving SED fitting and machine learning in a single collaborative framework, is capable of improving the overall prediction accuracy of photometric redshifts.
NASA Astrophysics Data System (ADS)
Das, Chandan; Das, Arijit; Kumar Golder, Animes
2016-10-01
The present work illustrates the Microwave-Assisted Drying (MWAD) characteristics of aloe vera gel, combined with process optimization and artificial neural network modeling. The influence of microwave power (160-480 W), gel quantity (4-8 g) and drying time (1-9 min) on the moisture ratio was investigated. The drying of aloe gel exhibited typical diffusion-controlled characteristics with a predominant interaction between input power and drying time. A falling rate period was observed for the entire MWAD of aloe gel. A Face-centered Central Composite Design (FCCD) was used to develop a regression model to evaluate the effects of the process variables on the moisture ratio. The optimal MWAD conditions were established as a microwave power of 227.9 W, a sample amount of 4.47 g and a drying time of 5.78 min, corresponding to a moisture ratio of 0.15. A computer-simulated Artificial Neural Network (ANN) model was generated for mapping between process variables and the desired response. The `Levenberg-Marquardt Back Propagation' algorithm with a 3-5-1 architecture gave the best prediction, and it showed a clear superiority over the FCCD.
Implementation of a numerical holding furnace model in foundry and construction of a reduced model
NASA Astrophysics Data System (ADS)
Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane
2016-09-01
Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor for the manufacturing of these parts in accordance with geometrical and structural expectations. The definition of a reduced heat transfer model, with experimental identification through an estimation of its parameters, is required here. In a further stage this model will be used to characterize heat exchanges using internal sensors through inverse techniques, to optimize the furnace command and its design. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite-element code. A detailed model allows the calculation of the internal induction heat source as well as transient radiative transfer inside the furnace. A reduced lumped-body model has been defined to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been carried out using a Levenberg-Marquardt least-squares minimization algorithm in Matlab, using two synthetic temperature signals, with a further validation test.
Radar modulation classification using time-frequency representation and nonlinear regression
NASA Astrophysics Data System (ADS)
De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric
1999-09-01
In a naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them the intrapulse signal is modulated by a particular law. To help the classical identification process, classification and estimation of this modulation law are applied to the intrapulse signal measurements. To estimate with good accuracy the time-varying frequency of a signal corrupted by additive noise, one method has been chosen. This method consists of calculating the Wigner distribution; the instantaneous frequency is then estimated from the peak location of the distribution. Bias and variance of the estimator are evaluated by computer simulations. In an estimated sequence of frequencies, we assume the presence of both falsely and correctly estimated values; the errors are assumed to be Gaussian distributed. A robust nonlinear regression method, based on the Levenberg-Marquardt algorithm, is then applied to these estimated frequencies using a Maximum Likelihood Estimator. The performance of the method is tested by using various modulation laws and different signal-to-noise ratios.
Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.
Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe
2017-10-01
Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, stacked autoencoder Levenberg-Marquardt model, which is a type of deep architecture of neural network approach aiming to improve forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.
Adaptive learning and control for MIMO system based on adaptive dynamic programming.
Fu, Jian; He, Haibo; Zhou, Xinmin
2011-07-01
Adaptive dynamic programming (ADP) is a promising research field for the design of intelligent controllers, which can both learn on-the-fly and exhibit optimal behavior. Over the past decades, several generations of ADP design have been proposed in the literature, with many successful applications in various benchmarks and industrial settings. While much of the existing research focuses on multiple-input-single-output systems with steepest descent search, in this paper we investigate a generalized multiple-input-multiple-output (GMIMO) ADP design for online learning and control, which is more applicable to a wide range of practical real-world applications. Furthermore, an improved weight-updating algorithm based on recursive Levenberg-Marquardt methods is presented and embodied in the GMIMO approach to improve its performance. Finally, we test the performance of this approach on a practical complex system, namely, the learning and control of the tension and height of the looper system in a hot strip mill. Experimental results demonstrate that the proposed approach can achieve effective and robust performance.
Iliev, I; Vassileva, T; Ignatova, C; Ivanova, I; Haertlé, T; Monsan, P; Chobert, J-M
2008-01-01
To find different types of glucosyltransferases (GTFs) produced by Leuconostoc mesenteroides strain Lm 28 and its mutant forms, and to check the effectiveness of gluco-oligosaccharide synthesis using maltose as the acceptor. Constitutive mutants were obtained after chemical mutagenesis by ethyl methane sulfonate. Lm M281 produced more active GTFs than those obtained from the parental strain cultivated on sucrose. GTF from Lm M286 produced a resistant glucan, based on endo-dextranase and amyloglucosidase hydrolysis. The extracellular enzymes from Lm M286 catalyse acceptor reactions and transfer the glucose unit from sucrose to maltose to produce gluco-oligosaccharides (GOS). By increasing the sucrose/maltose ratio, it was possible to catalyse the synthesis of oligosaccharides of increasing degree of polymerization (DP). Different types of GTFs (dextransucrase, alternansucrase and levansucrase) were produced from new constitutive mutants of Leuc. mesenteroides. GTFs from Lm M286 can catalyse the acceptor reaction in the presence of maltose, leading to the synthesis of branched oligosaccharides. Conditions were optimized to synthesize GOS by using GTFs from Lm M286, with the aim of producing maximum quantities of branched-chain oligosaccharides with DP 3-5. This would allow the use of the latter as prebiotics.
[Unipacs: A-LM German, Units 3-29].
ERIC Educational Resources Information Center
West Bend High Schools, WI.
These instructional materials, designed for use with the "A-LM" German language course, permit teachers to individualize instruction. Basic objectives are outlined concerning basic dialogues, vocabulary, supplementary materials, reading, grammar, recombination materials, and creative conversation. A student checklist serves as a guide for the…
NASA Astrophysics Data System (ADS)
Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing
2018-05-01
We propose a method to identify a tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, and it conveniently identifies the tip-tilt disturbance model on-line for real-time control. It enables Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified with experimental data, in replay mode by simulation.
Stochastic growth logistic model with aftereffect for batch fermentation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah
2014-06-19
In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with the Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results and the experimental data of the microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.
Stochastic growth logistic model with aftereffect for batch fermentation process
NASA Astrophysics Data System (ADS)
Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md
2014-06-01
In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with the Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results and the experimental data of the microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.
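The Milstein scheme mentioned in the abstract above adds a derivative-of-diffusion correction to the Euler-Maruyama step. A sketch for a plain stochastic logistic model (parameter values are illustrative, and the aftereffect/delay term of the paper's model is omitted):

```python
import numpy as np

# Milstein scheme for a stochastic logistic growth model:
#   dX = r X (1 - X/K) dt + sigma X dW
rng = np.random.default_rng(42)
r, K, sigma = 0.8, 10.0, 0.05
dt, n_steps = 0.01, 1000

X = np.empty(n_steps + 1)
X[0] = 0.5
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    drift = r * X[i] * (1.0 - X[i] / K)
    diff = sigma * X[i]
    # Milstein correction: 0.5 * b * b' * (dW^2 - dt); here b = sigma*X, b' = sigma
    X[i + 1] = X[i] + drift * dt + diff * dW + 0.5 * sigma * diff * (dW**2 - dt)
```

With multiplicative noise of this form the correction term is what lifts the strong convergence order from 0.5 (Euler-Maruyama) to 1.0.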
Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A
2011-07-01
We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations with a woofer DM and small-amplitude higher-order aberrations with a tweeter DM. We measured the in vivo performance of high-resolution retinal imaging with the dual-DM AOSLO. We compared the simultaneous LM-based DLS dual-DM controller with both a single-DM controller and a successive dual-DM controller. We evaluated performance using both wavefront (RMS) and image quality metrics, including brightness and power spectrum. The simultaneous LM-based dual-DM AO can consistently provide near diffraction-limited in vivo routine imaging of the human retina.
LM-3: A High-resolution Lake Michigan Mass Balance Water Quality Model
This report is a user’s manual that describes the high-resolution mass balance model known as LM3. LM3 has been applied to Lake Michigan to describe the transport and fate of atrazine, PCB congeners, and chloride in that system. The model has also been used to model eutrophicat...
Inverse analysis of non-uniform temperature distributions using multispectral pyrometry
NASA Astrophysics Data System (ADS)
Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling
2016-05-01
Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
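The sub-pixel inversion described above can be illustrated in simplified form: with a fixed set of candidate temperatures, the measured spectral intensities are a linear mixture of blackbody curves weighted by the unknown area fractions. The sketch below solves that simplified linear version with non-negative least squares rather than the paper's improved Levenberg-Marquardt scheme; all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

# Planck radiation constants for wavelength in micrometres.
C1, C2 = 1.191e8, 1.4388e4   # W um^4 / (m^2 sr), um K

def planck(lam_um, T):
    return C1 / lam_um**5 / (np.exp(C2 / (lam_um * T)) - 1.0)

lams = np.linspace(8.0, 13.0, 20)                 # spectral channels (um)
temps = np.array([500.0, 600.0, 700.0, 800.0])    # candidate temperatures (K)
true_frac = np.array([0.1, 0.4, 0.3, 0.2])        # unknown area fractions

# Mixing matrix: one blackbody spectrum per candidate temperature.
A = np.stack([planck(lams, T) for T in temps], axis=1)
y = A @ true_frac                                  # simulated noise-free measurement
frac, _ = nnls(A, y)                               # recover area fractions
```

With noisy data this system becomes ill-posed, which is why the paper resorts to a damped (Levenberg-Marquardt-type) regularized solver.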
High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained on a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum-lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
Kinematic modelling of disc galaxies using graphics processing units
NASA Astrophysics Data System (ADS)
Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.
2016-01-01
With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
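The naive brute-force approach based on nested grids, evaluated above alongside Levenberg-Marquardt and nested sampling, can be sketched as follows (the two-parameter objective is illustrative, not a kinematic model):

```python
import numpy as np

# Nested-grid brute-force minimization: scan a coarse grid, re-centre on the
# best point, halve the grid extent, and repeat.
def objective(x, y):                 # illustrative smooth objective
    return (x - 1.3) ** 2 + (y + 0.7) ** 2

best = (0.0, 0.0)
half, n = 5.0, 11                    # grid half-width, points per axis
for _ in range(20):                  # each level shrinks the search box
    xs = np.linspace(best[0] - half, best[0] + half, n)
    ys = np.linspace(best[1] - half, best[1] + half, n)
    X, Y = np.meshgrid(xs, ys)
    Z = objective(X, Y)
    i = np.unravel_index(np.argmin(Z), Z.shape)
    best = (X[i], Y[i])
    half /= 2.0                      # refine around the current best point
```

Every level evaluates the objective on the full grid independently, which is exactly the embarrassingly parallel workload that maps well onto a GPU.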
Acoustic Sensor Network for Relative Positioning of Nodes
De Marziani, Carlos; Ureña, Jesus; Hernandez, Álvaro; Mazo, Manuel; García, Juan Jesús; Jimenez, Ana; Rubio, María del Carmen Pérez; Álvarez, Fernando; Villadangos, José Manuel
2009-01-01
In this work, an acoustic sensor network for a relative localization system is analyzed by reporting the accuracy achieved in the position estimation. The proposed system has been designed for those applications where objects are not restricted to a particular environment and thus cannot depend on any external infrastructure to compute their positions. The objects are capable of computing spatial relations among themselves using only acoustic emissions as a ranging mechanism. The object positions are computed by a multidimensional scaling (MDS) technique and, afterwards, a least-squares algorithm based on the Levenberg-Marquardt algorithm (LMA) is applied to refine the results. Regarding the position estimation, all the parameters involved in the computation of the temporal relations with the proposed ranging mechanism have been considered. The obtained results show that a fine-grained localization can be achieved assuming a Gaussian distribution error in the proposed ranging mechanism. Furthermore, since acoustic sensors require a line of sight to work properly, the system has been tested by modeling the loss of this line of sight as a non-Gaussian error. A suitable position estimation has been achieved even with a bias of up to 25% of the line-of-sight measurements among a set of nodes. PMID:22291520
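The first stage of the pipeline above, recovering node positions from pairwise acoustic ranges, can be sketched with classical MDS (the node coordinates are illustrative, and the subsequent Levenberg-Marquardt refinement is omitted):

```python
import numpy as np

# Classical MDS: recover relative coordinates from a pairwise distance matrix.
nodes = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
D = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
idx = np.argsort(vals)[::-1][:2]             # keep the two largest
coords = vecs[:, idx] * np.sqrt(vals[idx])   # recovered 2D coordinates

# Positions are recovered only up to rotation/reflection/translation,
# so we check that all pairwise distances are preserved.
D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
```

Because the MDS solution floats freely in this way, the system is inherently relative: an LM refinement (as in the paper) can then polish the geometry against the raw range measurements.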
Trajectory prediction for ballistic missiles based on boost-phase LOS measurements
NASA Astrophysics Data System (ADS)
Yeddanapudi, Murali; Bar-Shalom, Yaakov
1997-10-01
This paper addresses the problem of estimating the trajectory of a tactical ballistic missile using line-of-sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include: the estimation of the unknown time of launch, the incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase, and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and the error covariance, taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when the target is within the detection range of a surveillance radar, to obtain a cueing volume. Monte Carlo simulation studies on typical single- and two-sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates, and the estimator-calculated covariances are consistent with the errors.
Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab
2016-01-01
In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of fixed series capacitors (FSC) as input to train the ANN; such an approach has never been used in fault analysis algorithms in the last few decades. The proposed scheme uses only single-end-measured MOV energy signals in all 3 phases over one cycle from the occurrence of a fault. Thereafter, these MOV energy signals are fed as input to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten types of fault in a test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India, are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.
Novel maximum-margin training algorithms for supervised neural networks.
Ludwig, Oswaldo; Nunes, Urbano
2010-06-01
MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network was named the assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.
Brama, Elisabeth; Peddie, Christopher J; Wilkes, Gary; Gu, Yan; Collinson, Lucy M; Jones, Martin L
2016-12-13
In-resin fluorescence (IRF) protocols preserve fluorescent proteins in resin-embedded cells and tissues for correlative light and electron microscopy, aiding interpretation of macromolecular function within the complex cellular landscape. Dual-contrast IRF samples can be imaged in separate fluorescence and electron microscopes, or in dual-modality integrated microscopes for high resolution correlation of fluorophore to organelle. IRF samples also offer a unique opportunity to automate correlative imaging workflows. Here we present two new locator tools for finding and following fluorescent cells in IRF blocks, enabling future automation of correlative imaging. The ultraLM is a fluorescence microscope that integrates with an ultramicrotome, which enables 'smart collection' of ultrathin sections containing fluorescent cells or tissues for subsequent transmission electron microscopy or array tomography. The miniLM is a fluorescence microscope that integrates with serial block face scanning electron microscopes, which enables 'smart tracking' of fluorescent structures during automated serial electron image acquisition from large cell and tissue volumes.
Ghaedi, M; Shojaeipour, E; Ghaedi, A M; Sahraei, Reza
2015-05-05
In this study, copper nanowires loaded on activated carbon (Cu-NWs-AC) were used as a novel, efficient adsorbent for the removal of malachite green (MG) from aqueous solution. This new material was synthesized through a simple protocol, and its surface properties such as surface area, pore volume and functional groups were characterized with techniques such as XRD, BET and FESEM analysis. The relation between removal percentage and variables such as solution pH, adsorbent dosage (0.005, 0.01, 0.015, 0.02 and 0.1 g), contact time (1-40 min) and initial MG concentration (5, 10, 20, 70 and 100 mg/L) was investigated and optimized. A three-layer artificial neural network (ANN) model was utilized to predict the malachite green dye removal (%) by Cu-NWs-AC following 248 experiments. The trained ANN model used the Levenberg-Marquardt algorithm (LMA), a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons, and a linear transfer function (purelin) at the output layer. A minimum mean squared error (MSE) of 0.0017 and a coefficient of determination (R(2)) of 0.9658 were found for prediction and modeling of dye removal using the testing data set. Good agreement between the experimental data and the predictions of the ANN model was obtained. Fitting the experimental data under the previously optimized conditions confirmed the suitability of the Langmuir isotherm model, with a maximum adsorption capacity of 434.8 mg/g at 25°C. Kinetic studies at various adsorbent masses and initial MG concentrations showed that maximum MG removal was achieved within 20 min. The adsorption of MG follows pseudo-second-order kinetics combined with an intraparticle diffusion model. Copyright © 2015 Elsevier B.V. All rights reserved.
Intermediate Macroeconomics without the IS-LM Model.
ERIC Educational Resources Information Center
Weerapana, Akila
2003-01-01
States that the IS-LM model is the primary model of economic fluctuations taught in undergraduate macroeconomics. Argues that the aggregate demand-price adjustment (AD-PA) model is superior for teaching about economic fluctuations. Compares the IS-LM model with the AD-PA model using two current issues in macroeconomics. (JEH)
NASA Astrophysics Data System (ADS)
Moliner, L.; Correcher, C.; González, A. J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.
2013-02-01
In this work we present an innovative algorithm for the reconstruction of PET images based on the List-Mode (LM) technique, which improves their spatial resolution compared to results obtained with current MLEM algorithms. This study is part of a large project aimed at improving diagnosis in early Alzheimer disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer is the most relevant neurodegenerative disease, and the best way to apply an effective treatment is early diagnosis. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs, with the advantage of enabling very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large-aperture (186 mm) breast PET system. Instead of using the common lines of response, the algorithm incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease the image noise, thus increasing the image quality.
Characterization and mapping of complementary lesion-mimic genes lm1 and lm2 in common wheat.
Yao, Qin; Zhou, Ronghua; Fu, Tihua; Wu, Weiren; Zhu, Zhendong; Li, Aili; Jia, Jizeng
2009-10-01
A lesion-mimic phenotype appeared in a segregating population of common wheat cross Yanzhan 1/Zaosui 30. The parents had non-lesion normal phenotypes. Shading treatment and histochemical analyses showed that the lesions were caused by light-dependent cell death and were not associated with pathogens. Studies over two cropping seasons showed that some lines with more highly expressed lesion-mimic phenotypes exhibited significantly lower grain yields than those with the normal phenotype, but there were no significant effects in the lines with weakly expressed lesion-mimic phenotypes. Among yield traits, one-thousand grain weight was the most affected by lesion-mimic phenotypes. Genetic analysis indicated that this was a novel type of lesion mimic, which was caused by interaction of recessive genes derived from each parent. The lm1 (lesion mimic 1) locus from Zaosui 30 was flanked by microsatellite markers Xwmc674 and Xbarc133/Xbarc147 on chromosome 3BS, at genetic distances of 1.2 and 3.8 cM, respectively, whereas lm2 from Yanzhan 1 was mapped between microsatellite markers Xgwm513 and Xksum154 on chromosome 4BL, at genetic distances of 1.5 and 3 cM, respectively. The linked microsatellite markers identified in this study might be useful for evaluating whether potential parents with normal phenotype are carriers of lesion-mimic alleles.
Possibilities and testing of CPRNG in block cipher mode of operation PM-DC-LM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacek, Petr; Jasek, Roman; Malanik, David
2016-06-08
This paper discusses the chaotic pseudo-random number generator (CPRNG) used in the block cipher mode of operation called PM-DC-LM, one of the possible subversions of the general PM mode. The design of PM-DC-LM itself is not discussed here, as it is described in other papers; only the CPRNG that forms a part of it is considered. Possible ways to change or improve the CPRNG are mentioned. The final part is devoted to testing of the CPRNG, and some test data are shown.
Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm.
He, Xiaoqi; Zheng, Zizhao; Hu, Chao
2015-01-01
The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, there are still some important problems to be solved, among which one important problem is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and this depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as a magnetic sensor array, we propose nonlinear optimization algorithms using a random complex algorithm, applied to the optimization calculation for the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and the antinoise ability of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, in terms of the error level of the initial guess of magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoise" capacity, with a larger range for initial guess values.
NASA Astrophysics Data System (ADS)
King, Thomas Steven
A hybrid gravity modeling method is developed to investigate the structure of sedimentary mass bodies. The method incorporates as constraints surficial basement/sediment contacts and topography of a mass target with a quadratically varying density distribution. The inverse modeling utilizes a genetic algorithm (GA) to scan a wide range of the solution space to determine initial models and the Marquardt-Levenberg (ML) nonlinear inversion to determine final models that meet pre-assigned misfit criteria, thus providing an estimate of model variability and uncertainty. The surface modeling technique modifies Delaunay triangulation by allowing individual facets to be manually constructed and non-convex boundaries to be incorporated into the triangulation scheme. The sedimentary body is represented by a set of uneven prisms and edge elements, comprised of tetrahedrons, capped by polyhedrons. Each underlying prism and edge element's top surface is located by determining its point of tangency with the overlying terrain. The remaining overlying mass is gravitationally evaluated and subtracted from the observation points. Inversion then proceeds in the usual sense, but on an irregular tiered surface with each element's density defined relative to their top surface. Efficiency is particularly important due to the large number of facets evaluated for surface representations and the many repeated element evaluations of the stochastic GA. The gravitation of prisms, triangular faceted polygons, and tetrahedrons can be formulated in different ways, either mathematically or by physical approximations, each having distinct characteristics, such as evaluation time, accuracy over various spatial ranges, and computational singularities. A decision tree or switching routine is constructed for each element by combining these characteristics into a single cohesive package that optimizes the computation for accuracy and speed while avoiding singularities. The GA incorporates a subspace
NASA Technical Reports Server (NTRS)
Fennelly, J. A.; Torr, D. G.; Richards, P. G.; Torr, M. R.; Sharp, W. E.
1991-01-01
This paper describes a technique for extracting thermospheric profiles of the atomic-oxygen density and temperature, using ground-based measurements of the O(+)(2D-2P) doublet at 7320 and 7330 A in the twilight airglow. In this method, a local photochemical model is used to calculate the 7320-A intensity; the method also utilizes an iterative inversion procedure based on the Levenberg-Marquardt method described by Press et al. (1986). The results demonstrate that, if the measurements are only limited by errors due to Poisson noise, the altitude profiles of neutral temperature and atomic oxygen concentration can be determined accurately using currently available spectrometers.
Comments on "Different techniques for finding best-fit parameters"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.; Triplett, Laurie A.
2014-07-01
A common data analysis problem is to find best-fit parameters through chi-square minimization. Levenberg-Marquardt is an often used system that depends on gradients and converges when successive iterations do not change chi-square more than a specified amount. We point out that in cases where the sought-after parameter weakly affects the fit, and in cases where the overall scale factor is a parameter, a Golden Search technique can often do better. The Golden Search converges when the best-fit point is within a specified range, and that range can be made arbitrarily small; it does not depend on the value of chi-square.
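As a concrete illustration of the contrast drawn above, here is a minimal golden-section search in Python (an illustrative sketch, not the authors' code; the scale-factor chi-square, data, and model values are invented for demonstration):

```python
import math

def golden_search(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b].
    Converges when the bracketing interval is narrower than tol,
    regardless of how little f itself changes between iterations."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Toy case matching the scenario above: the fitted parameter is an overall
# scale factor s, and chi-square compares data against s * model.
data = [2.1, 3.9, 6.2]
model = [1.0, 2.0, 3.0]
chi2 = lambda s: sum((y - s * m) ** 2 for y, m in zip(data, model))
s_best = golden_search(chi2, 0.0, 10.0)  # analytic optimum is 28.5 / 14
```

Note that the stopping rule is purely the width of the bracket, which can be made arbitrarily small, rather than the change in chi-square between iterations.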
A Learning Model for L/M Specificity in Ganglion Cells
NASA Technical Reports Server (NTRS)
Ahumada, Albert J.
2016-01-01
An unsupervised learning model for developing L/M-specific wiring at the ganglion cell level would support the research indicating that such wiring exists (Reid and Shapley, 2002). Removing the contributions to the surround from cells of the same cone type improves the signal-to-noise ratio of the chromatic signals. The unsupervised learning model used is Hebbian associative learning, which strengthens the surround input connections according to the correlation of the output with the input. Since the surround units of the same cone type as the center are redundant with the center, their weights end up disappearing. This process can be thought of as a general mechanism for eliminating unnecessary cells in the nervous system.
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to deviations from the assumed noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
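The reweighting idea behind such an expectation-maximization scheme can be sketched in a toy Python example (an assumption-laden illustration, not the paper's calibration code: here the M-step is a closed-form weighted mean over an invented location parameter, whereas the actual method runs Levenberg-Marquardt over the full calibration model):

```python
import numpy as np

def t_robust_location(y, nu=3.0, iters=50):
    """EM-style location estimate under a Student's t noise model.
    E-step: expected precision weights shrink toward zero for outliers.
    M-step: weighted least-squares update of location and scale."""
    mu, sigma2 = float(np.median(y)), float(np.var(y))
    for _ in range(iters):
        r2 = (y - mu) ** 2 / sigma2
        w = (nu + 1.0) / (nu + r2)                          # E-step weights
        mu = float(np.sum(w * y) / np.sum(w))               # M-step: location
        sigma2 = float(np.sum(w * (y - mu) ** 2) / y.size)  # M-step: scale
    return mu

rng = np.random.default_rng(1)
# Mostly well-behaved data plus two gross outliers (e.g. interference).
y = np.concatenate([rng.normal(5.0, 0.1, 100), [50.0, 60.0]])
mu_robust = t_robust_location(y)   # stays near 5, unlike the plain mean
```

The t-based weights automatically suppress the outliers, which is the mechanism that preserves weak-source flux in the calibration setting.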
Liu, Xiao-Jian; Sun, Ya-Wen; Li, Da-Qi; Li, Sheng; Ma, En-Bo; Zhang, Jian-Zhen
2018-04-01
In Locusta migratoria, we found that two chitin biosynthesis genes, the UDP N-acetylglucosamine pyrophosphorylase gene LmUAP1 and the chitin synthase gene LmCHS1, are expressed mainly in the integument and are responsible for cuticle formation. However, whether these genes are regulated by 20-hydroxyecdysone (20E) is still largely unclear. Here, we show the developmental expression patterns of LmUAP1 and LmCHS1 and the corresponding 20E titer during the last instar nymph stage of the locust. RNA interference (RNAi) directed toward a common region of the two isoforms of LmEcR (LmEcRcom) reduced the expression level of LmUAP1, while there was no difference in the expression of LmCHS1. Meanwhile, injection of 20E in vivo induced the expression of LmUAP1 but not LmCHS1. Further, we found that injection-based RNAi of LmEcRcom resulted in 100% mortality. The locusts failed to molt, showed no apolysis, and remained in the nymph stage until death. In conclusion, our preliminary results indicate that LmUAP1 in the chitin biosynthesis pathway is a 20E late-response gene and that LmEcR plays an essential role in locust growth and development, which could make it a good potential target for RNAi-based pest control. © 2016 Institute of Zoology, Chinese Academy of Sciences.
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
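A present-day analogue of such a least-squares fit (a hedged sketch only; MULTIVAR itself is FORTRAN 77, and the three-parameter exponential model and data below are invented) can use SciPy's Levenberg-Marquardt driver:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical nonlinear model: y = p0 * exp(p1 * x) + p2
def model(x, p0, p1, p2):
    return p0 * np.exp(p1 * x) + p2

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 50)
y = model(x, 2.5, -1.3, 0.5) + 0.01 * rng.normal(size=x.size)

# method="lm" selects the Levenberg-Marquardt algorithm, minimizing the
# sum of squared residuals as MULTIVAR's third engine does.
popt, pcov = curve_fit(model, x, y, p0=[1.0, -1.0, 0.0], method="lm")
```

The covariance estimate `pcov` gives a first-order indication of parameter uncertainty from the same Jacobian the LM iterations use.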
Olawoyin, Richard
2016-10-01
The backpropagation (BP) artificial neural network (ANN) is a renowned and extensively used mathematical tool for time-series prediction and approximation, which can also represent non-linear functions. ANNs are vital tools in the prediction of toxicant levels, such as polycyclic aromatic hydrocarbons (PAH) potentially derived from anthropogenic activities in the microenvironment. In the present work, a BP ANN was used as a prediction tool to study the potential toxicity of PAH carcinogens (PAHcarc) in soils. Soil samples (16 × 4 = 64) were collected from locations in South-South Nigeria. The concentration of PAHcarc in the roots of laboratory-cultivated white melilot (Melilotus alba) grown on treated soils was predicted using ANN model training. Results indicated that the Levenberg-Marquardt back-propagation training algorithm converged in 2.5E+04 epochs at an average RMSE value of 1.06E-06. The averaged R(2) between the measured and predicted outputs was 0.9994. It may be deduced from this study that analytical processes involving environmental risk assessment, as used here, can successfully provide prompt prediction and source identification of major soil toxicants. Copyright © 2016 Elsevier Ltd. All rights reserved.
Grosse, Constantino
2014-04-01
The description and interpretation of dielectric spectroscopy data usually require the use of analytical functions, which include unknown parameters that must be determined iteratively by means of a fitting procedure. This is not a trivial task and much effort has been spent to find the best way to accomplish it. While the theoretical approach based on the Levenberg-Marquardt algorithm is well known, no freely available program specifically adapted to the dielectric spectroscopy problem exists to the best of our knowledge. Moreover, even the more general commercial packages usually fail on the following aspects: (1) allowing some parameters to be kept temporarily fixed, (2) allowing the uncertainty values for each data point to be freely specified, (3) checking that parameter values fall within prescribed bounds during the fitting process, and (4) allowing either the real part, the imaginary part, or both parts of the complex permittivity to be fitted simultaneously. A program that satisfies all these requirements and allows fitting any superposition of the Debye, Cole-Cole, Cole-Davidson, and Havriliak-Negami dispersions plus a conductivity term to measured dielectric spectroscopy data is presented. It is available on request from the author. Copyright © 2013 Elsevier Inc. All rights reserved.
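As a sketch of the kind of fit such a program performs (an illustration with invented parameter values, not the author's code), a single Havriliak-Negami dispersion can be fitted to synthetic complex-permittivity data by stacking the real and imaginary residuals; note that SciPy switches from Levenberg-Marquardt to a trust-region method when bounds (requirement 3 above) are imposed:

```python
import numpy as np
from scipy.optimize import least_squares

def havriliak_negami(p, w):
    """Complex permittivity of a single Havriliak-Negami dispersion."""
    eps_inf, d_eps, tau, alpha, beta = p
    return eps_inf + d_eps / (1.0 + (1j * w * tau) ** alpha) ** beta

def residuals(p, w, eps_meas):
    r = havriliak_negami(p, w) - eps_meas
    return np.concatenate([r.real, r.imag])  # fit both parts simultaneously

w = np.logspace(1, 6, 80)                    # angular frequency grid (rad/s)
true_p = [2.0, 10.0, 1e-3, 0.8, 0.9]         # invented "measured" spectrum
eps_meas = havriliak_negami(true_p, w)

lo = [1.0, 0.0, 1e-6, 0.1, 0.1]              # prescribed parameter bounds
hi = [5.0, 50.0, 1.0, 1.0, 1.0]
fit = least_squares(residuals, x0=[2.5, 8.0, 5e-4, 0.7, 0.95],
                    bounds=(lo, hi), args=(w, eps_meas))
```

Keeping a parameter temporarily fixed (requirement 1) amounts to removing it from `x0` and closing over its value inside `residuals`.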
NASA Astrophysics Data System (ADS)
Rezaeian, P.; Ataenia, V.; Shafiei, S.
2017-12-01
In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on a multipole moment expansion. The flux inside the irradiation cell is expressed as a function of monopole, dipole and quadrupole terms in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are obtained by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was also determined by MCNP simulations as well as experimental measurements; Amber dosimeters were employed to measure the flux inside the irradiation cell. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. To show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method leads to reasonable results for all source distributions, even those without any symmetry, which makes it a powerful tool for source load planning.
NASA Astrophysics Data System (ADS)
Sahoo, Sasmita; Jha, Madan K.
2013-12-01
The potential of multiple linear regression (MLR) and artificial neural network (ANN) techniques in predicting transient water levels over a groundwater basin was compared. MLR and ANN modeling was carried out at 17 sites in Japan, considering all significant inputs: rainfall, ambient temperature, river stage, 11 seasonal dummy variables, and influential lags of rainfall, ambient temperature, river stage and groundwater level. Seventeen site-specific ANN models were developed, using multi-layer feed-forward neural networks trained with the Levenberg-Marquardt backpropagation algorithm. The performance of the models was evaluated using statistical and graphical indicators. Comparison of the goodness-of-fit statistics of the MLR models with those of the ANN models indicated better agreement between the ANN-predicted and observed groundwater levels at all the sites. This finding was supported by the graphical indicators and the residual analysis. Thus, it is concluded that the ANN technique is superior to the MLR technique in predicting the spatio-temporal distribution of groundwater levels in a basin. However, considering its practical advantages, the MLR technique is recommended as an alternative and cost-effective groundwater modeling tool.
Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela
2013-05-01
The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures full convergence of the process and containment of computational time. Reliability was tested by direct comparison, in eighteen non-diabetic subjects, with the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst-estimated by SAAM II and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Ground Motion Prediction Model Using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Dhanya, J.; Raghukanth, S. T. G.
2018-03-01
This article focuses on developing a ground motion prediction equation based on the artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to the rupture plane (Rrup), shear wave velocity in the region (Vs30) and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Earthquake Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns, including the weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city, located in the Himalayan region.
Inverse optimal design of the radiant heating in materials processing and manufacturing
NASA Astrophysics Data System (ADS)
Fedorov, A. G.; Lee, K. H.; Viskanta, R.
1998-12-01
Combined convective, conductive, and radiative heat transfer is analyzed during heating of a continuously moving load in the industrial radiant oven. A transient, quasi-three-dimensional model of heat transfer between a continuous load of parts moving inside an oven on a conveyor belt at a constant speed and an array of radiant heaters/burners placed inside the furnace enclosure is developed. The model accounts for radiative exchange between the heaters and the load, heat conduction in the load, and convective heat transfer between the moving load and oven environment. The thermal model developed has been used to construct a general framework for an inverse optimal design of an industrial oven as an example. In particular, the procedure based on the Levenberg-Marquardt nonlinear least squares optimization algorithm has been developed to obtain the optimal temperatures of the heaters/burners that need to be specified to achieve a prescribed temperature distribution of the surface of a load. The results of calculations for several sample cases are reported to illustrate the capabilities of the procedure developed for the optimal inverse design of an industrial radiant oven.
NASA Astrophysics Data System (ADS)
Kompany-Zareh, Mohsen; Khoshkam, Maryam
2013-02-01
This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest, and thus a closure rank deficiency problem did not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, according to the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when applying appropriate constraints and adjustable initial concentrations of reagents.
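The concentration-based strategy described here can be sketched as follows (a hedged toy example with invented rate constants and initial concentrations, not the authors' data): integrate the kinetic ODEs for trial rate constants, then let a Levenberg-Marquardt solver adjust them to reproduce the observed intermediate profile.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def intermediate_profile(k, t, c0):
    """Integrate the second-order consecutive scheme A + B -k1-> C -k2-> D
    and return the intermediate concentration C(t)."""
    k1, k2 = k
    def rhs(_, y):
        a, b, c = y
        r = k1 * a * b
        return [-r, -r, r - k2 * c]
    sol = solve_ivp(rhs, (0.0, t[-1]), c0, t_eval=t, rtol=1e-10, atol=1e-12)
    return sol.y[2]

t = np.linspace(0.0, 20.0, 60)
c0 = [1.0, 0.8, 0.0]                              # assumed initial concentrations
c_obs = intermediate_profile([0.5, 0.2], t, c0)   # synthetic "observed" profile

# Levenberg-Marquardt refinement of the trial rate constants.
fit = least_squares(lambda k: intermediate_profile(k, t, c0) - c_obs,
                    x0=[0.1, 0.1], method="lm")
```

Errors in `c0`, as the abstract notes, propagate directly into the recovered rate constants, which is why treating the initial concentrations as adjustable can reduce the ambiguity.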
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
"A-LM German": How to Make it Work.
ERIC Educational Resources Information Center
Hartmetz, Dieter
1978-01-01
The A-LM German materials are analyzed in terms of their weakness and positive features, and suggestions for their use and adaptation are presented. It is argued that: the basic dialogue is almost unusable; the structure drills are repetitive and often not challenging; the taped arrangement of the listening exercises is awkward; the dialogue…
Bukovsky, Ivo; Homma, Noriyasu; Ichiji, Kei; Cejnek, Matous; Slama, Matous; Benes, Peter M.; Bila, Jiri
2015-01-01
During radiotherapy treatment for thoracic and abdomen cancers, for example, lung cancers, respiratory motion moves the target tumor and thus badly affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. In order to compensate the latency, neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion on a classical linear model, perceptron model, and on a class of higher-order neural network model that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm as the ones of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For one-second prediction horizon, the proposed techniques achieved accuracy less than one millimeter of 3D mean absolute error in one hundred seconds of total treatment time. PMID:25893194
Zufferey, Rachel; Al-Ani, Gada K; Dunlap, Kara
2009-12-01
Glycerolipid biosynthesis in Leishmania initiates with the acylation of glycerol-3-phosphate by a single glycerol-3-phosphate acyltransferase, LmGAT, or of dihydroxyacetonephosphate by a dihydroxyacetonephosphate acyltransferase, LmDAT. We previously reported that acylation of the precursor dihydroxyacetonephosphate, rather than glycerol-3-phosphate, is the physiologically relevant pathway for Leishmania parasites. We demonstrated that LmDAT is important for normal growth, survival during the stationary phase, and virulence. Here, we assessed the role of LmDAT in glycerolipid metabolism and metacyclogenesis. LmDAT was found to be implicated in the biosynthesis of ether glycerolipids, including the ether-lipid-derived virulence factor lipophosphoglycan and glycosylphosphatidylinositol-anchored proteins. The null mutant produced longer lipophosphoglycan molecules that were not released into the medium, and augmented levels of glycosylphosphatidylinositol-anchored proteins. In addition, the integrity of detergent-resistant membranes was not affected by the absence of the LmDAT gene. Further, our genetic analyses strongly suggest that LmDAT was synthetic lethal with the glycerol-3-phosphate acyltransferase encoding gene LmGAT, implying that Leishmania expresses only two acyltransferases that initiate the biosynthesis of its cellular glycerolipids. Last, despite the fact that LmDAT is important for virulence, the null mutant still exhibited the typical characteristics of metacyclics.
Perera-Garcia, Martha A; Mendoza-Carranza, Manuel; Contreras-Sánchez, Wilfrido; Ferrara, Allyse; Huerta-Ortiz, Maricela; Hernández-Gómez, Raúl E
2013-06-01
Common snook (Centropomus undecimalis) is an important commercial fishery species in Southern Mexico; however, high exploitation rates have strongly reduced its abundance. Since information about its population structure is scarce, the objective of the present research was to determine and compare the age structure at four important fishery sites. For this, age and growth of common snook were determined from specimens collected monthly, from July 2006 to March 2008, at two coastal (Barra Bosque and Barra San Pedro) and two riverine (San Pedro and Tres Brazos) commercial fishery sites in Tabasco, Mexico. Age was determined using sectioned sagittae otoliths, and growth was modeled with the von Bertalanffy equation fitted by the Levenberg-Marquardt method, among others. Estimated ages ranged from 2 to 17 years. Monthly patterns of marginal increment formation and the percentage of otoliths with opaque rings on the outer edge demonstrated that a single annulus was formed each year. The von Bertalanffy parameters were calculated for males and females using linear adjustment and the non-linear Levenberg-Marquardt method. The von Bertalanffy growth equations were FLt = 109.21(1 - e^(-0.2(t+0.57))) for Barra Bosque, FLt = 94.56(1 - e^(-0.27(t+0.485))) for Barra San Pedro, FLt = 97.15(1 - e^(-0.17(t+1.32))) for San Pedro and FLt = 83.77(1 - e^(-0.26(t+0.49))) for Tres Brazos. According to Hotelling's T² test (p < 0.05), growth was significantly greater for females than for males. Based on the Chen test, von Bertalanffy growth curves differed among the study sites (RSS, p < 0.05). Based on the observed differences in growth parameters among sampling sites (coastal and riverine environments), future research needs to be conducted on migration and population genetics, in order to delineate the stock structure of this population and support management programs.
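The growth modeling above can be sketched in Python: SciPy's `curve_fit` uses the Levenberg-Marquardt algorithm for unbounded fits, matching the non-linear method named in the abstract. The age/length data below are synthetic, generated from the reported Barra Bosque parameters plus noise, purely for illustration — they are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, L_inf, k, t0):
    """Fork length at age t: FL(t) = L_inf * (1 - exp(-k * (t - t0)))."""
    return L_inf * (1.0 - np.exp(-k * (t - t0)))

# Illustrative age/length data built from the reported Barra Bosque
# parameters (L_inf=109.21, k=0.2, t0=-0.57) with added noise.
ages = np.array([2, 4, 6, 8, 10, 12, 14, 17], dtype=float)
lengths = von_bertalanffy(ages, 109.21, 0.20, -0.57)
rng = np.random.default_rng(0)
lengths += rng.normal(0.0, 1.0, ages.size)   # measurement noise

# method='lm' selects Levenberg-Marquardt (unbounded problems only)
popt, pcov = curve_fit(von_bertalanffy, ages, lengths,
                       p0=[100.0, 0.3, 0.0], method='lm')
L_inf, k, t0 = popt
```

The asymptotic length `L_inf` and growth coefficient `k` recovered by the fit can then be compared across sites, as the study does with the Chen test.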
The new GFDL global atmosphere and land model AM2-LM2: Evaluation with prescribed SST simulations
Anderson, J.L.; Balaji, V.; Broccoli, A.J.; Cooke, W.F.; Delworth, T.L.; Dixon, K.W.; Donner, L.J.; Dunne, K.A.; Freidenreich, S.M.; Garner, S.T.; Gudgel, R.G.; Gordon, C.T.; Held, I.M.; Hemler, R.S.; Horowitz, L.W.; Klein, S.A.; Knutson, T.R.; Kushner, P.J.; Langenhost, A.R.; Lau, N.-C.; Liang, Z.; Malyshev, S.L.; Milly, P.C.D.; Nath, M.J.; Ploshay, J.J.; Ramaswamy, V.; Schwarzkopf, M.D.; Shevliakova, E.; Sirutis, J.J.; Soden, B.J.; Stern, W.F.; Thompson, L.A.; Wilson, R.J.; Wittenberg, A.T.; Wyman, B.L.
2004-01-01
The configuration and performance of a new global atmosphere and land model for climate research developed at the Geophysical Fluid Dynamics Laboratory (GFDL) are presented. The atmosphere model, known as AM2, includes a new gridpoint dynamical core, a prognostic cloud scheme, and a multispecies aerosol climatology, as well as components from previous models used at GFDL. The land model, known as LM2, includes soil sensible and latent heat storage, groundwater storage, and stomatal resistance. The performance of the coupled model AM2-LM2 is evaluated with a series of prescribed sea surface temperature (SST) simulations. Particular focus is given to the model's climatology and the characteristics of interannual variability related to El Niño-Southern Oscillation (ENSO). One AM2-LM2 integration was performed according to the prescriptions of the second Atmospheric Model Intercomparison Project (AMIP II) and data were submitted to the Program for Climate Model Diagnosis and Intercomparison (PCMDI). Particular strengths of AM2-LM2, as judged by comparison to other models participating in AMIP II, include its circulation and distributions of precipitation. Prominent problems of AM2-LM2 include a cold bias to surface and tropospheric temperatures, weak tropical cyclone activity, and weak tropical intraseasonal activity associated with the Madden-Julian oscillation. An ensemble of 10 AM2-LM2 integrations with observed SSTs for the second half of the twentieth century permits a statistically reliable assessment of the model's response to ENSO. In general, AM2-LM2 produces a realistic simulation of the anomalies in tropical precipitation and extratropical circulation that are associated with ENSO. © 2004 American Meteorological Society.
HeLM: a macrophyte-based method for monitoring and assessment of Greek lakes.
Zervas, Dimitrios; Tsiaoussi, Vasiliki; Tsiripidis, Ioannis
2018-05-05
The Water Framework Directive (WFD) requires Member States to develop appropriate assessment methods for the classification of the ecological status of their surface waters. The Mediterranean region has lagged behind in this task, so we propose here the first such method developed for Greek lakes, the Hellenic Lake Macrophyte (HeLM) assessment method. The method is based on two metrics, a modified trophic index and the maximum colonization depth (Cmax), which quantify the degree of change in lake macrophytic vegetation as a response to eutrophication and general degradation pressures. The method was developed on the basis of a data set sampled from 272 monitoring transects in 16 Greek lakes. Sites from three lakes were selected as potential reference sites by using a screening process. Ecological quality ratios were calculated for each metric and for each lake, and ecological status class boundaries were defined. To evaluate the effectiveness of the method, the correlations between the individual metrics and final HeLM values and common pressure indicators, such as total phosphorus, chlorophyll a and Secchi depth, were tested and found highly significant and relatively strong. In addition, the ability of HeLM values and its individual metrics to distinguish between different macrophytic community structures was checked using aquatic plant life-forms and found satisfactory. The HeLM method gave a reliable assessment of the condition of macrophytic vegetation in Greek lakes and may constitute a useful tool for the classification of the ecological status of other Mediterranean lakes.
NASA Astrophysics Data System (ADS)
Rabin, Sam S.; Ward, Daniel S.; Malyshev, Sergey L.; Magi, Brian I.; Shevliakova, Elena; Pacala, Stephen W.
2018-03-01
This study describes and evaluates the Fire Including Natural & Agricultural Lands model (FINAL) which, for the first time, explicitly simulates cropland and pasture management fires separately from non-agricultural fires. The non-agricultural fire module uses empirical relationships to simulate burned area in a quasi-mechanistic framework, similar to past fire modeling efforts, but with a novel optimization method that improves the fidelity of simulated fire patterns to new observational estimates of non-agricultural burning. The agricultural fire components are forced with estimates of cropland and pasture fire seasonality and frequency derived from observational land cover and satellite fire datasets. FINAL accurately simulates the amount, distribution, and seasonal timing of burned cropland and pasture over 2001-2009 (global totals: 0.434×10⁶ and 2.02×10⁶ km² yr⁻¹ modeled, 0.454×10⁶ and 2.04×10⁶ km² yr⁻¹ observed), but carbon emissions for cropland and pasture fire are overestimated (global totals: 0.295 and 0.706 PgC yr⁻¹ modeled, 0.194 and 0.538 PgC yr⁻¹ observed). The non-agricultural fire module underestimates global burned area (1.91×10⁶ km² yr⁻¹ modeled, 2.44×10⁶ km² yr⁻¹ observed) and carbon emissions (1.14 PgC yr⁻¹ modeled, 1.84 PgC yr⁻¹ observed). The spatial pattern of total burned area and carbon emissions is generally well reproduced across much of sub-Saharan Africa, Brazil, Central Asia, and Australia, whereas the boreal zone sees underestimates. FINAL represents an important step in the development of global fire models, and offers a strategy for fire models to consider human-driven fire regimes on cultivated lands. At the regional scale, simulations would benefit from refinements in the parameterizations and improved optimization datasets. We include an in-depth discussion of the lessons learned from using the Levenberg-Marquardt algorithm in an interactive optimization for a dynamic global vegetation model.
NASA Astrophysics Data System (ADS)
Simeonov, Tzvetan; Vey, Sibylle; Alshawaf, Fadwa; Dick, Galina; Guerova, Guergana; Güntner, Andreas; Hohmann, Christian; Kunwar, Ajeet; Trost, Benjamin; Wickert, Jens
2017-04-01
Water storage variations in the atmosphere and in soils are among the most dynamic within the Earth's water cycle. The continuous measurement of water storage in these media with a high spatial and temporal resolution is a challenging task, not yet completely solved by various observation techniques. With the development of the Global Navigation Satellite Systems (GNSS), a new approach for estimating water vapor in the atmosphere and, in parallel, soil moisture in the vicinity of GNSS ground stations was established in recent years, with several key advantages compared to traditional techniques. Regional and global GNSS networks are nowadays operationally used to provide Integrated Water Vapor (IWV) information with high temporal resolution above the individual stations. Corresponding data products are used to improve the day-by-day weather prediction of leading forecast centers. Selected stations from these networks can additionally be used to derive the soil moisture in the vicinity of the receivers. Such parallel measurement of IWV and soil moisture using a single measuring device provides a unique possibility to analyze water fluxes between the atmosphere and the land surface. We installed an advanced experimental GNSS setup for hydrology at the field research station of the Leibniz Institute for Agricultural Engineering and Bioeconomy in Marquardt, around 30 km west of Berlin, Germany. The setup includes several GNSS receivers, various Time Domain Reflectometry (TDR) sensors at different depths for soil moisture measurement, and a meteorological station. The setup was mainly installed to develop and improve GNSS-based techniques for soil moisture determination and to analyze GNSS IWV and soil moisture in parallel on a long-term perspective. We introduce initial results from more than two years of measurements. The comparison at the Marquardt station shows good agreement (correlation 0.79) between the GNSS-derived soil moisture and the TDR measurements.
Apollo 9 Mission image - Top view of the Lunar Module (LM) spacecraft from the Command Module (CM)
1969-03-03
The Lunar Module (LM) 3 "Spider", still attached to the Saturn V third (S-IVB) stage, is photographed from the Command/Service Module (CSM) "Gumdrop" on the first day of the Apollo 9 Earth-orbital mission. This picture was taken following CSM/LM-S-IVB separation, and prior to LM extraction from the S-IVB. The Spacecraft Lunar Module Adapter (SLA) panels have already been jettisoned. Film magazine was A, film type was SO-368 Ektachrome with 0.460 - 0.710 micrometers film / filter transmittance response and haze filter, 80mm lens.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
This report on evaporite mineralization was completed as an Ancillary Work Plan for the Applied Studies and Technology program under the U.S. Department of Energy (DOE) Office of Legacy Management (LM). This study reviews all LM sites under Title I and Title II of the Uranium Mill Tailings Radiation Control Act (UMTRCA) and one Decontamination and Decommissioning site to provide (1) a summary of which sites have evaporite deposits, (2) any available quantitative geochemical and mineralogical analyses, and (3) references to relevant reports. In this study, “evaporite” refers to any secondary mineral precipitate that occurs due to a loss of water through evaporative processes. This includes efflorescent salt crusts, where this term refers to a migration of dissolved constituents to the surface with a resulting salt crust, where “salt” can refer to any secondary precipitate, regardless of constituents. The potential for the formation of evaporites at LM sites has been identified, and may have relevance to plume persistence issues. Evaporite deposits have the potential to concentrate and store contaminants at LM sites that could later be re-released. These deposits can also provide a temporary storage mechanism for carbonate, chloride, and sulfate salts along with uranium and other contaminants of concern (COCs). Identification of sites with evaporites will be used in a new technical task plan (TTP), Persistent Secondary Contaminant Sources (PeSCS), for any proposed additional sampling and analyses. This additional study is currently under development and will focus on determining if the dissolution of evaporites has the potential to hinder natural flushing strategies and impact plume persistence. This report provides an initial literature review on evaporites followed by details for each site with identified evaporites. The final summary includes a table listing of all relevant LM sites regardless of evaporite identification.
NASA Astrophysics Data System (ADS)
Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.
2017-03-01
Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for in vivo characterization of pathological modifications in epithelial tissues such as cancer. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires considering three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combination of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized and the optimization method is discussed.
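The global strategy described above — repeated calls to a local Levenberg-Marquardt solver from randomly sampled initial parameters, keeping the best solution — can be sketched as follows. The two-parameter forward model is a deliberately simplified stand-in for the diffusion-approximation solution, and all names and values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model standing in for the diffusion approximation:
# "reflectance" as a function of two optical parameters (mu_a, mu_s).
def forward(params, r):
    mu_a, mu_s = params
    return np.exp(-mu_a * r) * mu_s / (1.0 + r)

r = np.linspace(0.5, 5.0, 20)            # source-detector distances (a.u.)
true_params = np.array([0.8, 2.5])
data = forward(true_params, r)           # noiseless synthetic measurement

def residuals(p):
    return forward(p, r) - data

# Multi-start strategy: repeated Levenberg-Marquardt runs from random
# initial points inside a plausible box; keep the lowest-cost solution.
rng = np.random.default_rng(1)
best = None
for _ in range(20):
    p0 = rng.uniform([0.1, 0.5], [2.0, 5.0])
    sol = least_squares(residuals, p0, method='lm')
    if best is None or sol.cost < best.cost:
        best = sol
```

The same skeleton accepts `method='trf'` for SciPy's trust-region reflective solver, allowing the two local methods compared in the paper to be swapped under one multi-start wrapper.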
Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications
NASA Astrophysics Data System (ADS)
He, K.; Zhu, W. D.
2011-07-01
A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistical function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
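The logistic transformation idea above — mapping a bounded damage extent onto an unconstrained variable so that a standard least-squares solver needs no explicit constraints — can be sketched as below. The three-frequency "structure" and its stiffness-loss law are hypothetical, chosen only to make the mechanics concrete, and SciPy's 'lm' method stands in for the trust-region Levenberg-Marquardt solver described in the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

# Damage extent d must lie in (0, 1); optimize an unconstrained x with
# d = logistic(x), so every real x corresponds to an admissible extent.
def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def inverse_logistic(d):
    return np.log(d / (1.0 - d))

# Hypothetical structure: natural frequencies drop as damage grows.
base_freqs = np.array([10.0, 25.0, 42.0])       # undamaged frequencies (Hz)

def model_freqs(d):
    return base_freqs * (1.0 - 0.3 * d)         # assumed stiffness-loss law

measured = model_freqs(0.4)                     # synthetic "measurement"

def residuals(x):
    # Residuals in the unconstrained variable; d stays in (0, 1) by design.
    return model_freqs(logistic(x[0])) - measured

sol = least_squares(residuals, x0=[inverse_logistic(0.1)], method='lm')
d_hat = logistic(sol.x[0])                      # recovered damage extent
```

The transformation guarantees every iterate maps to a physically admissible damage extent, which is the point of converting the constrained problem to an unconstrained one.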
Intelligence system based classification approach for medical disease diagnosis
NASA Astrophysics Data System (ADS)
Sagir, Abdu Masanawa; Sathasivam, Saratha
2017-08-01
The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after undergoing certain surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease depends more on the physician's intuition, experience and skill in comparing current indicators with previous ones than on knowledge-rich data hidden in a database. This is a very crucial and challenging task. The goal is to predict patient condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework describes the methodology for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms — least-squares estimation combined with modified Levenberg-Marquardt, and gradient descent — that can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test datasets from the mammographic mass and Haberman's survival datasets obtained from the benchmarked datasets of the University of California at Irvine (UCI) machine learning repository. The robustness of the performance measures of total accuracy, sensitivity and specificity is examined. In comparison, the proposed method achieves superior performance when compared to the conventional ANFIS based on the gradient descent algorithm and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor with 2.80 GHz speed and 2.0 GB of RAM.
NASA Astrophysics Data System (ADS)
Wei, B. G.; Wu, X. Y.; Yao, Z. F.; Huang, H.
2017-11-01
Transformers are essential devices of the power system. The accurate computation of the highest temperature (HST) of a transformer's windings is very significant, as the HST is a fundamental parameter in controlling the load operation mode and influences the lifetime of the insulation. Based on an analysis of the heat transfer processes and the thermal characteristics inside transformers, the influence of factors such as sunshine and external wind speed on oil-immersed transformers is taken into consideration. Experimental data and a neural network are used for modeling and testing of the HST, and furthermore, investigations are conducted on the optimization of the structure and algorithms of the neural network. Comparison is made between the measured values and the values calculated by using the algorithm recommended by IEC 60076 and by using the neural network algorithm proposed by the authors; the comparison shows that the value computed with the neural network algorithm approximates the measured value better than the value computed with the IEC 60076 algorithm.
Matsumoto, Yoshifumi; Hiramatsu, Chihiro; Matsushita, Yuka; Ozawa, Norihiro; Ashino, Ryuichi; Nakata, Makiko; Kasagi, Satoshi; Di Fiore, Anthony; Schaffner, Colleen M; Aureli, Filippo; Melin, Amanda D; Kawamura, Shoji
2014-01-01
New World monkeys exhibit prominent colour vision variation due to allelic polymorphism of the long-to-middle wavelength (L/M) opsin gene. The known spectral variation of L/M opsins in primates is broadly determined by amino acid composition at three sites: 180, 277 and 285 (the ‘three-sites’ rule). However, two L/M opsin alleles found in the black-handed spider monkeys (Ateles geoffroyi) are known exceptions, presumably due to novel mutations. The spectral separation of the two L/M photopigments is 1.5 times greater than expected based on the ‘three-sites’ rule. Yet the consequence of this for the visual ecology of the species is unknown, as is the evolutionary mechanism by which spectral shift was achieved. In this study, we first examine L/M opsins of two other Atelinae species, the long-haired spider monkeys (A. belzebuth) and the common woolly monkeys (Lagothrix lagotricha). By a series of site-directed mutagenesis, we show that a mutation Y213D (tyrosine to aspartic acid at site 213) in the ancestral opsin of the two alleles enabled N294K, which occurred in one allele of the ateline ancestor and increased the spectral separation between the two alleles. Second, by modelling the chromaticity of dietary fruits and background leaves in a natural habitat of spider monkeys, we demonstrate that chromatic discrimination of fruit from leaves is significantly enhanced by these mutations. This evolutionary renovation of L/M opsin polymorphism in atelines illustrates a previously unappreciated dynamism of opsin genes in shaping primate colour vision. PMID:24612406
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szadkowski, Zbigniew; Glas, Dariusz; Pytel, Krzysztof
Neutrinos play a fundamental role in the understanding of the origin of ultra-high-energy cosmic rays. They interact through charged and neutral currents in the atmosphere, generating extensive air showers. However, the very low rate of events potentially generated by neutrinos is a significant challenge for any detection technique and requires both sophisticated algorithms and high-resolution hardware. A trigger based on an artificial neural network was implemented into the Cyclone® V E FPGA 5CEFA9F31I7, the heart of the prototype Front-End boards developed for tests of new algorithms in the Pierre Auger surface detectors. Showers for muon and tau neutrino initiating particles at various altitudes, angles and energies were simulated on the CORSIKA and Offline platforms, giving patterns of ADC traces in the Auger water Cherenkov detectors. The 3-layer 12-8-1 neural network was trained in MATLAB on simulated ADC traces using the Levenberg-Marquardt algorithm. Results show that the probability of ADC trace generation is very low due to the small neutrino cross-section. Nevertheless, ADC traces for 1-10 EeV showers, if they occur, are relatively short and can be analyzed by a 16-point input algorithm. We optimized the coefficients from MATLAB to get a maximal range of potentially registered events, and for fixed-point FPGA processing to minimize calculation errors. New sophisticated triggers implemented in Cyclone® V E FPGAs with a large amount of DSP blocks and embedded memory, running with 120 - 160 MHz sampling, may support a discovery of neutrino events in the Pierre Auger Observatory. (authors)
PONS2train: tool for testing the MLP architecture and local training methods for runoff forecast
NASA Astrophysics Data System (ADS)
Maca, P.; Pavlasek, J.; Pech, P.
2012-04-01
The purpose of the presented poster is to introduce PONS2train, developed for runoff prediction via the multilayer perceptron (MLP). The software application enables the implementation of 12 different MLP transfer functions, the comparison of 9 local training algorithms, and the evaluation of MLP performance via 17 selected model evaluation metrics. The PONS2train software is written in the C++ programming language. Its implementation consists of 4 classes. The NEURAL_NET and NEURON classes implement the MLP; the CRITERIA class estimates model evaluation metrics for model performance evaluation via testing and validation datasets. The DATA_PATTERN class prepares the validation, testing and calibration datasets. The software application uses the LAPACK, BLAS and ARMADILLO C++ linear algebra libraries. PONS2train implements the following first-order local optimization algorithms: standard on-line and batch back-propagation with learning rate combined with momentum and its variants with a regularization term, Rprop, and standard batch back-propagation with variable momentum and learning rate. The second-order local training algorithms are the Levenberg-Marquardt algorithm with and without regularization, and four variants of scaled conjugate gradients. Other important PONS2train features are multi-run training, weight saturation control, early stopping of trainings, and MLP weight analysis. Weight initialization is done via two different methods: random sampling from a uniform distribution on an open interval, or the Nguyen-Widrow method. Data patterns can be transformed via linear and nonlinear transformations. The runoff forecast case study focuses on the PONS2train implementation and shows different aspects of MLP training, MLP architecture estimation, neural network weight analysis and model uncertainty estimation.
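A second-order training loop of the Levenberg-Marquardt kind listed above can be illustrated for a tiny MLP. This is a generic numpy sketch with a finite-difference Jacobian and a simple damping schedule, not PONS2train's C++ implementation; the network size and toy regression data are arbitrary assumptions.

```python
import numpy as np

# Toy regression data: fit y = sin(pi * x) on 20 points.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 1))
y = np.sin(np.pi * X[:, 0])

n_hidden = 5  # 1-hidden-layer MLP: tanh hidden units, linear output

def unpack(w):
    W1 = w[:n_hidden].reshape(n_hidden, 1)
    b1 = w[n_hidden:2 * n_hidden]
    W2 = w[2 * n_hidden:3 * n_hidden]
    b2 = w[-1]
    return W1, b1, W2, b2

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)
    return h @ W2 + b2

def residuals(w):
    return predict(w, X) - y

w = rng.normal(0, 0.5, 3 * n_hidden + 1)
mu = 1e-2                                  # LM damping factor
for _ in range(200):
    r = residuals(w)
    # Finite-difference Jacobian dr/dw (analytic backprop would be faster)
    J = np.empty((len(r), len(w)))
    eps = 1e-6
    for j in range(len(w)):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp) - r) / eps
    # LM update: solve (J^T J + mu I) dw = -J^T r
    dw = np.linalg.solve(J.T @ J + mu * np.eye(len(w)), -J.T @ r)
    if np.sum(residuals(w + dw) ** 2) < np.sum(r ** 2):
        w = w + dw
        mu *= 0.7                          # step accepted: reduce damping
    else:
        mu *= 2.0                          # step rejected: increase damping
```

As `mu` grows the update approaches damped gradient descent, and as it shrinks it approaches the Gauss-Newton step — the interpolation that makes LM a popular second-order trainer for small networks.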
Analysis of retinal and cortical components of Retinex algorithms
NASA Astrophysics Data System (ADS)
Yeonan-Kim, Jihyun; Bertalmío, Marcelo
2017-05-01
Following Land and McCann's first proposal of the Retinex theory, numerous Retinex algorithms that differ considerably both algorithmically and functionally have been developed. We clarify the relationships among various Retinex families by associating their spatial processing structures to the neural organizations in the retina and the primary visual cortex in the brain. Some of the Retinex algorithms have a retina-like processing structure (Land's designator idea and NASA Retinex), and some show a close connection with the cortical structures in the primary visual area of the brain (two-dimensional L&M Retinex). A third group of Retinexes (the variational Retinex) manifests an explicit algorithmic relation to Wilson-Cowan's physiological model. We intend to overview these three groups of Retinexes with the frame of reference in the biological visual mechanisms.
Shiller, Jason; Van de Wouw, Angela P.; Taranto, Adam P.; Bowen, Joanna K.; Dubois, David; Robinson, Andrew; Deng, Cecilia H.; Plummer, Kim M.
2015-01-01
Venturia inaequalis and V. pirina are Dothideomycete fungi that cause apple scab and pear scab disease, respectively. Whole genome sequencing of V. inaequalis and V. pirina isolates has revealed predicted proteins with sequence similarity to AvrLm6, a Leptosphaeria maculans effector that triggers a resistance response in Brassica napus and B. juncea carrying the resistance gene, Rlm6. AvrLm6-like genes are present as large families (>15 members) in all sequenced strains of V. inaequalis and V. pirina, while in L. maculans, only AvrLm6 and a single paralog have been identified. The Venturia AvrLm6-like genes are located in gene-poor regions of the genomes, and mostly in close proximity to transposable elements, which may explain the expansion of these gene families. An AvrLm6-like gene from V. inaequalis with the highest sequence identity to AvrLm6 was unable to trigger a resistance response in Rlm6-carrying B. juncea. RNA-seq and qRT-PCR gene expression analyses, of in planta- and in vitro-grown V. inaequalis, has revealed that many of the AvrLm6-like genes are expressed during infection. An AvrLm6 homolog from V. inaequalis that is up-regulated during infection was shown (using an eYFP-fusion protein construct) to be localized to the sub-cuticular stroma during biotrophic infection of apple hypocotyls. PMID:26635823
Frequency guided methods for demodulation of a single fringe pattern.
Wang, Haixia; Kemao, Qian
2009-08-17
Phase demodulation from a single fringe pattern is a challenging task but of great interest. A frequency-guided regularized phase tracker and a frequency-guided sequential demodulation method with Levenberg-Marquardt optimization are proposed to demodulate a single fringe pattern. A demodulation path guided by the local frequency, from the highest to the lowest, is applied in both methods. Since critical points have low local frequency values, they are processed last, so that the spurious sign problem caused by these points is avoided. These two methods can be considered as alternatives to the effective fringe-follower regularized phase tracker. Demodulation results from one computer-simulated and two experimental fringe patterns using the proposed methods are demonstrated. (c) 2009 Optical Society of America
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lessa, L. L.; Martins, A. S.; Fellows, C. E., E-mail: fellows@if.uff.br
2015-10-28
In this note, three vibrational bands of the electronic transition A²Σ⁺-X²Π of the N₂O⁺ radical (000-100, 100-100, and 001-101) were theoretically analysed. Starting from Hamiltonian models proposed for this kind of molecule, their parameters were calculated using a Levenberg-Marquardt fit procedure in order to reduce the root mean square deviation from the experimental transitions to below 0.01 cm⁻¹. The main objective of this work is to obtain new and reliable values for the rotational constant B″ and the spin-orbit interaction parameter A of the analysed vibrational levels of the X²Π electronic state of this molecule.
NASA Astrophysics Data System (ADS)
Maulidah, Rifa'atul; Purqon, Acep
2016-08-01
Mendong (Fimbristylis globulosa) has potential industrial applications. We investigate a predictive model for heat and mass transfer in the drying kinetics of Mendong. We experimentally dry the Mendong using a microwave oven. In this study, we analyze three mathematical equations and a feed-forward neural network (FNN) with back-propagation to describe the drying behavior of Mendong. Our results show that the experimental data and the artificial neural network model are in good agreement, better than the mathematical equation approach. The best FNN for the prediction is the 3-20-1-1 structure with the Levenberg-Marquardt training function. This drying kinetics modeling is potentially applicable to determining the optimal parameters during Mendong drying and to estimating and controlling the drying process.
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * Mathematica® Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
Moros, J; Serrano, J; Gallego, F J; Macías, J; Laserna, J J
2013-06-15
During recent years laser-induced breakdown spectroscopy (LIBS) has been considered one of the techniques with the greatest ability for trace detection of explosives. However, despite the high sensitivity exhibited for this application, LIBS suffers from limited selectivity due to difficulties in assigning the molecular origin of the observed spectral emissions. This circumstance makes the recognition of latent fingerprints a challenging problem. In the present manuscript the sorting of six explosives (chloratite, ammonal, DNT, TNT, RDX and PETN) against a broad list of potentially interfering harmless substances (butter, fuel oil, hand cream, olive oil, …), all of them in the form of fingerprints deposited on the surfaces of objects for courier services, has been carried out. When LIBS information is processed through a multi-stage algorithm built from a suitable combination of 3 learning classifiers, an unknown fingerprint may be labeled into a particular class. Neural network classifiers trained by the Levenberg-Marquardt rule make their decisions within 3D scatter plots projected onto the subspace of the most useful features extracted from the LIBS spectra. Experimental results demonstrate that the presented algorithm sorts fingerprints according to their hazardous character, even though their spectral information is virtually identical in appearance, with rates of false negatives and false positives not exceeding 10%. These achievements mean a step forward in the technology readiness level of LIBS for this complex application related to defense, homeland security and force protection. Copyright © 2013 Elsevier B.V. All rights reserved.
Yetilmezsoy, Kaan; Demirel, Sevgi
2008-05-30
A three-layer artificial neural network (ANN) model was developed to predict the efficiency of Pb(II) ion removal from aqueous solution by Antep pistachio (Pistacia vera L.) shells, based on 66 experimental sets obtained in a laboratory batch study. The effects of operational parameters such as adsorbent dosage, initial concentration of Pb(II) ions, initial pH, operating temperature, and contact time were studied to optimise the conditions for maximum removal of Pb(II) ions. On the basis of the batch test results, optimal operating conditions were determined to be an initial pH of 5.5, an adsorbent dosage of 1.0 g, an initial Pb(II) concentration of 30 ppm, and a temperature of 30 degrees C. Experimental results showed that a contact time of 45 min was generally sufficient to achieve equilibrium. After back-propagation (BP) training combined with principal component analysis (PCA), the ANN model was able to predict adsorption efficiency using a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons and a linear transfer function (purelin) at the output layer. The Levenberg-Marquardt algorithm (LMA) was found to be the best of 11 BP algorithms, with a minimum mean squared error (MSE) of 0.000227875. The linear regression between the network outputs and the corresponding targets was proven to be satisfactory, with a correlation coefficient of about 0.936 for the five model variables used in this study.
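Several of the abstracts above train a small tansig/purelin network with Levenberg-Marquardt. The sketch below shows what that training loop actually does, on toy 1-D data rather than the adsorption dataset: each iteration solves the damped normal equations (JᵀJ + μI)Δw = -Jᵀr and adapts μ. The network size and data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x), the kind of smooth 1-D mapping a small
# tansig-hidden / purelin-output network handles well.
X = np.linspace(-np.pi, np.pi, 40)[:, None]
y = np.sin(X).ravel()

H = 5  # hidden neurons (hypothetical size; the abstracts use problem-specific sizes)

def unpack(w):
    W1 = w[:H].reshape(H, 1); b1 = w[H:2*H]
    W2 = w[2*H:3*H];          b2 = w[3*H]
    return W1, b1, W2, b2

def residuals(w):
    W1, b1, W2, b2 = unpack(w)
    hidden = np.tanh(X @ W1.T + b1)     # tansig hidden layer
    return hidden @ W2 + b2 - y         # purelin output minus targets

def jacobian(w, eps=1e-6):
    # Finite-difference Jacobian; LM needs J of the residual vector explicitly.
    r0 = residuals(w)
    J = np.empty((r0.size, w.size))
    for i in range(w.size):
        wp = w.copy(); wp[i] += eps
        J[:, i] = (residuals(wp) - r0) / eps
    return J

# Levenberg-Marquardt loop: solve (J^T J + mu I) dw = -J^T r, adapt mu.
w = rng.normal(scale=0.5, size=3*H + 1)
mu = 1e-2
for _ in range(200):
    r = residuals(w); J = jacobian(w)
    dw = np.linalg.solve(J.T @ J + mu*np.eye(w.size), -J.T @ r)
    if np.sum(residuals(w + dw)**2) < np.sum(r**2):
        w, mu = w + dw, max(mu*0.7, 1e-9)   # accept step, trust the model more
    else:
        mu *= 2.0                           # reject step, lean toward gradient descent
mse = np.mean(residuals(w)**2)
```

The damping parameter μ interpolates between Gauss-Newton (small μ) and gradient descent (large μ), which is why LM converges faster than plain back-propagation on small networks.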
NASA Astrophysics Data System (ADS)
Sagir, Abdu Masanawa; Sathasivam, Saratha
2017-08-01
Medical diagnosis is the process of determining which disease or medical condition explains a person's observable signs and symptoms. Diagnosis of most diseases is very expensive, as many tests are required for prediction. This paper introduces an improved hybrid approach for training an adaptive network based fuzzy inference system with a modified Levenberg-Marquardt algorithm, using an analytical derivation scheme to compute the Jacobian matrix. The goal is to investigate how certain diseases are affected by a patient's characteristics and measurements, such as abnormalities, or to reach a decision about the presence or absence of a disease. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system that classifies and predicts patient condition using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. The proposed hybridised intelligent system was tested with the Pima Indian Diabetes dataset obtained from the University of California at Irvine (UCI) machine learning repository. The proposed method's performance was evaluated on training and test datasets, and an attempt was made to quantify its effectiveness in terms of total accuracy, sensitivity and specificity. In comparison, the proposed method achieves superior performance to the conventional gradient-descent-based ANFIS and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor running at 2.80 GHz and 2.0 GB of RAM.
Simulation of a fast diffuse optical tomography system based on radiative transfer equation
NASA Astrophysics Data System (ADS)
Motevalli, S. M.; Payani, A.
2016-12-01
Studies show that near-infrared (NIR) light (wavelengths between 700 nm and 1300 nm) undergoes two interactions, absorption and scattering, when it penetrates tissue. Since scattering is the predominant interaction, the calculation of light distribution in the tissue and the image reconstruction of absorption and scattering coefficients are very complicated. Analytical and numerical methods, such as the radiative transport equation and the Monte Carlo method, have been used to simulate light penetration in tissue. Recently, several investigators have tried to develop diffuse optical tomography systems. In these systems, NIR light penetrates and passes through the tissue; the light exiting the tissue is then measured by NIR detectors placed around it. These data are collected from all the detectors and transferred to the computational parts (hardware and software), which produce a cross-sectional image of the tissue after some computational processing. In this paper, the results of the simulation of a diffuse optical tomography system are presented. The simulation involves two stages: (a) simulation of the forward problem (light penetration in the tissue), performed by solving the diffusion approximation equation in the stationary state using FEM; and (b) simulation of the inverse problem (image reconstruction), performed with the Broyden quasi-Newton optimization algorithm. This method of image reconstruction is faster than other Newton-based optimization algorithms, such as the Levenberg-Marquardt one.
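The speed advantage claimed for Broyden's quasi-Newton method comes from never recomputing the Jacobian: it is estimated once and then only rank-1 updated from successive residuals. A minimal sketch on a toy 2x2 nonlinear system (a stand-in for the reconstruction equations, not the actual tomography inverse problem):

```python
import numpy as np

# Toy nonlinear system: intersection of the circle x^2 + y^2 = 4 with the
# line x = y. Root: x = y = sqrt(2).
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] - x[1]])

def fd_jacobian(F, x, eps=1e-7):
    # One finite-difference Jacobian at the start; Broyden never recomputes it.
    f0 = F(x)
    J = np.empty((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (F(xp) - f0) / eps
    return J

x = np.array([1.0, 0.5])
B = fd_jacobian(F, x)                    # initial Jacobian approximation
f = F(x)
for _ in range(50):
    dx = np.linalg.solve(B, -f)          # quasi-Newton step
    x_new = x + dx
    f_new = F(x_new)
    # Broyden's "good" rank-1 update: B += (df - B dx) dx^T / (dx . dx)
    B += np.outer(f_new - f - B @ dx, dx) / (dx @ dx)
    x, f = x_new, f_new
    if np.linalg.norm(f) < 1e-12:
        break
```

Levenberg-Marquardt, by contrast, rebuilds J (and JᵀJ) at every iteration, which is why a Broyden-based reconstruction can be faster per step.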
Stochastic approach to data analysis in fluorescence correlation spectroscopy.
Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo
2006-09-21
Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fit using standard local search techniques, for example the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts result. For known fit models, and with user experience of the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and at the same time computationally faster for multiparameter evaluations. We present a performance study of PGSL for two-component fits with triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented. The robustness of PGSL for parameter estimation on noisy experimental data is also verified. We further extend the scope of PGSL with a hybrid analysis wherein the output of PGSL is fed as the initial guess to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
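The hybrid scheme, global sampling followed by an ML polish, can be sketched as follows. PGSL itself is not reimplemented here: plain uniform random sampling stands in for the global stage, and the model is a simplified two-component FCS-like autocorrelation with hypothetical parameter values. Timescales are fit in log space so they stay positive during the unbounded LM polish.

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified two-component autocorrelation (a stand-in for a full FCS model
# with triplet term): G(tau) = a1/(1+tau/t1) + a2/(1+tau/t2), t = exp(lt).
def model(p, tau):
    a1, lt1, a2, lt2 = p
    return a1 / (1.0 + tau / np.exp(lt1)) + a2 / (1.0 + tau / np.exp(lt2))

tau = np.logspace(-3, 2, 120)                       # lag times (arbitrary units)
true_p = np.array([0.6, np.log(0.05), 0.4, np.log(5.0)])  # hypothetical values
rng = np.random.default_rng(3)
data = model(true_p, tau) + rng.normal(0, 0.005, tau.size)

resid = lambda p: model(p, tau) - data

# Stage 1: global random sampling -- no initial guess required (PGSL stand-in).
amps = rng.uniform(0.05, 1.0, size=(2000, 2))
logts = rng.uniform(np.log(1e-3), np.log(50.0), size=(2000, 2))
samples = np.column_stack([amps[:, 0], logts[:, 0], amps[:, 1], logts[:, 1]])
best = min(samples, key=lambda p: np.sum(resid(p)**2))

# Stage 2: Marquardt-Levenberg polish starting from the best global sample.
fit = least_squares(resid, best, method='lm').x
ss = np.sum(resid(fit)**2)               # near the noise floor if converged
```

The global stage only needs to land inside the right basin of attraction; the ML stage then supplies the precision that pure sampling lacks.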
Soil hydraulic material properties and layered architecture from time-lapse GPR
NASA Astrophysics Data System (ADS)
Jaumann, Stefan; Roth, Kurt
2018-04-01
Quantitative knowledge of the subsurface material distribution and its effective soil hydraulic material properties is essential to predict soil water movement. Ground-penetrating radar (GPR) is a noninvasive and nondestructive geophysical measurement method that is suitable to monitor hydraulic processes. Previous studies showed that the GPR signal from a fluctuating groundwater table is sensitive to the soil water characteristic and the hydraulic conductivity function. In this work, we show that the GPR signal originating from both the subsurface architecture and the fluctuating groundwater table is suitable to estimate, with inversion methods, the position of layers within the subsurface architecture together with the associated effective soil hydraulic material properties. To that end, we parameterize the subsurface architecture, solve the Richards equation, convert the resulting water content to relative permittivity with the complex refractive index model (CRIM), and solve Maxwell's equations numerically. In order to analyze the GPR signal, we implemented a new heuristic algorithm that detects relevant signals in the radargram (events) and extracts the corresponding signal travel time and amplitude. This algorithm is applied to simulated as well as measured radargrams, and the detected events are associated automatically. Using events instead of the full wave regularizes the inversion by focusing it on the relevant measurement signal. For optimization, we use a global-local approach with preconditioning: starting from an ensemble of initial parameter sets drawn with a Latin hypercube algorithm, we sequentially couple a simulated annealing algorithm with a Levenberg-Marquardt algorithm. The method is applied to synthetic as well as measured data from the ASSESS test site. We show that the method yields reasonable estimates for the position of the layers as well as for the soil hydraulic material properties by comparing the results to references derived from ground
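The sequential coupling of simulated annealing with Levenberg-Marquardt can be sketched on a toy least-squares objective (the Himmelblau system, not the GPR inversion): scipy's `dual_annealing` provides the global stage, and `least_squares` the local LM polish.

```python
import numpy as np
from scipy.optimize import dual_annealing, least_squares

# Toy residual vector whose squared norm is the Himmelblau function, a
# standard multi-minimum test objective (a stand-in for the GPR misfit).
def residuals(p):
    x, y = p
    return np.array([x**2 + y - 11.0,
                     x + y**2 - 7.0])

cost = lambda p: np.sum(residuals(p)**2)

# Stage 1: simulated annealing (global), coarse budget.
coarse = dual_annealing(cost, bounds=[(-6, 6), (-6, 6)], maxiter=100, seed=4)

# Stage 2: Levenberg-Marquardt (local) polish from the annealed point.
polished = least_squares(residuals, coarse.x, method='lm')
```

The annealing stage escapes the multiple basins; LM then converges to the nearby minimum at a rate the stochastic stage cannot match.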
Postflight analysis of the EVCS-LM communications link for the Apollo 15 mission
NASA Technical Reports Server (NTRS)
Royston, C. L., Jr.; Eggers, D. S.
1972-01-01
Data from the Apollo 15 mission were used to compare the actual performance of the EVCS to LM communications link with the preflight performance predictions. Based on the results of the analysis, the following conclusions were made: (1) The radio transmission loss data show good correlation with predictions during periods when the radio line of sight was obscured. (2) The technique of predicting shadow losses due to obstacles in the radio line of sight provides a good estimate of the actual shadowing loss. (3) When the transmitter was on an upslope, the radio transmission loss approached the free space loss values as the line of sight to the LM was regained.
Prediction Model for Predicting Powdery Mildew using ANN for Medicinal Plant— Picrorhiza kurrooa
NASA Astrophysics Data System (ADS)
Shivling, V. D.; Ghanshyam, C.; Kumar, Rakesh; Kumar, Sanjay; Sharma, Radhika; Kumar, Dinesh; Sharma, Atul; Sharma, Sudhir Kumar
2017-02-01
A plant disease forecasting system is important because it can be used to predict disease and to warn farmers in advance so that they can protect their crop from infection. The forecasting system predicts the risk of crop infection from the environmental factors that favor the germination of disease. In this study an artificial neural network based system for predicting the risk of powdery mildew in Picrorhiza kurrooa was developed, using the Levenberg-Marquardt backpropagation algorithm with a single hidden layer of ten nodes. Temperature and duration of wetness are the major environmental factors that favor infection. Experimental data were used as a training set, and a percentage of the data was used for testing and validation. The performance of the system was measured by the coefficient of correlation (R), coefficient of determination (R²), mean square error and root mean square error. An interface was developed for simulating the network: given temperature and wetness duration, it predicts the level of risk for those input values.
River velocities from sequential multispectral remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Wei; Mied, Richard P.
2013-06-01
We address the problem of extracting surface velocities from a pair of multispectral remote sensing images over rivers using a new nonlinear multiple-tracer form of the global optimal solution (GOS). The derived velocity field is a valid solution across the image domain to the nonlinear system of equations obtained by minimizing a cost function inferred from the conservation constraint equations for multiple tracers. This is done by deriving an iteration equation for the velocity, based on the multiple-tracer displaced frame difference equations and a local approximation to the velocity field. The number of velocity equations is greater than the number of velocity components, and thus over-constrains the solution. The iterative technique uses Gauss-Newton and Levenberg-Marquardt methods and our own algorithm of progressive relaxation of the over-constraint. We demonstrate the nonlinear multiple-tracer GOS technique with sequential multispectral Landsat and ASTER images over a portion of the Potomac River in MD/VA, and derive a dense field of accurate velocity vectors. We compare the GOS river velocities with those from over 12 years of data at four NOAA reference stations, and find good agreement. We discuss how to find the appropriate spatial and temporal resolutions to optimize the technique for specific rivers.
NASA Astrophysics Data System (ADS)
Garcia, Xavier; Boerner, David; Pedersen, Laust B.
2003-09-01
We have developed a Marquardt-Levenberg inversion algorithm incorporating the effects of near-surface galvanic distortion into the electromagnetic (EM) response of a layered earth model. Different tests on synthetic model responses suggest that for the grounded source method, the magnetic distortion does not vanish for low frequencies. Including this effect is important, although to date it has been neglected. We have inverted 10 stations of controlled-source audio-magnetotellurics (CSAMT) data recorded near the Buchans Mine, Newfoundland, Canada. The Buchans Mine was one of the richest massive sulphide deposits in the world, and is situated in a highly resistive volcanogenic environment, substantially modified by thrust faulting. Preliminary work in the area demonstrated that the EM fields observed at adjacent stations show large differences due to the existence of mineralized fracture zones and variable overburden thickness. Our inversion results suggest a three-layered model that is appropriate for the Buchans Mine. The resistivity model correlates with the seismic reflection interpretation that documents the existence of two thrust packages. The distortion parameters obtained from the inversion concur with the synthetic studies that galvanic magnetic distortion is required to interpret the Buchans data since the magnetic component of the galvanic distortion does not vanish at low frequency.
Davidson, Shaun M; Docherty, Paul D; Murray, Rua
2017-03-01
Parameter identification is an important and widely used process across the field of biomedical engineering. However, it is susceptible to a number of potential difficulties, such as parameter trade-off, which can cause premature convergence at non-optimal parameter values. The proposed Dimensional Reduction Method (DRM) addresses this issue by iteratively reducing the dimension of the hyperplanes where trade-off occurs, and running subsequent identification processes within these hyperplanes. The DRM was validated using clinical data to optimize 4 parameters of the widely used Bergman Minimal Model of glucose and insulin kinetics, as well as in-silico data to optimize 5 parameters of the Pulmonary Recruitment (PR) Model. Results were compared with the popular Levenberg-Marquardt (LMQ) Algorithm using a Monte-Carlo methodology, with both methods afforded equivalent computational resources. The DRM converged to a lower or equal residual value in all tests run using the Bergman Minimal Model and actual patient data. For the PR model, the DRM attained significantly lower overall median parameter error values and lower residuals in the vast majority of tests. This shows the DRM has the potential to provide better resolution of optimum parameter values for the variety of biomedical models in which significant levels of parameter trade-off occur. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discussed the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acids, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.
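The Levenberg-Marquardt regression step can be illustrated on a single monoprotic acid group (the authors fit five groups simultaneously; the pKa and site density below are hypothetical). For this unbounded problem, scipy's `curve_fit` uses LM.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated titration of ONE monoprotic surface group (the paper fits five).
# Deprotonated amount vs pH follows a Henderson-Hasselbalch sigmoid.
def deprotonated(pH, pKa, total_mmol_per_g):
    return total_mmol_per_g / (1.0 + 10.0**(pKa - pH))

pH = np.linspace(2, 12, 60)
true_pKa, true_total = 4.8, 5.0          # hypothetical carboxylic-type site
rng = np.random.default_rng(1)
data = deprotonated(pH, true_pKa, true_total) + rng.normal(0, 0.05, pH.size)

# Levenberg-Marquardt nonlinear regression (curve_fit's default here).
popt, pcov = curve_fit(deprotonated, pH, data, p0=[6.0, 3.0])
pKa_fit, total_fit = popt
```

The multi-group fit in the paper is the same idea with a sum of five such sigmoids and ten parameters.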
NASA Astrophysics Data System (ADS)
Fang, Kaizheng; Mu, Daobin; Chen, Shi; Wu, Borong; Wu, Feng
2012-06-01
In this study, a prediction model based on an artificial neural network is constructed for surface temperature simulation of a nickel-metal hydride battery. The model is developed from a back-propagation network trained by the Levenberg-Marquardt algorithm. Under each ambient temperature of 10 °C, 20 °C, 30 °C and 40 °C, an 8 Ah cylindrical Ni-MH battery is charged at rates of 1 C, 3 C and 5 C to an SOC of 110% in order to provide data for model training. The linear regression method, together with mean square error and absolute error, is adopted to check the quality of the training. It is shown that the constructed model is of excellent training quality, guaranteeing prediction accuracy. The surface temperature of the battery during charging is predicted by the model under ambient temperatures of 50 °C, 60 °C and 70 °C, and the results are validated in good agreement with experimental data. The battery surface temperature is calculated to exceed 90 °C under an ambient temperature of 60 °C if the battery is overcharged at 5 C, which might cause battery safety issues.
Camera flash heating of a three-layer solid composite: An approximate solution
NASA Astrophysics Data System (ADS)
Jibrin, Sani; Moksin, Mohd Maarof; Husin, Mohd Shahril; Zakaria, Azmi; Hassan, Jumiah; Talib, Zainal Abidin
2014-03-01
Camera flash heating and the subsequent thermal wave propagation in a solid composite material are studied using the Laplace transform technique. Full-field rear-surface temperatures for single-layer, two-layer and three-layer solid composites are obtained directly from Laplace transform conversion tables, as opposed to the tedious inversion process of the integral transform method. This is achieved by first expressing the hyperbolic-transcendental equation in terms of negative exponentials of the square root of s/α and then expanding it as a series by the binomial theorem. Electrophoretic deposition (EPD) and dip coating were used to prepare three-layer solid composites consisting of ZnO/Cu/ZnO and starch/Al/starch, respectively. About 0.5 ml of deionized water enclosed within an air-tight aluminium container serves as the third three-layer sample (Al/water/Al). Thermal diffusivity experiments were carried out on all three samples. Using a scaled Levenberg-Marquardt algorithm, the approximate temperature curve for the three-layer solid composite is fitted to the corresponding experimental result. The agreement between the theoretical curve and the experimental data, as well as between the thermal diffusivity values obtained here for ZnO, aluminium and deionized water and similar values found in the literature, is very good.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhtiari, S.; Liao, S.; Elmer, T.
This paper analyzes heart rate (HR) information from physiological tracings collected with a remote millimeter wave (mmW) I-Q sensor for biometric monitoring applications. A parameter optimization method based on the nonlinear Levenberg-Marquardt algorithm is used. The mmW sensor works at 94 GHz and can detect the vital signs of a human subject from a few to tens of meters away. The reflected mmW signal is typically affected by respiration, body movement, background noise, and electronic system noise. Processing of the mmW radar signal is thus necessary to obtain the true HR. The down-converted received signal in this case consists of both the real part (I-branch) and the imaginary part (Q-branch), which can be considered as the cosine and sine of the received phase of the HR signal. Instead of fitting the converted phase angle signal, the method directly fits the real and imaginary parts of the HR signal, which circumvents the need for phase unwrapping. This is particularly useful when the SNR is low. Also, the method identifies both beat-to-beat HR and individual heartbeat magnitude, which is valuable for some medical diagnosis applications. The mean HR is compared to that obtained using the discrete Fourier transform.
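The direct I/Q fitting idea, which avoids phase unwrapping, can be sketched as follows. The signal model and all numerical values are assumptions: a single noisy tone stands in for the heartbeat component, and the I and Q residuals are stacked so Levenberg-Marquardt fits both branches jointly.

```python
import numpy as np
from scipy.optimize import least_squares

# Simulated I/Q branches of a heartbeat tone (assumed model: amplitude a,
# frequency f in Hz, phase phi). Values are illustrative, not sensor data.
fs, T = 100.0, 4.0                        # sample rate (Hz), duration (s)
t = np.arange(0, T, 1.0 / fs)
true_a, true_f, true_phi = 1.0, 1.2, 0.7  # 1.2 Hz corresponds to 72 bpm
rng = np.random.default_rng(2)
I = true_a*np.cos(2*np.pi*true_f*t + true_phi) + rng.normal(0, 0.3, t.size)
Q = true_a*np.sin(2*np.pi*true_f*t + true_phi) + rng.normal(0, 0.3, t.size)

def residuals(p):
    a, f, phi = p
    # Stack I and Q residuals: both branches are fit simultaneously, so the
    # phase never needs to be unwrapped from arctan(Q/I).
    return np.concatenate([a*np.cos(2*np.pi*f*t + phi) - I,
                           a*np.sin(2*np.pi*f*t + phi) - Q])

sol = least_squares(residuals, x0=[0.5, 1.15, 0.0], method='lm')
a_hat, f_hat, phi_hat = sol.x
hr_bpm = 60.0 * f_hat                     # convert Hz to beats per minute
```

Note the initial frequency guess must lie within roughly 1/T Hz of the true value; in practice a coarse DFT peak would seed the fit.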
Prediction of Greenhouse Gas (GHG) Fluxes from Coastal Salt Marshes using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2017-12-01
Coastal salt marshes are among the most productive ecosystems on Earth. Given the complex interactions between the ambient environment and ecosystem biological exchanges, it is difficult to predict salt marsh greenhouse gas (GHG) fluxes (CO2 and CH4) from their environmental drivers. In this study, we developed an artificial neural network (ANN) model to robustly predict salt marsh GHG fluxes using a limited number of input variables (photosynthetically active radiation, soil temperature and porewater salinity). The ANN parameterization involved an optimized 3-layer feed-forward network with the Levenberg-Marquardt training algorithm. Four tidal salt marshes of Waquoit Bay, MA, incorporating a gradient in land use, salinity and hydrology, were considered as the case study sites. The wetlands were dominated by native Spartina alterniflora, and characterized by high salinity and frequent flooding. The developed ANN model showed good performance (training R2 = 0.87 - 0.96; testing R2 = 0.84 - 0.88) in predicting the fluxes across the case study sites. The model can be used to estimate wetland GHG fluxes and potential carbon balance under different IPCC climate change and sea level rise scenarios. The model can also aid the development of GHG offset protocols by setting monitoring guidelines for the restoration of coastal salt marshes.
Sun, Yang; Shevell, Steven K
2008-01-01
The mother or daughter of a male with an X-chromosome-linked red/green color defect is an obligate carrier of the color deficient gene array. According to the Lyonization hypothesis, a female carrier's defective gene is expressed and thus carriers may have more than two types of pigments in the L/M photopigment range. An open question is how a carrier's third cone pigment in the L/M range affects the postreceptoral neural signals encoding color. Here, a model considered how the signal from the third pigment pools with signals from the normal's two pigments in the L/M range. Three alternative assumptions were considered for the signal from the third cone pigment: it pools with the signal from (1) L cones, (2) M cones, or (3) both types of cones. Spectral-sensitivity peak, optical density, and the relative number of each cone type were factors in the model. The model showed that differences in Rayleigh matches among carriers can be due to individual differences in the number of the third type of L/M cone, and the spectral sensitivity peak and optical density of the third L/M pigment; surprisingly, however, individual differences in the cone ratio of the other two cone types (one L and the other M) did not affect the match. The predicted matches were compared to Schmidt's (1934/1955) report of carriers' Rayleigh matches. For carriers of either protanomaly or deuteranomaly, these matches were not consistent with the signal from the third L/M pigment combining with only the signal from M cones. The matches could be accounted for by pooling the third-pigment's response with L-cone signals, either exclusively or randomly with M-cone responses as well.
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry uses deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also risk in providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST
1969-11-19
AS12-48-7034 (19 Nov. 1969) --- A close-up view of a portion of quadrant II of the descent stage of the Apollo 12 Lunar Module (LM), photographed during the Apollo 12 extravehicular activity (EVA). At lower left is the LM's Y footpad. The empty Radioisotope Thermoelectric Generator (RTG) fuel cask is at upper right. The fuel capsule has already been removed and placed in the RTG. The RTG furnishes power for the Apollo Lunar Surface Experiments Package (ALSEP) which the Apollo 12 astronauts deployed on the moon. The LM's descent engine skirt is in the center background. The rod-like object protruding out from under the footpad is a lunar surface sensing probe. Astronaut Richard F. Gordon Jr., command module pilot, remained with the Command and Service Modules (CSM) in lunar orbit while astronauts Charles Conrad Jr., commander; and Alan L. Bean, lunar module pilot, descended in the LM to explore the moon.
Recombination Narratives to Accompany "A-LM French One," First Edition.
ERIC Educational Resources Information Center
Coughlin, Dorothy
Supplementary recombination narratives intended for use with the 1961 edition of the text "A-LM French One" are designed to help students learn to manipulate basic textual materials. The sample narratives correlate with Units 4-14 of the text. The teacher is urged to make use of the overhead projector when using the narratives for the…
Lee, Mi-Hwa; Lee, Jiyeon; Nam, Young-Do; Lee, Jong Suk; Seo, Myung-Ji; Yi, Sung-Hun
2016-03-16
A wild-type microorganism exhibiting antimicrobial activities was isolated from the Korean traditional fermented soybean food Chungkookjang and identified as Bacillus sp. LM7. During its stationary growth phase, the microorganism secreted an antimicrobial substance, which we partially purified using a simple two-step procedure involving ammonium sulfate precipitation and heat treatment. The partially purified antimicrobial substance, Anti-LM7, was stable over a broad pH range (4.0-9.0) and at temperatures up to 80 °C for 30 min, was resistant to most proteolytic enzymes, and maintained its activity in 30% (v/v) organic solvents. Anti-LM7 inhibited the growth of a broad range of Gram-positive bacteria, including Bacillus cereus and Listeria monocytogenes, but it did not inhibit lactic acid bacteria such as Lactobacillus plantarum and Lactococcus lactis subsp. lactis. Moreover, unlike commercially available nisin and polymyxin B, Anti-LM7 inhibited certain fungal strains. Lastly, liquid chromatography-mass spectrometry analysis of Anti-LM7 revealed that it contained eight lipopeptides belonging to two families: four bacillomycin D and four surfactin analogs. These Bacillus sp. LM7-produced heterogeneous lipopeptides, exhibiting extremely high stability and a broad antimicrobial spectrum, are likely to be closely related to the antimicrobial activity of Chungkookjang, and their identification presents an opportunity for application of the peptides in the environmental bioremediation, pharmaceutical, cosmetic, and food industries. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Escher, William J. D.; Roddy, Jordan E.; Hyde, Eric H.
2000-01-01
The Supercharged Ejector Ramjet (SERJ) engine developments of the 1960s, as pursued by The Marquardt Corporation and its associated industry team members, are described. In just three years, engineering work on this combined-cycle powerplant type evolved, from its initial NASA-sponsored reusable space transportation system study status, into a U.S. Air Force/Navy-supported exploratory development program as a candidate Mach 4.5 high-performance military aircraft engine. Bridging a productive transition from the spaceflight to the aviation arena, this case history supports the expectation that fully-integrated airbreathing/rocket propulsion systems hold high promise toward meeting the demanding propulsion requirements of tomorrow's aircraft-like Spaceliner class transportation systems. Lessons to be learned from this "SERJ Story" are offered for consideration by today's advanced space transportation and combined-cycle propulsion researchers and forward-planning communities.
ERIC Educational Resources Information Center
Findlay, David W.
1999-01-01
Offers instructors a presentation of the IS (investment saving)-LM (liquidity preference-money supply) model, suggesting that a number of benefits emerge if the instructor focuses on what determines the size of both the horizontal and vertical distances between the IS curves and between the LM curves. (CMK)
NASA Astrophysics Data System (ADS)
Kohnizio Mahli, Maximus; Jamian, Saifulnizan; Ismail, Al Emran; Nor, Nik Hisyamudin Muhd; Nor, Mohd Khir Mohd; Azhar Kamarudin, Kamarul
2017-10-01
An Al LM6 hollow cylinder is fabricated using horizontal centrifugal casting, which produces a very fine grain on the outer surface of the structure. In this study, the effect of motor speed and pouring temperature on the microstructure of the Al LM6 hollow cylinder is determined. The motor speeds used during casting are 1300 rpm, 1500 rpm and 1700 rpm, and the pouring temperatures are 690°C, 710°C and 725°C. The Al LM6 hollow cylinder is produced by pouring the molten Al LM6 into a cylindrical casting mold, which is connected to a shaft and rotated by the motor until the casting solidifies. The cross-section is then observed using OM and SEM/EDS. From the microstructure observation, the Si particles are more concentrated, and larger in size, at the inner parts. The results show that the casting fabricated at the highest motor speed (1700 rpm) has the most Si particles at the inner part compared with castings produced at the other motor speeds.
LM1-64: a Newly Reported Lmc-Pn with WR Nucleus
NASA Astrophysics Data System (ADS)
Pena, M.; Olguin, L.; Ruiz, M. T.; Torres-Peimbert, S.
1993-05-01
The object LM1-64 was reported by Lindsay & Mullan (1963, Irish Astron. J., 5, 51) as a probable PN in the LMC. Optical and UV spectra taken by us confirm that suggestion. LM1-64 is a high-excitation planetary nebula which shows evidence of having a WC central star. Broad stellar emission at lambda 4650 is detected in the optical spectrum obtained with the CTIO 4 m telescope in 1989. A UV spectrum in the range from 1200 Angstroms to 2000 Angstroms was obtained with IUE in 1990. We have measured all the available emission line fluxes and determined values for the physical conditions and chemical abundances of the nebular ionized gas. The derived values are T(OIII) = 14000 K, log He/H = 11.05, log C/H = 9.48, log O/H = 8.55 and log Ne/H = 7.94. LM1-64 shows a large C enhancement in the envelope as a result of the central star activity, while He, O and Ne are comparable to the average values reported for the LMC-PNe (Monk, Barlow & Clegg, 1988, MNRAS, 234, 583). We have estimated the He II Zanstra temperature of the central star to be ~80,000 K. This temperature is much higher than the values reported for the known LMC-PNe with WR nuclei that Monk et al. have classified as W4 to W8. The only other high-temperature WR nucleus in an LMC-PN is N66, which recently showed evidence of undergoing a WR episode (Torres-Peimbert, Ruiz, Peimbert & Peña, 1993, IAU Symp. 155, eds. A. Acker & R. Weinberger, in press).
Monthly evaporation forecasting using artificial neural networks and support vector machines
NASA Astrophysics Data System (ADS)
Tezel, Gulay; Buyukyildiz, Meral
2016-04-01
Evaporation is one of the most important components of the hydrological cycle, but it is relatively difficult to estimate due to its complexity, as it can be influenced by numerous factors. Estimation of evaporation is important for the design of reservoirs, especially in arid and semi-arid areas. Artificial neural network methods and support vector machines (SVM) are frequently utilized to estimate evaporation and other hydrological variables. In this study, the usability of artificial neural networks (ANNs) (multilayer perceptron (MLP) and radial basis function network (RBFN)) and ɛ-support vector regression (ɛ-SVR) artificial intelligence methods was investigated to estimate monthly pan evaporation. To this aim, temperature, relative humidity, wind speed, and precipitation data for the period 1972 to 2005 from the Beysehir meteorology station were used as input variables, while pan evaporation values were used as output. The Romanenko and Meyer methods were also considered for comparison. The results were compared with observed class A pan evaporation data. In the MLP method, four different training algorithms were used: gradient descent with momentum and adaptive learning rule backpropagation (GDX), Levenberg-Marquardt (LVM), scaled conjugate gradient (SCG), and resilient backpropagation (RBP). The models were designed via 10-fold cross-validation (CV); algorithm performance was assessed via mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²). According to the performance criteria, the ANN algorithms and ɛ-SVR had similar results. The ANN and ɛ-SVR methods were found to perform better than the Romanenko and Meyer methods. Consequently, the best performance on the test data was obtained using SCG(4,2,2,1) with R² = 0.905.
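The three performance criteria named above (MAE, RMSE, R²) are simple to compute; a minimal sketch follows, using invented observed/predicted evaporation values rather than the Beysehir data.

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r2(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs  = [3.1, 4.0, 5.2, 6.1, 7.3]   # hypothetical pan evaporation values
pred = [3.0, 4.2, 5.0, 6.4, 7.1]   # hypothetical model outputs
print(round(mae(obs, pred), 3),
      round(rmse(obs, pred), 3),
      round(r2(obs, pred), 3))     # → 0.2 0.21 0.98
```

Because RMSE squares the errors, it penalizes occasional large misses more heavily than MAE, which is why studies like this one usually report both.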
NASA Astrophysics Data System (ADS)
Szadkowski, Zbigniew; Głas, Dariusz; Pytel, Krzysztof; Wiedeński, Michał
2017-06-01
Neutrinos play a fundamental role in the understanding of the origin of ultrahigh-energy cosmic rays. They interact through charged and neutral currents in the atmosphere, generating extensive air showers. However, the very low rate of events potentially generated by neutrinos is a significant challenge for detection techniques and requires both sophisticated algorithms and high-resolution hardware. Air showers initiated by protons and muon neutrinos at various altitudes, angles, and energies were simulated in CORSIKA and the Auger OffLine event reconstruction platforms, giving analog-to-digital converter (ADC) patterns in the Auger water Cherenkov detectors on the ground. The proton interaction cross section is high, so proton “old” showers start their development early in the atmosphere. In contrast, neutrinos can generate “young” showers deep in the atmosphere, relatively close to the detectors. Differences between “old” proton and “young” neutrino showers are visible in the attenuation factors of the ADC waveforms. For the separation of “old” proton and “young” neutrino ADC traces, many three-layer artificial neural networks (ANNs) were tested. They were trained in MATLAB (in a dedicated way: only “old” proton and “young” neutrino showers were used as patterns) on simulated ADC traces according to the Levenberg-Marquardt algorithm. Unexpectedly, the recognition efficiency is found to be almost independent of the size of the networks. The ANN trigger based on a selected 8-6-1 network was tested in the Cyclone V E FPGA 5CEFA9F31I7, the heart of prototype front-end boards developed for testing new algorithms in the Pierre Auger surface detectors.
Dziewit, Lukasz; Oscik, Karolina; Bartosik, Dariusz
2014-01-01
ABSTRACT ΦLM21 is a temperate phage isolated from Sinorhizobium sp. strain LM21 (Alphaproteobacteria). Genomic analysis and electron microscopy suggested that ΦLM21 is a member of the family Siphoviridae. The phage has an isometric head and a long noncontractile tail. The genome of ΦLM21 has 50,827 bp of linear double-stranded DNA encoding 72 putative proteins, including proteins responsible for the assembly of the phage particles, DNA packaging, transcription, replication, and lysis. Virion proteins were characterized using mass spectrometry, leading to the identification of the major capsid and tail components, tape measure, and a putative portal protein. We have confirmed the activity of two gene products, a lytic enzyme (a putative chitinase) and a DNA methyltransferase, sharing sequence specificity with the cell cycle-regulating methyltransferase (CcrM) of the bacterial host. Interestingly, the genome of Sinorhizobium phage ΦLM21 shows very limited similarity to other known phage genome sequences and is thus considered unique. IMPORTANCE Prophages are known to play an important role in the genomic diversification of bacteria via horizontal gene transfer. The influence of prophages on pathogenic bacteria is very well documented. However, our knowledge of the overall impact of prophages on the survival of their lysogenic, nonpathogenic bacterial hosts is still limited. In particular, information on prophages of the agronomically important Sinorhizobium species is scarce. In this study, we describe the isolation and molecular characterization of a novel temperate bacteriophage, ΦLM21, of Sinorhizobium sp. LM21. Since we have not found any similar sequences, we propose that this bacteriophage is a novel species. We conducted a functional analysis of selected proteins. We have demonstrated that the phage DNA methyltransferase has the same sequence specificity as the cell cycle-regulating methyltransferase CcrM of its host. We point out that this phenomenon of
NASA Astrophysics Data System (ADS)
Shahri, Abbas; Mousavinaseri, Mahsasadat; Naderi, Shima; Espersson, Maria
2015-04-01
Application of Artificial Neural Networks (ANNs) in many areas of engineering, in particular to geotechnical engineering problems such as site characterization, has demonstrated some degree of success. The present paper aims to evaluate the feasibility of several types of ANN models to predict the clay sensitivity of soft clays from piezocone penetration test (CPTu) data. To this aim, a research database of CPTu data from 70 test points around the Göta River near Lilla Edet in southwest Sweden, a highly landslide-prone area, was collected and used as input for the ANNs. The quick propagation, conjugate gradient descent, quasi-Newton, limited-memory quasi-Newton and Levenberg-Marquardt training algorithms were developed, tested and trained on the CPTu data to provide a comparison between the results of the field investigation and the ANN models in estimating clay sensitivity. The clay sensitivity parameter is used in this study because of its relation to landslides in Sweden: a special highly sensitive clay, namely quick clay, is considered the main cause of the landslides experienced in Sweden, as it has high sensitivity and is prone to sliding. The training and testing program was started with a 3-2-1 ANN architecture. By testing several architectures and varying the hidden layers in order to obtain a higher output resolution, the 3-4-4-3-1 architecture was confirmed for the ANNs in this study. The tested algorithms showed that increasing the number of hidden layers up to 4 can improve the results, and the 3-4-4-3-1 ANNs give a reliable and reasonable response for the prediction of clay sensitivity. The obtained results showed that the conjugate gradient descent algorithm, with R² = 0.897, has the best performance among the tested algorithms. Keywords: clay sensitivity, landslide, Artificial Neural Network
Passive load follow analysis of the STAR-LM and STAR-H2 systems
NASA Astrophysics Data System (ADS)
Moisseytsev, Anton
A steady-state model for the calculation of temperature and pressure distributions, and heat and work balance for the STAR-LM and the STAR-H2 systems was developed. The STAR-LM system is designed for electricity production and consists of the lead cooled reactor on natural circulation and the supercritical carbon dioxide Brayton cycle. The STAR-H2 system uses the same reactor which is coupled to the hydrogen production plant, the Brayton cycle, and the water desalination plant. The Brayton cycle produces electricity for the on-site needs. Realistic modules for each system component were developed. The model also performs design calculations for the turbine and compressors for the CO2 Brayton cycle. The model was used to optimize the performance of the entire system as well as every system component. The size of each component was calculated. For the 400 MWt reactor power the STAR-LM produces 174.4 MWe (44% efficiency) and the STAR-H2 system produces 7450 kg H2/hr. The steady state model was used to conduct quasi-static passive load follow analysis. The control strategy was developed for each system; no control action on the reactor is required. As a main safety criterion, the peak cladding temperature is used. It was demonstrated that this temperature remains below the safety limit during both normal operation and load follow.
Conformable derivative approach to anomalous diffusion
NASA Astrophysics Data System (ADS)
Zhou, H. W.; Yang, S.; Zhang, S. Q.
2018-02-01
By using a new derivative of fractional order, referred to as the conformable derivative, an alternative representation of the diffusion equation is proposed to improve the modeling of anomalous diffusion. The analytical solutions of the conformable derivative model in terms of the Gauss kernel and the error function are presented. The power law of the mean square displacement for the conformable diffusion model is studied by invoking the time-dependent Gauss kernel. The parameters of the conformable derivative model are determined by the Levenberg-Marquardt method on the basis of experimental data on chloride ion transport in reinforced concrete. The data-fitting results showed that the conformable derivative model agrees better with the experimental data than the normal diffusion equation. Furthermore, the potential application of the proposed conformable derivative model to water flow in low-permeability media is discussed.
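As a brief sketch of why such a model yields a power-law mean square displacement: assuming the commonly used definition of the conformable derivative (the paper's exact formulation may differ), the conformable diffusion equation reduces to the classical one under a change of time variable.

```latex
% Conformable derivative of order \alpha \in (0,1] (for differentiable f):
T_\alpha f(t) = \lim_{\varepsilon \to 0}
  \frac{f(t + \varepsilon\, t^{1-\alpha}) - f(t)}{\varepsilon}
  = t^{1-\alpha}\,\frac{df}{dt}.

% Conformable diffusion equation and its reduction via \tau = t^\alpha/\alpha:
T_\alpha u(x,t) = D\,\frac{\partial^2 u}{\partial x^2}
\quad\Longrightarrow\quad
\frac{\partial u}{\partial \tau} = D\,\frac{\partial^2 u}{\partial x^2},
\qquad \tau = \frac{t^\alpha}{\alpha}.

% The kernel is therefore Gaussian in \tau, and the mean square displacement is
\langle x^2(t) \rangle = 2 D \tau = \frac{2 D\, t^\alpha}{\alpha},

% a power law with exponent \alpha (subdiffusive for \alpha < 1).
```

Fitting $D$ and $\alpha$ to measured concentration profiles, as the authors do with the Levenberg-Marquardt method, then amounts to nonlinear least squares on this time-dependent Gauss kernel.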
LM193 Dual Differential Comparator Total Ionizing Dose Test Report
NASA Technical Reports Server (NTRS)
Topper, Alyson; Forney, James; Campola, Michael
2017-01-01
The purpose of this test was to characterize the flight lot of Texas Instruments' LM193 (flight part number is 5962-9452601Q2A) for total dose response. This test served as the radiation lot acceptance test (RLAT) for the lot date code (LDC) tested. Low dose rate (LDR) irradiations were performed in this test so that the device susceptibility to enhanced low dose rate sensitivity (ELDRS) was determined.
Evaluation of Teaching the IS-LM Model through a Simulation Program
ERIC Educational Resources Information Center
Pablo-Romero, Maria del Populo; Pozo-Barajas, Rafael; Gomez-Calero, Maria de la Palma
2012-01-01
The IS-LM model is a basic tool used in the teaching of short-term macroeconomics. Teaching is essentially done through the use of graphs. However, the way these graphs are traditionally taught does not allow the learner to easily visualise changes in the curves. The IS-LM simulation program overcomes difficulties encountered in understanding the…
Ecological and ecosystem-level impacts of aquatic invasive species in Lake Michigan were examined using the Lake Michigan Ecosystem Model (LM-Eco). The LM-Eco model includes a detailed description of trophic levels and their interactions within the lower food web of Lake Michiga...
French I Supplementary Reader (For A-LM One, 1961, Units 9-14).
ERIC Educational Resources Information Center
Scott, Linda; Booth, Alice
Supplementary readings intended for use with the 1961 edition of the "A-LM" French 1 course are compiled in this text. They are specifically designed to accompany Units 9-14. It is suggested that the recombination narratives enable students to become more capable of independent reading. (RL)
Yu, Rongrong; Liu, Weimin; Li, Daqi; Zhao, Xiaoming; Ding, Guowei; Zhang, Min; Ma, Enbo; Zhu, KunYan; Li, Sheng; Moussian, Bernard; Zhang, Jianzhen
2016-01-01
In the three-dimensional extracellular matrix of the insect cuticle, horizontally aligned microfibrils composed of the polysaccharide chitin and associated proteins are stacked either parallel to each other or helicoidally. The underlying molecular mechanisms that implement differential chitin organization are largely unknown. To learn more about cuticle organization, we sought to study the role of chitin deacetylases (CDA) in this process. In the body cuticle of nymphs of the migratory locust Locusta migratoria, helicoidal chitin organization is changed to an organization with unidirectional microfibril orientation when LmCDA2 expression is knocked down by RNA interference. In addition, the LmCDA2-deficient cuticle is less compact suggesting that LmCDA2 is needed for chitin packaging. Animals with reduced LmCDA2 activity die at molting, underlining that correct chitin organization is essential for survival. Interestingly, we find that LmCDA2 localizes only to the initially produced chitin microfibrils that constitute the apical site of the chitin stack. Based on our data, we hypothesize that LmCDA2-mediated chitin deacetylation at the beginning of chitin production is a decisive reaction that triggers helicoidal arrangement of subsequently assembled chitin-protein microfibrils. PMID:27637332
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubohara, Yuzuru, E-mail: ykuboha@juntendo.ac.jp; Department of Health Science, Juntendo University Graduate School of Health and Sports Science, Inzai 270-1695; Komachi, Mayumi
Osteosarcoma is a common metastatic bone cancer that predominantly develops in children and adolescents. Metastatic osteosarcoma remains associated with a poor prognosis; therefore, more effective anti-metastatic drugs are needed. Differentiation-inducing factor-1 (DIF-1), -2, and -3 are novel lead anti-tumor agents that were originally isolated from the cellular slime mold Dictyostelium discoideum. Here we investigated the effects of a panel of DIF derivatives on lysophosphatidic acid (LPA)-induced migration of mouse osteosarcoma LM8 cells by using a Boyden chamber assay. Some DIF derivatives such as Br-DIF-1, DIF-3(+2), and Bu-DIF-3 (5-20 μM) dose-dependently suppressed LPA-induced cell migration with associated IC50 values of 5.5, 4.6, and 4.2 μM, respectively. On the other hand, the IC50 values of Br-DIF-1, DIF-3(+2), and Bu-DIF-3 versus cell proliferation were 18.5, 7.2, and 2.0 μM, respectively, in LM8 cells, and >20, 14.8, and 4.3 μM, respectively, in mouse 3T3-L1 fibroblasts (non-transformed). Together, our results demonstrate that Br-DIF-1 in particular may be a valuable tool for the analysis of cancer cell migration, and that DIF derivatives such as DIF-3(+2) and Bu-DIF-3 are promising lead anti-tumor agents for the development of therapies that suppress osteosarcoma cell proliferation, migration, and metastasis. - Highlights: • LPA induces cell migration (invasion) in murine osteosarcoma LM8 cells. • DIFs are novel lead anti-tumor agents found in Dictyostelium discoideum. • We examined the effects of DIF derivatives on LPA-induced LM8 cell migration in vitro. • Some of the DIF derivatives inhibited LPA-induced LM8 cell migration.
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.
2015-06-01
SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code-architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature to the methodology. In this process, the one-dimensional linear equivalent analysis produces acceleration response spectra of shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of the stratigraphic seismic response at different periods, grid-solving the calibrated Spectra model. In addition, the topographic amplification of the spectra is also computed by means of a numerical prediction model. The latter is built to match the results of the numerical simulations related to isolated reliefs, using GIS topographic attributes. In this way, different sets of seismic response maps are developed, from which maps of seismic design response spectra are also defined by means of an enveloping technique.
Typing SNP based on the near-infrared spectroscopy and artificial neural network
NASA Astrophysics Data System (ADS)
Ren, Li; Wang, Wei-Peng; Gao, Yu-Zhen; Yu, Xiao-Wei; Xie, Hong-Ping
2009-07-01
Based on the near-infrared spectra (NIRS) of the measured samples as the discriminant variables of their genotypes, a genotype discriminant model for SNPs has been established by using a back-propagation artificial neural network (BP-ANN). Taking a SNP (857G > A) of N-acetyltransferase 2 (NAT2) as an example, DNA fragments containing the SNP site were amplified by PCR with a pair of primers to obtain modeling samples of the three genotypes (GG, AA, and GA). The NIR spectra of the amplified samples were measured directly in transmission using a quartz cell. Based on the measured sample spectra, two BP-ANNs were combined to obtain a stronger three-genotype classification ability. One of them was established to compress the measured NIRS variables by using the resilient back-propagation algorithm, and the other network, established by the Levenberg-Marquardt algorithm on the compressed spectra, was used as the discriminant model for the three-genotype classification. For the established model, the root mean square errors for the training and prediction sample sets were 0.0135 and 0.0132, respectively. This model correctly predicted the three genotypes (i.e. the accuracy on the prediction samples was up to 100%) and was robust in the prediction of unknown samples. Since the three genotypes of a SNP can be determined directly from the NIR spectra without any preprocessing of the analyzed samples after PCR, this method is simple, rapid and low-cost.
Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum-lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. The 'pressure difference rule,' which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.
NASA Astrophysics Data System (ADS)
M K, Harsha Kumar; P S, Vishweshwara; N, Gnanasekaran; C, Balaji
2018-05-01
The major objectives in the design of thermal systems are obtaining information about thermophysical, transport and boundary properties. The main purpose of this paper is to estimate the unknown heat flux at the surface of a solid body. A constant-area mild steel fin is considered and its base is subjected to a constant heat flux. During heating, natural convection heat transfer occurs from the fin to the ambient. The direct solution, which is the forward problem, is developed as a conjugate heat transfer problem from the fin, and the steady-state temperature distribution is recorded for any assumed heat flux. In order to model the natural convection heat transfer from the fin, an extended domain is created near the fin geometry, air is specified as the fluid medium, and the Navier-Stokes equations are solved by incorporating the Boussinesq approximation. The computational time involved in executing the forward model is then reduced by developing a neural network (NN) between heat flux values and temperatures based on the back-propagation algorithm. The conjugate heat transfer NN model is then coupled with a genetic algorithm (GA) for the solution of the inverse problem. Initially, the GA is applied to the pure surrogate data; the results are then used as input to the Levenberg-Marquardt method, and such hybridization is proven to result in accurate estimation of the unknown heat flux. The hybrid method is then applied to the experimental temperatures to estimate the unknown heat flux. A satisfactory agreement between the estimated and actual heat flux is achieved by incorporating the hybrid method.
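The paper's pipeline couples a CFD forward model, an NN surrogate, a GA, and LM. As a much-reduced illustration of the same inverse idea, the sketch below replaces all of that with a textbook analytic fin model (insulated tip, constant properties) and recovers an assumed base heat flux from synthetic steady-state temperatures; since this simplified model is linear in the flux, a single Gauss-Newton step (the degenerate LM case) suffices. Every property value is invented for illustration.

```python
import math

# Hypothetical fin properties (illustrative assumptions, not from the paper)
k, L = 45.0, 0.1            # conductivity W/(m K), fin length m
h, P, A = 15.0, 0.04, 1e-4  # convection coeff, perimeter, cross-section area
T_inf = 300.0               # ambient temperature, K
m = math.sqrt(h * P / (k * A))

xs = [0.02 * i for i in range(6)]  # thermocouple positions along the fin

def forward(q):
    """Steady 1-D fin with insulated tip: temperatures for base heat flux q.

    q'' = k * theta_b * m * tanh(mL)  =>  theta_b = q / (k m tanh(mL)),
    T(x) = T_inf + theta_b * cosh(m(L - x)) / cosh(mL).
    """
    theta_b = q / (k * m * math.tanh(m * L))
    return [T_inf + theta_b * math.cosh(m * (L - x)) / math.cosh(m * L)
            for x in xs]

# "Measured" temperatures generated with an assumed true flux of 5000 W/m^2
measured = forward(5000.0)

# The model is linear in q, so one least-squares (Gauss-Newton) step recovers it:
s = [t1 - t0 for t0, t1 in zip(forward(0.0), forward(1.0))]  # sensitivity dT/dq
y = [tm - t0 for tm, t0 in zip(measured, forward(0.0))]
q_est = sum(si * yi for si, yi in zip(s, y)) / sum(si * si for si in s)
print(q_est)  # ≈ 5000.0
```

In the paper's setting the forward model is nonlinear and expensive, which is why the GA supplies a coarse estimate before LM iterates on the NN surrogate; the structure of the estimation step, however, is the same residual minimization shown here.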
A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature
NASA Astrophysics Data System (ADS)
Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min
2017-05-01
This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The characteristics of matching main lens and microlens f-numbers are used as an additional constraint for the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all the points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrated that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
Ghaedi, M; Zeinali, N; Ghaedi, A M; Teimuori, M; Tashkhourian, J
2014-05-05
In this study, graphite oxide (GO) was synthesized according to the Hummers method and subsequently used for the removal of methylene blue (MB) and brilliant green (BG). Detailed information about the structure and physicochemical properties of GO was obtained by different techniques such as XRD and FTIR analysis. The influence of solution pH, initial dye concentration, contact time and adsorbent dosage was examined in batch mode, and the optimum conditions were set as pH = 7.0, 2 mg of GO and 10 min contact time. Employment of equilibrium isotherm models for the description of the adsorption capacities of GO shows the good efficiency of the Langmuir model for the best representation of the experimental data, with maximum adsorption capacities of 476.19 and 416.67 for the MB and BG dyes in single solution. The analysis of the adsorption rate at various stirring times shows that the adsorption of both dyes followed a pseudo-second-order kinetic model in combination with the interparticle diffusion model. Subsequently, the adsorption data were modeled with an artificial neural network to evaluate and obtain the real conditions for fast and efficient removal of the dyes. A three-layer artificial neural network (ANN) model is applicable for accurate prediction of the dye removal percentage from aqueous solution by GO, based on 336 experimental data points. The network was trained using the experimental data obtained at optimum pH with different GO amounts (0.002-0.008 g) and 5-40 mg/L of both dyes over contact times of 0.5-30 min. The ANN model was able to predict the removal efficiency with the Levenberg-Marquardt algorithm (LMA), a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) at the hidden layer, with 10 and 11 neurons for the MB and BG dyes, respectively. A minimum mean squared error (MSE) of 0.0012 and a coefficient of determination (R²) of 0.982 were found for the prediction and modeling of MB removal, while the respective value for BG was the
1987-06-01
WESTERN STUDIES, AUGUSTANA COLLEGE, SIOUX FALLS, SOUTH DAKOTA 57105. ARCHEOLOGICAL CONTRACT SERIES NUMBER 29, VOLUME 2, APPENDICES. 1. Location of Sites... Specialist Report - Report on the Soils at Sites 39GR53 and 39LM33, by Dr. Frederick Westin. APPENDIX I: Location of Sites on USGS 7.5' Quadrangle Maps, including site 39LM39.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisacchi, Davide; Zhou, Yao; Rosen, Barry P.
2006-10-01
LmACR2 from L. major is the first rhodanese-like enzyme directly involved in the reduction of arsenate and antimonate to be crystallized. Diffraction data have been collected to 1.99 Å resolution using synchrotron X-rays. Arsenic is present in the biosphere owing either to the presence of pesticides and herbicides used in agricultural and industrial activities or to leaching from geological formations. The health effects of prolonged exposure to arsenic can be devastating and may lead to various forms of cancer. Antimony(V), which is chemically very similar to arsenic, is used instead in the treatment of leishmaniasis, an infection caused by the protozoan parasite Leishmania sp.; the reduction of pentavalent antimony contained in the drug Pentostam to the active trivalent form arises from the presence in the Leishmania genome of a gene, LmACR2, coding for the protein LmACR2 (14.5 kDa, 127 amino acids) that displays weak but significant sequence similarity to the catalytic domain of Cdc25 phosphatase and to rhodanese enzymes. For structural characterization, LmACR2 was overexpressed, purified to homogeneity and crystallized in a trigonal space group (P321 or P3₁21/P3₂21). The protein crystallized in two distinct trigonal crystal forms, with unit-cell parameters a = b = 111.0, c = 86.1 Å and a = b = 111.0, c = 175.6 Å, respectively. At a synchrotron beamline, the diffraction pattern extended to a resolution limit of 1.99 Å.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szadkowski, Zbigniew; Glas, Dariusz; Pytel, Krzysztof
Observations of ultra-high energy neutrinos have become a priority in experimental astro-particle physics. Up to now, the Pierre Auger Observatory has not found any candidate neutrino event. This imposes competitive limits on the diffuse flux of ultra-high energy neutrinos in the EeV range and above. The very low rate of events potentially generated by neutrinos is a significant challenge for the detection technique and requires both sophisticated algorithms and high-resolution hardware. A trigger based on an artificial neural network was implemented in the Cyclone® V E FPGA 5CEFA9F31I7. The prototype Front-End boards for Auger-Beyond-2015 with Cyclone® V E can test the neural network algorithm in real pampas conditions in 2015. Showers for muon and tau neutrino initiating particles at various altitudes, angles and energies were simulated on the CORSIKA and Offline platforms, giving patterns of ADC traces in the Auger water Cherenkov detectors. The 3-layer 12-10-1 neural network was trained in MATLAB on simulated ADC traces according to the Levenberg-Marquardt algorithm. Results show that the probability of generating such ADC traces is very low due to the small neutrino cross-section. Nevertheless, ADC traces for 1-10 EeV showers, if they occur, are relatively short and can be analyzed by a 16-point input algorithm. For the 100 EeV range, traces are much longer, but with significantly higher amplitudes, which can be detected by standard threshold algorithms. We optimized the coefficients from MATLAB to obtain a maximal range of potentially registered events and for fixed-point FPGA processing to minimize calculation errors. The currently used Front-End boards, based on no-longer-produced ACEX® PLDs and obsolete Cyclone® FPGAs, allow an implementation of only relatively simple threshold trigger algorithms. A new sophisticated trigger implemented in Cyclone® V E FPGAs with a large number of DSP blocks and embedded memory, running at 120-160 MHz sampling, may help to discover neutrino
Yu, Rongrong; Liu, Weimin; Li, Daqi; Zhao, Xiaoming; Ding, Guowei; Zhang, Min; Ma, Enbo; Zhu, KunYan; Li, Sheng; Moussian, Bernard; Zhang, Jianzhen
2016-11-18
In the three-dimensional extracellular matrix of the insect cuticle, horizontally aligned microfibrils composed of the polysaccharide chitin and associated proteins are stacked either parallel to each other or helicoidally. The underlying molecular mechanisms that implement differential chitin organization are largely unknown. To learn more about cuticle organization, we sought to study the role of chitin deacetylases (CDA) in this process. In the body cuticle of nymphs of the migratory locust Locusta migratoria, helicoidal chitin organization is changed to an organization with unidirectional microfibril orientation when LmCDA2 expression is knocked down by RNA interference. In addition, the LmCDA2-deficient cuticle is less compact suggesting that LmCDA2 is needed for chitin packaging. Animals with reduced LmCDA2 activity die at molting, underlining that correct chitin organization is essential for survival. Interestingly, we find that LmCDA2 localizes only to the initially produced chitin microfibrils that constitute the apical site of the chitin stack. Based on our data, we hypothesize that LmCDA2-mediated chitin deacetylation at the beginning of chitin production is a decisive reaction that triggers helicoidal arrangement of subsequently assembled chitin-protein microfibrils. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Visco-elastic controlled-source full waveform inversion without surface waves
NASA Astrophysics Data System (ADS)
Paschke, Marco; Krause, Martin; Bleibinhaus, Florian
2016-04-01
We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve-fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve results comparable to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally, we use Gaussian modeling to fit CRISM spectra of pyroxene- and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
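The Gaussian band-decomposition approach described above can be sketched with SciPy's curve_fit, which defaults to the Levenberg-Marquardt algorithm when no bounds are given. The two-band model, wavelengths, band centers and widths below are synthetic illustrations, not CRISM or meteorite values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

def two_band_model(x, a1, c1, w1, a2, c2, w2):
    # Continuum-removed reflectance modeled as unity minus two absorption bands
    return 1.0 - gaussian(x, a1, c1, w1) - gaussian(x, a2, c2, w2)

# Synthetic "spectrum" with two absorption bands (illustrative wavelengths, microns)
wl = np.linspace(0.8, 2.6, 300)
true = two_band_model(wl, 0.3, 1.0, 0.1, 0.2, 2.0, 0.15)
rng = np.random.default_rng(0)
obs = true + rng.normal(0.0, 0.005, wl.size)

# curve_fit with no bounds uses Levenberg-Marquardt; p0 plays the role of the
# parameter initialization step discussed in the abstract
p0 = [0.2, 1.1, 0.12, 0.15, 1.9, 0.12]
popt, _ = curve_fit(two_band_model, wl, obs, p0=p0)
print(np.round(popt, 2))  # recovered band parameters, close to the true values
```

As the abstract notes, the fit quality is sensitive to the initial guesses p0 and to how the continuum (here simply the constant 1.0) is removed.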
Mousavi, Seyed Mahdi; Niaei, Aligholi; Salari, Dariush; Panahi, Parvaneh Nakhostin; Samandari, Masoud
2013-01-01
A response surface methodology (RSM) involving a central composite design was applied to the modelling and optimization of the preparation of Mn/active carbon nanocatalysts in NH3-SCR of NO at 250 degrees C, and the results were compared with the artificial neural network (ANN) predicted values. The catalyst preparation parameters, including metal loading (wt%), calcination temperature and pre-oxidization degree (v/v% HNO3), were selected as influence factors on catalyst efficiency. In the RSM model, the predicted values of NO conversion were found to be in good agreement with the experimental values. Pareto graphic analysis showed that all the chosen parameters and some of the interactions had significant effects on the response. The optimization results showed that maximum NO conversion was achieved at the optimum conditions: 10.2 v/v% HNO3, 6.1 wt% Mn loading and calcination at 480 degrees C. The ANN model was developed as a feed-forward back-propagation network with a 3-8-1 topology and a Levenberg-Marquardt training algorithm. The mean square errors for the ANN and RSM models were 0.339 and 1.176, respectively, and the R2 values were 0.991 and 0.972, respectively, indicating the superiority of the ANN in capturing the nonlinear behaviour of the system and in accurately estimating the values of the NO conversion.
Oxidative desulfurization: kinetic modelling.
Dhir, S; Uppaluri, R; Purkait, M K
2009-01-30
Increasingly strict environmental legislation, coupled with the growing production of petroleum products, demands the deployment of novel technologies to remove organic sulfur efficiently. This work presents the kinetic modeling of oxidative desulfurization (ODS) using H(2)O(2) over tungsten-containing layered double hydroxide (LDH), using the experimental data provided by Hulea et al. [V. Hulea, A.L. Maciuca, F. Fajula, E. Dumitriu, Catalytic oxidation of thiophenes and thioethers with hydrogen peroxide in the presence of W-containing layered double hydroxides, Appl. Catal. A: Gen. 313 (2) (2006) 200-207]. The kinetic modeling approach first generates a superstructure of micro-kinetic reaction schemes and models assuming Langmuir-Hinshelwood (LH) and Eley-Rideal (ER) mechanisms. Screening and selection of these models is based on profile-based elimination of inadequate schemes, followed by a non-linear regression search performed with the Levenberg-Marquardt algorithm (LMA) on the remaining models. This analysis inferred that the Eley-Rideal mechanism describes the kinetic behavior of the ODS process using tungsten-containing LDH, with adsorption of the reactant and intermediate product only taking place on the catalyst surface. Finally, an economic index is presented that scopes the economic aspects of the novel catalytic technology with the parameters obtained during regression analysis, concluding that the cost factor for the catalyst is 0.0062-0.04759 US$ per barrel.
Akdenur, B; Okkesum, S; Kara, S; Günes, S
2009-11-01
In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. The performance of the proposed method was assessed with prevalent performance measures: the mean square error and mean absolute error, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the normalization methods for estimating the effect of the trainer.
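Two of the baseline normalization methods the CCSNM is compared against, minimum-maximum normalization and the z score, can be sketched as follows; the CCSNM itself is not specified in the abstract and is not reproduced here, and the feature matrix is an arbitrary illustration.

```python
import numpy as np

def min_max_normalize(x):
    # Rescale each feature (column) to the range [0, 1]
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def z_score_normalize(x):
    # Shift and scale each feature to zero mean and unit variance
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Illustrative feature matrix: 3 samples, 2 features on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])
print(min_max_normalize(X))              # each column now spans [0, 1]
print(z_score_normalize(X).mean(axis=0)) # per-feature means are ~0
```

Normalization of this kind matters for Levenberg-Marquardt-trained networks because features on very different scales produce poorly conditioned Jacobians and slow or unstable training.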
Huang, Shang-Ming; Li, Hsin-Ju; Liu, Yung-Chuan; Kuo, Chia-Hung; Shieh, Chwen-Jen
2017-11-15
Although retinol is an important nutrient, it is highly sensitive to oxidation. At present, some ester forms of retinol are generally used in nutritional supplements because of their stability and bioavailability. However, such esters are commonly synthesized by chemical procedures which are harmful to the environment. Thus, this study utilized a green method using lipase as a catalyst with sonication assistance to produce a retinol derivative named retinyl laurate. Moreover, the process was optimized by an artificial neural network (ANN). First, a three-level-four-factor central composite design (CCD) was employed to design 27 experiments, of which the highest relative conversion was 82.64%. Further, the optimal architecture of the CCD-based ANN was developed, including the Levenberg-Marquardt learning algorithm, the transfer function (hyperbolic tangent), the number of iterations (10,000), and the number of nodes in the hidden layer (6). The best performance of the ANN was evaluated by the root mean squared error (RMSE) and the coefficient of determination (R²) between predicted and observed data, which displayed a good data-fitting property. Finally, the process performed with the optimal parameters actually obtained a relative conversion of 88.31% without long-term reactions, and the lipase showed great reusability for biosynthesis. Thus, this study utilizes green technology to efficiently produce retinyl laurate, and the bioprocess is well established by ANN-mediated modeling and optimization.
NASA Astrophysics Data System (ADS)
Ren, Tao; Modest, Michael F.; Fateev, Alexander; Clausen, Sønnik
2015-01-01
In this study, we present an inverse calculation model based on the Levenberg-Marquardt optimization method to reconstruct temperature and species concentration from measured line-of-sight spectral transmissivity data for homogeneous gaseous media. The high temperature gas property database HITEMP 2010 (Rothman et al. (2010) [1]), which contains line-by-line (LBL) information for several combustion gas species, such as CO2 and H2O, was used to predict gas spectral transmissivities. The model was validated by retrieving temperatures and species concentrations from experimental CO2 and H2O transmissivity measurements. Optimal wavenumber ranges for CO2 and H2O transmissivity measured across a wide range of temperatures and concentrations were determined according to the performance of inverse calculations. Results indicate that the inverse radiation model shows good feasibility for measurements of temperature and gas concentration.
NASA Technical Reports Server (NTRS)
1979-01-01
A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
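The damping behavior described above, steepest-descent-like far from the minimum and Newton-like near it, can be sketched as a minimal Levenberg-Marquardt loop for a least-squares problem. The exponential test problem and all numerical values are illustrative placeholders, not the NLSCIDNT rotorcraft model.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50, lam=1e-3):
    """Minimal LM sketch: a large damping factor lam makes the step resemble
    steepest descent; a small lam makes it resemble a Gauss-Newton
    (Newton-like) step for least squares."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        A = J.T @ J + lam * np.eye(p.size)      # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        p_new = p + step
        cost_new = np.sum(residual(p_new) ** 2)
        if cost_new < cost:
            p, cost, lam = p_new, cost_new, lam * 0.3  # accept; trust Newton more
        else:
            lam *= 3.0                                  # reject; back toward descent
    return p

# Fit y = a * exp(b * t) to noiseless synthetic data (illustrative only)
t = np.linspace(0.0, 1.0, 40)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(np.round(levenberg_marquardt(res, jac, [1.0, 0.0]), 3))  # ~ [2.0, -1.5]
```

In a maximum-likelihood setting such as NLSCIDNT, the residuals would be measurement-minus-model time histories and the cost a negative log likelihood rather than a plain sum of squares.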
Zou, Lingyun; Wang, Zhengzhi; Huang, Jiaomin
2007-12-01
Subcellular location is one of the key biological characteristics of proteins. Position-specific profiles (PSP) are introduced as important characteristics of proteins in this article. In this study, to obtain position-specific profiles, the Position Specific Iterative-Basic Local Alignment Search Tool (PSI-BLAST) was used to search for protein sequences in a database. Position-specific scoring matrices are extracted from the profiles as one class of characteristics. Four-part amino acid compositions and 1st-7th order dipeptide compositions have also been calculated as the other two classes of characteristics. Therefore, twelve characteristic vectors are extracted from each of the protein sequences. Next, the characteristic vectors are weighted by a simple weighting function and input into a BP neural network predictor named PSP-Weighted Neural Network (PSP-WNN). The Levenberg-Marquardt algorithm is employed to adjust the weight matrices and thresholds during network training instead of the error back-propagation algorithm. With a jackknife test on the RH2427 dataset, PSP-WNN achieved an overall prediction accuracy of 88.4%, higher than the general BP neural network, the Markov model, and the fuzzy k-nearest neighbors algorithm on this dataset. In addition, the prediction performance of PSP-WNN was evaluated with a five-fold cross-validation test on the PK7579 dataset, and the prediction results were consistently better than those of the previous method based on several support vector machines using compositions of both amino acids and amino acid pairs. These results indicate that PSP-WNN is a powerful tool for subcellular localization prediction. At the end of the article, the influence on prediction accuracy of different weighting proportions among the three characteristic vector categories is discussed, and a proportion that increases the prediction accuracy is identified.
Imhoff, Johannes F.; Rahn, Tanja; Künzel, Sven; Neulinger, Sven C.
2018-01-01
Two different photosystems for performing bacteriochlorophyll-mediated photosynthetic energy conversion are employed in different bacterial phyla. Those bacteria employing a photosystem II type of photosynthetic apparatus include the phototrophic purple bacteria (Proteobacteria), Gemmatimonas and Chloroflexus with their photosynthetic relatives. The proteins of the photosynthetic reaction center PufL and PufM are essential components and are common to all bacteria with a type-II photosynthetic apparatus, including the anaerobic as well as the aerobic phototrophic Proteobacteria. Therefore, PufL and PufM proteins and their genes are perfect tools to evaluate the phylogeny of the photosynthetic apparatus and to study the diversity of the bacteria employing this photosystem in nature. Almost complete pufLM gene sequences and the derived protein sequences from 152 type strains and 45 additional strains of phototrophic Proteobacteria employing photosystem II were compared. The results give interesting and comprehensive insights into the phylogeny of the photosynthetic apparatus and clearly define Chromatiales, Rhodobacterales, Sphingomonadales as major groups distinct from other Alphaproteobacteria, from Betaproteobacteria and from Caulobacterales (Brevundimonas subvibrioides). A special relationship exists between the PufLM sequences of those bacteria employing bacteriochlorophyll b instead of bacteriochlorophyll a. A clear phylogenetic association of aerobic phototrophic purple bacteria to anaerobic purple bacteria according to their PufLM sequences is demonstrated indicating multiple evolutionary lines from anaerobic to aerobic phototrophic purple bacteria. The impact of pufLM gene sequences for studies on the environmental diversity of phototrophic bacteria is discussed and the possibility of their identification on the species level in environmental samples is pointed out. PMID:29472894
Apollo 16, LM-11 ascent propulsion system final flight evaluation
NASA Technical Reports Server (NTRS)
Griffin, W. G.
1974-01-01
The duty cycle for the LM-11 APS consisted of two firings, an ascent stage liftoff from the lunar surface, and the terminal phase initiation (TPI) burn. APS performance for the first firing was evaluated and found to be satisfactory. No propulsion data were received from the second APS burn; however, all indications were that the burn was nominal. Engine ignition for the APS lunar liftoff burn occurred at the Apollo elapsed time (AET) of 175:31:47.9 (hours:minutes:seconds). Burn duration was 427.7 seconds.
LM-research opportunities and activities at Beer-Sheva
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lesin, S.
1996-06-01
Energy conversion concepts based on liquid metal (LM) magnetohydrodynamic (MHD) technology were intensively investigated at the Center for MHD Studies (CMHDS) in the Ben-Gurion University of the Negev in Israel. LMMHD energy conversion systems operate in a closed cycle as follows: heat intended for conversion into electricity is added to a liquid metal contained in a closed loop of pipes. The liquid metal is mixed with vapor or gas introduced from outside so that a two-phase mixture is formed. The gaseous phase performs a thermodynamic cycle, converting a certain amount of heat into mechanical energy of the liquid metal. This energy is converted into electrical power as the metal flows across a magnetic field in the MHD channel. Those systems where the expanding thermodynamic fluid performs work against gravitational forces (natural circulation loops) and which use heavy liquid metals are named ETGAR systems. A number of different heavy-metal facilities have been specially constructed and tested with fluid combinations of mercury and steam, mercury and nitrogen, mercury and freon, lead-bismuth and steam, and lead and steam. Since the experimental investigation of such flows is a very difficult task and all the known measurement methods are incomplete and not fully reliable, a variety of experimental approaches have been developed. In most experiments, instantaneous pressure distributions along the height of the upcomer were measured and the average void fraction was calculated numerically using the one-dimensional equation for the two-phase flow. The research carried out at the CMHDS led to significant improvements in the characterization of the two-phase phenomena expected in the riser of ETGAR systems. One of the most important outcomes is the development of a new empirical correlation which enables the reliable prediction of the velocity ratio between the LM and the steam (slip), the friction factor, as well as of the steam void fraction distribution along the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukuda, Hiroki; Nakamura, Seikou; Chisaki, Yugo
2016-02-26
Daphnetin, 7,8-dihydroxycoumarin, one of the main constituents of Daphne odora var. marginata, has multiple pharmacological activities including anti-proliferative effects in cancer cells. In this study, using a Transwell system, we showed that daphnetin inhibited invasion and migration of highly metastatic murine osteosarcoma LM8 cells in a dose-dependent manner. Following treatment with daphnetin, cells that penetrated the Transwell membrane were rounder than non-treated cells. Immunofluorescence analysis revealed that daphnetin decreased the numbers of intracellular stress fibers and filopodia. Moreover, daphnetin treatment dramatically decreased the expression levels of RhoA and Cdc42. In summary, the dihydroxycoumarin derivative daphnetin inhibits the invasion and migration of LM8 cells, and therefore represents a promising agent for use against metastatic cancer. - Highlights: • Daphnetin, a coumarin derivative, inhibited invasion and migration of LM8 cells. • Stress fibers and filopodia were decreased by daphnetin treatment. • Daphnetin decreased RhoA and Cdc42 protein expression.
NASA Astrophysics Data System (ADS)
Ma, Junhai; Ren, Wenbo; Zhan, Xueli
2017-04-01
Building on previous studies, this paper improves the three-dimensional IS-LM model in macroeconomics, analyzes the equilibrium point of the system and its stability conditions, and focuses on the parameters and complex dynamic characteristics when a Hopf bifurcation occurs in the three-dimensional IS-LM macroeconomic system. In order to analyze the stability of the limit cycles that arise at the Hopf bifurcation, this paper further introduces the first Lyapunov coefficient, i.e. it judges the stability of the business cycle from a practical point of view. Numerical simulation results show that within most of the parameter range, the limit cycle of the 3D IS-LM macroeconomic system is stable, that is, the business cycle is stable; as the parameters increase, the limit cycles become unstable, and the parameter range in which this occurs is small. The results of this paper provide useful guidance for the analysis of macroeconomic systems.
A Sub-Sampling Approach for Data Acquisition in Gamma Ray Emission Tomography
NASA Astrophysics Data System (ADS)
Fysikopoulos, Eleftherios; Kopsinis, Yannis; Georgiou, Maria; Loudos, George
2016-06-01
rate), in terms of energy resolution and image signal to noise ratio for both gamma ray energies. The Levenberg-Marquardt (LM) non-linear least-squares algorithm was used, in post processing, in order to fit the acquired data with the proposed model. The results showed that analog pulses prior to digitization are being estimated with high accuracy after fitting with the bi-exponential model.
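The post-processing fit described above, a bi-exponential pulse model fitted by Levenberg-Marquardt non-linear least squares, can be sketched with SciPy's curve_fit (which uses LM when no bounds are given). The pulse shape, time constants, and noise level here are illustrative placeholders, not the paper's detector data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_exponential(t, a, tau_rise, tau_decay):
    # Common detector-pulse shape: difference of a slow decay and a fast rise
    return a * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

t = np.linspace(0.0, 5.0, 200)            # time in arbitrary units
pulse = bi_exponential(t, 1.0, 0.2, 1.5)  # "true" pulse (illustrative values)
rng = np.random.default_rng(1)
noisy = pulse + rng.normal(0.0, 0.01, t.size)

# curve_fit with no bounds defaults to the Levenberg-Marquardt algorithm
popt, _ = curve_fit(bi_exponential, t, noisy, p0=[0.8, 0.3, 1.0])
print(np.round(popt, 2))  # recovered (a, tau_rise, tau_decay), close to truth
```

In a sub-sampling acquisition scheme, a parametric fit like this is what allows the analog pulse to be reconstructed accurately from relatively few digitized samples.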
Photometric Variability of the mCP Star CS Vir: Evolution of the Rotation Period
NASA Astrophysics Data System (ADS)
Ozuyar, D.; Sener, H. T.; Stevens, I. R.
2018-01-01
The aim of this study is to accurately calculate the rotational period of CS Vir by using STEREO observations and to investigate a possible period variation of the star with the help of all accessible data. The STEREO data, which cover a 5-yr time interval between 2007 and 2011, are analysed by means of the Lomb-Scargle and Phase Dispersion Minimization methods. In order to obtain a reliable rotation period and its error value, computational algorithms such as the Levenberg-Marquardt and Monte Carlo simulation algorithms are applied to the data sets. Thus, the rotation period of CS Vir is improved to 9.29572(12) d using the 5-yr combined data set. Also, the light elements are calculated as HJDmax = 2454715.975(11) + 9.29572(12) d × E + 9.78(1.13) × 10⁻⁸ d × E² by means of the extremum times derived from the STEREO light curves and archives. Moreover, with this study, a period variation is revealed for the first time, and it is found that the period has lengthened by 0.66(8) s yr⁻¹, equivalent to 66 s per century. Additionally, a time-scale for a possible spin-down is calculated to be around τSD ≈ 10⁶ yr. Differential rotation and magnetic braking are thought to be responsible for the rotational deceleration. It is deduced that the spin-down time-scale of the star is nearly three orders of magnitude shorter than its main-sequence lifetime (τMS ≈ 10⁹ yr). In turn, it is suggested that the process of increase in the period might be reversible.
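The quoted period lengthening of 0.66 s yr⁻¹ follows directly from the quadratic term of the light elements: for HJDmax = T0 + P·E + c·E², the instantaneous period is P(E) = P + 2c·E, so the period changes by 2c per rotation cycle.

```python
# Period change rate from the quadratic light elements of CS Vir
# HJDmax = T0 + P*E + c*E^2  =>  P(E) = P + 2c*E, i.e. dP/dE = 2c per cycle
P = 9.29572            # rotation period in days
c = 9.78e-8            # quadratic coefficient in days

cycles_per_year = 365.25 / P
dP_dt = 2 * c * cycles_per_year * 86400.0  # convert days/yr to seconds/yr
print(round(dP_dt, 2))  # -> 0.66
```

This reproduces the paper's 0.66 s yr⁻¹ (equivalently about 66 s per century) from its own fitted light elements.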
NASA Astrophysics Data System (ADS)
Hendricks, S.; Hoppmann, M.; Hunkeler, P. A.; Kalscheuer, T.; Gerdes, R.
2015-12-01
In Antarctica, ice crystals (platelets) form and grow in supercooled waters below ice shelves. These platelets rise and accumulate beneath nearby sea ice to form a several meter thick sub-ice platelet layer. This special ice type is a unique habitat, influences sea-ice mass and energy balance, and its volume can be interpreted as an indicator for ice - ocean interactions. Although progress has been made in determining and understanding its spatio-temporal variability based on point measurements, an investigation of this phenomenon on a larger scale remains a challenge due to logistical constraints and a lack of suitable methodology. In the present study, we applied a laterally constrained Marquardt-Levenberg inversion to a unique multi-frequency electromagnetic (EM) induction sounding dataset obtained on the ice-shelf influenced fast-ice regime of Atka Bay, eastern Weddell Sea. We adapted the inversion algorithm to incorporate a sensor-specific signal bias, and confirmed the reliability of the algorithm by performing a sensitivity study using synthetic data. We inverted the field data for sea-ice and sub-ice platelet-layer thickness and electrical conductivity, and calculated ice-volume fractions from platelet-layer conductivities using Archie's Law. The thickness results agreed well with drill-hole validation datasets within the uncertainty range, and the ice-volume fraction also yielded plausible results. Our findings imply that multi-frequency EM induction sounding is a suitable approach to efficiently map sea-ice and platelet-layer properties. However, we emphasize that the successful application of this technique requires a break with traditional EM sensor calibration strategies due to the need for absolute calibration with respect to a physical forward model.
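The Archie's-Law step described above can be sketched as follows: the law relates bulk conductivity to the conductivity of the liquid (brine) phase and its volume fraction, so inverting it for the liquid fraction gives the ice-volume fraction as its complement. The cementation exponent and the conductivity values below are illustrative assumptions, not the study's calibrated parameters.

```python
def ice_volume_fraction(sigma_bulk, sigma_brine, m=2.0):
    """Invert Archie's Law, sigma_bulk = sigma_brine * phi**m, for the
    liquid-fraction phi; the solid (ice) fraction is then 1 - phi.
    The cementation exponent m = 2.0 is an illustrative placeholder."""
    phi = (sigma_bulk / sigma_brine) ** (1.0 / m)
    return 1.0 - phi

# Example: platelet-layer conductivity of 1.2 S/m in 2.7 S/m seawater
# (both values illustrative)
print(round(ice_volume_fraction(1.2, 2.7), 2))  # -> 0.33
```

In practice the exponent m would be chosen for the specific ice fabric, and the brine conductivity taken from in-situ seawater measurements.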
Wienke, B R; O'Leary, T R
2008-05-01
Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed-gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed-gas risks, the USS Perry deep rebreather (RB) exploration dive, a world-record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed-gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed-gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance-reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
Pontone, Gianluca; Muscogiuri, Giuseppe; Andreini, Daniele; Guaricci, Andrea I; Guglielmo, Marco; Baggiano, Andrea; Fazzari, Fabio; Mushtaq, Saima; Conte, Edoardo; Annoni, Andrea; Formenti, Alberto; Mancini, Elisabetta; Verdecchia, Massimo; Campari, Alessandro; Martini, Chiara; Gatti, Marco; Fusini, Laura; Bonfanti, Lorenzo; Consiglio, Elisa; Rabbat, Mark G; Bartorelli, Antonio L; Pepi, Mauro
2018-03-27
A new postprocessing algorithm named adaptive statistical iterative reconstruction (ASIR)-V has been recently introduced. The aim of this article was to analyze the impact of ASIR-V algorithm on signal, noise, and image quality of coronary computed tomography angiography. Fifty consecutive patients underwent clinically indicated coronary computed tomography angiography (Revolution CT; GE Healthcare, Milwaukee, WI). Images were reconstructed using filtered back projection and ASIR-V 0%, and a combination of filtered back projection and ASIR-V 20%-80% and ASIR-V 100%. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were calculated for left main coronary artery (LM), left anterior descending artery (LAD), left circumflex artery (LCX), and right coronary artery (RCA) and were compared between the different postprocessing algorithms used. Similarly a four-point Likert image quality score of coronary segments was graded for each dataset and compared. A cutoff value of P < .05 was considered statistically significant. Compared to ASIR-V 0%, ASIR-V 100% demonstrated a significant reduction of image noise in all coronaries (P < .01). Compared to ASIR-V 0%, SNR was significantly higher with ASIR-V 60% in LM (P < .01), LAD (P < .05), LCX (P < .05), and RCA (P < .01). Compared to ASIR-V 0%, CNR for ASIR-V ≥60% was significantly improved in LM (P < .01), LAD (P < .05), and RCA (P < .01), whereas LCX demonstrated a significant improvement with ASIR-V ≥80%. ASIR-V 60% had significantly better Likert image quality scores compared to ASIR-V 0% in segment-, vessel-, and patient-based analyses (P < .01). Reconstruction with ASIR-V 60% provides the optimal balance between image noise, SNR, CNR, and image quality. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Morris, Melody K; Shriver, Zachary; Sasisekharan, Ram; Lauffenburger, Douglas A
2012-03-01
Mathematical models have substantially improved our ability to predict the response of a complex biological system to perturbation, but their use is typically limited by difficulties in specifying model topology and parameter values. Additionally, incorporating entities across different biological scales ranging from molecular to organismal in the same model is not trivial. Here, we present a framework called "querying quantitative logic models" (Q2LM) for building and asking questions of constrained fuzzy logic (cFL) models. cFL is a recently developed modeling formalism that uses logic gates to describe influences among entities, with transfer functions to describe quantitative dependencies. Q2LM does not rely on dedicated data to train the parameters of the transfer functions, and it permits straightforward incorporation of entities at multiple biological scales. The Q2LM framework can be employed to ask questions such as: Which therapeutic perturbations accomplish a designated goal, and under what environmental conditions will these perturbations be effective? We demonstrate the utility of this framework for generating testable hypotheses in two examples: (i) an intracellular signaling network model; and (ii) a model for pharmacokinetics and pharmacodynamics of cell-cytokine interactions; in the latter, we validate hypotheses concerning molecular design of granulocyte colony stimulating factor. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Meeker, Rick B; Poulton, Winona; Feng, Wen-hai; Hudson, Lola; Longo, Frank M
2012-06-01
Feline immunodeficiency virus (FIV) infection, like human immunodeficiency virus (HIV) infection, produces systemic and central nervous system disease in its natural host, the domestic cat, that parallels the pathogenesis seen in HIV-infected humans. The ability to culture feline nervous system tissue affords a unique opportunity to directly examine interactions of infectious virus with CNS cells for the development of models and treatments that can then be translated to a natural infectious model. To explore the therapeutic potential of a new p75 neurotrophin receptor ligand, LM11A-31, we evaluated neuronal survival, neuronal damage and calcium homeostasis in cultured feline neurons following inoculation with FIV. FIV resulted in the gradual appearance of dendritic beading, pruning of processes and shrinkage of neuronal perikarya. Astrocytes developed a more activated appearance, and there was an enhanced accumulation of microglia, particularly at longer times post-inoculation. Addition of 10 nM LM11A-31 to the cultures greatly reduced or eliminated the neuronal pathology as well as the FIV effects on astrocytes and microglia. LM11A-31 also prevented the development of delayed calcium deregulation in feline neurons exposed to conditioned medium from FIV-treated macrophages. The suppression of calcium accumulation prevented the development of foci of calcium accumulation and beading in the dendrites. FIV replication was unaffected by LM11A-31. The strong neuroprotection afforded by LM11A-31 in an infectious in vitro model indicates that LM11A-31 may have excellent potential for the treatment of HIV-associated neurodegeneration.
NASA Astrophysics Data System (ADS)
Xing, X.; Yuan, Z.; Chen, L. F.; Yu, X. Y.; Xiao, L.
2018-04-01
Stability control is one of the major technical difficulties in highway subgrade construction engineering. Building a deformation model is a crucial step in InSAR time-series deformation monitoring, yet most InSAR deformation models are purely empirical mathematical models that do not consider the physical mechanism of the monitored object. In this study, we take rheology into consideration, introducing rheological parameters into traditional InSAR deformation models. To assess the feasibility and accuracy of the new model, both simulated and real deformation data over the Lungui highway (a typical highway built on a soft clay subgrade in Guangdong Province, China) are investigated with TerraSAR-X satellite imagery. To solve for the unknowns of the non-linear rheological model, three algorithms, Gauss-Newton (GN), Levenberg-Marquardt (LM), and a Genetic Algorithm (GA), are applied and compared for estimating the unknown parameters. Considering both computational efficiency and accuracy, GA is chosen for the new model in our case study. A preliminary real-data experiment is conducted using 17 TerraSAR-X Stripmap images (3-m resolution). With the new deformation model and the GA described above, the unknown rheological parameters over all high-coherence points are obtained and the LOS deformation (low-pass component) sequences are generated.
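A genetic algorithm for estimating nonlinear model parameters, as chosen above, can be sketched as follows. The creep-type settlement model, parameter bounds, and GA settings below are illustrative stand-ins (the paper's actual rheological model is not given in the abstract); only the elitist tournament-plus-mutation loop is the point.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical creep-type settlement model d(t) = d_inf * (1 - exp(-t / tau));
# a stand-in for the (unspecified) rheological deformation model.
def settle(params, t):
    d_inf, tau = params
    return d_inf * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 15.0, 40)
obs = settle((2.0, 3.0), t)          # noise-free synthetic deformation series

def sse(params):
    return float(np.sum((settle(params, t) - obs) ** 2))

# Minimal elitist GA: binary tournament selection + Gaussian mutation.
pop = np.column_stack([rng.uniform(0.1, 5.0, 60), rng.uniform(0.5, 10.0, 60)])
for gen in range(150):
    fit = np.array([sse(p) for p in pop])
    elite = pop[np.argmin(fit)]
    sigma = 0.5 * 0.98 ** gen        # mutation scale shrinks over generations
    children = [elite]               # elitism: best individual always survives
    for _ in range(len(pop) - 1):
        i, j = rng.integers(0, len(pop), 2)
        parent = pop[i] if fit[i] < fit[j] else pop[j]
        child = np.clip(parent + rng.normal(0.0, sigma, 2),
                        [0.05, 0.1], [6.0, 12.0])   # keep within bounds
        children.append(child)
    pop = np.array(children)

fit = np.array([sse(p) for p in pop])
d_inf_hat, tau_hat = pop[np.argmin(fit)]
```

Unlike GN or LM, the GA needs no Jacobian and no good starting guess, which is the usual trade-off the abstract weighs against computational cost.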
1969-11-19
AS12-46-6726 (19 Nov. 1969) --- Astronaut Alan L. Bean, lunar module pilot for the Apollo 12 mission, starts down the ladder of the Lunar Module (LM) to join astronaut Charles Conrad Jr., mission commander, in extravehicular activity (EVA). While astronauts Conrad and Bean descended in the LM "Intrepid" to explore the Ocean of Storms region of the moon, astronaut Richard F. Gordon Jr., command module pilot, remained with the Command and Service Modules (CSM) "Yankee Clipper" in lunar orbit.
SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Tian, Z; Song, T
Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D array of scaling factors accounting for longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of all beamlets, calculated with the fitted profile parameters and scaled by the scaling factors, these factors could be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6 × 6, 10 × 10, 15 × 15 and 20 × 20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
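The penumbra fit mentioned above can be illustrated with a toy profile. The sigmoid form, central dose D0, field edge x50, and penumbra width w below are hypothetical (the abstract does not specify the kernel profile), and instead of LM this sketch uses a logit transform that makes the fit linear; such a linearized fit is a common way to generate starting values for the LM refinement the authors describe.

```python
import numpy as np

# Hypothetical sigmoid penumbra: D(x) = D0 / (1 + exp((x - x50) / w)).
D0, x50, w = 1.0, 0.5, 0.3          # illustrative central dose, edge, width
x = np.linspace(-1.0, 2.0, 60)
D = D0 / (1.0 + np.exp((x - x50) / w))

# With D0 known, the logit transform linearizes the model:
# ln(D0/D - 1) = x/w - x50/w, so a straight-line fit recovers w and x50.
y = np.log(D0 / D - 1.0)
m, b = np.polyfit(x, y, 1)
w_hat, x50_hat = 1.0 / m, -b / m
```

For noisy measured profiles the log transform distorts the error weighting, which is exactly why a final nonlinear LM fit on the untransformed dose is preferred in practice.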
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
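The separable least-squares idea described above can be illustrated with a toy two-exponential model standing in for a multi-tracer time-activity curve (the actual PET compartment equations are not reproduced in the abstract): for each candidate pair of nonlinear decay rates, the linear amplitudes are solved in closed form, so an exhaustive search only runs over the reduced nonlinear space.

```python
import numpy as np

# Toy stand-in: y(t) = c1*exp(-l1*t) + c2*exp(-l2*t).
# Amplitudes c1, c2 are linear; decay rates l1, l2 are nonlinear.
t = np.linspace(0.0, 10.0, 80)
y = 1.0 * np.exp(-0.3 * t) + 0.5 * np.exp(-2.0 * t)

l1_grid = np.linspace(0.1, 1.0, 46)   # step 0.02; contains the true 0.3
l2_grid = np.linspace(1.0, 4.0, 31)   # step 0.1; contains the true 2.0
best = (np.inf, None, None)
for l1 in l1_grid:
    for l2 in l2_grid:
        # For fixed rates, the amplitudes come from a linear least-squares solve.
        B = np.column_stack([np.exp(-l1 * t), np.exp(-l2 * t)])
        c, *_ = np.linalg.lstsq(B, y, rcond=None)
        err = float(np.sum((B @ c - y) ** 2))
        if err < best[0]:
            best = (err, (l1, l2), c)

sse_min, (l1_hat, l2_hat), c_hat = best
```

Because every grid point's amplitudes are globally optimal for those rates, the exhaustive search is guaranteed to bracket the true global minimum to within the grid resolution, which is the property the abstract exploits.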
Temperature and pressure correlation for volume of gas hydrates with crystal structures sI and sII
NASA Astrophysics Data System (ADS)
Vinš, Václav; Jäger, Andreas; Hielscher, Sebastian; Span, Roland; Hrubý, Jan; Breitkopf, Cornelia
The temperature and pressure correlations for the volume of gas hydrates forming crystal structures sI and sII developed in a previous study [Fluid Phase Equilib. 427 (2016) 268-281], which focused on the modeling of pure gas hydrates relevant to CCS (carbon capture and storage), were revised and modified in this study for the modeling of mixed hydrates. A universal reference state at a temperature of 273.15 K and a pressure of 1 Pa is used in the new correlation. Coefficients for the thermal expansion together with the reference lattice parameter were simultaneously correlated to both the temperature data and the pressure data for the lattice parameter. A two-stage Levenberg-Marquardt algorithm was employed for the parameter optimization. The pressure dependence, described in terms of the bulk modulus, remained unchanged compared to the original study; a constant bulk modulus B0 = 10 GPa was employed for all selected hydrate formers. The new correlation is in good agreement with the experimental data over wide temperature and pressure ranges, from 0 K to 293 K and from 0 to 2000 MPa, respectively. Compared to the original correlation used for the modeling of pure gas hydrates, the new correlation provides significantly better agreement with the experimental data for sI hydrates, while its results are comparable to those of the old correlation for sII hydrates. In addition, the new correlation is suitable for the modeling of mixed hydrates.
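The thermal-expansion part of such a correlation is linear in its coefficients once a constant expansivity is assumed, so the reference lattice parameter and expansivity can be recovered by a straight-line fit. The sI-like lattice parameter and expansivity values below are illustrative, not taken from the paper, and the constant-alpha form is a simplification of the correlation's actual polynomial coefficients.

```python
import numpy as np

# Illustrative sI-hydrate-like numbers (not from the paper): reference lattice
# parameter a0 (angstrom) at T0 = 273.15 K and a constant expansivity alpha.
T0, a0_true, alpha_true = 273.15, 11.99, 7.0e-5
T = np.linspace(100.0, 290.0, 50)
a = a0_true * (1.0 + alpha_true * (T - T0))      # synthetic lattice-parameter data

# a(T) = a0 + (a0 * alpha) * (T - T0) is linear in (T - T0).
slope, intercept = np.polyfit(T - T0, a, 1)
a0_hat = intercept
alpha_hat = slope / a0_hat
```

In the paper's two-stage LM scheme these linear-stage estimates would be refined jointly with the pressure-dependent (bulk modulus) terms, which make the full problem nonlinear.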
NASA Technical Reports Server (NTRS)
Ocasio, W. C.; Rigney, D. R.; Clark, K. P.; Mark, R. G.; Goldberger, A. L. (Principal Investigator)
1993-01-01
We describe the theory and computer implementation of a newly-derived mathematical model for analyzing the shape of blood pressure waveforms. Input to the program consists of an ECG signal, plus a single continuous channel of peripheral blood pressure, which is often obtained invasively from an indwelling catheter during intensive-care monitoring or non-invasively from a tonometer. Output from the program includes a set of parameter estimates, made for every heart beat. Parameters of the model can be interpreted in terms of the capacitance of large arteries, the capacitance of peripheral arteries, the inertance of blood flow, the peripheral resistance, and arterial pressure due to basal vascular tone. Aortic flow due to contraction of the left ventricle is represented by a forcing function in the form of a descending ramp, the area under which represents the stroke volume. Differential equations describing the model are solved by the method of Laplace transforms, permitting rapid parameter estimation by the Levenberg-Marquardt algorithm. Parameter estimates and their confidence intervals are given in six examples, which are chosen to represent a variety of pressure waveforms that are observed during intensive-care monitoring. The examples demonstrate that some of the parameters may fluctuate markedly from beat to beat. Our program will find application in projects that are intended to correlate the details of the blood pressure waveform with other physiological variables, pathological conditions, and the effects of interventions.
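A reduced version of the parameter estimation described above can be shown with the classic two-element Windkessel, whose diastolic decay is P(t) = P0 * exp(-t/RC). This is a deliberate simplification: the authors' model also includes inertance, peripheral capacitance, and a ramp forcing function, and the RC and P0 values below are purely illustrative.

```python
import numpy as np

# Two-element Windkessel diastolic decay with illustrative values:
# peripheral resistance x arterial capacitance (RC, in s) and onset pressure P0 (mmHg).
t = np.linspace(0.0, 0.8, 40)
RC_true, P0_true = 1.2, 80.0
P = P0_true * np.exp(-t / RC_true)

# Log-linear regression: ln P = ln P0 - t / RC, a straight line in t.
slope, intercept = np.polyfit(t, np.log(P), 1)
RC_hat = -1.0 / slope
P0_hat = float(np.exp(intercept))
```

For the full model, with several interacting compliance, inertance, and resistance terms, no such closed-form transform exists, which is why the authors resort to Laplace-domain solutions combined with Levenberg-Marquardt estimation on every beat.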
Raj, Retheep; Sivanandan, K S
2017-01-01
Estimation of elbow dynamics has been the object of numerous investigations. In this work, a solution is proposed for estimating elbow movement velocity and elbow joint angle from surface electromyography (SEMG) signals acquired from the biceps brachii muscle of the human arm. Two time-domain parameters, integrated EMG (IEMG) and zero crossing (ZC), are extracted from the SEMG signal. The relationships between these parameters and elbow angular displacement and angular velocity during extension and flexion of the elbow are studied. A multiple-input multiple-output model is derived for identifying the kinematics of the elbow. A Nonlinear Auto-Regressive with eXogenous inputs (NARX) structure based on a multilayer perceptron neural network (MLPNN) is proposed for the estimation of elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt-based algorithm and estimates the elbow joint angle and elbow angular velocity with appreciable accuracy. The model is validated using the regression coefficient value (R). The average R value obtained is 0.9641 for elbow angular displacement prediction and 0.9347 for elbow angular velocity prediction. The NARX-structured MLPNN model can thus be used to estimate the angular displacement and angular velocity of the elbow with good accuracy.
NASA Astrophysics Data System (ADS)
Tahavvor, Ali Reza
2017-03-01
In the present study, an artificial neural network and fractal geometry are used to predict frost thickness and density on a cold flat plate with constant surface temperature under forced convection for different ambient conditions. These methods are well suited to this problem: phase changes such as melting and solidification can be simulated by conventional methods, but frost formation is a far more complicated phase-change phenomenon consisting of coupled heat and mass transfer. Conventional mathematical techniques cannot capture the effects of all parameters on frost growth and development, because the process is influenced by many factors and is time dependent. Therefore, soft computing methods, namely an artificial neural network and fractal geometry, are used in this work. The databases for modeling are generated from experimental measurements. First, a multilayer perceptron network is used, and it is found that the back-propagation algorithm with the Levenberg-Marquardt learning rule is the best choice for estimating frost growth properties, owing to its accurate and faster training procedure. Second, fractal geometry based on the Von Koch curve is used to model the frost growth process, particularly frost thickness and density. Comparisons are performed between experimental measurements and the soft computing methods. Results show that soft computing methods can be used efficiently to determine frost properties over a flat plate. Based on the developed models, frost formation over flat plates can be determined for a wide range of conditions.
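The Von Koch construction mentioned above has a simple counting structure: each iteration replaces every segment with four segments one-third as long, so segment count, total length, and fractal dimension all follow directly. The sketch below shows only this geometric bookkeeping, not the paper's mapping from the curve to frost properties.

```python
import math

# Von Koch curve: each iteration replaces every segment with 4 segments 1/3 as long.
def koch(n):
    segments = 4 ** n
    scale = (1.0 / 3.0) ** n
    length = segments * scale                          # total length grows as (4/3)**n
    dim = math.log(segments) / math.log(1.0 / scale)   # similarity dimension
    return segments, length, dim

segs, length, dim = koch(5)   # dim = log 4 / log 3, approximately 1.2619 at every n
```

The unbounded growth of total length at fixed spatial extent is what makes the curve a plausible geometric model for the ramified, ever-densifying frost structure.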
Bohling, Geoffrey C.; Butler, J.J.
2001-01-01
We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Santosa, H.; Hobara, Y.
2017-01-01
The electric field amplitude of the very low frequency (VLF) transmitter in Hawaii (NPM) has been continuously recorded at Chofu (CHF), Tokyo, Japan. The VLF amplitude variability indicates lower ionospheric perturbation in the D region (60-90 km altitude range) around the NPM-CHF propagation path. We carried out prediction of the daily nighttime mean VLF amplitude using a Nonlinear Autoregressive with Exogenous Input Neural Network (NARX NN). The NARX NN model, built from daily input variables of various physical parameters such as stratospheric temperature, total column ozone, cosmic rays, and the Dst and Kp indices, possesses good accuracy during model building. The fitted model was constructed over the training period from 1 January 2011 to 4 February 2013 using three algorithms, namely, Bayesian Regularization Neural Network (BRANN), Levenberg-Marquardt Neural Network (LMANN), and Scaled Conjugate Gradient (SCG). The LMANN had the largest Pearson correlation coefficient (r) of 0.94 and the smallest root-mean-square error (RMSE) of 1.19 dB. The models constructed using LMANN were then applied to predict the VLF amplitude from 5 February 2013 to 31 December 2013. As a result, the one-step (1 day) ahead predicted nighttime VLF amplitude had an r of 0.93 and an RMSE of 2.25 dB. We conclude that the model built according to the proposed methodology provides good predictions of the electric field amplitude of VLF waves for the NPM-CHF (midlatitude) propagation path.
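The two evaluation metrics quoted above, Pearson r and RMSE, are easy to compute and usefully complementary; the tiny observed/predicted series below is made up purely to illustrate them.

```python
import numpy as np

def pearson_r(y_true, y_pred):
    # Pearson correlation coefficient between two equal-length series.
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def rmse(y_true, y_pred):
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sqrt(np.mean(d ** 2)))

# Made-up amplitudes (dB): a constant 1 dB bias leaves r at 1 but costs 1 dB of RMSE.
obs = np.array([10.0, 12.0, 11.0, 14.0, 13.0])
pred = obs + 1.0
r = pearson_r(obs, pred)
err = rmse(obs, pred)
```

This is why both metrics are reported: r measures whether the predicted variability tracks the observed one, while RMSE penalizes any systematic amplitude offset that r alone would hide.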
Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio
2014-01-09
Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size and weight and the wireless connectivity meet the requirement of minimal obtrusiveness and give scientists the possibility to analyze children's motion in daily life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that is representative of real physiological motions and that is referred to as a functional frame (FF). We also present a novel cost function for the Levenberg-Marquardt algorithm, used to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
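The rotation between a sensor frame and a functional frame can be illustrated without the authors' (unpublished) LM cost function: when paired direction vectors are available in both frames, the classic Kabsch/SVD alignment gives the optimal rotation in closed form. This is a named alternative to the paper's method, shown here on synthetic vectors, not a reconstruction of their cost function.

```python
import numpy as np

def align_rotation(A, B):
    # Kabsch: find the rotation R minimizing ||R @ A - B||_F for paired 3xN vectors.
    H = B @ A.T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])     # guard against reflections (det = -1 solutions)
    return U @ D @ Vt

# Synthetic ground truth: a 30-degree rotation about the z axis.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
A = np.random.default_rng(1).normal(size=(3, 10))   # directions in the sensor frame
B = R_true @ A                                      # same directions in the functional frame
R_est = align_rotation(A, B)
```

An iterative LM formulation, as used in the paper, becomes necessary when the cost is not a plain Frobenius norm, e.g. when it mixes heterogeneous measurements or weights physiological motions unevenly.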
Hernández, B; Peña, E; Pascual, G; Rodríguez, M; Calvo, B; Doblaré, M; Bellón, J M
2011-04-01
The aims of this study are to experimentally characterize the passive elastic behaviour of the rabbit abdominal wall and to develop a mechanical constitutive law which accurately reproduces the obtained experimental results. For this purpose, tissue samples from New Zealand White rabbits (2150 ± 50 g) were mechanically tested in vitro. Mechanical tests, consisting of uniaxial loading on tissue samples oriented along the cranio-caudal and perpendicular directions, respectively, revealed the anisotropic non-linear mechanical behaviour of the abdominal tissues. Experiments were performed considering the composite muscle (including the external oblique (EO), internal oblique (IO) and transversus abdominis (TA) muscle layers), as well as separated muscle layers (i.e., the external oblique, and the bilayer formed by the internal oblique and transversus abdominis). Both the EO muscle layer and the IO-TA bilayer demonstrated stiffer behaviour in the direction transverse to the muscle fibres than along them. The fibre arrangement was measured by means of a histological study, which confirmed that collagen fibres are mainly responsible for the passive mechanical strength and stiffness. Furthermore, the degree of anisotropy of the abdominal composite muscle turned out to be less pronounced than that obtained when studying the EO and IO-TA separately. Moreover, a phenomenological constitutive law was used to capture the measured experimental curves, with a Levenberg-Marquardt optimization algorithm used to fit the model constants. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Laidi, Maamar; Hanini, Salah; Rezrazi, Ahmed; Yaiche, Mohamed Redha; El Hadj, Abdallah Abdallah; Chellali, Farouk
2017-04-01
In this study, a backpropagation artificial neural network (BP-ANN) model is used as an alternative approach to predict solar radiation on tilted surfaces (SRT) from a number of variables involved in the physical process: the latitude of the site, mean temperature and relative humidity, Linke turbidity factor and Angstrom coefficient, extraterrestrial solar radiation, solar radiation measured on horizontal surfaces (SRH), and solar zenith angle. Experimental solar radiation data from 13 stations spread all over Algeria for the year 2004 were used for training/validation and testing of the artificial neural networks (ANNs), and one further station was used to assess the interpolation ability of the designed ANN. The ANN model was trained, validated, and tested using 60, 20, and 20% of all data, respectively. The configuration 8-35-1 (8 input, 35 hidden, and 1 output neurons) showed excellent agreement between predictions and experimental data during the test stage, with a determination coefficient of 0.99 and a root mean squared error of 5.75 Wh/m², for a three-layer feedforward backpropagation neural network with the Levenberg-Marquardt training algorithm and a hyperbolic tangent sigmoid and linear transfer function at the hidden and output layers, respectively. This model could be used by researchers or scientists to design high-efficiency solar devices, which are usually tilted at an optimum angle to increase the solar radiation incident on the surface.
Gacesa, Jelena Popadic; Ivancevic, Tijana; Ivancevic, Nik; Paljic, Feodora Popic; Grujic, Nikola
2010-08-26
Our aim was to determine the dynamics of muscle strength increase and fatigue development during repetitive maximal contraction in a specific maximal self-perceived elbow extensor training program. We derive a functional model for m. triceps brachii in the spirit of the traditional Hill two-component muscle model and, after fitting our data, develop a prediction tool for this specific training system. Thirty-six healthy young men (21 ± 1.0 y, BMI 25.4 ± 7.2 kg/m²), who did not take part in any formal resistance exercise regime, volunteered for this study. The training protocol was performed on an isoacceleration dynamometer and lasted for 12 weeks, with a frequency of five sessions per week. Each training session included five sets of 10 maximal contractions (elbow extensions) with a 1 min resting period between sets. The non-linear dynamic system model was fitted to our data using the Levenberg-Marquardt regression algorithm. As a proper dynamical system, our functional model of m. triceps brachii can be used for prediction and control: it can predict muscular fatigue within a single series, cumulative daily muscular fatigue, and muscular growth throughout the training process. In conclusion, the application of non-linear dynamics to this particular training model allows us to mathematically explain some functional changes in the skeletal muscle as a result of its adaptation to programmed physical activity (training). Copyright © 2010 Elsevier Ltd. All rights reserved.
Apollo 16, LM-11 descent propulsion system final flight evaluation
NASA Technical Reports Server (NTRS)
Avvenire, A. T.
1974-01-01
The performance of the LM-11 descent propulsion system during the Apollo 16 missions was evaluated and found satisfactory. The average engine effective specific impulse was 0.1 second higher than predicted, but well within the predicted one sigma uncertainty of 0.2 seconds. Several flight measurement discrepancies existed during the flight as follows: (1) the chamber pressure transducer had a noticeable drift, exhibiting a maximum error of about 1.5 psi at approximately 130 seconds after engine ignition, (2) the fuel and oxidizer interface pressure measurements appeared to be low during the entire flight, and (3) the fuel propellant quantity gaging system did not perform within expected accuracies.
Apollo Operations Handbook Lunar Module (LM 11 and Subsequent) Vol. 2 Operational Procedures
NASA Technical Reports Server (NTRS)
1971-01-01
The Apollo Operations Handbook (AOH) is the primary means of documenting LM descriptions and procedures. The AOH is published in two separately bound volumes. This information is useful in support of program management, engineering, test, flight simulation, and real time flight support efforts. This volume contains crew operational procedures: normal, backup, abort, malfunction, and emergency. These procedures define the sequence of actions necessary for safe and efficient subsystem operation.
Chen, Liang
2017-06-10
Bacillus velezensis LM2303 is a biocontrol strain with a broad inhibitory spectrum against plant pathogens, isolated from the dung of wild yaks inhabiting the Qinghai-Tibet Plateau, China. Here we present its complete genome sequence, which consists of a single, circular chromosome of 3,989,393 bp with a 46.68% G+C content. Genome analysis revealed genes encoding specialized functions for the biosynthesis of antifungal and antibacterial metabolites, the promotion of plant growth, the alleviation of oxidative stress, and nutrient utilization. The biosynthesis of antimicrobial metabolites in strain LM2303 was confirmed by biochemical analysis, and its plant growth-promoting traits were confirmed by inoculation tests. Our results establish a better foundation for further studies and biocontrol applications of B. velezensis LM2303. Copyright © 2017 Elsevier B.V. All rights reserved.
Sekulić, Vladislav; Skinner, Frances K
2017-01-01
Although biophysical details of inhibitory neurons are becoming known, it is challenging to map these details onto function. Oriens-lacunosum/moleculare (O-LM) cells are inhibitory cells in the hippocampus that gate information flow, firing while phase-locked to theta rhythms. We build on our existing computational model database of O-LM cells to link model with function. We place our models in high-conductance states and modulate inhibitory inputs at a wide range of frequencies. We find preferred spiking recruitment of models at high (4–9 Hz) or low (2–5 Hz) theta depending on, respectively, the presence or absence of h-channels on their dendrites. This also depends on slow delayed-rectifier potassium channels, and preferred theta ranges shift when h-channels are potentiated by cyclic AMP. Our results suggest that O-LM cells can be differentially recruited by frequency-modulated inputs depending on specific channel types and distributions. This work exposes a strategy for understanding how biophysical characteristics contribute to function. DOI: http://dx.doi.org/10.7554/eLife.22962.001 PMID:28318488
An Improved Calibration Method for a Rotating 2D LIDAR System
Zeng, Yadan; Yu, Heng; Song, Shuang; Lin, Mingqiang; Sun, Bo; Jiang, Wei; Meng, Max Q.-H.
2018-01-01
This paper presents an improved calibration method for a rotating two-dimensional light detection and ranging (R2D-LIDAR) system, which can obtain a 3D scanning map of the surroundings. The proposed R2D-LIDAR system, composed of a 2D LIDAR and a rotating unit, is pervasively used in the field of robotics owing to its low cost and dense scanning data. Nevertheless, the R2D-LIDAR system must be calibrated before building the geometric model because there are assembly deviations and abrasion between the 2D LIDAR and the rotating unit. Hence, the calibration procedure should address both the alignment between the two devices and the bias of the 2D LIDAR itself. The main purpose of this work is to resolve the 2D LIDAR bias issue using a flat plane, based on the Levenberg–Marquardt (LM) algorithm. Experimental results for the calibration of the R2D-LIDAR system demonstrate the reliability of this strategy, which accurately estimates sensor offsets with errors within ±15 mm in the captured scans. PMID:29414885
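As a rough illustration of the flat-plane step described in the abstract above, the sketch below fits plane parameters to synthetic noisy range points with SciPy's Levenberg–Marquardt solver. The data, the spherical-angle parameterization, and the noise level are invented for illustration; this is not the paper's calibration model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical flat calibration target: points on a plane n.x = d,
# perturbed along the normal by small synthetic "range noise".
true_n = np.array([0.2, -0.1, 1.0])
true_n /= np.linalg.norm(true_n)
true_d = 1.5

pts = rng.uniform(-1.0, 1.0, size=(200, 3))
pts += (true_d - pts @ true_n)[:, None] * true_n       # project onto the plane
pts += 0.005 * rng.standard_normal((200, 1)) * true_n  # range noise

def residuals(p):
    # Plane parameterized by spherical angles (theta, phi) and offset d.
    theta, phi, d = p
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return pts @ n - d  # signed point-to-plane distances

fit = least_squares(residuals, x0=[0.1, 0.1, 1.0], method="lm")
theta, phi, d = fit.x
n_hat = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
```

The angle parameterization keeps the normal unit-length without constraints, which is why the unconstrained MINPACK `lm` method can be used directly.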
System and Method for Modeling the Flow Performance Features of an Object
NASA Technical Reports Server (NTRS)
Jorgensen, Charles (Inventor); Ross, James (Inventor)
1997-01-01
The method and apparatus include a neural network for generating a model of an object in a wind tunnel from performance data on the object. The network is trained from test input signals (e.g., leading edge flap position, trailing edge flap position, angle of attack, other geometric configurations, and power settings) and test output signals (e.g., lift, drag, pitching moment, or other performance features). In one embodiment, the neural network training method employs a modified Levenberg-Marquardt optimization technique. The model can be generated in real time as wind tunnel testing proceeds. Once trained, the model is used to estimate performance features of the aircraft given geometric configuration and/or power setting inputs. The invention can also be applied in other similar static flow modeling applications in aerodynamics, hydrodynamics, fluid dynamics, and other such disciplines: for example, the static testing of cars, sails, foils, propellers, keels, rudders, turbines, fins, and the like in a wind tunnel, water trough, or other flowing medium.
Padé Approximant and Minimax Rational Approximation in Standard Cosmology
NASA Astrophysics Data System (ADS)
Zaninetti, Lorenzo
2016-02-01
The luminosity distance in the standard cosmology as given by $\Lambda$CDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of $4\%$ at redshift $= 10$. A similar procedure for the Taylor expansion of the luminosity distance gives an error of $4\%$ at redshift $= 0.7$; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg--Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of $\Lambda$CDM with other cosmologies is done adopting a statistical point of view.
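The Padé-beats-Taylor behavior reported above can be reproduced on a toy function. The sketch below builds a [2/2] Padé approximant of log(1+x) from its Taylor coefficients with `scipy.interpolate.pade` and compares both approximations at x = 1; the function is a stand-in, not the luminosity-distance expression of the paper.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of log(1+x) about 0: 0, 1, -1/2, 1/3, -1/4
an = [0.0, 1.0, -0.5, 1.0 / 3.0, -0.25]
p, q = pade(an, 2)                 # [2/2] Pade approximant (two poly1d's)

x = 1.0
exact = np.log(1.0 + x)
taylor = np.polyval(an[::-1], x)   # 4th-order Taylor partial sum
pade_val = p(x) / q(x)

taylor_err = abs(taylor - exact)
pade_err = abs(pade_val - exact)
```

Both approximations use the same five Taylor coefficients, yet the rational form extrapolates much further from the expansion point, which mirrors the redshift-range advantage claimed in the abstract.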
Őze, A; Puszta, A; Buzás, P; Kóbor, P; Braunitzer, G; Nagy, A
2018-06-21
Flashing light stimulation is often used to investigate the visual system. However, the magnitude of the effect of this stimulus on the various subcortical pathways is not well investigated. The signals of conscious vision are conveyed by the magnocellular, parvocellular and koniocellular pathways. Parvocellular and koniocellular pathways (or more precisely, the L-M opponent and S-cone isolating channels) can be accessed by isoluminant red-green (L-M) and S-cone isolating stimuli, respectively. The main goal of the present study was to explore how costimulation with strong white extrafoveal light flashes alters the perception of stimuli specific to these pathways. Eleven healthy volunteers with negative neurological and ophthalmological history were enrolled for the study. Isoluminance of L-M and S-cone isolating sine-wave gratings was set individually, using the minimum motion procedure. The contrast thresholds for these stimuli as well as for achromatic gratings were determined by an adaptive staircase procedure where subjects had to indicate the orientation (horizontal, oblique or vertical) of the gratings. Thresholds were then determined again while a strong white peripheral light flash was presented 50 ms before each trial. Peripheral light flashes significantly (p < 0.05) increased the contrast thresholds of the achromatic and S-cone isolating stimuli. The threshold elevation was especially marked in case of the achromatic stimuli. However, the contrast threshold for the L-M stimuli was not significantly influenced by the light flashes. We conclude that extrafoveally applied light flashes influence predominantly the perception of achromatic stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Prevalence and levels of Listeria monocytogenes (Lm) in ready-to-eat foods (RTE) at retail.
USDA-ARS?s Scientific Manuscript database
Although significant efforts have been taken to control Lm in Ready-to-eat (RTE)foods over the last decade, a well-designed survey is needed to determine whether changes occur in the “true” prevalence and levels of the pathogen and to provide current data to assess the relative ranking of higher ris...
Kar, Subrata; Majumder, D Dutta
2017-08-01
Investigation of brain cancer can detect the abnormal growth of tissue in the brain using computed tomography (CT) scans and magnetic resonance (MR) images of patients. The proposed method classifies brain tumors as benign or malignant based on shape-based feature extraction. The authors used input variables such as shape distance (SD) and shape similarity measure (SSM) in fuzzy tools, and used fuzzy rules to evaluate the risk status as an output variable. We present a classifier neural network system (NNS) based on Levenberg-Marquardt (LM), a feed-forward back-propagation learning algorithm used to train the NN for the status of brain cancer, if any, which achieved satisfactory performance with 100% accuracy. The proposed methodology is divided into three phases. First, we find the region of interest (ROI) in the brain to detect the tumors using CT and MR images. Second, we extract the shape-based features, like SD and SSM, and grade the brain tumors as benign or malignant using the SD function and SSM as shape-based parameters. Third, we classify the brain cancers using neuro-fuzzy tools. In this experiment, we used a 16-sample database with SSM (μ) values and classified the benignancy or malignancy of the brain tumor lesions using the neuro-fuzzy system (NFS). We have developed a fuzzy expert system (FES) and NFS for early detection of brain cancer from CT and MR images. Shape-based features, such as SD and SSM, were extracted from the ROI of brain tumor lesions and considered as input variables; using fuzzy rules, we were able to evaluate brain cancer risk values for each case. We used an NNS with LM, a feed-forward back-propagation learning algorithm, as a classifier for the diagnosis of brain cancer and achieved satisfactory performance with 100% accuracy. The proposed network was trained with MR image datasets of 16 cases. The 16 cases were fed to the ANN with 2 input neurons, one
A dynamic IS-LM business cycle model with two time delays in capital accumulation equation
NASA Astrophysics Data System (ADS)
Zhou, Lujun; Li, Yaqiong
2009-06-01
In this paper, we analyze an augmented IS-LM business cycle model in which two time delays are introduced into the capital accumulation equation to represent investment processes, following Kalecki's idea. Applying stability switch criteria and Hopf bifurcation theory, we prove that the time delays cause the equilibrium to lose or gain stability and that Hopf bifurcations occur.
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria
2016-04-01
The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of stratigraphic seismic response at different periods by grid-solving the calibrated Emul-spectra model. In addition, the spectral topographic amplification is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of the numerical simulations related to isolated reliefs using GIS morphometric data. In this way, different sets of seismic response maps are developed, on which maps of design acceleration response spectra are also defined by means of an enveloping technique.
Photodynamical modeling of hierarchical stellar system KOI-126
NASA Astrophysics Data System (ADS)
Earl, Nicholas Michael
The power and precision of the Kepler space telescope has provided the astrophysical field with a valuable insight into the dynamics of extra-solar systems. KOI-126 represents the first eclipsing hierarchical triple stellar system identified in the Kepler mission's photometry. The dynamics of the system are such that ascertaining the parameters of each body accurately (better than a few percent) is possible from the photometry alone. This allows determination of the characteristics while avoiding biases inherent in traditional studies of low-mass eclipsing systems. The parameter set for KOI-126 was originally reported on by Carter et al. and is uniquely composed of a low-mass binary, KOI-126 B and KOI-126 C. This pair orbits a third, more massive star KOI-126 A. The original analysis employed a full dynamical-photometric model, utilizing a Levenberg-Marquardt algorithm and least-squares minimization, to fit the short-cadence (i.e. successive 58.84 second cadence exposures) photometric data from the Kepler spacecraft captured over a period of 247 days. The updated catalog of short-cadence data now covers a span of 1,300 days. In light of the new data, and the valuable contribution accurately sampled fully-convective stars offer to theoretical stellar models, it is therefore relevant to refine the parameters of this system. Furthermore, with the ubiquity of multi-stellar systems, a well documented, portable, scalable computer modeling code for N-body systems is introduced. Thus, a new analysis is done on KOI-126 using this parallelized dynamical-photometric modeling package written in Python, based on Carter et al.'s original code, titled Pynamic. Pynamic allows the use of several fitting algorithms, but in this analysis utilizes the affine-invariant Markov chain Monte Carlo ensemble.
NASA Astrophysics Data System (ADS)
Singh, Upendra K.; Tiwari, R. K.; Singh, S. B.
2013-03-01
This paper presents the effects of several parameters on the artificial neural network (ANN) inversion of vertical electrical sounding (VES) data. The sensitivity of ANN parameters was examined through the performance of adaptive backpropagation (ABP) and Levenberg-Marquardt algorithms (LMA), to test robustness to noisy synthetic as well as field geophysical data and the resolving capability of these methods for predicting subsurface resistivity layers. We trained, tested and validated the ANN using synthetic VES data as input to the networks and the layer parameters of the models as network output. ANN learning parameters were varied and the corresponding observations recorded. The sensitivity analysis on synthetic data and a real model demonstrates that ANN algorithms applied to VES data inversion should be assessed not only in terms of accuracy but also in terms of computational effort. The analysis also suggests that the ANN model and its various controlling parameters are largely data dependent, and hence no unique architecture can be designed for VES data analysis. ANN-based methods were also applied to actual VES field data obtained from the tectonically vital geothermal areas of Jammu and Kashmir, India. The analysis suggests that both ABP and LMA are suitable methods for 1-D VES modeling, but the LMA method provides a greater degree of robustness than ABP in 2-D VES modeling. The inversion results correlate well with the known lithology and also reveal an additional significant feature, a reconsolidated breccia of about 7.0 m thickness beneath the overburden in some cases, as at sounding point RDC-5. We may therefore conclude that ANN-based methods are significantly faster and more efficient for detecting complex layered resistivity structures with a relatively greater degree of precision and resolution.
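Levenberg-Marquardt training of a network, as used in the study above, treats the flattened weights as the unknowns of a nonlinear least-squares problem. The sketch below does this for a tiny 1-8-1 tanh network on a synthetic smooth target; the data, network size, and restart strategy are illustrative assumptions, far smaller than any real VES inversion.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic 1-D regression target standing in for a layer-parameter mapping.
X = np.linspace(-1.0, 1.0, 40)[:, None]
y = np.sin(np.pi * X[:, 0])

H = 8  # hidden units

def forward(w, X):
    # Unpack the flattened weight vector into a 1-H-1 tanh network.
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1)
    b2 = w[3 * H]
    return (np.tanh(X @ W1 + b1) @ W2)[:, 0] + b2

def residuals(w):
    return forward(w, X) - y

# A few random restarts guard against bad local minima.
best = None
for seed in range(3):
    w0 = 0.5 * np.random.default_rng(seed).standard_normal(3 * H + 1)
    fit = least_squares(residuals, w0, method="lm")
    if best is None or fit.cost < best.cost:
        best = fit

rmse = np.sqrt(np.mean(best.fun ** 2))
```

Because LM needs the full Jacobian over all weights, it scales poorly with network size, which is the computational-effort caveat the abstract raises.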
Development of LM10-MIRA LOX/LNG expander cycle demonstrator engine
NASA Astrophysics Data System (ADS)
Rudnykh, Mikhail; Carapellese, Stefano; Liuzzi, Daniele; Arione, Luigi; Caggiano, Giuseppe; Bellomi, Paolo; D'Aversa, Emanuela; Pellegrini, Rocco; Lobov, S. D.; Gurtovoy, A. A.; Rachuk, V. S.
2016-09-01
This article presents the results of joint work by Konstruktorskoe Buro Khimavtomatiki (KBKhA, Russia) and the AVIO Company (Italy) on the creation of the LM10-MIRA liquid-propellant rocket demonstrator engine for the third stage of the upgraded "Vega" launcher. Scientific and research activities conducted by KBKhA and AVIO in 2007-2014 in the frame of the LYRA Program, funded by the Italian Space Agency, with ELV as prime contractor and under a dedicated ASI-Roscosmos inter-agency agreement, were aimed at the development and testing of a 7.5 t thrust expander cycle demonstrator engine propelled by oxygen and liquid natural gas (further referred to as LNG).
Registration using natural features for augmented reality systems.
Yuan, M L; Ong, S K; Nee, A Y C
2006-01-01
Registration is one of the most difficult problems in augmented reality (AR) systems. In this paper, a simple registration method using natural features based on the projective reconstruction technique is proposed. This method consists of two steps: embedding and rendering. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In rendering, the Kanade-Lucas-Tomasi (KLT) feature tracker is used to track the natural feature correspondences in the live video. The natural features that have been tracked are used to estimate the corresponding projective matrix in the image sequence. Next, the projective reconstruction technique is used to transfer the four specified points to compute the registration matrix for augmentation. This paper also proposes a robust method for estimating the projective matrix, in which the tracked natural features are normalized (translation and scaling) and used as the input data. The estimated projective matrix is used as an initial estimate for a nonlinear optimization method that minimizes the actual residual errors based on the Levenberg-Marquardt (LM) minimization method, thus making the results more robust and stable. The proposed registration method has three major advantages: 1) It is simple, as no predefined fiducials or markers are used for registration in either indoor or outdoor AR applications. 2) It is robust, because it remains effective as long as at least six natural features are tracked during the entire augmentation, and the existence of the corresponding projective matrices in the live video is guaranteed. Meanwhile, the robust method of estimating the projective matrix can obtain stable results even when there are some outliers during the tracking process. 3) Virtual objects can still be superimposed on the specified areas, even if some parts of the areas are occluded during the entire process. Some indoor and outdoor experiments have
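The LM refinement step described above, minimizing reprojection error from an initial projective estimate, can be sketched on a planar homography. The matrix, points, and noise below are synthetic assumptions; the lower-right entry is fixed at 1 to remove the scale ambiguity, leaving 8 free parameters.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Hypothetical ground-truth homography and tracked feature points.
H_true = np.array([[1.0, 0.02, 5.0],
                   [-0.01, 1.1, -3.0],
                   [1e-4, 2e-4, 1.0]])
pts = rng.uniform(0.0, 100.0, size=(30, 2))

def project(h8, pts):
    # Rebuild the 3x3 matrix with its last entry pinned to 1, then map points.
    Hm = np.append(h8, 1.0).reshape(3, 3)
    ph = np.c_[pts, np.ones(len(pts))] @ Hm.T
    return ph[:, :2] / ph[:, 2:3]

# Observed correspondences: projected points plus 0.1 px tracking noise.
obs = project(H_true.ravel()[:8], pts) + 0.1 * rng.standard_normal((30, 2))

def residuals(h8):
    # Reprojection error, the quantity the LM refinement minimizes.
    return (project(h8, pts) - obs).ravel()

h0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # identity seed
fit = least_squares(residuals, h0, method="lm")
rms = np.sqrt(np.mean(fit.fun ** 2))
```

In the paper's pipeline the seed would come from the linear (normalized) estimate rather than the identity used here.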
Artificial intelligence techniques for embryo and oocyte classification.
Manna, Claudio; Nanni, Loris; Lumini, Alessandra; Pappalardo, Sebastiana
2013-01-01
One of the most relevant aspects in assisted reproduction technology is the possibility of characterizing and identifying the most viable oocytes or embryos. In most cases, embryologists select them by visual examination and their evaluation is totally subjective. Recently, due to the rapid growth in the capacity to extract texture descriptors from a given image, a growing interest has been shown in the use of artificial intelligence methods for embryo or oocyte scoring/selection in IVF programmes. This work concentrates the efforts on the possible prediction of the quality of embryos and oocytes in order to improve the performance of assisted reproduction technology, starting from their images. The artificial intelligence system proposed in this work is based on a set of Levenberg-Marquardt neural networks trained using textural descriptors (the local binary patterns). The proposed system was tested on two data sets of 269 oocytes and 269 corresponding embryos from 104 women and compared with other machine learning methods already proposed in the past for similar classification problems. Although the results are only preliminary, they show an interesting classification performance. This technique may be of particular interest in those countries where legislation restricts embryo selection.
Evaluation of Pre- and Post- Redevelopment Groundwater Chemical Analyses from LM Monitoring Wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, Susan; Dayvault, Jalena
This report documents the efforts and analyses conducted for the Applied Studies and Technology (AS&T) Ancillary Work Plan (AWP) project titled Evaluation of Pre- and Post- Redevelopment Groundwater Sample Laboratory Analyses from Selected LM Groundwater Monitoring Wells. This effort entailed compiling an inventory of nearly 500 previous well redevelopment events at 16 U.S. Department of Energy Office of Legacy Management (LM) sites, searching the literature for impacts of well redevelopment on groundwater sample quality, and (the focus of this report) evaluating the impacts of well redevelopment on field measurements and sample analytical results. Study Catalyst: Monitoring well redevelopment, the surging or high-volume pumping of a well to loosen and remove accumulated sediment and biological build-up from a well, is considered an element of monitoring well maintenance that is implemented periodically during the lifetime of the well to mitigate its gradual deterioration. Well redevelopment has been conducted fairly routinely at a few LM sites in the western United States (e.g., the Grand Junction office site and the Gunnison processing site in Colorado), but at most other sites in this region it is not a routine practice. Also, until recently (2014–2015), there had been no specific criteria for implementing well redevelopment, and documentation of redevelopment events has been inconsistent. A catalyst for this evaluation was the self-identification of these inconsistencies by the Legacy Management Support contractor. As a result, in early 2015 Environmental Monitoring Operations (EMO) staff began collecting and documenting additional field measurements during well redevelopment events. In late 2015, AS&T staff undertook an independent internal evaluation of EMO's well redevelopment records and corresponding pre- and post-well-redevelopment groundwater analytical results. Study Findings: Although literature discussions parallel the prevailing industry
Modelling the fate of pesticides in paddy rice-fish pond farming system in Northern Vietnam
NASA Astrophysics Data System (ADS)
Lamers, M.; Nguyen, N.; Streck, T.
2012-04-01
During the last decade, rice production in Vietnam has increased tremendously due to the introduction of new high-yield, short-duration rice varieties and the increased application of pesticides. Since pesticides are toxic by design, there is a natural concern about the possible impacts of their presence in the environment on human health and environmental quality. In North Vietnam, lowland and upland rice fields have been identified as a major non-point source of agrochemical pollution of surface water and groundwater, which are often used directly for domestic purposes. Field measurements, however, are time-consuming, costly and logistically demanding. Hence, quantification, forecasting and risk assessment studies are hampered by a limited amount of field data. One potential way to cope with this shortcoming is the use of process-based models. In the present study we developed a model for simulating short-term pesticide dynamics in combined paddy rice field - fish pond farming systems under the specific environmental conditions of south-east Asia. Basic approaches and algorithms to describe the key underlying biogeochemical processes were mainly adopted from the literature to ensure that the model reflects the current standard of scientific knowledge and commonly accepted theoretical background. The model was calibrated by means of the Gauss-Marquardt-Levenberg algorithm and validated against pesticide concentrations (dimethoate and fenitrothion) measured during the spring and summer rice crop seasons of 2008, respectively, in a paddy field - fish pond system typical of northern Vietnam. First simulation results indicate that our model is capable of simulating the fate of pesticides in such paddy - fish pond farming systems. The model efficiency for the period of calibration, for example, was 0.97 and 0.95 for dimethoate and fenitrothion, respectively. For the period of validation, however, the modeling efficiency decreased slightly to 0.96 and 0.81 for dimethoate and fenitrothion
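The "model efficiency" figures quoted above are conventionally the Nash-Sutcliffe efficiency. A minimal sketch of that metric follows; the concentration values are invented for illustration, not the study's data.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0.0 means no better than predicting the mean."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical observed vs. simulated concentrations.
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
eff = nash_sutcliffe(obs, sim)
```

Values near 1, such as the 0.97 and 0.95 reported for calibration, indicate the simulated series tracks the observations far better than the observed mean would.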
Sun, Chang; Carey, Anne-Marie; Gao, Bing-Rong; Wraight, Colin A; Woodbury, Neal W; Lin, Su
2016-06-23
It has become increasingly clear that dynamics plays a major role in the function of many protein systems. One system that has proven particularly facile for studying the effects of dynamics on protein-mediated chemistry is the bacterial photosynthetic reaction center from Rhodobacter sphaeroides. Previous experimental and computational analyses have suggested that the dynamics of the protein matrix surrounding the primary quinone acceptor, QA, may be particularly important in electron transfer involving this cofactor. One can substantially increase the flexibility of this region by removing one of the reaction center subunits, the H-subunit. Even with this large change in structure, photoinduced electron transfer to the quinone still takes place. To evaluate the effect of H-subunit removal on electron transfer to QA, we have compared the kinetics of electron transfer and associated spectral evolution for the LM dimer with that of the intact reaction center complex on picosecond to millisecond time scales. The transient absorption spectra associated with all measured electron transfer reactions are similar, with the exception of a broadening in the QX transition and a blue-shift in the QY transition bands of the special pair of bacteriochlorophylls (P) in the LM dimer. The kinetics of the electron transfer reactions not involving quinones are unaffected. There is, however, a 4-fold decrease in the electron transfer rate from the reduced bacteriopheophytin to QA in the LM dimer compared to the intact reaction center and a similar decrease in the recombination rate of the resulting charge-separated state (P(+)QA(-)). These results are consistent with the concept that the removal of the H-subunit results in increased flexibility in the region around the quinone and an associated shift in the reorganization energy associated with charge separation and recombination.
NASA Astrophysics Data System (ADS)
Li, N.; Yue, X. Y.
2018-03-01
Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are most commonly used to model water uptake by plants. As the water uptake is difficult and labor intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume RDDF to be constant with depth and time or dependent on only depth for simplification. However, under field conditions, this function varies with type of soil and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of the Tikhonov regularization theory, adding additional constraint to the objective function. Then the formulated nonlinear optimization problem is numerically solved with an efficient algorithm on the basis of the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved based on the variational construction, which circumvents the computational complexity in calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method features optimization of RDDF without any prior form, which is applicable to a more general root water uptake model. Numerical examples are performed to illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
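The Tikhonov-regularized inversion described above can be illustrated on a generic ill-posed linear problem. The smoothing kernel, the Gaussian stand-in for the RDDF, the noise level, and the regularization weight below are all assumptions for demonstration; the paper's variational finite-element formulation is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-conditioned forward operator: a discretized Gaussian smoothing kernel.
n = 50
x_grid = np.linspace(0.0, 1.0, n)
A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / 0.01)
A += 1e-9 * np.eye(n)  # tiny jitter keeps the naive solve numerically defined

f_true = np.exp(-((x_grid - 0.5) ** 2) / 0.02)  # smooth stand-in "RDDF"
b = A @ f_true + 1e-3 * rng.standard_normal(n)  # noisy observations

def tikhonov(A, b, alpha):
    # Minimize ||A f - b||^2 + alpha ||f||^2 (zeroth-order Tikhonov).
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

f_naive = np.linalg.solve(A, b)       # unregularized: noise blows up
f_reg = tikhonov(A, b, alpha=1e-3)

err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
```

The unregularized solve amplifies the measurement noise by orders of magnitude, while the extra penalty term stabilizes the reconstruction, which is precisely the role of the additional constraint in the objective function above.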
Xia, J.; Xu, Y.; Miller, R.D.; Chen, C.
2006-01-01
A Gibson half-space model (a non-layered Earth model) has the shear modulus varying linearly with depth in an inhomogeneous elastic half-space. In a half-space of sedimentary granular soil under a geostatic state of initial stress, the density and the Poisson's ratio do not vary considerably with depth. In such an Earth body, the dynamic shear modulus is the parameter that mainly affects the dispersion of propagating waves. We have estimated shear-wave velocities in the compressible Gibson half-space by inverting Rayleigh-wave phase velocities. An analytical dispersion law of Rayleigh-type waves in a compressible Gibson half-space is given in an algebraic form, which makes our inversion process extremely simple and fast. The convergence of the weighted damping solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Calculation efficiency is achieved by reconstructing a weighted damping solution using singular value decomposition techniques. The main advantage of this algorithm is that only three parameters define the compressible Gibson half-space model. Theoretically, to determine the model by the inversion, only three Rayleigh-wave phase velocities at different frequencies are required. This is useful in practice where Rayleigh-wave energy is only developed in a limited frequency range or at certain frequencies, as with data acquired at man-made structures such as dams and levees. Two real examples are presented and verified by borehole S-wave velocity measurements. The results of these real examples are also compared with the results of the layered-Earth model. © Springer 2006.
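A damped least-squares step built from an SVD, the combination the abstract describes, can be sketched generically. The Jacobian and residual below are toy numbers chosen to make the effect of the damping factor visible; they are not the dispersion-law Jacobian of the paper.

```python
import numpy as np

def damped_step(J, r, damping):
    """One Levenberg-Marquardt step dx solving (J^T J + damping*I) dx = -J^T r,
    built from the SVD of J so the damping filters small singular values."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    # Filter factors s / (s^2 + damping) replace the 1/s of a plain inverse.
    return -Vt.T @ ((s / (s ** 2 + damping)) * (U.T @ r))

# Tiny illustration: an ill-conditioned Jacobian and a noisy residual.
J = np.array([[1.0, 0.0],
              [0.0, 1e-6]])
r = np.array([1.0, 1e-3])

plain = damped_step(J, r, 0.0)      # behaves like the pseudo-inverse step
damped = damped_step(J, r, 1e-4)    # damping suppresses the wild component
```

With zero damping the tiny singular value turns the 1e-3 residual component into a step of magnitude 1000; a modest damping factor shrinks that component to near zero while leaving the well-determined direction almost untouched, which is why the damped solution converges stably.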
Image guided constitutive modeling of the silicone brain phantom
NASA Astrophysics Data System (ADS)
Puzrin, Alexander; Skrinjar, Oskar; Ozan, Cem; Kim, Sihyun; Mukundan, Srinivasan
2005-04-01
The goal of this work is to develop reliable constitutive models of the mechanical behavior of in-vivo human brain tissue for applications in neurosurgery. We propose to define the mechanical properties of the brain tissue in-vivo by taking global MR or CT images of the brain's response to ventriculostomy, the relief of elevated intracranial pressure. 3D image analysis translates these images into displacement fields, which, by using inverse analysis, allow constitutive models of the brain tissue to be developed. We term this approach Image Guided Constitutive Modeling (IGCM). The present paper demonstrates the performance of IGCM in a controlled environment: on silicone brain phantoms closely simulating the in-vivo brain geometry, mechanical properties and boundary conditions. The phantom of the left hemisphere of a human brain was cast using silicone gel. An inflatable rubber membrane was placed inside the phantom to model the lateral ventricle. The experiments were carried out in a specially designed setup in a CT scanner with submillimeter isotropic voxels. Non-communicating hydrocephalus and ventriculostomy were simulated by consecutively inflating and deflating the internal rubber membrane. The obtained images were analyzed to derive displacement fields, meshed, and incorporated into ABAQUS. The subsequent inverse finite element analysis (based on the Levenberg-Marquardt algorithm) allowed for optimization of the parameters of the Mooney-Rivlin non-linear elastic model for the phantom material. The calculated mechanical properties were consistent with those obtained from the element tests, providing justification for the future application of the IGCM to in-vivo brain tissue.
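A much-reduced version of the Levenberg-Marquardt material-parameter fit above can be shown on the Mooney-Rivlin uniaxial stress relation. The material constants, stretch range, and noise are invented for illustration, and the real study fits against full finite-element displacement fields rather than a closed-form stress curve.

```python
import numpy as np
from scipy.optimize import least_squares

# Mooney-Rivlin uniaxial nominal stress:
#   sigma = 2 (lam - lam^-2) (C1 + C2 / lam)
C1_true, C2_true = 0.8, 0.2           # assumed material constants
lam = np.linspace(1.05, 2.0, 20)      # stretch ratios of a hypothetical test

def model(params, lam):
    C1, C2 = params
    return 2.0 * (lam - lam ** -2) * (C1 + C2 / lam)

rng = np.random.default_rng(5)
sigma = model((C1_true, C2_true), lam) + 0.01 * rng.standard_normal(lam.size)

fit = least_squares(lambda p: model(p, lam) - sigma, x0=[1.0, 1.0],
                    method="lm")
C1_fit, C2_fit = fit.x
```

Since the stress is linear in C1 and C2, this particular fit has a unique optimum; the value of LM in the paper's setting is handling the fully nonlinear FE forward model.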
Diagnostic analysis of liver B ultrasonic texture features based on LM neural network
NASA Astrophysics Data System (ADS)
Chi, Qingyun; Hua, Hu; Liu, Menglin; Jiang, Xiuying
2017-03-01
In this study, B ultrasound images of 124 patients with benign and malignant lesions were randomly selected as the study objects. The B ultrasound images of the liver were enhanced and de-noised. Gray-level co-occurrence matrices reflecting the information at each angle were constructed, 22 texture features were extracted and reduced by Principal Component Analysis, and the results were combined with an LM neural network for diagnosis and classification. Experimental results show that this method is a rapid and effective diagnostic method for liver imaging, which provides a quantitative basis for clinical diagnosis of liver diseases.
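As a sketch of the feature-extraction step named above, a gray-level co-occurrence matrix and one classic Haralick feature (contrast) can be computed in plain NumPy; the tiny image, the single offset, and the number of gray levels below are placeholders, not the study's B ultrasound data.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for one (dx, dy) pixel offset."""
    h, w = img.shape
    P = np.zeros((levels, levels), dtype=np.int64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P

def contrast(P):
    """GLCM contrast, one of the classic Haralick texture features."""
    Pn = P / P.sum()
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * Pn).sum()

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
P = glcm(img, dx=1, dy=0, levels=3)  # horizontal neighbor pairs
c = contrast(P)
```

In practice matrices for several angles (0°, 45°, 90°, 135°) would be computed and a feature vector assembled before the PCA step.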
PSF reconstruction for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming
2018-02-01
Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
Tatai, Ildiko; Zaharie, Ioan
2012-11-01
In this paper a gyrator implementation using an LM13700 operational transconductance amplifier is analyzed. The antireciprocity of this gyrator, i.e., its defining property, was verified first in PSpice simulation and then experimentally. This type of gyrator can be used for controlling the energy transfer from one port to the other by modifying the bias currents of the operational transconductance amplifier.
Angeli, Vasiliki; Polymeris, George S; Sfampa, Ioanna K; Tsirliganis, Nestor C; Kitis, George
2017-04-01
Natural calcium fluoride has been commonly used as a thermoluminescence (TL) dosimeter due to its high luminescence intensity. This work attempts a correlation between specific TL glow curves after bleaching and components of linearly modulated optically stimulated luminescence (LM-OSL) as well as continuous wave OSL (CW-OSL). A component-resolved analysis was applied to both the integrated intensity of the RTL glow curves and all OSL decay curves, using a Computerized Glow-Curve De-convolution (CGCD) procedure. All CW-OSL and LM-OSL components are correlated to the decay components of the integrated RTL signal, apart from two RTL components which cannot be directly correlated with either an LM-OSL or a CW-OSL component. The unique, stringent criterion for this correlation is the value of the decay constant λ of each bleaching component. There is only one bleaching component present in all three luminescence entities studied here, indicating that each TL trap yields at least three different bleaching components; different TL traps can yield bleaching components with similar values. According to the data of the present work, each RTL bleaching component receives electrons from at least two peaks. The results of the present study strongly suggest that the traps that contribute to TL and OSL are the same. Copyright © 2017 Elsevier Ltd. All rights reserved.
Posé, Sara; Marcus, Susan E; Paul Knox, J
2018-04-24
Antibody-based approaches have been used to study cell wall architecture and modifications during the ripening process of two important fleshy fruit crops: tomato and strawberry. Cell wall polymers in both unripe and ripe fruits have been sequentially solubilized and fractions analysed with sets of monoclonal antibodies focusing on the pectic polysaccharides. We demonstrate the specific detection of the LM26 branched galactan epitope, associated with rhamnogalacturonan-I, in cell walls of ripe strawberry fruit. Analytical approaches confirm that the LM26 epitope is linked to sets of rhamnogalacturonan-I and homogalacturonan molecules. The cellulase-degradation of cellulose-rich residues that releases cell wall polymers intimately linked with cellulose microfibrils has been used to explore aspects of branched galactan occurrence and galactan metabolism. In situ analyses of ripe strawberry fruits indicate that the LM26 epitope is present in all primary cell walls and also particularly abundant in vascular tissues. The significance of the occurrence of branched galactan structures in the side chains of rhamnogalacturonan-I pectins in the context of ripening strawberry fruit is discussed. This article is protected by copyright. All rights reserved.
LM2-Mercury, a mercury mass balance model, was developed to simulate and evaluate the transport, fate, and biogeochemical transformations of mercury in Lake Michigan. The model simulates total suspended solids (TSS), dissolved organic carbon (DOC), and total, elemental, divalent, ...
The Lake Michigan Mass Balance Project (LMMBP) was initiated to support the development of a Lake Wide Management Plan (LaMP) for Lake Michigan. As one of the models in the LMMBP modeling framework, the Level 2 Lake Michigan contaminant transport and fate (LM2) model has been dev...
Soil Moisture Estimate under Forest using a Semi-empirical Model at P-Band
NASA Astrophysics Data System (ADS)
Truong-Loi, M.; Saatchi, S.; Jaruwatanadilok, S.
2013-12-01
coefficients: σHH, σVV and σHV. The inversion process, which is not an ill-posed problem, uses the non-linear optimization method of Levenberg-Marquardt and estimates the three model parameters: vegetation aboveground biomass, average soil moisture and surface roughness. The analytical formulation of the model will first be recalled and sensitivity analyses will be shown. Then some results obtained with real SAR data will be presented and compared to ground estimates.
NASA Astrophysics Data System (ADS)
Maheshwera Reddy Paturi, Uma; Devarasetti, Harish; Abimbola Fadare, David; Reddy Narala, Suresh Kumar
2018-04-01
In the present paper, the artificial neural network (ANN) and response surface methodology (RSM) are used in modeling of surface roughness in WS2 (tungsten disulphide) solid lubricant assisted minimal quantity lubrication (MQL) machining. The experimental data for real-time MQL turning of Inconel 718 considered in this paper were taken from the literature [1]. In ANN modeling, performance parameters such as mean square error (MSE), mean absolute percentage error (MAPE) and average error in prediction (AEP) for the experimental data were determined based on the Levenberg–Marquardt (LM) feed-forward back-propagation training algorithm with tansig as the transfer function. The MATLAB toolbox was used in training and testing of the neural network model. A neural network model with three input neurons, one hidden layer with five neurons and one output neuron (3-5-1 architecture) was found to be the most reliable and optimal. The coefficients of determination (R2) for the ANN and RSM models were 0.998 and 0.982, respectively. The surface roughness predictions from the ANN and RSM models were compared with experimentally measured values and found to be in good agreement. However, the predictive capability of the ANN model is relatively high compared with that of the RSM model.
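The performance parameters named above are straightforward to compute; a sketch in NumPy, where AEP is taken as the mean absolute prediction error, an assumption since the abstract does not define it.

```python
import numpy as np

def mse(y, yhat):
    """Mean square error."""
    return np.mean((y - yhat) ** 2)

def mape(y, yhat):
    """Mean absolute percentage error (targets assumed nonzero)."""
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def aep(y, yhat):
    """Average error in prediction; taken here as the mean absolute
    error, an assumption since the abstract does not define it."""
    return np.mean(np.abs(y - yhat))

# Hypothetical measured vs. predicted surface roughness values
y = np.array([1.0, 2.0, 4.0])
yhat = np.array([1.1, 1.8, 4.0])
```

These three scalars summarize the same residuals in different units (squared error, relative percentage, absolute error), which is why papers often report all of them together.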
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and the dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied to DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases was 89.6% ± 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
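The tri-exponential fitting step can be sketched with SciPy, whose `curve_fit` uses the MINPACK Levenberg-Marquardt implementation for unbounded problems; the sum-of-decaying-exponentials parameterization and the synthetic data below are assumptions, not the paper's AIF measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, m1, a2, m2, a3, m3):
    """Generic tri-exponential model (the paper's exact
    parameterization is not given in the abstract)."""
    return a1 * np.exp(-m1 * t) + a2 * np.exp(-m2 * t) + a3 * np.exp(-m3 * t)

# Hypothetical noise-free concentration-time curve
t = np.linspace(0, 5, 200)
p_true = (5.0, 4.0, 2.0, 1.0, 0.5, 0.1)
y = tri_exp(t, *p_true)

# method='lm' selects the Levenberg-Marquardt (MINPACK) solver
p_fit, _ = curve_fit(tri_exp, t, y,
                     p0=(4.0, 3.0, 1.5, 0.8, 0.4, 0.08),
                     method='lm', maxfev=20000)
```

Multi-exponential fits are notoriously ill-conditioned, so a reasonable starting guess (`p0`) matters far more here than for single-exponential models.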
NASA Astrophysics Data System (ADS)
Sharudin, R. W.; AbdulBari Ali, S.; Zulkarnain, M.; Shukri, M. A.
2018-05-01
This study reports on the integration of Artificial Neural Networks (ANNs) with experimental data in predicting the solubility of carbon dioxide (CO2) blowing agent in SEBS by generating the highest possible value of the regression coefficient (R2). Foaming of a thermoplastic elastomer with CO2 is highly affected by the CO2 solubility. The ability of the ANN to predict interpolated CO2 solubility data was investigated by comparing training results across different network training methods. The final ANN prediction of CO2 solubility (the generated output) was corroborated by the experimental results. The results of the different training methods showed that Gradient Descent with Momentum & Adaptive LR (traingdx) required a longer training time and more accurate input to produce better output, with a final regression value of 0.88. The Levenberg-Marquardt (trainlm) technique, in contrast, produced better output in a shorter training time, with a final regression value of 0.91.
NASA Astrophysics Data System (ADS)
Zhou, Kaixing; Sun, Xiucong; Huang, Hai; Wang, Xinsheng; Ren, Guangwei
2017-10-01
The space-based Automatic Dependent Surveillance - Broadcast (ADS-B) is a new technology for air traffic management. The satellite equipped with a spaceborne ADS-B system receives the broadcast signals from aircraft and transfers the messages to ground stations, so as to extend the coverage area of terrestrial-based ADS-B. In this work, a novel satellite single-axis attitude determination solution based on the ADS-B receiving system is proposed. This solution utilizes the signal-to-noise ratio (SNR) measurement of the broadcast signals from aircraft to determine the boresight orientation of the ADS-B receiving antenna fixed on the satellite. The basic principle of this solution is described. A feasibility study of this new attitude determination solution is carried out, including the link budget and access analysis. On this basis, nonlinear least squares estimation based on the Levenberg-Marquardt method is applied to estimate the single-axis orientation. A full digital simulation has been carried out to verify the effectiveness and performance of this solution. Finally, the corresponding results are processed and presented in detail.
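The single-axis estimation step can be sketched as a nonlinear least squares problem solved by Levenberg-Marquardt; the toy antenna pattern (SNR proportional to the cosine of the off-boresight angle) and the synthetic geometry are assumptions, not the paper's link-budget model.

```python
import numpy as np
from scipy.optimize import least_squares

def boresight(az, el):
    """Unit boresight vector from azimuth/elevation angles (radians)."""
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def residuals(p, dirs, snr):
    # Toy antenna pattern: SNR proportional to the cosine of the
    # off-boresight angle (an assumption; the real model is richer).
    return dirs @ boresight(*p) - snr

# Hypothetical unit directions from the satellite to observed aircraft
rng = np.random.default_rng(1)
dirs = rng.normal(size=(30, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
p_true = np.array([0.3, 0.2])
snr = dirs @ boresight(*p_true)

# method='lm' selects the Levenberg-Marquardt solver
sol = least_squares(residuals, x0=[0.0, 0.0], args=(dirs, snr), method='lm')
```

Only two angles are estimated because the boresight defines a single axis; the rotation about that axis is unobservable from this measurement model.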
NASA Astrophysics Data System (ADS)
Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew
2014-03-01
A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, followed by a time-resolved, liquid phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of contaminant concentration profile in the material, which is necessary to assess exposure hazards.
Fitting and Modeling in the ASC Data Analysis Environment
NASA Astrophysics Data System (ADS)
Doe, S.; Siemiginowska, A.; Joye, W.; McDowell, J.
As part of the AXAF Science Center (ASC) Data Analysis Environment, we will provide to the astronomical community a Fitting Application. We present a design of the application in this paper. Our design goal is to give the user the flexibility to use a variety of optimization techniques (Levenberg-Marquardt, maximum entropy, Monte Carlo, Powell, downhill simplex, CERN-Minuit, and simulated annealing) and fit statistics (chi-squared, Cash, variance, and maximum likelihood); our modular design allows the user easily to add their own optimization techniques and/or fit statistics. We also present a comparison of the optimization techniques to be provided by the Application. The high spatial and spectral resolutions that will be obtained with AXAF instruments require a sophisticated data modeling capability. We will provide not only a suite of astronomical spatial and spectral source models, but also the capability of combining these models into source models of up to four data dimensions (i.e., into source functions f(E,x,y,t)). We will also provide tools to create instrument response models appropriate for each observation.
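Two of the fit statistics listed above have compact definitions; a sketch (the ASC application itself is not written in Python), with the Cash statistic given up to a data-only constant.

```python
import numpy as np

def chi2(data, model, sigma):
    """Chi-squared statistic for Gaussian errors."""
    return np.sum(((data - model) / sigma) ** 2)

def cash(data, model):
    """Cash (1979) Poisson likelihood statistic, up to a constant
    that depends only on the data: C = 2 * sum(m - d * ln m)."""
    return 2.0 * np.sum(model - data * np.log(model))
```

Chi-squared is appropriate for high-count (Gaussian) data; the Cash statistic stays well behaved for the low-count Poisson spectra typical of X-ray observations.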
CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation
NASA Astrophysics Data System (ADS)
Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.
2018-05-01
It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.
Efficient and Robust Optimization for Building Energy Simulation.
Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda
2016-06-15
Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds that considerable computational benefits result from replacing Powell's Hybrid method in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to Powell's Hybrid method presently used in HVACSIM+.
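Both solvers being compared are available in SciPy, which wraps the same MINPACK routines (`hybrd` for Powell's hybrid method via `root(method='hybr')`, and Levenberg-Marquardt via `least_squares(method='lm')`); the two-equation system below is a hypothetical stand-in for a building energy model, which in HVACSIM+ would be far larger.

```python
import numpy as np
from scipy.optimize import root, least_squares

def F(x):
    """Small nonlinear algebraic system (hypothetical stand-in)."""
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

hybr = root(F, x0=[1.0, 1.0], method='hybr')        # Powell's hybrid method
lm = least_squares(F, x0=[1.0, 1.0], method='lm')   # Levenberg-Marquardt
```

For a square system a root of F is also a zero-residual minimum of ||F||², so both solvers should agree when they converge; the paper's point is that their robustness differs on stiff, poorly scaled building models.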
Optical Activity of Benzil Crystal
NASA Astrophysics Data System (ADS)
Říha, Jan; Vyšín, Ivo
2003-09-01
Optical activity of benzil, as an example of matter optically active only in the crystalline state and not in solution, is studied for wavelengths ranging from 0.320 μm to 0.585 μm. Previously measured experimental data are approximated by a theoretical set of formulas derived using the three coupled oscillators model. The earlier published formula consisting of six terms differed from the experimental data particularly in the wavelength region (0.380-0.510) μm. This formula is replaced by a twelve-term formula computed by our specially developed computer program for the interpretation of experimental optical activity data, based on the Marquardt-Levenberg method of least-squares minimization. The possibility of a molecular contribution to the resulting optical activity of benzil is mentioned. The use of Kramers-Kronig transforms for the determination of the circular dichroism curve from the optical rotatory dispersion result is shown. The theoretically computed circular dichroism is compared with the available experimental data.
A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.
Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua
2017-01-01
The number of Alzheimer's disease patients is increasing rapidly every year, and scholars tend to use computer vision methods to develop automatic diagnosis systems. In 2015, Gorji et al. proposed a novel method using pseudo Zernike moments. They tested four classifiers: a learning vector quantization neural network and pattern recognition neural networks trained by Levenberg-Marquardt, by resilient backpropagation, and by scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier: linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is used as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Therefore, it can be used to detect Alzheimer's disease. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Greenwald, Scott H.; Kuchenbecker, James A.; Rowlan, Jessica S.; Neitz, Jay; Neitz, Maureen
2017-01-01
Purpose: Human long (L) and middle (M) wavelength cone opsin genes are highly variable due to intermixing. Two L/M cone opsin interchange mutants, designated LIAVA and LVAVA, are associated with clinical diagnoses, including red-green color vision deficiency, blue cone monochromacy, cone degeneration, myopia, and Bornholm Eye Disease. Because the protein and splicing codes are carried by the same nucleotides, intermixing L and M genes can cause disease by affecting protein structure and splicing. Methods: Genetically engineered mice were created to allow investigation of the consequences of altered protein structure alone, and the effects on cone morphology were examined using immunohistochemistry. In humans and mice, cone function was evaluated using the electroretinogram (ERG) under L/M- or short (S) wavelength cone isolating conditions. Effects of LIAVA and LVAVA genes on splicing were evaluated using a minigene assay. Results: ERGs and histology in mice revealed protein toxicity for the LVAVA but not for the LIAVA opsin. Minigene assays showed that the dominant messenger RNA (mRNA) was aberrantly spliced for both variants; however, the LVAVA gene produced a small but significant amount of full-length mRNA and LVAVA subjects had correspondingly reduced ERG amplitudes. In contrast, the LIAVA subject had no L/M cone ERG. Conclusions: Dramatic differences in phenotype can result from seemingly minor differences in genotype through divergent effects on the dual amino acid and splicing codes. Translational Relevance: The mechanism by which individual mutations contribute to clinical phenotypes provides valuable information for diagnosis and prognosis of vision disorders associated with L/M interchange mutations, and it informs strategies for developing therapies. PMID:28516000
Lipka, J J; Waskell, L A
1989-01-01
Rabbit cytochrome P450 isozyme 2 requires cytochrome b5 to metabolize the volatile anesthetic methoxyflurane but not the substrate benzphetamine [E. Canova-Davis and L. Waskell (1984) J. Biol. Chem. 259, 2541-2546]. To determine whether the requirement for cytochrome b5 for methoxyflurane oxidation is mediated by an allosteric effect on cytochrome P450 LM2 or cytochrome P450 reductase, we have investigated whether this anesthetic can induce a role for cytochrome b5 in benzphetamine metabolism. Using rabbit liver microsomes and antibodies raised in guinea pigs against rabbit cytochrome b5, we found that methoxyflurane did not create a cytochrome b5 requirement for benzphetamine metabolism. Methoxyflurane also failed to induce a role for cytochrome b5 in benzphetamine metabolism in the purified, reconstituted mixed function oxidase system. Studies of the reaction kinetics established that in the absence of cytochrome b5, methoxyflurane and benzphetamine are competitive inhibitors, and that in the presence of cytochrome b5, benzphetamine and methoxyflurane are two alternate substrates in competition for a single site on the same enzyme. These results all indicate that the methoxyflurane-induced cytochrome b5 dependence of the mixed function oxidase cytochrome P450 LM2 system is a direct result of the interaction between methoxyflurane and the substrate binding site of cytochrome P450 LM2 and suggest the focus of future studies of this question.
Murata, Yasuhiko; Hashimoto, Takuma; Urushihara, Yusuke; Shiga, Soichiro; Takeda, Kazuya; Jingu, Keiichi; Hosoi, Yoshio
2018-01-22
Presence of unperfused regions containing cells under hypoxia and nutrient starvation contributes to radioresistance in solid human tumors. It is well known that hypoxia causes cellular radioresistance, but little is known about the effects of nutrient starvation on radiosensitivity. We have reported that nutrient starvation induced a decrease in mTORC1 activity and a decrease in radiosensitivity in an SV40-transformed human fibroblast cell line, LM217, and that nutrient starvation induced an increase in mTORC1 activity and an increase in radiosensitivity in human liver cancer cell lines, HepG2 and HuH6 (Murata et al., BBRC 2015). Knockdown of mTOR using small interfering RNA (siRNA) for mTOR suppressed radiosensitivity under nutrient starvation alone in HepG2 cells, which suggests that the mTORC1 pathway regulates radiosensitivity under nutrient starvation alone. In the present study, the effects of hypoxia and nutrient starvation on radiosensitivity were investigated using the same cell lines. LM217 and HepG2 cells were used to examine the effects of hypoxia and nutrient starvation on cellular radiosensitivity, the mTORC1 pathway including AMPK, ATM, and HIF-1α, which are known as regulators of mTORC1 activity, and glycogen storage, which is induced by HIF-1 and HIF-2 under hypoxia and promotes cell survival. Under hypoxia and nutrient starvation, AMPK activity and ATM expression were increased in LM217 cells and decreased in HepG2 cells compared with AMPK activity under nutrient starvation alone or ATM expression under hypoxia alone. Under hypoxia and nutrient starvation, radiosensitivity was decreased in LM217 cells and increased in HepG2 cells compared with radiosensitivity under hypoxia alone. Under hypoxia and nutrient starvation, knockdown of AMPK decreased ATM activity and increased radiation sensitivity in LM217 cells. In both cell lines, mTORC1 activity was decreased under hypoxia and nutrient starvation. Under hypoxia alone, knockdown of mTOR slightly increased ATM
Decewicz, Przemyslaw; Radlinska, Monika; Dziewit, Lukasz
2017-01-01
The genus Sinorhizobium/Ensifer mostly groups nitrogen-fixing bacteria that create root or stem nodules on leguminous plants and transform atmospheric nitrogen into ammonia, which improves the productivity of the plants. Although these biotechnologically-important bacteria are commonly found in various soil environments, little is known about their phages. In this study, the genome of Sinorhizobium sp. LM21 isolated from a heavy-metal-contaminated copper mine in Poland was investigated for the presence of prophages and DNA methyltransferase-encoding genes. In addition to the previously identified temperate phage, ΦLM21, and the phage-plasmid, pLM21S1, the analysis revealed the presence of three prophage regions. Moreover, four novel phage-encoded DNA methyltransferase (MTase) genes were identified and the enzymes were characterized. It was shown that two of the identified viral MTases methylated the same target sequence (GANTC) as cell cycle-regulated methyltransferase (CcrM) of the bacterial host strain, LM21. This discovery was recognized as an example of the evolutionary convergence between enzymes of sinorhizobial viruses and their host, which may play an important role in virus cycle. In the last part of the study, thorough comparative analyses of 31 sinorhizobial (pro)phages (including active sinorhizobial phages and novel putative prophages retrieved and manually re-annotated from Sinorhizobium spp. genomes) were performed. The networking analysis revealed the presence of highly conserved proteins (e.g., holins and endolysins) and a high diversity of viral integrases. The analysis also revealed a large number of viral DNA MTases, whose genes were frequently located within the predicted replication modules of analyzed prophages, which may suggest their important regulatory role. Summarizing, complex analysis of the phage protein similarity network enabled a new insight into overall sinorhizobial virome diversity. PMID:28672885
NASA Technical Reports Server (NTRS)
Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas
2012-01-01
Operational Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of cloud optical and microphysical properties (part of the archived products MOD06 and MYD06, for MODIS Terra and Aqua, respectively) are currently being reprocessed along with other MODIS Atmosphere Team products. The latest "Collection 6" processing stream, which is expected to begin production by summer 2012, includes updates to the previous cloud retrieval algorithm along with new capabilities. The 1 km retrievals, based on well-known solar reflectance techniques, include cloud optical thickness, effective particle radius, and water path, as well as thermodynamic phase derived from a combination of solar and infrared tests. Being both global and of high spatial resolution requires an algorithm that is computationally efficient and can perform over all surface types. Collection 6 additions and enhancements include: (i) absolute effective particle radius retrievals derived separately from the 1.6 and 3.7 µm bands (instead of differences relative to the standard 2.1 µm retrieval), (ii) comprehensive look-up tables for cloud reflectance and emissivity (no asymptotic theory) with a wind-speed interpolated Cox-Munk BRDF for ocean surfaces, (iii) retrievals for both liquid water and ice phases for each pixel, and a subsequent determination of the phase based, in part, on effective radius retrieval outcomes for the two phases, (iv) new ice cloud radiative models using roughened particles with a specified habit, (v) updated spatially-complete global spectral surface albedo maps derived from MODIS Collection 5, (vi) enhanced pixel-level uncertainty calculations incorporating additional radiative error sources including the MODIS L1B uncertainty index for assessing band and scene-dependent radiometric uncertainties, and (vii) use of a new 1 km cloud top pressure/temperature algorithm (also part of MOD06) for atmospheric corrections and low cloud non-unity emissivity temperature adjustments.
CARES/Life Ceramics Durability Evaluation Software Enhanced for Cyclic Fatigue
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.
1999-01-01
The CARES/Life computer program predicts the probability of a monolithic ceramic component's failure as a function of time in service. The program has many features and options for materials evaluation and component design. It couples commercial finite element programs--which resolve a component's temperature and stress distribution--to reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. The capability, flexibility, and uniqueness of CARES/Life have attracted many users representing a broad range of interests and has resulted in numerous awards for technological achievements and technology transfer. Recent work with CARES/Life was directed at enhancing the program's capabilities with regard to cyclic fatigue. Only in the last few years have ceramics been recognized to be susceptible to enhanced degradation from cyclic loading. To account for cyclic loads, researchers at the NASA Lewis Research Center developed a crack growth model that combines the Power Law (time-dependent) and the Walker Law (cycle-dependent) crack growth models. This combined model has the characteristics of Power Law behavior (decreased damage) at high R ratios (minimum load/maximum load) and of Walker law behavior (increased damage) at low R ratios. In addition, a parameter estimation methodology for constant-amplitude, steady-state cyclic fatigue experiments was developed using nonlinear least squares and a modified Levenberg-Marquardt algorithm. This methodology is used to give best estimates of parameter values from cyclic fatigue specimen rupture data (usually tensile or flexure bar specimens) for a relatively small number of specimens. Methodology to account for runout data (unfailed specimens over the duration of the experiment) was also included.
IceChrono v1: a probabilistic model to compute a common and optimal chronology for several ice cores
NASA Astrophysics Data System (ADS)
Parrenin, Frédéric
2015-04-01
Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores is essential to interpret the paleo records that they contain, but it is a complicated problem since it involves different dating methods. Here I present IceChrono v1, a new probabilistic model to combine different kinds of chronological information to obtain a common and optimized chronology for several ice cores, as well as its uncertainty. It is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the vertical thinning function. The chronological information used are: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and gas dated horizons, ice and gas dated depth intervals, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air), and stratigraphic links in between ice cores (ice-ice, air-air or mixed ice-air and air-ice links). The optimization problem is formulated as a least squares problem, that is, all probability densities are assumed Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono is similar in scope to the Datice model, but has differences from the mathematical, numerical and programming points of view. I apply IceChrono to an AICC2012-like experiment and find results similar to Datice within a few centuries, which confirms both the IceChrono and Datice codes. IceChrono v1 is freely available under the GPL v3 open source license.
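The numerical scheme described above (Levenberg-Marquardt with a numerically evaluated Jacobian) can be sketched compactly; the exponential-decay residual is a placeholder problem, not the IceChrono sedimentation model.

```python
import numpy as np

def num_jacobian(f, x, eps=1e-7):
    """Forward-difference Jacobian of residual function f at x."""
    f0 = f(x)
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - f0) / eps
    return J

def levenberg_marquardt(f, x, lam=1e-3, iters=100):
    """Minimal LM loop with a numerical Jacobian (a sketch of the
    scheme described above, not the IceChrono code itself)."""
    for _ in range(iters):
        r = f(x)
        J = num_jacobian(f, x)
        A = J.T @ J + lam * np.eye(x.size)
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(f(x + step) ** 2) < np.sum(r ** 2):
            x = x + step
            lam *= 0.5   # accept: move toward Gauss-Newton
        else:
            lam *= 10.0  # reject: increase damping
    return x

# Hypothetical residuals: fit y = a * exp(b * t) to exact data
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
resid = lambda p: p[0] * np.exp(p[1] * t) - y
p_est = levenberg_marquardt(resid, np.array([1.0, 0.0]))
```

The damping parameter interpolates between gradient descent (large lam) and Gauss-Newton (small lam), which is what makes the method robust far from the optimum yet fast near it.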
NASA Astrophysics Data System (ADS)
Kim, Seung Joong
The protein folding problem has been one of the most challenging subjects in biological physics due to its complexity. Energy landscape theory based on statistical mechanics provides a thermodynamic interpretation of the protein folding process. We have been working to answer fundamental questions about protein-protein and protein-water interactions, which are very important for describing the energy landscape surface of proteins correctly. First, we present a new method for computing protein-protein interaction potentials of solvated proteins directly from SAXS data. An ensemble of proteins was modeled by Metropolis Monte Carlo and Molecular Dynamics simulations, and the global X-ray scattering of the whole model ensemble was computed at each snapshot of the simulation. The interaction potential model was optimized and iterated by a Levenberg-Marquardt algorithm. Secondly, we report that terahertz spectroscopy directly probes hydration dynamics around proteins and determines the size of the dynamical hydration shell. We also present the sequence and pH dependence of the hydration shell and the effect of hydrophobicity. In addition, kinetic terahertz absorption (KITA) spectroscopy is introduced to study the refolding kinetics of ubiquitin and its mutants. KITA results are compared to small angle X-ray scattering, tryptophan fluorescence, and circular dichroism results. We propose that KITA monitors the rearrangement of hydrogen bonding during secondary structure formation. Finally, we present the development of the automated single molecule operating system (ASMOS) for a high throughput single molecule detector, which levitates a single protein molecule in a 10 μm diameter droplet by laser guidance. I have also performed supporting calculations and simulations with my own program codes.
Maghsoudi, M; Ghaedi, M; Zinali, A; Ghaedi, A M; Habibi, M H
2015-01-05
In this research, ZnO nanoparticles loaded on activated carbon (ZnO-NPs-AC) were synthesized by a simple, low-cost and nontoxic procedure. The characterization and identification were carried out by different techniques such as SEM and XRD analysis. A three-layer artificial neural network (ANN) model was applied for accurate prediction of the dye removal percentage from aqueous solution by ZnO-NPs-AC, based on 270 experimental data points. The network was trained using the experimental data obtained at optimum pH with different ZnO-NPs-AC amounts (0.005-0.015 g) and 5-40 mg/L of sunset yellow dye over contact times of 0.5-30 min. The ANN model was applied for prediction of the removal percentage of the present systems with the Levenberg-Marquardt algorithm (LMA), a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) in the hidden layer with 6 neurons. A minimum mean squared error (MSE) of 0.0008 and coefficient of determination (R²) of 0.998 were found for prediction and modeling of SY removal. The influence of parameters including adsorbent amount, initial dye concentration, pH and contact time on the sunset yellow (SY) removal percentage was investigated and optimal experimental conditions were ascertained. Optimal conditions were set as follows: pH, 2.0; 10 min contact time; an adsorbent dose of 0.015 g. Equilibrium data fitted well with the Langmuir model, with a maximum adsorption capacity of 142.85 mg/g for 0.005 g adsorbent. The adsorption of sunset yellow followed the pseudo-second-order rate equation. Copyright © 2014 Elsevier B.V. All rights reserved.
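The architecture described (a tansig hidden layer of 6 neurons feeding a purelin output) amounts to the following forward pass; the four input variables and the random weights below are illustrative placeholders, not the trained model from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 4 inputs: adsorbent dose, dye concentration, pH, contact time.
W1, b1 = rng.normal(size=(6, 4)), np.zeros(6)   # hidden layer, 6 neurons
W2, b2 = rng.normal(size=(1, 6)), np.zeros(1)   # single linear output

def forward(x):
    h = np.tanh(W1 @ x + b1)   # "tansig" transfer function (hyperbolic tangent)
    return W2 @ h + b2         # "purelin" transfer function (identity/linear)

# One illustrative input vector (dose g, conc mg/L, pH, time min).
y = forward(np.array([0.01, 20.0, 2.0, 10.0]))
```

Levenberg-Marquardt training treats the network weights as the parameters of a nonlinear least squares fit of this forward map to the measured removal percentages, which is why it is a popular choice for small networks like this one.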
Dhiman, Nitesh; Markandeya; Singh, Amrita; Verma, Neeraj K; Ajaria, Nidhi; Patnaik, Satyakam
2017-05-01
ZnO NPs were synthesized by a prudent green chemistry approach in the presence of polyacrylamide-grafted guar gum polymer (pAAm-g-GG) to ensure uniform morphology and functionality, and appraised for their ability to photocatalytically degrade Acridine Orange (AO) dye. These ZnO@pAAm-g-GG NPs were thoroughly characterized by various spectroscopic, XRD and electron microscopic techniques. The relative quantity of ZnO NPs in the polymeric matrix was estimated by spectro-analytical procedures; AAS and TGA analysis. The impact of process parameters viz. NP dose, contact time and AO dye concentration on the percentage photocatalytic degradation of AO dye was evaluated using the multivariate optimizing tools Response Surface Methodology (RSM) involving Box-Behnken Design (BBD) and Artificial Neural Network (ANN). Congruity of the BBD statistical model was implied by an R² value of 0.9786 and an F-value of 35.48. At the RSM-predicted optimal conditions, viz. a ZnO@pAAm-g-GG NP dose of 0.2 g/L, contact time of 210 min and AO dye concentration of 10 mg/L, a maximum of 98% dye degradation was obtained. ANOVA indicated the appropriateness of the model for dye degradation, owing to "Prob. > F" less than 0.05 for the variable parameters. We further employed a three-layer feed-forward ANN model for validating the BBD process parameters and the suitability of our chosen model. Levenberg-Marquardt (ANN1) and Gradient Descent with adaptive learning rate (ANN2) models were evaluated to identify the better method; the experimental values of AO dye degradation were closest to those predicted by the ANN2 model, with minimum error. Copyright © 2017 Elsevier Inc. All rights reserved.
Jerome, Neil P; Orton, Matthew R; d'Arcy, James A; Collins, David J; Koh, Dow-Mu; Leach, Martin O
2014-01-01
To evaluate the effect on diffusion-weighted image-derived parameters in the apparent diffusion coefficient (ADC) and intra-voxel incoherent motion (IVIM) models from choice of either free-breathing or navigator-controlled acquisition. Imaging was performed with consent from healthy volunteers (n = 10) on a 1.5T Siemens Avanto scanner. Parameter-matched free-breathing and navigator-controlled diffusion-weighted images were acquired, without averaging in the console, for a total scan time of ∼10 minutes. Regions of interest were drawn for renal cortex, renal pyramid, whole kidney, liver, spleen, and paraspinal muscle. An ADC diffusion model for these regions was fitted for b-values ≥ 250 s/mm², using a Levenberg-Marquardt algorithm, and an IVIM model was fitted for all images using a Bayesian method. ADC and IVIM parameters from the two acquisition regimes show no significant differences for the cohort; individual cases show occasional discrepancies, with outliers in parameter estimates arising more commonly from navigator-controlled scans. The navigator-controlled acquisitions showed, on average, a smaller range of movement for the kidneys (6.0 ± 1.4 vs. 10.0 ± 1.7 mm, P = 0.03), but also a smaller number of averages collected (3.9 ± 0.1 vs. 5.5 ± 0.2, P < 0.01) in the allocated time. Navigator triggering offers no advantage in fitted diffusion parameters, whereas free-breathing appears to offer greater confidence in fitted diffusion parameters, with fewer outliers, for matched acquisition periods. Copyright © 2013 Wiley Periodicals, Inc.
Campos-Neto, A; Webb, J R; Greeson, K; Coler, R N; Skeiky, Y A W; Reed, S G
2002-06-01
We have recently shown that a cocktail containing two leishmanial recombinant antigens (LmSTI1 and TSA) and interleukin-12 (IL-12) as an adjuvant induces solid protection in both a murine and a nonhuman primate model of cutaneous leishmaniasis. However, because IL-12 is difficult to prepare, is expensive, and does not have the stability required for a vaccine product, we have investigated the possibility of using DNA as an alternative means of inducing protective immunity. Here, we present evidence that the antigens TSA and LmSTI1 delivered in a plasmid DNA format either as single genes or in a tandem digene construct induce equally solid protection against Leishmania major infection in susceptible BALB/c mice. Immunization of mice with either TSA DNA or LmSTI1 DNA induced specific CD4(+)-T-cell responses of the Th1 phenotype without a requirement for specific adjuvant. CD8 responses, as measured by cytotoxic-T-lymphocyte activity, were generated after immunization with TSA DNA but not LmSTI1 DNA. Interestingly, vaccination of mice with TSA DNA consistently induced protection to a much greater extent than LmSTI1 DNA, thus supporting the notion that CD8 responses might be an important accessory arm of the immune response for acquired resistance against leishmaniasis. Moreover, the protection induced by DNA immunization was specific for infection with Leishmania, i.e., the immunization had no effect on the course of infection of the mice challenged with an unrelated intracellular pathogen such as Mycobacterium tuberculosis. Conversely, immunization of BALB/c mice with a plasmid DNA that is protective against challenge with M. tuberculosis had no effect on the course of infection of these mice with L. major. Together, these results indicate that the protection observed with the leishmanial DNA is mediated by acquired specific immune response rather than by the activation of nonspecific innate immune mechanisms. In addition, a plasmid DNA containing a fusion construct
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show the great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
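The image-quality merit parameters mentioned (bias, variance, MSE) can be computed per image as sketched below, assuming the reconstruction and the true phantom are arrays on the same grid; this is a generic illustration, not the VIP evaluation code.

```python
import numpy as np

def image_merit(recon, truth):
    """Bias, variance, and MSE of a reconstruction against the true phantom.
    The voxel-wise MSE decomposes as bias**2 + variance of the error."""
    err = recon - truth
    bias = err.mean()
    var = err.var()
    return bias, var, bias ** 2 + var

# Toy check: a flat phantom reconstructed with a constant offset of 2.
truth = np.zeros((8, 8))
bias, var, mse = image_merit(truth + 2.0, truth)
```

The bias-variance decomposition makes the trade-off between the algorithms explicit: iterative methods often trade a lower bias for a higher variance as the iteration count grows.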
Optimum Design of LLC Resonant Converter using Inductance Ratio (Lm/Lr)
NASA Astrophysics Data System (ADS)
Palle, Kowstubha; Krishnaveni, K.; Ramesh Reddy, Kolli
2017-06-01
The main benefits of the LLC resonant dc/dc converter over conventional series and parallel resonant converters are its light-load regulation, lower circulating currents, larger bandwidth for zero-voltage switching, and less tuning of the switching frequency for controlled output. A unique analytical tool, called fundamental harmonic approximation with peak gain adjustment, is used for designing the converter. In this paper, an optimum design of the converter is proposed by considering three different design criteria with different values of the inductance ratio (Lm/Lr) to achieve good efficiency at high input voltage. The optimum design includes the analysis of operating range, switching frequency range, primary-side switch losses and stability. The analysis is carried out with simulation using software tools such as MATLAB and PSIM. The performance of the optimized design is demonstrated for a design specification of 12 V, 5 A output operating with an input voltage range of 300-400 V using the FSFR2100 IC of Texas Instruments.
NASA Astrophysics Data System (ADS)
Hoppmann, Mario; Hunkeler, Priska A.; Hendricks, Stefan; Kalscheuer, Thomas; Gerdes, Rüdiger
2016-04-01
In Antarctica, ice crystals (platelets) form and grow in supercooled waters below ice shelves. These platelets rise, accumulate beneath nearby sea ice, and subsequently form a several meter thick, porous sub-ice platelet layer. This special ice type is a unique habitat, influences sea-ice mass and energy balance, and its volume can be interpreted as an indicator of the health of an ice shelf. Although progress has been made in determining and understanding its spatio-temporal variability based on point measurements, an investigation of this phenomenon on a larger scale remains a challenge due to logistical constraints and a lack of suitable methodology. In the present study, we applied a laterally constrained Marquardt-Levenberg inversion to a unique multi-frequency electromagnetic (EM) induction sounding dataset obtained on the ice-shelf influenced fast-ice regime of Atka Bay, eastern Weddell Sea. We adapted the inversion algorithm to incorporate a sensor-specific signal bias, and confirmed the reliability of the algorithm by performing a sensitivity study using synthetic data. We inverted the field data for sea-ice and platelet-layer thickness and electrical conductivity, and calculated ice-volume fractions within the platelet layer using Archie's Law. The thickness results agreed well with drillhole validation datasets within the uncertainty range, and the ice-volume fraction yielded results comparable to other studies. Both parameters together enable an estimation of the total ice volume within the platelet layer, which was found to be comparable to the volume of landfast sea ice in this region, and corresponded to more than a quarter of the annual basal melt volume of the nearby Ekström Ice Shelf. Our findings show that multi-frequency EM induction sounding is a suitable approach to efficiently map sea-ice and platelet-layer properties, with important implications for research into ocean/ice-shelf/sea-ice interactions. However, a successful application of this
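The ice-volume-fraction step can be illustrated by inverting Archie's Law for porosity; the cementation exponent and conductivity values below are assumed for illustration, not those of the study.

```python
def ice_volume_fraction(sigma_bulk, sigma_brine, m=1.5):
    """Invert Archie's Law, sigma_bulk = sigma_brine * phi**m, for the brine
    volume fraction phi; the ice-volume fraction is then 1 - phi.
    The cementation exponent m=1.5 is an assumed, illustrative value."""
    phi = (sigma_bulk / sigma_brine) ** (1.0 / m)
    return 1.0 - phi

# Example: bulk conductivity 0.3375 S/m against brine at 2.7 S/m (illustrative).
frac = ice_volume_fraction(0.3375, 2.7)
```

This is why the EM inversion must recover electrical conductivity and not just thickness: the conductivity carries the porosity information that Archie's Law converts into ice volume.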
NASA Astrophysics Data System (ADS)
Tiberi, Lara; Costa, Giovanni
2017-04-01
The possibility to directly associate damage to ground motion parameters is always a great challenge, in particular for civil protection agencies. Indeed, a ground motion parameter estimated in near real time that expresses the damage caused by an earthquake is fundamental for organizing first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity, immediately after an event. This is done by calculating, for each ground motion parameter estimated in near real time, a regression law that correlates the parameter with the observed macroseismic intensity. The estimation is performed by collecting high-quality accelerometric data in the near field and filtering them at different frequency steps. The regression laws are calculated using two different techniques: the non-linear least squares (NLLS) Marquardt-Levenberg algorithm and the orthogonal distance regression (ODR) methodology. The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study) and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm, in contrast, estimates the errors perpendicular to the regression line rather than just vertically: vertical errors are errors in the 'y' direction only, so for the dependent variable alone, whereas perpendicular errors account for errors in both the dependent and the independent variable. This also makes it possible to invert the relation directly, so the a and b values can also be used to express the ground motion parameters as a function of I. For each law the standard deviation and R² value are estimated in order to test the quality and reliability of the relation. The Amatrice earthquake of 24 August 2016 is used as a case study to test the goodness of the calculated regression laws.
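Both regression techniques compared above are available in SciPy; a minimal sketch on synthetic linear data (the model y = a x + b with assumed noise levels) contrasts vertical-error NLLS with orthogonal distance regression.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.odr import ODR, Model, RealData

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 1.5 * x + 2.0 + rng.normal(0.0, 0.2, x.size)   # synthetic data, a=1.5, b=2.0

# NLLS (Levenberg-Marquardt): errors assumed only in the dependent variable y.
(a_ls, b_ls), _ = curve_fit(lambda x, a, b: a * x + b, x, y, p0=[1.0, 1.0])

# ODR: minimizes perpendicular distances, with assumed uncertainties in both
# x (sx) and y (sy); the noise levels here are illustrative.
model = Model(lambda beta, x: beta[0] * x + beta[1])
out = ODR(RealData(x, y, sx=0.1, sy=0.2), model, beta0=[1.0, 1.0]).run()
a_odr, b_odr = out.beta
```

Because ODR fits an implicit relation between the variables, the fitted a and b can be used in either direction, which is the invertibility advantage noted in the abstract.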
Modelling Schumann resonances from ELF measurements using non-linear optimization methods
NASA Astrophysics Data System (ADS)
Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo
2017-04-01
Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band from 6 to 25 Hz, will be considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances will be presented. Non-linear fitting, also known as the optimization process, is the procedure followed to obtain the Schumann resonances from natural electromagnetic noise. The optimization methods that have been analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The functions that the different methods fit to the data are three Lorentzian curves plus a straight line. Gaussian curves have also been considered. The conclusions of this study are outlined in the following paragraphs: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges more slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has
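A single-mode version of the described fit, a Lorentzian resonance on a straight-line background, can be sketched with `scipy.optimize.curve_fit` (which uses Levenberg-Marquardt for unbounded problems); the synthetic spectrum and initial guesses below are illustrative, not station data.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_plus_line(f, A, f0, w, m, c):
    """Single Lorentzian resonance (amplitude A, center f0, half-width w)
    on a straight-line background m*f + c."""
    return A * w**2 / ((f - f0) ** 2 + w**2) + m * f + c

# Synthetic spectrum around the first SR mode (~7.8 Hz), noise-free for clarity.
f = np.linspace(6.0, 10.0, 200)
spec = lorentzian_plus_line(f, 1.0, 7.8, 0.9, 0.01, 0.05)

# Coarse initial guesses; curve_fit refines them by Levenberg-Marquardt.
popt, _ = curve_fit(lorentzian_plus_line, f, spec, p0=[0.5, 8.0, 1.0, 0.0, 0.0])
```

The full three-mode fit simply sums three such Lorentzian terms over the 6-25 Hz band, sharing the one linear background.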
Nakamura, Atsushi; Aizawa, Junichi; Sakayama, Kenshi; Kidani, Teruki; Takata, Tomoyo; Norimatsu, Yoshiaki; Miura, Hiromasa; Masuno, Hiroshi
2012-09-26
One of the problems associated with osteosarcoma is the frequent formation of micrometastases in the lung prior to diagnosis because the development of metastatic lesions often causes a fatal outcome. Therefore, the prevention of pulmonary metastases during the early stage of tumor development is critical for the improvement of the prognosis of osteosarcoma patients. In Japan, soy is consumed in a wide variety of forms, such as miso soup and soy sauce. The purpose of this study is to investigate the effect of genistein, an isoflavone found in soy, on the invasive and motile potential of osteosarcoma cells. LM8 cells were treated for 3 days with various concentrations of genistein. The effect of genistein on cell proliferation was determined by DNA measurement in the cultures and 5-bromo-2'-deoxyuridine (BrdU) incorporation study. The assays of cell invasion and motility were performed using the cell culture inserts with either matrigel-coated membranes or uncoated membranes in the invasion chambers. The expression and secretion of MMP-2 were determined by immunohistochemistry and gelatin zymography. The subcellular localization and cellular level of β-catenin were determined by immunofluorescence and Western blot. For examining cell morphology, the ethanol-fixed cells were stained with hematoxylin-eosin (H&E). The expression of osteocalcin mRNA was determined by reverse transcription-polymerase chain reaction (RT-PCR). Genistein dose-dependently inhibits cell proliferation. Genistein-treated cells were less invasive and less motile than untreated cells. The expression and secretion of MMP-2 were lower in the genistein-treated cultures than in the untreated cultures. β-Catenin in untreated cells was located in the cytoplasm and/or nucleus, while in genistein-treated cells it was translocated near to the plasma membrane. The level of β-catenin was higher in genistein-treated cells than in untreated cells. Treatment of LM8 cells with genistein induced morphological
Eric Rowell; Carl Selelstad; Lee Vierling; Lloyd Queen; Wayne Sheppard
2006-01-01
The success of a local maximum (LM) tree detection algorithm for detecting individual trees from lidar data depends on stand conditions that are often highly variable. A laser height variance and percent canopy cover (PCC) classification is used to segment the landscape by stand condition prior to stem detection. We test the performance of the LM algorithm using canopy...
The contribution of LM to the neuroscience of movement vision
Zihl, Josef; Heywood, Charles A.
2015-01-01
The significance of early and sporadic reports in the 19th century of impairments of motion vision following brain damage was largely unrecognized. In the absence of satisfactory post-mortem evidence, impairments were interpreted as the consequence of a more general disturbance resulting from brain damage, the location and extent of which was unknown. Moreover, evidence that movement constituted a special visual perception and may be selectively spared was similarly dismissed. Such skepticism derived from a reluctance to acknowledge that the neural substrates of visual perception may not be confined to primary visual cortex. This view did not persist. First, it was realized that visual movement perception does not depend simply on the analysis of spatial displacements and temporal intervals, but represents a specific visual movement sensation. Second, persuasive evidence for functional specialization in extrastriate cortex, and notably the discovery of cortical area V5/MT, suggested a separate region specialized for motion processing. Shortly thereafter, the remarkable case of patient LM was published, providing compelling evidence for a selective and specific loss of movement vision. The case is reviewed here, along with an assessment of its contribution to visual neuroscience. PMID:25741251
Khan, Raees; Ul Abidin, Sheikh Zain; Ahmad, Mushtaq; Zafar, Muhammad; Liu, Jie; Amina, Hafiza
2018-01-01
The present study is intended to assess the gymnosperm pollen flora of Pakistan using Light Microscopy (LM) and Scanning Electron Microscopy (SEM) for its taxonomic significance in the identification of gymnosperms. Pollen of 35 gymnosperm species (12 genera and five families) was collected from various distributional sites of gymnosperms in Pakistan. LM and SEM were used to investigate different palyno-morphological characteristics. Five pollen types (i.e., Inaperturate, Monolete, Monoporate, Vesiculate-bisaccate and Polyplicate) were observed. In equatorial view, seven types of pollen were observed: ten species were sub-angular, nine species were triangular, six species were perprolate, three species were rhomboidal, three species were semi-angular, two species were rectangular and two species were prolate. Five types of pollen were observed in polar view: ten species were spheroidal, nine species were angular, eight were interlobate, six species were circular and two species were elliptic. Eighteen species have rugulate and 17 species have faveolate ornamentation. Eighteen species have verrucate and 17 have gemmate-type sculpturing. The data were analysed through cluster analysis. The study showed that these palyno-morphological features have significant value in the classification and identification of gymnosperms. Based on these different palyno-morphological features, a taxonomic key was proposed for the accurate and fast identification of gymnosperms from Pakistan. © 2017 Wiley Periodicals, Inc.
Terminal (Mis)diagnosis and the Physician–Patient Relationship in LM Montgomery’s The Blue Castle
2010-01-01
LM Montgomery’s The Blue Castle was first published in 1926, yet contains many insights into medical practice that remain relevant today. The protagonist, Valancy, mistakenly receives a terminal diagnosis in a letter from her physician, who has sent her a note intended for another patient. Her interactions with the physician raise issues that are still relevant in contemporary medical education and practice, primarily the importance of effective communication in the physician–patient relationship, especially in the context of diagnosing terminal illness and handling a diagnostic error. The Blue Castle offers a useful starting point for debate and discussion in medical education about these topics. PMID:20473640
Periodic orbits around areostationary points in the Martian gravity field
NASA Astrophysics Data System (ADS)
Liu, Xiao-Dong; Baoyin, Hexi; Ma, Xing-Rui
2012-05-01
This study investigates the problem of areostationary orbits around Mars in three-dimensional space. Areostationary orbits are expected to be used to establish a future telecommunication network for the exploration of Mars. However, no artificial satellites have been placed in these orbits thus far. The characteristics of the Martian gravity field are presented, and areostationary points and their linear stability are calculated. By taking linearized solutions in the planar case as the initial guesses and utilizing the Levenberg-Marquardt method, families of periodic orbits around areostationary points are shown to exist. Short-period orbits and long-period orbits are found around linearly stable areostationary points, but only short-period orbits are found around unstable areostationary points. Vertical periodic orbits around both linearly stable and unstable areostationary points are also examined. Satellites in these periodic orbits could depart from areostationary points by a few degrees in longitude, which would facilitate observation of the Martian topography. Based on the eigenvalues of the monodromy matrix, the evolution of the stability index of periodic orbits is determined. Finally, heteroclinic orbits connecting the two unstable areostationary points are found, providing the possibility for orbital transfer with minimal energy consumption.
Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction
NASA Astrophysics Data System (ADS)
Liang, Guanghui; Ren, Shangjie; Dong, Feng
2017-07-01
The free-interface detection problem is normally seen in industrial or biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, due to its ill-posed and nonlinear characteristics, the spatial resolution of EIT is low. To address this issue, an ultrasound-guided EIT is proposed to directly reconstruct the geometric configuration of the target free interface. In the method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted at the opposite side of the objective domain, and this position measurement is then used as prior information for guiding the EIT-based free-interface reconstruction. During the process, a constrained least squares framework is used to fuse the information from different measurement modalities, and the Lagrange multiplier-based Levenberg-Marquardt method is adopted to provide the iterative solution of the constrained optimization problem. The numerical results show that the proposed ultrasound-guided EIT method for free-interface reconstruction is more accurate than the single-modality method, especially when the number of valid electrodes is limited.
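The fusion idea, minimizing the data misfit subject to an equality constraint from the ultrasound measurement, can be illustrated with a linear analogue solved through the Lagrange-multiplier (KKT) system; the toy fitting problem below is an assumption for illustration, not the paper's EIT forward model.

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Solve min ||A x - b||^2 subject to C x = d via the KKT (Lagrange) system:
        [ A^T A  C^T ] [ x      ]   [ A^T b ]
        [ C      0   ] [ lambda ] = [ d     ]
    """
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

# Toy fusion: fit y = p0*t + p1 while honoring one exact "ultrasound" datum.
A = np.vstack([np.linspace(0, 1, 20), np.ones(20)]).T
b = 3.0 * A[:, 0] + 1.0                 # noise-free data from p = (3, 1)
C = np.array([[0.5, 1.0]])              # constraint: 0.5*p0 + p1 = 2.5
d = np.array([2.5])                     # chosen consistent with the data
x = constrained_lsq(A, b, C, d)
```

In the nonlinear EIT case, the same KKT structure is solved iteratively, with Levenberg-Marquardt damping applied to the linearized misfit at each step.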
Efficient and Robust Optimization for Building Energy Simulation
Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda
2016-01-01
Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to Powell's Hybrid method presently used in HVACSIM+. PMID:27325907
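Both solvers discussed are exposed by SciPy through their MINPACK implementations; a minimal comparison on a toy nonlinear algebraic system (not an HVACSIM+ component model) can be sketched as:

```python
import numpy as np
from scipy.optimize import root, least_squares

def equations(x):
    """A small illustrative nonlinear algebraic system: a circle of radius
    sqrt(2) intersected with the line x0 = x1 (roots at (1,1) and (-1,-1))."""
    return [x[0] ** 2 + x[1] ** 2 - 2.0,
            x[0] - x[1]]

x0 = [0.5, 2.0]
hybr = root(equations, x0, method="hybr")       # Powell's Hybrid (MINPACK hybrd)
lm = least_squares(equations, x0, method="lm")  # Levenberg-Marquardt (MINPACK)
```

Powell's Hybrid solves the square system directly, while Levenberg-Marquardt minimizes the residual norm; the latter still makes progress (toward a least squares solution) when no exact root exists, which is one reason it can be more robust for stiff building models.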
Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information
NASA Astrophysics Data System (ADS)
Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.
2017-09-01
Full-waveform LiDAR is an active photogrammetry and remote sensing technology. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which in turn supports accurate filtering. A surface fitting filtering method that exploits waveform information is proposed to realize this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, ground seed points are selected, and abnormal ones among them are detected using the waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height-difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size grows; the filtering process finishes when the window size exceeds a threshold. Waveform data over urban, farmland and mountain areas from WATER (Watershed Allied Telemetry Experimental Research) are selected for experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
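The Levenberg-Marquardt waveform decomposition step can be sketched by fitting a sum of Gaussians to a synthetic two-return waveform. All amplitudes, centers and widths below are made-up values, not WATER data, and SciPy's LM solver stands in for the paper's globally convergent variant:

```python
import numpy as np
from scipy.optimize import least_squares

def waveform(p, t):
    # sum of two Gaussian echoes: (amplitude, center, width) per return
    a1, m1, s1, a2, m2, s2 = p
    return (a1 * np.exp(-(t - m1)**2 / (2 * s1**2)) +
            a2 * np.exp(-(t - m2)**2 / (2 * s2**2)))

t = np.linspace(0.0, 30.0, 300)                      # time samples (ns)
p_true = np.array([0.9, 12.0, 1.5, 0.5, 18.0, 2.0])  # synthetic truth
rng = np.random.default_rng(0)
rx = waveform(p_true, t) + 0.01 * rng.standard_normal(t.size)

# Levenberg-Marquardt decomposition from a rough initial guess
fit = least_squares(lambda p: waveform(p, t) - rx,
                    x0=[1.0, 11.0, 1.0, 0.4, 19.0, 1.5], method='lm')
print(fit.x.round(2))  # close to p_true
```

Each fitted triple (amplitude, center, width) yields one point of the decomposed cloud plus the waveform parameters used later for seed-point screening.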
Heat Resistance Mediated by pLM58 Plasmid-Borne ClpL in Listeria monocytogenes
Aalto-Araneda, Mariella; Lindström, Miia; Korkeala, Hannu
2017-01-01
ABSTRACT Listeria monocytogenes is one of the most heat-resistant non-spore-forming food-borne pathogens and poses a notable risk to food safety, particularly when mild heat treatments are used in food processing and preparation. While general heat stress properties and response mechanisms of L. monocytogenes have been described, accessory mechanisms providing particular L. monocytogenes strains with the advantage of enhanced heat resistance are unknown. Here, we report plasmid-mediated heat resistance of L. monocytogenes for the first time. This resistance is mediated by the ATP-dependent protease ClpL. We tested the survival of two wild-type L. monocytogenes strains—both of serotype 1/2c, sequence type ST9, and high sequence identity—at high temperatures and compared their genome composition in order to identify genetic mechanisms involved in their heat survival phenotype. L. monocytogenes AT3E was more heat resistant (0.0 log10 CFU/ml reduction) than strain AL4E (1.4 log10 CFU/ml reduction) after heating at 55°C for 40 min. A prominent difference in the genome compositions of the two strains was a 58-kb plasmid (pLM58) harbored by the heat-resistant AT3E strain, suggesting plasmid-mediated heat resistance. Indeed, plasmid curing resulted in significantly decreased heat resistance (1.1 log10 CFU/ml reduction) at 55°C. pLM58 harbored a 2,115-bp open reading frame annotated as an ATP-dependent protease (ClpL)-encoding clpL gene. Introducing the clpL gene into a natively heat-sensitive L. monocytogenes strain (1.2 log10 CFU/ml reduction) significantly increased the heat resistance of the recipient strain (0.4 log10 CFU/ml reduction) at 55°C. Plasmid-borne ClpL is thus a potential predictor of elevated heat resistance in L. monocytogenes. IMPORTANCE Listeria monocytogenes is a dangerous food pathogen causing the severe illness listeriosis that has a high mortality rate in immunocompromised individuals. Although destroyed by pasteurization, L
Nováková, Miroslava; Šašek, Vladimír; Trdá, Lucie; Krutinová, Hana; Mongin, Thomas; Valentová, Olga; Balesdent, Marie-Hélène; Rouxel, Thierry; Burketová, Lenka
2016-08-01
To achieve host colonization, successful pathogens need to overcome plant basal defences. For this, (hemi)biotrophic pathogens secrete effectors that interfere with a range of physiological processes of the host plant. AvrLm4-7 is one of the cloned effectors from the hemibiotrophic fungus Leptosphaeria maculans 'brassicaceae' infecting mainly oilseed rape (Brassica napus). Although its mode of action is still unknown, AvrLm4-7 is strongly involved in L. maculans virulence. Here, we investigated the effect of AvrLm4-7 on plant defence responses in a susceptible cultivar of B. napus. Using two isogenic L. maculans isolates differing in the presence of a functional AvrLm4-7 allele [absence ('a4a7') and presence ('A4A7') of the allele], the plant hormone concentrations, defence-related gene transcription and reactive oxygen species (ROS) accumulation were analysed in infected B. napus cotyledons. Various components of the plant immune system were affected. Infection with the 'A4A7' isolate caused suppression of salicylic acid- and ethylene-dependent signalling, the pathways regulating an effective defence against L. maculans infection. Furthermore, ROS accumulation was decreased in cotyledons infected with the 'A4A7' isolate. Treatment with an antioxidant agent, ascorbic acid, increased the aggressiveness of the 'a4a7' L. maculans isolate, but not that of the 'A4A7' isolate. Together, our results suggest that the increased aggressiveness of the 'A4A7' L. maculans isolate could be caused by defects in ROS-dependent defence and/or linked to suppressed SA and ET signalling. This is the first study to provide insights into the manipulation of B. napus defence responses by an effector of L. maculans. © 2015 BSPP AND JOHN WILEY & SONS LTD.
Evaluation of Available Software for Reconstruction of a Structure from its Imagery
2017-04-01
Photodynamic Therapy of the Murine LM3 Tumor Using Meso-Tetra (4-N,N,N-Trimethylanilinium) Porphine.
Colombo, L L; Juarranz, A; Cañete, M; Villanueva, A; Stockert, J C
2007-12-01
Photodynamic therapy (PDT) of cancer is based on the cytotoxicity induced by a photosensitizer in the presence of oxygen and visible light, resulting in cell death and tumor regression. This work describes the response of the murine LM3 tumor to PDT using meso-tetra (4-N,N,N-trimethylanilinium) porphine (TMAP). BALB/c mice with intradermal LM3 tumors were subjected to intravenous injection of TMAP (4 mg/kg) followed 24 h later by blue-red light irradiation (λmax: 419, 457, 650 nm) for 60 min (total dose: 290 J/cm(2)) on depilated and glycerol-covered skin over the tumor of anesthetized animals. Control (drug alone, light alone) and PDT treatments (drug + light) were performed once and repeated 48 h later. No significant differences were found between untreated tumors and tumors only treated with TMAP or light. PDT-treated tumors showed almost total but transitory tumor regression (from 3 mm to less than 1 mm) in 8/9 animals, whereas no regression was found in 1/9. PDT response was heterogeneous and each tumor showed different regression and growth delay. The survival of PDT-treated animals was significantly higher than that of TMAP and light controls, showing a lower number of lung metastasis but increased tumor-draining lymph node metastasis. Repeated treatment and reduction of tissue light scattering by glycerol could be useful approaches in studies on PDT of cancer.
Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.
Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D
2018-04-19
The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013-June 2016. The ANN was constructed using temporal factors (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma volume, emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level of evidence III; study type: prognostic/epidemiological.
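The cross-validation-with-correlation protocol described above can be sketched as follows. The features and target are synthetic stand-ins (the TRACS/NOAA data are not public), and a linear least-squares model replaces the ANN to keep the sketch dependency-free:

```python
import numpy as np

# Synthetic stand-ins for daily features (temporal + weather) and trauma counts
rng = np.random.default_rng(1)
X = rng.standard_normal((1096, 4))
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * rng.standard_normal(1096)

def kfold_pearson(X, y, k=5, seed=0):
    """Mean Pearson r over k random train/test folds."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    rs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # linear least-squares model in place of the ANN
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        rs.append(np.corrcoef(X[test] @ w, y[test])[0, 1])
    return float(np.mean(rs))

print(round(kfold_pearson(X, y), 3))
```

Repeating this over many random partitions (the paper uses 100) and reporting the mean r with its spread gives the r = 0.89-0.90 ± σ style figures quoted above.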
NASA Astrophysics Data System (ADS)
Nema, Manish K.; Khare, Deepak; Chandniha, Surendra K.
2017-11-01
Estimation of evapotranspiration (ET) is an essential component of the hydrologic cycle and is requisite for efficient irrigation water management planning and hydro-meteorological studies at both the basin and catchment scales. About twenty well-established methods are available for ET estimation, which depend upon various meteorological parameters and assumptions. Most of these methods are physically based and need a variety of input data. The FAO-56 Penman-Monteith method (PM) for estimating reference evapotranspiration (ET0) is recommended for irrigation scheduling worldwide, because PM generally yields the best results under various climatic conditions. This study investigates the ability of artificial neural networks (ANN) to improve the accuracy of monthly evapotranspiration estimation in the sub-humid climatic region of Dehradun. In the first part of the study, different ANN models, comprising various combinations of training functions and numbers of neurons, were developed to estimate ET0, which was compared with the Penman-Monteith (PM) ET0 as the ideal (observed) ET0. Several statistical measures were used to assess model performance: coefficient of correlation (r), sum of squared errors, root mean square error, Nash-Sutcliffe efficiency index (NSE) and mean absolute error. The ANN model with the Levenberg-Marquardt training algorithm, a single hidden layer and nine neurons was found to have the best predictive capability for the study station, with r and NSE values of 0.996 and 0.991 for the calibration period and 0.990 and 0.980 for the validation period, respectively. In the subsequent part of the study, trend analysis of the ET0 time series revealed a rising trend in the month of March and a falling trend from June to November, except August, at more than the 90% significance level; the annual declining rate was found to be 1.49 mm per year.
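The performance statistics used in the study (r, RMSE, MAE, NSE) are straightforward to compute; a minimal sketch, with made-up ET0 values for illustration:

```python
import numpy as np

def metrics(obs, sim):
    """Correlation, RMSE, MAE and Nash-Sutcliffe efficiency of sim vs obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    rmse = np.sqrt(np.mean((obs - sim)**2))
    mae = np.mean(np.abs(obs - sim))
    # NSE = 1 means a perfect model; 0 means no better than the mean of obs
    nse = 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)
    return r, rmse, mae, nse

# hypothetical monthly ET0 values (mm/day): observed (PM) vs ANN-simulated
print(metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9]))
```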
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified, using GPROF, that accounts for over 97% of the total computational time. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of the 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects over 90% cache miss rates. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network, or on multiple compute nodes of a cluster as slaves under parallel PEST, to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5, with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
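The parallel-Jacobian idea, namely that each perturbed forward run in a finite-difference Jacobian is independent of the others, can be sketched with a thread pool standing in for MPI ranks. The forward model below is a hypothetical stand-in for an HGC5 run (the real model integrates coupled nonlinear PDEs and takes far longer per call):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hypothetical cheap forward model standing in for one HGC5 simulation
def forward(p):
    t = np.linspace(0.0, 1.0, 50)
    return p[0] * np.exp(-p[1] * t)

def jacobian_parallel(p, h=1e-6, workers=4):
    # Each perturbed run is independent, so Jacobian columns can be computed
    # concurrently -- the same idea as the MPI version in the text (threads
    # here instead of MPI ranks, to keep the sketch self-contained).
    f0 = forward(p)
    def column(j):
        q = p.copy(); q[j] += h
        return (forward(q) - f0) / h
    with ThreadPoolExecutor(max_workers=workers) as ex:
        cols = list(ex.map(column, range(p.size)))
    return np.column_stack(cols)

J = jacobian_parallel(np.array([2.0, 3.0]))
print(J.shape)  # (50, 2): one column per parameter
```

With n parameters, one Levenberg-Marquardt iteration needs n + 1 forward runs, which is why distributing them across cores dominates the wall-clock savings.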
Helgesson, P; Sjöstrand, H
2017-11-01
Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
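The recommendation to implement prior knowledge as a prior distribution rather than fixing parameters can be sketched by appending the prior as a pseudo-observation to the weighted residual vector, so a standard Levenberg-Marquardt solver handles it. The single peak, noise level and prior below are invented for illustration (the paper's example uses three overlapping peaks):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from one Gaussian peak (parameters invented for illustration)
t = np.linspace(-5.0, 5.0, 101)
rng = np.random.default_rng(2)
noise_sd = 0.2
y = 10.0 * np.exp(-(t - 0.5)**2 / (2 * 1.2**2)) + noise_sd * rng.standard_normal(t.size)

prior_mu, prior_sd = 1.0, 0.3   # Gaussian prior belief about the peak width

def residuals(p):
    a, mu, sig = p
    data_res = (a * np.exp(-(t - mu)**2 / (2 * sig**2)) - y) / noise_sd
    prior_res = (sig - prior_mu) / prior_sd  # prior as an extra pseudo-observation
    return np.append(data_res, prior_res)

fit = least_squares(residuals, x0=[8.0, 0.0, 1.0], method='lm')
print(fit.x.round(3))  # width lands between the prior mean and the data-driven value
```

Because both residual sets are scaled by their standard deviations, minimizing the summed squares is equivalent to maximizing the (Gaussian) posterior, and the data dominate whenever they constrain a parameter more tightly than the prior does.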
Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST
NASA Astrophysics Data System (ADS)
Jazayeri, S.; Kruse, S.; Esmaeili, S.
2015-12-01
We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; the forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix of derivatives of the observation data with respect to the model parameters is computed using a finite-difference method. An iterative process then builds new models by updating the initial values so as to minimize the objective function. Another measure of the goodness of the final accepted model is the correlation coefficient, calculated by the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex
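The Gauss-Marquardt-Levenberg loop described above (finite-difference Jacobian, damped normal equations, lambda adjustment) can be sketched as follows. The forward model is a hypothetical two-parameter function, not a GPRMax simulation, and the lambda search is reduced to a simple accept/reject rule:

```python
import numpy as np

def fd_jacobian(f, p, h=1e-6):
    # forward-difference derivatives of the model outputs w.r.t. the parameters
    f0 = f(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        q = p.copy(); q[j] += h
        J[:, j] = (f(q) - f0) / h
    return J

def gml_fit(f, data, p0, lam=1e-2, iters=50):
    # damped normal equations with a minimal accept/reject lambda search
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = f(p) - data
        J = fd_jacobian(f, p)
        A = J.T @ J
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -J.T @ r)
        if np.sum((f(p + step) - data)**2) < np.sum(r**2):
            p, lam = p + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, increase damping
    return p

# hypothetical two-parameter forward model (a real run would call GPRMax)
model = lambda p: p[0] * np.exp(-0.3 * np.arange(20)) + p[1]
data = model(np.array([2.0, 0.5]))
print(gml_fit(model, data, [1.0, 0.0]).round(4))  # recovers [2.0, 0.5]
```

Large lambda makes the step behave like gradient descent (robust far from the optimum); small lambda approaches Gauss-Newton (fast near it), which is the trade-off PEST automates.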
Moghaddari, Mitra; Yousefi, Fakhri; Ghaedi, Mehrorang; Dashtian, Kheibar
2018-04-01
In this study, an artificial neural network (ANN) and response surface methodology (RSM) based on a central composite design (CCD) were applied to model and optimize the simultaneous ultrasound-assisted removal of quinoline yellow (QY) and eosin B (EB). MWCNT-NH2 and its composites were prepared by a sonochemical method and characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD) and energy dispersive spectroscopy (EDS) analyses. The contributions of initial dye concentrations, adsorbent mass, sonication time and pH to the QY and EB removal percentages were investigated by CCD, and replicate experiments at the conditions suggested by the model gave results statistically close to the experimental data. Ultrasound irradiation enhances the mass transfer of the process, so that a small amount of adsorbent (0.025 g) removes a high percentage (88.00% and 91.00%) of QY and EB, respectively, in a short time (6.0 min) at pH = 6. Analysis of the experimental data with conventional models indicates that the Langmuir model efficiently fits and explains the data. The ANN, based on the Levenberg-Marquardt algorithm (LMA) and combining a linear transfer function at the output layer with a tangent sigmoid transfer function at the hidden layer (20 hidden neurons), provided the best operating conditions for good prediction of the adsorption data. An accurate and efficient network was obtained by varying the number of neurons in the hidden layer, with the data divided into training, test and validation sets containing 70, 15 and 15% of the data points, respectively. The average absolute deviation (AAD) over a collection of 128 data points for MWCNT-NH2 and its composites is 0.58% for EB and 0.55% for QY. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mekanik, F.; Imteaz, M. A.; Gato-Trinidad, S.; Elmahdi, A.
2013-10-01
In this study, the application of Artificial Neural Networks (ANN) and Multiple regression analysis (MR) to forecast long-term seasonal spring rainfall in Victoria, Australia was investigated using lagged El Nino Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) as potential predictors. The use of dual (combined lagged ENSO-IOD) input sets for calibrating and validating the ANN and MR models is proposed to investigate the simultaneous effect of past values of these two major climate modes on long-term spring rainfall prediction. The MR models that did not violate the limits of statistical significance and multicollinearity were selected for future spring rainfall forecasts. The ANN was developed as a multilayer perceptron trained with the Levenberg-Marquardt algorithm. Both MR and ANN models were assessed statistically using mean square error (MSE), mean absolute error (MAE), Pearson correlation (r) and the Willmott index of agreement (d). The developed MR and ANN models were tested on out-of-sample test sets; the MR models showed very poor generalisation ability for east Victoria, with correlation coefficients of -0.99 to -0.90, compared to ANN with correlation coefficients of 0.42-0.93; ANN models also showed better generalisation ability for central and west Victoria, with correlation coefficients of 0.68-0.85 and 0.58-0.97, respectively. The ability of the multiple regression models to forecast out-of-sample sets is comparable to ANN for Daylesford in central Victoria and Kaniva in west Victoria (r = 0.92 and 0.67, respectively). The errors of the testing sets for the ANN models are generally lower than those of the multiple regression models. The statistical analysis suggests the potential of ANN over MR models for rainfall forecasting using large-scale climate modes.
O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin
2015-06-01
Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
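The core idea above, that an image satisfies an autoregressive partial difference equation whose coefficients can be estimated by OLS, can be sketched on a synthetic image. The two-neighbor causal model and its coefficients below are invented for illustration and are much simpler than the paper's full model (which also tests parameter significance):

```python
import numpy as np

# Generate a synthetic "image" from a causal autoregressive partial
# difference equation driven by small innovations
a, b, sd = 0.5, 0.3, 0.01
rng = np.random.default_rng(3)
img = np.zeros((128, 128))
for i in range(1, 128):
    for j in range(1, 128):
        img[i, j] = a * img[i-1, j] + b * img[i, j-1] + sd * rng.standard_normal()

# OLS estimation: stack every pixel's difference equation into a design matrix
y = img[1:, 1:].ravel()
X = np.column_stack([img[:-1, 1:].ravel(), img[1:, :-1].ravel()])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef.round(2))  # close to the generating coefficients (0.5, 0.3)
```

Because the stacked system is linear in the coefficients, the whole "model" of an image reduces to one least-squares solve, which is consistent with the millisecond estimation times reported above.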
Using geometry to improve model fitting and experiment design for glacial isostasy
NASA Astrophysics Data System (ADS)
Kachuck, S. B.; Cathles, L. M.
2017-12-01
As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
Spectroscopic analysis technique for arc-welding process control
NASA Astrophysics Data System (ADS)
Mirapeix, Jesús; Cobo, Adolfo; Conde, Olga; Quintela, María Ángeles; López-Higuera, José-Miguel
2005-09-01
The spectroscopic analysis of the light emitted by thermal plasmas has found many applications, from chemical analysis to monitoring and control of industrial processes. In particular, it has been demonstrated that analysis of the thermal plasma generated during arc or laser welding can supply information about the process and, thus, about the quality of the weld. In some critical applications (e.g. the aerospace sector), early, real-time detection of defects in the weld seam (oxidation, porosity, lack of penetration, ...) is highly desirable, as it can reduce expensive non-destructive testing (NDT). Among other techniques, full spectroscopic analysis of the plasma emission is known to offer rich information about the process itself, but it is also very demanding in terms of real-time implementation. In this paper, we propose a technique for the analysis of the plasma emission spectrum that is able to detect, in real time, changes in the process parameters that could lead to the formation of defects in the weld seam. It is based on the estimation of the electron temperature of the plasma through the analysis of emission peaks from multiple atomic species. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the recursive Levenberg-Marquardt method, we employ the LPO (Linear Phase Operator) sub-pixel algorithm to accurately estimate the central wavelength of the peaks (allowing automatic identification of each atomic species) and cubic-spline interpolation of the noisy data to obtain the intensity and width of the peaks. Experimental tests on TIG welding, using fiber-optic capture of light and a low-cost CCD-based spectrometer, show that some typical defects can be easily detected and identified with this technique, whose typical processing time for multiple-peak analysis is less than 20 ms running on a conventional PC.
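Electron-temperature estimation from several emission peaks of one species is typically done with a Boltzmann plot. The line data below are hypothetical, and the sketch assumes peak intensities have already been extracted (by LPO and spline interpolation in the paper); it is a noise-free illustration of the relation, not the paper's implementation:

```python
import numpy as np

# Hypothetical emission lines of one atomic species:
# upper-level energy E (eV), statistical weight g,
# transition probability A (1/s), wavelength lam (nm)
E = np.array([3.1, 4.3, 5.0, 5.8])
g = np.array([5.0, 7.0, 3.0, 5.0])
A = np.array([2.0e7, 4.5e7, 1.2e7, 3.0e7])
lam = np.array([510.0, 480.0, 450.0, 420.0])

kT_true = 1.0  # electron temperature (eV) used to synthesize intensities
I = (g * A / lam) * np.exp(-E / kT_true)

# Boltzmann plot: ln(I * lam / (g * A)) vs E is a line with slope -1/kT
slope, _ = np.polyfit(E, np.log(I * lam / (g * A)), 1)
kT_est = -1.0 / slope
print(kT_est)  # recovers 1.0 eV on this noise-free example
```

Since only the slope matters, relative rather than absolute intensities suffice, which is what makes the method fast enough for the real-time monitoring described above.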
NASA Technical Reports Server (NTRS)
Buchner, Stephen; McMorrow, Dale; Roche, Nicholas; Dusseau, Laurent; Pease, Ron L.
2008-01-01
Shapes of single event transients (SETs) in a linear bipolar circuit (LM124) change with exposure to total ionizing dose (TID) radiation. The SET shape changes are a direct consequence of TID-induced degradation of bipolar transistor gain. A reduction in transistor gain causes a reduction in the drive current of the current sources in the circuit, and it is the lower drive current that most affects the shapes of large-amplitude SETs.
Kubohara, Yuzuru; Komachi, Mayumi; Homma, Yoshimi; Kikuchi, Haruhisa; Oshima, Yoshiteru
2015-08-07
Osteosarcoma is a common metastatic bone cancer that predominantly develops in children and adolescents. Metastatic osteosarcoma remains associated with a poor prognosis; therefore, more effective anti-metastatic drugs are needed. Differentiation-inducing factor-1 (DIF-1), -2, and -3 are novel lead anti-tumor agents that were originally isolated from the cellular slime mold Dictyostelium discoideum. Here we investigated the effects of a panel of DIF derivatives on lysophosphatidic acid (LPA)-induced migration of mouse osteosarcoma LM8 cells by using a Boyden chamber assay. Some DIF derivatives such as Br-DIF-1, DIF-3(+2), and Bu-DIF-3 (5-20 μM) dose-dependently suppressed LPA-induced cell migration with associated IC50 values of 5.5, 4.6, and 4.2 μM, respectively. On the other hand, the IC50 values of Br-DIF-1, DIF-3(+2), and Bu-DIF-3 versus cell proliferation were 18.5, 7.2, and 2.0 μM, respectively, in LM8 cells, and >20, 14.8, and 4.3 μM, respectively, in mouse 3T3-L1 fibroblasts (non-transformed). Together, our results demonstrate that Br-DIF-1 in particular may be a valuable tool for the analysis of cancer cell migration, and that DIF derivatives such as DIF-3(+2) and Bu-DIF-3 are promising lead anti-tumor agents for the development of therapies that suppress osteosarcoma cell proliferation, migration, and metastasis. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Gervin, Janette C.; Behrenfeld, Michael; McClain, Charles R.; Spinhirne, James; Purves, Lloyd; Wood, H. John; Roberto, Michael R.
2004-01-01
The Physiology Lidar-Multispectral Mission (PhyLM) is intended to explore the complex ecosystems of our global oceans. New "inversion" methods and improved understanding of marine optics have opened the door to quantifying a range of critical ocean properties. This new information could revolutionize our understanding of global ocean processes, such as phytoplankton growth, harmful algal blooms, carbon fluxes between major pools, and the productivity equation. The new science requires new measurements not addressed by currently planned space missions. To meet these science goals, PhyLM will combine active and advanced passive remote sensing technologies to quantify standing stocks and fluxes of climate-critical components of the ocean carbon cycle, providing multispectral bands from the far UV through the near infrared (340-1250 nm) at a ground resolution of 250 m. Improved detectors, filters, mirrors, digitization and focal plane design will offer an overall higher-quality data product. The unprecedented accuracy and precision of the absolute water-leaving radiances will support inversion-based quantification of an expanded set of ocean carbon cycle components. The dual-wavelength (532 & 1064 nm) Nd:YAG lidar will enhance the accuracy and precision of the passive data by providing aerosol profiles for atmospheric correction and coincident active measurements of backscattering. The lidar will also examine dark-side fluorescence as an additional approach to quantifying phytoplankton biomass in highly productive regions.
1969-11-19
AS12-46-6728 (19 Nov. 1969) --- Astronaut Alan L. Bean, lunar module pilot for the Apollo 12 mission, is about to step off the ladder of the Lunar Module to join astronaut Charles Conrad Jr., mission commander, in extravehicular activity (EVA). Conrad and Bean descended in the Apollo 12 LM to explore the moon while astronaut Richard F. Gordon Jr., command module pilot, remained with the Command and Service Modules in lunar orbit.
NASA Technical Reports Server (NTRS)
Jefferys, W. H.
1981-01-01
A least squares method proposed previously for solving a general class of problems is expanded in two ways. First, covariance matrices related to the solution are calculated and their interpretation is given. Second, improved methods of solving the normal equations related to those of Marquardt (1963) and Fletcher and Powell (1963) are developed for this approach. These methods may converge in cases where Newton's method diverges or converges slowly.
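The damped normal equations at the heart of Marquardt-style methods can be illustrated with a minimal sketch. This is not the paper's expanded method; the model, data, and damping schedule below are invented for the example, which fits a two-parameter exponential by repeatedly solving (JᵀJ + λ·diag(JᵀJ))·δ = −Jᵀr:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50, lam=1e-3):
    """Minimal Marquardt-style iteration: at each step solve the damped
    normal equations (J^T J + lam * diag(J^T J)) dp = -J^T r, then shrink
    the damping lam on success or grow it on failure."""
    p = np.asarray(p0, dtype=float)
    cost = 0.5 * np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        A = J.T @ J
        g = J.T @ r
        dp = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        new_cost = 0.5 * np.sum(residual(p + dp) ** 2)
        if new_cost < cost:            # accept step, relax damping
            p, cost, lam = p + dp, new_cost, lam / 10.0
        else:                          # reject step, increase damping
            lam *= 10.0
    return p

# Fit y = a * exp(b * x) to synthetic, noise-free data.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)

res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])

p_hat = levenberg_marquardt(res, jac, p0=[1.0, 0.0])
print(p_hat)   # close to [2.0, -1.5]
```

When λ is small the step approaches the Gauss-Newton direction; when λ is large it approaches a short gradient-descent step, which is what lets the method converge in cases where Newton's method diverges.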
Bres, Vanessa; Yang, Hua; Hsu, Ernie; Ren, Yan; Cheng, Ying; Wisniewski, Michele; Hanhan, Maesa; Zaslavsky, Polina; Noll, Nathan; Weaver, Brett; Campbell, Paul; Reshatoff, Michael; Becker, Michael
2014-01-01
The Atlas Listeria monocytogenes LmG2 Detection Assay, developed by Roka Bioscience Inc., was compared to a reference culture method for seven food types (hot dogs, cured ham, deli turkey, chicken salad, vanilla ice cream, frozen chocolate cream pie, and frozen cheese pizza) and one surface (stainless steel, grade 316). A 125 g portion of deli turkey was tested using a 1:4 food:media dilution ratio, and a 25 g portion for all other foods was tested using a 1:9 food:media dilution ratio. The enrichment time and media for Roka's method were 24 to 28 h for 25 g food samples and environmental surfaces, and 44 to 48 h for 125 g samples, at 35 ± 2°C in PALCAM broth containing 0.02 g/L nalidixic acid. Comparison of the Atlas Listeria monocytogenes LmG2 Detection Assay to the reference method required an unpaired approach. For each matrix, 20 samples inoculated at a fractional level and five samples inoculated at a high level with a different strain of Listeria monocytogenes were tested by each method. The Atlas Listeria monocytogenes LmG2 Detection Assay was compared to the Official Methods of Analysis of AOAC INTERNATIONAL 993.12 method for dairy products, the U.S. Department of Agriculture, Food Safety and Inspection Service, Microbiology Laboratory Guidebook 8.08 method for ready-to-eat meat and environmental samples, and the U.S. Food and Drug Administration Bacteriological Analytical Manual, Chapter 10 method for frozen foods. In the method developer studies, Roka's method, at 24 h (or 44 h for 125 g food samples), had 126 positives out of 200 total inoculated samples, compared to 102 positives for the reference methods at 48 h. In the independent laboratory studies, vanilla ice cream, deli turkey, and stainless steel grade 316 were evaluated. Roka's method, at 24 h (or 44 h for 125 g food samples), had 64 positives out of 75 total inoculated samples, compared to 54 positives for the reference methods at 48 h. The Atlas Listeria monocytogenes LmG2 Detection Assay detected all 50
21 CFR 73.3120 - 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 1 2013-04-01 2013-04-01 false 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione. 73.3120 Section 73.3120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Medical Devices...
21 CFR 73.3120 - 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 1 2011-04-01 2011-04-01 false 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione. 73.3120 Section 73.3120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Medical Devices...
21 CFR 73.3120 - 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 1 2014-04-01 2014-04-01 false 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione. 73.3120 Section 73.3120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Medical Devices...
21 CFR 73.3120 - 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 1 2012-04-01 2012-04-01 false 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione. 73.3120 Section 73.3120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Medical Devices...
21 CFR 73.3120 - 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 1 2010-04-01 2010-04-01 false 16,17-Dimethoxydinaphtho [1,2,3-cd:3′,2′,1′-lm] perylene-5,10-dione. 73.3120 Section 73.3120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL LISTING OF COLOR ADDITIVES EXEMPT FROM CERTIFICATION Medical Devices...
Melin, Amanda D; Matsushita, Yuka; Moritz, Gillian L; Dominy, Nathaniel J; Kawamura, Shoji
2013-05-22
Tarsiers are small nocturnal primates with a long history of fuelling debate on the origin and evolution of anthropoid primates. Recently, the discovery of M and L opsin genes in two sister species, Tarsius bancanus (Bornean tarsier) and Tarsius syrichta (Philippine tarsier), respectively, was interpreted as evidence of an ancestral long-to-middle (L/M) opsin polymorphism, which, in turn, suggested a diurnal or cathemeral (arrhythmic) activity pattern. This view is compatible with the hypothesis that stem tarsiers were diurnal; however, a reversion to nocturnality during the Middle Eocene, as evidenced by hyper-enlarged orbits, predates the divergence of T. bancanus and T. syrichta in the Late Miocene. Taken together, these findings suggest that some nocturnal tarsiers possessed high-acuity trichromatic vision, a concept that challenges prevailing views on the adaptive origins of the anthropoid visual system. It is, therefore, important to explore the plausibility and antiquity of trichromatic vision in the genus Tarsius. Here, we show that Sulawesi tarsiers (Tarsius tarsier), a phylogenetic out-group of Philippine and Bornean tarsiers, have an L opsin gene that is more similar to the L opsin gene of T. syrichta than to the M opsin gene of T. bancanus in non-synonymous nucleotide sequence. This result suggests that an L/M opsin polymorphism is the ancestral character state of crown tarsiers and raises the possibility that many hallmarks of the anthropoid visual system evolved under dim (mesopic) light conditions. This interpretation challenges the persistent nocturnal-diurnal dichotomy that has long informed debate on the origin of anthropoid primates.
Decomposition Techniques for Icesat/glas Full-Waveform Data
NASA Astrophysics Data System (ADS)
Liu, Z.; Gao, X.; Li, G.; Chen, J.
2018-04-01
The Geoscience Laser Altimeter System (GLAS) on board the Ice, Cloud, and land Elevation Satellite (ICESat) is the first long-duration spaceborne full-waveform LiDAR for measuring ice-sheet topography and its temporal variation, as well as cloud and atmospheric characteristics. In order to extract the characteristic parameters of the waveform, the key step is to process the full-waveform data. In this paper, a modified waveform decomposition method is proposed to extract the echo components from the full waveform. First, initial parameter estimation is implemented through data preprocessing and waveform detection. Next, waveform fitting is performed using the Levenberg-Marquardt (LM) optimization method. The results show that the modified waveform decomposition method can effectively extract overlapping and missing echo components compared with the results from the GLA14 product. The echo components can also be extracted from complex waveforms.
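Full-waveform decomposition is commonly posed as fitting a sum of Gaussian echo components to the recorded waveform with LM least squares. A minimal sketch on synthetic data (the Gaussian model, parameter values, and initial guesses are illustrative, not the paper's exact pipeline, in which the initial guesses would come from the detection/preprocessing stage):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(t, *p):
    """Sum of Gaussian echo components; p = (A1, mu1, s1, A2, mu2, s2, ...)."""
    out = np.zeros_like(t)
    for A, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        out += A * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return out

# Synthetic full waveform with two overlapping echoes plus noise.
t = np.linspace(0.0, 100.0, 500)
true = (1.0, 40.0, 4.0, 0.6, 52.0, 5.0)
rng = np.random.default_rng(1)
wave = gaussians(t, *true) + rng.normal(0.0, 0.01, t.size)

# For an unconstrained problem, curve_fit defaults to a
# Levenberg-Marquardt least-squares solver.
p0 = (0.8, 38.0, 3.0, 0.5, 55.0, 4.0)
popt, _ = curve_fit(gaussians, t, wave, p0=p0)
print(np.round(popt, 2))
```

Even though the two echoes overlap, the joint fit separates their centers and widths, which is what allows overlapping echo components to be recovered from a single waveform.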
Prediction of Shrinkage Porosity Defect in Sand Casting Process of LM25
NASA Astrophysics Data System (ADS)
Rathod, Hardik; Dhulia, Jay K.; Maniar, Nirav P.
2017-08-01
In today's competitive global environment, foundries need to operate productively, with a minimum number of rejections, and produce castings in the shortest possible lead time. It has become extremely difficult for foundries to meet demands for defect-free castings while keeping strict delivery schedules. The process of casting solidification is complex in nature. Prediction of shrinkage defects in metal casting is one of the critical concerns in foundries and is one of the potential research areas in casting. Due to increasing pressure to improve quality and reduce cost, it is essential to upgrade the methodology currently used in foundries. In the present research work, a methodology for predicting shrinkage porosity defects in the sand casting of LM25, using experimentation and ANSYS, is proposed. The objectives successfully achieved are the prediction of the shrinkage porosity distribution in Al-Si casting and the determination of the effectiveness of the investigated function for predicting shrinkage porosity, by correlating the results of the simulation studies with those obtained experimentally. The practical relevance of the research is reflected in the fact that the experiments were performed on 9 different Y-junctions at a foundry, and the data obtained from the experiments were used for the simulations.
NASA Astrophysics Data System (ADS)
Liu, Zexi; Cohen, Fernand
2017-11-01
We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available, using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and the appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, turning a difficult two-dimensional (2-D) face recognition problem into a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under good illumination conditions, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.
Hervás, César; Silva, Manuel; Serrano, Juan Manuel; Orejuela, Eva
2004-01-01
The suitability of an approach for extracting heuristic rules from trained artificial neural networks (ANNs) pruned by a regularization method and with architectures designed by evolutionary computation for quantifying highly overlapping chromatographic peaks is demonstrated. The ANN input data are estimated by the Levenberg-Marquardt method in the form of a four-parameter Weibull curve associated with the profile of the chromatographic band. To test this approach, two N-methylcarbamate pesticides, carbofuran and propoxur, were quantified using a classic peroxyoxalate chemiluminescence reaction as a detection system for chromatographic analysis. Straightforward network topologies (one and two outputs models) allow the analytes to be quantified in concentration ratios ranging from 1:7 to 5:1 with an average standard error of prediction for the generalization test of 2.7 and 2.3% for carbofuran and propoxur, respectively. The reduced dimensions of the selected ANN architectures, especially those obtained after using heuristic rules, allowed simple quantification equations to be developed that transform the input variables into output variables. These equations can be easily interpreted from a chemical point of view to attain quantitative analytical information regarding the effect of both analytes on the characteristics of chromatographic bands, namely profile, dispersion, peak height, and residence time. Copyright 2004 American Chemical Society
NASA Astrophysics Data System (ADS)
Kalkisim, A. T.; Hasiloglu, A. S.; Bilen, K.
2016-04-01
Because the refrigerant R134a, which is used in automobile air conditioning systems, has a high global warming impact and will be phased out gradually, an alternative gas that can be used without major changes to existing air conditioning systems is desired. The aim is to obtain easier solutions for intermediate performance values by creating a neural network model for the case of using a fluid (R152a) in automobile air conditioning systems that has thermodynamic properties close to those of R134a and a near-zero global warming impact. To this end, five different ANN models were trained with three different network structures, and the network structure giving the most accurate result was established by identifying which model trains best with which network structure and makes the most accurate predictions in the light of the data obtained. During training of the artificial neural networks, structures with five inputs and one output were trained using the Quick Propagation, Quasi-Newton, Levenberg-Marquardt, and Conjugate Gradient Descent batch back-propagation methods with various network structures. Over 1500 iterations were evaluated, and the most appropriate model was identified by determining the minimum error rates. The accuracy of the selected ANN model was verified by comparison with estimates made by the multi-regression method.
Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower
NASA Astrophysics Data System (ADS)
Fujii, Kenzo; Yamamoto, Toru
In atmospheric distillation processes, stabilization of the process is required in order to optimize the crude-oil composition with respect to product market conditions. However, the process control systems sometimes fall into unstable states when unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena. This remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using the GMDH (Group Method of Data Handling) network, which is one type of network model. With the GMDH network, the model structure can be determined systematically. However, the least squares method has commonly been utilized in determining the weight coefficients (model parameters), and adequate estimation accuracy cannot always be expected, because what is evaluated is the sum of squared errors between the measured values and the estimates. Therefore, instead of evaluating the sum of squared errors, the sum of the absolute values of the errors is introduced, and the Levenberg-Marquardt method is employed to determine the model parameters. The effectiveness of the proposed method is evaluated through foaming prediction during the crude-oil switching operation in the atmospheric distillation process.
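The motivation for swapping the squared-error criterion for an absolute-error one can be illustrated with SciPy's `least_squares`. Note this is only an analogy, not the paper's scheme: SciPy's pure LM solver supports squared loss only, so the sketch uses the `soft_l1` robust loss (a smooth surrogate for absolute error) with the default trust-region solver, on invented data containing one gross outlier:

```python
import numpy as np
from scipy.optimize import least_squares

# Linear model y = 3x + 1 with one gross outlier at the last point.
x = np.linspace(0.0, 10.0, 21)
y_noisy = 3.0 * x + 1.0
y_noisy[-1] += 50.0                       # gross outlier

residual = lambda p: p[0] * x + p[1] - y_noisy

# Squared loss (ordinary least squares) is pulled toward the outlier;
# soft_l1 grows only linearly for large residuals, like an
# absolute-error criterion, so the outlier is largely ignored.
fit_sq = least_squares(residual, x0=[1.0, 0.0])
fit_l1 = least_squares(residual, x0=[1.0, 0.0], loss="soft_l1")

print("squared-loss slope:", round(fit_sq.x[0], 2))
print("soft_l1-loss slope:", round(fit_l1.x[0], 2))
```

The squared-loss slope is visibly biased by the single bad point, while the robust fit stays near the true slope of 3, which is the same robustness argument the abstract makes for absolute errors.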
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
Implicit shape-based reconstruction in fluorescence molecular tomography (FMT) is capable of achieving higher image clarity than image-based reconstruction. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the utilization of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required, and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
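The abstract does not give the exact cosinoidal parameterization, but the core idea, replacing the sharp Heaviside step with a smooth cosine ramp so the shape indicator becomes differentiable (and hence amenable to Levenberg-Marquardt fitting), can be sketched with a hypothetical cosine smooth step:

```python
import numpy as np

def heaviside_cos(phi, eps=1.0):
    """Cosine smooth step: 0 for phi <= -eps, 1 for phi >= eps, and a
    smooth cosine ramp in between. A hypothetical stand-in for the
    paper's cosinoidal level-set function, not its exact form."""
    z = np.clip(phi, -eps, eps)
    return 0.5 * (1.0 - np.cos(np.pi * (z + eps) / (2.0 * eps)))

# Demo: a disk of radius 3 represented implicitly on a grid.
x = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(x, x)
phi = 3.0 - np.sqrt(X**2 + Y**2)        # signed distance to the circle
ind = heaviside_cos(phi, eps=0.5)       # smooth inside/outside indicator

cell = (x[1] - x[0]) ** 2
area = ind.sum() * cell                 # approximates pi * 3^2
print(round(area, 2))
```

Because the indicator varies smoothly across the boundary, quantities derived from it (such as the area above) have well-defined derivatives with respect to the level-set parameters, which is what a derivative-based solver like LM needs.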
NASA Astrophysics Data System (ADS)
Karlsson, Hanna; Pettersson, Anders; Larsson, Marcus; Strömberg, Tomas
2011-02-01
Model-based analysis of calibrated diffuse reflectance spectroscopy can be used for determining the oxygenation and concentration of skin chromophores. This study aimed at assessing the effect of including melanin in addition to hemoglobin (Hb) as chromophores, and of compensating for inhomogeneously distributed blood (vessel packaging), in a single-layer skin model. Spectra from four humans were collected during different provocations using a two-channel fiber-optic probe with source-detector separations of 0.4 and 1.2 mm. Absolutely calibrated spectra using data from either a single distance or both distances were analyzed using inverse Monte Carlo for light transport and Levenberg-Marquardt for non-linear fitting. The model fitting was excellent using a single distance. However, the estimated model failed to explain spectra from the other distance. The two-distance model did not fit the data well at either distance. Model fitting was significantly improved by including melanin and vessel packaging. The most prominent effect when fitting data from the larger separation compared to the smaller separation was a different light scattering decay with wavelength, while the tissue fraction of Hb and the saturation were similar. For modeling spectra at both distances, we propose using either a multi-layer skin model or a more advanced model for the scattering phase function.
Kaul, D K; Tsai, H M; Liu, X D; Nakada, M T; Nagel, R L; Coller, B S
2000-01-15
Abnormal interaction of sickle red blood cells (SS RBC) with the vascular endothelium has been implicated as a factor in the initiation of vasoocclusion in sickle cell anemia. Both von Willebrand factor (vWf) and thrombospondin (TSP) play important roles in mediating SS RBC-endothelium interaction and can bind to the endothelium via alphaVbeta3 receptors. We have used monoclonal antibodies (MoAb) directed against alphaVbeta3 and alphaIIbbeta3 (GPIIb/IIIa) integrins to dissect the role of these integrins in SS RBC adhesion. The murine MoAb 7E3 inhibits both alphaVbeta3 and alphaIIbbeta3 (GPIIb/IIIa), whereas MoAb LM609 selectively inhibits alphaVbeta3, and MoAb 10E5 binds only to alphaIIbbeta3. In this study, we have tested the capacity of these MoAbs to block platelet-activating factor (PAF)-induced SS RBC adhesion in the ex vivo mesocecum vasculature of the rat. Infusion of washed SS RBC in preparations treated with PAF (200 pg/mL), with or without a control antibody, resulted in extensive adhesion of these cells in venules, accompanied by frequent postcapillary blockage and increased peripheral resistance units (PRU). PAF also caused increased endothelial surface and interendothelial expression of endothelial vWf. Importantly, pretreatment of the vasculature with either MoAb 7E3 F(ab')(2) or LM609, but not 10E5 F(ab')(2), after PAF almost completely inhibited SS RBC adhesion in postcapillary venules, the sites of maximal adhesion and frequent blockage. The inhibition of adhesion with 7E3 or LM609 was accompanied by smaller increases in PRU and shorter pressure-flow recovery times. Thus, blockade of alphaVbeta3 may constitute a potential therapeutic approach to prevent SS RBC-endothelium interactions under flow conditions. (Blood. 2000;95:368-374)
Adamson, Jason; Jaunky, Tomasz; Thorne, David; Gaça, Marianna D
2018-03-01
Traditional in vitro exposure to combustible tobacco products utilises exposure systems that include the use of smoking machines to generate, dilute and deliver smoke to in vitro cell cultures. With reported lower emissions from next generation tobacco and nicotine products (NGPs), including e-cigarettes and tobacco heating products (THPs), diluting the aerosol is potentially not required. Herein we present a simplified exposure scenario for undiluted NGP aerosols, using a new puffing system called the LM4E. Nicotine delivery from an e-cigarette was used as a dosimetry marker, and was measured at source across 4 LM4E ports and in the exposure chamber. Cell viability studies, using the Neutral Red Uptake (NRU) assay, were performed using H292 human lung epithelial cells, testing undiluted aerosols from an e-cigarette and a THP. Mean e-cigarette nicotine generated at source was measured at 0.084 ± 0.005 mg/puff, with no significant differences in delivery across the 4 different ports, p = 0.268 (n = 10/port). Mean nicotine delivery from the e-cigarette to the in vitro exposure chamber (measured up to 100 puffs) was 0.046 ± 0.006 mg/puff, p = 0.061. Aerosol penetration within the LM4E was 55% from source to chamber. H292 cells were exposed to undiluted e-cigarette aerosol for 2 h (240 puffs) or undiluted THP aerosol for 1 h (120 puffs). There were positive correlations between puff number and nicotine in the exposed culture media, R² = 0.764 for the e-cigarette and R² = 0.970 for the THP. NRU-determined cell viability for e-cigarettes after 2 h of exposure resulted in 21.5 ± 17.0% cell survival; for the THP, however, full cytotoxicity was reached after 1 h of exposure. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Study and Modeling of the Impact of TID on the ATREE Response in LM124 Operational Amplifier
NASA Astrophysics Data System (ADS)
Roig, Fabien; Dusseau, L.; Ribeiro, P.; Auriel, G.; Roche, N. J.-H.; Privat, A.; Vaillé, J.-R.; Boch, J.; Saigné, F.; Marec, R.; Calvel, P.; Bezerra, F.; Ecoffet, R.; Azais, B.
2014-08-01
Shapes of ATREEs (Analog Transient Radiation Effects on Electronics) in a bipolar integrated circuit change with exposure to Total Ionizing Dose (TID) radiation. The impact of TID on ATREEs is investigated in the LM124 operational amplifier (opamp) from three different manufacturers. Significant variations are observed in the ATREE responses from the different manufacturers. The ATREEs are produced by pulsed X-ray experiments. ASET laser mappings are performed to highlight the sensitive bipolar transistors, explaining the variations in the ATREE phenomena from one manufacturer to another. ATREE modeling results are presented using a previously developed simulation tool. Good agreement is observed between experimental ATREE responses and model outputs regardless of the TID level, the prompt dose level, the amplifier configuration, and the device manufacturer.
Neodymium-140 DOTA-LM3: Evaluation of an In Vivo Generator for PET with a Non-Internalizing Vector.
Severin, Gregory W; Kristensen, Lotte K; Nielsen, Carsten H; Fonslet, Jesper; Jensen, Andreas I; Frellsen, Anders F; Jensen, K M; Elema, Dennis R; Maecke, Helmut; Kjær, Andreas; Johnston, Karl; Köster, Ulli
2017-01-01
140Nd (t1/2 = 3.4 days), owing to its short-lived positron emitting daughter 140Pr (t1/2 = 3.4 min), has promise as an in vivo generator for positron emission tomography (PET). However, the electron capture decay of 140Nd is chemically disruptive to macrocycle-based radiolabeling, meaning that an in vivo redistribution of the daughter 140Pr is expected before positron emission. The purpose of this study was to determine how the delayed positron from the de-labeled 140Pr affects preclinical imaging with 140Nd. To explore the effect, 140Nd was produced at CERN-ISOLDE, reacted with the somatostatin analogue DOTA-LM3 (1,4,7,10-tetraazacyclododecane, 1,4,7-triacetic acid, 10-acetamide N-p-Cl-Phecyclo(d-Cys-Tyr-d-4-amino-Phe(carbamoyl)-Lys-Thr-Cys)d-Tyr-NH2) and injected into H727 xenograft bearing mice. Comparative pre- and post-mortem PET imaging at 16 h postinjection was used to quantify the in vivo redistribution of 140Pr following 140Nd decay. The somatostatin receptor-positive pancreas exhibited the highest tissue accumulation of 140Nd-DOTA-LM3 (13% ID/g at 16 h) coupled with the largest observed redistribution rate, where 56 ± 7% (n = 4, mean ± SD) of the in situ produced 140Pr washed out of the pancreas before decay. Contrastingly, the liver, spleen, and lungs acted as strong sink organs for free 140Pr3+. Based upon these results, we conclude that 140Nd imaging with a non-internalizing vector convolutes the biodistribution of the tracer with the accumulation pattern of free 140Pr. This redistribution phenomenon may show promise as a probe of the cellular interaction with the vector, such as in determining tissue-dependent internalization behavior.
Performance of High Temperature Operational Amplifier, Type LM2904WH, under Extreme Temperatures
NASA Technical Reports Server (NTRS)
Patterson, Richard; Hammoud, Ahmad; Elbuluk, Malik
2008-01-01
Operation of electronic parts and circuits under extreme temperatures is anticipated in NASA space exploration missions as well as terrestrial applications. Exposure of electronics to extreme temperatures and wide-range thermal swings greatly affects their performance via induced changes in the semiconductor material properties, packaging and interconnects, or due to incompatibility issues between interfaces that result from thermal expansion/contraction mismatch. Electronics that are designed to withstand operation and perform efficiently in extreme temperatures would mitigate risks of failure due to thermal stresses and, therefore, improve system reliability. In addition, they contribute to reducing system size and weight, simplifying system design, and reducing development cost through the elimination of the thermal control elements otherwise required for proper ambient operation. An operational amplifier with large DC voltage gain (100 dB) and a maximum junction temperature of 150 °C was recently introduced by STMicroelectronics [1]. This LM2904WH chip comes in a plastic package and is designed specifically for automotive and industrial control systems. It operates from a single power supply over a wide range of voltages, and it consists of two independent, high-gain, internally frequency-compensated operational amplifiers. Table I shows some of the device manufacturer's specifications.
Hussain, Amara Noor; Zafar, Muhammad; Ahmad, Mushtaq; Khan, Raees; Yaseen, Ghulam; Khan, Muhammad Saleem; Nazir, Abdul; Khan, Amir Muhammad; Shaheen, Shabnum
2018-05-01
Palynological features as well as comparative foliar epidermal anatomy of 17 species (10 genera) of Amaranthaceae have been studied using light microscopy (LM) and scanning electron microscopy (SEM) for their taxonomic significance. Different foliar and palynological micro-morphological characters were examined to explain their value in resolving identification difficulties. All species were amphistomatic, but stomata were more abundant on the abaxial surface. Taxonomically significant epidermal characters, including stomata type, trichomes (unicellular, multicellular, and capitate), and epidermal cell shapes (polygonal and irregular), were also observed. Pollen grains of this family are polypantoporate and spheroidal with large pores; the mesoporous region is sparsely scabrate to densely psilate and spinulose. All these characters can be diagnostic at the species level for identification purposes. This study indicates that, at different taxonomic levels, LM and SEM pollen and epidermal morphology is informative and significant for identifying species and genera. © 2018 Wiley Periodicals, Inc.
Sajjad, Wasim; Qadir, Sundas; Ahmad, Manzoor; Rafiq, Muhammad; Hasan, Fariha; Tehan, Richard; McPhail, Kerry L; Shah, Aamer Ali
2018-05-04
The current study was conducted to investigate the possible role of a compatible solute from a radio-halophilic bacterium against desiccation and ultraviolet radiation induced oxidative stress. Nine different radio-resistant bacteria were isolated from desert soil, and strain WMA-LM19 was chosen for detailed studies on the basis of its high tolerance to ultraviolet radiation among all these isolates. 16S rRNA gene sequencing indicated the bacterium was closely related to Stenotrophomonas sp. (KT008383). A bacterial milking strategy was applied for extraction of intracellular compatible solutes in 70% (v/v) ethanol, which were purified by high-performance liquid chromatography (HPLC). The compound was characterized as ectoine by ¹H and ¹³C nuclear magnetic resonance (NMR) and mass spectrometry (MS). Ectoine inhibited oxidative damage to proteins and lipids in comparison to the standard ascorbic acid. It also demonstrated more efficient prevention (54.80%) of surfactant-induced erythrocyte membrane lysis than lecithin. Furthermore, a high level of ectoine-mediated protection of bovine serum albumin against ionizing radiation (1500-2000 J m⁻²) was observed, as indicated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) analysis. The results indicated that ectoine from Stenotrophomonas sp. WMA-LM19 can be used as a potential mitigator and radio-protective agent to overcome radiation- and salinity-mediated oxidative damage in extreme environments. Due to its antioxidant properties, ectoine from a radio-halophilic bacterium might be used in sunscreen formulations for protection against UV-induced oxidative stress. This article is protected by copyright. All rights reserved.
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of the state of the art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative-free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions, where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases
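A minimal two-stage sketch of the global-local idea described above, assuming SciPy: the paper's PSO stage is replaced by differential evolution (SciPy ships no PSO) and the Powell stage is omitted, with a Levenberg-Marquardt polish fitting a Freundlich isotherm to synthetic data; all names and parameter values are illustrative, not from the study.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

# Synthetic Freundlich sorption data q = Kf * C**n (Kf=2.0, n=0.6) plus noise.
rng = np.random.default_rng(0)
C = np.linspace(0.1, 10.0, 15)                        # aqueous concentration
q_obs = 2.0 * C**0.6 + rng.normal(0.0, 0.02, C.size)  # sorbed concentration

def wssr(p):
    """Sum of squared residuals (unit weights) for a Freundlich fit."""
    Kf, n = p
    return float(np.sum((q_obs - Kf * C**n) ** 2))

# Stage 1: global search over broad parameter bounds.
glob = differential_evolution(wssr, bounds=[(0.01, 10.0), (0.01, 2.0)], seed=1)

# Stage 2: Levenberg-Marquardt refinement started from the global optimum.
loc = least_squares(lambda p: q_obs - p[0] * C**p[1], glob.x, method="lm")
Kf_hat, n_hat = loc.x
```

The division of labor mirrors the paper's rationale: the global stage escapes local WSSR minima, while the gradient-based polish sharpens the estimate near the basin it lands in.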
Huang, Lihan; Hwang, Andy; Phillips, John
2011-10-01
The objective of this work is to develop a mathematical model for evaluating the effect of temperature on the rate of microbial growth. The new mathematical model is derived by combining and modifying the Arrhenius equation and the Eyring-Polanyi transition theory. The new model, suitable for both the suboptimal and the entire growth temperature ranges, was validated using a collection of 23 selected temperature-growth rate curves belonging to 5 groups of microorganisms, including Pseudomonas spp., Listeria monocytogenes, Salmonella spp., Clostridium perfringens, and Escherichia coli, from the published literature. The curve fitting is accomplished by nonlinear regression using the Levenberg-Marquardt algorithm. The resulting estimated growth rate (μ) values are highly correlated with the data collected from the literature (R² = 0.985, slope = 1.0, intercept = 0.0). The bias factor (Bf) of the new model is very close to 1.0, while the accuracy factor (Af) ranges from 1.0 to 1.22 for most data sets. The new model compares favorably with the Ratkowsky square root model and the Eyring equation. Even with more parameters, the Akaike information criterion, Bayesian information criterion, and mean square errors of the new model are not statistically different from those of the square root model and the Eyring equation, suggesting that the model can be used to describe the inherent relationship between temperature and microbial growth rates. The results of this work show that the new growth rate model is suitable for describing the effect of temperature on microbial growth rate. Practical Application: Temperature is one of the most significant factors affecting the growth of microorganisms in foods. This study attempts to develop and validate a mathematical model to describe the temperature dependence of microbial growth rate. The findings show that the new model is accurate and can be used to describe the effect of temperature on microbial growth rate in foods.
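The abstract does not give the new model's closed form, so as a stand-in the sketch below fits the Ratkowsky square-root model it is benchmarked against, using SciPy's Levenberg-Marquardt solver; the temperature-growth data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Ratkowsky square-root model for the suboptimal temperature range:
# sqrt(mu) = b * (T - Tmin)  =>  mu = (b * (T - Tmin))**2
def ratkowsky(T, b, Tmin):
    return (b * (T - Tmin)) ** 2

T = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])  # temperature, deg C
mu = (0.04 * (T - 3.0)) ** 2                         # noise-free synthetic rates

# Levenberg-Marquardt fit (curve_fit's default for unbounded problems).
popt, pcov = curve_fit(ratkowsky, T, mu, p0=[0.05, 0.0], method="lm")
b_hat, Tmin_hat = popt
```

With real growth-rate data the same call returns `pcov`, from which parameter standard errors (and hence bias/accuracy factors) can be assessed.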
NASA Astrophysics Data System (ADS)
Soltani, M.; Kunstmann, H.; Laux, P.; Mauder, M.
2016-12-01
In mountainous and prealpine regions, ecohydrological processes exhibit rapid changes within short distances due to the complex orography and strong elevation gradients. Water and energy fluxes between the land surface and the atmosphere are crucial drivers for nearly all ecosystem processes. The aim of this research is to analyze the variability of surface water and energy fluxes by both comprehensive observational hydrometeorological data analysis and process-based high-resolution hydrological modeling for a mountainous and prealpine region in Germany. We particularly focus on the closure of the observed energy balance and on the added value of energy flux observations for parameter estimation in our hydrological model (GEOtop) by inverse modeling using PEST. Our study area is the catchment of the river Rott (55 km²), part of the TERENO prealpine observatory in Southern Germany, and we focus particularly on the observations during the summer episode May to July 2013. We present the coupling of GEOtop and the parameter estimation tool PEST, which is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. Analysis of the surface energy partitioning revealed that the latent heat flux was the main consumer of available energy. The relative imbalance was largest during nocturnal periods. An energy imbalance was observed at the eddy-covariance site Fendt due to either underestimated turbulent fluxes or overestimated available energy. The calculation of the simulated energy and water balances for the entire catchment indicated that 78% of net radiation leaves the catchment as latent heat flux, 17% as sensible heat, and 5% enters the soil in the form of soil heat flux. 45% of the catchment-aggregated precipitation leaves the catchment as discharge and 55% as evaporation. Using the developed GEOtop-PEST interface, the hydrological model is calibrated by comparing
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
Improvements to the ion Doppler spectrometer diagnostic on the HIT-SI experiments.
Hossack, Aaron; Chandra, Rian; Everson, Chris; Jarboe, Tom
2018-03-01
An ion Doppler spectrometer diagnostic system measuring impurity ion temperature and velocity on the HIT-SI and HIT-SI3 spheromak devices has been improved with higher spatiotemporal resolution and lower error than previously described devices. Hardware and software improvements to the established technique have resulted in a record of 6.9 μs temporal and ≤2.8 cm spatial resolution in the midplane of each device. These allow C III and O II flow, displacement, and temperature profiles to be observed simultaneously. With 72 fused-silica fiber channels in two independent bundles, and an f/8.5 Czerny-Turner spectrometer coupled to a video camera, frame rates of up to ten times the imposed magnetic perturbation frequency of 14.5 kHz were achieved in HIT-SI, viewing the upper half of the midplane. In HIT-SI3, frame rates of up to eight times the perturbation frequency were achieved viewing both halves of the midplane. Biorthogonal decomposition is used as a novel filtering tool, reducing uncertainty in ion temperature from ≲13 to ≲5 eV (with an instrument temperature of 8-16 eV) and uncertainty in velocity from ≲2 to ≲1 km/s. Doppler shift and broadening are calculated via the Levenberg-Marquardt algorithm, after which the errors in velocity and temperature are uniquely specified. Axisymmetric temperature profiles on HIT-SI3 for C III peaked near the inboard current separatrix at ≈40 eV are observed. Axisymmetric plasma displacement profiles have been measured on HIT-SI3, peaking at ≈6 cm at the outboard separatrix. Both profiles agree with the upper half of the midplane observable by HIT-SI. With its complete midplane view, HIT-SI3 has unambiguously extracted axisymmetric, toroidal-current-dependent rotation of up to 3 km/s. Analysis of the temporal phase of the displacement uncovers a coherent structure, locked to the applied perturbation. Previously described diagnostic systems could not achieve such results.
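The Doppler-fit step can be sketched as a Levenberg-Marquardt Gaussian fit, with the centroid shift giving line-of-sight velocity; the C III rest wavelength below is a nominal literature value and the synthetic line parameters are illustrative, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

C_LIGHT = 2.998e8   # speed of light, m/s
LAM0 = 464.742e-9   # nominal C III rest wavelength, m (assumed)

# Synthetic spectral line: wavelength offset from line center in picometers,
# shifted by 4 pm and broadened to an 8 pm Gaussian sigma.
dl = np.linspace(-30.0, 30.0, 61)
counts = 100.0 * np.exp(-(dl - 4.0) ** 2 / (2 * 8.0 ** 2))

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# Levenberg-Marquardt fit of amplitude, centroid, and width.
popt, _ = curve_fit(gauss, dl, counts, p0=[90.0, 0.0, 10.0], method="lm")
shift_m = popt[1] * 1e-12          # fitted centroid shift, m
velocity = C_LIGHT * shift_m / LAM0  # Doppler velocity, m/s (km/s scale here)
```

The fitted width maps to ion temperature the same way, via the thermal Doppler broadening relation.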
Gamma Spectroscopy by Artificial Neural Network Coupled with MCNP
NASA Astrophysics Data System (ADS)
Sahiner, Huseyin
While neutron activation analysis is widely used in many areas, the sensitivity of the analysis depends on how it is conducted. Although the technique carries error compared to chemical analysis, its sensitivity reaches the parts-per-million, and sometimes parts-per-billion, range. Due to this sensitivity, neutron activation analysis becomes important when analyzing bio-samples. Artificial neural networks are an attractive technique for complex systems. Although there are neural network applications to spectral analysis, training on simulated data to analyze experimental data has not been attempted before. This study offers an improvement on spectral analysis and an optimization of the neural network for this purpose. The work considers five elements regarded as trace elements in bio-samples. However, the system is not limited to five elements; the only limitation of the study comes from data library availability in MCNP. A perceptron network was employed to identify five elements from gamma spectra. In quantitative analysis, better results were obtained when the neural fitting tool in MATLAB was used. As a training function, the Levenberg-Marquardt algorithm was used with 23 neurons in the hidden layer and 259 gamma spectra in the input. Because the study focuses on five elements, five neurons representing the peak counts of five isotopes were used in the input layer. Five output neurons revealed mass information of these elements from irradiated kidney stones. Maximum errors of 17.9% in APA, 24.9% in UA, 28.2% in COM, and 27.9% in STRU stone types showed the success of the neural network approach in analyzing gamma spectra. This high error was attributed to Zn, which has a very long decay half-life compared to the other elements. The simulation and experiments were made under a specific experimental setup (3 hours irradiation, 96 hours decay time, 8 hours counting time). Nevertheless, the approach can be generalized to different setups.
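MATLAB's Levenberg-Marquardt network training has no direct one-line Python equivalent, but the mechanics can be sketched by handing a tiny feedforward network's residuals to SciPy's LM solver; the architecture and the toy regression target here are illustrative, not the 23-neuron network of the study.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 40)
y = np.sin(np.pi * X)   # toy 1-D regression target

H = 5                   # hidden tanh neurons (illustrative)

def forward(p, x):
    """One-hidden-layer network: p packs w1, b1 (H each), w2 (H), b2 (1)."""
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[3*H]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def residuals(p):
    return forward(p, X) - y

p0 = rng.normal(0.0, 0.5, 3 * H + 1)
rmse0 = np.sqrt(np.mean(residuals(p0) ** 2))

# Levenberg-Marquardt on the network weights, in the spirit of MATLAB's trainlm.
fit = least_squares(residuals, p0, method="lm")
rmse = np.sqrt(np.mean(fit.fun ** 2))
```

Treating training as nonlinear least squares is what makes LM attractive for small networks: the Jacobian over all weights is cheap, and convergence is much faster than plain gradient descent.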
Forecasting the Onset Time of Volcanic Eruptions Using Ground Deformation Data
NASA Astrophysics Data System (ADS)
Blake, S.; Cortes, J. A.
2016-12-01
The pre-eruptive inflation of the ground surface is a well-known phenomenon at many volcanoes. In a number of intensively studied cases, elevation and/or radial tilt increase with time (t) towards a limiting value by following a decaying exponential with characteristic timescale τ (Kilauea and Mauna Loa: Dvorak and Okamura 1987, Lengliné et al., 2008) or, after sufficiently long times, by following the sum of two such functions such that two timescales, τ1 and τ2, are required to describe the temporal pattern of inflation (Axial Seamount: Nooner and Chadwick, 2009). We have used the Levenberg-Marquardt non-linear fit algorithm to analyse data for 18 inflation periods at Krafla volcano, Iceland (Björnsson and Eysteinsson, 1998), and found the same functional relationship. Pooling all of the available data from 25 eruptions at 4 volcanoes shows that the duration of inflation before an eruption or shallow intrusion (t*) is comparable to τ (or the longer of τ1 and τ2) and follows an almost 1:1 linear relationship (r² ≈ 0.8). We also find that this scaling is replicated by Monte Carlo simulations of physics-based forward models of hydraulically connected dual magma chamber systems which erupt when the chamber pressure reaches a threshold value. These results lead to a new forecasting method which we describe and assess here: if τ can be constrained during an on-going inflation period, then the statistical distribution of t*/τ values calibrated from other pre-eruptive inflation periods allows the probability of an eruption starting before (or after) a specified time to be estimated. The time at which there is a specified probability of an eruption starting can also be forecast. These approaches rely on fitting deformation data up to time t in order to obtain τ(t), which is then used to forecast t*. Forecasts can be updated after each new deformation measurement.
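The fitting step can be sketched for the single-timescale case: u(t) = u_inf(1 - exp(-t/τ)) is fitted by Levenberg-Marquardt and τ is then read as a point forecast of onset time via the roughly 1:1 t*/τ scaling; the deformation record below is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def inflation(t, u_inf, tau):
    """Single-timescale pre-eruptive inflation: u_inf * (1 - exp(-t/tau))."""
    return u_inf * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 30.0, 31)   # days since inflation onset
u = inflation(t, 12.0, 9.0)      # synthetic tilt/elevation record

# Levenberg-Marquardt fit of amplitude and characteristic timescale.
popt, _ = curve_fit(inflation, t, u, p0=[10.0, 5.0], method="lm")
u_inf_hat, tau_hat = popt

# The abstract's t*/tau ~ 1 scaling makes tau itself a point onset forecast;
# the calibrated t*/tau distribution would turn this into a probability.
t_star_forecast = tau_hat
```

In practice the fit is repeated after each new measurement, so τ(t), and with it the forecast, is continually updated as the inflation period progresses.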
NASA Astrophysics Data System (ADS)
Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto
2016-08-01
We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that the drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point related to applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded in Darcy's law. In these cases, introducing reciprocal drawdown curves in the database of an inverse problem is, to a certain extent, equivalent to duplicating some information. We present a theoretical analysis of the way a least-squares objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, possibly corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems with increasing levels of complexity. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model. The uncertainty on optimal parameter estimates is
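The duplication point can be illustrated with a toy least-squares problem: appending an exact copy of (error-free) data, as a strictly reciprocal curve would be, rescales the objective but leaves the Levenberg-Marquardt minimizer unchanged; the log-time drawdown model below is purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(1.0, 10.0, 20)
drawdown = 3.0 * np.log(t) + 1.5   # synthetic, error-free "drawdown" record

def resid(p, tt, dd):
    """Residuals of a two-parameter log-time drawdown model."""
    return p[0] * np.log(tt) + p[1] - dd

fit_single = least_squares(resid, [1.0, 0.0], args=(t, drawdown), method="lm")

# Append a verbatim copy of the data, mimicking a perfectly reciprocal curve.
t2 = np.concatenate([t, t])
d2 = np.concatenate([drawdown, drawdown])
fit_doubled = least_squares(resid, [1.0, 0.0], args=(t2, d2), method="lm")
```

With measurement noise the two reciprocal curves differ slightly, which is why the abstract reports a limited, rather than strictly zero, influence on the estimates.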
Khanali, Majid; Mobli, Hossein; Hosseinzadeh-Bandbafha, Homa
2017-12-01
In this study, an artificial neural network (ANN) model was developed for predicting the yield and life cycle environmental impacts based on energy inputs required in processing of black tea, green tea, and oolong tea in Guilan province of Iran. A life cycle assessment (LCA) approach was used to investigate the environmental impact categories of processed tea based on the cradle-to-gate approach, i.e., from the production of input materials from raw materials to the gate of the tea processing units, i.e., packaged tea. Thus, all the tea processing operations such as withering, rolling, fermentation, drying, and packaging were considered in the analysis. The initial data were obtained from tea processing units, while the required data about the background system were extracted from the EcoInvent 2.2 database. LCA results indicated that diesel fuel and corrugated paper box used in drying and packaging operations, respectively, were the main hotspots. The black tea processing unit caused the highest pollution among the three processing units. Three feed-forward back-propagation ANN models based on the Levenberg-Marquardt training algorithm, with two hidden layers accompanied by sigmoid activation functions and a linear transfer function in the output layer, were applied for the three types of processed tea. The neural networks were developed based on energy equivalents of eight different input parameters (energy equivalents of fresh tea leaves, human labor, diesel fuel, electricity, adhesive, carton, corrugated paper box, and transportation) and 11 output parameters (yield, global warming, abiotic depletion, acidification, eutrophication, ozone layer depletion, human toxicity, freshwater aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, and photochemical oxidation). The results showed that the developed ANN models, with R² values in the range of 0.878 to 0.990, had excellent performance in predicting all the output variables based on inputs. Energy consumption for
Can diffusion-weighted imaging serve as a biomarker of fibrosis in pancreatic adenocarcinoma?
Hecht, Elizabeth M; Liu, Michael Z; Prince, Martin R; Jambawalikar, Sachin; Remotti, Helen E; Weisberg, Stuart W; Garmon, Donald; Lopez-Pintado, Sara; Woo, Yanghee; Kluger, Michael D; Chabot, John A
2017-08-01
To assess the relationship between diffusion-weighted imaging (DWI) and intravoxel incoherent motion (IVIM)-derived quantitative parameters (apparent diffusion coefficient [ADC], perfusion fraction [f], diffusion coefficient [D, or D_slow], and pseudodiffusion coefficient [D*, or D_fast]) and histopathology in pancreatic adenocarcinoma (PAC). Subjects with suspected surgically resectable PAC were prospectively enrolled in this Health Insurance Portability and Accountability Act (HIPAA)-compliant, Institutional Review Board-approved study. Imaging was performed at 1.5T with a respiratory-triggered echo planar DWI sequence using 10 b values. Two readers drew regions of interest (ROIs) over the tumor and adjacent nontumoral tissue. Monoexponential and biexponential fits were used to derive ADC_2b, ADC_all, f, D, and D*, which were compared to quantitative histopathology of fibrosis, mean vascular density, and cellularity. Two biexponential IVIM models were investigated and compared: 1) nonlinear least-squares fitting based on the Levenberg-Marquardt algorithm, and 2) a linear fit using a fixed D* (20 mm²/s). Statistical analysis included Student's t-test, Pearson correlation (P < 0.05 was considered significant), intraclass correlation, and coefficients of variance. Twenty subjects with PAC were included in the final cohort. A negative correlation between D and fibrosis (Reader 2: r = -0.57, P = 0.01; pooled: r = -0.46, P = 0.04) was observed, with a trend toward positive correlation between f and fibrosis (r = 0.44, P = 0.05). ADC_2b was significantly lower in PAC with dense fibrosis than with loose fibrosis (P = 0.03). Inter- and intrareader agreement was excellent for ADC, D, and f. In PAC, D negatively correlates with fibrosis, with a trend toward positive correlation with f, suggesting both perfusion and diffusion effects contribute to stromal desmoplasia. ADC_2b is significantly lower in tumors with dense fibrosis and may serve as a
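The biexponential IVIM signal model can be sketched as below, fitted with Levenberg-Marquardt (model 1 in the abstract); the b-values and parameter magnitudes are typical literature values, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Biexponential IVIM model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

b = np.array([0., 10., 20., 40., 60., 100., 200., 400., 600., 800.])  # s/mm^2
signal = ivim(b, 0.15, 0.020, 0.0012)  # noise-free synthetic normalized signal

# Full nonlinear Levenberg-Marquardt fit of f, D*, and D simultaneously.
popt, _ = curve_fit(ivim, b, signal, p0=[0.1, 0.01, 0.001], method="lm")
f_hat, d_star_hat, d_hat = popt
```

Fixing D* (the study's second model) trades this joint fit for a more stable two-step estimate, since D* is the least robust parameter at clinical noise levels.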
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
NASA Astrophysics Data System (ADS)
Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.
2016-09-01
Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the qualitative mechanical properties of a weld joint. Traditionally, the weld input parameters must be determined for every new welded product to obtain a quality weld joint, which is time-consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data was constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performances of the Levenberg-Marquardt back-propagation neural network and the radial basis neural network (RBNN) were compared on various randomly generated test cases, which are different from the training cases. From the results, it is interesting to note that for these test cases RBNN analysis gave improved training results compared to feed-forward back-propagation neural network analysis. Also, RBNN analysis showed a pattern of increasing performance as the data points moved away from the initial input values.
Chroma intra prediction based on inter-channel correlation for HEVC.
Zhang, Xingyu; Gisquet, Christophe; François, Edouard; Zou, Feng; Au, Oscar C
2014-01-01
In this paper, we investigate a new inter-channel coding mode called LM mode proposed for the next generation video coding standard called high efficiency video coding. This mode exploits inter-channel correlation using reconstructed luma to predict chroma linearly with parameters derived from neighboring reconstructed luma and chroma pixels at both encoder and decoder to avoid overhead signaling. In this paper, we analyze the LM mode and prove that the LM parameters for predicting original chroma and reconstructed chroma are statistically the same. We also analyze the error sensitivity of the LM parameters. We identify some LM mode problematic situations and propose three novel LM-like modes called LMA, LML, and LMO to address the situations. To limit the increase in complexity due to the LM-like modes, we propose some fast algorithms with the help of some new cost functions. We further identify some potentially-problematic conditions in the parameter estimation (including regression dilution problem) and introduce a novel model correction technique to detect and correct those conditions. Simulation results suggest that considerable BD-rate reduction can be achieved by the proposed LM-like modes and model correction technique. In addition, the performance gain of the two techniques appears to be essentially additive when combined.
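The linear luma-to-chroma prediction at the heart of the LM mode reduces to a two-parameter least-squares fit over the neighbouring reconstructed samples; a minimal sketch with made-up pixel values (ignoring the fixed-point integer arithmetic a real codec would use):

```python
import numpy as np

def lm_parameters(neigh_luma, neigh_chroma):
    """Derive (alpha, beta) by least squares from neighbouring reconstructed
    luma/chroma samples, as in chroma-from-luma prediction:
        alpha = cov(L, C) / var(L),  beta = mean(C) - alpha * mean(L)
    """
    L = np.asarray(neigh_luma, dtype=float)
    C = np.asarray(neigh_chroma, dtype=float)
    var = np.mean(L * L) - np.mean(L) ** 2
    alpha = (np.mean(L * C) - np.mean(L) * np.mean(C)) / var
    beta = np.mean(C) - alpha * np.mean(L)
    return alpha, beta

# Toy neighbourhood where chroma really is a linear function of luma
luma = np.array([60, 80, 100, 120, 140, 160], dtype=float)
chroma = 0.5 * luma + 10

alpha, beta = lm_parameters(luma, chroma)
print(alpha, beta)  # ≈ 0.5, 10.0 on this exactly linear toy data

# Predict the chroma block from the co-located reconstructed luma block
block_luma = np.array([[90, 110], [130, 150]], dtype=float)
pred_chroma = alpha * block_luma + beta
```

The error-sensitivity analysis in the paper concerns exactly these two derived parameters: when `var(L)` is small or the neighbourhood is unrepresentative (the regression-dilution situation), alpha and beta become unreliable, which motivates the proposed model correction.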
Complexity and Hopf Bifurcation Analysis on a Kind of Fractional-Order IS-LM Macroeconomic System
NASA Astrophysics Data System (ADS)
Ma, Junhai; Ren, Wenbo
On the basis of our previous research, we deepen and complete a kind of macroeconomic IS-LM model using fractional-order calculus theory, which is a good reflection of the memory characteristics of economic variables. We also focus on the influence of the variables on the real system, and improve the analysis capabilities of the traditional economic models to suit the actual macroeconomic environment. The conditions of Hopf bifurcation in fractional-order system models are briefly demonstrated, and the fractional order at which Hopf bifurcation occurs is calculated, showing the inherent complex dynamic characteristics of the system. With numerical simulation, bifurcation, strange attractors, limit cycles, waveforms and other complex dynamic characteristics are given, and the order condition is obtained with respect to time. We find that the system order has an important influence on the running state of the system. The system exhibits periodic motion when the order meets the conditions of Hopf bifurcation; the fractional-order system gradually stabilizes with changes in the order and parameters, while the corresponding integer-order system diverges. This study has certain significance for policy-making about macroeconomic regulation and control.
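Fractional-order dynamics of this kind are typically integrated with a Grünwald-Letnikov discretization; a minimal sketch on the scalar relaxation equation D^q x = -x (a stand-in, not the authors' IS-LM system) illustrates the slow, memory-driven decay:

```python
import numpy as np

def gl_fractional_ode(f, q, x0, h, n):
    """Integrate the fractional ODE D^q x = f(x) (0 < q < 1) with the
    Grünwald-Letnikov scheme:
        x_k = h^q * f(x_{k-1}) - sum_{j=1..k} c_j * x_{k-j},
    where c_j are the GL binomial weights c_j = c_{j-1} * (1 - (1+q)/j)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (1.0 + q) / j)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(1, n + 1):
        memory = np.dot(c[1:k + 1], x[k - 1::-1])  # full-memory convolution
        x[k] = h ** q * f(x[k - 1]) - memory
    return x

# Fractional relaxation D^0.9 x = -x decays like a Mittag-Leffler function,
# slower than the integer-order exponential: the "memory" effect that the
# fractional IS-LM analysis exploits.
x = gl_fractional_ode(lambda v: -v, q=0.9, x0=1.0, h=0.01, n=500)
print(x[0], x[-1])  # decays from 1 toward 0
```

The every-past-state convolution in `memory` is what makes the system's current state depend on its whole history, in contrast to an integer-order model.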
Lee, M.; Malyshev, S.; Shevliakova, E.; Milly, Paul C. D.; Jaffé, P. R.
2014-01-01
We developed a process model, LM3-TAN, to assess the combined effects of direct human influences and climate change on terrestrial and aquatic nitrogen (TAN) cycling. The model was developed by expanding NOAA's Geophysical Fluid Dynamics Laboratory land model LM3V-N of coupled terrestrial carbon and nitrogen (C-N) cycling and including new N cycling processes and inputs such as soil denitrification, point N sources to streams (i.e., sewage), and stream transport and microbial processes. Because the model integrates ecological, hydrological, and biogeochemical processes, it captures key controls of the transport and fate of N in the vegetation–soil–river system in a comprehensive and consistent framework which is responsive to climatic variations and land-use changes. We applied the model at 1/8° resolution for a study of the Susquehanna River Basin. Using LM3-TAN, we simulated stream dissolved organic-N, ammonium-N, and nitrate-N loads throughout the river network, and we evaluated the modeled loads for 1986–2005 using data from 16 monitoring stations as well as a reported budget for the entire basin. By accounting for interannual hydrologic variability, the model was able to capture interannual variations of stream N loadings. While the model was calibrated with the stream N loads only at the last downstream Susquehanna River Basin Commission station Marietta (40°02' N, 76°32' W), it captured the N loads well at multiple locations within the basin with different climate regimes, land-use types, and associated N sources and transformations in the sub-basins. Furthermore, the calculated and previously reported N budgets agreed well at the level of the whole Susquehanna watershed. Here we illustrate how point and non-point N sources contributing to the various ecosystems are stored, lost, and exported via the river. Local analysis of six sub-basins showed combined effects of land use and climate on soil denitrification rates, with the highest rates in the
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and explains why the development of a new-generation HYDRA-IBRAE/LM code is necessary. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and the phenomena are singled out that require a detailed analysis and development of the models to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models to describe the processes that take place during the steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and
NASA Astrophysics Data System (ADS)
Lobanov, P. D.; Usov, E. V.; Butov, A. A.; Pribaturin, N. A.; Mosunova, N. A.; Strizhov, V. F.; Chukhno, V. I.; Kutlimetov, A. E.
2017-10-01
Experiments with impulse gas injection into model coolants, such as water or the Rose alloy, performed at the Novosibirsk Branch of the Nuclear Safety Institute, Russian Academy of Sciences, are described. The test facility and the experimental conditions are presented in detail. The dependence of coolant pressure on the injected gas flow and the time of injection was determined. The purpose of these experiments was to verify the physical models of thermohydraulic codes for calculation of the processes that could occur during the rupture of tubes of a steam generator with heavy liquid metal coolant or during fuel rod failure in water-cooled reactors. The experimental results were used for verification of the HYDRA-IBRAE/LM system thermohydraulic code developed at the Nuclear Safety Institute, Russian Academy of Sciences. The models of gas bubble transportation in a vertical channel that are used in the code are described in detail. A two-phase flow pattern diagram and correlations for prediction of friction of bubbles and slugs as they float up in a vertical channel and of two-phase flow friction factor are presented. Based on the results of simulation of these experiments using the HYDRA-IBRAE/LM code, the arithmetic mean error in predicted pressures was calculated, and the predictions were analyzed considering the uncertainty in the input data, the geometry of the test facility, and the error of the empirical correlation. The analysis revealed major factors having a considerable effect on the predictions. Recommendations are given on updating the experimental results and improving the models used in the thermohydraulic code.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
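The bias described in the last sentence is easy to reproduce: fitting a flat model to low-count Poisson data with data-estimated Gaussian errors yields a harmonic-mean-like estimate that undershoots the truth, while the Poisson (Cash-statistic) maximum-likelihood estimate does not. A toy sketch (not IMFIT itself):

```python
import numpy as np

rng = np.random.default_rng(42)

# Low-count Poisson "pixels" drawn from a flat model with true mean 5.0
counts = rng.poisson(5.0, size=10000).astype(float)

# (a) chi^2 with per-pixel Gaussian sigma estimated from the data
# (sigma_i^2 = max(n_i, 1)). Minimizing sum (n_i - mu)^2 / sigma_i^2 over mu
# gives a weighted mean with weights 1/sigma_i^2, which is biased low because
# downward-fluctuating pixels get the largest weights.
w = 1.0 / np.maximum(counts, 1.0)
mu_chi2 = np.sum(w * counts) / np.sum(w)

# (b) Poisson maximum likelihood (Cash statistic): for a flat model the
# MLE is simply the arithmetic mean, which is unbiased.
mu_mle = counts.mean()

print(mu_chi2, mu_mle)  # mu_chi2 noticeably below 5.0; mu_mle close to 5.0
```

The same mechanism biases galaxy fluxes and profile parameters in low-S/N image fits, which is the paper's argument for the Poisson-based statistics.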
Economic indicators selection for crime rates forecasting using cooperative feature selection
NASA Astrophysics Data System (ADS)
Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Salleh Sallehuddin, Roselina
2013-04-01
Feature selection in a multivariate forecasting model is very important to ensure that the model is accurate. The purpose of this study is to apply the Cooperative Feature Selection method for feature selection. The features are economic indicators that will be used in a crime rate forecasting model. Cooperative Feature Selection combines grey relational analysis and an artificial neural network to establish a cooperative model that can rank and select the significant economic indicators. Grey relational analysis is used to select the best data series to represent each economic indicator and also to rank the economic indicators according to their importance to the crime rate. After that, the artificial neural network is used to select the significant economic indicators for forecasting the crime rates. In this study, we used the economic indicators of unemployment rate, consumer price index, gross domestic product and consumer sentiment index, as well as property crime and violent crime rates for the United States. A Levenberg-Marquardt neural network is used in this study. From our experiments, we found that the consumer price index is an important economic indicator that has a significant influence on the violent crime rate, while for the property crime rate, the gross domestic product, unemployment rate and consumer price index are the influential economic indicators. Cooperative Feature Selection is also found to produce smaller errors than Multiple Linear Regression in forecasting property and violent crime rates.
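Grey relational ranking of indicators can be sketched in a few lines; the indicator series below are invented toy data, not the study's economic series:

```python
import numpy as np

def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate series w.r.t. the reference.
    Series are min-max normalized; rho is the distinguishing coefficient
    (0.5 is customary); delta_min/delta_max are taken globally, as usual."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())
    ref = norm(reference)
    deltas = [np.abs(ref - norm(c)) for c in candidates]
    dmin = min(d.min() for d in deltas)
    dmax = max(d.max() for d in deltas)
    return np.array([((dmin + rho * dmax) / (d + rho * dmax)).mean()
                     for d in deltas])

# Toy data: "crime rate" reference vs three invented indicator series
crime = [3.1, 3.4, 3.9, 4.6, 5.0]
indicators = [
    [1.0, 1.15, 1.4, 1.75, 1.95],  # closely co-moving -> highest grade
    [2.0, 1.0, 3.0, 1.5, 2.5],     # erratic
    [5.0, 4.0, 3.2, 2.1, 1.0],     # inversely moving -> lowest grade
]
grades = grey_relational_grades(crime, indicators)
print(grades)
```

In the cooperative scheme described above, a ranking like this would feed candidate indicators to the neural-network stage for final selection.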
Gibon, Julien; Kang, Min Su; Aliaga, Arturo; Sharif, Behrang; Rosa-Neto, Pedro; Séguéla, Philippe; Barker, Philip A; Kostikov, Alexey
2016-10-01
Mature neurotrophins as well as their pro forms are critically involved in the regulation of neuronal functions. They are signaling through three distinct types of receptors: tropomyosin receptor kinase family (TrkA/B/C), p75 neurotrophin receptor (p75(NTR)) and sortilin. Aberrant expression of p75(NTR) in the CNS is implicated in a variety of neurodegenerative diseases, including Alzheimer's disease. The goal of this work was to evaluate one of the very few reported p75(NTR) small molecule ligands as a lead compound for development of novel PET radiotracers for in vivo p75(NTR) imaging. Here we report that previously described ligand LM11A-24 shows significant inhibition of carbachol-induced persistent firing (PF) of entorhinal cortex (EC) pyramidal neurons in wild-type mice via selective interaction with p75(NTR). Based on this electrophysiological assay, the compound has very high potency with an EC50<10nM. We optimized the radiosynthesis of [(11)C]LM11A-24 as the first attempt to develop PET radioligand for in vivo imaging of p75(NTR). Despite some weak interaction with CNS tissues, the radiolabeled compound showed unfavorable in vivo profile presumably due to high hydrophilicity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
NASA Astrophysics Data System (ADS)
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported successful applications of EAs. The Selfish Gene Algorithm (SFGA) is one of the latest EAs, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas presented by the biologist Richard Dawkins in 1989. In this paper, following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm can give near-optimum solutions to linear and non-linear problems in many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
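The abstract does not give SCA pseudocode, but the elitist, self-renewing population search it alludes to can be illustrated with a toy optimizer of the same flavour (an assumption-laden sketch, not the authors' SCA or MSCA):

```python
import numpy as np

def stem_cell_like_search(f, dim, n_cells=30, iters=200, seed=1):
    """Toy population search loosely inspired by the stem-cell metaphor
    (NOT the paper's algorithm): the best cell "self-renews" unchanged,
    while the rest are regenerated around it with a shrinking radius."""
    rng = np.random.default_rng(seed)
    cells = rng.uniform(-5, 5, size=(n_cells, dim))
    best = min(cells, key=f)
    for t in range(iters):
        radius = 5.0 * (1.0 - t / iters) + 1e-3   # anneal the search radius
        cells = best + rng.normal(0.0, radius, size=(n_cells, dim))
        cand = min(cells, key=f)
        if f(cand) < f(best):                     # elitist self-renewal
            best = cand
    return best

sphere = lambda x: float(np.sum(x ** 2))          # classic benchmark function
best = stem_cell_like_search(sphere, dim=5)
print(sphere(best))  # close to the global optimum value 0
```

Elitism (never discarding the best cell) is one simple mechanism for escaping the local-optima stagnation the abstract attributes to GA/PSO/ACO/ABC.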
Biosorption of chromium (VI) from aqueous solutions and ANN modelling.
Nag, Soma; Mondal, Abhijit; Bar, Nirjhar; Das, Sudip Kumar
2017-08-01
The use of sustainable, green and biodegradable natural wastes for Cr(VI) detoxification of contaminated wastewater is considered a challenging issue. The present research aims to assess the effectiveness of seven different natural biomaterials, namely jackfruit leaf, mango leaf, onion peel, garlic peel, bamboo leaf, acid-treated rubber leaf and coconut shell powder, for Cr(VI) eradication from aqueous solution by a biosorption process. Characterizations were conducted using SEM, BET and FTIR spectroscopy. The effects of operating parameters, viz., pH, initial Cr(VI) ion concentration, adsorbent dosage, contact time and temperature, on metal removal efficiency were studied. The biosorption mechanism was described by the pseudo-second-order model and the Langmuir isotherm model. The biosorption process was exothermic, spontaneous and chemical (except garlic peel) in nature. The sequence of adsorption capacity was mango leaf > jackfruit leaf > acid-treated rubber leaf > onion peel > bamboo leaf > garlic peel > coconut shell, with a maximum Langmuir adsorption capacity of 35.7 mg g⁻¹ for mango leaf. The treated effluent can be reused. A desorption study suggested effective reuse of the adsorbents for up to three cycles, and a safe disposal method for the used adsorbents; reapplication of the spent adsorbent suggests biodegradability and sustainability of the process, ultimately leading towards zero wastage. The performances of the adsorbents were verified with wastewater from the electroplating industry. A scale-up study is reported for industrial applications. ANN modelling using a multilayer perceptron with the gradient descent (GD) and Levenberg-Marquardt (LM) algorithms was successfully used for prediction of Cr(VI) removal efficiency. The study explores the untapped potential of natural waste materials for the sustainable existence of small and medium sector industries, especially in third-world countries, protecting the environment through eco-innovation.
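The Langmuir fit behind a reported maximum capacity is a standard two-parameter nonlinear regression; a sketch with hypothetical equilibrium data (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: q_e = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g), roughly following
# a Langmuir curve with a plateau in the mid-30s mg/g
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([9.0, 16.5, 23.0, 28.5, 32.0, 34.0])

# curve_fit defaults to Levenberg-Marquardt for unconstrained problems,
# the same family of algorithm used in the paper's LM-trained ANN
(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(30.0, 0.1))
print(qmax, KL)
```

`qmax` is the monolayer capacity compared across adsorbents (e.g. the mango-leaf value quoted above), and `KL` reflects binding affinity.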
Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm
NASA Astrophysics Data System (ADS)
Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.
2017-12-01
Sorting has been a profound area for algorithmic researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting is considered a fundamental problem in the study of algorithms for several reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm-design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one that makes sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm, an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm, and the results are promising.
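The SMS and Denni algorithms themselves are not reproduced in this abstract, but the kind of empirical average-case comparison it describes can be sketched with a textbook quicksort as a stand-in:

```python
import random
import time

def quicksort(a):
    """Textbook (non-in-place) quicksort, used here only as a stand-in;
    the SMS and Denni algorithms are not reproduced."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

def bench(sort_fn, data, repeats=3):
    """Best-of-N wall-clock time, a common protocol for empirical
    average-case comparisons of sorting algorithms."""
    best = float("inf")
    for _ in range(repeats):
        copy = list(data)
        t0 = time.perf_counter()
        sort_fn(copy)
        best = min(best, time.perf_counter() - t0)
    return best

random.seed(0)
data = [random.randint(0, 10**6) for _ in range(10000)]
t_quick = bench(quicksort, data)
t_builtin = bench(sorted, data)
print(f"quicksort: {t_quick:.4f}s  built-in sort: {t_builtin:.4f}s")
```

Worst-case behaviour would be probed the same way, but on adversarial inputs (e.g. already-sorted or reverse-sorted lists) rather than random ones.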
Altuner, Durdu; Ates, Ilker; Suzen, Sinan H; Koc, Gonul Varan; Aral, Yalcin; Karakaya, Asuman
2011-11-01
Paraoxonase (PON1) is a serum esterase responsible for protection against the toxicity of xenobiotics such as paraoxon. Alterations in PON1 concentrations have been reported in a variety of diseases, including diabetes mellitus (DM). It has been shown that serum PON1 concentration and activity are decreased in patients with both type 1 and type 2 DM. This study aimed to investigate the lipid profiles and the relationship between PON1 activity and the PON1 QR192 and LM55 polymorphisms in Turkish type 2 diabetic patients and non-diabetic control subjects. According to our results, the RR variant had significantly higher PON activity than the QQ and QR variants (p < 0.01), and the LL variant had significantly higher PON activity than the MM variant in both control and patient groups (p < 0.05). In conclusion, we found that the PON1 192RR and 55LL genotypes are associated with higher PON activity than the QQ and MM genotypes. This may be more protective against lipid peroxidation.
A novel method of language modeling for automatic captioning in TC video teleconferencing.
Zhang, Xiaojia; Zhao, Yunxin; Schopp, Laura
2007-05-01
We are developing an automatic captioning system for teleconsultation video teleconferencing (TC-VTC) in telemedicine, based on large vocabulary conversational speech recognition. In TC-VTC, doctors' speech contains a large number of infrequently used medical terms in spontaneous styles. Due to insufficiency of data, we adopted mixture language modeling, with models trained from several datasets of medical and nonmedical domains. This paper proposes novel modeling and estimation methods for the mixture language model (LM). Component LMs are trained from individual datasets, with class n-gram LMs trained from in-domain datasets and word n-gram LMs trained from out-of-domain datasets, and they are interpolated into a mixture LM. For class LMs, semantic categories are used for class definition on medical terms, names, and digits. The interpolation weights of a mixture LM are estimated by a greedy algorithm of forward weight adjustment (FWA). The proposed mixing of in-domain class LMs and out-of-domain word LMs, the semantic definitions of word classes, as well as the weight-estimation algorithm of FWA are effective on the TC-VTC task. As compared with using mixtures of word LMs with weights estimated by the conventional expectation-maximization algorithm, the proposed methods led to a 21% reduction of perplexity on test sets of five doctors, which translated into improvements of captioning accuracy.
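Linear interpolation of component LMs and held-out weight estimation can be sketched as follows; the per-token probabilities and the simple grid search standing in for the paper's forward weight adjustment are illustrative assumptions:

```python
import math

def mixture_perplexity(weights, token_probs):
    """Perplexity of an interpolated LM: P(w_i) = sum_m lambda_m * P_m(w_i)."""
    log_sum = 0.0
    for probs in token_probs:  # per-token probabilities under each component
        p = sum(lam * pm for lam, pm in zip(weights, probs))
        log_sum += math.log(p)
    return math.exp(-log_sum / len(token_probs))

# Toy held-out data: each pair is (P under in-domain class LM,
# P under out-of-domain word LM) for one token; illustrative numbers only
heldout = [(0.20, 0.02), (0.15, 0.01), (0.01, 0.10), (0.30, 0.05),
           (0.02, 0.08), (0.25, 0.03), (0.18, 0.02), (0.01, 0.12)]

# Greedy search over the interpolation weight (a stand-in for FWA, whose
# exact update rule is not given here): keep the weight with the lowest
# held-out perplexity
best = min(((lam, mixture_perplexity((lam, 1.0 - lam), heldout))
            for lam in (i / 20 for i in range(1, 20))),
           key=lambda t: t[1])
print(best)  # (best lambda, perplexity); favours the in-domain LM here
```

The EM algorithm mentioned as the conventional baseline optimizes the same held-out likelihood, but via iterative posterior re-weighting rather than a greedy adjustment.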
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
NASA Astrophysics Data System (ADS)
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods to make files more secure. One of these methods is cryptography. Cryptography secures a file by replacing it with hidden code that covers the original content, so that anyone without the means of decryption cannot recover the original file. Many methods are used in cryptography; one of them is the hybrid cryptosystem. A hybrid cryptosystem is a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm is used as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes as the plaintext length increases by eight characters.
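TEA itself is a published 64-bit block cipher with a 128-bit key, so the symmetric half of the hybrid scheme can be shown concretely (the LUC asymmetric half, based on Lucas sequences, is not reproduced here):

```python
def tea_encrypt(v, key):
    """One 64-bit TEA block encryption: v = (v0, v1) 32-bit halves,
    key = four 32-bit words, 32 rounds with delta = 0x9E3779B9."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(32):
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

def tea_decrypt(v, key):
    """Inverse of tea_encrypt: run the rounds backwards from s = 32*delta."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = (delta * 32) & mask  # 0xC6EF3720
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        s = (s - delta) & mask
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
ct = tea_encrypt((0xDEADBEEF, 0xCAFEBABE), key)
print(hex(ct[0]), hex(ct[1]))
```

In the hybrid scheme, only this short 128-bit `key` would be encrypted with the asymmetric (LUC) algorithm, while the bulk of the file is processed block-by-block with TEA.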
Liver transplantation for metastatic liver malignancies.
Foss, Aksel; Lerut, Jan P
2014-06-01
Liver transplantation is a validated treatment of primary hepatobiliary tumours. Over the last decade, a renewed interest in liver transplantation as a curative treatment of colorectal liver metastasis (CR-LM) and neuro-endocrine metastasis (NET-LM) has developed. The ELTR and UNOS analyses showed that liver transplantation may offer excellent disease-free survival (ranging from 30 to 77%) in case of NET-LM, on the condition that stringent selection criteria are implemented. The interest in liver transplantation for the treatment of CR-LM has been fostered by the Norwegian SECA study. A 5-year survival rate of 60% could be reached. Despite the high recurrence rate (90%), one-third of patients were disease free following pulmonary surgery for metastases. Liver transplantation will take a more prominent place in the therapeutic algorithm of CR-LM and NET-LM. Larger experiences are necessary to improve knowledge about tumour biology and to refine selection criteria. A multimodal approach adding neo-adjuvant and adjuvant medical treatment to the transplant procedure will be key to bringing this oncologic transplant project into the clinical arena. The preserved liver function in these patients will allow more deliberate access to split liver and living donation for these indications.
A comparison of several methods of solving nonlinear regression groundwater flow problems
Cooley, Richard L.
1985-01-01
Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
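The trade-off the study measures, a Marquardt-type solver that exploits the least-squares structure versus a quasi-Newton method that only sees the scalar objective, can be sketched on a toy regression problem (not one of the groundwater test cases):

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Hypothetical nonlinear regression problem: y = a * exp(b * t),
# observed without noise at a few times
t = np.linspace(0.0, 2.0, 12)
y = 2.0 * np.exp(0.8 * t)

residuals = lambda p: p[0] * np.exp(p[1] * t) - y   # vector of residuals
sse = lambda p: float(np.sum(residuals(p) ** 2))     # scalar objective

p0 = np.array([1.0, 0.1])

# Marquardt-type method: works on the residual vector and its Jacobian
fit_lm = least_squares(residuals, p0, method="lm")

# Quasi-Newton method (BFGS): only sees the scalar sum of squares
fit_qn = minimize(sse, p0, method="BFGS")

print(fit_lm.x, fit_qn.x)  # both near the true parameters (2.0, 0.8)
```

Per-iteration cost grows with the number of parameters faster for Marquardt-type methods (which factor a Jacobian-based normal matrix) than for quasi-Newton updates, which is the mechanism behind the study's conjecture about large models.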
NASA Astrophysics Data System (ADS)
Haddout, Soufiane; Igouzal, Mohammed; Maslouhi, Abdellatif
2016-09-01
The longitudinal variation of salinity and the maximum salinity intrusion length in an alluvial estuary are important environmental concerns for policy makers and managers since they influence water quality, water utilization and agricultural development in estuarine environments and the potential use of water resources in general. The supermoon total lunar eclipse is a rare event; according to NASA, such eclipses occurred only 5 times in the 1900s - in 1910, 1928, 1946, 1964 and 1982. After the 28 September 2015 total lunar eclipse, a Super Blood Moon eclipse will not recur before 8 October 2033. In this paper, for the first time, the impact of the combination of a supermoon and a total lunar eclipse on the salinity intrusion along an estuary is studied. The 28 September 2015 supermoon total lunar eclipse is the focus of this study and the Sebou river estuary (Morocco) is used as an application area. The Sebou estuary is an area with high agricultural potential, is becoming one of the most important industrial zones in Morocco, and is experiencing a salt intrusion problem. Hydrodynamic equations for tidal wave propagation coupled with the Savenije theory and a numerical salinity transport model (HEC-RAS software "Hydrologic Engineering Center River Analysis System") are applied to study the impact of the supermoon total lunar eclipse on the salinity intrusion. Intensive salinity measurements during this extreme event were recorded along the Sebou estuary. Measurements showed a modification of the shape of axial salinity profiles and a notable water elevation rise, compared with normal situations. The two optimization parameters (Van der Burgh's and dispersion coefficients) of the analytical model are estimated based on the Levenberg-Marquardt algorithm (i.e., solving nonlinear least-squares problems). The salinity transport model was calibrated and validated using field data. The results show that the two models described very well the salt intrusion during the
IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores
NASA Astrophysics Data System (ADS)
Parrenin, Frédéric; Bazin, Lucie; Capron, Emilie; Landais, Amaëlle; Lemieux-Dudon, Bénédicte; Masson-Delmotte, Valérie
2016-04-01
Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age scale uncertainty are essential to interpret the climate and environmental records that they contain. It is however a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model are: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and air dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links in between ice cores. The optimization is formulated as a least squares problem, implying that all densities of probabilities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono1 follows an approach similar to that of the Datice model which was recently used to produce the AICC2012 chronology for 4 Antarctic ice cores and 1 Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both the IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals
IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores
NASA Astrophysics Data System (ADS)
Parrenin, F.; Bazin, L.; Capron, E.; Landais, A.; Lemieux-Dudon, B.; Masson-Delmotte, V.
2015-05-01
Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age-scale uncertainty are essential to interpret the climate and environmental records that they contain. It is, however, a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the lock-in depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model are models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice- and air-dated horizons, ice and air depth intervals with known durations, depth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links in between ice cores. The optimization is formulated as a least squares problem, implying that all densities of probabilities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono1 follows an approach similar to that of the Datice model which was recently used to produce the AICC2012 (Antarctic ice core chronology) for four Antarctic ice cores and one Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both the IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Arvidson, R. E.; Guinness, E. A.; Wolff, M. J.
2004-12-01
The Mars Exploration Rover (MER) Panoramic Camera (Pancam) observation strategy included the acquisition of multispectral data sets specifically designed to support the photometric analysis of Martian surface materials (J. R. Johnson, this conference). We report on the numerical inversion of observed Pancam radiance-on-sensor data to determine the best-fit surface bidirectional reflectance parameters as defined by Hapke theory. The model bidirectional reflectance parameters for the Martian surface provide constraints on physical and material properties and allow for the direct comparison of Pancam and orbital data sets. The parameter optimization procedure consists of a spatial multigridding strategy driving a Levenberg-Marquardt nonlinear least squares optimization engine. The forward radiance models and partial derivatives (via finite-difference approximation) are calculated using an implementation of the DIScrete Ordinate Radiative Transfer (DISORT) algorithm with the four-parameter Hapke bidirectional reflectance function and the two-parameter Henyey-Greenstein phase function defining the lower boundary. The DISORT implementation includes a plane-parallel model of the Martian atmosphere derived from a combination of Thermal Emission Spectrometer (TES), Pancam, and Mini-TES atmospheric data acquired near in time to the surface observations. This model accounts for bidirectional illumination from the attenuated solar beam and hemispherical-directional skylight illumination. The initial investigation was limited to treating the materials surrounding the rover as a single surface type, consistent with the spatial resolution of orbital observations. For more detailed analyses the observation geometry can be calculated from the correlation of Pancam stereo pairs (J. M. Soderblom et al., this conference). With improved geometric control, the radiance inversion can be applied to constituent surface material classes such as ripple and dune forms in addition to the soils
NASA Astrophysics Data System (ADS)
Nadeau-Beaulieu, Michel
In this thesis, three mathematical models are built from flight test data for different aircraft design applications: a ground dynamics model for the Bell 427 helicopter, a prediction model for the rotor and engine parameters for the same helicopter type and a simulation model for the aeroelastic deflections of the F/A-18. In the ground dynamics application, the model structure is derived from physics, where the normal force between the helicopter and the ground is modelled as a vertical spring and the frictional force is modelled with static and dynamic friction coefficients. The ground dynamics model coefficients are optimized to ensure that the model matches the landing data within the FAA (Federal Aviation Administration) tolerance bands for a level D flight simulator. In the rotor and engine application, the rotor torques (main and tail), the engine torque and the main rotor speed are estimated using a state-space model. The model inputs are nonlinear terms derived from the pilot control inputs and the helicopter states. The model parameters are identified using the subspace method and are further optimised with the Levenberg-Marquardt minimisation algorithm. The model built with the subspace method provides an excellent estimate of the outputs within the FAA tolerance bands. The F/A-18 aeroelastic state-space model is built from flight test data. The research concerning this model is divided into two parts. Firstly, the deflection of a given structural surface on the aircraft following a differential ailerons control input is represented by a Multiple Inputs Single Output linear model whose inputs are the aileron positions and the structural surface deflections. Secondly, a single state-space model is used to represent the deflection of the aircraft wings and trailing edge flaps following any control input. In this case the model is made non-linear by multiplying model inputs into higher-order terms and using these terms as the inputs of the state-space equations. In
Lightning Location Using Acoustic Signals
NASA Astrophysics Data System (ADS)
Badillo, E.; Arechiga, R. O.; Thomas, R. J.
2013-05-01
In the summers of 2011 and 2012 a network of acoustic arrays was deployed in the Magdalena mountains of central New Mexico to locate lightning flashes. A Times-Correlation (TC) ray-tracing-based technique was developed in order to obtain the location of lightning flashes near the network. The TC technique locates acoustic sources from lightning. It was developed to complement the lightning location of RF sources detected by the Lightning Mapping Array (LMA) developed at Langmuir Laboratory at New Mexico Tech. The network consisted of four arrays with four microphones each. The microphones on each array were placed in a triangular configuration with one of the microphones in the center of the array. The distance between the central microphone and the others was about 30 m. The distance between the centers of the arrays ranged from 500 m to 1500 m. The TC technique uses times of arrival (TOA) of acoustic waves to trace back the location of thunder sources. In order to obtain the times of arrival, the signals were filtered in a frequency band of 2 to 20 Hz and cross-correlated. Once the times of arrival were obtained, the Levenberg-Marquardt algorithm was applied to locate the spatial coordinates (x, y and z) of thunder sources. Two techniques were used and contrasted to compute the accuracy of the TC method: nearest-neighbors (NN) distances between acoustic and LMA located sources, and the standard deviation from the curvature matrix of the system as a measure of dispersion of the results. For the best-case scenario, a triggered lightning event, the TC method applied with four microphones located sources with a median error of 152 m and 142.9 m using nearest-neighbors and standard deviation respectively. [Figure: results of the TC method for the lightning event recorded at 18:47:35 UTC, August 6, 2012; black dots represent the computed results, light-color dots the LMA data for the same event, obtained with the MGTM station (four channels).]
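The TOA inversion step can be sketched as a small nonlinear least-squares problem: each residual is the predicted minus the measured arrival time at one microphone, and Levenberg-Marquardt solves for the source position and emission time. The microphone coordinates, sound speed, source, and initial guess below are all hypothetical, not the Magdalena network geometry.

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # nominal speed of sound in air, m/s (temperature-dependent in practice)

# Hypothetical microphone coordinates (m); the real arrays used a triangular
# layout with ~30 m spacing and 500-1500 m between array centers.
mics = np.array([[0, 0, 0], [30, 0, 0], [15, 26, 0], [15, 13, 10],
                 [800, 100, 0], [830, 100, 0], [815, 126, 0], [815, 113, 10.0]])

true_src = np.array([400.0, 900.0, 2000.0])   # invented thunder source (m)
t0_true = 0.5                                  # invented emission time (s)
toa = t0_true + np.linalg.norm(mics - true_src, axis=1) / C

def residuals(p):
    # p = (x, y, z, t0): predicted minus measured times of arrival
    src, t0 = p[:3], p[3]
    return t0 + np.linalg.norm(mics - src, axis=1) / C - toa

# Levenberg-Marquardt needs at least as many residuals as parameters (8 >= 4)
fit = least_squares(residuals, x0=[300.0, 800.0, 1500.0, 0.4], method="lm")
print(np.round(fit.x[:3], 1))
```

With noiseless synthetic arrival times the fit recovers the source essentially exactly; with real, cross-correlation-derived TOAs the residual covariance (via the curvature matrix) gives the dispersion measure the abstract mentions.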
2011-01-01
Purpose To theoretically develop and experimentally validate a formalism based on a fractional order calculus (FC) diffusion model to characterize anomalous diffusion in brain tissues measured with a twice-refocused spin-echo (TRSE) pulse sequence. Materials and Methods The FC diffusion model is the fractional order generalization of the Bloch-Torrey equation. Using this model, an analytical expression was derived to describe the diffusion-induced signal attenuation in a TRSE pulse sequence. To experimentally validate this expression, a set of diffusion-weighted (DW) images was acquired at 3 Tesla from healthy human brains using a TRSE sequence with twelve b-values ranging from 0 to 2,600 s/mm2. For comparison, DW images were also acquired using a Stejskal-Tanner diffusion gradient in a single-shot spin-echo echo planar sequence. For both datasets, a Levenberg-Marquardt fitting algorithm was used to extract three parameters: diffusion coefficient D, fractional order derivative in space β, and a spatial parameter μ (in units of μm). Using adjusted R-squared values and standard deviations, D, β and μ values and the goodness-of-fit in three specific regions of interest (ROIs) in white matter, gray matter, and cerebrospinal fluid were evaluated for each of the two datasets. In addition, spatially resolved parametric maps were assessed qualitatively. Results The analytical expression for the TRSE sequence, derived from the FC diffusion model, accurately characterized the diffusion-induced signal loss in brain tissues at high b-values. In the selected ROIs, the goodness-of-fit and standard deviations for the TRSE dataset were comparable with the results obtained from the Stejskal-Tanner dataset, demonstrating the robustness of the FC model across multiple data acquisition strategies. Qualitatively, the D, β, and μ maps from the TRSE dataset exhibited fewer artifacts, reflecting the improved immunity to eddy currents. Conclusion The diffusion-induced signal
Gao, Qing; Srinivasan, Girish; Magin, Richard L; Zhou, Xiaohong Joe
2011-05-01
To theoretically develop and experimentally validate a formalism based on a fractional order calculus (FC) diffusion model to characterize anomalous diffusion in brain tissues measured with a twice-refocused spin-echo (TRSE) pulse sequence. The FC diffusion model is the fractional order generalization of the Bloch-Torrey equation. Using this model, an analytical expression was derived to describe the diffusion-induced signal attenuation in a TRSE pulse sequence. To experimentally validate this expression, a set of diffusion-weighted (DW) images was acquired at 3 Tesla from healthy human brains using a TRSE sequence with twelve b-values ranging from 0 to 2600 s/mm(2). For comparison, DW images were also acquired using a Stejskal-Tanner diffusion gradient in a single-shot spin-echo echo planar sequence. For both datasets, a Levenberg-Marquardt fitting algorithm was used to extract three parameters: diffusion coefficient D, fractional order derivative in space β, and a spatial parameter μ (in units of μm). Using adjusted R-squared values and standard deviations, D, β, and μ values and the goodness-of-fit in three specific regions of interest (ROIs) in white matter, gray matter, and cerebrospinal fluid, respectively, were evaluated for each of the two datasets. In addition, spatially resolved parametric maps were assessed qualitatively. The analytical expression for the TRSE sequence, derived from the FC diffusion model, accurately characterized the diffusion-induced signal loss in brain tissues at high b-values. In the selected ROIs, the goodness-of-fit and standard deviations for the TRSE dataset were comparable with the results obtained from the Stejskal-Tanner dataset, demonstrating the robustness of the FC model across multiple data acquisition strategies. Qualitatively, the D, β, and μ maps from the TRSE dataset exhibited fewer artifacts, reflecting the improved immunity to eddy currents. The diffusion-induced signal attenuation in a TRSE pulse sequence
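The parameter-extraction step can be illustrated with SciPy's `curve_fit`, which defaults to Levenberg-Marquardt for unconstrained problems. The stretched-exponential form below is a simplified stand-in for the FC attenuation expression (the full TRSE formula also involves μ and the gradient timing parameters, which the abstract does not give); the twelve b-values mirror the protocol, while S0, D, and β are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified stretched-exponential stand-in for the FC attenuation model
def signal(b, s0, D, beta):
    return s0 * np.exp(-(b * D) ** beta)

b = np.linspace(0, 2600, 12)       # twelve b-values, s/mm^2, as in the protocol
true = (1000.0, 0.7e-3, 0.85)      # hypothetical S0, D (mm^2/s), beta
rng = np.random.default_rng(1)
s = signal(b, *true) * (1 + 0.005 * rng.standard_normal(b.size))

# curve_fit uses Levenberg-Marquardt when no bounds are supplied
popt, pcov = curve_fit(signal, b, s, p0=(900.0, 1e-3, 1.0))
print(np.round(popt, 4))
```

The covariance matrix `pcov` supplies the per-parameter standard deviations used alongside adjusted R-squared to judge goodness-of-fit per region of interest.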
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.
Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P
1999-10-01
In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.
Local Surface Reconstruction from MER images using Stereo Workstation
NASA Astrophysics Data System (ADS)
Shin, Dongjoe; Muller, Jan-Peter
2010-05-01
The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by Pancam or Navcam on systems such as the NASA Mars Exploration Rover (MER) Mission and in the future the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation which is then followed by tiepoint refinement, stereo-matching using region growing and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in Java, and the remaining processing blocks used in the reconstruction workflow have also been developed as a Java package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often required to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion to assess the quality of reconstruction is the density (or completeness) of reconstruction, which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL
Algorithm Visualization System for Teaching Spatial Data Algorithms
ERIC Educational Resources Information Center
Nikander, Jussi; Helminen, Juha; Korhonen, Ari
2010-01-01
TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, as it approximates the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. Also, the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee an existing parameter solution. The input data for the objective function derive from a hydro-morphological dynamics numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. The minimization of the objective function by the different candidate optimization methods indicates a failure of some of the gradient-based methods, such as Newton conjugate gradient and BFGS. Others reveal partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, truncated Newton conjugate gradient, and trust-region Newton conjugate gradient. Further ones yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach present the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
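The gap the study reports between gradient-based local methods and global stochastic search can be demonstrated on a small multimodal objective. The function below is an illustrative stand-in, not the hydro-morphological model; both solvers come from `scipy.optimize`.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

# Multimodal stand-in for an ill-posed calibration objective:
# a quadratic bowl centered at (1, -2) overlaid with cosine ripples.
def objective(p):
    x, y = p
    return (x - 1) ** 2 + (y + 2) ** 2 + 10 * (2 - np.cos(3 * x) - np.cos(3 * y))

# A gradient-based method started far away may stall in a local minimum...
local = minimize(objective, x0=[4.0, 4.0], method="L-BFGS-B")

# ...while a global stochastic search explores the whole parameter box.
best = differential_evolution(objective, bounds=[(-5, 5), (-5, 5)], seed=0)

print(np.round(local.x, 2), round(local.fun, 3))
print(np.round(best.x, 2), round(best.fun, 3))
```

Differential Evolution reaches a markedly lower objective value here, at the cost of many more function evaluations (`best.nfev` versus `local.nfev`), mirroring the computational-demand observation in the abstract.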
Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id
2014-09-30
Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationship for a mixture component can be expressed in the phase diagram. It is important to analyze whether a mixture component is in the equilibrium phase or another phase. The purpose of this research is to build a model of the phase diagram, to determine whether the mixture component is in a stable or melting condition. Artificial neural networks (ANN) are a modeling tool for processes involving multivariable nonlinear relationships. The objective of the present work is to develop a code based on artificial neural network models of the equilibrium relationship of U-Mo in an Al-Si matrix. This model can be used to predict the type of resulting mixture, and whether a point lies in the equilibrium phase or in another phase region. The equilibrium model data for prediction and modeling were generated from experimental data. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of the nuclear fuel U-Mo in the Al-Si matrix. The code was built with functions in MATLAB. For simulations using the ANN, the Levenberg-Marquardt method was also used for optimization. The artificial neural network is able to predict whether a composition is in the equilibrium phase or in another phase region. The developed code, based on artificial neural network models, was used to analyze the equilibrium relationship of U-Mo in the Al-Si matrix.
Computer-based planning of optimal donor sites for autologous osseous grafts
NASA Astrophysics Data System (ADS)
Krol, Zdzislaw; Chlebiej, Michal; Zerfass, Peter; Zeilhofer, Hans-Florian U.; Sader, Robert; Mikolajczak, Pawel; Keeve, Erwin
2002-05-01
Bone graft surgery is often necessary for reconstruction of craniofacial defects after trauma, tumor, infection or congenital malformation. In this operative technique the removed or missing bone segment is filled with a bone graft. The mainstay of craniofacial reconstruction rests with the replacement of the defective bone by autogenous bone grafts. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention is required. The major problem is to determine as accurately as possible the donor site where the graft should be dissected from and to define the shape of the desired transplant. A computer-aided method for semi-automatic selection of optimal donor sites for autografts in craniofacial reconstructive surgery has been developed. The non-automatic step of graft design and constraint setting is followed by a fully automatic procedure to find the best-fitting position. In extension to preceding work, a new optimization approach based on the Levenberg-Marquardt method has been implemented and embedded into our computer-based surgical planning system. Once the pre-processing step has been performed, this new technique enables selection of the optimal donor site in less than one minute. The method has been applied during the surgical planning step in more than 20 cases. The postoperative observations have shown that functional results, such as speech and chewing ability as well as restoration of bony continuity, were clearly better compared to conventionally planned operations. Moreover, in most cases the duration of the surgical interventions has been distinctly reduced.
Tees, D F; Waugh, R E; Hammer, D A
2001-01-01
A microcantilever technique was used to apply force to receptor-ligand molecules involved in leukocyte rolling on blood vessel walls. E-selectin was adsorbed onto 3-microm-diameter, 4-mm-long glass fibers, and the selectin ligand, sialyl Lewis(x), was coupled to latex microspheres. After binding, the microsphere and bound fiber were retracted using a computerized loading protocol that combines hydrodynamic and Hookean forces on the fiber to produce a range of force loading rates (force/time), r(f). From the distribution of forces at failure, the average force was determined and plotted as a function of ln r(f). The slope and intercept of the plot yield the unstressed reverse reaction rate, k(r)(o), and a parameter that describes the force dependence of reverse reaction rates, r(o). The ligand was titrated so adhesion occurred in approximately 30% of tests, implying that >80% of adhesive events involve single bonds. Monte Carlo simulations show that this level of multiple bonding has little effect on parameter estimation. The estimates are r(o) = 0.048 and 0.016 nm and k(r)(o) = 0.72 and 2.2 s(-1) for loading rates in the ranges 200-1000 and 1000-5000 pN s(-1), respectively. Levenberg-Marquardt fitting across all values of r(f) gives r(o) = 0.034 nm and k(r)(o) = 0.82 s(-1). The values of these parameters are in the range required for rolling, as suggested by adhesive dynamics simulations. PMID:11159435
NASA Astrophysics Data System (ADS)
Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed
2016-12-01
The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were performed at three temperature levels: 50, 60 and 70 °C, at water activities ranging from 0.057 to 0.88, using the static gravimetric method. Five sorption models were used to fit the experimental desorption isotherm data. It was found that the Kuhn model offers the best fit of the experimental moisture isotherms in the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was then best correlated to equilibrium moisture content by the empirical Tsami equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidities of 5, 15, 30 and 45 % and air velocities of 1, 1.5 and 2 m/s. A nonlinear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature; the R2 and χ2 statistics were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. It was found that the Midilli-Kucuk model was the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
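As an illustration of the regression step, the Midilli-Kucuk thin-layer model MR = a·exp(-k·t^n) + b·t can be fitted with SciPy's `curve_fit` (Levenberg-Marquardt for unconstrained problems). The drying times and parameter values below are invented, not the bay laurel data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Midilli-Kucuk thin-layer drying model: moisture ratio vs time
def midilli(t, a, k, n, b):
    return a * np.exp(-k * t ** n) + b * t

t = np.linspace(0.01, 300, 40)          # drying time, min (hypothetical)
true = (1.0, 0.02, 1.1, -1e-4)          # hypothetical a, k, n, b
rng = np.random.default_rng(2)
mr = midilli(t, *true) + 0.005 * rng.standard_normal(t.size)

# Levenberg-Marquardt fit, then R^2 as a goodness-of-fit statistic
popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.01, 1.0, 0.0))
resid = mr - midilli(t, *popt)
r2 = 1 - (resid @ resid) / ((mr - mr.mean()) @ (mr - mr.mean()))
print(np.round(popt, 4), round(r2, 3))
```

The same loop, run once per candidate model, with R2 and χ2 compared across models, reproduces the model-selection procedure described in the abstract.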
Image processing meta-algorithm development via genetic manipulation of existing algorithm graphs
NASA Astrophysics Data System (ADS)
Schalkoff, Robert J.; Shaaban, Khaled M.
1999-07-01
Automatic algorithm generation for image processing applications is not a new idea; however, previous work is either restricted to morphological operators or impractical. In this paper, we show recent research results in the development and use of meta-algorithms, i.e. algorithms which lead to new algorithms. Although the concept is generally applicable, the application domain in this work is restricted to image processing. The meta-algorithm concept described in this paper is based upon our work in dynamic algorithms. The paper first presents the concept of dynamic algorithms which, on the basis of training and archived algorithmic experience embedded in an algorithm graph (AG), dynamically adjust the sequence of operations applied to the input image data. Each node in the tree-based representation of a dynamic algorithm with out-degree greater than 2 is a decision node. At these nodes, the algorithm examines the input data and determines which path will most likely achieve the desired results. This is currently done using nearest-neighbor classification. The details of this implementation are shown. The constrained perturbation of existing algorithm graphs, coupled with a suitable search strategy, is one mechanism to achieve meta-algorithms and offers rich potential for the discovery of new algorithms. In our work, a meta-algorithm autonomously generates new dynamic algorithm graphs via genetic recombination of existing algorithm graphs. The AG representation is well suited to this genetic-like perturbation, using a commonly-employed technique in artificial neural network synthesis, namely the blueprint representation of graphs. A number of examples are shown. One of the principal limitations of our current approach is the need for significant human input in the learning phase. Efforts to overcome this limitation are discussed. Future research directions are indicated.
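The genetic recombination of blueprint-encoded algorithm graphs might be sketched as below; the flat-list encoding and the operation names are illustrative assumptions, not the authors' actual AG representation:

```python
import random

# A "blueprint" flatly encodes an algorithm graph as a sequence of node
# operations (the op names here are hypothetical image-processing steps).
parent_a = ["median", "threshold", "erode", "dilate", "label"]
parent_b = ["gauss", "sobel", "threshold", "thin", "label"]

def crossover(a, b, rng):
    """Single-point crossover: swap the tails of two blueprints."""
    point = rng.randrange(1, min(len(a), len(b)))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bp, ops, rate, rng):
    """Replace each node by a random op with probability `rate`."""
    return [rng.choice(ops) if rng.random() < rate else op for op in bp]

rng = random.Random(42)
ops = sorted(set(parent_a) | set(parent_b))
child_1, child_2 = crossover(parent_a, parent_b, rng)
child_1 = mutate(child_1, ops, 0.1, rng)
print(child_1, child_2)
```

In a full meta-algorithm, each child blueprint would be decoded back into an algorithm graph and scored on training imagery before entering the next generation.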
Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
The relative effectiveness of simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed in an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust occurring as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm under added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control inputs, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and less variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on the straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. Compared to the NASA adaptive algorithm, the augmented turbulence cues increased workload on the offset approach, which the pilots deemed more realistic. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm and the least rudder pedal activity for the optimal algorithm.
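Power spectral density analysis of pilot control inputs, one of the two workload measures used here, can be sketched with a plain DFT periodogram; the "control input" below is a synthetic 0.5 Hz oscillation plus noise, not the study's recordings:

```python
import math, random

# Synthetic "pilot control input": a 0.5 Hz oscillation plus noise,
# sampled at 20 Hz (illustrative values only).
fs, n = 20.0, 256
rng = random.Random(0)
x = [math.sin(2 * math.pi * 0.5 * i / fs) + 0.1 * rng.gauss(0, 1)
     for i in range(n)]

def periodogram(sig, fs):
    """One-sided PSD estimate via a plain (unwindowed) DFT periodogram."""
    n = len(sig)
    psd = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(sig))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(sig))
        psd.append((re * re + im * im) / (fs * n))
    return psd

psd = periodogram(x, fs)
peak = max(range(1, len(psd)), key=psd.__getitem__)   # skip the DC bin
print(peak * fs / n)  # peak frequency in Hz, near the 0.5 Hz input
```

Concentration of control power near known pilot-induced-oscillation frequencies is what such an analysis would flag when comparing cueing algorithms.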
Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security
NASA Astrophysics Data System (ADS)
Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra
2018-03-01
The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic process in which data is transformed into ciphertext, which is unreadable and meaningless so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, a monoalphabetic algorithm and an XOR algorithm are combined to form a super-encryption. The monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the XOR logic operation. Since the monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so data integrity is still ensured.
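A toy sketch of the described super-encryption, chaining a monoalphabetic substitution with a repeating-key XOR; the substitution alphabet and XOR key below are illustrative, since the paper does not fix particular keys:

```python
import string

# Super-encryption sketch: monoalphabetic substitution, then repeating-key XOR.
SUB_KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"   # illustrative permutation of A-Z
XOR_KEY = b"K3Y"                         # illustrative XOR key

SUB = {p: c for p, c in zip(string.ascii_uppercase, SUB_KEY)}
INV = {c: p for p, c in SUB.items()}

def mono_encrypt(text):
    return "".join(SUB.get(ch, ch) for ch in text.upper())

def mono_decrypt(text):
    return "".join(INV.get(ch, ch) for ch in text)

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def super_encrypt(plaintext: str) -> bytes:
    return xor_bytes(mono_encrypt(plaintext).encode(), XOR_KEY)

def super_decrypt(cipher: bytes) -> str:
    return mono_decrypt(xor_bytes(cipher, XOR_KEY).decode())

c = super_encrypt("HELLO WORLD")
print(super_decrypt(c))  # HELLO WORLD
```

Decryption applies the inverses in reverse order (XOR first, then the inverse substitution), which is what restores the plaintext and preserves data integrity.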
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
NASA Astrophysics Data System (ADS)
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
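The Hough-transform line-detection step at the core of the pipeline can be sketched in miniature; the "image" here is just a toy list of bright-pixel coordinates on the line y = x, not survey data, and the full algorithm's masking and rectangle-comparison steps are omitted:

```python
import math

# Toy Hough transform: vote for (theta, rho) line parameters, where a line
# is x*cos(theta) + y*sin(theta) = rho.
points = [(i, i) for i in range(10)]     # bright pixels on the line y = x

def hough(points, n_theta=180):
    """Accumulate votes; theta in whole degrees, rho in whole pixels."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

acc = hough(points)
(t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
print(t_best, rho_best, votes)  # a line near theta = 135 deg through the origin
```

Real implementations vote only for pixels surviving the object-removal and line-enhancement stages, which is what keeps the accumulator peaks meaningful on crowded astronomical images.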
Language Model Combination and Adaptation Using Weighted Finite State Transducers
NASA Technical Reports Server (NTRS)
Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.
2010-01-01
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
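The paper realizes combination through WFST operations; the underlying linear interpolation of component LM probabilities can be sketched independently of any transducer machinery (the probabilities below are made up):

```python
# Linear interpolation of two component n-gram LMs: P(w) = sum_i w_i * P_i(w).
# Component probabilities are illustrative, not from any trained model.
lm_news = {"the": 0.06, "court": 0.002, "game": 0.0005}
lm_sport = {"the": 0.05, "court": 0.003, "game": 0.004}

def interpolate(models, weights):
    """Mix component models with weights summing to one."""
    assert abs(sum(weights) - 1.0) < 1e-9
    vocab = set().union(*models)
    return {w: sum(wt * m.get(w, 0.0) for wt, m in zip(weights, models))
            for w in vocab}

mixed = interpolate([lm_news, lm_sport], [0.7, 0.3])
print(mixed["game"])  # 0.7 * 0.0005 + 0.3 * 0.004
```

In the WFST formulation, the same mixture can be realized by a union of weighted component transducers, which is what lets the decoder consume the combined LM without software changes.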
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm accessible to a wider and more general audience.
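The classic EM iteration that the proposal partitions into smaller EM algorithms can be illustrated with a textbook two-component Gaussian mixture; the data are synthetic and this is not the authors' missing-data/measurement-error model:

```python
import math, random

# Textbook EM for a two-component 1D Gaussian mixture, illustrating the
# E- and M-steps (the component standard deviation is held fixed at its
# true value of 1 for brevity).
rng = random.Random(1)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(5.0, 1.0) for _ in range(200)])

def norm_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

pi1, mu1, mu2 = 0.5, -1.0, 6.0          # initial guesses
for _ in range(50):
    # E-step: posterior responsibility of component 1 for each observation
    resp = []
    for x in data:
        p1 = pi1 * norm_pdf(x, mu1)
        p2 = (1 - pi1) * norm_pdf(x, mu2)
        resp.append(p1 / (p1 + p2))
    # M-step: re-estimate the mixing weight and the two means
    n1 = sum(resp)
    pi1 = n1 / len(data)
    mu1 = sum(r * x for r, x in zip(resp, data)) / n1
    mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - n1)

print(round(mu1, 2), round(mu2, 2))  # close to the true means 0 and 5
```

Each EM sweep is guaranteed not to decrease the observed-data likelihood, which is the property the proposed partitioned variant preserves while breaking the problem into self-contained sub-algorithms.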
The Sentinel-3 Surface Topography Mission (S-3 STM): Level 2 SAR Ocean Retracker
NASA Astrophysics Data System (ADS)
Dinardo, S.; Lucas, B.; Benveniste, J.
2015-12-01
The SRAL radar altimeter, on board the ESA Sentinel-3 (S-3) mission, can operate either in the pulse-limited mode (also known as LRM) or in the novel Synthetic Aperture Radar (SAR) mode. Thanks to the initial results from SAR altimetry obtained by exploiting CryoSat-2 data, interest in this new technology from the scientific community has significantly increased, and consequently the definition of accurate processing methodologies (along with validation strategies) has assumed capital importance. In this paper, we present the algorithm proposed to retrieve the standard ocean geophysical parameters (ocean topography, wave height and sigma nought) from S-3 STM SAR return waveforms, and the validation results achieved so far by exploiting CryoSat-2 data as well as simulated data. The inversion method (retracking) used to extract the geophysical information from the return waveform is a curve best-fitting scheme based on the bounded Levenberg-Marquardt least-squares estimation method (LEVMAR-LSE). The S-3 STM SAR ocean retracking algorithm adopts, as its return-waveform model, the “SAMOSA” model [Ray et al., 2014], named after the R&D project SAMOSA (led by Satoc and funded by ESA) in which it was initially developed. The SAMOSA model is a physically-based model that offers a complete description of a SAR altimeter return waveform from the ocean surface, expressed in the form of maps of reflected power in delay-Doppler space (also known as stacks) or as multilooked echoes. SAMOSA is able to account for an elliptical antenna pattern, mispointing errors in roll and yaw, the surface scattering pattern, non-linear ocean wave statistics and spherical Earth surface effects. In spite of its truly comprehensive character, the SAMOSA model comes with a compact analytical formulation expressed in terms of modified Bessel functions. The specifications of the retracking algorithm have been gathered in a technical document (DPM
Zhao, M.; Golaz, J.-C.; Held, I. M.; Guo, H.; Balaji, V.; Benson, R.; Chen, J.-H.; Chen, X.; Donner, L. J.; Dunne, J. P.; Dunne, Krista A.; Durachta, J.; Fan, S.-M.; Freidenreich, S. M.; Garner, S. T.; Ginoux, P.; Harris, L. M.; Horowitz, L. W.; Krasting, J. P.; Langenhorst, A. R.; Liang, Z.; Lin, P.; Lin, S.-J.; Malyshev, S. L.; Mason, E.; Milly, Paul C.D.; Ming, Y.; Naik, V.; Paulot, F.; Paynter, D.; Phillipps, P.; Radhakrishnan, A.; Ramaswamy, V.; Robinson, T.; Schwarzkopf, D.; Seman, C. J.; Shevliakova, E.; Shen, Z.; Shin, H.; Silvers, L.; Wilson, J. R.; Winton, M.; Wittenberg, A. T.; Wyman, B.; Xiang, B.
2018-01-01
In this two‐part paper, a description is provided of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). This version, with roughly 100 km horizontal resolution and 33 levels in the vertical, contains an aerosol model that generates aerosol fields from emissions and a “light” chemistry mechanism designed to support the aerosol model but with prescribed ozone. In Part 1, the quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode—with prescribed sea surface temperatures (SSTs) and sea‐ice distribution—is described and compared with previous GFDL models and with the CMIP5 archive of AMIP simulations. The model's Cess sensitivity (response in the top‐of‐atmosphere radiative flux to uniform warming of SSTs) and effective radiative forcing are also presented. In Part 2, the model formulation is described more fully and key sensitivities to aspects of the model formulation are discussed, along with the approach to model tuning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Ming; Golaz, J. -C.; Held, I. M.
In this two-part paper, a description is provided of a version of the AM4.0/LM4.0 atmosphere/land model that will serve as a base for a new set of climate and Earth system models (CM4 and ESM4) under development at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL). This version, with roughly 100 km horizontal resolution and 33 levels in the vertical, contains an aerosol model that generates aerosol fields from emissions and a “light” chemistry mechanism designed to support the aerosol model but with prescribed ozone. In Part 1, the quality of the simulation in AMIP (Atmospheric Model Intercomparison Project) mode—with prescribed sea surface temperatures (SSTs) and sea-ice distribution—is described and compared with previous GFDL models and with the CMIP5 archive of AMIP simulations. Here, the model's Cess sensitivity (response in the top-of-atmosphere radiative flux to uniform warming of SSTs) and effective radiative forcing are also presented. In Part 2, the model formulation is described more fully and key sensitivities to aspects of the model formulation are discussed, along with the approach to model tuning.