FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management have been conducted under uncertain conditions where fuzzy, stochastic, and interval information coexist, conventional linear programming solutions that integrate the fuzzy method with the other two have been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, achieving its capabilities with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
Runge-Kutta Methods for Linear Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Zingg, David W.; Chisholm, Todd T.
1997-01-01
Three new Runge-Kutta methods are presented for numerical integration of systems of linear inhomogeneous ordinary differential equations (ODEs) with constant coefficients. Such ODEs arise in the numerical solution of the partial differential equations governing linear wave phenomena. The restriction to linear ODEs with constant coefficients reduces the number of conditions which the coefficients of the Runge-Kutta method must satisfy. This freedom is used to develop methods which are more efficient than conventional Runge-Kutta methods. A fourth-order method is presented which uses only two memory locations per dependent variable, while the classical fourth-order Runge-Kutta method uses three. This method is an excellent choice for simulations of linear wave phenomena if memory is a primary concern. In addition, fifth- and sixth-order methods are presented which require five and six stages, respectively, one fewer than their conventional counterparts, and are therefore more efficient. These methods are an excellent option for use with high-order spatial discretizations.
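As an aside for readers, the storage saving this abstract describes for linear, constant-coefficient ODEs can be illustrated with the classic two-register Runge-Kutta trick, which reproduces the fourth-order Taylor expansion of exp(hA) exactly when the right-hand side is linear with constant coefficients. This is a minimal Python sketch of that general idea, not the specific schemes of Zingg and Chisholm; the test matrix and step size are illustrative assumptions.

import numpy as np

def rk4_two_register(A, f, u, h):
    # One step of u' = A u + f using two registers (u and w). Fourth-order
    # accurate only when A and f are constant, which is exactly the
    # restriction this class of methods exploits.
    w = u.copy()
    for alpha in (1.0 / 4.0, 1.0 / 3.0, 1.0 / 2.0, 1.0):
        w = u + alpha * h * (A @ w + f)
    return w

# Usage: a lightly damped oscillator u' = A u.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
f = np.zeros(2)
u = np.array([1.0, 0.0])
for _ in range(1000):
    u = rk4_two_register(A, f, u, h=0.01)
print(u)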
A New Approach to Detect Mover Position in Linear Motors Using Magnetic Sensors
Paul, Sarbajit; Chang, Junghwan
2015-01-01
A new method to detect the mover position of a linear motor is proposed in this paper. This method employs a simple, inexpensive Hall-effect-based magnetic sensor unit to detect the mover position of the linear motor. As the linear motor moves, Hall-effect sensor modules separated by 120° electrical, exploiting the three-phase balanced condition (va + vb + vc = 0), are used to produce three-phase signals. The amplitudes of the sensor output voltage signals are adjusted to unit amplitude to minimize amplitude errors. A three-phase to two-phase transformation is then applied to the unit-amplitude signals to reduce harmonic components at multiples of three. The final output thus obtained is converted to position data by use of the arctangent function. The measurement accuracy of the new method is analyzed by experiments and compared with the conventional two-phase method. Using the same number of sensor modules as the conventional two-phase method, the proposed method gives more accurate position information than the conventional system, where sensors are separated by 90° electrical angles. PMID:26506348
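To make the described signal chain concrete, here is a minimal numpy sketch: amplitude normalization, a three-phase to two-phase (Clarke-type) transformation, and an arctangent. The function name, the normalization by peak amplitude, and the pole_pair_pitch parameter (mover travel per electrical cycle) are illustrative assumptions, not the authors' implementation.

import numpy as np

def mover_position(va, vb, vc, pole_pair_pitch):
    # Normalize each signal to unit amplitude to minimize amplitude errors.
    va = va / np.max(np.abs(va))
    vb = vb / np.max(np.abs(vb))
    vc = vc / np.max(np.abs(vc))
    # Three-phase to two-phase (Clarke-type) transform; with balanced
    # signals (va + vb + vc = 0) it cancels harmonics at multiples of three.
    v_alpha = (2.0 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / np.sqrt(3.0)
    # Electrical angle via arctangent, unwrapped to a continuous position.
    theta = np.unwrap(np.arctan2(v_beta, v_alpha))
    return theta / (2.0 * np.pi) * pole_pair_pitch

# Synthetic check: two electrical cycles of clean 120-degree-spaced signals.
t = np.linspace(0.0, 2.0, 500)
va = np.sin(2 * np.pi * t)
vb = np.sin(2 * np.pi * t - 2 * np.pi / 3)
vc = np.sin(2 * np.pi * t + 2 * np.pi / 3)
pos = mover_position(va, vb, vc, pole_pair_pitch=30.0)
print(pos[-1] - pos[0])  # ~60.0, i.e., two electrical cycles of travel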
ERIC Educational Resources Information Center
Eshleman, Winston Hull
Compared were programed materials and conventional methods for teaching two units of eighth grade science. Programed materials used were linear programed books requiring constructed responses. The conventional methods included textbook study, written exercises, lectures, discussions, demonstrations, experiments, chalkboard drawings, films,…
[Psychiatric Rehabilitation - From the Linear Continuum Approach Towards Supported Inclusion].
Richter, Dirk; Hertig, Res; Hoffmann, Holger
2016-11-01
Background: For many decades, psychiatric rehabilitation in the German-speaking countries has followed a conventional linear continuum approach. Methods: Recent developments in important fields related to psychiatric rehabilitation (UN Convention on the Rights of Persons with Disabilities, theory of rehabilitation, empirical research) are reviewed. Results: Common to all developments in the reviewed fields are the principles of choice, autonomy and social inclusion. These principles contradict the conventional linear continuum approach. Conclusions: The linear continuum approach of psychiatric rehabilitation should be replaced by the "supported inclusion" approach. © Georg Thieme Verlag KG Stuttgart · New York.
Okamoto, Takuma; Sakaguchi, Atsushi
2017-03-01
Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness over conventional acoustic energy difference maximization has been demonstrated in computer simulations. To establish the effectiveness of the proposal in actual environments, this paper experimentally validates the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least squares-based pressure matching, using an actually implemented linear array of 64 loudspeakers in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.
Integrating conventional and inverse representation for face recognition.
Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David
2014-10-01
Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always well reflect the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates conventional and inverse representation-based classification for better face recognition. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximation representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper shows the theoretical foundation and rationale of the proposed method. Moreover, this paper shows for the first time that a basic property of the human face, its symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, their use enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
AlBarakati, SF; Kula, KS; Ghoneima, AA
2012-01-01
Objective: The aim of this study was to assess the reliability and reproducibility of angular and linear measurements of conventional and digital cephalometric methods. Methods: A total of 13 landmarks and 16 skeletal and dental parameters were defined and measured on pre-treatment cephalometric radiographs of 30 patients. The conventional and digital tracings and measurements were performed twice by the same examiner with a 6-week interval between measurements. The reliability within each method was determined using Pearson's correlation coefficient (r2). The reproducibility between methods was calculated by paired t-test. The level of statistical significance was set at p < 0.05. Results: All measurements for each method had r2 above 0.90 (strong correlation) except maxillary length, which had a correlation of 0.82 for conventional tracing. Significant differences between the two methods were observed in most angular and linear measurements except for ANB angle (p = 0.5), angle of convexity (p = 0.09), anterior cranial base (p = 0.3) and the lower anterior facial height (p = 0.6). Conclusion: In general, both methods of conventional and digital cephalometric analysis are highly reliable. Although the reproducibility of the two methods showed some statistically significant differences, most differences were not clinically significant. PMID:22184624
Podoleanu, Adrian Gh; Bradu, Adrian
2013-08-12
Conventional spectral domain interferometry (SDI) methods suffer from the need for data linearization. When applied to optical coherence tomography (OCT), conventional SDI methods are limited in their 3D capability, as they cannot deliver direct en-face cuts. Here we introduce a novel SDI method, which eliminates these disadvantages. We denote this method as Master-Slave Interferometry (MSI), because a signal is acquired by a slave interferometer for an optical path difference (OPD) value determined by a master interferometer. The MSI method radically changes the main building block of an SDI sensor and of a spectral domain OCT set-up. The serially provided signal in conventional technology is replaced by multiple signals, one for each OPD point in the object investigated. This opens novel avenues in parallel sensing and in parallelization of signal processing in 3D-OCT, with applications in high-resolution medical imaging and microscopy investigation of biosamples. Eliminating the need for linearization leads to lower-cost OCT systems and opens potential avenues for increasing the speed of production of en-face OCT images in comparison with conventional SDI.
NASA Astrophysics Data System (ADS)
He, Yu; Shen, Yuecheng; Feng, Xiaohua; Liu, Changjun; Wang, Lihong V.
2017-08-01
A circularly polarized antenna, providing more homogeneous illumination than a linearly polarized antenna, is more suitable for microwave-induced thermoacoustic tomography (TAT). The conventional realization of circular polarization uses a helical antenna, but it suffers from low efficiency, low power capacity, and limited aperture in TAT systems. Here, we report an implementation of a circularly polarized illumination method in TAT by inserting a single-layer linear-to-circular polarizer based on frequency selective surfaces between a pyramidal horn antenna and an imaging object. The performance of the proposed method was validated by both simulations and experimental imaging of a breast tumor phantom. The results showed that circular polarization was achieved, and the resultant thermoacoustic signal-to-noise ratio was twice that obtained with the helical antenna. The proposed method is more desirable in a waveguide-based TAT system than the conventional method.
Linear prediction data extrapolation superresolution radar imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing
1993-05-01
Range resolution and cross-range resolution of range-Doppler imaging radars are determined, respectively, by the effective bandwidth of the transmitted signal and the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time. In this paper, a linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by conventional FFT range-Doppler processing and improving the resolution capability of range-Doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating observed data beyond the observation windows by means of linear prediction and then performing the conventional IDFT of the extrapolated data. Live data from a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and from a flying Boeing 727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT yields either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle of the object, or equal-quality images from smaller bandwidth and total angle.
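The two steps named above (linear-prediction extrapolation of the record, then a conventional DFT) are easy to sketch. The least-squares predictor fit, the model order, and the amount of extrapolation below are illustrative assumptions; the paper's exact estimator is not reproduced.

import numpy as np

def ar_fit(x, p):
    # Least-squares fit of an order-p forward linear predictor:
    # x[n] ~ a[0]*x[n-1] + ... + a[p-1]*x[n-p].
    rows = np.array([x[n - p : n][::-1] for n in range(p, len(x))])
    a, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)
    return a

def ar_extrapolate(x, a, n_extra):
    # Extend the record beyond the observation window with the predictor.
    p = len(a)
    y = list(x)
    for _ in range(n_extra):
        y.append(np.dot(a, y[-1 : -p - 1 : -1]))
    return np.asarray(y)

# Two scatterers separated by less than the conventional Fourier resolution
# of a 64-sample window (1/64 cycles/sample).
n = np.arange(64)
x = np.exp(2j * np.pi * 0.20 * n) + np.exp(2j * np.pi * 0.21 * n)
xe = ar_extrapolate(x, ar_fit(x, p=20), n_extra=192)
spectrum = np.fft.fft(xe)  # sharper peaks than np.fft.fft(x, 256)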
Ohuchida, Kenoki; Moriyama, Taiki; Shindo, Koji; Manabe, Tatsuya; Ohtsuka, Takao; Shimizu, Shuji; Nakamura, Masafumi
2017-01-01
Background: We previously reported the use of an inverted T-shaped method to obtain a suitable view for hand sewing to close the common entry hole when the linear stapler was fired for esophagojejunostomy after laparoscopic total gastrectomy (LTG). This conventional method involved insertion of the fixed cartridge fork to the Roux limb and the fine movable anvil fork to the esophagus to avoid perforation of the jejunum. However, insertion of the movable anvil fork to the esophagus during this procedure often requires us to strongly push down the main body of the stapler with the fixed cartridge fork to bring the direction of the anvil fork in line with the direction of the long axis of the esophagus while controlling the opening of the movable anvil fork. We therefore modified this complicated inverted T-shaped method using a linear stapler with a movable cartridge fork. This modified method involved insertion of the movable cartridge fork into the Roux limb followed by natural, easy insertion of the fixed anvil fork into the esophagus without controlling the opening of the movable cartridge fork. Methods: We performed LTG in a total of 155 consecutive patients with gastric cancer from November 2007 to December 2015 in Kyushu University Hospital. After LTG, we performed the conventional inverted T-shaped method using a linear stapler with a fixed cartridge fork in 61 patients from November 2007 to July 2011 (fixed cartridge group). From August 2011, we used a linear stapler with a movable cartridge fork and performed the modified inverted T-shaped method in 94 patients (movable cartridge group). We herein compare the short-term outcomes in 94 cases of LTG using the modified method (movable cartridge fork) with those in 61 cases using the conventional method (fixed cartridge fork). Results: We found no significant differences in the perioperative or postoperative events between the movable and fixed cartridge groups. One case of anastomotic leakage occurred in the fixed cartridge group, but no anastomotic leakage occurred in the movable cartridge group. Conclusions: Although there were no remarkable differences in the short-term outcomes between the movable and fixed cartridge groups, we believe that the modified inverted T-shaped method is technically more feasible and reliable than the conventional method and will contribute to the improved safety of LTG. PMID:28616606
Comparison between a model-based and a conventional pyramid sensor reconstructor.
Korkiakoski, Visa; Vérinaud, Christophe; Le Louarn, Miska; Conan, Rodolphe
2007-08-20
A model of a non-modulated pyramid wavefront sensor (P-WFS) based on Fourier optics has been presented. Linearizations of the model, represented as Jacobian matrices, are used to improve the P-WFS phase estimates. It has been shown in simulations that a linear approximation of the P-WFS is sufficient in closed-loop adaptive optics. A method to compute model-based synthetic P-WFS command matrices is also shown, and its performance is compared to the conventional calibration. It was observed that in poor visibility the new calibration is better than the conventional one.
Equivalent linearization for fatigue life estimates of a nonlinear structure
NASA Technical Reports Server (NTRS)
Miles, R. N.
1989-01-01
An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise, and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
Ohuchida, Kenoki; Nagai, Eishi; Moriyama, Taiki; Shindo, Koji; Manabe, Tatsuya; Ohtsuka, Takao; Shimizu, Shuji; Nakamura, Masafumi
2017-01-01
We previously reported the use of an inverted T-shaped method to obtain a suitable view for hand sewing to close the common entry hole when the linear stapler was fired for esophagojejunostomy after laparoscopic total gastrectomy (LTG). This conventional method involved insertion of the fixed cartridge fork to the Roux limb and the fine movable anvil fork to the esophagus to avoid perforation of the jejunum. However, insertion of the movable anvil fork to the esophagus during this procedure often requires us to strongly push down the main body of the stapler with the fixed cartridge fork to bring the direction of the anvil fork in line with the direction of the long axis of the esophagus while controlling the opening of the movable anvil fork. We therefore modified this complicated inverted T-shaped method using a linear stapler with a movable cartridge fork. This modified method involved insertion of the movable cartridge fork into the Roux limb followed by natural, easy insertion of the fixed anvil fork into the esophagus without controlling the opening of the movable cartridge fork. We performed LTG in a total of 155 consecutive patients with gastric cancer from November 2007 to December 2015 in Kyushu University Hospital. After LTG, we performed the conventional inverted T-shaped method using a linear stapler with a fixed cartridge fork in 61 patients from November 2007 to July 2011 (fixed cartridge group). From August 2011, we used a linear stapler with a movable cartridge fork and performed the modified inverted T-shaped method in 94 patients (movable cartridge group). We herein compare the short-term outcomes in 94 cases of LTG using the modified method (movable cartridge fork) with those in 61 cases using the conventional method (fixed cartridge fork). We found no significant differences in the perioperative or postoperative events between the movable and fixed cartridge groups. One case of anastomotic leakage occurred in the fixed cartridge group, but no anastomotic leakage occurred in the movable cartridge group. Although there were no remarkable differences in the short-term outcomes between the movable and fixed cartridge groups, we believe that the modified inverted T-shaped method is technically more feasible and reliable than the conventional method and will contribute to the improved safety of LTG.
NASA Astrophysics Data System (ADS)
Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.
2015-03-01
Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
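The reconstruction step can be sketched with the standard filtered back-projection (inverse Radon) routine from scikit-image, which is the transform the abstract invokes. Here a Shepp-Logan phantom and its forward Radon transform stand in for the stack of elevational projections collected at each rotation angle; that data layout is an assumption for illustration.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import iradon, radon, rescale

image = rescale(shepp_logan_phantom(), 0.25)  # stand-in for one slice
theta = np.linspace(0.0, 180.0, 90, endpoint=False)  # rotation angles, deg

# Each column plays the role of the projection acquired with the linear
# array rotated to one angle over the 180-degree sweep.
sinogram = radon(image, theta=theta)

# Standard inverse Radon transform (filtered back-projection), as in X-ray
# CT, returns the slice with isotropic in-plane resolution.
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")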
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hao; Yang, Weitao
We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within the linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated in all directions. The integration of orientation and QM linear response calculations together makes the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that QM calculation is only needed once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since ESP is directly fitted, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.
NASA Astrophysics Data System (ADS)
Miller, Kelsey; Guyon, Olivier
2016-07-01
This paper presents the early-stage simulation results of linear dark field control (LDFC) as a new approach to maintaining a stable dark hole within a stellar post-coronagraphic PSF. In practice, conventional speckle nulling is used to create a dark hole in the PSF, and LDFC is then employed to maintain the dark field by using information from the bright speckle field. The concept exploits the linear response of the bright speckle intensity to wavefront variations in the pupil, and therefore has many advantages over conventional speckle nulling as a method for stabilizing the dark hole. In theory, LDFC is faster, more sensitive, and more robust than using conventional speckle nulling techniques, like electric field conjugation, to maintain the dark hole. In this paper, LDFC theory, linear bright speckle characterization, and first results in simulation are presented as an initial step toward the deployment of LDFC on the UA Wavefront Control testbed in the coming year.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1980-01-01
Simple procedures are presented for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is provided for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are provided for determining the two phases of life. The procedure involves two steps, each similar to the conventional application of the commonly used linear damage rule. When the sum of cycle ratios based on phase 1 lives reaches unity, phase 1 is presumed complete, and further loadings are summed as cycle ratios on phase 2 lives. When the phase 2 sum reaches unity, failure is presumed to occur. No other physical properties or material constants than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons of both methods are discussed.
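The bookkeeping of the double linear damage rule described above is simple enough to state in a few lines. This is a minimal sketch; the Phase I and Phase II lives per loading level are inputs that, in the paper, come from the analytical expressions, and the numbers below are hypothetical.

def double_linear_damage(blocks):
    # blocks: sequence of (applied_cycles, phase1_life, phase2_life) per
    # loading event. Cycle ratios are summed on Phase I lives until that
    # sum reaches unity, then on Phase II lives; failure is presumed when
    # the Phase II sum reaches unity. Returns (phase1_sum, phase2_sum, failed).
    d1 = d2 = 0.0
    for n, n1, n2 in blocks:
        if d1 < 1.0:
            n_to_finish_phase1 = (1.0 - d1) * n1
            if n <= n_to_finish_phase1:
                d1 += n / n1
                continue
            d1 = 1.0
            n -= n_to_finish_phase1
        d2 += n / n2
        if d2 >= 1.0:
            return d1, d2, True
    return d1, d2, False

# Hypothetical two-level history: 2000 cycles at a level with lives
# (N1, N2) = (5000, 15000), then 4000 cycles at a level with (800, 4000).
print(double_linear_damage([(2000, 5000.0, 15000.0), (4000, 800.0, 4000.0)]))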
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1981-01-01
Simple procedures are given for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is given for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are given for determining the two phases of life. The procedure comprises two steps, each similar to the conventional application of the commonly used linear damage rule. Once the sum of cycle ratios based on Phase I lives reaches unity, Phase I is presumed complete, and further loadings are summed as cycle ratios based on Phase II lives. When the Phase II sum attains unity, failure is presumed to occur. It is noted that no physical properties or material constants other than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons are discussed for both methods.
Local Intrinsic Dimension Estimation by Generalized Linear Modeling.
Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru
2017-07-01
We propose a method for intrinsic dimension estimation. By fitting a regression model that relates a power of the distance from an inspection point to the number of samples contained in a ball of that radius, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we show experimentally that the proposed method outperforms a conventional local dimension estimation method.
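The two ingredients the abstract names, a log-log regression of neighbor counts against radius and a maximum-likelihood estimate from the same distances, can be sketched as follows. The Levina-Bickel form of the MLE and the choice k = 20 are assumptions for illustration; the paper's generalized-linear-model fit is not reproduced.

import numpy as np

def local_dimension(X, x0, k=20):
    # k smallest nonzero distances from the inspection point x0.
    r = np.sort(np.linalg.norm(X - x0, axis=1))
    r = r[r > 0][:k]
    # In d dimensions the count inside radius r grows like r**d, so the
    # slope of log N(r) versus log r estimates d.
    slope = np.polyfit(np.log(r), np.log(np.arange(1, k + 1)), 1)[0]
    # Maximum-likelihood (Levina-Bickel-type) estimate from the distances.
    mle = 1.0 / np.mean(np.log(r[-1] / r[:-1]))
    return slope, mle

# Points on a 2-D plane embedded in 5-D: both estimates should be near 2.
rng = np.random.default_rng(0)
X = np.zeros((2000, 5))
X[:, :2] = rng.normal(size=(2000, 2))
print(local_dimension(X, X[0]))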
Deviation Value for Conventional X-ray in Hospitals in South Sulawesi Province from 2014 to 2016
NASA Astrophysics Data System (ADS)
Bachtiar, Ilham; Abdullah, Bualkar; Tahir, Dahlan
2018-03-01
This paper describes the conventional X-ray machine parameters tested in the region of South Sulawesi from 2014 to 2016. The objective of this research is to determine the deviation of each parameter of conventional X-ray machines. The testing parameters were analyzed using quantitative methods with a participatory observational approach. Data collection was performed by testing the output of conventional X-ray machines using a non-invasive X-ray multimeter. The test parameters include tube voltage (kV) accuracy, radiation output linearity, reproducibility, and radiation beam quality (half-value layer, HVL). The results of the analysis show that the four conventional X-ray test parameters have varying deviation spans: the tube voltage (kV) accuracy has an average value of 4.12%, the average radiation output linearity is 4.47%, the average reproducibility is 0.62%, and the average radiation beam quality (HVL) is 3.00 mm.
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
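The idea of a linearly varying scaling factor is compact enough to sketch generically: evaluate both models and their gradients once at a point, form the scaling factor and its gradient there, and let the crude model carry the global trend. The one-dimensional toy functions below are illustrative assumptions, not the beam example of the paper.

import numpy as np

def gla(f_crude, f_refined, g_crude, g_refined, x0):
    # Global-local approximation: scale the crude model by a scaling
    # factor beta(x) linearized about x0 (g_* are gradient functions).
    beta0 = f_refined(x0) / f_crude(x0)
    dbeta = (g_refined(x0) - beta0 * g_crude(x0)) / f_crude(x0)  # quotient rule
    return lambda x: (beta0 + np.dot(dbeta, x - x0)) * f_crude(x)

# Toy crude/refined model pair and their gradients.
f_c = lambda x: 1.0 + x[0] ** 2
f_r = lambda x: 1.0 + x[0] ** 2 + 0.1 * x[0] ** 3
g_c = lambda x: np.array([2.0 * x[0]])
g_r = lambda x: np.array([2.0 * x[0] + 0.3 * x[0] ** 2])

approx = gla(f_c, f_r, g_c, g_r, np.array([1.0]))
print(approx(np.array([1.2])), f_r(np.array([1.2])))  # ~2.611 vs 2.613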
Terza, Joseph V; Bradford, W David; Dismuke, Clara E
2008-01-01
Objective: To investigate potential bias in the use of the conventional linear instrumental variables (IV) method for the estimation of causal effects in inherently nonlinear regression settings. Data Sources: Smoking Supplement to the 1979 National Health Interview Survey, National Longitudinal Alcohol Epidemiologic Survey, and simulated data. Study Design: Potential bias from the use of the linear IV method in nonlinear models is assessed via simulation studies and real-world data analyses in two commonly encountered regression settings: (1) models with a nonnegative outcome (e.g., a count) and a continuous endogenous regressor; and (2) models with a binary outcome and a binary endogenous regressor. Principal Findings: The simulation analyses show that substantial bias in the estimation of causal effects can result from applying the conventional IV method in inherently nonlinear regression settings. Moreover, the bias is not attenuated as the sample size increases. This point is further illustrated in the survey data analyses, in which IV-based estimates of the relevant causal effects diverge substantially from those obtained with appropriate nonlinear estimation methods. Conclusions: We offer this research as a cautionary note to those who would opt for the use of linear specifications in inherently nonlinear settings involving endogeneity. PMID:18546544
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
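Forward-backward prediction with a temporal weighting function can be sketched with one weighted least-squares solve. The shared weight vector below stands in for the quasi-closed-phase weighting of the paper (which emphasizes closed-phase samples located from glottal closure instants); treating both directions with the same coefficients is the standard modified-covariance convention and an assumption here.

import numpy as np

def wlp_forward_backward(x, p, w):
    # Order-p weighted forward-backward linear prediction for a real frame
    # x; w holds len(x) - p per-sample weights applied to both directions.
    N = len(x)
    rows_f = np.array([x[n - p : n][::-1] for n in range(p, N)])     # past -> x[n]
    y_f = x[p:]
    rows_b = np.array([x[n + 1 : n + p + 1] for n in range(N - p)])  # future -> x[n]
    y_b = x[: N - p]
    s = np.sqrt(np.concatenate([w, w]))
    a, *_ = np.linalg.lstsq(
        np.concatenate([rows_f, rows_b]) * s[:, None],
        np.concatenate([y_f, y_b]) * s,
        rcond=None,
    )
    return a  # roots of 1 - sum_k a[k] z**-(k+1) locate the formants

# With w = np.ones(len(x) - p) this reduces to the unweighted
# forward-backward (modified covariance) linear prediction method.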
NASA Astrophysics Data System (ADS)
Jung, I. I.; Lee, J. H.; Lee, C. S.; Choi, Y.-W.
2011-02-01
We propose a novel circuit to be applied to the front-end integrated circuits of gamma-ray spectroscopy systems. Our circuit is designed as a type of current conveyor (ICON) employing a constant-gm (transconductance) method, which can significantly improve the linearity of the amplified signals by using a large time constant and the time-invariant characteristics of an amplifier. The constant-gm behavior is obtained by a feedback control which keeps the transconductance of the input transistor constant. To verify the performance of the proposed circuit, the time constant variations for the channel resistances are simulated with the TSMC 0.18 μm transistor parameters using HSPICE and then compared with those of a conventional ICON. As a result, the proposed ICON shows only 0.02% output linearity variation and 0.19% time constant variation for input amplitudes up to 100 mV. These are significantly small values compared to a conventional ICON's 1.39% and 19.43%, respectively, under the same conditions.
NASA Astrophysics Data System (ADS)
Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan
2016-09-01
In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, we first simulate wave propagation for various kinds of linear, nonlinear, lossless and lossy materials in metal-insulator plasmonic structures using LMBS-FEM in MATLAB and compare the results with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only computationally more accurate and efficient than conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.
Matrix completion by deep matrix factorization.
Fan, Jicong; Cheng, Jieyu
2018-02-01
Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations still exist. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
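A minimal PyTorch sketch of the formulation described above: learnable low-dimensional latent variables are pushed through a small multilayer network, and latents and weights are optimized jointly on the observed entries only. Layer sizes, the optimizer, and the toy matrix are illustrative assumptions.

import torch

def deep_matrix_completion(M, mask, d=4, hidden=64, steps=2000):
    # M: (n, m) matrix; mask is 1 where an entry is observed, 0 otherwise.
    n, m = M.shape
    Z = torch.randn(n, d, requires_grad=True)      # unknown latent variables
    net = torch.nn.Sequential(                     # nonlinear decoder
        torch.nn.Linear(d, hidden), torch.nn.Tanh(),
        torch.nn.Linear(hidden, m),
    )
    opt = torch.optim.Adam([Z, *net.parameters()], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        # Reconstruction error measured on observed entries only.
        loss = (((net(Z) - M) * mask) ** 2).sum() / mask.sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Missing entries are recovered by propagating latents forward.
        return torch.where(mask.bool(), M, net(Z))

# Toy nonlinear low-rank matrix with ~40% of the entries missing.
t = torch.linspace(0, 3, 100).unsqueeze(1)
M = torch.sin(t @ torch.linspace(1, 2, 30).unsqueeze(0))
mask = (torch.rand_like(M) > 0.4).float()
completed = deep_matrix_completion(M, mask)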
Yang, S; Liu, D G
2014-01-01
Objectives: The purposes of the study are to investigate the consistency of linear measurements between CBCT orthogonally synthesized cephalograms and conventional cephalograms and to evaluate the influence of different magnifications on these comparisons based on a simulation algorithm. Methods: Conventional cephalograms and CBCT scans were taken on 12 dry skulls with spherical metal markers. Orthogonally synthesized cephalograms were created from CBCT data. Linear parameters on both cephalograms were measured via Photoshop CS v. 5.0 (Adobe® Systems, San Jose, CA), named measurement group (MG). Bland–Altman analysis was utilized to assess the agreement of two imaging modalities. Reproducibility was investigated using paired t-test. By a specific mathematical programme “cepha”, corresponding linear parameters [mandibular corpus length (Go-Me), mandibular ramus length (Co-Go), posterior facial height (Go-S)] on these two types of cephalograms were calculated, named simulation group (SG). Bland–Altman analysis was used to assess the agreement between MG and SG. Simulated linear measurements with varying magnifications were generated based on “cepha” as well. Bland–Altman analysis was used to assess the agreement of simulated measurements between two modalities. Results: Bland–Altman analysis suggested the agreement between measurements on conventional cephalograms and orthogonally synthesized cephalograms, with a mean bias of 0.47 mm. Comparison between MG and SG showed that the difference did not reach clinical significance. The consistency between simulated measurements of both modalities with four different magnifications was demonstrated. Conclusions: Normative data of conventional cephalograms could be used for CBCT orthogonally synthesized cephalograms during this transitional period. PMID:25029593
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-07-17
In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
Controller design approach based on linear programming.
Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa
2013-11-01
This study explains and demonstrates the design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined to be the linear quadratic regulator. The simulation results confirmed that compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to a control system. In addition, we also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor. © 2013 ISA. Published by ISA. All rights reserved.
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods fail to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
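For concreteness, here are minimal forms of the two conventional detectors named above, plus a stand-in for the PE statistic. The exact PE formula of the paper is not reproduced; the blend of a correlation term and a normalized Euclidean distance below is an assumption, as is the thresholding convention.

import numpy as np

def med_score(sample, baseline_mean):
    # Multivariate Euclidean distance of the newest water-quality sample
    # from the event-free baseline mean.
    return np.linalg.norm(sample - baseline_mean)

def lpf_score(history, coeffs):
    # Linear prediction filter residual for the newest sample, using an AR
    # model fit beforehand on event-free data.
    p = len(coeffs)
    predicted = np.dot(coeffs, history[-p - 1 : -1][::-1])
    return abs(history[-1] - predicted)

def pe_score(window, baseline_window):
    # Pearson-correlation / Euclidean-distance hybrid (illustrative form):
    # low correlation or large distance both push the score up.
    r = np.corrcoef(window, baseline_window)[0, 1]
    d = np.linalg.norm(window - baseline_window) / len(window)
    return (1.0 - r) + d

# An alarm is typically raised when a score exceeds a threshold calibrated
# on event-free data, e.g., mean + 3 standard deviations of the score.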
Tomita, Yuki; Uechi, Jun; Konno, Masahiro; Sasamoto, Saera; Iijima, Masahiro; Mizoguchi, Itaru
2018-04-17
We compared the accuracy of digital models generated by desktop scanning of conventional impression/plaster models versus intraoral scanning. Eight ceramic spheres were attached to the buccal molar regions of dental epoxy models, and reference linear-distance measurements were determined using a contact-type coordinate measuring instrument. Alginate (AI group) and silicone (SI group) impressions were taken and converted into cast models using dental stone; the models were scanned using a desktop scanner. As an alternative, intraoral scans were taken using an intraoral scanner, and digital models were generated from these scans (IOS group). Twelve linear-distance measurement combinations were calculated between different sphere centers for all digital models. There were no significant differences among the three groups using a total of six linear-distance measurements. When limited to five linear-distance measurements, the IOS group showed significantly higher accuracy compared to the AI and SI groups. Intraoral scans may be more accurate compared to scans of conventional impression/plaster models.
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using the L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion to the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
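The maximization the abstract describes admits a short (sub)gradient-ascent sketch for a single projection vector. The PCA-based initialization, step size, and iteration count are assumptions, and the safeguards of published L1-LDA algorithms (e.g., restarts when a projection lands exactly on a nondifferentiable point) are omitted.

import numpy as np

def lda_l1_direction(X, y, iters=500, lr=0.01):
    # Ascend J(w) = L1 between-class dispersion / L1 within-class dispersion.
    classes = np.unique(y)
    m = X.mean(axis=0)
    B = [(np.sum(y == c), X[y == c].mean(axis=0) - m) for c in classes]
    E = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    w = np.linalg.svd(X - m, full_matrices=False)[2][0]  # init: first PC
    for _ in range(iters):
        num = sum(n * abs(w @ d) for n, d in B)     # L1 between-class
        den = np.abs(E @ w).sum()                   # L1 within-class
        g_num = sum(n * np.sign(w @ d) * d for n, d in B)
        g_den = E.T @ np.sign(E @ w)
        w = w + lr * (g_num / den - num / den ** 2 * g_den)
        w = w / np.linalg.norm(w)                   # keep w on the unit sphere
    return w

# Two well-separated Gaussian classes, with one gross outlier that would
# dominate an L2-based criterion.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 5)), rng.normal(-2, 1, (50, 5))])
X[0] += 40.0
y = np.array([0] * 50 + [1] * 50)
print(lda_l1_direction(X, y))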
Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.
Choi, Jae-Seok; Kim, Munchurl
2017-03-01
Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, used only one simple yet coarse linear mapping per patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experimental results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN15). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance, with an average PSNR gain of 0.79 dB, and can be used for scale factors of 3 or higher.
A New SEYHAN's Approach in Case of Heterogeneity of Regression Slopes in ANCOVA.
Ankarali, Handan; Cangur, Sengul; Ankarali, Seyit
2018-06-01
In this study, a new approach named SEYHAN is proposed so that conventional ANCOVA can be used instead of robust or nonlinear ANCOVA when the assumptions of linearity and homogeneity of regression slopes are not met. The proposed SEYHAN's approach involves transformation of the continuous covariate into a categorical structure when the relationship between covariate and dependent variable is nonlinear and the regression slopes are not homogeneous. A simulated data set was used to explain SEYHAN's approach. In this approach, after the MARS method was used to categorize the covariate, we performed conventional ANCOVA in each subgroup constituted according to the knot values, as well as a two-factor analysis of variance. The first model is simpler than the second model, which includes an interaction term. Since the model with the interaction effect has more subjects, the power of the test also increases and the existing significant difference is revealed better. With the help of this approach, non-linearity and heterogeneity of regression slopes are no longer a problem for data analysis with the conventional linear ANCOVA model. It can be used quickly and efficiently in the presence of one or more covariates.
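The mechanics of the approach (categorize the covariate at the knots, then fit a factorial model) can be sketched with statsmodels. Here the knot location is assumed known (x = 5); in the paper it is found by MARS. All data and parameter values are simulated for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0, 10, n)                 # continuous covariate
g = rng.integers(0, 2, n)                 # treatment group
# Nonlinear covariate effect with group-dependent slopes, i.e., exactly the
# situation in which conventional ANCOVA assumptions fail.
y = (np.where(x < 5, 1.0 + 0.2 * x, 2.0 - 0.3 * (x - 5))
     + 0.8 * g * (x > 5) + rng.normal(0, 0.4, n))

df = pd.DataFrame({
    "y": y,
    "g": g,
    "xcat": pd.cut(x, bins=[0, 5, 10], labels=["low", "high"]),
})
# Two-factor ANOVA with interaction after categorizing x at the knot.
model = smf.ols("y ~ C(g) * C(xcat)", data=df).fit()
print(anova_lm(model, typ=2))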
Carbon dioxide stripping in aquaculture -- part III: model verification
Colt, John; Watten, Barnaby; Pfeiffer, Tim
2012-01-01
Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and a one-parameter linear-regression method were evaluated for carbon dioxide stripping data. For values of KLaCO2 < approximately 1.5/h, the 2-point or ASCE method fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas phase enrichment remains to be determined. The one-parameter linear regression model was used to vary C*CO2 over the test, but it did not result in a better fit to the experimental data when compared to the ASCE or fixed-C*CO2 assumptions.
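Both estimators named above are easy to state for a single stripping record C(t) approaching an equilibrium concentration C*. The synthetic concentrations, the assumed fixed C* for the 2-point estimate, and the starting values are illustrative placeholders, not the study's data.

import numpy as np
from scipy.optimize import curve_fit

def asce_model(t, c_star, c0, kla):
    # Standard mass-transfer model: C(t) = C* - (C* - C0) * exp(-KLa * t).
    return c_star - (c_star - c0) * np.exp(-kla * t)

# Synthetic CO2 stripping record (time in hours, concentration in mg/L).
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0]) / 60.0
c = np.array([20.0, 17.1, 14.8, 11.4, 9.1, 6.9, 5.6])

# Nonlinear (ASCE-style) fit of all three parameters at once.
(c_star, c0, kla), _ = curve_fit(asce_model, t, c, p0=(3.0, 20.0, 1.0))

# 2-point estimate with an assumed, fixed equilibrium concentration.
cs = 3.0
kla_2pt = np.log((c[1] - cs) / (c[-1] - cs)) / (t[-1] - t[1])
print(kla, kla_2pt)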
Morphology filter bank for extracting nodular and linear patterns in medical images.
Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki
2017-04-01
Using image processing to extract nodular or linear shadows is a key technique of computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level. The synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. Our proposed method shows better performance in a quantitative evaluation using a synthesized image than a conventional method based on a Hessian matrix, which is often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) microcalcifications of various sizes in mammograms can be extracted, (2) blood vessels of various sizes in retinal fundus images can be extracted, and (3) thoracic CT images can be reconstructed while removing normal vessels. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
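One analysis level of such a bank can be sketched with standard grayscale morphology: a white top-hat with a disk emphasizes nodular patterns, and the maximum over openings with oriented line segments, minus an opening with a disk, emphasizes linear ones. This is an illustrative heuristic in the spirit of the abstract, not the paper's perfect-reconstruction filter bank; the scale and angle step are assumptions.

import numpy as np
from skimage.morphology import disk, opening, white_tophat

def line_footprint(length, angle_deg):
    # Binary line-segment structuring element at the given angle.
    t = np.deg2rad(angle_deg)
    half = length // 2
    fp = np.zeros((length, length), dtype=bool)
    for k in range(-half, half + 1):
        fp[int(round(half + k * np.sin(t))), int(round(half + k * np.cos(t)))] = True
    return fp

def nodular_and_linear(img, scale=7):
    # Nodular: bright structures smaller than the disk survive the top-hat.
    nodular = white_tophat(img, disk(scale // 2))
    # Linear: elongated structures survive an opening along their own
    # direction but not an opening with a disk of comparable width.
    directional = np.max(
        [opening(img, line_footprint(2 * scale + 1, a)) for a in range(0, 180, 15)],
        axis=0,
    )
    linear = np.clip(directional - opening(img, disk(scale // 2)), 0.0, None)
    return nodular, linear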
Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn
2007-01-01
The conjugate gradient method is verified to be efficient for nonlinear optimization problems with large-dimension data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of both kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has better performance than conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT.
A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
1996-01-01
Three methods to optimize rotorcraft aeromechanical behavior for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint penalty methods such as those employed by conventional controllers in that the constraints are handled as actual constraints to an optimization problem rather than as just additional terms in the performance index. The first method is to use a Non-linear Programming algorithm to solve the problem directly. The second method is to solve the full set of non-linear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then to simply select the best solution. The effects of maneuvers and aeroelasticity on the systems matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the systems matrix. Cases run to date indicate that the first method of solution is reliable, robust, and easiest to use, and that it was superior to the conventional controllers which were considered.
NASA Astrophysics Data System (ADS)
Arenas, Gustavo; Noriega, Sergio; Vallo, Claudia; Duchowicz, Ricardo
2007-03-01
A fiber optic sensing method based on a Fizeau-type interferometric scheme was employed for monitoring linear polymerization shrinkage in dental restoratives. This technique offers several advantages over the conventional methods of measuring polymerization contraction. This simple, compact, non-invasive and self-calibrating system competes with both conventional and other high-resolution bulk interferometric techniques. In this work, an analysis of the quality of interference signal and fringes visibility was performed in order to characterize their resolution and application range. The measurements of percent linear contraction as a function of the sample thickness were carried out in this study on two dental composites: Filtek P60 (3M ESPE) Posterior Restorer and Filtek Z250 (3M ESPE) Universal Restorer. The results were discussed with respect to others obtained employing alternative techniques.
Improved method for calculating neoclassical transport coefficients in the banana regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taguchi, M.
The conventional neoclassical moment method in the banana regime is improved by increasing the accuracy of the approximation to the linearized Fokker-Planck collision operator. This improved method is formulated for a multiple ion plasma in general tokamak equilibria. The explicit computation in a model magnetic field shows that the neoclassical transport coefficients can be accurately calculated in the full range of aspect ratio by the improved method. Some neoclassical transport coefficients for the intermediate aspect ratio are found to deviate appreciably from those obtained by the conventional moment method. The differences between the transport coefficients with these two methods are up to about 20%.
Geometric Integration of Weakly Dissipative Systems
NASA Astrophysics Data System (ADS)
Modin, K.; Führer, C.; Söderlind, G.
2009-09-01
Some problems in mechanics, e.g. in bearing simulation, contain subsystems that are conservative as well as weakly dissipative subsystems. Our experience is that geometric integration methods are often superior for such systems, as long as the dissipation is weak. Here we develop adaptive methods for dissipative perturbations of Hamiltonian systems. The methods are "geometric" in the sense that the form of the dissipative perturbation is preserved. The methods are linearly explicit, i.e., they require the solution of a linear subsystem. We sketch an analysis in terms of backward error analysis, and numerical comparisons with a conventional RK method of the same order are given.
NASA Astrophysics Data System (ADS)
He, Xin; Frey, Eric C.
2007-03-01
Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal-to-noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and 3-HO templates. We conclude that the 3-HO based on decision theory has advantages both in its decision theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision theoretic point of view.
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for the orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether use of the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing characteristics of low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
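As a concrete illustration of matrix-based averaging, the sketch below computes the chordal (Euclidean) mean of a set of orientations by averaging the rotation matrices and projecting back onto SO(3) with an SVD, and contrasts it with the flawed per-angle arithmetic average. This is one standard construction consistent with the abstract, not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# three nearby orientations given as xyz Euler angles (degrees), invented data
rots = R.from_euler("xyz", [[10, 40, 5], [14, 44, 2], [12, 38, 8]], degrees=True)

# chordal (Euclidean) mean: arithmetic matrix mean projected to the nearest rotation
M = rots.as_matrix().mean(axis=0)
U, _, Vt = np.linalg.svd(M)
R_mean = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
print("matrix-based mean (deg):", R.from_matrix(R_mean).as_euler("xyz", degrees=True))

# conventional (flawed) approach: average each Euler angle independently
print("per-angle mean (deg):  ", rots.as_euler("xyz", degrees=True).mean(axis=0))
```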
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
Elongation cutoff technique armed with quantum fast multipole method for linear scaling.
Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko
2009-11-30
A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.
Significance of parametric spectral ratio methods in detection and recognition of whispered speech
NASA Astrophysics Data System (ADS)
Mathur, Arpit; Reddy, Shankar M.; Hegde, Rajesh M.
2012-12-01
In this article the significance of a new parametric spectral ratio method that can be used to detect whispered speech segments within normally phonated speech is described. Adaptation methods based on maximum likelihood linear regression (MLLR) are then used to realize a mismatched train-test style speech recognition system. The proposed parametric spectral ratio method computes a ratio spectrum of the linear prediction (LP) and minimum variance distortionless response (MVDR) spectra. The smoothed ratio spectrum is then used to effectively detect whispered segments of speech within neutral speech segments. The proposed LP-MVDR ratio method exhibits robustness at different SNRs, as indicated by the whisper diarization experiments conducted on the CHAINS and the cell phone whispered speech corpora. The proposed method also performs better than the conventional methods for whisper detection. In order to integrate the proposed whisper detection method into a conventional speech recognition engine with minimal changes, adaptation methods based on the MLLR are used herein. The hidden Markov models corresponding to neutral mode speech are adapted to the whispered mode speech data in the whispered regions detected by the proposed ratio method. The performance of this method is first evaluated on whispered speech data from the CHAINS corpus. The second set of experiments is conducted on the cell phone corpus of whispered speech, which is collected using a set-up that is used commercially for handling public transactions. The proposed whisper speech recognition system performs better than several conventional methods. The results indicate the possibility of a whispered speech recognition system for cell phone based transactions.
ERIC Educational Resources Information Center
Baker, Bruce D.; Richards, Craig E.
1999-01-01
Applies neural network methods for forecasting 1991-95 per-pupil expenditures in U.S. public elementary and secondary schools. Forecasting models included the National Center for Education Statistics' multivariate regression model and three neural architectures. Regarding prediction accuracy, neural network results were comparable or superior to…
Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel
2015-01-01
The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold-standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method, and a newly validated technique based on V8 brushing that includes a profilometry-based evaluation of dentin wear, referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. The commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry-accepted upper limit of 250; that is, 2.5 times the level of abrasion measured using the ISO 11609 calcium pyrophosphate abrasivity reference as the control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r2 = 0.7102), and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated with, but also proportional to, conventional RDA measures. The linearity and proportionality of the results of the current study support that the two methods (RDA and RDA-PE) provide similar results, and justify applying the upper abrasivity limit of 250 to both RDA and RDA-PE.
NASA Astrophysics Data System (ADS)
Sharan, A. M.; Sankar, S.; Sankar, T. S.
1982-08-01
A new approach for calculating the response spectral density of a linear stationary random multi-degree-of-freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained by this new approach contains the spectral densities and cross-spectral densities of the system's generalized displacements and velocities. The new method requires significantly less computation time than the conventional method for calculating response spectral densities. Two numerical examples are presented to compare the computation times quantitatively.
Multivariate detrending of fMRI signal drifts for real-time multiclass pattern classification.
Lee, Dongha; Jang, Changwon; Park, Hae-Jeong
2015-03-01
Signal drift in functional magnetic resonance imaging (fMRI) is an unavoidable artifact that limits classification performance in multi-voxel pattern analysis of fMRI. As conventional methods to reduce signal drift, global demeaning or proportional scaling disregards regional variations of drift, whereas voxel-wise univariate detrending is too sensitive to noisy fluctuations. To overcome these drawbacks, we propose a multivariate real-time detrending method for multiclass classification that involves spatial demeaning at each scan and the recursive detrending of drifts in the classifier outputs driven by a multiclass linear support vector machine. Experiments using binary and multiclass data showed that the linear trend estimation of the classifier output drift for each class (a weighted sum of drifts in the class-specific voxels) was more robust against voxel-wise artifacts that lead to inconsistent spatial patterns and the effect of online processing than voxel-wise detrending. The classification performance of the proposed method was significantly better, especially for multiclass data, than that of voxel-wise linear detrending, global demeaning, and classifier output detrending without demeaning. We concluded that the multivariate approach using classifier output detrending of fMRI signals with spatial demeaning preserves spatial patterns, is less sensitive than conventional methods to sample size, and increases classification performance, which is a useful feature for real-time fMRI classification. Copyright © 2014 Elsevier Inc. All rights reserved.
A developed nearly analytic discrete method for forward modeling in the frequency domain
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai
2018-02-01
High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling. We first derive the discretization of the frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of the numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of the DNAD method with that of the conventional NAD method, and the results demonstrate the superiority of the proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inversion results are obtained.
Alternate energy sources for catheter ablation.
Wang, P J; Homoud, M K; Link, M S; Estes III, N A
1999-07-01
Because of the limitations of conventional radiofrequency ablation in creating large or linear lesions, alternative energy sources have been used as possible methods of catheter ablation. Modified radiofrequency energy, cryoablation, and microwave, laser, and ultrasound technologies may be able to create longer, deeper, and more controlled lesions and may be particularly suited for the treatment of ventricular tachycardias and for linear atrial ablation. Future studies will establish the efficacy of these new and promising technologies.
Han, Hyung Joon; Choi, Sae Byeol; Park, Man Sik; Lee, Jin Suk; Kim, Wan Bae; Song, Tae Jin; Choi, Sang Yong
2011-07-01
Single port laparoscopic surgery has come to the forefront of minimally invasive surgery. For those familiar with conventional techniques, however, this type of operation demands a different type of eye/hand coordination and involves unfamiliar working instruments. Herein, the authors describe the learning curve and the clinical outcomes of single port laparoscopic cholecystectomy in 150 consecutive patients with benign gallbladder disease. All patients underwent single port laparoscopic cholecystectomy using a homemade glove port, performed by one of five operators with different levels of experience in laparoscopic surgery. The learning curve for each operator was fitted using the non-linear ordinary least squares method based on a non-linear regression model. Mean operating time was 77.6 ± 28.5 min. Fourteen patients (6.0%) were converted to conventional laparoscopic cholecystectomy. Complications occurred in 15 patients (10.0%), as follows: bile duct injury (n = 2), surgical site infection (n = 8), seroma (n = 2), and wound pain (n = 3). One operator reached a learning curve plateau at 61.4 min per procedure after 8.5 cases, an improvement of 95.3 min over his initial operation time. Younger surgeons showed significant decreases in mean operation time and achieved stable mean operation times; in particular, their operation times decreased significantly after 20 cases. Experienced laparoscopic surgeons can safely perform single port laparoscopic cholecystectomy using conventional or angled laparoscopic instruments. The present study shows that an operator can overcome the single port laparoscopic cholecystectomy learning curve in about eight cases.
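A minimal sketch of the kind of non-linear least-squares learning-curve fit described here, assuming an exponential decay-to-plateau model (the paper's exact regression model is not specified in the abstract) and invented operation-time data:

```python
import numpy as np
from scipy.optimize import curve_fit

# illustrative data: operation time (min) vs consecutive case number
cases = np.arange(1, 31)
rng = np.random.default_rng(1)
times = 60 + 95 * np.exp(-cases / 4.0) + rng.normal(0, 8, cases.size)

def learning_curve(n, plateau, drop, rate):
    # plateau = stable operation time; drop = total improvement; rate = cases scale
    return plateau + drop * np.exp(-n / rate)

params, _ = curve_fit(learning_curve, cases, times, p0=(70, 90, 5))
plateau, drop, rate = params
print(f"plateau {plateau:.1f} min, improvement {drop:.1f} min, rate {rate:.1f} cases")
```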
Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing
Yan, Leyang; Zhang, Hui; Ye, Peiqing
2017-01-01
Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of conventional processing method is first analyzed, and then EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. Then, the harmonic component can be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method. PMID:28383505
He, Haijun; Shao, Liyang; Qian, Heng; Zhang, Xinpu; Liang, Jiawei; Luo, Bin; Pan, Wei; Yan, Lianshan
2017-03-20
A novel demodulation method for Sagnac loop interferometer based sensors has been proposed and demonstrated, which unwraps the phase changes obtained by birefringence interrogation. A temperature sensor based on a Sagnac loop interferometer was used to verify the feasibility of the proposed method. Several tests over a 40 °C temperature range were carried out, showing a linearity of 0.9996 over the full range. The proposed scheme is universal for all Sagnac loop interferometer based sensors and has an unlimited linear measurement range, outperforming the conventional demodulation method based on peak/dip tracing. Furthermore, the influence of the wavelength sampling interval and wavelength span on the demodulation error is discussed in this work. The proposed interrogation method is of great significance for Sagnac loop interferometer sensors and might greatly enhance the usability of this type of sensor in practical applications.
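The key step, unwrapping the interferometric phase so the measurable range is no longer limited to a single fringe, can be sketched in a few lines; the linear phase-temperature law and the sensitivity below are assumptions for illustration only:

```python
import numpy as np

# simulated phase of the Sagnac interference vs temperature, wrapped to (-pi, pi]
temperature = np.linspace(0, 40, 400)          # deg C
true_phase = 0.9 * temperature                 # assumed linear phase-temperature law
wrapped = np.angle(np.exp(1j * true_phase))    # what a single-fringe reading sees

unwrapped = np.unwrap(wrapped)                 # remove the 2*pi ambiguities
coeffs = np.polyfit(temperature, unwrapped, 1) # linear fit over the full range
print("recovered sensitivity (rad/degC):", coeffs[0])
```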
Conditional parametric models for storm sewer runoff
NASA Astrophysics Data System (ADS)
Jonsdottir, H.; Nielsen, H. Aa; Madsen, H.; Eliasson, J.; Palsson, O. P.; Nielsen, M. K.
2007-05-01
The method of conditional parametric modeling is introduced for flow prediction in a sewage system. It is a well-known fact that in hydrological modeling the response (runoff) to input (precipitation) varies depending on soil moisture and several other factors; consequently, nonlinear input-output models are needed. The model formulation described in this paper is similar to that of traditional linear models such as finite impulse response (FIR) and autoregressive exogenous (ARX) models, except that the parameters vary as a function of some external variables. The parameter variation is modeled by local lines, using kernels for local linear regression; as such, the method might be referred to as a nearest neighbor method. The results achieved in this study were compared to results from the conventional linear methods, FIR and ARX. The increase in the coefficient of determination is substantial. Furthermore, the new approach conserves the mass balance better. Hence this new approach looks promising for various hydrological models and analyses.
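A minimal sketch of the conditional parametric idea, assuming a Gaussian kernel and a single external variable z: the coefficients of a local line in x are re-estimated at each z by kernel-weighted least squares. The data and the true coefficient functions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 1, n)               # input (precipitation-like)
z = rng.uniform(0, 1, n)               # external variable (soil-moisture-like)
y = (1 + 2 * z) * x + 0.5 * z + rng.normal(0, 0.05, n)

def local_coeffs(z0, bandwidth=0.1):
    w = np.exp(-0.5 * ((z - z0) / bandwidth) ** 2)        # Gaussian kernel weights
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta                                           # local [intercept, slope]

for z0 in (0.2, 0.5, 0.8):
    a, b = local_coeffs(z0)
    print(f"z={z0}: intercept {a:.2f} (true {0.5*z0:.2f}), "
          f"slope {b:.2f} (true {1 + 2*z0:.2f})")
```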
Semiconductor Laser Diode Arrays by MOCVD (Metalorganic Chemical Vapor Deposition)
1987-09-01
Laser diode arrays are intended to be used as an optical pump for solid state yttrium aluminum garnet (YAG) lasers. In particular, linear uniform... corresponds to about 8080 Å. Such thin layer structures, while difficult to grow by such conventional growth methods as liquid phase epitaxy (LPE)... lower yet than for DH lasers grown by LPE. Conventional self-aligned stripe laser: this structure is formed by growing (on an n-type GaAs substrate...
DUL, MITCHELL W.; SWANSON, WILLIAM H.
2006-01-01
Purposes: The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Methods: Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. Results: The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Conclusions: Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli. PMID:16840860
Guevara, V R
2004-02-01
A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. The optimum metabolizable energy level and the corresponding performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function (income over feed cost) in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding programs were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability, and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices: prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%, with formulations identical in all other respects. Energy density, margin, and diet cost changed compared with the conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming in optimizing the performance response to energy density in broiler feed formulation, because the energy level does not need to be fixed in advance.
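A hedged sketch of the formulation: weight gain and feed intake are represented as quadratics in energy density E, and the margin (income over feed cost) is maximized with a bounded scalar search. All coefficients and prices below are invented for illustration, and scipy stands in for Excel Solver.

```python
import numpy as np
from scipy.optimize import minimize_scalar

gain = np.poly1d([-2.0e-6, 1.25e-2, -17.0])   # weight gain (kg/bird) vs E (kcal/kg)
feed = np.poly1d([-8.0e-7, 5.2e-3, -4.0])     # feed intake (kg/bird) vs E
price_broiler = 0.9                           # $/kg live weight (assumed)

def feed_cost(E):                             # $/kg feed, rising with energy density
    return 0.05 + 6.0e-5 * E

def margin(E):                                # income over feed cost, per bird
    return price_broiler * gain(E) - feed_cost(E) * feed(E)

res = minimize_scalar(lambda E: -margin(E), bounds=(2800, 3400), method="bounded")
print(f"optimum energy density: {res.x:.0f} kcal/kg, margin: {-res.fun:.2f} $/bird")
```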
Accurate evaluation of exchange fields in finite element micromagnetic solvers
NASA Astrophysics Data System (ADS)
Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.
2012-04-01
Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than the conventionally used approaches based on linear basis functions. Importantly, QBFs allow the error in computing the exchange field to be reduced by increasing the mesh density, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.
Bobo-García, Gloria; Davidov-Pardo, Gabriel; Arroqui, Cristina; Vírseda, Paloma; Marín-Arroyo, María R; Navarro, Montserrat
2015-01-01
Total phenolic content (TPC) and antioxidant activity (AA) assays in microplates save resources and time, and can therefore replace the conventional methods, which are time-consuming, labour-intensive and use large amounts of reagents. An intra-laboratory validation of the Folin-Ciocalteu microplate method to measure TPC and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) microplate method to measure AA was performed and compared with conventional spectrophotometric methods. To compare the TPC methods, the confidence intervals of a linear regression were used. In the range of 10-70 mg L(-1) of gallic acid equivalents (GAE), both methods were equivalent. To compare the AA methodologies, the F-test and t-test were used in a range from 220 to 320 µmol L(-1) of Trolox equivalents. Both methods had homogeneous variances, and the means were not significantly different. The limits of detection and quantification for the TPC microplate method were 0.74 and 2.24 mg L(-1) GAE, and for the DPPH method 12.07 and 36.58 µmol L(-1) of Trolox equivalents. The relative standard deviations of repeatability and reproducibility for both microplate methods were ≤ 6.1%. The accuracy ranged from 88% to 100%. The microplate and the conventional methods are equivalent at the 95% confidence level. © 2014 Society of Chemical Industry.
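The statistical comparison described for the AA methods can be sketched as follows, with invented paired readings; the F-test checks homogeneity of variances and the t-test compares the means:

```python
import numpy as np
from scipy import stats

# AA readings (umol/L Trolox equivalents) by microplate and conventional DPPH assays
micro = np.array([252, 261, 249, 270, 255, 266, 259, 248])
conv = np.array([250, 263, 252, 268, 251, 262, 257, 252])

# F-test for homogeneity of variances
F = np.var(micro, ddof=1) / np.var(conv, ddof=1)
df = micro.size - 1
p_F = 2 * min(stats.f.cdf(F, df, df), 1 - stats.f.cdf(F, df, df))

# two-sample t-test on the means, assuming equal variances if the F-test passes
t, p_t = stats.ttest_ind(micro, conv, equal_var=True)
print(f"F={F:.2f} (p={p_F:.2f}); t={t:.2f} (p={p_t:.2f})")
```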
NASA Astrophysics Data System (ADS)
McDonald, Michael C.; Kim, H. K.; Henry, J. R.; Cunningham, I. A.
2012-03-01
The detective quantum efficiency (DQE) is widely accepted as a primary measure of x-ray detector performance in the scientific community. A standard method for measuring the DQE, based on IEC 62220-1, requires the system to have a linear response, meaning that the detector output signals are proportional to the incident x-ray exposure. However, many systems have a non-linear response due to characteristics of the detector, or to post-processing of the detector signals, that cannot be disabled and may involve unknown algorithms considered proprietary by the manufacturer. For these reasons, the DQE has not been considered a practical candidate for routine quality assurance testing in a clinical setting. In this article we describe a method that can be used to measure the DQE of both linear and non-linear systems that employ only linear image processing algorithms. The method was validated on a cesium iodide based flat-panel system that simultaneously stores a raw (linear) and a processed (non-linear) image for each exposure. The resulting DQE was equivalent to a conventional standards-compliant DQE within measurement precision, and the gray-scale inversion and linear edge enhancement did not affect the DQE result. While not IEC 62220-1 compliant, the method may be adequate for QA programs.
Some Alignment Considerations for the Next Linear Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruland, R
Next Linear Collider type accelerators require a new level of alignment quality. The relative alignment of these machines is to be maintained in an error envelope dimensioned in micrometers and for certain parts in nanometers. In the nanometer domain our terra firma cannot be considered monolithic but compares closer to jelly. Since conventional optical alignment methods cannot deal with the dynamics and cannot approach the level of accuracy, special alignment and monitoring techniques must be pursued.
NASA Technical Reports Server (NTRS)
Fricke, C. L.
1975-01-01
A solution to the problem of reflection from a semi-infinite atmosphere is presented, based upon Chandrasekhar's H-function method for linearly anisotropic phase functions. A modification to the Gauss quadrature formula was developed which gives about the same accuracy with 10 points as the conventional Gauss quadrature does with 100 points. A computer program achieving this solution is described, and results are presented for several illustrative cases.
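For reference, the conventional Gauss-Legendre quadrature that the report improves upon looks like this; the integrand is only a stand-in with an H-equation-like shape, not Chandrasekhar's actual kernel:

```python
import numpy as np

# conventional Gauss-Legendre quadrature mapped to [0, 1]
def gauss_legendre_01(f, npts):
    x, w = np.polynomial.legendre.leggauss(npts)   # nodes/weights on [-1, 1]
    return 0.5 * np.sum(w * f(0.5 * (x + 1.0)))    # affine map to [0, 1]

f = lambda mu: mu / (mu + 0.3)                     # illustrative kernel shape
exact = 1.0 - 0.3 * np.log(1.3 / 0.3)              # closed form for comparison
for n in (10, 100):
    print(n, gauss_legendre_01(f, n) - exact)
```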
Sum-of-Squares-Based Region of Attraction Analysis for Gain-Scheduled Three-Loop Autopilot
NASA Astrophysics Data System (ADS)
Seo, Min-Won; Kwon, Hyuck-Hoon; Choi, Han-Lim
2018-04-01
A conventional method of designing a missile autopilot is to linearize the original nonlinear dynamics at several trim points, to determine linear controllers for each linearized model, and finally to implement a gain-scheduling technique. The validation of such a controller is often based on linear system analysis for the linear closed-loop system at the trim conditions. Although this type of gain-scheduled linear autopilot works well in practice, validation based solely on linear analysis may not be sufficient to fully characterize the closed-loop system, especially when the aerodynamic coefficients exhibit substantial nonlinearity with respect to the flight condition. The purpose of this paper is to present a methodology for analyzing the stability of a gain-scheduled controller in a setting close to the original nonlinear one. The method is based on sum-of-squares (SOS) optimization, which can be used to characterize the region of attraction of a polynomial system by solving convex optimization problems. The applicability of the proposed SOS-based methodology is verified on a short-period autopilot of a skid-to-turn missile.
Gauge invariance of excitonic linear and nonlinear optical response
NASA Astrophysics Data System (ADS)
Taghizadeh, Alireza; Pedersen, T. G.
2018-05-01
We study the equivalence of four different approaches to calculate the excitonic linear and nonlinear optical response of multiband semiconductors. These four methods derive from two choices of gauge, i.e., length and velocity gauges, and two ways of computing the current density, i.e., direct evaluation and evaluation via the time-derivative of the polarization density. The linear and quadratic response functions are obtained for all methods by employing a perturbative density-matrix approach within the mean-field approximation. The equivalence of all four methods is shown rigorously, when a correct interaction Hamiltonian is employed for the velocity gauge approaches. The correct interaction is written as a series of commutators containing the unperturbed Hamiltonian and position operators, which becomes equivalent to the conventional velocity gauge interaction in the limit of infinite Coulomb screening and infinitely many bands. As a case study, the theory is applied to hexagonal boron nitride monolayers, and the linear and nonlinear optical response found in different approaches are compared.
NASA Astrophysics Data System (ADS)
Birk, Udo; Szczurek, Aleksander; Cremer, Christoph
2017-12-01
Current approaches to overcome the conventional limit of the resolution potential of light microscopy (of about 200 nm for visible light), often suffer from non-linear effects, which render the quantification of the image intensities in the reconstructions difficult, and also affect the quantification of the biological structure under investigation. As an attempt to face these difficulties, we discuss a particular method of localization microscopy which is based on photostable fluorescent dyes. The proposed method can potentially be implemented as a fast alternative for quantitative localization microscopy, circumventing the need for the acquisition of thousands of image frames and complex, highly dye-specific imaging buffers. Although the need for calibration remains in order to extract quantitative data (such as the number of emitters), multispectral approaches are largely facilitated due to the much less stringent requirements on imaging buffers. Furthermore, multispectral acquisitions can be readily obtained using commercial instrumentation such as e.g. the conventional confocal laser scanning microscope.
Evaluation of airborne lidar data to predict vegetation Presence/Absence
Palaseanu-Lovejoy, M.; Nayegandhi, A.; Brock, J.; Woodman, R.; Wright, C.W.
2009-01-01
This study evaluates the capabilities of the Experimental Advanced Airborne Research Lidar (EAARL) in delineating vegetation assemblages in Jean Lafitte National Park, Louisiana. Five-meter-resolution grids of bare earth, canopy height, canopy-reflection ratio, and height of median energy were derived from EAARL data acquired in September 2006. Ground-truth data were collected along transects to assess species composition, canopy cover, and ground cover. To decide which model is more accurate, comparisons of generalized linear models and generalized additive models were conducted using conventional evaluation methods (i.e., sensitivity, specificity, Kappa statistics, and area under the curve) and two new indexes, net reclassification improvement and integrated discrimination improvement. Generalized additive models were superior to generalized linear models in modeling presence/absence in training vegetation categories, but no statistically significant differences between the two models were found in the classification accuracy at validation locations using conventional evaluation methods, although statistically significant improvements in net reclassification were observed. © 2009 Coastal Education and Research Foundation.
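A minimal presence/absence sketch in the same spirit, using a logistic generalized linear model and AUC on held-out points; the predictors, coefficients, and train/validation split are synthetic stand-ins for the lidar-derived grids:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# presence/absence from lidar-like predictors; synthetic stand-in data
rng = np.random.default_rng(3)
n = 400
X = np.column_stack([rng.normal(5, 2, n),      # canopy height (m)
                     rng.uniform(0, 1, n)])    # canopy-reflection ratio
logit = -4 + 0.6 * X[:, 0] + 2.0 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # presence/absence labels

model = LogisticRegression().fit(X[:300], y[:300])          # training locations
auc = roc_auc_score(y[300:], model.predict_proba(X[300:])[:, 1])  # validation
print(f"validation AUC: {auc:.2f}")
```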
Systematic methods for the design of a class of fuzzy logic controllers
NASA Astrophysics Data System (ADS)
Yasin, Saad Yaser
2002-09-01
Fuzzy logic control, a relatively new branch of control, can be used effectively whenever conventional control techniques become inapplicable or impractical. Various attempts have been made to create a generalized fuzzy control system and to formulate an analytically based fuzzy control law. In this study, two methods, the left and right parameterization method and the normalized spline-based membership function method, were utilized for formulating analytical fuzzy control laws in important practical control applications. The first model was used to design an idle speed controller, while the second was used to control an inverted pendulum problem. The results of both showed that a fuzzy logic control system based on the developed models could be used effectively to control highly nonlinear and complex systems. This study also investigated the application of fuzzy control in areas not fully utilizing fuzzy logic control. Three important practical applications pertaining to the automotive industry were studied. The first automotive-related application was the idle speed control of spark ignition engines, using two fuzzy control methods: (1) left and right parameterization, and (2) fuzzy clustering techniques with experimental data. The simulation and experimental results showed that a fuzzy controller with performance comparable to a conventional controller could be designed based only on experimental data and intuitive knowledge of the system. In the second application, the automotive cruise control problem, a fuzzy control model was developed using a parameter-adaptive Proportional plus Integral plus Derivative (PID)-type fuzzy logic controller. Results were comparable to those using linearized conventional PID and linear quadratic regulator (LQR) controllers and, in certain cases and conditions, the developed controller outperformed the conventional PID and LQR controllers. The third application involved the air/fuel ratio control problem, using fuzzy clustering techniques, experimental data, and a conversion algorithm to develop a fuzzy-based control algorithm. Results were similar to those obtained by recently published conventional control based studies. The influence of the fuzzy inference operators and parameters on the performance and stability of the fuzzy logic controller was studied. Results indicated that the selection of certain parameters, or combinations of parameters, greatly affects the performance and stability of the fuzzy controller. Diagnostic guidelines for tuning or changing certain factors or parameters to improve controller performance were developed based on knowledge gained from conventional control methods and from the experimental and simulation results of this study.
NASA Astrophysics Data System (ADS)
Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji
2015-06-01
We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.
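The exact-arithmetic ingredient of true orbit generation can be illustrated with Python's rational numbers: iterating a rational-coefficient piecewise linear map from a rational seed is exact but, as below, collapses onto a periodic (atypical) orbit, which is precisely why the authors' selection of algebraic-number seeds matters. The algebraic-number machinery itself is not reproduced here.

```python
from fractions import Fraction

# exact iteration of a piecewise linear map with rational coefficients
def tent(x):                        # 2x on [0, 1/2], 2(1 - x) on (1/2, 1]
    return 2 * x if x <= Fraction(1, 2) else 2 * (1 - x)

x = Fraction(2, 7)                  # rational seed: orbit is exactly periodic
orbit = []
for _ in range(10):
    orbit.append(x)
    x = tent(x)
print(orbit)                        # cycles 2/7 -> 4/7 -> 6/7 -> 2/7 ...
```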
Masood, Athar; Stark, Ken D; Salem, Norman
2005-10-01
Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.
Wired: Energy Drinks, Jock Identity, Masculine Norms, and Risk Taking
ERIC Educational Resources Information Center
Miller, Kathleen E.
2008-01-01
Objective: The author examined gendered links among sport-related identity, endorsement of conventional masculine norms, risk taking, and energy-drink consumption. Participants: The author surveyed 795 undergraduate students enrolled in introductory-level courses at a public university. Methods: The author conducted linear regression analyses of…
Seino, Junji; Nakai, Hiromi
2012-06-28
An accurate and efficient scheme for two-component relativistic calculations at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level is presented. The present scheme, termed local unitary transformation (LUT), is based on the locality of the relativistic effect. Numerical assessments of the LUT scheme were performed on diatomic molecules such as HX and X2 (X = F, Cl, Br, I, and At) and on hydrogen halide clusters (HX)n (X = F, Cl, Br, and I). Total energies obtained by the LUT method agree well with conventional IODKH results. The computational costs of the LUT method are drastically lower than those of conventional methods, since the former scales linearly with system size and has a small prefactor.
NASA Technical Reports Server (NTRS)
Samba, A. S.
1985-01-01
The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail, and the performance of the VCR on a three-parameter model problem is illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the "Customized Reduction of Augmented Triangles" (CRAT) method. CRAT has the dominant characteristics of an efficient VPS 32 algorithm: it is tailored to the pipeline architecture of the VPS 32, and as a consequence the algorithm is implicitly vectorizable.
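For orientation, the conventional (non-vectorized) way to solve such a banded system with a library routine is shown below; the VCR and CRAT algorithms target the same class of problems but restructure the elimination for pipeline hardware. The tridiagonal system here is an invented example.

```python
import numpy as np
from scipy.linalg import solve_banded

# direct solution of a tridiagonal (banded) system in banded storage
n = 8
ab = np.zeros((3, n))        # rows: superdiagonal, main diagonal, subdiagonal
ab[0, 1:] = -1.0             # superdiagonal
ab[1, :] = 2.0               # main diagonal
ab[2, :-1] = -1.0            # subdiagonal
b = np.ones(n)

x = solve_banded((1, 1), ab, b)   # (1, 1) = one lower, one upper diagonal
print(x)
```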
Elzanfaly, Eman S; Hegazy, Maha A; Saad, Samah S; Salem, Maissa Y; Abd El Fattah, Laila E
2015-03-01
The introduction of sustainable development concepts to analytical laboratories has recently gained interest; however, most conventional high-performance liquid chromatography methods consider neither the environmental effect of the chemicals used nor the amount of waste produced. The aim of this work was to prove that conventional methods can be replaced by greener ones with the same analytical parameters. The suggested methods were designed so that they neither use nor produce harmful chemicals and produce minimum waste, so they can be used in routine analysis without harming the environment. This was achieved by using green mobile phases and short run times. Four mixtures were chosen as models for this study: clidinium bromide/chlordiazepoxide hydrochloride, phenobarbitone/pipenzolate bromide, mebeverine hydrochloride/sulpiride, and chlorphenoxamine hydrochloride/caffeine/8-chlorotheophylline, either in bulk powder or in their dosage forms. The methods were validated with respect to linearity, precision, accuracy, system suitability, and robustness. The developed methods were compared to the reported conventional high-performance liquid chromatography methods regarding their greenness profile, and were found to be greener and more time- and solvent-saving than the reported ones; hence they can be used for routine analysis of the studied mixtures without harming the environment. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Application of the Radon-FCL approach to seismic random noise suppression and signal preservation
NASA Astrophysics Data System (ADS)
Meng, Fanlei; Li, Yue; Liu, Yanping; Tian, Yanan; Wu, Ning
2016-08-01
The fractal conservation law (FCL) is a linear partial differential equation modified by an anti-diffusive term of lower order. Analysis indicates that this algorithm can eliminate high frequencies while preserving or amplifying low and medium frequencies, which makes the method well suited to suppressing noise while enhancing or preserving seismic signals. However, the conventional FCL filters seismic data only along the time direction, thereby ignoring the spatial coherence between neighbouring traces, which leads to the loss of directional information. Therefore, we extend the conventional FCL into the time-space domain and propose a Radon-FCL approach. A Radon transform is applied to implement the FCL method in this article; performing FCL filtering in the Radon domain achieves a higher level of noise attenuation. Using this method, seismic reflection events can be recovered with the sacrifice of fewer frequency components while attenuating more random noise than conventional FCL filtering. Experiments using both synthetic and common shot point data demonstrate the advantages of the Radon-FCL approach over the conventional FCL method with regard to both random noise attenuation and seismic signal preservation.
A coupling method for a cardiovascular simulation model which includes the Kalman filter.
Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya
2012-01-01
Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate the different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved at each timestep. Therefore, we propose a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep, and the approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
Application of Neural Networks to Wind tunnel Data Response Surface Methods
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Zhao, J. L.; DeLoach, Richard
2000-01-01
The integration of nonlinear neural network methods with conventional linear regression techniques is demonstrated for representative wind tunnel force balance data modeling. This work was motivated by a desire to formulate precision intervals for response surfaces produced by neural networks. Applications are demonstrated for representative wind tunnel data acquired at NASA Langley Research Center and the Arnold Engineering Development Center in Tullahoma, TN.
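A small sketch of the idea, fitting both a conventional quadratic response surface and a neural network to the same synthetic force-balance-style data; the functional form and noise level are invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures

# response-surface sketch: lift coefficient vs angle of attack and Mach number
rng = np.random.default_rng(4)
n = 300
alpha = rng.uniform(-4, 12, n)                 # angle of attack (deg)
mach = rng.uniform(0.3, 0.9, n)
CL = 0.1 * alpha - 0.002 * alpha**2 + 0.3 * mach + rng.normal(0, 0.01, n)
X = np.column_stack([alpha, mach])

# conventional polynomial regression response surface
poly = PolynomialFeatures(degree=2)
lin = LinearRegression().fit(poly.fit_transform(X), CL)

# neural-network response surface for the same data
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, CL)
print("R^2 linear:", lin.score(poly.transform(X), CL))
print("R^2 neural:", net.score(X, CL))
```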
NASA Astrophysics Data System (ADS)
Kumar, Gaurav; Kumar, Ashok
2017-11-01
Structural control has gained significant attention in recent times. The standalone issue of power requirement during an earthquake has already been solved to a large extent by designing semi-active control systems using conventional linear quadratic control theory and many other intelligent control algorithms, such as fuzzy controllers and artificial neural networks. In conventional linear-quadratic regulator (LQR) theory, the values of the design parameters are decided at the time of designing the controller and cannot be subsequently altered. During an earthquake event, the response of the structure may increase or decrease depending on the quasi-resonance occurring between the structure and the earthquake. In this case, it is essential to modify the values of the design parameters of the conventional LQR controller to obtain the optimum control force for mitigating the earthquake-induced vibrations. A few studies have addressed this issue, but all of them required maintaining a database of earthquakes. To solve this problem and to find the optimized design parameters of the LQR controller in real time, a fast Fourier transform and particle swarm optimization based modified linear quadratic regulator method is presented here. This method comprises four different algorithms: particle swarm optimization (PSO), the fast Fourier transform (FFT), a clipped control algorithm, and the LQR. The FFT helps to obtain the dominant frequency for every time window. PSO finds the optimum gain matrix through a real-time update of the weighting matrix R, thereby dispensing with experimentation. The clipped control law is employed to match the magnetorheological (MR) damper force with the desired force given by the controller. The modified Bouc-Wen phenomenological model is used to capture the nonlinearities of the MR damper. The proposed method is assessed by simulating a three-story structure with an MR damper at the ground floor level, subjected to three different near-fault historical earthquake time histories, and the outcomes are compared with those of the simple conventional LQR. The results establish that the proposed methodology is more effective than conventional LQR controllers in reducing inter-storey drift, relative displacement, and acceleration response.
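The conventional LQR step that the method re-tunes online can be sketched as follows: for a fixed weighting matrix R, the gain follows from the continuous algebraic Riccati equation. The two-state plant below is an invented single-degree-of-freedom stand-in; the paper's contribution, updating R per time window via FFT and PSO, is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative single-degree-of-freedom structure in state-space form
A = np.array([[0.0, 1.0], [-100.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([100.0, 1.0])      # state weighting
R = np.array([[0.01]])         # control weighting (re-tuned online in the paper)

P = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.solve(R, B.T @ P)        # LQR gain K = R^{-1} B^T P
print("LQR gain:", K)
```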
Reflection full-waveform inversion using a modified phase misfit function
NASA Astrophysics Data System (ADS)
Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe
2017-09-01
Reflection full-waveform inversion (RFWI) updates both the low- and high-wavenumber components and yields more accurate initial models than conventional full-waveform inversion (FWI). However, conventional RFWI is strongly nonlinear because of the lack of low-frequency data and the complexity of the amplitude information; separating the phase from the amplitude makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses phase-envelope data to obtain pseudo-phase information. We then establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. Application to a portion of the Sigsbee2A model, and comparison with the inversion results of the improved RFWI and conventional FWI methods, verify that the pseudo-phase-based RFWI produces an accurate velocity model efficiently. Moreover, the proposed method is robust to noise and high frequencies.
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, which are called nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in essentially the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty, and compare the results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
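A minimal sketch of the extended-likelihood idea for straight-line regression with a common additive Type B component: the correction factor enters as a nuisance parameter with its own Gaussian PDF and is then removed from the quoted result by profiling. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# y = a + b*x with a common additive correction nu (Type B, sigma_B) as nuisance
rng = np.random.default_rng(5)
x = np.linspace(0, 10, 20)
sigma, sigma_B = 0.2, 0.5
y = 1.0 + 0.8 * x + 0.3 + rng.normal(0, sigma, x.size)   # 0.3 = unknown offset

def neg_log_like(p):
    # extended likelihood = data likelihood * Gaussian PDF of the nuisance nu;
    # joint minimization over (a, b, nu) gives the same point estimate as
    # profiling nu out for each (a, b)
    a, b, nu = p
    resid = y - (a + b * x + nu)
    return 0.5 * np.sum((resid / sigma) ** 2) + 0.5 * (nu / sigma_B) ** 2

fit = minimize(neg_log_like, [0.0, 1.0, 0.0])
a, b, nu = fit.x
print(f"a={a:.2f}, b={b:.2f}, profiled-out offset nu={nu:.2f}")
```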
Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.
Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar
2012-01-01
Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
Sun, Shumei; Zhou, Hao; Zhou, Bin; Hu, Ziyou; Hou, Jinlin; Sun, Jian
2012-05-01
To evaluate the sensitivity and specificity of nested PCR combined with pyrosequencing in the detection of the HBV drug-resistance gene, rtM204I (ATT) mutant and rtM204 (ATG) non-mutant plasmids mixed at different ratios were examined for mutations using nested PCR combined with pyrosequencing, and the results were compared with those of conventional PCR pyrosequencing to analyze the linearity and consistency of the two methods. Clinical specimens with different viral loads were examined for drug-resistant mutations using nested PCR pyrosequencing and nested PCR combined with dideoxy (Sanger) sequencing, to compare the detection sensitivity and specificity. The fitting curves demonstrated good linearity for both conventional PCR pyrosequencing and nested PCR pyrosequencing (R2 > 0.99, P < 0.05). Nested PCR showed better consistency with the predicted values than conventional PCR, and was superior to conventional PCR in detecting samples containing 90% mutant plasmid. In the detection of clinical specimens, Sanger sequencing had a significantly lower sensitivity than nested PCR pyrosequencing (92% vs 100%, P < 0.01). The detection sensitivity of Sanger sequencing varied with the viral load, especially in samples with low viral copies (HBV DNA ≤ 3 log10 copies/ml), where the sensitivity was 78%, significantly lower than that of pyrosequencing (100%, P < 0.01). Neither of the two methods yielded positive results for the negative control samples, suggesting good specificity. Compared with nested PCR combined with Sanger sequencing, nested PCR pyrosequencing has a higher sensitivity, especially in clinical specimens with low viral copies, which can be important for early detection of HBV mutant strains and hence more effective clinical management.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
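The critical linear solve described above is separable across frequencies after the FFT; for a single-channel problem each frequency contributes a rank-one system with a closed-form Sherman-Morrison solution. The sketch below illustrates only that step, under our own naming and a random test harness (not code from the patent):

```python
import numpy as np

def solve_freq_domain(D_hat, B_hat, rho):
    """Solve (D_k^H D_k + rho*I) x_k = b_k at every frequency k, where D_k is
    the 1 x M row of filter spectra, via the Sherman-Morrison identity.
    D_hat, B_hat: (N, M) arrays -- N frequencies, M dictionary elements."""
    db = np.sum(D_hat * B_hat, axis=1, keepdims=True)                # scalar D_k b_k
    dd = np.sum(D_hat * np.conj(D_hat), axis=1, keepdims=True).real  # ||D_k||^2
    return (B_hat - np.conj(D_hat) * (db / (rho + dd))) / rho        # O(M) per frequency

rng = np.random.default_rng(0)
N, M, rho = 64, 8, 1.0
D_hat = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
B_hat = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
X_hat = solve_freq_domain(D_hat, B_hat, rho)
A = rho * np.eye(M) + np.outer(np.conj(D_hat[3]), D_hat[3])  # dense check, one frequency
assert np.allclose(A @ X_hat[3], B_hat[3])
```

Each frequency then costs O(M) rather than O(M³), which together with the FFTs is consistent with the O(MN log N) scaling quoted in the abstract.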
Hintikka, Laura; Haapala, Markus; Kuuranne, Tiia; Leinonen, Antti; Kostiainen, Risto
2013-10-18
A gas chromatography-microchip atmospheric pressure photoionization-tandem mass spectrometry (GC-μAPPI-MS/MS) method was developed for the analysis of anabolic androgenic steroids in urine as their trimethylsilyl derivatives. The method utilizes a heated nebulizer microchip in atmospheric pressure photoionization mode (μAPPI) with chlorobenzene as dopant, which provides high ionization efficiency by producing abundant radical cations with minimal fragmentation. The performance of GC-μAPPI-MS/MS was evaluated with respect to repeatability, linearity, linear range, and limit of detection (LOD). The results confirmed the potential of the method for doping control analysis of anabolic steroids. Repeatability (RSD<10%), linearity (R(2)≥0.996) and sensitivity (LODs 0.05-0.1ng/mL) were acceptable. Quantitative performance of the method was tested and compared with that of conventional GC-electron ionization-MS, and the results were in good agreement. Copyright © 2013 Elsevier B.V. All rights reserved.
Studies of superresolution range-Doppler imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing; Yin, Jun; She, Zhishun
1993-02-01
This paper presents three superresolution imaging methods: the linear prediction data extrapolation DFT (LPDEDFT), the dynamic optimization linear least squares (DOLLS), and the Hopfield neural network nonlinear least squares (HNNNLS). Live data of a metalized scale-model B-52 aircraft, mounted on a rotating platform in a microwave anechoic chamber, have been processed in this way, as has a flying Boeing-727 aircraft. The imaging results indicate that, compared to the conventional Fourier method, these superresolution approaches can provide either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle in imaging, or equal-quality images from a smaller bandwidth and total rotation angle. Moreover, these methods are compared in respect of their resolution capability and computational complexity.
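As a rough illustration of the LPDEDFT idea, the sketch below fits a forward linear predictor to a measured record, extrapolates beyond the measured aperture, and takes the DFT of the extended record; the signal and parameters are illustrative inventions, not the paper's data:

```python
import numpy as np

def lp_extrapolate(x, p, n_extra):
    """Fit a p-th order forward linear predictor by least squares and
    extend the record by n_extra extrapolated samples."""
    N = len(x)
    A = np.array([[x[n - k] for k in range(1, p + 1)] for n in range(p, N)])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(a, out[-1:-p - 1:-1]))   # x[n] = sum_k a_k x[n-k]
    return np.asarray(out)

# Two closely spaced tones stand in for two scatterers: extrapolating the
# record before the DFT sharpens the otherwise merged spectral peaks.
n = np.arange(64)
x = np.cos(2 * np.pi * 0.20 * n) + np.cos(2 * np.pi * 0.22 * n)
spectrum = np.abs(np.fft.fft(lp_extrapolate(x, p=20, n_extra=192), 256))
```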
Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell method
NASA Technical Reports Server (NTRS)
Kaufman, H.
1977-01-01
Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell.
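A minimal sketch of the kind of computation described: the closed-loop quadratic cost for a static output-feedback gain is evaluated through a Lyapunov equation and minimized by a derivative-free direction-set search. The 2-state plant, the weights, and SciPy's 'Powell' optimizer are our stand-ins, not the paper's model or the exact Zangwill-Powell routine:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

A = np.array([[0.0, 1.0], [-2.0, -0.5]])    # toy plant
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                  # only one measurable output
Q, R, X0 = np.eye(2), np.eye(1), np.eye(2)  # weights and E[x0 x0^T]

def cost(k_flat):
    K = k_flat.reshape(1, 1)
    Acl = A - B @ K @ C                     # closed loop with u = -K y
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return 1e9                          # reject destabilizing gains
    # Acl^T P + P Acl = -(Q + C^T K^T R K C);  J = trace(P X0)
    P = solve_continuous_lyapunov(Acl.T, -(Q + C.T @ K.T @ R @ K @ C))
    return float(np.trace(P @ X0))

res = minimize(cost, x0=np.array([1.0]), method='Powell')  # gradient-free search
```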
On neural networks in identification and control of dynamic systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Hyland, David C.
1993-01-01
This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
The Accuracy of Shock Capturing in Two Spatial Dimensions
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Casper, Jay H.
1997-01-01
An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.
Inventory Management for Irregular Shipment of Goods in Distribution Centre
NASA Astrophysics Data System (ADS)
Takeda, Hitoshi; Kitaoka, Masatoshi; Usuki, Jun
2016-01-01
The shipping amount of commodity goods (foods, confectionery, dairy products, and general cosmetic and pharmaceutical products) changes irregularly at distribution centers dealing with general consumer goods. Because the shipment times and shipment amounts are irregular, demand forecasting becomes very difficult, and so does inventory control. Conventional inventory control methods cannot be applied to such commodity shipments. This paper proposes an inventory control method based on the cumulative flow curve, in which the order quantity is decided from the curve. Three forecasting methods are proposed: 1) the power method, 2) the polynomial method, and 3) the revised Holt's linear method, a trend-following variant of exponential smoothing (sketched below). This paper compares the economics of the conventional method, which relies on experienced staff, with the three newly proposed methods, and the effectiveness of the proposed methods is verified through numerical calculations.
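Of the three forecasting options, the revised Holt's linear method builds on standard trend-corrected exponential smoothing. A minimal sketch of the standard (unrevised) form, with illustrative smoothing constants and hypothetical shipment data:

```python
def holt_linear(y, alpha=0.5, beta=0.3, horizon=4):
    """Holt's linear (trend-corrected) exponential smoothing forecast."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

weekly_shipments = [120, 95, 0, 140, 60, 0, 180, 75]   # hypothetical, irregular
print(holt_linear(weekly_shipments))
```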
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun
1993-01-01
The conventional method of imposing time-dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant-coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle, (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third-order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.
Alsharbaty, Mohammed Hussein M; Alikhasi, Marzieh; Zarrati, Simindokht; Shamshiri, Ahmed Reza
2018-02-09
To evaluate the accuracy of a digital implant impression technique using a TRIOS 3Shape intraoral scanner (IOS) compared to conventional implant impression techniques (pick-up and transfer) in clinical situations. Thirty-six patients who had two implants (Implantium, internal connection) ranging in diameter between 3.8 and 4.8 mm in posterior regions participated in this study after signing a consent form. Thirty-six reference models (RM) were fabricated by attaching two impression copings intraorally, splinted with autopolymerizing acrylic resin, verified by sectioning through the middle of the index, and rejoined again with freshly mixed autopolymerizing acrylic resin pattern (Pattern Resin) with the brush bead method. After that, the splinted assemblies were attached to implant analogs (DANSE) and impressed with type III dental stone (Gypsum Microstone) in standard plastic die lock trays. Thirty-six working casts were fabricated for each conventional impression technique (i.e., pick-up and transfer). Thirty-six digital impressions were made with a TRIOS 3Shape IOS. Eight of the digitally scanned files were damaged; 28 digital scan files were retrieved to STL format. A coordinate-measuring machine (CMM) was used to record linear displacement measurements (x, y, and z-coordinates), interimplant distances, and angular displacements for the RMs and conventionally fabricated working casts. CATIA 3D evaluation software was used to assess the digital STL files for the same variables as the CMM measurements. CMM measurements made on the RMs and conventionally fabricated working casts were compared with 3D software measurements made on the digitally scanned files. Data were statistically analyzed using the generalized estimating equation (GEE) with an exchangeable correlation matrix and linear method, followed by the Bonferroni method for pairwise comparisons (α = 0.05). The results showed significant differences between the pick-up and digital groups in all of the measured variables (p < 0.001). Concerning the transfer and digital groups, the results were statistically significant in angular displacement (p < 0.001), distance measurements (p = 0.01), and linear displacement (p = 0.03); however, between the pick-up and transfer groups, there was no statistical significance in all of the measured variables (interimplant distance deviation, linear displacement, and angular displacement deviations). According to the results of this study, the digital implant impression technique had the least accuracy. Based on the study outcomes, distance and angulation errors associated with the intraoral digital implant impressions were too large to fabricate well-fitting restorations for partially edentulous patients. The pick-up implant impression technique was the most accurate, and the transfer technique revealed comparable accuracy to it. © 2018 by the American College of Prosthodontists.
An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-02-13
The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model, that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX) where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
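For orientation, the conventional formulation the abstract refers to reduces, in its simplest Tikhonov form, to a regularized least-squares solve in which the regularization weight is exactly the kind of hand-set tuning parameter the proposed variational Bayes method instead estimates from the data. A sketch with hypothetical dimensions:

```python
import numpy as np

def regularized_source_term(M, y, alpha):
    """Conventional baseline: min_x ||M x - y||^2 + alpha ||x||^2,
    where M is the source-receptor sensitivity (SRS) matrix, y the
    observations, and alpha a hand-tuned regularization weight."""
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + alpha * np.eye(n), M.T @ y)

rng = np.random.default_rng(0)
M = rng.random((50, 20))                           # hypothetical SRS matrix
x_true = np.maximum(rng.standard_normal(20), 0.0)  # non-negative release rates
y = M @ x_true + 0.05 * rng.standard_normal(50)
x_hat = regularized_source_term(M, y, alpha=0.1)
```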
Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method
NASA Astrophysics Data System (ADS)
Mehl, S.
2012-12-01
Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made between conventional iteratively coupled methods based on Picard iteration and those formulated with JFNK to gain insights into the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
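The appeal of JFNK noted above, needing only the residual vector, can be seen in a toy stand-in for an unconfined-flow problem, here solved with SciPy's Newton-Krylov wrapper; the discretization and boundary values are illustrative inventions, not a Modflow model:

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(h):
    """Residual of steady 1-D unconfined flow, d/dx(h dh/dx) + r = 0,
    on a unit-spacing grid with fixed-head boundaries. Only this residual
    is supplied; Jacobian-vector products are approximated internally."""
    F = np.empty_like(h)
    F[0], F[-1] = h[0] - 10.0, h[-1] - 8.0   # Dirichlet heads
    tE = 0.5 * (h[1:-1] + h[2:])             # midpoint transmissivities
    tW = 0.5 * (h[1:-1] + h[:-2])
    F[1:-1] = tE * (h[2:] - h[1:-1]) - tW * (h[1:-1] - h[:-2]) + 0.01
    return F

h = newton_krylov(residual, 9.0 * np.ones(50), method='lgmres')
```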
Identification and compensation of friction for a novel two-axis differential micro-feed system
NASA Astrophysics Data System (ADS)
Du, Fuxin; Zhang, Mingyang; Wang, Zhaoguo; Yu, Chen; Feng, Xianying; Li, Peigang
2018-06-01
Non-linear friction in a conventional drive feed system (CDFS) feeding at low speed is one of the main factors that lead to the complexity of the feed drive. The CDFS will inevitably enter or approach a non-linear creeping work area at extremely low speed. A novel two-axis differential micro-feed system (TDMS) is developed in this paper to overcome the accuracy limitation of CDFS. A dynamic model of TDMS is first established. Then, a novel all-component friction parameter identification method (ACFPIM) using a genetic algorithm (GA) to identify the friction parameters of a TDMS is introduced. The friction parameters of the ball screw and linear motion guides are identified independently using the method, assuring the accurate modelling of friction force at all components. A proportional-derivate feed drive position controller with an observer-based friction compensator is implemented to achieve an accurate trajectory tracking performance. Finally, comparative experiments demonstrate the effectiveness of the TDMS in inhibiting the disadvantageous influence of non-linear friction and the validity of the proposed identification method for TDMS.
NASA Astrophysics Data System (ADS)
Sui, Yi; Zheng, Ping; Tong, Chengde; Yu, Bin; Zhu, Shaohong; Zhu, Jianguo
2015-05-01
This paper describes a tubular dual-stator flux-switching permanent-magnet (PM) linear generator for a free-piston energy converter. The operating principle, topology, and design considerations of the machine are investigated. Taking into account the motion characteristics of the free-piston Stirling engine, a tubular dual-stator PM linear generator is designed by the finite element method. Some major structural parameters, such as the outer and inner radii of the mover, PM thickness, mover tooth width, and tooth widths of the outer and inner stators, are optimized to improve machine performance measures such as thrust capability and power density. In comparison with conventional single-stator PM machines, such as the moving-magnet linear machine and the flux-switching linear machine, the proposed dual-stator flux-switching PM machine shows advantages in higher mass power density, higher volume power density, and a lighter mover.
Biodosimetry estimate for high-LET irradiation.
Wang, Z Z; Li, W J; Zhi, D J; Jing, X G; Wei, W; Gao, Q X; Liu, B
2007-08-01
The purpose of this paper is to prepare for an easy and reliable biodosimeter protocol for radiation accidents involving high-linear energy transfer (LET) exposure. Human peripheral blood lymphocytes were irradiated using carbon ions (LET: 34.6 keV microm(-1)), and the chromosome aberrations induced were analyzed using both a conventional colcemid block method and a calyculin A induced premature chromosome condensation (PCC) method. At a lower dose range (0-4 Gy), the measured dicentric (dics) and centric ring chromosomes (cRings) provided reasonable dose information. At higher doses (8 Gy), however, the frequency of dics and cRings was not suitable for dose estimation. Instead, we found that the number of Giemsa-stained drug-induced G2 prematurely condensed chromosomes (G2-PCC) can be used for dose estimation, since the total chromosome number (including fragments) was linearly correlated with radiation dose (r = 0.99). The ratio of the longest and the shortest chromosome length of the drug-induced G2-PCCs increased with radiation dose in a linear-quadratic manner (r = 0.96), which indicates that this ratio can also be used to estimate radiation doses. Obviously, it is easier to establish the dose response curve using the PCC technique than using the conventional metaphase chromosome method. It is assumed that combining the ratio of the longest and the shortest chromosome length with analysis of the total chromosome number might be a valuable tool for rapid and precise dose estimation for victims of radiation accidents.
Talio, María Carolina; Acosta, María Gimena; Acosta, Mariano; Olsina, Roberto; Fernández, Liliana P
2015-05-15
A new method for zinc pre-concentration/separation and determination by molecular fluorescence is proposed. The metal was complexed with o-phenanthroline and eosin at pH 7.5 in Tris; a piece of filter paper was used as a solid support and solid fluorescent emission measured using a conventional quartz cuvette. Under optimal conditions, the limits of detection and quantification were 0.36 × 10⁻³ and 1.29 × 10⁻³ μg L⁻¹, respectively, and the linear range extended from 1.29 × 10⁻³ to 4.50 μg L⁻¹. This method showed good sensitivity and selectivity, and it was applied to the determination of zinc in foods and tap water. The absence of filtration reduced the consumption of water and electricity. Additionally, the use of common filter papers makes it a simpler and more rapid alternative to conventional methods, with sensitivity and accuracy similar to atomic spectroscopies using a typical laboratory instrument. Copyright © 2014 Elsevier Ltd. All rights reserved.
Solid-state NMR imaging system
Gopalsami, Nachappa; Dieckman, Stephen L.; Ellingson, William A.
1992-01-01
An apparatus for use with a solid-state NMR spectrometer includes a special imaging probe with linear, high-field strength gradient fields and high-power broadband RF coils using a back projection method for data acquisition and image reconstruction, and a real-time pulse programmer adaptable for use by a conventional computer for complex high speed pulse sequences.
Fillet Weld Stress Using Finite Element Methods
NASA Technical Reports Server (NTRS)
Lehnhoff, T. F.; Green, G. W.
1985-01-01
Average elastic Von Mises equivalent stresses were calculated along the throat of a single lap fillet weld. The average elastic stresses were compared to initial yield and to plastic instability conditions, and a factor to modify conventional design formulas is presented. The factor is a linear function of the thicknesses of the parent plates attached by the fillet weld.
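The equivalent stress used here is the standard Von Mises measure, computed from the deviatoric part of the stress tensor. A small helper with a uniaxial sanity check (a generic illustration, not the study's code):

```python
import numpy as np

def von_mises(sigma):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor:
    sqrt(3/2 * s:s), with s the deviatoric part of sigma."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    return float(np.sqrt(1.5 * np.tensordot(s, s)))

# Uniaxial tension check: equivalent stress equals the axial stress
sigma = np.diag([200.0, 0.0, 0.0])   # MPa
assert np.isclose(von_mises(sigma), 200.0)
```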
Simultaneous HPLC determination of flavonoids and phenolic acids profile in Pêra-Rio orange juice.
Mesquita, E; Monteiro, M
2018-04-01
The aim of this study was to develop and validate an HPLC-DAD method to evaluate the phenolic compounds profile of organic and conventional Pêra-Rio orange juice. The proposed method was validated for 10 flavonoids and 6 phenolic acids. A wide linear range (0.01-223.4 μg·g⁻¹), good accuracy (79.5-129.2%) and precision (CV ≤ 3.8%), low limits of detection (1-22 ng·g⁻¹) and quantification (0.7-7.4 μg), and overall ruggedness were attained. Good recovery was achieved for all phenolic compounds after extraction and cleanup. The method was applied to organic and conventional Pêra-Rio orange juices from the beginning, middle and end of the 2016 harvest. The flavones rutin, nobiletin and tangeretin, and the flavanones hesperidin, narirutin and eriocitrin were identified and quantified in all organic and conventional juices. Identity was confirmed by mass spectrometry. Nineteen non-identified phenolic compounds were quantified based on DAD spectra characteristic of the chemical class: 7 cinnamic acid derivatives, 6 flavanones and 6 flavones. The phenolic compounds profile of Pêra-Rio orange juices changed during the harvest; levels increased in organic orange juices, and decreased or remained about the same in conventional orange juices. Phenolic compounds levels were higher in organic (0.5-1143.7 mg·100 g⁻¹) than in conventional orange juices (0.5-689.7 mg·100 g⁻¹). PCA differentiated organic from conventional FS and NFC juices, and conventional FCOJ from conventional FS and NFC juices, thus differentiating cultivation and processing. Copyright © 2017. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Dennon, S. R.
1986-01-01
A review of the Melick method of inlet flow dynamic distortion prediction by statistical means is provided. These developments include the general Melick approach with full dynamic measurements, a limited dynamic measurement approach, and a turbulence modelling approach which requires no dynamic rms pressure fluctuation measurements. These modifications are evaluated by comparing predicted and measured peak instantaneous distortion levels from provisional inlet data sets. A nonlinear mean-line following vortex model is proposed and evaluated as a potential criterion for improving the peak instantaneous distortion map generated from the conventional linear vortex of the Melick method. The model is simplified to a series of linear vortex segments which lay along the mean line. Maps generated with this new approach are compared with conventionally generated maps, as well as measured peak instantaneous maps. Inlet data sets include subsonic, transonic, and supersonic inlets under various flight conditions.
Yamada, Toru; Umeyama, Shinji; Matsuda, Keiji
2012-01-01
In conventional functional near-infrared spectroscopy (fNIRS), systemic physiological fluctuations evoked by a body's motion and psychophysiological changes often contaminate fNIRS signals. We propose a novel method for separating functional and systemic signals based on their hemodynamic differences. Considering their physiological origins, we assumed a negative and positive linear relationship between oxy- and deoxyhemoglobin changes of functional and systemic signals, respectively. Their coefficients are determined by an empirical procedure. The proposed method was compared to conventional and multi-distance NIRS. The results were as follows: (1) Nonfunctional tasks evoked substantial oxyhemoglobin changes, and comparatively smaller deoxyhemoglobin changes, in the same direction by conventional NIRS. The systemic components estimated by the proposed method were similar to the above finding. The estimated functional components were very small. (2) During finger-tapping tasks, laterality in the functional component was more distinctive using our proposed method than that by conventional fNIRS. The systemic component indicated task-evoked changes, regardless of the finger used to perform the task. (3) For all tasks, the functional components were highly coincident with signals estimated by multi-distance NIRS. These results strongly suggest that the functional component obtained by the proposed method originates in the cerebral cortical layer. We believe that the proposed method could improve the reliability of fNIRS measurements without any modification in commercially available instruments. PMID:23185590
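Under the stated assumptions, the separation amounts to a 2 x 2 linear solve at each time point. The sketch below uses placeholder coefficients, whereas the paper determines them empirically; the function and variable names are ours:

```python
import numpy as np

def separate_components(d_oxy, d_deoxy, k_func=-0.6, k_sys=0.3):
    """Split oxy-/deoxy-Hb time series into functional and systemic parts,
    assuming deoxy = k_func * oxy for the functional component (k_func < 0)
    and deoxy = k_sys * oxy for the systemic one (k_sys > 0). The k values
    here are placeholders; the paper fixes them by an empirical procedure."""
    # oxy = f + s;  deoxy = k_func*f + k_sys*s  -> solve the 2x2 system
    A = np.array([[1.0, 1.0], [k_func, k_sys]])
    func_oxy, sys_oxy = np.linalg.solve(A, np.vstack([d_oxy, d_deoxy]))
    return func_oxy, sys_oxy
```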
Low-redundancy linear arrays in mirrored interferometric aperture synthesis.
Zhu, Dong; Hu, Fei; Wu, Liang; Li, Jun; Lang, Liang
2016-01-15
Mirrored interferometric aperture synthesis (MIAS) is a novel interferometry that can improve spatial resolution compared with that of conventional IAS. In one-dimensional (1-D) MIAS, antenna array with low redundancy has the potential to achieve a high spatial resolution. This Letter presents a technique for the direct construction of low-redundancy linear arrays (LRLAs) in MIAS and derives two regular analytical patterns that can yield various LRLAs in short computation time. Moreover, for a better estimation of the observed scene, a bi-measurement method is proposed to handle the rank defect associated with the transmatrix of those LRLAs. The results of imaging simulation demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Warsito, W.; Noorhamdani, A. S.; Suratmo; Dwi Sapri, R.; Alkaroma, D.; Azhar, A. Z.
2018-04-01
A simple method has been used for the synthesis of a benzimidazole derivative from citronellal in kaffir lime oil under microwave irradiation. The compound was also synthesized by conventional heating for comparison. In addition, microwave-assisted synthesis was compared between dichloromethane and methanol solvents, with reaction times varied from 30 to 70 minutes for microwave irradiation and from 4 to 12 h for conventional heating. The 2-citronellyl benzimidazole compound synthesized was characterised by FT-IR, GC-MS, and 1H and 13C NMR spectroscopy. Conventional and microwave-assisted synthesis were compared through the correlation between reaction time and percentage yield. The optimum times for microwave-assisted and conventional synthesis using dichloromethane solvent were 60 minutes (yield 19.23%) and 8 hours (yield 11.54%), respectively. Microwave-assisted synthesis thus showed a 157.81-fold increase over conventional heating. With methanol solvent the yield tends to increase linearly with time, but reaches only 0.77 times that obtained with dichloromethane solvent.
Stiffness optimization of non-linear elastic structures
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
2017-11-13
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness designs, are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels, the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
Luenser, Arne; Kussmann, Jörg; Ochsenfeld, Christian
2016-09-28
We present a (sub)linear-scaling algorithm to determine indirect nuclear spin-spin coupling constants at the Hartree-Fock and Kohn-Sham density functional levels of theory. Employing efficient integral algorithms and sparse algebra routines, an overall (sub)linear scaling behavior can be obtained for systems with a non-vanishing HOMO-LUMO gap. Calculations on systems with over 1000 atoms and 20 000 basis functions illustrate the performance and accuracy of our reference implementation. Specifically, we demonstrate that linear algebra dominates the runtime of conventional algorithms for 10 000 basis functions and above. Attainable speedups of our method exceed 6× in total runtime and 10× in the linear algebra steps for the tested systems. Furthermore, a convergence study of spin-spin couplings of an aminopyrazole peptide upon inclusion of the water environment is presented: using the new method it is shown that large solvent spheres are necessary to converge spin-spin coupling values.
Classification of speech dysfluencies using LPC based parameterization techniques.
Hariharan, M; Chee, Lim Sin; Ai, Ooi Chia; Yaacob, Sazali
2012-06-01
The goal of this paper is to discuss and compare three feature extraction methods: Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC) for recognizing stuttered events. Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for our analysis. The stuttered events were identified through manual segmentation and were used for feature extraction. Two simple classifiers, k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for speech dysfluency classification. A conventional validation method was used for testing the reliability of the classifier results. The effects of different frame lengths, the percentage of overlap, the value of the coefficient a in a first-order pre-emphasizer, and different orders p are discussed. The speech dysfluency classification accuracy was found to be improved by applying statistical normalization before feature extraction. The experimental investigation elucidated that LPC, LPCC and WLPCC features can be used for identifying stuttered events, with WLPCC features slightly outperforming LPCC and LPC features.
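For reference, a compact sketch of autocorrelation-method LPC (Levinson-Durbin recursion) and one common LPC-to-cepstrum recursion; sign conventions vary across texts, the WLPCC weighting is not reproduced here, and the frame below is synthetic:

```python
import numpy as np

def lpc(x, p):
    """LPC coefficients of A(z) = 1 + sum_k a_k z^-k via the
    autocorrelation method (Levinson-Durbin recursion)."""
    r = np.correlate(x, x, mode='full')[len(x) - 1:][:p + 1]
    a = np.zeros(p + 1); a[0] = 1.0
    e = r[0]
    for i in range(1, p + 1):
        k = -(r[i] + np.dot(a[1:i], r[1:i][::-1])) / e
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        e *= (1.0 - k * k)
    return a

def lpcc(a, q):
    """LPC -> cepstral coefficients c[1..q] (one common recursion)."""
    p = len(a) - 1
    c = np.zeros(q + 1)
    for n in range(1, q + 1):
        c[n] = -(a[n] if n <= p else 0.0)
        for k in range(1, n):
            if n - k <= p:
                c[n] -= (k / n) * c[k] * a[n - k]
    return c[1:]

frame = np.hamming(400) * np.random.default_rng(0).standard_normal(400)
features = lpcc(lpc(frame, p=12), q=12)
```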
Sun, Hao; Dul, Mitchell W; Swanson, William H
2006-07-01
The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli.
Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi
2017-11-05
We have implemented a linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theories (MPPT), as well as their combinations, automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)_TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies with significantly less computational cost than that of the conventional implementations. © 2017 Wiley Periodicals, Inc.
The 21st century skills with model eliciting activities on linear program
NASA Astrophysics Data System (ADS)
Handajani, Septriana; Pratiwi, Hasih; Mardiyana
2018-04-01
Human resources in the 21st century are required to master various forms of skills, including critical thinking and problem solving. Teaching in the 21st century integrates literacy skills, knowledge, attitudes, and mastery of ICT. This study aims to determine whether there are differences in the effect on learning outcomes of applying Model Eliciting Activities (MEAs) that integrate the 21st-century skills known as the 4Cs versus conventional learning. This research was conducted at a vocational high school in the odd semester of 2017 using the experimental method. The experimental class was treated with MEAs integrating 4C skills and the control class was given conventional learning. Data were collected through documentation and tests, and were analyzed using the Z-test. Data were obtained from the experimental class and the control class. The results showed that there are differences in the effect of applying MEAs that integrate 4C skills versus conventional learning on learning outcomes: classes with MEAs that integrate 4C skills achieved better learning outcomes than conventional learning classes. This happens because MEAs that integrate 4C skills can improve creativity, communication, collaboration, and problem-solving skills.
Elimination of numerical diffusion in 1 - phase and 2 - phase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajamaeki, M.
1997-07-01
The new hydraulics solution method PLIM (Piecewise Linear Interpolation Method) is capable of avoiding excessive errors from numerical diffusion and numerical dispersion. The hydraulics solver CFDPLIM uses PLIM and solves the time-dependent one-dimensional flow equations in network geometry. An example is given for 1-phase flow in a case where thermal-hydraulics and reactor kinetics are strongly coupled. Another example concerns oscillations in 2-phase flow. Neither of the example computations is possible with conventional methods.
Joshi, Varsha; Kumar, Vijesh; Rathore, Anurag S
2015-08-07
A method is proposed for rapid development of a short, analytical cation exchange high performance liquid chromatography method for analysis of charge heterogeneity in monoclonal antibody products. The parameters investigated and optimized include pH, shape of the elution gradient, and length of the column. The most important parameter for development of a shorter method is found to be the choice of the shape of the elution gradient. In this paper, we propose a step-by-step approach to develop a non-linear sigmoidal-shape gradient for analysis of charge heterogeneity in two different monoclonal antibody products. The use of this gradient not only decreases the run time of the method to 4 min, against the conventional method that takes more than 40 min, but also retains the resolution. Superiority of the phosphate gradient over the sodium chloride gradient for elution of mAbs is also observed. The method has been successfully evaluated for specificity, sensitivity, linearity, limit of detection, and limit of quantification. Application of this method as a potential at-line process analytical technology tool has been suggested. Copyright © 2015 Elsevier B.V. All rights reserved.
A simple linear model for estimating ozone AOT40 at forest sites from raw passive sampling data.
Ferretti, Marco; Cristofolini, Fabiana; Cristofori, Antonella; Gerosa, Giacomo; Gottardini, Elena
2012-08-01
A rapid, empirical method is described for estimating weekly AOT40 from ozone concentrations measured with passive samplers at forest sites. The method is based on linear regression and was developed after three years of measurements in Trentino (northern Italy). It was tested against an independent set of data from passive sampler sites across Italy. It provides good weekly estimates compared with those measured by conventional monitors (0.85 ≤ R(2) ≤ 0.970; 97 ≤ RMSE ≤ 302). Estimates obtained using passive sampling at forest sites are comparable to those obtained by another estimation method based on modelling hourly concentrations (R(2) = 0.94; 131 ≤ RMSE ≤ 351). Regression coefficients of passive sampling are similar to those obtained with conventional monitors at forest sites. Testing against an independent dataset generated by passive sampling provided similar results (0.86 ≤ R(2) ≤ 0.99; 65 ≤ RMSE ≤ 478). Errors tend to accumulate when weekly AOT40 estimates are summed to obtain the total AOT40 over the May-July period, and the median deviation between the two estimation methods based on passive sampling is 11%. The method proposed does not require any assumptions, complex calculation or modelling technique, and can be useful when other estimation methods are not feasible, either in principle or in practice. However, the method is not useful when estimates of hourly concentrations are of interest.
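The estimation step itself is ordinary least-squares regression of monitor-measured weekly AOT40 on passive-sampler concentrations. A sketch with hypothetical numbers, not the Trentino data:

```python
import numpy as np

# Hypothetical weekly passive-sampler means (ppb) and monitor AOT40 (ppb·h)
conc = np.array([28.0, 35.0, 41.0, 47.0, 52.0, 58.0, 63.0])
aot40 = np.array([150.0, 420.0, 800.0, 1300.0, 1900.0, 2600.0, 3300.0])

slope, intercept = np.polyfit(conc, aot40, 1)   # fit the linear model
pred = slope * conc + intercept                 # weekly AOT40 estimates
rmse = np.sqrt(np.mean((pred - aot40) ** 2))
r2 = 1 - np.sum((pred - aot40) ** 2) / np.sum((aot40 - aot40.mean()) ** 2)
```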
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator wind-up protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method to be examined is the multi-variable version of the single-input, single-output, high gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme to be examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed and the advantages and disadvantages of both of the IWP methods are presented.
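In the single-input case, the high-gain CAW idea reduces to the familiar back-calculation scheme, in which the integrator is also driven by the difference between the saturated and unsaturated controller outputs. A minimal sketch with illustrative gains and limits; the paper's schemes are the multivariable analogues of this:

```python
import numpy as np

def pi_antiwindup_step(e, integ, kp=2.0, ki=1.0, kt=5.0, dt=0.01,
                       u_min=-1.0, u_max=1.0):
    """One step of a PI controller with back-calculation anti-windup:
    a high tracking gain kt drains the integrator whenever the actuator
    output is clipped, so the integral state does not wind up."""
    u_unsat = kp * e + ki * integ
    u = np.clip(u_unsat, u_min, u_max)       # constrained actuator
    integ += dt * (e + kt * (u - u_unsat))   # tracks the saturated output
    return u, integ
```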
Dual-energy x-ray image decomposition by independent component analysis
NASA Astrophysics Data System (ADS)
Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang
2001-09-01
The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. The method applies because the dual-energy imaging model conforms to the ICA model: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) these take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorption mechanisms. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energy magnitude for imaging, while the results of the separation agree well with the conventional one.
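A minimal sketch of the separation step using scikit-learn's FastICA on two synthetically mixed images; the mixing weights and the random "bone"/"tissue" maps are invented stand-ins, and, as with any ICA, the recovered components come back in arbitrary order, sign, and scale:

```python
import numpy as np
from sklearn.decomposition import FastICA

H, W = 64, 64
rng = np.random.default_rng(0)
bone, tissue = rng.random((H, W)), rng.random((H, W))  # hypothetical sources
low_kv = 0.7 * bone + 0.3 * tissue                     # two mixed "exposures"
high_kv = 0.4 * bone + 0.6 * tissue

X = np.stack([low_kv.ravel(), high_kv.ravel()], axis=1)  # pixels x mixtures
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                                 # estimated sources
bone_est = S[:, 0].reshape(H, W)
tissue_est = S[:, 1].reshape(H, W)
```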
Ensemble of sparse classifiers for high-dimensional biological data.
Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao
2015-01-01
Biological data are often high in dimension while the number of samples is small. In such cases, the performance of classification can be improved by reducing the dimension of data, which is referred to as feature selection. Recently, a novel feature selection method has been proposed utilising the sparsity of high-dimensional biological data where a small subset of features accounts for most variance of the dataset. In this study we propose a new classification method for high-dimensional biological data, which performs both feature selection and classification within a single framework. Our proposed method utilises a sparse linear solution technique and the bootstrap aggregating algorithm. We tested its performance on four public mass spectrometry cancer datasets along with two other conventional classification techniques such as Support Vector Machines and Adaptive Boosting. The results demonstrate that our proposed method performs more accurate classification across various cancer datasets than those conventional classification techniques.
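A rough sketch of the bagged-sparse-classifier idea, using L1-penalized logistic regression as the sparse base learner (a stand-in for the paper's sparse linear solution technique) on synthetic high-dimensional, small-sample data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# p >> n toy data, loosely mimicking mass-spectrometry feature dimensions
X, y = make_classification(n_samples=80, n_features=2000,
                           n_informative=20, random_state=0)

sparse_base = LogisticRegression(penalty='l1', solver='liblinear', C=0.5)
bagged = BaggingClassifier(sparse_base, n_estimators=25, random_state=0)
print(cross_val_score(bagged, X, y, cv=5).mean())
```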
Transfer Alignment Error Compensator Design Based on Robust State Estimation
NASA Astrophysics Data System (ADS)
Lyou, Joon; Lim, You-Chol
This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship's roll and pitch. Major error sources for velocity and attitude matching are the lever arm effect, measurement time delay, and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity- and attitude-matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and the dominant Y-axis flexure, and by augmenting the delay state and flexure state into the conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of the time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors.
Pan, Rui; Wang, Hansheng; Li, Runze
2016-01-01
This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. PMID:28127109
NASA Astrophysics Data System (ADS)
Zhang, Jiaying; Gang, Tie; Ye, Chaofeng; Cong, Sen
2018-04-01
Linear-chirp-Golay (LCG)-coded excitation combined with pulse compression is proposed in this paper to improve time resolution and suppress sidelobes in ultrasonic testing. The LCG-coded excitation is a binary Golay complementary pair with a linear-chirp signal applied to every sub-pulse. Compared with conventional excitation, a common ultrasonic testing approach using a brief narrow pulse as the exciting signal, the performance of LCG-coded excitation, in terms of time-resolution improvement and sidelobe suppression, is studied via numerical and experimental investigations. The numerical simulations are implemented using the Matlab K-wave toolbox. The simulation results show that the time resolution of LCG excitation is 35.5% higher and the peak sidelobe level (PSL) is 57.6 dB lower than those of linear-chirp excitation with 2.4 MHz chirp bandwidth and 3 μs time duration. In the B-scan experiment, the time resolution of LCG excitation is higher and the PSL is lower than those of conventional brief-pulse excitation and chirp excitation. In terms of time resolution, the LCG-coded signal performs better than the chirp signal, and the impact of chirp bandwidth on the LCG-coded signal is smaller than that on the chirp signal. In addition, the sidelobe of the LCG-coded signal is lower than that of the chirp signal with pulse compression.
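A sketch of constructing an LCG-style excitation and its pulse compression: a binary Golay complementary pair is generated recursively, a linear chirp is applied to every sub-pulse, and the matched-filter outputs of the two codes are summed so that their range sidelobes largely cancel. The sampling rate and the use of self-correlation in place of a simulated echo are our simplifications; the 2.4 MHz / 3 μs chirp matches the abstract's parameters:

```python
import numpy as np

def golay_pair(n):
    """Binary Golay complementary pair of length 2**n (recursive build)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

fs, T, bw = 40e6, 3e-6, 2.4e6                      # sample rate, sub-pulse, bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.cos(2 * np.pi * (0.5 * bw / T) * t**2)  # linear-chirp sub-pulse

a, b = golay_pair(3)                               # 8-bit complementary pair
tx_a = np.concatenate([bit * chirp for bit in a])  # chirp on every sub-pulse
tx_b = np.concatenate([bit * chirp for bit in b])

# Pulse compression: matched-filter each code and sum the two outputs;
# the complementary property makes the code sidelobes largely cancel.
compressed = (np.correlate(tx_a, tx_a, 'full') +
              np.correlate(tx_b, tx_b, 'full'))
```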
Linear least-squares method for global luminescent oil film skin friction field analysis
NASA Astrophysics Data System (ADS)
Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu
2018-06-01
A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
Hanya, Shizuo
2013-01-01
Lack of high-fidelity simultaneous measurements of pressure and flow velocity in the aorta has impeded the direct validation of the water-hammer formula for estimating regional aortic pulse wave velocity (AO-PWV1) and has restricted the study of the change of beat-to-beat AO-PWV1 under varying physiological conditions in man. Aortic pulse wave velocity was derived using two methods in 15 normotensive subjects: 1) the conventional two-point (foot-to-foot) method (AO-PWV2) and 2) a one-point method (AO-PWV1) in which the pressure velocity-loop (PV-loop) was analyzed based on the water hammer formula using simultaneous measurements of flow velocity (Vm) and pressure (Pm) at the same site in the proximal aorta using a multisensor catheter. AO-PWV1 was calculated from the slope of the linear regression line between Pm and Vm where wave reflection (Pb) was at a minimum in early systole in the PV-loop using the water hammer formula, PWV1 = (Pm/Vm)/ρ, where ρ is the blood density. AO-PWV2 was calculated using the conventional two-point measurement method as the distance/traveling time of the wave between 2 sites for measuring P in the proximal aorta. Beat-to-beat alterations of AO-PWV1 in relationship to aortic pressure and linearity of the initial part of the PV-loop during a Valsalva maneuver were also assessed in one subject. The initial part of the loop became steeper in association with the beat-to-beat increase in diastolic pressure in phase 4 during the Valsalva maneuver. The linearity of the initial part of the PV-loop was maintained consistently during the maneuver. Flow velocity vs. pressure in the proximal aorta was highly linear during early systole, with Pearson's coefficients ranging from 0.9954 to 0.9998. The average values of AO-PWV1 and AO-PWV2 were 6.3 ± 1.2 and 6.7 ± 1.3 m/s, respectively. The regression line of AO-PWV1 on AO-PWV2 was y = 0.95x + 0.68 (r = 0.93, p <0.001). This study concluded that the water-hammer formula (one-point method) provides a reliable and conventional estimate of beat-to-beat aortic regional pulse wave velocity consistently regardless of the changes in physiological states in human clinically. (English Translation of J Jpn Coll Angiol 2011; 51: 215-221).
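The one-point computation itself is a single regression slope. A minimal sketch under the stated water-hammer relation, with SI units, a nominal blood density, and synthetic data constructed to be self-consistent:

```python
import numpy as np

def pwv_one_point(p_early, v_early, rho=1060.0):
    """Water-hammer (one-point) PWV from early-systolic samples:
    the slope of the linear P-V regression divided by blood density.
    p_early in Pa, v_early in m/s, rho in kg/m^3 -> PWV in m/s."""
    slope, _ = np.polyfit(v_early, p_early, 1)   # dP/dV, Pa per (m/s)
    return slope / rho

v = np.linspace(0.0, 0.8, 20)           # hypothetical early-systolic velocities
p = 80 * 133.322 + 1060.0 * 6.5 * v     # pressures consistent with PWV = 6.5 m/s
assert np.isclose(pwv_one_point(p, v), 6.5)
```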
Self Diagnostic Adhesive for Bonded Joints in Aircraft Structures
2016-10-04
validated under fatigue/dynamic loading conditions. 3) Both SEM (spectral element modeling) and FEM (finite element modeling) simulations of the sensors were carried out, including a parametric study of sensor performance via finite element simulation. The frequency range of interest is around 800 kHz; a conventional linear finite element method (FEM) requires a very fine spatial discretization at such frequencies.
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young
2014-03-01
This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique, by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of the x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6%, and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
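A small sketch of the agreement statistics named above (mean difference and Bland-Altman 95% limits of agreement) on paired measurements; the values below are synthetic, not the study's data.

```python
# Bland-Altman agreement between paired linear measurements from plaster
# and PUT models. The six paired values are hypothetical examples.
import numpy as np

plaster = np.array([35.1, 42.3, 28.7, 51.0, 33.4, 40.2])  # mm, hypothetical
put     = np.array([35.3, 42.6, 28.9, 51.2, 33.3, 40.5])  # mm, hypothetical

diff = put - plaster
bias = diff.mean()                       # mean difference (accuracy)
loa = 1.96 * diff.std(ddof=1)            # half-width of limits of agreement
print(f"mean difference: {bias:.2f} mm")
print(f"95% limits of agreement: [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```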
Angle-domain inverse scattering migration/inversion in isotropic media
NASA Astrophysics Data System (ADS)
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wavefield by the Born approximation and recovered by applying an inverse GRT operator to the scattered wavefield data. A typical GRT-style true-amplitude inversion procedure contains an amplitude-compensation process after the weighted migration, dividing by an illumination-associated matrix whose elements are integrals over scattering angles. It is intuitively appealing to perform the generalized linear inversion and the inversion of the GRT together in this single step for direct inversion. However, the operation is imprecise when the illumination at the image point is limited, which readily makes the matrix inaccurate and unstable. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally avoids the external integral term related to illumination that appears in the conventional case. We solve the linearized integral equation for the combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then solve the resulting over-determined problem for each parameter in the combination by a standard optimization procedure. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
NASA Astrophysics Data System (ADS)
Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove
2018-01-01
A new implementation of vibrational coupled-cluster (VCC) theory is presented, in which all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension, but still formally includes the full parameter space of the VCC[n] model in question, resulting in the same vibrational energies as the conventional method. In a previous publication, we described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format, including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allow high-order VCC calculations to be performed on systems with up to 66 vibrational modes (anthracene), which are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.
Tomassetti, Mauro; Merola, Giovanni; Martini, Elisabetta; Campanella, Luigi; Sanzò, Gabriella; Favero, Gabriele; Mazzei, Franco
2017-01-01
In this research, we developed a direct-flow surface plasmon resonance (SPR) immunosensor for ampicillin to perform direct, simple, and fast measurements of this important antibiotic. To better evaluate its performance, it was compared with a conventional amperometric immunosensor working in a competitive format, with the aim of identifying the real experimental advantages and disadvantages of the two methods. The results showed that certain analytical features of the new SPR immunodevice, such as the lower limit of detection (LOD) and the width of the linear range, are poorer than those of the conventional amperometric immunosensor, which adversely affects its application to samples such as natural waters. On the other hand, the SPR immunosensor was more selective to ampicillin, and measurements were attained more easily and quickly than with the conventional competitive immunosensor. PMID:28394296
Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell Method
NASA Technical Reports Server (NTRS)
Kaufman, H.
1975-01-01
Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh order process show the proposed procedures to be very effective.
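A hedged sketch of the approach described (not Kaufman's actual model or performance index): derivative-free Powell search over a static output-feedback gain, scoring each candidate by an infinite-horizon quadratic cost obtained from a Lyapunov equation. The system matrices and weights below are invented examples.

```python
# Tune a static output-feedback gain K for dx/dt = A x + B u, y = C x,
# u = -K y, by Powell's derivative-free method. Cost = trace(P), where
# Acl' P + P Acl = -Q for the closed loop. All matrices are hypothetical.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])          # only one measurable output
Q = np.eye(2)                       # state-weighting matrix

def cost(k):
    Acl = A - B @ (k.reshape(1, 1) @ C)
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return 1e6                  # penalize unstable closed loops
    P = solve_continuous_lyapunov(Acl.T, -Q)
    return np.trace(P)

res = minimize(cost, x0=np.array([1.0]), method="Powell")
print("optimal gain:", res.x, "cost:", res.fun)
```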
SU-E-T-525: Ionization Chamber Perturbation in Flattening Filter Free Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, D; Voigts-Rhetz, P von; Zink, K
2015-06-15
Purpose: Changing the characteristics of a photon beam by mechanically removing the flattening filter may impact the dose response of ionization chambers. Thus, perturbation factors of cylindrical ionization chambers in conventional and flattening filter free photon beams were calculated by Monte Carlo simulations. Methods: The EGSnrc/BEAMnrc code system was used for all Monte Carlo calculations. BEAMnrc models of nine different linear accelerators with and without flattening filter were used to create realistic photon sources. Monte Carlo based calculations to determine the fluence perturbations due to the presence of the chamber components, the different material of the sensitive volume (air instead of water), as well as the volume effect were performed with the user code egs-chamber. Results: Stem, central electrode, wall, density, and volume perturbation factors for linear accelerators with and without flattening filter were calculated as a function of the beam quality specifier TPR20/10. No bias between the perturbation factors as a function of TPR20/10 for flattening filter free beams and conventional linear accelerators could be observed for the perturbations caused by the components of the ionization chamber and the sensitive volume. Conclusion: The results indicate that the well-known small bias between the beam quality correction factor as a function of TPR20/10 for flattening filter free and conventional linear accelerators is not caused by the geometry of the detector but rather by the material of the sensitive volume. This suggests that the bias for flattening filter free photon fields is caused only by the different material of the sensitive volume (air instead of water).
Williams, Calum; Rughoobur, Girish; Flewitt, Andrew J; Wilkinson, Timothy D
2016-11-10
A single-step fabrication method is presented for ultra-thin, linearly variable optical bandpass filters (LVBFs) based on a metal-insulator-metal arrangement using modified evaporation deposition techniques. This alternate process methodology offers reduced complexity and cost in comparison to conventional techniques for fabricating LVBFs. We are able to achieve linear variation of insulator thickness across a sample, by adjusting the geometrical parameters of a typical physical vapor deposition process. We demonstrate LVBFs with spectral selectivity from 400 to 850 nm based on Ag (25 nm) and MgF2 (75-250 nm). Maximum spectral transmittance is measured at ∼70% with a Q-factor of ∼20.
Design of a dual linear polarization antenna using split ring resonators at X-band
NASA Astrophysics Data System (ADS)
Ahmed, Sadiq; Chandra, Madhukar
2017-11-01
Dual linear polarization microstrip antenna configurations are very suitable for high-performance satellites, wireless communication, and radar applications. This paper presents a new method to improve the co-cross polarization discrimination (XPD) of dual linear polarized microstrip antennas at 10 GHz. For this, three configurations of a dual linear polarization antenna utilizing metamaterial unit cells are shown. In the first layout, the microstrip patch antenna is loaded with two pairs of spiral ring resonators; in the second, a split ring resonator is placed between two microstrip feed lines; and in the third, complementary split ring resonators are etched in the ground plane. This work has two primary goals. The first relates to the addition of metamaterial unit cells to the antenna structure, which compensates for an asymmetric current distribution on the microstrip antenna and thus yields a symmetrical current distribution. This compensation leads to an important enhancement in XPD compared to a conventional dual linear polarized microstrip patch antenna: the simulations reveal XPD improvements of 7.9, 8.8, and 4 dB in the E and H planes for the three designs, respectively. The second objective is to present the characteristics and performance of the spiral ring resonator (S-RR), split ring resonator (SRR), and complementary split ring resonator (CSRR) metamaterial unit cells. The simulations are evaluated using the commercial full-wave simulator, Ansoft High-Frequency Structure Simulator (HFSS).
Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya
2017-06-01
Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its calculation costs become very expensive when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also not requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
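For reference, a minimal example of the iterative GLM fit that the paper contrasts against: a Poisson GLM relating stimulus features to spike counts, fitted by iteratively reweighted least squares. The features, weights, and counts are synthetic.

```python
# Poisson GLM fit via statsmodels (iterative IRLS optimization), the kind
# of estimator the noniterative method above is benchmarked against.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # stimulus features
w_true = np.array([0.8, -0.5, 0.3])
y = rng.poisson(np.exp(X @ w_true))      # synthetic spike counts

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
fit = model.fit()                        # iterative optimization
print(fit.params)                        # intercept + recovered weights
```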
DOE Office of Scientific and Technical Information (OSTI.GOV)
García, C.
Mixtures of AISI M2 high speed steel and vanadium carbide (3, 6 or 10 wt.%) were prepared by powder metallurgy and sintered by concentrated solar energy (CSE). Two different powerful solar furnaces were employed to sinter the parts and the results were compared with those obtained by conventional powder metallurgy using a tubular electric furnace. CSE allowed significant reduction of processing times and high heating rates. The wear resistance of compacts was studied by using rotating pin-on-disk and linearly reciprocating ball-on-flat methods. Wear mechanisms were investigated by means of scanning electron microscopy (SEM) observations and chemical inspections of the microstructures of the samples. Better wear properties than those obtained by conventional powder metallurgy were achieved. The refinement of the microstructure and the formation of carbonitrides were the reasons for this. - Highlights: •Powder metallurgy of mixtures of M2 high speed steel and VC are studied. •Some sintering is done by concentrated solar energy. •Rotating pin-on-disk and linearly reciprocating ball-on-flat methods are used. •The tribological properties and wear mechanisms, under dry sliding, are studied.
NASA Astrophysics Data System (ADS)
Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza
2016-06-01
This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.
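A hedged sketch of metaheuristic PID tuning in the spirit described above, using plain simulated annealing (SciPy's dual_annealing) in place of the authors' fuzzy-SA-IWD hybrid, on a hypothetical first-order plant; the plant model, bounds, and cost are assumptions for illustration.

```python
# Tune PID gains by simulated annealing to minimize the integral squared
# error of a unit step response. Plant and parameters are invented examples.
import numpy as np
from scipy.optimize import dual_annealing

dt, T, tau = 0.01, 5.0, 0.5             # time step, horizon, plant time constant

def step_cost(gains):
    kp, ki, kd = gains
    x = integ = e_prev = 0.0
    cost = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - x                     # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        x += dt * (-x + u) / tau        # first-order plant dx/dt = (u - x)/tau
        cost += e * e * dt              # integral squared error
    return cost

res = dual_annealing(step_cost, bounds=[(0, 20), (0, 20), (0, 1)],
                     seed=1, maxiter=200)
print("tuned PID gains:", res.x)
```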
Gynecomastia: glandular-liposculpture through a single transaxillary one hole incision.
Lee, Yung Ki; Lee, Jun Hee; Kang, Sang Yoon
2018-04-01
Gynecomastia is characterized by the benign proliferation of breast tissue in men. Herein, we present a new method for the treatment of gynecomastia, using ultrasound-assisted liposuction with both conventional and reverse-cutting edge tip cannulas in combination with a pull-through lipectomy technique with pituitary forceps through a single transaxillary incision. Thirty patients were treated with this technique at the author's institution from January 2010 to January 2015. Ten patients were treated with conventional surgical excision of the glandular/fibrous breast tissue combined with liposuction through a periareolar incision before January 2010. Medical records, clinical photographs, and linear analog scale scores were analyzed to compare the surgical results and complications. The patients rated their own cosmetic outcomes on a linear analog scale; the overall mean score indicated a good or high level of satisfaction. There were no incidences of skin necrosis, hematoma, infection, or scar contracture; however, one case each of seroma and nipple inversion did occur. Operative time was reduced overall using the new technique, since it is relatively simple and straightforward. According to the evaluation by four independent researchers, the patients treated with this new technique showed statistically significant improvements in scar and nipple-areolar complex (NAC) deformity compared to those treated using the conventional method. Glandular liposculpture through a single transaxillary incision is an efficient and safe technique that can provide aesthetically satisfying and consistent results.
CNN based approach for activity recognition using a wrist-worn accelerometer.
Panwar, Madhuri; Dyuthi, S Ram; Chandra Prakash, K; Biswas, Dwaipayan; Acharyya, Amit; Maharatna, Koushik; Gautam, Arvind; Naik, Ganesh R
2017-07-01
In recent years, significant advancements have taken place in human activity recognition using various machine learning approaches. However, feature engineering has dominated conventional methods, involving the difficult process of optimal feature selection. This problem has been mitigated by a novel methodology based on a deep learning framework, which automatically extracts useful features and reduces the computational cost. As a proof of concept, we have attempted to design a generalized model for the recognition of three fundamental movements of the human forearm performed in daily life, with data collected from four different subjects using a single wrist-worn accelerometer sensor. The proposed model is validated under different pre-processing and noisy data conditions, evaluated using three possible methods. The results show that our proposed methodology achieves an average recognition rate of 99.8%, as opposed to conventional methods based on K-means clustering, linear discriminant analysis and support vector machines.
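An illustrative sketch (not the authors' network): a small 1D CNN that maps fixed-length windows of 3-axis accelerometer data to three movement classes. The window length, layer sizes, and random training data are all assumptions.

```python
# Minimal 1D CNN for accelerometer windows; features are learned by the
# convolutions rather than hand-engineered. Data are random placeholders.
import numpy as np
from tensorflow import keras

win, n_classes = 128, 3
X = np.random.randn(512, win, 3).astype("float32")   # windows x samples x 3 axes
y = np.random.randint(0, n_classes, 512)

model = keras.Sequential([
    keras.Input(shape=(win, 3)),
    keras.layers.Conv1D(32, 9, activation="relu"),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(64, 5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)  # random data, demo only
```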
Identification of the isomers using principal component analysis (PCA) method
NASA Astrophysics Data System (ADS)
Kepceoǧlu, Abdullah; Gündoǧdu, Yasemin; Ledingham, Kenneth William David; Kilic, Hamdi Sukur
2016-03-01
In this work, we have carried out a detailed statistical analysis of experimental mass spectra of xylene isomers. Principal Component Analysis (PCA) was used to identify isomers that cannot be distinguished using conventional statistical methods for the interpretation of their mass spectra. Experiments were carried out using a linear TOF-MS coupled to a femtosecond laser system as an energy source for the ionisation processes. The collected data were analysed and interpreted using PCA as a multivariate analysis of the spectra. This demonstrates the strength of the method for distinguishing isomers that cannot be identified using conventional mass analysis of the dissociative ionisation processes in these molecules. PCA results as a function of the laser pulse energy and the background pressure in the spectrometer are presented in this work.
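A minimal sketch of the analysis step: PCA on a matrix of mass spectra (rows = laser shots, columns = m/z bins), projecting onto the first two principal components where isomer clusters can separate. The spectra below are synthetic stand-ins, not TOF-MS data.

```python
# PCA on synthetic "mass spectra" from two nominally similar isomers.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.random(300)                     # shared fragmentation pattern
spectra = np.vstack([
    base + 0.05 * rng.normal(size=(40, 300)),                           # isomer A
    base + 0.02 * np.roll(base, 5) + 0.05 * rng.normal(size=(40, 300)), # isomer B
])

scores = PCA(n_components=2).fit_transform(spectra)
print(scores[:3])   # plot PC1 vs PC2 to look for isomer clusters
```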
A motion-constraint logic for moving-base simulators based on variable filter parameters
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.
1974-01-01
A motion-constraint logic for moving-base simulators has been developed that is a modification to the linear second-order filters generally employed in conventional constraints. In the modified constraint logic, the filter parameters are not constant but vary with the instantaneous motion-base position to increase the constraint as the system approaches the positional limits. With the modified constraint logic, accelerations larger than originally expected are limited while conventional linear filters would result in automatic shutdown of the motion base. In addition, the modified washout logic has frequency-response characteristics that are an improvement over conventional linear filters with braking for low-frequency pilot inputs. During simulated landing approaches of an externally blown flap short take-off and landing (STOL) transport using decoupled longitudinal controls, the pilots were unable to detect much difference between the modified constraint logic and the logic based on linear filters with braking.
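A hedged sketch of the idea (not NASA's constraint logic): a second-order washout filter whose natural frequency is raised as the motion base approaches its positional limit, stiffening the constraint. The gains, limit, and commanded acceleration pulse are illustrative assumptions.

```python
# Second-order washout with position-dependent natural frequency:
# x'' + 2*zeta*wn(x)*x' + wn(x)^2 * x = a_cmd, base feels a_base = x''.
import numpy as np

dt, x_max, zeta, w0 = 0.01, 1.0, 0.7, 1.0
t = np.arange(0.0, 10.0, dt)
a_cmd = np.where(t < 2.0, 2.0, 0.0)        # commanded (aircraft) acceleration

x = v = 0.0
pos = np.zeros_like(t)
for i, a in enumerate(a_cmd):
    wn = w0 * (1.0 + 4.0 * (abs(x) / x_max) ** 2)  # stiffen near the limit
    acc = a - 2.0 * zeta * wn * v - wn**2 * x      # washed-out base acceleration
    v += acc * dt
    x += v * dt
    pos[i] = x

print(f"max base excursion: {np.abs(pos).max():.2f} m (limit {x_max} m)")
```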
A Fourier Method for Sidelobe Reduction in Equally Spaced Linear Arrays
NASA Astrophysics Data System (ADS)
Safaai-Jazi, Ahmad; Stutzman, Warren L.
2018-04-01
Uniformly excited, equally spaced linear arrays have a sidelobe level larger than -13.3 dB, which is too high for many applications. This limitation can be remedied by nonuniform excitation of array elements. We present an efficient method for sidelobe reduction in equally spaced linear arrays with low penalty on the directivity. The method involves the following steps: construction of a periodic function containing only the sidelobes of the uniformly excited array, calculation of the Fourier series of this periodic function, subtracting the series from the array factor of the original uniformly excited array after it is truncated, and finally mitigating the truncation effects which yields significant increase in sidelobe level reduction. A sidelobe reduction factor is incorporated into element currents that makes much larger sidelobe reductions possible and also allows varying the sidelobe level incrementally. It is shown that such newly formed arrays can provide sidelobe levels that are at least 22.7 dB below those of the uniformly excited arrays with the same size and number of elements. Analytical expressions for element currents are presented. Radiation characteristics of the sidelobe-reduced arrays introduced here are examined, and numerical results for directivity, sidelobe level, and half-power beam width are presented for example cases. Performance improvements over popular conventional array synthesis methods, such as Chebyshev and linear current tapered arrays, are obtained with the new method.
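A quick numerical check of the premise above, under the common half-wavelength-spacing assumption: the peak sidelobe of a uniformly excited, equally spaced linear array approaches -13.3 dB.

```python
# Normalized array factor of an N-element uniform array and its peak
# sidelobe level (expected near -13.3 dB for moderate-to-large N).
import numpy as np

N = 16
psi = np.linspace(1e-6, np.pi, 20000)        # inter-element phase (d = lambda/2)
af = np.abs(np.sin(N * psi / 2) / (N * np.sin(psi / 2)))

main_null = 2 * np.pi / N                    # first null bounds the main lobe
sll = 20 * np.log10(af[psi > main_null].max())
print(f"peak sidelobe level for N={N}: {sll:.2f} dB")
```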
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
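A hedged sketch of the general idea (not the authors' estimator): multiple imputation for a predictor left-censored at a detection limit in a logistic regression, pooled with Rubin's rules. The uniform-below-DL imputation model is a deliberately simplistic assumption for illustration.

```python
# Multiple imputation for a detection-limited predictor, pooled by Rubin's
# rules. Data, detection limit, and imputation model are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, DL, m = 1000, 0.5, 20
x = rng.exponential(1.0, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x - 1.0))))
censored = x < DL                        # only "x < DL" would be observed

params, variances = [], []
for _ in range(m):
    x_imp = x.copy()
    x_imp[censored] = rng.uniform(0, DL, censored.sum())  # crude imputation
    fit = sm.Logit(y, sm.add_constant(x_imp)).fit(disp=0)
    params.append(fit.params)
    variances.append(fit.bse ** 2)

params, variances = np.array(params), np.array(variances)
qbar = params.mean(axis=0)                   # pooled estimate
W = variances.mean(axis=0)                   # within-imputation variance
B = params.var(axis=0, ddof=1)               # between-imputation variance
T = W + (1 + 1 / m) * B                      # Rubin's total variance
print("pooled slope: %.3f (SE %.3f)" % (qbar[1], np.sqrt(T[1])))
```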
Comparison of variational real-space representations of the kinetic energy operator
NASA Astrophysics Data System (ADS)
Skylaris, Chris-Kriton; Diéguez, Oswaldo; Haynes, Peter D.; Payne, Mike C.
2002-08-01
We present a comparison of real-space methods based on regular grids for electronic structure calculations that are designed to have basis set variational properties, using as a reference the conventional method of finite differences (a real-space method that is not variational) and the reciprocal-space plane-wave method which is fully variational. We find that a definition of the finite-difference method [P. Maragakis, J. Soler, and E. Kaxiras, Phys. Rev. B 64, 193101 (2001)] satisfies one of the two properties of variational behavior at the cost of larger errors than the conventional finite-difference method. On the other hand, a technique which represents functions in a number of plane waves which is independent of system size closely follows the plane-wave method and therefore also the criteria for variational behavior. Its application is only limited by the requirement of having functions strictly localized in regions of real space, but this is a characteristic of an increasing number of modern real-space methods, as they are designed to have a computational cost that scales linearly with system size.
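A small numerical illustration of the contrast drawn above: a low-order finite-difference Laplacian versus the fully variational plane-wave (FFT) representation of the second derivative on a periodic grid; the test function is an arbitrary choice.

```python
# 3-point finite-difference vs. plane-wave (spectral) second derivative.
import numpy as np

n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
f = np.sin(3 * x)                       # test function; exact f'' = -9 sin(3x)

h = L / n
fd = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / h**2      # finite differences

k = np.fft.fftfreq(n, d=h) * 2 * np.pi
pw = np.fft.ifft(-(k**2) * np.fft.fft(f)).real            # plane-wave derivative

exact = -9 * np.sin(3 * x)
print("finite-difference max error:", np.abs(fd - exact).max())
print("plane-wave max error:       ", np.abs(pw - exact).max())
```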
Wu, Zheng; Zeng, Li-bo; Wu, Qiong-shui
2016-02-01
Conventional cervical cancer screening methods mainly include the TBS (The Bethesda System) classification method and cellular DNA quantitative analysis. However, achieving both methods on a single cell slide by multiple staining, in which the cytoplasm is stained with the Papanicolaou reagent and the nucleus with the Feulgen reagent, had not previously been studied. The difficulty of this multiple staining method is that the absorbance of non-DNA material may interfere with the absorbance of DNA. We therefore set up a multi-spectral imaging system and established an absorbance unmixing model using multiple linear regression, based on the linear superposition of absorbances, to strip out the absorbance of DNA for quantitative DNA analysis, thereby combining the two conventional screening methods. A series of experiments showed no statistically significant difference between the DNA absorbance calculated by the absorbance unmixing model and the measured DNA absorbance at a test level of 1%. In practical application, the 99% confidence interval of the DNA index of tetraploid cells screened by this method did not intersect the DNA-index interval used to identify cancer cells, verifying the accuracy and feasibility of quantitative DNA analysis with the multiple staining method. This analytical method therefore has broad application prospects and considerable market potential in the early diagnosis of cervical cancer and other cancers.
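A hedged numeric sketch of the unmixing step, assuming absorbances add linearly as stated above: regress the measured multi-wavelength absorbance on known single-stain spectra and recover the DNA (Feulgen) contribution. The spectra below are synthetic stand-ins, not calibration data.

```python
# Absorbance unmixing by multiple linear regression over wavelengths.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = 16
s_feulgen = rng.random(wavelengths)        # pure Feulgen (DNA) spectrum
s_papanicolaou = rng.random(wavelengths)   # pure cytoplasm-stain spectrum

c_true = np.array([0.7, 0.4])              # true stain contributions
mixed = (c_true[0] * s_feulgen + c_true[1] * s_papanicolaou
         + 0.01 * rng.normal(size=wavelengths))

S = np.column_stack([s_feulgen, s_papanicolaou])
c_hat, *_ = np.linalg.lstsq(S, mixed, rcond=None)   # multiple linear regression
dna_absorbance = c_hat[0] * s_feulgen               # stripped-out DNA part
print("estimated contributions:", c_hat)
```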
Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.
Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M
2015-03-06
We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.
Compaction managed mirror bend achromat
Douglas, David [Yorktown, VA
2005-10-18
A method for controlling the momentum compaction in a beam of charged particles. The method includes a compaction-managed mirror bend achromat (CMMBA) that provides a beamline design that retains the large momentum acceptance of a conventional mirror bend achromat. The CMMBA also provides the ability to tailor the system momentum compaction spectrum as desired for specific applications. The CMMBA enables magnetostatic management of the longitudinal phase space in Energy Recovery Linacs (ERLs) thereby alleviating the need for harmonic linearization of the RF waveform.
An improved silver staining procedure for schizodeme analysis in polyacrylamide gradient gels.
Gonçalves, A M; Nehme, N S; Morel, C M
1990-01-01
A simple protocol is described for the silver staining of polyacrylamide gradient gels used for the separation of restriction fragments of kinetoplast DNA [schizodeme analysis of trypanosomatids (Morel et al., 1980)]. The method overcomes the problems of non-uniform staining and strong background color which are frequently encountered when conventional protocols for silver staining of linear gels are applied to gradient gels. The method described has proven to be of general applicability for DNA, RNA and protein separations in gradient gels.
Channel estimation based on quantized MMP for FDD massive MIMO downlink
NASA Astrophysics Data System (ADS)
Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie
2016-10-01
In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, which is based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
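A hedged sketch of the greedy recovery family involved: plain orthogonal matching pursuit (OMP) standing in for the paper's quantized MMP variant, recovering a sparse channel from few pilot measurements. The pilot matrix, sparsity level, and channel are random stand-ins.

```python
# OMP recovery of a k-sparse channel h from y = A h + noise.
import numpy as np

rng = np.random.default_rng(0)
n_pilot, n_ant, k = 40, 128, 4
A = rng.normal(size=(n_pilot, n_ant)) / np.sqrt(n_pilot)  # pilot matrix
h = np.zeros(n_ant)
h[rng.choice(n_ant, k, replace=False)] = rng.normal(size=k)
y = A @ h + 0.01 * rng.normal(size=n_pilot)

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))     # most correlated atom
    x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ x_s                          # update residual

h_hat = np.zeros(n_ant)
h_hat[support] = x_s
print("recovered support:", sorted(support),
      "error:", np.linalg.norm(h_hat - h))
```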
Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Almeida, Túlia de Souza Botelho; Lourenço, Felipe Rebello
2016-05-01
Microbiological assays are widely used to estimate the relative potencies of antibiotics in order to guarantee the efficacy, safety, and quality of drug products. Despite the advantages of turbidimetric bioassays compared to other methods, they have limitations concerning the linearity and range of the dose-response curve. Here, we propose partial least squares (PLS) regression to overcome these limitations and to improve the prediction of the relative potencies of antibiotics. Kinetic-reading microplate turbidimetric bioassays for apramycin and vancomycin were performed using Escherichia coli (ATCC 8739) and Bacillus subtilis (ATCC 6633), respectively. Microbial growth was measured as absorbance up to 180 and 300 min for the apramycin and vancomycin turbidimetric bioassays, respectively. Conventional dose-response curves (absorbance or area under the microbial growth curve vs. log of antibiotic concentration) showed significant regression, but also significant deviation from linearity, so they could not be used for relative potency estimation. PLS regression allowed us to construct a predictive model for estimating the relative potencies of apramycin and vancomycin without over-fitting, and it improved the linear range of the turbidimetric bioassay. In addition, PLS regression provided predictions of relative potencies equivalent to those obtained from official agar diffusion methods. Therefore, we conclude that PLS regression may be used to estimate the relative potencies of antibiotics with significant advantages over conventional dose-response curve determination. Copyright © 2016 Elsevier B.V. All rights reserved.
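A minimal sketch of the regression step: PLS maps whole growth curves (absorbance vs. time) to log-dose rather than relying on a single-time-point dose-response fit. The curves below are synthetic stand-ins for kinetic-reading turbidimetric data.

```python
# PLS regression from synthetic microbial growth curves to log-dose.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
log_dose = np.repeat(np.linspace(-1, 1, 8), 6)          # 8 levels x 6 replicates
t = np.linspace(0, 5, 60)
# growth delayed nonlinearly by antibiotic dose (illustrative model)
X = np.array([1 / (1 + np.exp(-(t - 2.5 - d))) for d in log_dose])
X += 0.02 * rng.normal(size=X.shape)

pls = PLSRegression(n_components=3).fit(X, log_dose)
print("predicted log-doses:", pls.predict(X[:3]).ravel())
```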
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
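A sketch of the recursive least-squares core named above: updating linear-predictor coefficients sample by sample. The AR(2) "speech-like" test signal, predictor order, and forgetting factor are illustrative choices, not the report's configuration.

```python
# Recursive least-squares (RLS) adaptation of a linear predictor.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 2000, 2, 0.99            # samples, predictor order, forgetting
s = np.zeros(n)
for t in range(2, n):                # synthetic AR(2) signal
    s[t] = 1.5 * s[t-1] - 0.7 * s[t-2] + 0.1 * rng.normal()

w = np.zeros(p)                      # predictor coefficients
P = 1e3 * np.eye(p)                  # inverse correlation matrix
for t in range(p, n):
    phi = s[t-p:t][::-1]             # most recent past samples
    k = P @ phi / (lam + phi @ P @ phi)     # gain vector
    e = s[t] - w @ phi                      # prediction error (excitation)
    w = w + k * e
    P = (P - np.outer(k, phi @ P)) / lam

print("estimated AR coefficients:", w)      # approaches [1.5, -0.7]
```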
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
NASA Astrophysics Data System (ADS)
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point-scattering targets based on tensor modeling. In real-world scenarios, scatterers are usually distributed in a block-sparse pattern, a feature that has scarcely been utilized in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. Multi-linear block sparsity is introduced into higher-order singular value decomposition (SVD) with a dictionary-construction procedure. Simulation experiments on ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbance that often degrade the imaging quality of conventional methods. The computational resource requirements are further investigated in this paper. The complexity analysis shows that the present method is superior in resource consumption to the classic matching pursuit method. Imaging results on measured data also demonstrate the effectiveness of the algorithm developed in this paper.
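A hedged sketch of the building block named above, higher-order SVD of a 3-way data tensor, not the paper's full block-sparse SAR algorithm; the tensor is random and the helper functions are generic.

```python
# HOSVD of a 3-way tensor: factor matrices from mode unfoldings plus a core.
import numpy as np

def unfold(T, n):                    # mode-n matricization
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def mode_dot(T, M, n):               # multiply tensor by matrix along mode n
    rest = [s for i, s in enumerate(T.shape) if i != n]
    out = (M @ unfold(T, n)).reshape([M.shape[0]] + rest)
    return np.moveaxis(out, 0, n)

rng = np.random.default_rng(0)
T = rng.normal(size=(8, 9, 10))

U = [np.linalg.svd(unfold(T, n))[0] for n in range(3)]   # factor matrices
core = T
for n in range(3):
    core = mode_dot(core, U[n].T, n)                     # core tensor

rec = core
for n in range(3):
    rec = mode_dot(rec, U[n], n)                         # exact reconstruction
print("reconstruction error:", np.linalg.norm(rec - T))
```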
Mateo, Tony; Chang, Alexandre; Mofid, Yassine; Pisella, Pierre-Jean; Ossant, Frederic
2014-11-01
In ophthalmic ultrasonography the crystalline lens is known to be the main source of phase aberration, causing a significant decrease in resolution and distortion effects on axial B-scans. This paper proposes a computationally efficient method to correct the phase aberration arising from the crystalline lens, including refraction effects, using a bending ray-tracing approach based on Fermat's principle. This method is used as a basis for eye-adapted beamforming (BF), with appropriate focusing delays for a 128-element 20-MHz linear array in both emission and reception. Implementation was achieved on an in-house developed experimental ultrasound scanning device, the ECODERM. The proposed BF was tested in vitro by imaging a wire phantom through an eye phantom consisting of a synthetic gelatin lens anatomically set up in an appropriate liquid (turpentine) to approach the in vivo velocity ratio. Both extremes of the accommodation shapes of the human crystalline lens were investigated. The performance of the developed BF was evaluated relative to that in a homogeneous medium and compared to a conventional delay-and-sum (DAS) BF and a second adapted BF, simplified to ignore lens refraction. The expected benefits of our method with the transducer array are examined through an analysis quantifying both image quality and spatial fidelity, as well as the detrimental effects of a crystalline lens in conventional reconstruction. Compared to conventional array imaging, the results indicated a two-fold improvement in lateral resolution, greater sensitivity, and a considerable reduction of spatial distortions, sufficient to envisage reliable biometry directly in B-mode, especially phakometry.
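For orientation, a minimal sketch of the conventional delay-and-sum baseline mentioned above (no aberration or refraction correction): receive data from a linear array are summed at geometric time-of-flight delays for each image point. The array geometry, sound speed, and echo data are synthetic assumptions.

```python
# Delay-and-sum beamforming of synthetic echoes from a single point target.
import numpy as np

c, fs, n_el, pitch = 1540.0, 100e6, 128, 0.1e-3   # m/s, Hz, elements, m
elem_x = (np.arange(n_el) - n_el / 2) * pitch
target = np.array([0.0, 15e-3])                   # scatterer at 15 mm depth

t = np.arange(4096) / fs
rf = np.zeros((n_el, t.size))
for i, xe in enumerate(elem_x):                   # synthesize per-element echoes
    d = np.hypot(target[0] - xe, target[1])
    tof = (target[1] + d) / c                     # plane-wave TX + element RX
    rf[i] = np.sinc((t - tof) * 5e6)              # band-limited echo

def das(px, pz):                                  # beamform one image point
    acc = 0.0
    for i, xe in enumerate(elem_x):
        tof = (pz + np.hypot(px - xe, pz)) / c
        acc += np.interp(tof, t, rf[i])
    return acc

print("on target :", das(0.0, 15e-3))
print("off target:", das(0.0, 14e-3))
```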
Exhaustive Search for Sparse Variable Selection in Linear Regression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato
2018-04-01
We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this is caused by data shortage.
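A minimal sketch of the ES-K idea: exhaustively score every K-sparse support by least squares and keep the best. The combinatorial cost is exactly what motivates the approximate AES-K variant; the data here are synthetic.

```python
# Exhaustive K-sparse variable selection by least squares.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p, K = 50, 10, 2
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=n)

best = None
for idx in combinations(range(p), K):           # all K-sparse supports
    coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    rss = np.sum((y - X[:, idx] @ coef) ** 2)
    if best is None or rss < best[0]:
        best = (rss, idx, coef)

print("selected variables:", best[1], "coefficients:", best[2])
```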
Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C
2005-10-01
In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples, and the non-linearity of the problem. It is difficult to obtain satisfying results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm implemented with Gaussian-kernel SVMs, as a better alternative to the common practice of selecting the apparently best parameters, by using a genetic algorithm to search for an optimal pair of parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
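A hedged sketch of SVM-RFE gene selection. Note one substitution: scikit-learn's RFE requires linear coefficients, so a linear-kernel SVM stands in here for the paper's Gaussian-kernel variant, and the expression data are random placeholders.

```python
# SVM-RFE feature (gene) selection with a linear-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                  # 60 samples x 500 genes
y = (X[:, 10] + X[:, 42] > 0).astype(int)       # two informative genes

rfe = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10, step=0.1)
rfe.fit(X, y)
print("selected gene indices:", np.where(rfe.support_)[0])
```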
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
Broadband linearisation of high-efficiency power amplifiers
NASA Technical Reports Server (NTRS)
Kenington, Peter B.; Parsons, Kieran J.; Bennett, David W.
1993-01-01
A feedforward-based amplifier linearization technique is presented which is capable of yielding significant improvements in both linearity and power efficiency over conventional amplifier classes (e.g. class-A or class-AB). Theoretical and practical results are presented showing that class-C stages may be used for both the main and error amplifiers, yielding practical efficiencies well in excess of 30 percent, with theoretical efficiencies of much greater than 40 percent being possible. The levels of linearity which may be achieved meet the requirements of most satellite systems; if still greater linearity is required, the technique may be used in addition to conventional pre-distortion techniques.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
Tuning graphitic oxide for initiator- and metal-free aerobic epoxidation of linear alkenes
NASA Astrophysics Data System (ADS)
Pattisson, Samuel; Nowicka, Ewa; Gupta, Upendra N.; Shaw, Greg; Jenkins, Robert L.; Morgan, David J.; Knight, David W.; Hutchings, Graham J.
2016-09-01
Graphitic oxide has potential as a carbocatalyst for a wide range of reactions. Interest in this material has risen enormously due to it being a precursor to graphene via the chemical oxidation of graphite. Despite some studies suggesting that the chosen method of graphite oxidation can influence the physical properties of the graphitic oxide, the preparation method and extent of oxidation remain unresolved for catalytic applications. Here we show that tuning the graphitic oxide surface can be achieved by varying the amount and type of oxidant. The resulting materials differ in level of oxidation, surface oxygen content and functionality. Most importantly, we show that these graphitic oxide materials are active as unique carbocatalysts for low-temperature aerobic epoxidation of linear alkenes in the absence of initiator or metal. An optimum level of oxidation is necessary and materials produced via conventional permanganate-based methods are far from optimal.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...
2016-05-03
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
NASA Astrophysics Data System (ADS)
Talaghat, M. R.; Jokar, S. M.; Modarres, E.
2017-10-01
The depletion of fossil fuel resources and environmental issues have led researchers to seek alternative fuels, including biodiesel. One of the most widely used methods for the production of biodiesel on a commercial scale is transesterification. In this work, biodiesel production by transesterification was modeled. Sodium hydroxide was considered as the catalyst to produce biodiesel from canola oil and methanol in a continuous tubular ceramic membrane reactor. As the biodiesel production reaction from triglycerides is an equilibrium reaction, the reaction rate constants depend on temperature and are related linearly to the catalyst concentration. Using the mass balance for a membrane tubular reactor and considering the variation of raw material and product concentrations with time, the set of governing equations was solved by numerical methods. The results clearly show the superiority of the membrane reactor over conventional tubular reactors. The influences of the molar ratio of alcohol to oil, the weight percentage of the catalyst, and the residence time on the performance of the biodiesel production reactor were then investigated.
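A hedged sketch of the kinetic core: a single lumped reversible transesterification step, TG + 3 MeOH <-> 3 FAME + G, integrated in time, with rate constants scaled linearly by catalyst loading as the text states. The constants and initial ratio are hypothetical numbers, not the paper's fitted values.

```python
# Lumped reversible transesterification kinetics integrated with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

wcat = 1.0                        # catalyst wt%, scales both rate constants
k1, k2 = 0.05 * wcat, 0.01 * wcat # hypothetical forward/backward constants

def rhs(t, c):
    tg, me, fame, g = c
    r = k1 * tg * me - k2 * fame * g       # reversible lumped rate
    return [-r, -3 * r, 3 * r, r]

c0 = [1.0, 6.0, 0.0, 0.0]         # 6:1 methanol-to-oil molar ratio
sol = solve_ivp(rhs, (0, 120), c0, dense_output=True)
print("TG conversion at t=120:", 1 - sol.y[0, -1] / c0[0])
```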
Linear and nonlinear dynamic analysis of redundant load path bearingless rotor systems
NASA Technical Reports Server (NTRS)
Murthy, V. R.
1985-01-01
The bearingless rotorcraft offers reduced weight, less complexity, and superior flying qualities. Almost all current industrial structural dynamics programs for conventional rotors, which consist of single-load-path rotor blades, employ the transfer matrix method to determine natural vibration characteristics, because this method is ideally suited for one-dimensional chain-like structures. This method is extended here to multiple-load-path rotor blades without resorting to an equivalent single-load-path approximation. Unlike for conventional blades, it is necessary to introduce the axial degree of freedom into the solution process to account for the differential axial displacements in the different load paths. With the present extension, current rotor dynamics programs can be modified with relative ease to account for multiple load paths without resorting to equivalent single-load-path modeling. The results obtained by the transfer matrix method are validated by comparison with finite element solutions. A differential stiffness matrix due to blade rotation is derived to facilitate the finite element solutions.
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The solutions represent the amounts of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP are expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as the associated possibilities. Therefore, decision makers can make a tradeoff between model stability and plausibility based on the solutions obtained through GFLP, and then identify desired policies for SO2-emission control under uncertainty.
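A hedged illustration of the underlying LP machinery only (not the SIA itself): solving an emission-control LP at the lower and upper bounds of a fuzzy constraint's alpha-cut, the way interval solutions bracket the GFLP outcome. Costs, removal efficiencies, and the SO2 targets are hypothetical.

```python
# Minimize control cost subject to an SO2-reduction target, evaluated at
# two bounds of a fuzzy (interval) target. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

cost = np.array([120.0, 80.0, 150.0])    # $/tonne for three control measures
removal = np.array([0.9, 0.6, 0.95])     # removal efficiency per measure

for target in (800.0, 1000.0):           # lower/upper cut of the fuzzy target
    res = linprog(cost, A_ub=[-removal], b_ub=[-target],
                  bounds=[(0, 1500)] * 3, method="highs")
    print(f"target {target}: cost {res.fun:.0f}, allocation {res.x.round(1)}")
```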
Yan, Liang; Peng, Juanjuan; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-01
This paper proposes a novel permanent magnet linear motor possessing two movers and one stator. The two movers are isolated and can interact with the stator poles to generate independent forces and motions. Compared with a conventional multiple-motor driving system, this helps to increase system compactness and thus improve power density and working efficiency. The magnetic field distribution is obtained using the equivalent magnetic circuit method. Following that, the formulation of the force output considering armature reaction is carried out. The inductances are then analyzed with the finite element method to investigate the relationship between the two movers. It is found that the mutual inductances are nearly zero, and thus the interaction between the two movers is negligible. A research prototype of the linear motor and an apparatus for measuring thrust force have been developed. Both numerical computation and experimental measurement are conducted to validate the analytical model of thrust force. Comparison shows that the analytical model matches the numerical and experimental results well.
Trelease, R B; Nieder, G L; Dørup, J; Hansen, M S
2000-04-15
Continuing evolution of computer-based multimedia technologies has produced QuickTime, a multiplatform digital media standard that is supported by stand-alone commercial programs and World Wide Web browsers. While its core functions might be most commonly employed for production and delivery of conventional video programs (e.g., lecture videos), additional QuickTime VR "virtual reality" features can be used to produce photorealistic, interactive "non-linear movies" of anatomical structures ranging in size from microscopic through gross anatomic. But what is really included in QuickTime VR and how can it be easily used to produce novel and innovative visualizations for education and research? This tutorial introduces the QuickTime multimedia environment, its QuickTime VR extensions, basic linear and non-linear digital video technologies, image acquisition, and other specialized QuickTime VR production methods. Four separate practical applications are presented for light and electron microscopy, dissectable preserved specimens, and explorable functional anatomy in magnetic resonance cinegrams.
Proposal of Evolutionary Simplex Method for Global Optimization Problem
NASA Astrophysics Data System (ADS)
Shimizu, Yoshiaki
To make agile decisions in a rational manner, the role of optimization engineering has drawn increasing attention under diversified customer demand. With this point of view, in this paper we have proposed a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method shows promise for globally solving the various complicated problems appearing in real-world applications. It evolves from the conventional Nelder and Mead Simplex method by borrowing ideas from recent meta-heuristic methods such as particle swarm optimization (PSO). After describing an algorithm for handling linear inequality constraints effectively, we have validated the effectiveness of the proposed method through comparison with other methods on several benchmark problems.
NASA Astrophysics Data System (ADS)
Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya
2017-05-01
Two-dimensional code marks are often used for production management. In particular, in the production lines of liquid-crystal-display panels and similar devices, data on fabrication processes such as the production number and process conditions are written on each substrate or device in detail and used for quality management. For this reason, lithography systems specialized for code mark printing have been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, the development of a low-cost exposure system using light-emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly desired. In past research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Based on this progress, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. At most 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120 °C for 7 min. Fiber sizes were homogeneous within 496±4 μm. In addition, the average light leak was improved from 34.4 to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. The square matrix arrays necessary for printing code marks will be obtained by stacking the newly fabricated linear arrays.
Radiation shielding design of a new tomotherapy facility.
Zacarias, Albert; Balog, John; Mills, Michael
2006-10-01
It is expected that intensity modulated radiation therapy (IMRT) and image guided radiation therapy (IGRT) will replace a large portion of radiation therapy treatments currently performed with conventional MLC-based 3D conformal techniques. IGRT may become the standard of treatment in the future for prostate and head and neck cancer. Many established facilities may convert existing vaults to perform this treatment method using new or upgraded equipment. In the future, more facilities undoubtedly will be considering de novo designs for their treatment vaults. A reevaluation of the design principles used in conventional vault design is of benefit to those considering this approach with a new tomotherapy facility. This is made more imperative as the design of the TomoTherapy system is unique in several aspects and does not fit well into the formalism of NCRP 49 for a conventional linear accelerator.
NASA Astrophysics Data System (ADS)
Sun, Xiao-Yan; Chu, Dong-Kai; Dong, Xin-Ran; Zhou, Chu; Li, Hai-Tao; Luo-Zhi; Hu, You-Wang; Zhou, Jian-Ying; Cong-Wang; Duan, Ji-An
2016-03-01
A highly sensitive refractive index (RI) sensor based on a Mach-Zehnder interferometer (MZI) in a conventional single-mode optical fiber is proposed, fabricated by a femtosecond laser transversal-scanning inscription method and chemical etching. A rectangular cavity structure is formed in part of the fiber core and cladding interface. The MZI sensor shows excellent refractive index sensitivity and linearity, exhibiting an extremely high RI sensitivity of -17197 nm/RIU (refractive index unit) with a linearity of 0.9996 within the refractive index range of 1.3371-1.3407. The experimental results are consistent with theoretical analysis.
Modern digital flight control system design for VTOL aircraft
NASA Technical Reports Server (NTRS)
Broussard, J. R.; Berry, P. W.; Stengel, R. F.
1979-01-01
Methods for and results from the design and evaluation of a digital flight control system (DFCS) for a CH-47B helicopter are presented. The DFCS employed proportional-integral control logic to provide rapid, precise response to automatic or manual guidance commands while following conventional or spiral-descent approach paths. It contained altitude- and velocity-command modes, and it adapted to varying flight conditions through gain scheduling. Extensive use was made of linear systems analysis techniques. The DFCS was designed using linear-optimal estimation and control theory, and the effects of gain scheduling were assessed by examination of closed-loop eigenvalues and time responses.
Applications of Support Vector Machines In Chemo And Bioinformatics
NASA Astrophysics Data System (ADS)
Jayaraman, V. K.; Sundararajan, V.
2010-10-01
Conventional linear and nonlinear tools for classification, regression, and data-driven modeling are being replaced on a rapid scale by newer techniques and tools based on artificial intelligence and machine learning. While linear techniques are not applicable to inherently nonlinear problems, the newer methods serve as attractive alternatives for solving real-life problems. Support Vector Machine (SVM) classifiers are a set of universal feed-forward-network-based classification algorithms formulated from statistical learning theory and the structural risk minimization principle. SVM regression closely follows the classification methodology. In this work, recent applications of SVM in chemo- and bioinformatics are described with suitable illustrative examples.
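To make the SVM workflow concrete, here is a minimal, hypothetical scikit-learn sketch (synthetic descriptor data, not drawn from the applications above); the RBF kernel illustrates how SVMs handle problems that defeat linear classifiers:

```python
# Minimal SVM classification sketch (not the authors' code): synthetic
# "compound descriptor" data with a nonlinear decision rule.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                       # 200 samples x 10 descriptors
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int) # nonlinear labeling rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# The RBF kernel handles the nonlinearity that a linear classifier cannot.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```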
Analysis of periodically excited non-linear systems by a parametric continuation technique
NASA Astrophysics Data System (ADS)
Padmanabhan, C.; Singh, R.
1995-07-01
The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems have been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained quite efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations and, with some modifications, obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem for both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. Also, this work has filled in some of the solution regimes for an impact pair that were missed previously in the literature. Finally, one main limitation associated with the proposed procedure is discussed.
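As an illustration of the shooting idea underlying such continuation schemes, the following hedged sketch (assumed parameters, not the authors' implementation) locates a periodic orbit of a forced Duffing oscillator by solving the fixed-point condition x(T) = x(0) with a Newton-type root finder:

```python
# Shooting-method sketch: a periodic orbit is a fixed point of the
# period-T flow map, found here with scipy's fsolve.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

c, k, b, F, w = 0.2, 1.0, 1.0, 0.3, 1.2   # hypothetical Duffing parameters
T = 2 * np.pi / w                          # forcing period

def rhs(t, y):
    x, v = y
    return [v, -c * v - k * x - b * x**3 + F * np.cos(w * t)]

def residual(z):
    # Integrate one forcing period; periodicity means the state returns to z.
    sol = solve_ivp(rhs, (0.0, T), z, rtol=1e-9, atol=1e-9)
    return sol.y[:, -1] - z

z0 = fsolve(residual, [0.1, 0.0])          # state on the periodic orbit
print("periodic initial condition (x0, v0):", z0)
print("closure error:", residual(z0))
```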
NASA Astrophysics Data System (ADS)
Zhao, Yan; Li, DongXu; Liu, ZhiZhen; Liu, Liang
2013-03-01
The dexterous upper limb serves as the most important tool for astronauts to implement in-orbit experiments and operations. This study developed a simulated weightlessness experiment and invented new measuring equipment to quantitatively evaluate the muscle ability of the upper limb. Isometric maximum voluntary contractions (MVCs) and surface electromyography (sEMG) signals of right-handed pushing at three positions were measured for eleven subjects. In order to enhance the comprehensiveness and accuracy of muscle force assessment, the study focused on signal processing techniques. We applied a combination method consisting of time-, frequency-, and bi-frequency-domain analyses. Time- and frequency-domain analyses estimated the root mean square (RMS) and median frequency (MDF) of the sEMG signals, respectively. Higher-order spectra (HOS) in the bi-frequency domain evaluated the maximum bispectrum amplitude (Bmax), Gaussianity level (Sg) and linearity level (Sl) of the sEMG signals. Results showed that Bmax, Sl, and RMS values all increased as force increased, while MDF and Sg values both declined as force increased. The research demonstrated that the combination method is superior to the conventional time- and frequency-domain analyses: the method not only described the sEMG signal amplitude and power spectrum, but also more deeply characterized phase-coupling information and the non-Gaussianity and non-linearity levels of the sEMG, compared to the two conventional analyses. The findings from the study can aid ergonomists in estimating astronaut muscle performance, so as to optimize in-orbit operation efficacy and minimize musculoskeletal injuries.
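For readers unfamiliar with the two conventional estimates, a minimal sketch of the RMS and median frequency (MDF) computations on a synthetic epoch follows (assumed sampling rate and stand-in signal; the bispectral HOS analysis is not reproduced):

```python
# RMS (time domain) and MDF (frequency domain) of one sEMG epoch.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                      # assumed sampling rate, Hz
rng = np.random.default_rng(1)
emg = rng.normal(size=4096) * np.hanning(4096)   # stand-in for an sEMG epoch

rms = np.sqrt(np.mean(emg ** 2))                 # time-domain amplitude estimate

f, psd = welch(emg, fs=fs, nperseg=1024)         # power spectral density
cum = np.cumsum(psd)
mdf = f[np.searchsorted(cum, 0.5 * cum[-1])]     # frequency splitting power in half

print(f"RMS = {rms:.4f}, MDF = {mdf:.1f} Hz")
```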
Osono, Eiichi; Kobayashi, Eiko; Inoue, Yuki; Honda, Kazumi; Kumagai, Takuya; Negishi, Hideki; Okamatsu, Kentaro; Ichimura, Kyoko; Kamano, Chisako; Suzuki, Fumi; Norose, Yoshihiko; Takahashi, Megumi; Takaku, Shun; Fujioka, Noriaki; Hayama, Naoaki; Takizawa, Hideaki
2014-01-01
A chemiluminescence system, Milliflex Quantum (MFQ), which detects microcolonies, has been used in the pharmaceutical field. In this study, we investigated aquatic bacteria in hemodialysis solutions sampled from bioburden areas in 4 dialysis facilities. Using MFQ, microcolonies could be detected after a short incubation period. The colony count detected with MFQ after a 48-hour incubation was 92% ± 39% of that after the conventionally used 7-14-day incubation period; in addition, the results showed a linear correlation. Moreover, MFQ-based analysis allowed the visualization of damaged cells and of high-density growth due to excessive amounts of bacteria. These results suggest that MFQ has adequate sensitivity to detect bacteria in dialysis solutions and is useful for validating the conditions of conventional culture methods.
PSO-based PID Speed Control of Traveling Wave Ultrasonic Motor under Temperature Disturbance
NASA Astrophysics Data System (ADS)
Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Azmi, Nur Iffah Mohamed; Romlay, Fadhlur Rahman Mohd
2018-03-01
Traveling wave ultrasonic motors (TWUSMs) have time-varying dynamic characteristics. Temperature rise in TWUSMs remains a problem, particularly in sustaining optimum speed performance. In this study, a PID controller is used to control the speed of a TWUSM under temperature disturbance. Prior to developing the controller, a linear approximation model relating the speed to the temperature is developed based on experimental data. Two tuning methods are used to determine the PID parameters: conventional Ziegler-Nichols (ZN) and particle swarm optimization (PSO). A comparison of the speed control performance between PSO-PID and ZN-PID is presented. Modelling, simulation and experimental work are carried out using the Fukoku-Shinsei USR60 as the chosen TWUSM. The results of the analyses and experimental work reveal that PID tuning using PSO-based optimization has an advantage over the conventional Ziegler-Nichols method.
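A hedged sketch of PSO-based PID tuning follows; the first-order plant, ITAE cost function, and PSO settings are stand-in assumptions, not the paper's identified motor model:

```python
# PSO searching PID gains that minimize an ITAE cost on a toy plant.
import numpy as np

def step_cost(gains, tau=0.05, K=1.0, dt=1e-3, T=1.0):
    """ITAE cost of a PID loop around the stand-in plant dy/dt = (-y + K*u)/tau."""
    kp, ki, kd = gains
    y = integ = e_prev = 0.0
    cost = 0.0
    for n in range(int(T / dt)):
        e = 1.0 - y                      # unit step speed reference
        integ += e * dt
        deriv = (e - e_prev) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + K * u) / tau     # explicit Euler plant update
        e_prev = e
        cost += (n * dt) * abs(e) * dt   # time-weighted absolute error
        if not np.isfinite(y):
            return np.inf                # penalize unstable gain sets
    return cost

rng = np.random.default_rng(0)
n_particles, n_iter = 20, 40
x = rng.uniform(0.0, 5.0, (n_particles, 3))   # positions = (kp, ki, kd)
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([step_cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 3))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 50.0)
    f = np.array([step_cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("PSO-tuned (kp, ki, kd):", np.round(gbest, 3))
```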
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
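The following sketch illustrates, under toy assumptions (exponential signals instead of Bloch-simulated fingerprints), the two ingredients named above: a low-rank temporal subspace from an SVD of the dictionary, and dictionary-based pattern matching:

```python
# Toy MRF sketch: SVD-derived temporal subspace + dictionary matching.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.01, 3.0, 200)                  # time points of the sequence
T1s = np.linspace(0.3, 2.0, 500)                 # candidate T1 values (s)
D = np.exp(-t[None, :] / T1s[:, None])           # dictionary: one signal per T1

# Low-rank temporal subspace: keep the leading right singular vectors.
U = np.linalg.svd(D, full_matrices=False)[2][:5] # rank-5 subspace (5 x 200)

# Simulated measurement: noisy fingerprint with an unknown T1.
truth = 1.234
s = np.exp(-t / truth) + 0.05 * rng.normal(size=t.size)

# Project dictionary and data into the subspace, then match by maximum
# normalized inner product (the conventional MRF matching step).
Dc = D @ U.T
sc = U @ s
scores = (Dc @ sc) / (np.linalg.norm(Dc, axis=1) * np.linalg.norm(sc))
print("estimated T1:", T1s[np.argmax(scores)], "true T1:", truth)
```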
MUSTA fluxes for systems of conservation laws
NASA Astrophysics Data System (ADS)
Toro, E. F.; Titarev, V. A.
2006-08-01
This paper is about numerical fluxes for hyperbolic systems and we first present a numerical flux, called GFORCE, that is a weighted average of the Lax-Friedrichs and Lax-Wendroff fluxes. For the linear advection equation with constant coefficient, the new flux reduces identically to that of the Godunov first-order upwind method. Then we incorporate GFORCE in the framework of the MUSTA approach [E.F. Toro, Multi-Stage Predictor-Corrector Fluxes for Hyperbolic Equations. Technical Report NI03037-NPA, Isaac Newton Institute for Mathematical Sciences, University of Cambridge, UK, 17th June, 2003], resulting in a version that we call GMUSTA. For non-linear systems this gives results that are comparable to those of the Godunov method in conjunction with the exact Riemann solver or complete approximate Riemann solvers, noting however that in our approach, the solution of the Riemann problem in the conventional sense is avoided. Both the GFORCE and GMUSTA fluxes are extended to multi-dimensional non-linear systems in a straightforward unsplit manner, resulting in linearly stable schemes that have the same stability regions as the straightforward multi-dimensional extension of Godunov's method. The methods are applicable to general meshes. The schemes of this paper share with the family of centred methods the common properties of being simple and applicable to a large class of hyperbolic systems, but the schemes of this paper are distinctly more accurate. Finally, we proceed to the practical implementation of our numerical fluxes in the framework of high-order finite volume WENO methods for multi-dimensional non-linear hyperbolic systems. Numerical results are presented for the Euler equations and for the equations of magnetohydrodynamics.
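A minimal numerical check of the stated reduction for linear advection follows (illustrative values; the weight omega = 1/(1 + CFL) follows the GFORCE construction of a weighted Lax-Wendroff/Lax-Friedrichs average):

```python
# GFORCE flux for f(u) = a*u with a > 0: reduces to the Godunov upwind flux.
import numpy as np

a, dx, dt = 1.0, 0.1, 0.05
cfl = a * dt / dx
omega = 1.0 / (1.0 + cfl)

def gforce_flux(uL, uR):
    f = lambda u: a * u
    f_lf = 0.5 * (f(uL) + f(uR)) - 0.5 * (dx / dt) * (uR - uL)  # Lax-Friedrichs
    u_lw = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (f(uR) - f(uL))  # Lax-Wendroff state
    return omega * f(u_lw) + (1.0 - omega) * f_lf

uL, uR = 2.0, -1.0
print(gforce_flux(uL, uR), "vs upwind flux", a * uL)  # identical for linear advection
```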
Sato, Takaji; Saito, Yoshihiro; Chikuma, Masahiko; Saito, Yutaka; Nagai, Sonoko
2008-03-01
A highly sensitive flow injection fluorometry for the determination of albumin was developed and applied to the determination of albumin in human bronchoalveolar lavage fluids (BALF). This method is based on binding of chromazurol S (CAS) to albumin. The calibration curve was linear in the range of 5-200 microg/ml of albumin. A highly linear correlation (r=0.986) was observed between the albumin level in BALF samples (n=25) determined by the proposed method and by a conventional fluorometric method using CAS (CAS manual method). The IgG interference was lower in the CAS flow injection method than in the CAS manual method. The albumin level in BALF collected from healthy volunteers (n=10) was 58.5+/-13.1 microg/ml. The albumin levels in BALF samples obtained from patients with sarcoidosis and idiopathic pulmonary fibrosis were increased. This finding shows that the determination of albumin levels in BALF samples is useful for investigating lung diseases and that CAS flow injection method is promising in the determination of trace albumin in BALF samples, because it is sensitive and precise.
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
Objective: In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model were assessed based on its ability to predict the epidemiological trend of infectious diseases in China. Methods: The linear model, the conventional GM(1,1) model, and the GM(1,1) model with the self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Results: Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China. Conclusion: The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control. PMID:25546054
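For reference, a sketch of the conventional GM(1,1) forecasting step mentioned above (the self-memory coupling itself is not reproduced; the incidence series is hypothetical):

```python
# Conventional GM(1,1): accumulate, fit the grey equation by least squares,
# forecast via the whitened solution, then difference back.
import numpy as np

x0 = np.array([1.2, 1.4, 1.5, 1.7, 1.9, 2.2])  # hypothetical incidence series
x1 = np.cumsum(x0)                              # 1-AGO accumulation
z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values

B = np.column_stack([-z1, np.ones_like(z1)])
Y = x0[1:]
a, b = np.linalg.lstsq(B, Y, rcond=None)[0]     # developing coefficient, grey input

def forecast(k):
    # Whitened-equation solution of the accumulated series, then 1-IAGO.
    x1_hat = (x0[0] - b / a) * np.exp(-a * np.arange(k + 1)) + b / a
    return np.diff(x1_hat, prepend=0.0)         # first entry equals x0[0]

print("one-step-ahead forecast:", forecast(len(x0))[len(x0)])
```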
Erich, Sarah; Schill, Sandra; Annweiler, Eva; Waiblinger, Hans-Ulrich; Kuballa, Thomas; Lachenmeier, Dirk W; Monakhova, Yulia B
2015-12-01
The increased sales of organically produced food create a strong need for analytical methods that can authenticate organic and conventional products. Combined chemometric analysis of (1)H NMR and (13)C NMR spectroscopy data, stable-isotope data (IRMS), and α-linolenic acid content (gas chromatography) was used to differentiate organic and conventional milk. In total, 85 raw, pasteurized and ultra-heat-treated (UHT) milk samples (52 organic and 33 conventional) were collected between August 2013 and May 2014. The carbon isotope ratios of milk protein and milk fat as well as the α-linolenic acid content of these samples were determined. Additionally, the milk fat was analyzed by (1)H and (13)C NMR spectroscopy. The chemometric analysis of the combined data (IRMS, GC, NMR) resulted in more precise authentication of German raw and retail milk, with a considerably increased classification rate of 95% compared to 81% for NMR and 90% for IRMS using linear discriminant analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multi-loop control of UPS inverter with a plug-in odd-harmonic repetitive controller.
Razi, Reza; Karbasforooshan, Mohammad-Sadegh; Monfared, Mohammad
2017-03-01
This paper proposes an improved multi-loop control scheme for the single-phase uninterruptible power supply (UPS) inverter by using a plug-in odd-harmonic repetitive controller to regulate the output voltage. In the suggested control method, the output voltage and the filter capacitor current are used as the outer and inner loop feedback signals, respectively, and the instantaneous value of the reference voltage is feedforwarded to the output of the controller. Instead of conventional linear (proportional-integral/-resonant) and conventional repetitive controllers, a plug-in odd-harmonic repetitive controller is employed in the outer loop to regulate the output voltage, which occupies less memory space and offers faster tracking performance than the conventional one. Also, a simple proportional controller is used in the inner loop for active damping of possible resonances and improving the transient performance. The feedforward of the converter reference voltage enhances the robust performance of the system and simplifies the system modelling and the controller design. A step-by-step design procedure is presented for the proposed controller, which guarantees stability of the system under worst-case scenarios. Simulation and experimental results validate the excellent steady-state and transient performance of the proposed control scheme and provide an exact comparison of the proposed method with the conventional multi-loop control method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
Reconstruction of fluorophore concentration variation in dynamic fluorescence molecular tomography.
Zhang, Xuanxuan; Liu, Fei; Zuo, Simin; Shi, Junwei; Zhang, Guanglei; Bai, Jing; Luo, Jianwen
2015-01-01
Dynamic fluorescence molecular tomography (DFMT) is a potential approach for drug delivery, tumor detection, diagnosis, and staging. The purpose of DFMT is to quantify the changes of fluorescent agents in the body, which offer important information about the underlying physiological processes. However, the conventional method requires that the fluorophore concentrations to be reconstructed remain stationary during the data collection period. Thus, it cannot offer dynamic information on fluorophore concentration variation within the data collection period. In this paper, a method is proposed to reconstruct the fluorophore concentration variation instead of the fluorophore concentration itself, through a linear approximation. The fluorophore concentration variation rate is introduced by the linear approximation as a new unknown term to be reconstructed, and is used to obtain the time courses of fluorophore concentration. Simulation and phantom studies are performed to validate the proposed method. The results show that the method is able to reconstruct the fluorophore concentration variation rates and the time courses of fluorophore concentration with relative errors of less than 0.0218.
Mobasheri, Nasrin; Karimi, Mehrdad; Hamedi, Javad
2018-06-05
New methods to determine the antimicrobial susceptibility of bacterial pathogens, especially the minimum inhibitory concentration (MIC) of antibiotics, have great importance in the pharmaceutical industry and in treatment procedures. In the present study, the MIC of several antibiotics was determined against some pathogenic bacteria using the macrodilution test. In order to accelerate and increase the efficiency of the culture-based method for determining antimicrobial susceptibility, the possible relationship between the changes in some physico-chemical parameters, including conductivity, electrical potential difference (EPD), and pH, and the total number of test strains was investigated during the logarithmic phase of bacterial growth in the presence of antibiotics. The correlation between changes in these physico-chemical parameters and the growth of bacteria was statistically evaluated using linear and non-linear regression models. Finally, the MIC values calculated with the newly proposed method were compared with the MIC derived from the macrodilution test. The results revealed a significant association between the changes in EPD and pH values and the growth of the tested bacteria during the exponential phase of bacterial growth. It is assumed that the proliferation of bacteria causes the significant changes in EPD values. The MIC values from the conventional and new methods were consistent with each other. In conclusion, a cost- and time-effective antimicrobial susceptibility test can be developed based on monitoring the changes in EPD values. The newly proposed strategy can also be used in high-throughput screening of biocompounds for antimicrobial activity in a relatively shorter time (6-8 h) compared with conventional methods.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhao, H.; Hao, H.; Wang, C.
2018-05-01
Accurate remote sensing water extraction is one of the primary tasks in watershed ecological environment studies. The Yanhe water system has the typical characteristics of a small water volume and narrow river channels, which make conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds in the Landsat/TM images of the Yanhe watershed were evaluated. Multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood were applied before the NDWI water extraction to segment built-up lands and small linear rivers. With the proposed method, a water map was extracted from Landsat/TM images of 2010 in China. An accuracy assessment was conducted to compare the proposed method with conventional water indexes such as NDWI, the Modified NDWI (MNDWI), the Enhanced Water Index (EWI), and the Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method generates better water extraction accuracy in the Yanhe watershed and can effectively suppress confusing background objects compared to the conventional water indexes. The MST-NDWI method integrates NDWI and multi-spectral threshold segmentation algorithms, yielding richer valuable information and remarkable results in accurate water extraction in the Yanhe watershed.
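A hedged sketch of the MST-NDWI idea follows; the band-threshold logic and all threshold values are illustrative placeholders, not the maximum-likelihood thresholds trained in the study:

```python
# NDWI from green (TM2) and NIR (TM4), gated by multi-spectral thresholds
# on TM1/TM4/TM5 intended to suppress confusing backgrounds (e.g. built-up land).
import numpy as np

def mst_ndwi(tm1, tm2, tm4, tm5,
             t1=0.12, t4=0.15, t5=0.10, t_ndwi=0.0):   # placeholder thresholds
    ndwi = (tm2 - tm4) / (tm2 + tm4 + 1e-12)
    candidate = (tm1 > t1) & (tm4 < t4) & (tm5 < t5)   # pre-mask non-water pixels
    return candidate & (ndwi > t_ndwi)                  # boolean water map

# Toy 2x2 scene, band reflectances in [0, 1].
tm1 = np.array([[0.15, 0.05], [0.20, 0.18]])
tm2 = np.array([[0.20, 0.25], [0.22, 0.21]])
tm4 = np.array([[0.05, 0.30], [0.06, 0.07]])
tm5 = np.array([[0.04, 0.25], [0.05, 0.06]])
print(mst_ndwi(tm1, tm2, tm4, tm5))
```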
Vibration measurement with nonlinear converter in the presence of noise
NASA Astrophysics Data System (ADS)
Mozuras, Almantas
2017-10-01
Conventional vibration measurement methods use the linear properties of physical converters. These methods are strongly influenced by nonlinear distortions, because ideal linear converters are not available. Practically, any converter can be considered linear when the output signal is very small. However, the influence of noise increases significantly and the signal-to-noise ratio decreases at lower signals. When the output signal increases, the nonlinear distortions also grow. If wide-spectrum vibration is measured, conventional methods face harmonic distortion as well as intermodulation effects. The purpose of this research is to develop a method for measuring wide-spectrum vibration using a converter described by a nonlinear function of type f(x), where x = x(t) denotes the dependence of the coordinate x on time t due to the vibration. The parameter x(t) describing the vibration is expressed as a Fourier series. The spectral components of the converter output f(x(t)) are determined by using the Fourier transform. The obtained system of nonlinear equations is solved using the least squares technique, which permits finding x(t) in the presence of noise. This method allows one to carry out absolute or relative vibration measurements. High resistance to noise is typical for the absolute vibration measurement, but it is necessary to know the Taylor expansion coefficients of the function f(x). If the Taylor expansion is not known, the relative measurement of vibration parameters is also possible, but with lower resistance to noise. This method allows one to eliminate the influence of nonlinear distortions on the measurement results, and consequently to eliminate harmonic distortion and intermodulation effects. The use of the nonlinear properties of the converter for measurement gives some advantages related to an increased frequency range of the output signal (consequently increasing the number of equations), which allows one to decrease the noise influence on the measurement results: the greater the nonlinearity, the lower the noise. This method enables the use of converters that are normally not suitable due to their high nonlinearity.
Ritto, F G; Schmitt, A R M; Pimentel, T; Canellas, J V; Medeiros, P J
2018-02-01
The aim of this study was to determine whether virtual surgical planning (VSP) is an accurate method for positioning the maxilla when compared to conventional articulator model surgery (CMS), through the superimposition of computed tomography (CT) images. This retrospective study included the records of 30 adult patients who underwent bimaxillary orthognathic surgery. Two groups were created according to the treatment planning performed: CMS and VSP. The treatment planning protocol was the same for all patients. Pre- and postoperative CT images were superimposed and the linear distances between upper jaw reference points were measured. Measurements were then compared to the treatment plan, and the difference in accuracy between CMS and VSP was determined using the t-test for independent samples. The success criterion adopted was a mean linear difference of <2 mm. The mean linear difference between planned and obtained movements was 1.27±1.05 mm for CMS and 1.20±1.08 mm for VSP. With CMS, 80% of overlapping reference points had a difference of <2 mm, while for VSP this value was 83.6%. There was no statistically significant difference between the two techniques regarding accuracy (P>0.05). Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Brian K. Via; Todd F. Shupe; Leslie H. Groom; Michael Stine; Chi-Leung So
2003-01-01
In manufacturing, monitoring the mechanical properties of wood with near infrared spectroscopy (NIR) is an attractive alternative to more conventional methods. However, no attention has been given to see if models differ between juvenile and mature wood. Additionally, it would be convenient if multiple linear regression (MLR) could perform well in the place of more...
Rotman Lens Sidewall Design and Optimization with Hybrid Hardware/Software Based Programming
2015-01-09
conventional MoM and stored in memory. The components of Zfar are computed as needed through a fast matrix-vector multiplication (MVM), which... V vector. Iterative methods, e.g. BiCGSTAB, are employed for solving the linear equation. The matrix-vector multiplications (MVMs), which dominate... most of the computation in the solving phase, consist of calculating near and far MVMs. The far MVM comprises aggregation, translation, and
Dynamic Range Enhancement of High-Speed Electrical Signal Data via Non-Linear Compression
NASA Technical Reports Server (NTRS)
Laun, Matthew C. (Inventor)
2016-01-01
Systems and methods for high-speed compression of dynamic electrical signal waveforms to extend the measuring capabilities of conventional measuring devices such as oscilloscopes and high-speed data acquisition systems are discussed. Transfer function components and algorithmic transfer functions can be used to accurately measure signals that are within the frequency bandwidth but beyond the voltage range and voltage resolution capabilities of the measuring device.
The Application of Stress-Relaxation Test to Life Assessment of T911/T22 Weld Metal
NASA Astrophysics Data System (ADS)
Cao, Tieshan; Zhao, Jie; Cheng, Congqian; Li, Huifang
2016-03-01
A dissimilar weld metal was obtained through submerged arc welding of T911 steel to T22 steel, and its creep properties were explored by stress-relaxation tests assisted by some conventional creep tests. The creep rate information from the stress-relaxation test was compared to the minimum and average creep rates of the conventional creep test. A log-log plot showed that the creep rate of the stress-relaxation test was in a linear relationship with the minimum creep rate of the conventional creep test. Thus, the creep rate of the stress-relaxation test could be used in the Monkman-Grant relation to calculate the rupture life. The creep rate of the stress-relaxation test was similar to the average creep rate, and thereby the rupture life could be evaluated by a "time to rupture strain" method. The results also showed that the rupture life assessed by the Monkman-Grant relation was more accurate than that obtained through the "time to rupture strain" method.
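A worked example of the Monkman-Grant life estimate follows (all constants are illustrative placeholders, not the paper's fitted values):

```python
# Monkman-Grant relation: t_r * eps_dot**m = C, solved for rupture life t_r.
m, C = 1.0, 0.05          # assumed Monkman-Grant exponent and constant
eps_dot = 2.0e-7          # creep rate from the stress-relaxation test, 1/h
t_r = C / eps_dot ** m    # estimated rupture life, hours
print(f"estimated rupture life: {t_r:.3g} h")   # 2.5e+05 h for these numbers
```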
Score-moment combined linear discrimination analysis (SMC-LDA) as an improved discrimination method.
Han, Jintae; Chung, Hoeil; Han, Sung-Hwan; Yoon, Moon-Young
2007-01-01
A new discrimination method called score-moment combined linear discrimination analysis (SMC-LDA) has been developed, and its performance has been evaluated using three practical spectroscopic datasets. The key concept of SMC-LDA is to use not only the score from principal component analysis (PCA), but also the moment of the spectrum, as inputs for LDA to improve discrimination. Along with the conventional score, the moment is used in spectroscopic fields as an effective alternative for spectral feature representation. Three different approaches were considered. Initially, the score generated from PCA was projected onto a two-dimensional feature space by maximizing Fisher's criterion function (conventional PCA-LDA). Next, the same procedure was performed using only the moment. Finally, both score and moment were utilized simultaneously for LDA. To evaluate discrimination performance, three different spectroscopic datasets were employed: (1) infrared (IR) spectra of normal and malignant stomach tissue, (2) near-infrared (NIR) spectra of diesel and light gas oil (LGO) and (3) Raman spectra of Chinese and Korean ginseng. For each case, the best discrimination results were achieved when both score and moment were used for LDA (SMC-LDA). Since the spectral representation character of the moment is different from that of the score, including both score and moment for LDA provided more diversified and descriptive information.
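A minimal sketch of the SMC-LDA input construction follows (synthetic spectra stand in for the IR/NIR/Raman datasets; the moment definitions below are generic assumptions):

```python
# PCA scores augmented with simple spectral moments, fed jointly to LDA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n, p = 120, 300                                   # 120 spectra, 300 wavelengths
X = rng.normal(size=(n, p)).cumsum(axis=1)        # smooth synthetic "spectra"
y = rng.integers(0, 2, size=n)                    # two classes
X[y == 1] += np.linspace(0, 1, p)                 # class-dependent baseline tilt

scores = PCA(n_components=5).fit_transform(X)     # conventional PCA scores

axis = np.arange(p)
w = np.abs(X) / np.abs(X).sum(axis=1, keepdims=True)
m1 = (w * axis).sum(axis=1)                       # spectral centroid (1st moment)
m2 = (w * (axis - m1[:, None]) ** 2).sum(axis=1)  # spread (2nd central moment)

features = np.column_stack([scores, m1, m2])      # score + moment inputs
lda = LinearDiscriminantAnalysis().fit(features, y)
print("training accuracy:", lda.score(features, y))
```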
A Novel Two-Step Hierarchical Quantitative Structure-Activity ...
Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by means of high-throughput screening. Methods and results: A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by the Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: one group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models…
NASA Technical Reports Server (NTRS)
Kim, H.; Crawford, F. W.
1977-01-01
It is pointed out that the conventional iterative analysis of nonlinear plasma wave phenomena, which involves a direct use of Maxwell's equations and the equations describing the particle dynamics, leads to formidable theoretical and algebraic complexities, especially for warm plasmas. As an effective alternative, the Lagrangian method may be applied. It is shown how this method may be used in the microscopic description of small-signal wave propagation and in the study of nonlinear wave interactions. The linear theory is developed for an infinite, homogeneous, collisionless, warm magnetoplasma. A summary is presented of a perturbation expansion scheme described by Galloway and Kim (1971), and Lagrangians to third order in perturbation are considered. Attention is given to the averaged-Lagrangian density, the action-transfer and coupled-mode equations, and the general solution of the coupled-mode equations.
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-07-01
We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
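To make the detrending operation concrete, a standard first-order DFA sketch follows (this is the textbook algorithm, not the frequency-response machinery of the paper); white noise should yield a scaling exponent alpha near 0.5:

```python
# First-order detrended fluctuation analysis (DFA) on white noise.
import numpy as np

def dfa(x, scales, order=1):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        res = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)      # local polynomial trend
            res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))           # fluctuation at scale s
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=2 ** 14)
scales = (2 ** np.arange(4, 11)).astype(int)
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("estimated scaling exponent:", round(alpha, 2))   # ~0.5 for white noise
```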
Can Functional Cardiac Age be Predicted from ECG in a Normal Healthy Population
NASA Technical Reports Server (NTRS)
Schlegel, Todd; Starc, Vito; Leban, Manja; Sinigoj, Petra; Vrhovec, Milos
2011-01-01
In a normal healthy population, we sought to determine the most age-dependent conventional and advanced ECG parameters. We hypothesized that changes in several ECG parameters might correlate with age and together reliably characterize the functional age of the heart. Methods: An initial study population of 313 apparently healthy subjects was ultimately reduced to 148 subjects (74 men, 84 women, in the range from 10 to 75 years of age) after exclusion criteria. In all subjects, ECG recordings (resting 5-minute 12-lead high-frequency ECG) were evaluated via custom software programs to calculate up to 85 different conventional and advanced ECG parameters, including beat-to-beat QT and RR variability, waveform complexity, and signal-averaged, high-frequency and spatial/spatiotemporal ECG parameters. The prediction of functional age was evaluated by multiple linear regression analysis using the best 5 univariate predictors. Results: Ignoring what were ultimately small differences between males and females, the functional age was found to be predicted (R2 = 0.69, P < 0.001) from a linear combination of 5 independent variables: QRS elevation in the frontal plane (p < 0.001), a new repolarization parameter QTcorr (p < 0.001), mean high-frequency QRS amplitude (p = 0.009), the variability parameter %VLF of RRV (p = 0.021) and the P-wave width (p = 0.10). Here, QTcorr represents the correlation between the calculated QT and the measured QT signal. Conclusions: In apparently healthy subjects with normal conventional ECGs, functional cardiac age can be estimated by multiple linear regression analysis of mostly advanced ECG results. Because some parameters in the regression formula, such as QTcorr, high-frequency QRS amplitude and P-wave width, also change with disease in the same direction as with increased age, increased functional age of the heart may reflect subtle age-related pathologies in cardiac electrical function that are usually hidden on the conventional ECG.
Fourier-based linear systems description of free-breathing pulmonary magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Capaldi, D. P. I.; Svenningsen, S.; Cunningham, I. A.; Parraga, G.
2015-03-01
Fourier decomposition of free-breathing pulmonary magnetic resonance imaging (FDMRI) was recently piloted as a way to provide rapid quantitative pulmonary maps of ventilation and perfusion without the use of exogenous contrast agents. This method exploits fast pulmonary MRI acquisition of free-breathing proton (1H) pulmonary images and non-rigid registration to compensate for changes in the position and shape of the thorax associated with breathing. In this way, ventilation imaging using conventional MRI systems can be undertaken, but there has been no systematic evaluation of fundamental image quality measurements based on linear systems theory. We investigated the performance of free-breathing pulmonary ventilation imaging using a Fourier-based linear system description of each operation required to generate FDMRI ventilation maps. Twelve subjects with chronic obstructive pulmonary disease (COPD) or bronchiectasis underwent pulmonary function tests and MRI. Non-rigid registration was used to co-register the temporal series of pulmonary images. Pulmonary voxel intensities were aligned along a time axis and discrete Fourier transforms were performed on the periodic signal intensity pattern to generate frequency spectra. We determined the signal-to-noise ratio (SNR) of the FDMRI ventilation maps using a conventional approach (SNRC) and using the Fourier-based description (SNRF). Mean SNR was 4.7 ± 1.3 for subjects with bronchiectasis and 3.4 ± 1.8 for COPD subjects (p > 0.05). SNRF was significantly different from SNRC (p < 0.01). SNRF was approximately 50% of SNRC, suggesting that the linear system model well estimates the current approach.
Adi-Dako, Ofosua; Oppong Bekoe, Samuel; Ofori-Kwakye, Kwabena; Appiah, Enoch; Peprah, Paul
2017-01-01
An isocratic, sensitive and precise reverse phase high-performance liquid chromatography (RP-HPLC) method was developed and validated for the determination and quantification of hydrocortisone in controlled-release and conventional (tablets and injections) pharmaceutical preparations. Chromatographic separation was achieved on an ODS (C18) column, 5 μm, 4.6 × 150 mm, with an isocratic elution using a freshly prepared mobile phase of composition methanol : water : acetic acid (60 : 30 : 10, v/v/v) at a flow rate of 1.0 ml/min. The detection of the drug was successfully achieved at a wavelength of 254 nm. The retention time obtained for the drug was 2.26 min. The proposed method produced linear detectable responses in the concentration range of 0.02 to 0.4 mg/ml of hydrocortisone. High recoveries of 98-101% were attained at concentration levels of 80%, 100%, and 120%. The intraday and interday precision (RSD) were 0.19-0.55% and 0.33-0.71%, respectively. A comparison of hydrocortisone analysis data from the developed method and the official USP method showed no significant difference (p > 0.05) at a 95% confidence interval. The method was successfully applied to the determination and quantification of hydrocortisone in six controlled-release and fifteen conventional release pharmaceutical preparations.
An efficient parallel algorithm for the solution of a tridiagonal linear system of equations
NASA Technical Reports Server (NTRS)
Stone, H. S.
1971-01-01
Tridiagonal linear systems of equations are solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computation on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log₂ N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
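A sketch of the recursive-doubling idea on a first-order linear recurrence follows (simulated serially here; Stone's full tridiagonal algorithm is not reproduced). Each of the log₂ N sweeps would be a single vector step on a parallel machine:

```python
# Recursive doubling for x[i] = a[i]*x[i-1] + b[i]: each index carries an
# affine map, and log2(N) composition sweeps yield all prefix compositions.
import numpy as np

def recurrence_doubling(a, b, x0):
    A = np.asarray(a, dtype=float).copy()
    B = np.asarray(b, dtype=float).copy()
    k = 1
    while k < len(A):
        # Compose each affine map with the map ending k positions earlier.
        A2, B2 = A.copy(), B.copy()
        A2[k:] = A[k:] * A[:-k]
        B2[k:] = A[k:] * B[:-k] + B[k:]
        A, B = A2, B2
        k *= 2
    return A * x0 + B                    # x[i] for every i at once

rng = np.random.default_rng(0)
a, b = rng.normal(size=(2, 16))
x_serial, prev = [], 2.0                 # reference serial recurrence
for ai, bi in zip(a, b):
    prev = ai * prev + bi
    x_serial.append(prev)
print(np.allclose(recurrence_doubling(a, b, 2.0), x_serial))   # True
```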
Kim, Sung Jae; Kim, Sung Hwan; Kim, Young Hwan; Chun, Yong Min
2015-01-01
The authors have observed a failure to achieve secure fixation in elderly patients when inserting a half-pin at the anteromedial surface of the tibia. The purpose of this study was to compare two methods for inserting a half-pin at tibia diaphysis in elderly patients. Twenty cadaveric tibias were divided into Group C or V. A half-pin was inserted into the tibias of Group C via the conventional method, from the anteromedial surface to the interosseous border of the tibia diaphysis, and into the tibias of Group V via the vertical method, from the anterior border to the posterior surface at the same level. The maximum insertion torque was measured during the bicortical insertion with a torque driver. The thickness of the cortex was measured by micro-computed tomography. The relationship between the thickness of the cortex engaged and the insertion torque was investigated. The maximum insertion torque and the thickness of the cortex were significantly higher in Group V than Group C. Both groups exhibited a statistically significant linear correlation between torque and thickness by Spearman's rank correlation analysis. Half-pins inserted by the vertical method achieved purchase of more cortex than those inserted by the conventional method. Considering that cortical thickness and insertion torque in Group V were significantly greater than those in Group C, we suggest that the vertical method of half-pin insertion may be an alternative to the conventional method in elderly patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riyahi, S; Choi, W; Bhooshan, N
2016-06-15
Purpose: To compare linear and deformable registration methods for evaluation of tumor response to chemoradiation therapy (CRT) in patients with esophageal cancer. Methods: Linear and multi-resolution BSpline deformable registrations were performed on pre- and post-CRT CT/PET images of 20 patients with esophageal cancer. For both registration methods, we registered CT using the Mean Square Error (MSE) metric; however, to register PET we used the transformation obtained using Mutual Information (MI) from the same CT, since the problem is multi-modality. The similarity of the warped CT/PET was quantitatively evaluated using Normalized Mutual Information (NMI), and the plausibility of the deformation field was assessed using the inverse consistency error. To evaluate tumor response, four groups of tumor features were examined: (1) conventional PET/CT features, e.g. SUV and diameter; (2) clinical parameters, e.g. TNM stage and histology; (3) spatial-temporal PET features that describe the intensity, texture and geometry of the tumor; and (4) all features combined. Dominant features were identified using 10-fold cross-validation, and a Support Vector Machine (SVM) was deployed for tumor response prediction, with accuracy evaluated by the ROC Area Under the Curve (AUC). Results: The average and standard deviation of NMI for deformable registration using MSE were 0.2±0.054, versus 0.1±0.026 for linear registration, showing higher NMI for deformable registration. Likewise, for the MI metric, deformable registration had 0.13±0.035 compared to 0.12±0.037 for its linear counterpart. The inverse consistency error for deformable registration with the MSE metric was 4.65±2.49, versus 1.32±2.3 for linear registration, showing a smaller value for linear registration. The same conclusion was obtained for MI in terms of inverse consistency error. The AUC for both linear and deformable registration was 1, showing no difference in terms of response evaluation. Conclusion: Deformable registration showed better NMI than linear registration; however, the inverse consistency error was lower for linear registration. We do not expect to see a significant difference when warping PET images using deformable or linear registration. This work was supported in part by National Cancer Institute Grant R01CA172638.
Numerical Manifold Method for the Forced Vibration of Thin Plates during Bending
Jun, Ding; Song, Chen; Wei-Bin, Wen; Shao-Ming, Luo; Xia, Huang
2014-01-01
A novel numerical manifold method was derived from the cubic B-spline basis function. The new interpolation function is characterized by high-order coordination at the boundary of a manifold element. The linear elastic-dynamic equation used to solve the bending vibration of thin plates was derived according to the principle of minimum instantaneous potential energy. The method for the initialization of the dynamic equation and its solution process were provided. Moreover, the analysis showed that the calculated stiffness matrix exhibited favorable performance. Numerical results showed that the generalized degrees of freedom were significantly fewer and that the calculation accuracy was higher for the manifold method than for the conventional finite element method. PMID:24883403
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, A; Rangaraj, D; Perez-Andujar, A
2016-06-15
Purpose: This work's objective is to determine the overlap of processes, in terms of sub-processes and time, between acceptance testing and commissioning of a conventional medical linear accelerator, and to evaluate the time saved by consolidating the two processes. Method: A process map for acceptance testing of medical linear accelerators was created from vendor documentation (Varian and Elekta). Using AAPM TG-106 and in-house commissioning procedures, a process map was created for commissioning of said accelerators. The time to complete each sub-process in each process map was evaluated. Redundancies in the processes were found and the time spent on each was calculated. Results: Mechanical testing significantly overlaps between the two processes; redundant work here amounts to 9.5 hours. Many non-scanning beam dosimetry tests overlap, resulting in another 6 hours of overlap. Beam scanning overlaps somewhat: acceptance tests include evaluating PDDs and multiple profiles for only one field size, while commissioning beam scanning includes multiple field sizes and depths of profiles. This overlap results in another 6 hours of rework. Absolute dosimetry, field outputs, and end-to-end tests are not done at all in acceptance testing. Finally, all imaging tests done in acceptance are repeated in commissioning, resulting in about 8 hours of rework. The total time overlap between the two processes is about 30 hours. Conclusion: The process mapping done in this study shows that there are no tests done in acceptance testing that are not also recommended for commissioning. This results in about 30 hours of redundant work when preparing a conventional linear accelerator for clinical use. Considering these findings in the context of the 5000 linacs in the United States, consolidating acceptance testing and commissioning would have allowed for the treatment of an additional 25000 patients using no additional resources.
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing works with non-adaptive linear projections, and the number of compressive measurements is usually set empirically; as a result, the quality of image reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements was described. Then an estimation method for the sparsity of an image was proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients were processed with both energy normalization and sorting in descending order, and the sparsity of the image can be obtained as the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of an image effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
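The sparsity-estimation step described above can be sketched as follows (a minimal illustration, not the authors' code; the 99% energy threshold is an assumed example value and SciPy is assumed available):

```python
# Sketch: estimate image sparsity from 2D DCT coefficients by finding
# the smallest fraction of coefficients that holds a given energy share.
import numpy as np
from scipy.fft import dctn

def estimate_sparsity(image, energy_threshold=0.99):
    coeffs = dctn(image, norm="ortho")
    energy = np.sort((coeffs ** 2).ravel())[::-1]   # descending order
    energy /= energy.sum()                          # energy normalization
    k = np.searchsorted(np.cumsum(energy), energy_threshold) + 1
    return k / energy.size          # proportion of dominant coefficients

image = np.random.rand(64, 64)      # stand-in for a remote-sensing image
print(f"estimated sparsity: {estimate_sparsity(image):.3f}")
```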
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements was described. Then an estimation method for the sparsity of multi-view videos was proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients were processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video can be obtained as the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of video frames effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
Optimization, evaluation and calibration of a cross-strip DOI detector
NASA Astrophysics Data System (ADS)
Schmidt, F. P.; Kolb, A.; Pichler, B. J.
2018-02-01
This study depicts the evaluation of a SiPM detector with depth of interaction (DOI) capability via a dual-sided readout that is suitable for high-resolution positron emission tomography and magnetic resonance (PET/MR) imaging. Two different 12 × 12 pixelated LSO scintillator arrays with a crystal pitch of 1.60 mm are examined. One array is 20 mm long with crystals separated by the specular reflector Vikuiti enhanced specular reflector (ESR), and the other is 18 mm long and separated by the diffuse reflector Lumirror E60 (E60). An improvement in energy resolution from 22.6% to 15.5% for the scintillator array with the E60 reflector is achieved by taking a nonlinear light collection correction into account. The results are FWHM energy resolutions of 14.0% and 15.5%, average FWHM DOI resolutions of 2.96 mm and 1.83 mm, and FWHM coincidence resolving times of 1.09 ns and 1.48 ns for the scintillator array with the ESR and that with the E60 reflector, respectively. The measured DOI signal ratios need to be assigned to an interaction depth inside the scintillator crystal. A linear and a nonlinear method, using the intrinsic scintillator radiation from lutetium, are implemented for an easy-to-apply calibration and are compared to the conventional method, which exploits a setup with an externally collimated radiation beam. The deviation between the DOI functions of the linear or nonlinear method and the conventional method is determined. The resulting average of differences in DOI positions is 0.67 mm and 0.45 mm for the nonlinear calibration method for the scintillator array with the ESR and with the E60 reflector, respectively, whereas the linear calibration method results in 0.51 mm and 0.32 mm for the scintillator array with the ESR and the E60 reflector, respectively, and is, due to its simplicity, also applicable in assembled detector systems.
NASA Astrophysics Data System (ADS)
Toufik, Mekkaoui; Atangana, Abdon
2017-10-01
Recently a new concept of fractional differentiation with a non-local and non-singular kernel was introduced in order to extend the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme is developed for the newly established fractional differentiation. We present the error analysis in general form. The new numerical scheme was applied to solve linear and non-linear fractional differential equations. In this method, no predictor-corrector is needed to obtain an efficient algorithm. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges toward the exact solution very rapidly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Gao, H
2016-06-15
Purpose: Different from conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition information. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with the DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity-based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. The image reconstruction problem is then regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without the DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with the DRF, which provided more accurate material compositions than the standard methods without the DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
A fully implicit finite element method for bidomain models of cardiac electromechanics
Dal, Hüsnü; Göktepe, Serdar; Kaliske, Michael; Kuhl, Ellen
2012-01-01
We propose a novel, monolithic, and unconditionally stable finite element algorithm for the bidomain-based approach to cardiac electromechanics. We introduce the transmembrane potential, the extracellular potential, and the displacement field as independent variables, and extend the common two-field bidomain formulation of electrophysiology to a three-field formulation of electromechanics. The intrinsic coupling arises from both excitation-induced contraction of cardiac cells and the deformation-induced generation of intra-cellular currents. The coupled reaction-diffusion equations of the electrical problem and the momentum balance of the mechanical problem are recast into their weak forms through a conventional isoparametric Galerkin approach. As a novel aspect, we propose a monolithic approach to solve the governing equations of excitation-contraction coupling in a fully coupled, implicit sense. We demonstrate the consistent linearization of the resulting set of non-linear residual equations. To assess the algorithmic performance, we illustrate characteristic features by means of representative three-dimensional initial-boundary value problems. The proposed algorithm may open new avenues to patient specific therapy design by circumventing stability and convergence issues inherent to conventional staggered solution schemes. PMID:23175588
Chan, Kwun Chuen Gary; Qin, Jing
2015-10-01
Existing linear rank statistics cannot be applied to cross-sectional survival data without follow-up since all subjects are essentially censored. However, partial survival information is available from backward recurrence times and is frequently collected from health surveys without prospective follow-up. Under length-biased sampling, a class of linear rank statistics is proposed based only on backward recurrence times without any prospective follow-up. When follow-up data are available, the proposed rank statistic and a conventional rank statistic that utilizes follow-up information from the same sample are shown to be asymptotically independent. We discuss four ways to combine these two statistics when follow-up is present. Simulations show that all combined statistics have substantially improved power compared with conventional rank statistics, and a Mantel-Haenszel test performed the best among the proposed statistics. The method is applied to a cross-sectional health survey without follow-up and a study of Alzheimer's disease with prospective follow-up. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole-field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
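The speed advantage of whole-field LLS comes from sharing one design matrix across all voxels, so a single matrix solve fits every voxel at once. A generic sketch (not the paper's delay-compensated dual-input model) is:

```python
# Sketch: whole-field linear least squares. With a common temporal
# design matrix A, np.linalg.lstsq fits all voxels in one call instead
# of looping a nonlinear fit over each voxel.
import numpy as np

n_t, n_params, n_voxels = 240, 3, 100_000
A = np.random.rand(n_t, n_params)       # shared temporal basis (stand-in)
Y = np.random.rand(n_t, n_voxels)       # one column of data per voxel

params, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(params.shape)                     # (n_params, n_voxels): all voxels at once
```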
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. A widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on varying kernel and maximum likelihood fitting technique is introduced and studied. Such approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of parametric fitting which exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
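As a hedged illustration of maximum-likelihood fitting of 1-on-1 damage-probability data (a logistic curve is used here as a stand-in parametric form; the paper's varying-kernel regression is not reproduced):

```python
# Sketch: binomial maximum-likelihood fit of damage probability vs.
# fluence. Data values are illustrative, not from the study.
import numpy as np
from scipy.optimize import minimize

fluence   = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # J/cm^2 (made up)
n_shots   = np.array([10, 10, 10, 10, 10])
k_damaged = np.array([0, 1, 4, 8, 10])

def neg_log_likelihood(theta):
    f50, width = theta
    p = 1.0 / (1.0 + np.exp(-(fluence - f50) / width))
    p = np.clip(p, 1e-9, 1 - 1e-9)      # avoid log(0)
    return -np.sum(k_damaged * np.log(p) + (n_shots - k_damaged) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[6.0, 1.0], method="Nelder-Mead")
print(f"fitted 50% damage fluence: {res.x[0]:.2f} J/cm^2")
```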
Umehara, Kensuke; Ota, Junko; Ishida, Takayuki
2017-10-18
In this study, the super-resolution convolutional neural network (SRCNN) scheme, an emerging deep-learning-based super-resolution method, was applied and evaluated as a post-processing approach for enhancing image resolution in chest CT images. For evaluation, 89 chest CT cases were sampled from The Cancer Imaging Archive and divided randomly into 45 training cases and 44 external test cases. The SRCNN was trained using the training dataset. With the trained SRCNN, a high-resolution image was reconstructed from a low-resolution image down-sampled from an original test image. For quantitative evaluation, two image quality metrics were measured and compared to those of conventional linear interpolation methods. The image restoration quality of the SRCNN scheme was significantly higher than that of the linear interpolation methods (p < 0.001 or p < 0.05). The high-resolution image reconstructed by the SRCNN scheme was highly restored and comparable to the original reference image, in particular for a ×2 magnification. These results indicate that the SRCNN scheme significantly outperforms the linear interpolation methods for enhancing image resolution in chest CT images. The results also suggest that SRCNN may become a potential solution for generating high-resolution CT images from standard CT images.
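For context, the quantitative comparison against linear interpolation can be illustrated with a PSNR measurement on a down-sampled and re-upsampled image (the SRCNN itself is not reimplemented; bilinear interpolation stands in for the conventional baseline):

```python
# Sketch: PSNR of a x2 bilinear restoration against its reference image.
import numpy as np
from scipy.ndimage import zoom

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

reference = np.random.randint(0, 256, (128, 128)).astype(float)
low_res  = zoom(reference, 0.5, order=1)   # down-sample by 2
restored = zoom(low_res, 2.0, order=1)     # bilinear up-sample by 2
print(f"PSNR of bilinear restoration: {psnr(reference, restored):.1f} dB")
```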
Feature-space-based FMRI analysis using the optimal linear transformation.
Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S
2010-09-01
The optimal linear transformation (OLT), an image analysis technique of feature space, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear Mean Absolute Deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigated portfolio selection methods with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is advantageous when working with non-symmetric return distributions. Solutions of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
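The CVaR risk measure at the heart of the formulation can be computed directly from a return sample, as in the minimal sketch below (the lot-constrained GA optimization itself is not reproduced; the return series is simulated):

```python
# Sketch: empirical CVaR = mean loss at or beyond the VaR quantile.
import numpy as np

def cvar(returns, alpha=0.95):
    losses = -returns
    var = np.quantile(losses, alpha)         # Value-at-Risk
    return losses[losses >= var].mean()      # expected tail loss

portfolio_returns = np.random.default_rng(1).normal(0.001, 0.02, 10_000)
print(f"95% CVaR: {cvar(portfolio_returns):.4f}")
```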
Radar Measurements of Ocean Surface Waves using Proper Orthogonal Decomposition
2017-03-30
rely on use of Fourier transforms (FFT) and filtering spectra on the linear dispersion relationship for ocean surface waves. This report discusses... the measured signal (e.g., Young et al., 1985). In addition, the methods often rely on filtering the FFT of radar backscatter or Doppler velocities... to those obtained with conventional FFT and dispersion-curve filtering techniques; (iv) compare both results of (iii) to ground truth sensors (i.e.
Column Chromatography To Obtain Organic Cation Sorption Isotherms.
Jolin, William C; Sullivan, James; Vasudevan, Dharni; MacKay, Allison A
2016-08-02
Column chromatography was evaluated as a method to obtain organic cation sorption isotherms for environmental solids while using the peak skewness to identify the linear range of the sorption isotherm. Custom-packed HPLC columns and standard batch sorption techniques were used to intercompare sorption isotherms and solid-water sorption coefficients (Kd) for four organic cations (benzylamine, 2,4-dichlorobenzylamine, phenyltrimethylammonium, oxytetracycline) with two aluminosilicate clay minerals and one soil. A comparison of Freundlich isotherm parameters revealed that isotherm linearity or nonlinearity was not significantly different between column chromatography and traditional batch experiments. Importantly, skewness (a metric of eluting peak symmetry) analysis of eluting peaks can establish isotherm linearity, thereby enabling a less labor-intensive means to generate the extensive data sets of linear Kd values required for the development of predictive sorption models. Our findings clearly show that column chromatography can reproduce sorption measures from conventional batch experiments with the benefits of lower labor intensity and faster analysis times, and allows for consistent sorption measures across laboratories with distinct chromatography instrumentation.
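The isotherm analysis can be illustrated with a Freundlich fit, where an exponent near 1 indicates a linear isotherm (a hedged sketch with made-up data, not values from the study):

```python
# Sketch: fit the Freundlich model q = Kf * C**n to sorption data and
# judge linearity by how close the exponent n is to 1.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):
    return kf * c ** n

c_aq     = np.array([0.1, 0.5, 1.0, 5.0, 10.0])    # aqueous conc. (made up)
q_sorbed = np.array([0.9, 4.2, 8.1, 36.0, 66.0])   # sorbed conc. (made up)

(kf, n), _ = curve_fit(freundlich, c_aq, q_sorbed, p0=[1.0, 1.0])
print(f"Kf = {kf:.2f}, n = {n:.2f} (n close to 1 suggests a linear isotherm)")
```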
Influence of a high vacuum on the precise positioning using an ultrasonic linear motor.
Kim, Wan-Soo; Lee, Dong-Jin; Lee, Sun-Kyu
2011-01-01
This paper presents an investigation of an ultrasonic linear motor stage for use in a high vacuum environment. The slider table is driven by a hybrid bolt-clamped Langevin-type ultrasonic linear motor, which is excited at its different natural-frequency modes in both the lateral and longitudinal directions. In general, friction behavior in a vacuum environment differs from that at atmospheric pressure, and this difference significantly affects the performance of the ultrasonic linear motor. In this paper, to consistently provide stable, high-power output in a high vacuum, frequency matching was conducted. Moreover, to achieve fine control performance in the vacuum environment, a modified nominal characteristic trajectory following control method was adopted. Finally, the stage was operated under high vacuum conditions, and its operating performance was compared with that of a conventional PI compensator. As a result, robust positioning with nanometer-level accuracy was achieved under high vacuum conditions.
Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.
Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine
2018-04-05
Thanks to a reasonable cost and simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm from the raw spectrum. With this method the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model constituted by a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to process low-resolution spectra with important baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method derivation and discusses some of its interpretations. The algorithm is then described in a pseudo-code form where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure, baseline removal then peak extraction. Finally some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking, where peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment has been conducted on real spectra. In this study a collection of spectra of spiked proteins was acquired and then analyzed. Better performances of the proposed method, in terms of accuracy and reproducibility, have been observed and validated by an extended statistical analysis.
A new tritiated water measurement method with plastic scintillator pellets.
Furuta, Etsuko; Iwasaki, Noriko; Kato, Yuka; Tomozoe, Yusuke
2016-01-01
A new tritiated water measurement method with plastic scintillator pellets (PS-pellets) using a conventional liquid scintillation counter was developed. The PS-pellets used were 3 mm in both diameter and length. A low-potassium glass vial was filled completely with the pellets, and 5 to 100 μl of tritiated water was applied to the vial. The sample solution was then dispersed in the interstices between the pellets in the vial. This method needs no liquid scintillator, so no organic liquid waste is generated. The counting efficiency with the pellets was approximately 48% when a 5 μl solution was used, which was higher than that of conventional measurement using a liquid scintillator. The relationship between count rate and activity showed good linearity. The pellets can be used repeatedly, so little solid waste is generated with this method. The PS-pellets are useful for tritiated water measurement; however, it is necessary to develop a new device that can handle larger volumes and measure low-level concentrations, such as for environmental applications.
Mapping urban environmental noise: a land use regression method.
Xie, Dan; Liu, Yi; Chen, Jining
2011-09-01
Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and the corresponding change in noise. Therefore, these conventional methods can hardly forecast urban noise for a given outlook of development layout. This paper, for the first time, introduces a land use regression method, which has been applied to simulating urban air quality for a decade, to construct an urban noise model (LUNOS) in Dalian Municipality, Northwest China. The LUNOS model describes noise as a dependent variable of the surrounding land-use areas via a regression function. The results suggest that a linear model performs better in fitting the monitoring data, and there is no significant difference in the LUNOS outputs when applied at different spatial scales. As LUNOS facilitates a better understanding of the association between land use and urban environmental noise than conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and an aid to smart decision-making.
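Conceptually, a land-use regression noise model regresses measured noise levels on surrounding land-use areas; a minimal sketch with hypothetical predictors (not the LUNOS variables) follows:

```python
# Sketch: linear land-use regression of noise on buffer-area predictors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# hypothetical columns: road, commercial, and green-space area in a buffer
land_use = rng.uniform(0, 1, size=(50, 3))
noise_db = 50 + land_use @ np.array([15.0, 8.0, -6.0]) + rng.normal(0, 1, 50)

model = LinearRegression().fit(land_use, noise_db)
print("coefficients (dB per unit area):", model.coef_)
print("prediction for a new site:", model.predict([[0.3, 0.2, 0.4]]))
```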
Multi-domain boundary element method for axi-symmetric layered linear acoustic systems
NASA Astrophysics Data System (ADS)
Reiter, Paul; Ziegelwanger, Harald
2017-12-01
Homogeneous porous materials like rock wool or synthetic foam are the main tool for acoustic absorption. The conventional absorbing structure for sound-proofing consists of one or multiple absorbers placed in front of a rigid wall, with or without air gaps in between. Various models exist to describe these so-called multi-layered acoustic systems mathematically for incoming plane waves. However, there is no efficient method to calculate the sound field in a half-space above a multi-layered acoustic system for an incoming spherical wave. In this work, an axi-symmetric multi-domain boundary element method (BEM) for absorbing multi-layered acoustic systems and incoming spherical waves is introduced. In the proposed BEM formulation, a complex wave number is used to model absorbing materials as a fluid, and a coordinate transformation is introduced which simplifies the singular integrals of the conventional BEM to non-singular radial and angular integrals. The radial and angular parts are integrated analytically and numerically, respectively. The output of the method can be interpreted as a numerical half-space Green's function for grounds consisting of layered materials.
Analysis of biomedical time signals for characterization of cutaneous diabetic micro-angiopathy
NASA Astrophysics Data System (ADS)
Kraitl, Jens; Ewald, Hartmut
2007-02-01
Photo-plethysmography (PPG) is frequently used in research on the microcirculation of blood. It is a non-invasive procedure and takes minimal time to carry out. Usually PPG time series are analyzed by conventional linear methods, mainly Fourier analysis. These methods may not be optimal for the investigation of nonlinear effects of the heart and circulation system such as vasomotion, autoregulation, thermoregulation, breathing, heartbeat, and vessel dynamics. Wavelet analysis of the PPG time series is a specific, sensitive nonlinear method for the in vivo identification of heart and circulation patterns and human health status. This nonlinear analysis of PPG signals provides additional information which cannot be detected using conventional approaches. The wavelet analysis has been used to study healthy subjects and to characterize the health status of patients with a functional cutaneous microangiopathy associated with diabetic neuropathy. The non-invasive in vivo method is based on the radiation of monochromatic light through an area of skin on the finger. A Photometrical Measurement Device (PMD) has been developed. The PMD is suitable for non-invasive continuous online monitoring of one or more biologic constituent values and blood circulation patterns.
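As an illustration of the wavelet step (a sketch assuming the PyWavelets package; the PPG trace is synthetic, combining a cardiac component with a slower respiratory-like rhythm):

```python
# Sketch: continuous wavelet transform of a synthetic PPG-like signal,
# resolving slow rhythms (e.g. breathing) alongside the heartbeat.
import numpy as np
import pywt

fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)

scales = np.arange(1, 256)
coefs, freqs = pywt.cwt(ppg, scales, "morl", sampling_period=1 / fs)
print(coefs.shape, f"frequency range: {freqs.min():.3f}-{freqs.max():.1f} Hz")
```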
Development of a numerical model for vehicle-bridge interaction analysis of railway bridges
NASA Astrophysics Data System (ADS)
Kim, Hee Ju; Cho, Eun Sang; Ham, Jun Su; Park, Ki Tae; Kim, Tae Heon
2016-04-01
In the field of civil engineering, analyzing dynamic response has long been a main concern. These analysis methods can be divided into the moving-load analysis method and the moving-mass analysis method, and formulating separate equations of motion for the vehicle and the bridge has recently been studied. In this study, a numerical method is presented that can consider various train types and can solve the equations of motion for vehicle-bridge interaction analysis by a non-iterative procedure through formulating the coupled equations of motion. Also, an accurate three-dimensional numerical model of the KTX vehicle was developed in order to analyze its dynamic response characteristics. The equations of motion for the conventional trains are derived, and the numerical models of the conventional trains are idealized by a set of linear springs and dashpots with 18 degrees of freedom. The bridge models are simplified using three-dimensional space frame elements based on the Euler-Bernoulli theory. The rail irregularities in the vertical and lateral directions are generated by PSD functions of the Federal Railroad Administration (FRA).
Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.
2000-01-01
The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using the Neural-network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize the advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is shown that, in spite of the model-based nature of GPC, it has good robustness properties, being a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural-network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
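The receding-horizon idea behind GPC can be shown with a scalar linear plant: at each step a quadratic cost over an N-step horizon is minimized and only the first control move is applied. This is a generic predictive-control sketch, not the NGPC architecture of the thesis:

```python
# Sketch: unconstrained GPC-style receding-horizon control of
# x[k+1] = a*x[k] + b*u[k], driving the state to a setpoint.
import numpy as np

a, b, N, lam = 0.9, 0.5, 10, 0.1     # plant, horizon, effort weight (made up)
x, setpoint = 5.0, 0.0

# Prediction over the horizon: x_pred = F*x + G @ u
F = np.array([a ** (i + 1) for i in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = a ** (i - j) * b

for k in range(20):
    # quadratic minimum: (G'G + lam*I) u = G'(r - F*x)
    u = np.linalg.solve(G.T @ G + lam * np.eye(N),
                        G.T @ (np.full(N, setpoint) - F * x))
    x = a * x + b * u[0]             # apply first move only, then recede
print(f"state after 20 steps: {x:.4f}")
```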
NASA Astrophysics Data System (ADS)
Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.
2017-05-01
The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations such as the mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which the output of the desired LOS is known. We then extend the learning process with regularization, such that a lower-complexity or sparse LOS can be learned. Hence, we discuss what 'lower complexity' means in this context and how to represent it in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
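Learning the LOS/OWA weights can be posed as a constrained least-squares problem over rank-ordered samples (a hedged sketch; the paper's regularized and sparse variants are not reproduced):

```python
# Sketch: recover OWA weights (nonnegative, summing to 1) from
# sorted training samples and observed aggregate outputs.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(200, 5))
Xs = -np.sort(-X, axis=1)                  # rank-order each sample, descending
w_true = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
y = Xs @ w_true + rng.normal(0, 0.01, 200)

res = minimize(lambda w: np.sum((Xs @ w - y) ** 2),
               x0=np.full(5, 0.2), method="SLSQP",
               bounds=[(0, 1)] * 5,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("learned OWA weights:", np.round(res.x, 3))
```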
An Induction Heating Method with Traveling Magnetic Field for Long Structure Metal
NASA Astrophysics Data System (ADS)
Sekine, Takamitsu; Tomita, Hideo; Obata, Shuji; Saito, Yukio
A novel dismantlable adhesion method for the recycling of interior materials is proposed. This method applies high-frequency induction heating and a thermoplastic adhesive. For adhesion of interior material to a long steel stud, a conventional spiral coil, like that of an IH cooking heater, is inadequate for uniform heating of the stud. Therefore, we have proposed an induction heating method with a traveling magnetic field for bonding long structures. In this paper, we describe the new adhesion method using a 20 kHz, three-phase 200 V inverter and a linear induction coil. The induction heating characteristics measured on thin steel plates and long studs demonstrate the usefulness of the method for uniform heating of long structures.
NASA Astrophysics Data System (ADS)
Lasche, George; Coldwell, Robert; Metzger, Robert
2017-09-01
A new application (known as "VRF", or "Visual RobFit") for the analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide and, also at each iteration, adjusts the activities of each nuclide as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing, until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks, masked by larger overlapping peaks, that would not otherwise be possible. The application and method are briefly described and two examples are presented.
Large angle solid state position sensitive x-ray detector system
Kurtz, David S.; Ruud, Clay O.
1998-01-01
A method and apparatus for x-ray measurement of certain properties of a solid material. In distinction to known methods and apparatus, this invention employs a specific fiber-optic bundle configuration, termed a reorganizer, itself known for other uses, for coherently transmitting visible light originating from the scintillation of diffracted x-radiation from the solid material gathered along a substantially one dimensional linear arc, to a two-dimensional photo-sensor array. The two-dimensional photodetector array, with its many closely packed light sensitive pixels, is employed to process the information contained in the diffracted radiation and present the information in the form of a conventional x-ray diffraction spectrum. By this arrangement, the angular range of the combined detector faces may be increased without loss of angular resolution. Further, the prohibitively expensive coupling together of a large number of individual linear diode photodetectors, which would be required to process signals generated by the diffracted radiation, is avoided.
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters like the window size often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon flux, and the approach must be replaced by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixels from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear coefficient response, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
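The streaming idea can be sketched as running weighted sums updated as each pixel arrives, so the centroid is ready as soon as the window's last pixel has streamed in (an illustration of the concept only, not the hardware pipeline):

```python
# Sketch: center of gravity computed from a pixel stream in one pass.
import numpy as np

def streaming_centroid(pixel_stream):
    s = sx = sy = 0.0
    for x, y, intensity in pixel_stream:   # pixels arrive one at a time
        s += intensity
        sx += x * intensity
        sy += y * intensity
    return sx / s, sy / s

# synthetic Gaussian spot centered near (3.2, 4.7)
spot = [(x, y, np.exp(-((x - 3.2) ** 2 + (y - 4.7) ** 2)))
        for x in range(8) for y in range(8)]
print("centroid estimate:", streaming_centroid(spot))
```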
NASA Technical Reports Server (NTRS)
Ustinov, Eugene A.; Sunseri, Richard F.
2005-01-01
An approach is presented to the inversion of gravity fields based on evaluating the partials of observables with respect to gravity harmonics using the solution of the adjoint problem of spacecraft orbital dynamics. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of the desired gravity harmonics, this method involves integration of N adjoint solutions, as compared to integration of N² partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher-resolution gravity models, this approach becomes increasingly more effective in terms of computer resources compared to the approach based on the solution of the forward problem of orbital dynamics.
Phase-sensitive spectral estimation by the hybrid filter diagonalization method.
Celik, Hasan; Ridge, Clark D; Shaka, A J
2012-01-01
A more robust way to obtain a high-resolution multidimensional NMR spectrum from limited data sets is described. The Filter Diagonalization Method (FDM) is used to analyze phase-modulated data and cast the spectrum in terms of phase-sensitive Lorentzian "phase-twist" peaks. These spectra are then used to obtain absorption-mode phase-sensitive spectra. In contrast to earlier implementations of multidimensional FDM, the absolute phase of the data need not be known beforehand, and linear phase corrections in each frequency dimension are possible, if they are required. Regularization is employed to improve the conditioning of the linear algebra problems that must be solved to obtain the spectral estimate. While regularization smoothes away noise and small peaks, a hybrid method allows the true noise floor to be correctly represented in the final result. Line shape transformation to a Gaussian-like shape improves the clarity of the spectra, and is achieved by a conventional Lorentzian-to-Gaussian transformation in the time-domain, after inverse Fourier transformation of the FDM spectra. The results obtained highlight the danger of not using proper phase-sensitive line shapes in the spectral estimate. The advantages of the new method for the spectral estimate are the following: (i) the spectrum can be phased by conventional means after it is obtained; (ii) there is a true and accurate noise floor; and (iii) there is some indication of the quality of fit in each local region of the spectrum. The method is illustrated with 2D NMR data for the first time, but is applicable to n-dimensional data without any restriction on the number of time/frequency dimensions. Copyright © 2011. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Mukhopadhyaya, Biswarup; Roy, Sourov
1998-06-01
We investigate the signal γγ + E̸ (missing energy) in a high-energy linear e+e- collider, with a view to differentiating between gauge-mediated supersymmetry breaking and conventional supersymmetric models. Prima facie, there is considerable chance of confusion between the two scenarios if the assumption of gaugino mass unification is relaxed. We show that the use of polarized electron beams enables one to distinguish between the two schemes in most cases. There are some regions of the parameter space where this idea does not work, and we suggest some additional methods of distinction. We also perform an analysis of some signals in the gauge-mediated model coming from the pair production of the second-lightest neutralino.
Güçlü, Kubilay; Ozyürek, Mustafa; Güngör, Nilay; Baki, Sefa; Apak, Reşat
2013-09-10
Development of sensitive and selective methods of determination for biothiols is important because of their significant roles in biological systems. We present a new optical sensor using Ellman's reagent (DTNB)-adsorbed gold nanoparticles (Au-NPs) (DTNB-Au-NP) in a colloidal solution devised to selectively determine biologically important thiols (biothiols) in biological samples and pharmaceuticals. 5,5'-Dithio-bis(2-nitrobenzoic acid) (DTNB), a versatile water-soluble compound for quantitating free sulfhydryl groups in solution, was adsorbed through non-covalent interaction onto Au-NPs, and the absorbance change associated with the formation of the yellow-colored 5-thio-2-nitrobenzoate (TNB(2-)) anion as a result of reaction with biothiols was measured at 410 nm. The sensor gave a linear response over a wide concentration range of standard biothiols comprising cysteine, glutathione, homocysteine, cysteamine, dihydrolipoic acid, and 1,4-dithioerythritol. The calibration curves of individual biothiols were constructed, and their molar absorptivities and linear concentration ranges determined. The cysteine equivalent thiol content (CETC) values of various biothiols using the DTNB-Au-NP assay were comparable to those of the conventional DTNB assay, showing that the immobilized DTNB reagent retained its reactivity toward thiols. Common biological sample ingredients like amino acids, flavonoids, vitamins, and plasma antioxidants did not interfere with the proposed sensing method. The assay was validated through linearity, additivity, precision, and recovery, demonstrating that it is reliable and robust. DTNB-adsorbed Au-NP probes provided higher sensitivity (i.e., lower detection limits) in biothiol determination than the conventional DTNB reagent. Under optimized conditions, cysteine (Cys) was quantified by the proposed assay with a detection limit (LOD) of 0.57 μM and acceptable linearity ranging from 0.4 to 29.0 μM (r=0.998). Copyright © 2013 Elsevier B.V. All rights reserved.
Multi-mode sliding mode control for precision linear stage based on fixed or floating stator.
Fang, Jiwen; Long, Zhili; Wang, Michael Yu; Zhang, Lufan; Dai, Xufei
2016-02-01
This paper presents the control performance of a linear motion stage driven by a Voice Coil Motor (VCM). Unlike a conventional VCM, the stator of this VCM is adjustable: it can be configured as a floating stator or a fixed stator. A Multi-Mode Sliding Mode Control (MMSMC), comprising a conventional Sliding Mode Control (SMC) and an Integral Sliding Mode Control (ISMC), is designed to control the linear motion stage. The control is switched between SMC and ISMC based on an error threshold. To eliminate chattering, a smooth function is adopted instead of the signum function. The experimental results with the floating stator show that the positioning accuracy and tracking performance of the linear motion stage are improved with the MMSMC approach.
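The chattering fix mentioned above, replacing the signum in u = -k*sign(s) with a smooth saturation over a boundary layer, can be sketched as follows (illustrative gains, not the MMSMC design):

```python
# Sketch: smooth sliding-mode control law; tanh(s/phi) stands in for
# sign(s) inside a boundary layer of width phi to suppress chattering.
import numpy as np

def smc_control(s, k=2.0, phi=0.05):
    return -k * np.tanh(s / phi)

for s in (-0.2, -0.01, 0.0, 0.01, 0.2):
    print(f"s = {s:+.2f} -> u = {smc_control(s):+.3f}")
```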
Reinventing the Accelerator for the High Energy Frontier
Rosenzweig, James [UCLA, Los Angeles, California, United States]
2017-12-09
The history of discovery in high-energy physics has been intimately connected with progress in methods of accelerating particles for the past 75 years. This remains true today, as the post-LHC era in particle physics will require significant innovation and investment in a superconducting linear collider. The choice of the linear collider as the next-generation discovery machine, and the selection of superconducting technology has rather suddenly thrown promising competing techniques -- such as very large hadron colliders, muon colliders, and high-field, high frequency linear colliders -- into the background. We discuss the state of such conventional options, and the likelihood of their eventual success. We then follow with a much longer view: a survey of a new, burgeoning frontier in high energy accelerators, where intense lasers, charged particle beams, and plasmas are all combined in a cross-disciplinary effort to reinvent the accelerator from its fundamental principles on up.
Zhou, Xiaotong; Meng, Xiangjun; Cheng, Longmei; Su, Chong; Sun, Yantong; Sun, Lingxia; Tang, Zhaohui; Fawcett, John Paul; Yang, Yan; Gu, Jingkai
2017-05-16
Polyethylene glycols (PEGs) are synthetic polymers composed of repeating ethylene oxide subunits. They display excellent biocompatibility and are widely used as pharmaceutical excipients. To fully understand the biological fate of PEGs requires accurate and sensitive analytical methods for their quantitation. Application of conventional liquid chromatography-tandem mass spectrometry (LC-MS/MS) is difficult because PEGs have polydisperse molecular weights (MWs) and tend to produce multicharged ions in-source, resulting in innumerable precursor ions. As a result, multiple reaction monitoring (MRM) fails to scan all ion pairs, so information on the fate of unselected ions is missed. This article addresses this problem by application of liquid chromatography-triple-quadrupole/time-of-flight mass spectrometry (LC-Q-TOF MS) based on the MS ALL technique. This technique performs information-independent acquisition by allowing all PEG precursor ions to enter the collision cell (Q2). In-quadrupole collision-induced dissociation (CID) in Q2 then effectively generates several fragments from all PEGs due to the high collision energy (CE). A particular PEG product ion (m/z 133.08592) was found to be common to all linear PEGs and allowed their total quantitation in rat plasma with high sensitivity, excellent linearity, and reproducibility. Assay validation showed the method was linear for all linear PEGs over the concentration range 0.05-5.0 μg/mL. The assay was successfully applied to a pharmacokinetic study in rats involving intravenous administration of linear PEG 600, PEG 4000, and PEG 20000. It is anticipated that the method will have wide-ranging applications and stimulate the development of assays for other pharmaceutical polymers in the future.
Wang, Boshuo; Aberra, Aman S; Grill, Warren M; Peterchev, Angel V
2018-04-01
We present a theory and computational methods to incorporate transverse polarization of neuronal membranes into the cable equation to account for the secondary electric field generated by the membrane in response to transverse electric fields. The effect of transverse polarization on nonlinear neuronal activation thresholds is quantified and discussed in the context of previous studies using linear membrane models. The response of neuronal membranes to applied electric fields is derived under two time scales and a unified solution of transverse polarization is given for spherical and cylindrical cell geometries. The solution is incorporated into the cable equation re-derived using an asymptotic model that separates the longitudinal and transverse dimensions. Two numerical methods are proposed to implement the modified cable equation. Several common neural stimulation scenarios are tested using two nonlinear membrane models to compare thresholds of the conventional and modified cable equations. The implementations of the modified cable equation incorporating transverse polarization are validated against previous results in the literature. The test cases show that transverse polarization has limited effect on activation thresholds. The transverse field only affects thresholds of unmyelinated axons for short pulses and in low-gradient field distributions, whereas myelinated axons are mostly unaffected. The modified cable equation captures the membrane's behavior on different time scales and models more accurately the coupling between electric fields and neurons. It addresses the limitations of the conventional cable equation and allows sound theoretical interpretations. The implementation provides simple methods that are compatible with current simulation approaches to study the effect of transverse polarization on nonlinear membranes. The minimal influence by transverse polarization on axonal activation thresholds for the nonlinear membrane models indicates that predictions of stronger effects in linear membrane models with a fixed activation threshold are inaccurate. Thus, the conventional cable equation works well for most neuroengineering applications, and the presented modeling approach is well suited to address the exceptions.
Virtual rigid body: a new optical tracking paradigm in image-guided interventions
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Lee, David S.; Deshmukh, Nishikant; Boctor, Emad M.
2015-03-01
Tracking technology is often necessary for image-guided surgical interventions. Optical tracking is one of the options, but it suffers from line-of-sight and workspace limitations. Optical tracking is accomplished by attaching a rigid-body marker, having a pattern for pose detection, onto a tool or device. A larger rigid body results in more accurate tracking, but at the same time its large size limits its usage in a crowded surgical workspace. This work presents a prototype of a novel optical tracking method using a virtual rigid body (VRB). We define the VRB as a 3D rigid-body marker in the form of a pattern projected onto a surface from a light source. Its pose can be recovered by observing the projected pattern with a stereo-camera system. The rigid body's size is no longer physically limited, as we can manufacture small light sources. Conventional optical tracking also requires line of sight to the rigid body. The VRB overcomes these limitations by detecting a pattern projected onto the surface. We can project the pattern onto a region of interest, allowing the pattern to always be in the view of the optical tracker. This helps to decrease the occurrence of occlusions. This manuscript describes the method and results compared with conventional optical tracking in an experimental setup using known motions. The experiments are done using an optical tracker and a linear stage, resulting in targeting errors of 0.38 ± 0.28 mm with our method compared to 0.23 ± 0.22 mm with conventional optical markers. Another experiment that replaced the linear stage with a robot arm resulted in rotational errors of 0.50 ± 0.31° and 2.68 ± 2.20°, and translation errors of 0.18 ± 0.10 mm and 0.03 ± 0.02 mm, respectively.
Swept Impact Seismic Technique (SIST)
Park, C.B.; Miller, R.D.; Steeples, D.W.; Black, R.A.
1996-01-01
A coded seismic technique is developed that can result in a higher signal-to-noise ratio than a conventional single-pulse method does. The technique is cost-effective and time-efficient and therefore well suited for shallow-reflection surveys where high resolution and cost-effectiveness are critical. A low-power impact source transmits a few to several hundred high-frequency broad-band seismic pulses during several seconds of recording time according to a deterministic coding scheme. The coding scheme consists of a time-encoded impact sequence in which the rate of impact (cycles/s) changes linearly with time, providing a broad range of impact rates. Impact times used during the decoding process are recorded on one channel of the seismograph. The coding concept combines the vibroseis swept-frequency and the Mini-Sosie random impact concepts. The swept-frequency concept greatly improves the suppression of correlation noise with far fewer impacts than normally used in the Mini-Sosie technique. The impact concept makes the technique simple and efficient in generating high-resolution seismic data, especially in the presence of noise. The transfer function of the impact sequence simulates a low-cut filter with the cutoff frequency the same as the lowest impact rate. This property can be used to attenuate low-frequency ground-roll noise without using an analog low-cut filter or a spatial source (or receiver) array as is necessary with a conventional single-pulse method. Because of the discontinuous coding scheme, the decoding process is accomplished by a "shift-and-stacking" method that is much simpler and quicker than cross-correlation. The simplicity of the coding allows the mechanical design of the source to remain simple. Several different types of mechanical systems could be adapted to generate a linear impact sweep. In addition, the simplicity of the coding also allows the technique to be used with conventional acquisition systems, with only minor modifications.
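The shift-and-stack decoding can be sketched directly: each recorded impact time shifts the raw trace back to zero time, and the shifted copies are summed so uncorrelated noise averages down (synthetic single-trace data with integer-sample impact times for simplicity):

```python
# Sketch: decode a coded impact record by shifting to each impact time
# and stacking, instead of cross-correlation.
import numpy as np

raw = np.zeros(6000)
impact_samples = np.array([100, 700, 1500, 2600, 4000])  # swept impact rate
wavelet = np.hanning(21)                                 # stand-in pulse
for t0 in impact_samples:
    raw[t0:t0 + wavelet.size] += wavelet
raw += np.random.default_rng(5).normal(0, 0.2, raw.size) # ambient noise

n_out = 1000
stacked = np.zeros(n_out)
for t0 in impact_samples:            # shift each impact to zero time
    stacked += raw[t0:t0 + n_out]
stacked /= impact_samples.size       # noise drops roughly as 1/sqrt(N)
print("peak of decoded trace at sample:", int(np.argmax(stacked)))
```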
Algorithm for ion beam figuring of low-gradient mirrors.
Jiao, Changjun; Li, Shengyi; Xie, Xuhui
2009-07-20
Ion beam figuring technology for low-gradient mirrors is discussed. Ion beam figuring is a noncontact machining technique in which a beam of high-energy ions is directed toward a target workpiece to remove material in a predetermined and controlled fashion. Owing to this noncontact mode of material removal, problems associated with tool wear and edge effects, which are common in conventional contact polishing processes, are avoided. Based on the Bayesian principle, an iterative dwell-time algorithm for planar mirrors is deduced from the computer-controlled optical surfacing (CCOS) principle. Given the properties of the removal function, the shaping process for low-gradient mirrors can be approximated by the linear model for planar mirrors. From this analysis, an error surface figuring technology for low-gradient mirrors with a linear path is established. Given the near-Gaussian property of the removal function, the figuring process with a spiral path can be described by the conventional linear CCOS principle, and a Bayesian-based iterative algorithm can be used to deconvolve the dwell time. Moreover, a selection criterion for the spiral parameter is given. Ion beam figuring technology with a spiral scan path based on these methods can be used to figure mirrors with non-axis-symmetric errors. Experiments on SiC chemical vapor deposition planar and Zerodur paraboloid samples were made, and the final surface errors are all below λ/100.
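The Bayesian iterative dwell-time deconvolution can be illustrated in one dimension with a multiplicative Richardson-Lucy-style update (used here as a stand-in for the paper's iteration; beam and error profiles are made up):

```python
# Sketch: iteratively find dwell time so that dwell (*) beam reproduces
# the surface error profile.
import numpy as np

x = np.linspace(-1, 1, 201)
beam = np.exp(-x ** 2 / 0.01)
beam /= beam.sum()                        # removal function per unit dwell
error = 1.0 + 0.5 * np.cos(np.pi * x)     # positive surface error to remove

dwell = np.full_like(error, error.mean()) # uniform initial guess
for _ in range(200):                      # multiplicative Bayesian update
    predicted = np.convolve(dwell, beam, mode="same")
    ratio = error / np.maximum(predicted, 1e-12)
    dwell *= np.convolve(ratio, beam[::-1], mode="same")

residual = error - np.convolve(dwell, beam, mode="same")
print(f"peak residual after deconvolution: {np.abs(residual).max():.2e}")
```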
Liang, Wenkel; Chapman, Craig T; Ding, Feizhi; Li, Xiaosong
2012-03-01
A first-principles solvated electronic dynamics method is introduced. Solvent electronic degrees of freedom are coupled to the time-dependent electronic density of a solute molecule by means of the implicit reaction field method, and the entire electronic system is propagated in time. This real-time time-dependent approach, incorporating the polarizable continuum solvation model, is shown to be very effective in describing the dynamical solvation effect in the charge transfer process and yields a consistent absorption spectrum in comparison to the conventional linear response results in solution. © 2012 American Chemical Society
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, results on a continuous-energy problem are presented.
On shifted Jacobi spectral method for high-order multi-point boundary value problems
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Hafez, R. M.
2012-10-01
This paper reports a spectral tau method for numerically solving multi-point boundary value problems (BVPs) of linear high-order ordinary differential equations. The construction of the shifted Jacobi tau approximation is based on conventional differentiation. This use of differentiation allows the imposition of the governing equation at the whole set of grid points and the straightforward implementation of multiple boundary conditions. Extension of the tau method to high-order multi-point BVPs with variable coefficients is treated using the shifted Jacobi Gauss-Lobatto quadrature. A shifted Jacobi collocation method is developed for solving nonlinear high-order multi-point BVPs. The performance of the proposed methods is investigated by considering several examples. Accurate results and high convergence rates are achieved.
Linear-parameter-varying gain-scheduled control of aerospace systems
NASA Astrophysics Data System (ADS)
Barker, Jeffrey Michael
The dynamics of many aerospace systems vary significantly as a function of flight condition. Robust control provides methods of guaranteeing performance and stability goals across flight conditions. In mu-synthesis, changes to the dynamical system are primarily treated as uncertainty. This method has been successfully applied to many control problems, and here is applied to flutter control. More recently, two techniques for generating robust gain-scheduled controllers have been developed. Linear fractional transformation (LFT) gain-scheduled control is an extension of mu-synthesis in which the plant and controller are explicit functions of parameters measurable in real time. This LFT gain-scheduled control technique is applied to the Benchmark Active Control Technology (BACT) wing and compared with mu-synthesis control. Linear parameter-varying (LPV) gain-scheduled control is an extension of H∞ control to parameter-varying systems. LPV gain-scheduled control directly incorporates bounds on the rate of change of the scheduling parameters and often reduces the conservatism inherent in LFT gain-scheduled control. Gain-scheduled LPV control of the BACT wing compares very favorably with the LFT controller. Gain-scheduled LPV controllers are generated for the lateral-directional and longitudinal axes of the Innovative Control Effectors (ICE) aircraft and implemented in nonlinear simulations and real-time piloted nonlinear simulations. Cooper-Harper and pilot-induced oscillation ratings were obtained for an initial design, a reference aircraft, and a redesign. Piloted simulation results for the initial LPV gain-scheduled control of the ICE aircraft are compared with results for a conventional fighter aircraft in discrete pitch and roll angle tracking tasks. The results for the redesigned controller are significantly better than both the previous LPV controller and the conventional aircraft.
Application of Fast Multipole Methods to the NASA Fast Scattering Code
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance-type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single-level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
NASA Astrophysics Data System (ADS)
Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries
2017-08-01
This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor-product grid: Newmark, Crank-Nicolson (CN), and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, in the same manner as Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid have immediate practical applications.
Yolcu, Şükran Melda; Fırat, Merve; Chormey, Dotse Selali; Büyükpınar, Çağdaş; Turak, Fatma; Bakırdere, Sezgin
2018-05-01
In this study, dispersive liquid-liquid microextraction was systematically optimized for the preconcentration of nickel after forming a complex with diphenylcarbazone. The measurement output of the flame atomic absorption spectrometer was further enhanced by fitting a custom-cut slotted quartz tube to the flame burner head. The extraction method increased the amount of nickel reaching the flame, and the slotted quartz tube increased the residence time of nickel atoms in the flame to record higher absorbance. The two methods combined gave an approximately 90-fold enhancement in sensitivity over conventional flame atomic absorption spectrometry. The optimized method was applicable over a wide linear concentration range, and it gave a detection limit of 2.1 µg L⁻¹. Low relative standard deviations at the lowest concentration in the linear calibration plot indicated high precision for both the extraction process and the instrumental measurements. A coal fly ash standard reference material (SRM 1633c) was used to determine the accuracy of the method, and the experimental results were compatible with the certified value. Spiked recovery tests were also used to validate the applicability of the method.
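As a worked illustration of how a detection limit follows from a linear calibration plot, the common 3σ convention divides three times the blank standard deviation by the calibration slope. The numbers below are invented for the sketch and are not the study's data:

import numpy as np

# Hypothetical calibration: nickel concentration (ug/L) vs. absorbance
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
absorbance = np.array([0.021, 0.043, 0.105, 0.212, 0.419])

slope, intercept = np.polyfit(conc, absorbance, 1)  # linear calibration plot
blank_sd = 0.003                                    # assumed SD of blank replicates
lod = 3.0 * blank_sd / slope                        # 3-sigma detection limit
print(f"slope = {slope:.5f} AU per ug/L, LOD ~ {lod:.1f} ug/L")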
Seino, Junji; Nakai, Hiromi
2012-10-14
The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to a two-particle IODKH Hamiltonian as well as the one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculation. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X2 and in the hydrogen halide molecules (HX)n (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear scaling with respect to system size and a small prefactor.
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, because the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive for practical problems. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
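The damping-parameter recycling can be sketched as follows: a Krylov basis for the Gauss-Newton normal system is built once, and each Levenberg-Marquardt damping value then requires only a small projected solve. This Python sketch is ours (the paper's implementation is in Julia within MADS); sizes, names, and the plain Lanczos process are illustrative assumptions.

import numpy as np

def lanczos(A, b, k):
    """k-step Lanczos on symmetric A, returning Q and tridiagonal T with
    A Q ~ Q T (assumes k is well below the problem size)."""
    n = b.size
    Q = np.zeros((n, k)); alpha = np.zeros(k); beta = np.zeros(k)
    q, q_prev = b / np.linalg.norm(b), np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        alpha[j] = q @ w
        w -= alpha[j] * q
        beta[j] = np.linalg.norm(w)
        q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return Q, T

def lm_steps(J, r, dampings, k=20):
    """Solve (J^T J + lam I) dx = -J^T r for several damping values lam,
    reusing a single Krylov subspace built for the first solve."""
    A, g = J.T @ J, J.T @ r
    Q, T = lanczos(A, g, k)                  # built once ...
    I = np.eye(k)
    return [Q @ np.linalg.solve(T + lam * I, -(Q.T @ g)) for lam in dampings]

# Toy Jacobian/residual; every damping value reuses the same subspace
J = np.random.default_rng(2).normal(size=(200, 80))
r = np.random.default_rng(3).normal(size=200)
steps = lm_steps(J, r, dampings=[1e-2, 1e-1, 1.0])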
NASA Technical Reports Server (NTRS)
Krauspe, P.
1985-01-01
The effect of downburst-type wind shears on the longitudinal dynamic behavior of an unguided aircraft is simulated numerically on the basis of published meteorological data and the flight characteristics of an A300-B passenger jet. The nonlinear differential equations of the aircraft motion are linearized by conventional methods, and the wind effects are introduced via the linear derivatives of the wind components referred to the wind gradients to obtain simplified technical models of the longitudinal response to all possible types of constant-gradient wind shears during the first 20-60 sec. Graphs, maps, and diagrams are provided, and a number of accidents presumed to have involved wind shears are analyzed in detail.
Joint polarization tracking and channel equalization based on radius-directed linear Kalman filter
NASA Astrophysics Data System (ADS)
Zhang, Qun; Yang, Yanfu; Zhong, Kangping; Liu, Jie; Wu, Xiong; Yao, Yong
2018-01-01
We propose a joint polarization tracking and channel equalization scheme based on a radius-directed linear Kalman filter (RD-LKF), introducing a butterfly finite-impulse-response (FIR) filter into our previously proposed RD-LKF method. Along with fast polarization tracking, it can simultaneously compensate inter-symbol interference (ISI) effects, including residual chromatic dispersion and polarization mode dispersion. Compared with the conventional radius-directed equalizer (RDE) algorithm, it is demonstrated experimentally that three times faster convergence speed, one order of magnitude better tracking capability, and better BER performance are obtained in a polarization-division-multiplexed 16-quadrature-amplitude-modulation system. In addition, the influences of the algorithm parameters on the convergence and tracking performance are investigated by numerical simulation.
Evaluation of alternatives to sound barrier walls.
DOT National Transportation Integrated Search
2013-06-01
The existing INDOT's noise wall specification was developed primarily on the basis of knowledge of conventional precast concrete panel systems. Currently, the constructed cost of conventional noise walls is approximately $2 million per linear...
[Nitrogen status diagnosis of rice by using a digital camera].
Jia, Liang-Liang; Fan, Ming-Sheng; Zhang, Fu-Suo; Chen, Xin-Ping; Lü, Shi-Hua; Sun, Yan-Ming
2009-08-01
In the present research, a field experiment with different N application rates was conducted to study the possibility of using visible-band color analysis methods to monitor the N status of the rice canopy. The correlations between the visible-band color intensities of rice canopy images acquired from a digital camera and conventional nitrogen status diagnosis parameters (leaf SPAD chlorophyll meter readings, total N content, upland biomass, and N uptake) were studied. The results showed that the red color intensity (R), green color intensity (G), and normalized redness intensity (NRI) have significant inverse linear correlations with the conventional N diagnosis parameters of SPAD readings, total N content, upland biomass, and total N uptake. The correlation coefficient values (r) were from -0.561 to -0.714 for the red band (R), from -0.452 to -0.505 for the green band (G), and from -0.541 to -0.817 for the normalized redness intensity (NRI). By contrast, the normalized greenness intensity (NGI) showed a significant positive correlation with the conventional N parameters, with correlation coefficient values (r) from 0.505 to 0.559. Compared with SPAD readings, the normalized redness intensity (NRI), with absolute r values of 0.541-0.780 against the conventional N parameters, could better express the N status of rice. The digital image color analysis method shows potential for use in rice N status diagnosis in the future.
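For readers who want to reproduce this kind of analysis, the normalized indices are commonly defined as each channel's intensity divided by the sum of all three; the abstract does not spell out its exact formula, so that definition is an assumption here. A short Python sketch:

import numpy as np

def canopy_color_indices(img):
    """img: H x W x 3 RGB array (uint8 or float).
    Returns mean R, mean G, mean NRI, and mean NGI over the image,
    assuming NRI = R/(R+G+B) and NGI = G/(R+G+B) per pixel."""
    rgb = img.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9            # avoid division by zero on dark pixels
    return r.mean(), g.mean(), (r / total).mean(), (g / total).mean()

# Example with a random stand-in image
img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
R, G, nri, ngi = canopy_color_indices(img)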
Jayaratne, Yasas Shri Nalaka; Uribe, Flavio; Janakiraman, Nandakumar
2017-01-01
The objective of this systematic review was to compare the antero-posterior, vertical, and angular changes of maxillary incisors between conventional anchorage control techniques and mini-implant-based space closure methods. The electronic databases PubMed, Scopus, ISI Web of Knowledge, Cochrane Library, and Open Grey were searched for potentially eligible studies using a set of predetermined keywords. Full texts meeting the inclusion criteria, as well as their references, were manually searched. The primary outcome data (linear, angular, and vertical maxillary incisor changes) and secondary outcome data (overbite changes, soft tissue changes, biomechanical factors, root resorption, and treatment duration) were extracted from the selected articles and entered into spreadsheets based on the type of anchorage used. The methodological quality of each study was assessed. Six studies met the inclusion criteria. The amount of incisor retraction was greater with buccally placed mini-implants than with conventional anchorage techniques. Incisor retraction with indirect anchorage from palatal mini-implants was less than with buccally placed mini-implants. Incisor intrusion occurred with buccal mini-implants, whereas extrusion was seen with conventional anchorage. Limited data on biomechanical variables or adverse effects such as root resorption were reported in these studies. More RCTs that take into account relevant biomechanical variables and employ three-dimensional quantification of tooth movements are required to provide information on incisor changes during space closure.
Barkagan, Michael; Contreras-Valdes, Fernando M; Leshem, Eran; Buxton, Alfred E; Nakagawa, Hiroshi; Anter, Elad
2018-05-30
PV reconnection is often the result of catheter instability and tissue edema. High-power short-duration (HP-SD) ablation strategies have been shown to improve atrial linear continuity in acute pre-clinical models. This study compares the safety, efficacy and long-term durability of HP-SD ablation with conventional ablation. In 6 swine, 2 ablation lines were performed anterior and posterior to the crista terminalis, in the smooth and trabeculated right atrium, respectively; and the right superior PV was isolated. In 3 swine, ablation was performed using conventional parameters (THERMOCOOL-SMARTTOUCH® SF; 30W/30 sec) and in 3 other swine using HP-SD parameters (QDOT-MICRO™, 90W/4 sec). After 30 days, linear integrity was examined by voltage mapping and pacing, and the heart and surrounding tissues were examined by histopathology. Acute line integrity was achieved with both ablation strategies; however, HP-SD ablation required 80% less RF time compared with conventional ablation (P≤0.01 for all lines). Chronic line integrity was higher with HP-SD ablation: all 3 posterior lines were continuous and transmural compared to only 1 line created by conventional ablation. In the trabeculated tissue, HP-SD ablation lesions were wider and of similar depth with 1 of 3 lines being continuous compared to 0 of 3 using conventional ablation. Chronic PVI without stenosis was evident in both groups. There were no steam-pops. Pleural markings were present in both strategies, but parenchymal lung injury was only evident with conventional ablation. HP-SD ablation strategy results in improved linear continuity, shorter ablation time, and a safety profile comparable to conventional ablation. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Stepanova, Larisa; Bronnikov, Sergej
2018-03-01
The crack growth directional angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (the maximum tangential stress and minimum strain energy density criteria) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is an Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings. The simulation cell contains 400,000 atoms. The crack propagation direction angles under different values of the mixity parameter, ranging from pure tensile loading to pure shear loading, are obtained and analyzed over a wide range of temperatures (from 0.1 K to 800 K). It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with the crack propagation direction angles given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
Saving in cycles: how to get people to save more money.
Tam, Leona; Dholakia, Utpal
2014-02-01
Low personal savings rates are an important social issue in the United States. We propose and test one particular method to get people to save more money that is based on the cyclical time orientation. In contrast to conventional, popular methods that encourage individuals to ignore past mistakes, focus on the future, and set goals to save money, our proposed method frames the savings task in cyclical terms, emphasizing the present. Across the studies, individuals who used our proposed cyclical savings method, compared with individuals who used a linear savings method, provided an average of 74% higher savings estimates and saved an average of 78% more money. We also found that the cyclical savings method was more efficacious because it increased implementation planning and lowered future optimism regarding saving money.
Fujiwara, Yasuhiro; Maruyama, Hirotoshi; Toyomaru, Kanako; Nishizaka, Yuri; Fukamatsu, Masahiro
2018-06-01
Magnetic resonance imaging (MRI) is widely used to detect carotid atherosclerotic plaques. Although it is important to evaluate vulnerable carotid plaques containing lipids and intra-plaque hemorrhages (IPHs) using T1-weighted images, the image contrast changes depending on the imaging settings. Moreover, to distinguish between a thrombus and a hemorrhage, it is useful to evaluate the iron content of the plaque using both T1-weighted and T2*-weighted images. Therefore, a quantitative evaluation of carotid atherosclerotic plaques using T1 and T2* values may be necessary for the accurate evaluation of plaque components. The purpose of this study was to determine whether the multi-echo phase-sensitive inversion recovery (mPSIR) sequence can improve T1 contrast while simultaneously providing accurate T1 and T2* values of an IPH. T1 and T2* values measured using mPSIR were compared to values from conventional methods in phantom and in vivo studies. In the phantom study, the T1 and T2* values estimated using mPSIR were linearly correlated with those of conventional methods. In the in vivo study, mPSIR demonstrated higher T1 contrast between the IPH phantom and sternocleidomastoid muscle than the conventional method. Moreover, the T1 and T2* values of the blood vessel wall and sternocleidomastoid muscle estimated using mPSIR were correlated with values measured by conventional methods and with values reported previously. The mPSIR sequence improved T1 contrast while simultaneously providing accurate T1 and T2* values of the neck region. Although further study is required to evaluate the clinical utility, mPSIR may improve carotid atherosclerotic plaque detection and provide detailed information about plaque components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, W; Zaghian, M; Lim, G
2015-06-15
Purpose: The current practice for considering the relative biological effectiveness (RBE) of protons in intensity modulated proton therapy (IMPT) planning is to use a generic RBE value of 1.1. However, RBE is in fact a variable depending on the dose per fraction, the linear energy transfer (LET), tissue parameters, etc. In this study, we investigate the impact of variable RBE based optimization (vRBE-OPT) on IMPT dose distributions compared with conventional fixed RBE based optimization (fRBE-OPT). Methods: Proton plans of three head and neck cancer patients were included in our study. In order to calculate variable RBE, tissue-specific parameters were obtained from the literature and dose-averaged LET values were calculated by Monte Carlo simulations. Biological effects were calculated using the linear quadratic model and utilized in the variable RBE based optimization. We used a Polak-Ribiere conjugate gradient algorithm to solve the model. In fixed RBE based optimization, we used conventional physical dose optimization to optimize doses weighted by 1.1. IMPT plans for each patient were optimized by both methods (vRBE-OPT and fRBE-OPT). Both variable and fixed RBE weighted dose distributions were calculated for both methods and compared by dosimetric measures. Results: The variable RBE weighted dose distributions were more homogeneous within the targets than the fixed RBE weighted dose distributions for the plans created by vRBE-OPT. We observed noticeable deviations between variable and fixed RBE weighted dose distributions when the plans were optimized by fRBE-OPT. For organ-at-risk sparing, dose distributions from both methods were comparable. Conclusion: Biological dose based optimization, rather than conventional physical dose based optimization, in IMPT planning may improve tumor control when evaluating biologically equivalent dose, without sacrificing OAR sparing, for head and neck cancer patients. The research is supported in part by National Institutes of Health Grant No. 2U19CA021239-35.
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
NASA Astrophysics Data System (ADS)
Wollaber, Allan B.; Larsen, Edward W.
2011-02-01
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
Spread-Spectrum Beamforming and Clutter Filtering for Plane-Wave Color Doppler Imaging.
Mansour, Omar; Poepping, Tamie L; Lacefield, James C
2016-07-21
Plane-wave imaging is desirable for its ability to achieve high frame rates, allowing the capture of fast dynamic events and continuous Doppler data. In most implementations of plane-wave imaging, multiple low-resolution images from different plane wave tilt angles are compounded to form a single high-resolution image, thereby reducing the frame rate. Compounding improves the lateral beam profile in the high-resolution image, but it also acts as a low-pass filter in slow time that causes attenuation and aliasing of signals with high Doppler shifts. This paper introduces a spread-spectrum color Doppler imaging method that produces high-resolution images without the use of compounding, thereby eliminating the tradeoff between beam quality, maximum unaliased Doppler frequency, and frame rate. The method uses a long, random sequence of transmit angles rather than a linear sweep of plane wave directions. The random angle sequence randomizes the phase of off-focus (clutter) signals, thereby spreading the clutter power in the Doppler spectrum, while keeping the spectrum of the in-focus signal intact. The ensemble of randomly tilted low-resolution frames also acts as the Doppler ensemble, so it can be much longer than a conventional linear sweep, thereby improving beam formation while also making the slow-time Doppler sampling frequency equal to the pulse repetition frequency. Experiments performed using a carotid artery phantom with constant flow demonstrate that the spread-spectrum method more accurately measures the parabolic flow profile of the vessel and outperforms conventional plane-wave Doppler in both contrast resolution and estimation of high flow velocities. The spread-spectrum method is expected to be valuable for Doppler applications that require measurement of high velocities at high frame rates.
Comparison of Conventional and ANN Models for River Flow Forecasting
NASA Astrophysics Data System (ADS)
Jain, A.; Ganti, R.
2011-12-01
Hydrological models are useful in many water resources applications such as flood control, irrigation and drainage, hydropower generation, water supply, and erosion and sediment control. Estimates of runoff are needed in many water resources planning, design, development, operation, and maintenance activities. River flow is generally estimated using time series or rainfall-runoff models. Recently, soft artificial intelligence tools such as Artificial Neural Networks (ANNs) have become popular for research purposes but have not been extensively adopted in operational hydrological forecasting. There is a strong need to develop ANN models based on real catchment data and compare them with the conventional models. In this paper, a comparative study has been carried out for river flow forecasting using conventional and ANN models. Among the conventional models, multiple linear and nonlinear regression models and time series models of the autoregressive (AR) type have been developed. A feed-forward neural network structure trained using the back-propagation algorithm, a gradient search method, was adopted. Daily river flow data from the Godavari Basin at Polavaram, Andhra Pradesh, India were employed to develop all the models included here. Two inputs, the flows at the two past time steps Q(t-1) and Q(t-2), were selected using partial autocorrelation analysis for forecasting the flow at time t, Q(t). A wide range of error statistics have been used to evaluate the performance of all the models developed in this study. It has been found that the regression and AR models performed comparably, and the ANN model performed the best among all the models investigated in this study. It is concluded that the ANN model should be adopted in real catchments for hydrological modeling and forecasting.
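As a concrete baseline, the AR-type model with the two lagged inputs named above can be fit by ordinary least squares; an ANN would simply replace the linear map below with a trained nonlinear one. Illustrative Python with synthetic flows (not the Godavari data):

import numpy as np

def fit_ar2(q):
    """Least-squares AR(2): Q(t) ~ a*Q(t-1) + b*Q(t-2) + c,
    a minimal stand-in for the study's regression/AR baseline."""
    X = np.column_stack([q[1:-1], q[:-2], np.ones(q.size - 2)])
    y = q[2:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic daily flows just to exercise the code
rng = np.random.default_rng(0)
q = 100.0 + np.cumsum(rng.normal(0.0, 5.0, 1000)).clip(-80.0, None)
a, b, c = fit_ar2(q)
pred = a * q[1:-1] + b * q[:-2] + c
rmse = np.sqrt(np.mean((pred - q[2:])**2))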
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in the conventional moment closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative, wherein the number of computer operations increases only linearly with the number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
NASA Astrophysics Data System (ADS)
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at a relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.
Space shuttle nonmetallic materials age life prediction
NASA Technical Reports Server (NTRS)
Mendenhall, G. D.; Hassell, J. A.; Nathan, R. A.
1975-01-01
The chemiluminescence from samples of polybutadiene, Viton, Teflon, Silicone, PL 731 Adhesive, and SP 296 Boron-Epoxy composite was measured at temperatures from 25 to 150 C. Excellent correlations were obtained between chemiluminescence and temperature. These correlations serve to validate accelerated aging tests (at elevated temperatures) designed to predict service life at lower temperatures. In most cases, smooth or linear correlations were obtained between chemiluminescence and physical properties of purified polymer gums, including the tensile strength, viscosity, and loss tangent. The latter is a complex function of certain polymer properties. Data were obtained with far greater ease by the chemiluminescence technique than by the conventional methods of study. The chemiluminescence from the Teflon (Halon) samples was discovered to arise from trace amounts of impurities, which were undetectable by conventional, destructive analysis of the sample.
Rodrigues, Nils; Weiskopf, Daniel
2018-01-01
Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.
Where Does the Ordered Line Come From? Evidence From a Culture of Papua New Guinea.
Cooperrider, Kensy; Marghetis, Tyler; Núñez, Rafael
2017-05-01
Number lines, calendars, and measuring sticks all represent order along some dimension (e.g., magnitude) as position on a line. In high-literacy, industrialized societies, this principle of spatial organization, linear order, is a fixture of visual culture and everyday cognition. But what are the principle's origins, and how did it become such a fixture? Three studies investigated intuitions about linear order in the Yupno, members of a culture of Papua New Guinea that lacks conventional representations involving ordered lines, and in U.S. undergraduates. Presented with cards representing differing sizes and numerosities, both groups arranged them using linear order or sometimes spatial grouping, a competing principle. But whereas the U.S. participants produced ordered lines in all tasks, strongly favoring a left-to-right format, the Yupno produced them less consistently, and with variable orientations. Conventional linear representations are thus not necessary to spark the intuition of linear order, which may have other experiential sources, but they nonetheless regiment when and how the principle is used.
Voltage regulation in linear induction accelerators
Parsons, William M.
1992-01-01
Improvement in voltage regulation in a Linear Induction Accelerator wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance.
Dynamic single sideband modulation for realizing parametric loudspeaker
NASA Astrophysics Data System (ADS)
Sakai, Shinichi; Kamakura, Tomoo
2008-06-01
A parametric loudspeaker, which exhibits remarkably narrow directivity compared with a conventional loudspeaker, is newly produced and examined. To operate the loudspeaker optimally, we digitally prototyped a single-sideband modulator based on the Weaver method together with appropriate signal processing. The processing techniques are to change the carrier amplitude dynamically depending on the envelope of the audio signal, and then to apply the square root or fourth root to the carrier amplitude to improve input-output acoustic linearity. The usefulness of the present modulation scheme has been verified experimentally.
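To make the dynamic-carrier idea concrete, the sketch below generates a single sideband by the phasing (Hilbert-transform) method, standing in for the paper's Weaver modulator, and scales the reinserted carrier by the square root of the audio envelope. Carrier frequency, sample rate, and all names are illustrative assumptions, not the paper's values.

import numpy as np
from scipy.signal import hilbert

def dynamic_ssb(audio, fs, fc=40000.0):
    """Upper-sideband signal plus an envelope-tracking carrier."""
    analytic = hilbert(audio)
    env = np.abs(analytic)
    carrier_amp = np.sqrt(env / (env.max() + 1e-12))   # square-root processing
    t = np.arange(audio.size) / fs
    c, s = np.cos(2 * np.pi * fc * t), np.sin(2 * np.pi * fc * t)
    usb = audio * c - np.imag(analytic) * s            # phasing-method SSB
    return carrier_amp * c + usb

fs = 192000                                            # well above 2*fc
t = np.arange(fs) / fs
tx = dynamic_ssb(0.5 * np.sin(2 * np.pi * 1000 * t), fs)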
Zheng, Wenming; Lin, Zhouchen; Wang, Haixian
2014-04-01
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply amounts to solving a convex programming problem, and a closed-form solution to this problem is guaranteed. Moreover, we generalize the L1-LDA method to nonlinear robust feature extraction problems via the kernel trick, yielding the proposed L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed methods in comparison with state-of-the-art methods.
High-frequency Pulse-compression Ultrasound Imaging with an Annular Array
NASA Astrophysics Data System (ADS)
Mamou, J.; Ketterling, J. A.; Silverman, R. H.
High-frequency ultrasound (HFU) allows fine-resolution imaging at the expense of limited depth-of-field (DOF) and shallow acoustic penetration depth. Coded-excitation imaging permits a significant increase in the signal-to-noise ratio (SNR) and therefore, the acoustic penetration depth. A 17-MHz, five-element annular array with a focal length of 31 mm and a total aperture of 10 mm was fabricated using a 25-μm thick piezopolymer membrane. An optimized 8-μs linear chirp spanning 6.5-32 MHz was used to excite the transducer. After data acquisition, the received signals were linearly filtered by a compression filter and synthetically focused. To compare the chirp-array imaging method with conventional impulse imaging in terms of resolution, a 25-μm wire was scanned and the -6-dB axial and lateral resolutions were computed at depths ranging from 20.5 to 40.5 mm. A tissue-mimicking phantom containing 10-μm glass beads was scanned, and backscattered signals were analyzed to evaluate SNR and penetration depth. Finally, ex-vivo ophthalmic images were formed and chirp-coded images showed features that were not visible in conventional impulse images.
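The core of pulse-compression imaging is that a long coded transmit (here a linear chirp) is compressed to a short, high-energy pulse by a matched filter on receive, recovering axial resolution while boosting SNR. A minimal Python sketch using the paper's chirp parameters; the echo, sampling rate, and filter details are our own simplifications:

import numpy as np
from scipy.signal import chirp, fftconvolve

fs = 200e6                                  # sample rate, well above 2 x 32 MHz
t = np.arange(0.0, 8e-6, 1.0 / fs)          # 8-us excitation window
tx = chirp(t, f0=6.5e6, t1=8e-6, f1=32e6)   # 6.5-32 MHz linear chirp

# Synthetic received trace: one weak echo between quiet intervals
echo = np.concatenate([np.zeros(3000), 0.05 * tx, np.zeros(3000)])
compressed = fftconvolve(echo, tx[::-1], mode="same")   # compression (matched) filter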
Leapfrog variants of iterative methods for linear algebra equations
NASA Technical Reports Server (NTRS)
Saylor, Paul E.
1988-01-01
Two iterative methods are considered: Richardson's method and a general second-order method. For both methods, a variant is derived for which only even-numbered iterates are computed; the variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms for computing the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
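For Richardson's method x_{k+1} = x_k + ω(b - A x_k), two consecutive steps combine into x_{k+2} = x_k + ω(2I - ωA) r_k with r_k = b - A x_k, so only even-numbered iterates need ever be formed. A small Python sketch of this leapfrog update (our own illustration, not the paper's algorithm):

import numpy as np

def richardson_leapfrog(A, b, omega, n_pairs, x0=None):
    """Advance Richardson iteration two steps at a time, forming only
    even-numbered iterates."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(n_pairs):
        r = b - A @ x
        x = x + 2.0 * omega * r - omega**2 * (A @ r)   # x_{k+2} from x_k
    return x

# SPD test system; omega must lie in (0, 2/lambda_max) for convergence
rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50.0 * np.eye(50)
b = rng.normal(size=50)
x = richardson_leapfrog(A, b, omega=1.0 / np.linalg.eigvalsh(A)[-1], n_pairs=200)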
A Highly Linear and Wide Input Range Four-Quadrant CMOS Analog Multiplier Using Active Feedback
NASA Astrophysics Data System (ADS)
Huang, Zhangcai; Jiang, Minglu; Inoue, Yasuaki
Analog multipliers are one of the most important building blocks in analog signal processing circuits. The performance with high linearity and wide input range is usually required for analog four-quadrant multipliers in most applications. Therefore, a highly linear and wide input range four-quadrant CMOS analog multiplier using active feedback is proposed in this paper. Firstly, a novel configuration of four-quadrant multiplier cell is presented. Its input dynamic range and linearity are improved significantly by adding two resistors compared with the conventional structure. Then based on the proposed multiplier cell configuration, a four-quadrant CMOS analog multiplier with active feedback technique is implemented by two operational amplifiers. Because of both the proposed multiplier cell and active feedback technique, the proposed multiplier achieves a much wider input range with higher linearity than conventional structures. The proposed multiplier was fabricated by a 0.6 µm CMOS process. Experimental results show that the input range of the proposed multiplier can be up to 5.6 Vpp with 0.159% linearity error on VX and 4.8 Vpp with 0.51% linearity error on VY for ±2.5 V power supply voltages, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cashmore, Jason, E-mail: Jason.cashmore@uhb.nhs.uk; Ramtohul, Mark; Ford, Dan
Purpose: Intensity modulated radiotherapy (IMRT) has been linked with an increased risk of secondary cancer induction due to the extra leakage radiation associated with delivery of these techniques. Removal of the flattening filter offers a simple way of reducing head leakage, and it may be possible to generate equivalent IMRT plans and to deliver these on a standard linear accelerator operating in unflattened mode. Methods and Materials: An Elekta Precise linear accelerator has been commissioned to operate in both conventional and unflattened modes (energy matched at 6 MV) and a direct comparison made between the treatment planning and delivery of pediatric intracranial treatments using both approaches. These plans have been evaluated and delivered to an anthropomorphic phantom. Results: Plans generated in unflattened mode are clinically identical to those for conventional IMRT but can be delivered with greatly reduced leakage radiation. Measurements in an anthropomorphic phantom at clinically relevant positions including the thyroid, lung, ovaries, and testes show an average reduction in peripheral doses of 23.7%, 29.9%, 64.9%, and 70.0%, respectively, for identical plan delivery compared to conventional IMRT. Conclusions: IMRT delivery in unflattened mode removes an unwanted and unnecessary source of scatter from the treatment head and lowers leakage doses by up to 70%, thereby reducing the risk of radiation-induced second cancers. Removal of the flattening filter is recommended for IMRT treatments.
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, Longxiao; Gu, Hanming
2018-03-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximate expressions are concise and convenient to use, they have certain limitations: their application requires that the difference in elastic parameters between the upper and lower media be small and that the incident angle be small, and the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor series expansion to linearize the inversion problem. Through joint AVO inversion of the seismic data in the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not require certain assumptions and can estimate more parameters simultaneously, giving it better applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented at a small computational cost. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results demonstrate the validity and noise robustness of our method. We also apply the inversion to actual field data and demonstrate the feasibility of our method in a real setting.
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum- variance filters. In that they do not require statistical models of noise, the neural- network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
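For contrast with the neural-network filter described above, the standard linear Kalman recursion that it generalizes is shown below; every quantity here (F, H, Q, R) embodies exactly the linear-model and Gaussian-noise assumptions the text says the method avoids. A minimal Python sketch:

import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard linear Kalman filter."""
    x = F @ x                        # predict state with linear model F
    P = F @ P @ F.T + Q              # predict covariance (Q: process noise)
    S = H @ P @ H.T + R              # innovation covariance (R: measurement noise)
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # correct with measurement z
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P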
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
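The blocking idea is easy to state in code: store each matrix as a dictionary of dense multiatom blocks and let every nonzero block pair go through an optimized dense product (BLAS, via NumPy here), instead of touching elements one at a time. An illustrative sketch, with the block size and storage scheme as our own assumptions:

import numpy as np

def block_sparse_matmul(A_blocks, B_blocks, n_blocks, bs):
    """Multiply block-sparse matrices stored as {(i, k): dense bs x bs block}.
    Each small dense product dispatches to BLAS, which is where the speed-up
    over element-by-element sparse kernels comes from."""
    C_blocks = {}
    for (i, k), A in A_blocks.items():
        for j in range(n_blocks):
            B = B_blocks.get((k, j))
            if B is not None:
                C = C_blocks.setdefault((i, j), np.zeros((bs, bs)))
                C += A @ B                       # dense BLAS call per block pair
    return C_blocks

# Two random block-sparse matrices with ~25% of blocks occupied
rng = np.random.default_rng(0)
n_blocks, bs = 8, 64                             # ~64 basis functions per block
A_blocks = {(i, k): rng.normal(size=(bs, bs))
            for i in range(n_blocks) for k in range(n_blocks) if rng.random() < 0.25}
B_blocks = {(k, j): rng.normal(size=(bs, bs))
            for k in range(n_blocks) for j in range(n_blocks) if rng.random() < 0.25}
C_blocks = block_sparse_matmul(A_blocks, B_blocks, n_blocks, bs)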
Real-time explosive particle detection using a cyclone particle concentrator.
Hashimoto, Yuichiro; Nagano, Hisashi; Takada, Yasuaki; Kashima, Hideo; Sugaya, Masakazu; Terada, Koichi; Sakairi, Minoru
2014-06-30
There is a need for more rapid methods for the detection of explosive particles. We have developed a novel real-time analysis technique for explosive particles that uses a cyclone particle concentrator. This technique can analyze sample surfaces for the presence of particles from explosives such as TNT and RDX within 3 s, which is much faster than is possible by conventional methods. Particles are detached from the sample surface with air jet pulses, and then introduced into a cyclone particle concentrator with a high pumping speed of about 80 L/min. A vaporizer placed at the bottom of the cyclone particle concentrator immediately converts the particles into a vapor. The vapor is then ionized in the atmospheric pressure chemical ionization (APCI) source of a linear ion trap mass spectrometer. An online connection between the vaporizer and a mass spectrometer enables high-speed detection within a few seconds, compared with the conventional off-line heating method that takes more than 10 s to raise the temperature of a sample filter unit. Since the configuration enriched the number density of explosive particles by about 80 times compared with that without the concentrator, a sub-ng amount of TNT particles on a surface was detectable. The detection limit of our technique is comparable with that of an explosives trace detector using ion mobility spectrometry. The technique will be beneficial for trace detection in security applications, because it detects explosive particles on the surface more speedily than conventional methods. Copyright © 2014 John Wiley & Sons, Ltd.
An Extension of the Time-Spectral Method to Overset Solvers
NASA Technical Reports Server (NTRS)
Leffell, Joshua Isaac; Murman, Scott M.; Pulliam, Thomas
2013-01-01
Relative motion in the Cartesian or overset framework causes certain spatial nodes to move in and out of the physical domain as they are dynamically blanked by moving solid bodies. This poses a problem for the conventional Time-Spectral approach, which expands the solution at every spatial node into a Fourier series spanning the period of motion. The proposed extension to the Time-Spectral method treats unblanked nodes in the conventional manner but expands the solution at dynamically blanked nodes in a basis of barycentric rational polynomials spanning partitions of contiguously defined temporal intervals. Rational polynomials avoid Runge's phenomenon on the equidistant time samples of these sub-periodic intervals. Fourier- and rational polynomial-based differentiation operators are used in tandem to provide a consistent hybrid Time-Spectral overset scheme capable of handling relative motion. The hybrid scheme is tested with a linear model problem and implemented within NASA's OVERFLOW Reynolds-averaged Navier- Stokes (RANS) solver. The hybrid Time-Spectral solver is then applied to inviscid and turbulent RANS cases of plunging and pitching airfoils and compared to time-accurate and experimental data. A limiter was applied in the turbulent case to avoid undershoots in the undamped turbulent eddy viscosity while maintaining accuracy. The hybrid scheme matches the performance of the conventional Time-Spectral method and converges to the time-accurate results with increased temporal resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Xiaoyao; Hall, Randall W.; Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
NASA Technical Reports Server (NTRS)
Klein, L. R.
1974-01-01
The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis. The continuity conditions were enforced by means of Lagrange Multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad-hoc methods, and experimental results.
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem was revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV) originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results have shown that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
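As an illustration of the distortionless constraint discussed above, a minimal numpy sketch of MVDR focusing weights, w = R^{-1}a / (a^H R^{-1}a), is given below; the array geometry, frequency, free-field steering model, and identity covariance are assumptions of the sketch, not the paper's configuration.

```python
import numpy as np

def steering_vector(src_pos, focus, k):
    # free-field monopole transfer from each loudspeaker to the focal point
    r = np.linalg.norm(src_pos - focus, axis=1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

f, c = 1000.0, 343.0                 # illustrative frequency and speed of sound
k = 2 * np.pi * f / c
src = np.stack([np.linspace(-0.75, 0.75, 16), np.zeros(16), np.zeros(16)], axis=1)
focus = np.array([0.0, 1.0, 0.0])    # focal point 1 m in front of the array

a = steering_vector(src, focus, k)
R = np.eye(len(a))                   # field covariance; identity for simplicity
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a.conj() @ Ri_a)         # MVDR: min w^H R w subject to w^H a = 1

print("distortionless response at focus:", np.abs(w.conj() @ a))  # exactly 1
```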
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving inverse problems can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
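The central computational trick, replacing the dense solve in each Levenberg-Marquardt step with a Krylov-subspace method, can be sketched on a toy curve fit; the sketch below uses SciPy's LSQR for the damped least-squares subproblem and omits the authors' recycling of the subspace across damping parameters.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_step(J, r, damping):
    """Solve min ||J d + r||^2 + damping * ||d||^2 with LSQR (a Krylov method)
    instead of forming and factoring the dense normal equations."""
    return lsqr(J, -r, damp=np.sqrt(damping))[0]

# toy nonlinear least squares: fit y = exp(a x) + b (illustrative stand-in)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.exp(0.7 * x) + 0.3 + 0.01 * rng.standard_normal(x.size)

p, lam = np.array([0.0, 0.0]), 1e-2          # parameters [a, b], damping
for _ in range(20):
    r = np.exp(p[0] * x) + p[1] - y          # residual
    J = np.column_stack([x * np.exp(p[0] * x), np.ones_like(x)])  # Jacobian
    p = p + lm_step(J, r, lam)
print("estimated [a, b]:", p.round(3))       # close to [0.7, 0.3]
```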
Dong, Ji-Zhou; Moldoveanu, Serban C
2004-02-20
An improved gas chromatography-mass spectrometry (GC-MS) method was described for the analysis of carbonyl compounds in cigarette mainstream smoke (CMS) after 2,4-dinitrophenylhydrazine (DNPH) derivatization. Besides formaldehyde, acetaldehyde, acetone, acrolein, propionaldehyde, methyl ethyl ketone, butyraldehyde, and crotonaldehyde that are routinely analyzed in cigarette smoke, this technique separates and allows the analysis of several C4, C5 and C6 isomeric carbonyl compounds. Differentiation could be made between the linear and branched carbon chain components. In cigarette smoke, the branched chain carbonyls are found at higher levels than the linear chain carbonyls. Also, several trace carbonyl compounds such as methoxyacetaldehyde were found for the first time in cigarette smoke. For the analysis, cigarette smoke was collected using DNPH-treated pads, which is a simpler procedure compared to conventional impinger collection. Thermal decomposition of DNPH-carbonyl compounds was minimized by the optimization of the GC conditions. The linear range of the method was significantly improved by using a standard mixture of DNPH-carbonyl compounds instead of individual compounds for calibration. The minimum detectable quantity for the carbonyls ranged from 1.4 to 5.6 microg/cigarette.
Novel crystal timing calibration method based on total variation
NASA Astrophysics Data System (ADS)
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem, and to robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, for various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
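The underlying formulation, a linear calibration system with a total-variation penalty, can be sketched as follows; the smoothed TV penalty, step size, and toy data are assumptions for illustration, and the merge component of the actual TV merge method is not reproduced.

```python
import numpy as np

def tv_solve(A, b, lam=0.5, eps=1e-6, lr=0.02, iters=5000):
    """Minimize ||A x - b||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps),
    a smoothed 1D total-variation-regularized least squares, by gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = 2.0 * A.T @ (A @ x - b)          # data-term gradient
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)         # gradient of the smoothed TV term
        g[:-1] -= lam * w
        g[1:] += lam * w
        x -= lr * g
    return x

# toy problem: recover piecewise-constant offsets from noisy linear measurements
rng = np.random.default_rng(1)
x_true = np.repeat([0.0, 2.0, -1.0], 20)
A = rng.standard_normal((120, x_true.size)) / 10.0
b = A @ x_true + 0.01 * rng.standard_normal(120)
x_hat = tv_solve(A, b)
print("rms error:", np.sqrt(np.mean((x_hat - x_true) ** 2)).round(4))
```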
Monsen, T; Ryden, P
2017-09-01
Urinary tract infections (UTIs) are among the most common bacterial infections in humans, and urine culture is the gold standard for diagnosis. Considering the high prevalence of culture-negative specimens, any method that identifies such specimens is of interest. The aim was to evaluate a new screening concept for flow cytometry analysis (FCA). The outcomes were evaluated against urine culture, uropathogen species, and three conventional screening methods. A prospective, consecutive study examined 1,312 urine specimens collected during January and February 2012. The specimens were analyzed using the Sysmex UF1000i FCA. Based on the FCA data, culture-negative specimens were identified with a new model using linear discriminant analysis (FCA-LDA). In total, 1,312 patients were included; in- and outpatients represented 19.6% and 79.4%, respectively, and 68.3% of the specimens originated from women. Of the 610 culture-positive specimens, Escherichia coli represented 64%, enterococci 8%, and Klebsiella spp. 7%. Screening with FCA-LDA at 95% sensitivity identified 42% (552/1312) of specimens as culture negative when UTI was defined according to European guidelines. In conclusion, the new FCA-LDA screening method was superior or similar in comparison to the three conventional screening methods. We recommend the proposed screening method for clinical use to exclude culture-negative specimens and thereby reduce workload, costs, and turnaround time. In addition, the FCA data may add information that enhances handling and supports diagnosis of patients with suspected UTI pending urine culture.
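The screening logic can be sketched in a few lines: fit a linear discriminant to labeled specimens, then set the score cut-off at the value preserving a target sensitivity (95% here) for culture-positives. The synthetic two-channel data below merely stand in for FCA outputs and are not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# synthetic stand-ins for two flow-cytometry channels (e.g. bacteria, WBC counts)
rng = np.random.default_rng(0)
X_neg = rng.lognormal(2.0, 1.0, size=(700, 2))      # culture-negative specimens
X_pos = rng.lognormal(4.0, 1.0, size=(600, 2))      # culture-positive specimens
X = np.log(np.vstack([X_neg, X_pos]))
y = np.r_[np.zeros(700), np.ones(600)]

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)

thr = np.percentile(scores[y == 1], 5)              # keep 95% sensitivity
sens = np.mean(scores[y == 1] >= thr)
excluded = np.mean(scores[y == 0] < thr)            # culture-negatives screened out
print(f"sensitivity: {sens:.2f}, culture-negatives excluded: {excluded:.2%}")
```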
Ultrafast Ultrasound Imaging of Ocular Anatomy and Blood Flow
Urs, Raksha; Ketterling, Jeffrey A.; Silverman, Ronald H.
2016-01-01
Purpose: Ophthalmic ultrasound imaging is currently performed with mechanically scanned single-element probes. These probes have limited capabilities overall and lack the ability to image blood flow. Linear-array systems are able to detect blood flow, but these systems exceed ophthalmic acoustic intensity safety guidelines. Our aim was to implement and evaluate a new linear-array–based technology, compound coherent plane-wave ultrasound, which offers ultrafast imaging and depiction of blood flow at safe acoustic intensity levels. Methods: We compared acoustic intensity generated by a 128-element, 18-MHz linear array operated in conventionally focused and plane-wave modes and characterized signal-to-noise ratio (SNR) and lateral resolution. We developed plane-wave B-mode, real-time color-flow, and high-resolution depiction of slow flow in postprocessed data collected continuously at a rate of 20,000 frames/s. We acquired in vivo images of the posterior pole of the eye by compounding plane-wave images acquired over ±10° and produced images depicting orbital and choroidal blood flow. Results: With the array operated conventionally, Doppler modes exceeded Food and Drug Administration safety guidelines, but plane-wave modalities were well within guidelines. Plane-wave data allowed generation of high-quality compound B-mode images, with SNR increasing with the number of compounded frames. Real-time color-flow Doppler readily visualized orbital blood flow. Postprocessing of continuously acquired data blocks of 1.6-second duration allowed high-resolution depiction of orbital and choroidal flow over the cardiac cycle. Conclusions: Newly developed high-frequency linear arrays in combination with plane-wave techniques present opportunities for the evaluation of ocular anatomy and blood flow, as well as visualization and analysis of other transient phenomena such as vessel wall motion over the cardiac cycle and saccade-induced vitreous motion. PMID:27428169
NASA Astrophysics Data System (ADS)
Laubscher, Markus; Bourquin, Stéphane; Froehly, Luc; Karamata, Boris; Lasser, Theo
2004-07-01
Current spectroscopic optical coherence tomography (OCT) methods rely on a posteriori numerical calculation. We present an experimental alternative for accessing spectroscopic information in OCT without post-processing based on wavelength de-multiplexing and parallel detection using a diffraction grating and a smart pixel detector array. Both a conventional A-scan with high axial resolution and the spectrally resolved measurement are acquired simultaneously. A proof-of-principle demonstration is given on a dynamically changing absorbing sample. The method's potential for fast spectroscopic OCT imaging is discussed. The spectral measurements obtained with this approach are insensitive to scan non-linearities or sample movements.
[Clinical application of biofragmentable anastomosis ring for intestinal anastomosis].
Ye, Feng; Lin, Jian-jiang
2006-11-01
To compare the efficacy of the biofragmentable anastomosis ring (BAR) with conventional hand-sutured and stapling techniques, and to evaluate the safety and applicability of the BAR in intestinal anastomosis. A total of 498 patients who underwent intestinal anastomosis from January 2000 to November 2005 were allocated to a BAR group (n=186), a hand-sutured group (n=177), or a linear cutter group (n=135). The operative time, postoperative convalescence, and corresponding complications were recorded. Postoperative anastomotic inflammation and anastomotic stenosis were assessed during half-year to one-year follow-up of 436 patients. The operative time was (102 +/- 16) min in the BAR group, (121 +/- 15) min in the hand-sutured group, and (105 +/- 18) min in the linear cutter group; the difference was statistically significant (P < 0.05), the operative time in the BAR and linear cutter groups being shorter than in the hand-sutured group. One case of anastomotic leakage was noted in the BAR group, one in the hand-sutured group, and none in the linear cutter group; all were cured by conservative methods. One case of anastomotic obstruction occurred in the BAR group and one in the hand-sutured group; both were cured by conservative methods. Two cases of anastomotic obstruction occurred in the hand-sutured group, one of which required reoperation to remove the obstruction. In the BAR, hand-sutured, and linear cutter groups, the postoperative time to first flatus was (67.2 +/- 4.6) h, (70.2 +/- 5.8) h, and (69.2 +/- 6.2) h, respectively; no significant differences were observed among the three groups (P > 0.05). The rate of postoperative anastomotic inflammation was 3.0% (5/164) in the BAR group, 47.8% (76/159) in the hand-sutured group, and 7.1% (8/113) in the linear cutter group; the difference was statistically significant (P < 0.05), with less anastomotic inflammation in the BAR and linear cutter groups than in the hand-sutured group. The BAR provides a rapid, safe, and effective method for intestinal anastomosis, produces less anastomotic inflammatory reaction than the hand-sutured technique, and should be considered equal to manual and stapler methods.
NASA Technical Reports Server (NTRS)
Miller, James G.
1994-01-01
In this Progress Report, we describe our continuing research activities concerning the development and implementation of advanced ultrasonic nondestructive evaluation methods applied to the inspection and characterization of complex composite structures. We explore the feasibility of implementing medical linear array imaging technology as a viable ultrasonic-based nondestructive evaluation method to inspect and characterize complex materials. As an initial step toward the application of linear array imaging technology to the interrogation of a wide range of complex composite structures, we present images obtained using an unmodified medical ultrasonic imaging system of two epoxy-bonded aluminum plate specimens, each with intentionally disbonded regions. These images are compared with corresponding conventional ultrasonic contact transducer measurements in order to assess whether these images can detect disbonded regions and provide information regarding the nature of the disbonded region. We present a description of a standoff/delay fixture which has been designed, constructed, and implemented on a Hewlett-Packard SONOS 1500 medical imaging system. This standoff/delay fixture, when attached to a 7.5 MHz linear array probe, greatly enhances our ability to interrogate flat plate specimens. The final section of this Progress Report describes a woven composite plate specimen that has been specially machined to include intentional flaws. This woven composite specimen will allow us to assess the feasibility of applying linear array imaging technology to the inspection and characterization of complex textile composite materials. We anticipate the results of this on-going investigation may provide a step toward the development of a rapid, real-time, and portable method of ultrasonic inspection and characterization based on linear array technology.
A Q-Ising model application for linear-time image segmentation
NASA Astrophysics Data System (ADS)
Bentrem, Frank W.
2010-10-01
A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback in using Potts model methods for image segmentation is that, with conventional methods, the segmentation requires exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article describes an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and runs in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
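For orientation, a generic sketch of Potts-model segmentation by greedy energy minimization (iterated conditional modes) is given below; each sweep costs time linear in the number of pixels, but this illustrates only the Potts energy, not the specific four-model construction described in the article.

```python
import numpy as np

def potts_segment(img, Q=4, beta=2.0, sweeps=5):
    """Greedy (ICM) minimization of a Q-state Potts energy:
    E = sum_i (img_i - mu_{s_i})^2 - beta * sum_<ij> [s_i == s_j]."""
    mu = np.quantile(img, (np.arange(Q) + 0.5) / Q)   # class gray levels
    s = np.abs(img[..., None] - mu).argmin(-1)        # init: nearest class
    H, W = img.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nb = [s[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                      if 0 <= x < H and 0 <= y < W]
                e = [(img[i, j] - mu[q]) ** 2 - beta * sum(n == q for n in nb)
                     for q in range(Q)]
                s[i, j] = int(np.argmin(e))
    return s

rng = np.random.default_rng(0)
img = np.kron(np.array([[0.0, 0.33], [0.66, 1.0]]), np.ones((16, 16)))
labels = potts_segment(np.clip(img + 0.08 * rng.standard_normal(img.shape), 0, 1))
print(np.unique(labels, return_counts=True))          # four roughly equal regions
```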
Faraji, Hakim; Helalizadeh, Masoumeh; Kordi, Mohammad Reza
2018-01-01
A rapid, simple, and sensitive approach to the analysis of trihalomethanes (THMs) in swimming pool water samples has been developed. The main goal of this study was to overcome or to improve the shortcomings of conventional dispersive liquid-liquid microextraction (DLLME) and to maximize the realization of green analytical chemistry principles. The method involves a simple vortex-assisted microextraction step, in the absence of the dispersive solvent, followed by salting-out effect for the elimination of the centrifugation step. A bell-shaped device and a solidifiable solvent were used to simplify the extraction solvent collection after phase separation. Optimization of the independent variables was performed by using chemometric methods in three steps. The method was statistically validated based on authentic guidance documents. The completion time for extraction was less than 8 min, and the limits of detection were in the range between 4 and 72 ng/L. Using this method, good linearity and precision were achieved. The results of THMs determination in different real samples showed that in some cases the concentration of total THMs was more than threshold values of THMs determined by accredited healthcare organizations. This method indicated satisfactory analytical figures of merit. Graphical Abstract: A novel green microextraction technique for overcoming the challenges of conventional DLLME. The proposed procedure complies with the principles of green/sustainable analytical chemistry, comprising decreasing the sample size, making easy automation of the process, reducing organic waste, diminishing energy consumption, replacing toxic reagents with safer reagents, and enhancing operator safety.
Camacho-Mauries, Daniel; Rodriguez-Díaz, José Luis; Salgado-Nesme, Noel; González, Quintín H; Vergara-Fernández, Omar
2013-02-01
The use of temporary stomas has been demonstrated to reduce septic complications, especially in high-risk anastomoses; therefore, it is necessary to reduce the number of complications secondary to ostomy takedowns, namely wound infection, anastomotic leaks, and intestinal obstruction. To compare the rates of superficial wound infection and patient satisfaction after pursestring closure of the ostomy wound vs conventional linear closure. Patients undergoing colostomy or ileostomy closure between January 2010 and February 2011 were randomly assigned to linear closure (n = 30) or pursestring closure (n = 31) of their ostomy wound. Wound infection within 30 days of surgery was defined as the presence of purulent discharge, pain, erythema, warmth, or positive culture for bacteria. Patient satisfaction, healing time, difficulty managing the wound, and limitation of activities were analyzed with the Likert questionnaire. The infection rate for the control group was 36.6% (n = 11) vs 0% in the pursestring closure group (p < 0.0001). Healing time was 5.9 weeks in the linear closure group and 3.8 weeks in the pursestring group (p = 0.0002). Seventy percent of the patients with pursestring closure were very satisfied in comparison with 20% in the other group (p = 0.0001). This study was limited by the heterogeneity in the type of stoma in both groups. The pursestring method resulted in no wound infections after ostomy closure, together with shorter healing time and improved patient satisfaction.
Robinson, Nicholas P
2013-01-01
Branched DNA molecules are generated by the essential processes of replication and recombination. Owing to their distinctive extended shapes, these intermediates migrate differently from linear double-stranded DNA under certain electrophoretic conditions. However, these branched species exist in the cell at much lower abundance than the bulk linear DNA. Consequently, branched molecules cannot be visualized by conventional electrophoresis and ethidium bromide staining. Two-dimensional native-native agarose electrophoresis has therefore been developed as a method to facilitate the separation and visualization of branched replication and recombination intermediates. A wide variety of studies have employed this technique to examine branched molecules in eukaryotic, archaeal, and bacterial cells, providing valuable insights into how DNA is duplicated and repaired in all three domains of life.
Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter
2014-12-29
Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data prior to or after filtered-backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm successfully reveals relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at considerably lower dose.
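The Perona-Malik filter itself is a classic edge-preserving diffusion and is easy to state; the sketch below uses illustrative parameters, not those of the phase-contrast study.

```python
import numpy as np

def perona_malik(img, iters=50, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion: the conductance g(s) = 1/(1 + (s/kappa)^2)
    suppresses smoothing across strong gradients (edges); lam <= 0.25 for stability."""
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(iters):
        dN = np.roll(u, 1, 0) - u          # differences to the four neighbors
        dS = np.roll(u, -1, 0) - u         # (np.roll implies periodic borders,
        dE = np.roll(u, -1, 1) - u         #  acceptable for this sketch)
        dW = np.roll(u, 1, 1) - u
        u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

rng = np.random.default_rng(0)
phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0
noisy = phantom + 0.15 * rng.standard_normal(phantom.shape)
smooth = perona_malik(noisy)
print("background std before/after:",
      noisy[phantom == 0].std().round(3), smooth[phantom == 0].std().round(3))
```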
The numerical dynamic for highly nonlinear partial differential equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1992-01-01
Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.
Voltage regulation in linear induction accelerators
Parsons, W.M.
1992-12-29
Improvement in voltage regulation in a linear induction accelerator wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core is disclosed. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance. 4 figs.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p. 10). In earlier NMR-diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, whereas in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
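For orientation, here is a sketch of the conventional fixed-time-step Monte Carlo simulation that such methods improve upon, checked against the analytic free-diffusion attenuation S(t) = exp(-gamma^2 G^2 D t^3 / 3) for a constant gradient; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 2.675e8            # 1H gyromagnetic ratio [rad/s/T]
G = 10e-3                  # constant gradient [T/m]
D = 2e-9                   # free diffusion coefficient [m^2/s]
t, steps, M = 20e-3, 2000, 20000
dt = t / steps

x = np.zeros(M)            # spin positions along the gradient
phi = np.zeros(M)          # accumulated precession phase
for _ in range(steps):
    x += rng.normal(0.0, np.sqrt(2 * D * dt), M)   # random-walk step
    phi += gamma * G * x * dt                       # phase pickup in the gradient

S_mc = np.cos(phi).mean()
S_th = np.exp(-gamma**2 * G**2 * D * t**3 / 3)
print(f"Monte Carlo: {S_mc:.4f}, analytic: {S_th:.4f}")   # both ~0.96
```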
Large angle solid state position sensitive x-ray detector system
Kurtz, D.S.; Ruud, C.O.
1998-03-03
A method and apparatus for x-ray measurement of certain properties of a solid material are disclosed. In distinction to known methods and apparatus, this invention employs a specific fiber-optic bundle configuration, termed a reorganizer, itself known for other uses, for coherently transmitting visible light originating from the scintillation of diffracted x-radiation from the solid material gathered along a substantially one dimensional linear arc, to a two-dimensional photo-sensor array. The two-dimensional photodetector array, with its many closely packed light sensitive pixels, is employed to process the information contained in the diffracted radiation and present the information in the form of a conventional x-ray diffraction spectrum. By this arrangement, the angular range of the combined detector faces may be increased without loss of angular resolution. Further, the prohibitively expensive coupling together of a large number of individual linear diode photodetectors, which would be required to process signals generated by the diffracted radiation, is avoided. 7 figs.
Large angle solid state position sensitive x-ray detector system
Kurtz, D.S.; Ruud, C.O.
1998-07-21
A method and apparatus are disclosed for x-ray measurement of certain properties of a solid material. In distinction to known methods and apparatus, this invention employs a specific fiber-optic bundle configuration, termed a reorganizer, itself known for other uses, for coherently transmitting visible light originating from the scintillation of diffracted x-radiation from the solid material gathered along a substantially one dimensional linear arc, to a two-dimensional photo-sensor array. The two-dimensional photodetector array, with its many closely packed light sensitive pixels, is employed to process the information contained in the diffracted radiation and present the information in the form of a conventional x-ray diffraction spectrum. By this arrangement, the angular range of the combined detector faces may be increased without loss of angular resolution. Further, the prohibitively expensive coupling together of a large number of individual linear diode photodetectors, which would be required to process signals generated by the diffracted radiation, is avoided. 7 figs.
Second-order non-linear optical studies on CdS microcrystallite-doped alkali borosilicate glasses
NASA Astrophysics Data System (ADS)
Liu, Hao; Liu, Qiming; Wang, Mingliang; Zhao, Xiujian
2007-05-01
CdS microcrystal-doped alkali borosilicate glasses were prepared by a conventional fusion and heat-treatment method. Using the Maker fringe method, second-harmonic generation (SHG) was observed from the CdS-doped glasses both before and after thermal/electrical poling. Because the polarization axes of the CdS crystals formed in the samples are randomly oriented, or because the generated SH waves interfere insufficiently, the fringe patterns obtained from samples without poling treatment showed no fine structure. For the poled samples, a larger SH intensity was obtained than from the samples without any poling treatment. It was considered that an increased amount of hexagonal CdS in the anode surface layer, caused by the applied dc field, increased the SH intensity. The second-order non-linearity χ(2) was estimated to be 1.23 pm/V for the sample poled with 2.5 kV at 360 °C for 30 min.
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detection. Barker codes are the ones that satisfy these requirements, and they have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much longer codes. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long binary phased codes with low sidelobes (code length >500) at reasonable computational cost.
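The flavor of the approach, seeding the search with a binarized linear-chirp (P3-like) phase and then perturbing, can be sketched as below; the single-bit perturbation rule and all constants are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def psl(code):
    """Peak sidelobe level of the aperiodic autocorrelation."""
    c = np.correlate(code, code, mode="full")
    return np.abs(np.delete(c, len(code) - 1)).max()

rng = np.random.default_rng(0)
N = 256
phase = np.pi * np.arange(N) ** 2 / N                # linear-chirp (P3-like) phase
code = np.where(np.cos(phase) >= 0, 1.0, -1.0)       # quantize phase to binary

best = psl(code)
for _ in range(5000):                                # greedy single-bit flips
    i = rng.integers(N)
    code[i] = -code[i]
    s = psl(code)
    if s <= best:
        best = s                                     # keep improving/equal flips
    else:
        code[i] = -code[i]                           # revert worsening flips
print(f"N = {N}, peak sidelobe = {best:.0f} ({20 * np.log10(best / N):.1f} dB)")
```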
Analysis of Instabilities in Non-Axisymmetric Hypersonic Boundary Layers Over Cones
NASA Technical Reports Server (NTRS)
Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; White, Jeffery A.
2010-01-01
Hypersonic flows over circular cones constitute one of the most important generic configurations for fundamental aerodynamic and aerothermodynamic studies. In this paper, numerical computations are carried out for Mach 6 flows over a 7-degree half-angle cone at two different flow incidence angles and over a compression cone with large concave curvature. Instability wave and transition-related flow physics are investigated using a series of advanced stability methods, ranging from conventional linear stability theory (LST) and higher-fidelity linear and nonlinear parabolized stability equations (PSE) to 2D eigenvalue analysis based on partial differential equations. The computed N-factor distributions pertinent to various instability mechanisms over the cone surface provide initial assessments of possible transition fronts and a guide to the corresponding disturbance characteristics such as frequency and azimuthal wave number. It is also shown that the strong secondary instability that eventually leads to transition to turbulence can be simulated very efficiently using a combination of the advanced stability methods described above.
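The N-factor bookkeeping underlying such transition assessments is simple to state: N(x) is the integrated spatial growth rate of an instability mode, and transition onset is often estimated where N first reaches an empirical threshold (the e^N method). A sketch with a purely synthetic growth-rate curve:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# N(x) = integral from x0 to x of max(-alpha_i, 0) dx, where -alpha_i is the
# local spatial growth rate of one mode; the curve below is synthetic.
x = np.linspace(0.0, 1.0, 400)                      # surface arc length [m]
alpha_i = -40.0 * np.exp(-((x - 0.5) / 0.2) ** 2)   # amplified band near x = 0.5
N = cumulative_trapezoid(np.maximum(-alpha_i, 0.0), x, initial=0.0)
i_tr = np.searchsorted(N, 9.0)                      # e^9 criterion, illustrative
print(f"max N = {N[-1]:.1f}; N = 9 first reached at x = {x[i_tr]:.2f} m")
```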
Spectral embedding finds meaningful (relevant) structure in image and microarray data
Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L
2006-01-01
Background: Accurate methods for extraction of meaningful patterns in high dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameter(s) fitting to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied on the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results: We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian, that minimizes the requirements for such parameter optimization for two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning parameter fitting is simple and is a general, rather than data type or experiment specific approach, for the two datasets analyzed here. Tuning parameter optimization is minimized in the DR step to each subsequent classification method, enabling the possibility of valid cross-experiment comparisons. Conclusion: Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359
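A minimal sketch of such a Laplacian-based spectral embedding follows; the kernel bandwidth and spiral test data are illustrative assumptions, and the normalization of the published method may differ in detail.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh

def spectral_embed(X, eps, dims=2):
    """Embed rows of X using eigenvectors of the random-walk-normalized graph
    Laplacian (diffusion-maps style); eps sets the Gaussian kernel bandwidth."""
    W = np.exp(-squareform(pdist(X)) ** 2 / eps)
    D = W.sum(axis=1)
    S = W / np.sqrt(np.outer(D, D))        # symmetric conjugate of D^-1 W
    vals, vecs = eigh(S)
    psi = vecs / np.sqrt(D)[:, None]       # right eigenvectors of D^-1 W
    order = np.argsort(vals)[::-1]
    return psi[:, order[1:dims + 1]]       # drop the trivial constant mode

# noisy 1D manifold (spiral) in 3D; the embedding should recover the ordering
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 300))
X = np.stack([np.cos(t), np.sin(t), 0.2 * t], 1) + 0.02 * rng.standard_normal((300, 3))
Y = spectral_embed(X, eps=0.1)
ranks = np.argsort(np.argsort(Y[:, 0]))
print(f"ordering recovered: |corr| = {abs(np.corrcoef(t, ranks)[0, 1]):.2f}")
```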
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
Yanamandra, Ramesh; Vadla, Chandra Sekhar; Puppala, Umamaheshwar; Patro, Balaram; Murthy, Yellajyosula L. N.; Ramaiah, Parimi Atchuta
2012-01-01
A new rapid, simple, sensitive, selective and accurate reversed-phase stability-indicating Ultra Performance Liquid Chromatography (RP-UPLC) technique was developed for the assay of Tolterodine Tartrate in pharmaceutical dosage form, human plasma and urine samples. The developed UPLC method is superior in technology to conventional HPLC with respect to speed, solvent consumption, resolution and cost of analysis. Chromatographic run time was 6 min in reversed-phase mode and ultraviolet detection was carried out at 220 nm for quantification. Efficient separation was achieved for all the degradants of Tolterodine Tartrate on a BEH C18 sub-2-μm Acquity UPLC column using trifluoroacetic acid and acetonitrile as organic solvent in a linear gradient program. The active pharmaceutical ingredient was extracted from the tablet dosage form using a mixture of acetonitrile and water as diluent. The calibration graphs were linear and the method showed excellent recoveries for bulk and tablet dosage form. The test solution was found to be stable for 40 days when stored in the refrigerator between 2 and 8 °C. The developed UPLC method was validated and meets the requirements delineated by the International Conference on Harmonization (ICH) guidelines with respect to linearity, accuracy, precision, specificity and robustness. The intra-day and inter-day variation was found to be less than 1%. The method was reproducible and selective for the estimation of Tolterodine Tartrate. Because the method could effectively separate the drug from its degradation products, it can be employed as a stability-indicating one. PMID:22396907
A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.
Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining
2017-04-21
Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Because of characteristics peculiar to pseudolite positioning systems, namely that the geometry of the stationary pseudolites is invariant, that the indoor signal is easily interrupted, and that the first-order linear truncation error cannot be ignored, a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well-suited to indoor pseudolite positioning. Considering the very low computational efficiency of the conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and the variance of a least-squares adjustment is examined to ensure the reliability of the resolved ambiguity. Several experiments, including static and kinematic tests, were conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of the AFM and has a more elaborate search ability than the conventional grid search method. For an indoor pseudolite system with an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the corrected ambiguities obtained from the proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.
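The key property of the AFM is that integer carrier-phase ambiguities cancel inside a cosine of the phase residuals, so candidate coordinates can be scored directly. The sketch below evaluates that cost over a coarse planar grid for a simulated pseudolite geometry; the geometry, noise levels, and grid are illustrative, and the IPSO search and least-squares validation steps are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.19                                   # carrier wavelength [m], illustrative
plites = np.array([[0, 0, 3], [10, 0, 3], [0, 10, 3], [10, 10, 3], [5, 5, 4]], float)
x_true = np.array([4.2, 6.7, 1.0])

rho = np.linalg.norm(plites - x_true, axis=1)
phase = rho / lam + rng.integers(-50, 50, len(plites))   # unknown integer cycles
phase += 0.002 * rng.standard_normal(len(plites))        # phase noise [cycles]

def afm_cost(x):
    resid = phase - np.linalg.norm(plites - x, axis=1) / lam
    return np.sum(np.cos(2 * np.pi * resid))  # maximal when residuals are integral

# coarse grid search in the horizontal plane (height fixed for brevity)
xs = ys = np.linspace(3.0, 8.0, 201)
best = max((afm_cost(np.array([x, y, 1.0])), x, y) for x in xs for y in ys)
print(f"AFM peak {best[0]:.2f} at ({best[1]:.3f}, {best[2]:.3f}); truth (4.200, 6.700)")
```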
Investigation of melamine-derived quaternary ammonium salt as a potential shale inhibitor
NASA Astrophysics Data System (ADS)
Yu, Hongjiang; Hu, Weimin; Guo, Gang; Huang, Lei; Li, Lili; Gu, Xuefan; Zhang, Zhifang; Zhang, Jie; Chen, Gang
2017-06-01
Melamine, sodium chloroacetate, and sodium hydroxide were used as raw materials to synthesize a neutral quaternary ammonium salt (NQAS) as a potential clay-swelling inhibitor and water-based drilling fluid additive, and the reaction conditions were screened based on the linear expansion rate of bentonite. The inhibitive properties of the NQASs were investigated by various methods, including montmorillonite (MMT) linear expansion tests, mud ball immersion tests, particle distribution measurement, thermogravimetric analysis, and scanning electron microscopy. The results indicate that NQAS can effectively inhibit the expansion and dispersion of clay in water. Under the same conditions, the bentonite linear expansion rate in NQAS-6 solution is much lower than in the others, and the hydration expansion of the mud ball in 0.5% NQAS-6 solution is appreciably weaker than in the control test. A compatibility test indicates that NQAS-6 is compatible with the conventional additives in water-based drilling fluids, and the temperature resistance of modified starch was improved effectively. The inhibition mechanism is also discussed based on the particle distribution measurements.
Quantitative evaluation of phonetograms in the case of functional dysphonia.
Airainer, R; Klingholz, F
1993-06-01
Based on clinical laryngeal findings, values on a scale were assigned to vocally trained and vocally untrained persons suffering from different types of functional dysphonia. The different types of dysphonia, from manifest hypofunctional to extreme hyperfunctional dysphonia, were classified by means of this scale. In addition, the subjects' phonetograms were measured and approximated by three ellipses, which made it possible to define phonetogram parameters. Selected phonetogram parameters were combined into linear combinations serving the purpose of phonetographic evaluation, the linear combinations being chosen to bring the phonetographic and clinical evaluations into correspondence as accurately as possible. Different kinds of linear combinations were necessary for male and female singers and nonsingers. As a result of the reclassification of 71 patients and the new classification of 89 patients, it was possible to grade the types of functional dysphonia by means of computer-aided phonetogram evaluation with a clinically acceptable error rate. This method proved to be an important supplement to the conventional diagnostics of functional dysphonia.
Design and characterization of a linear Hencken-type burner
NASA Astrophysics Data System (ADS)
Campbell, M. F.; Bohlin, G. A.; Schrader, P. E.; Bambha, R. P.; Kliewer, C. J.; Johansson, K. O.; Michelsen, H. A.
2016-11-01
We have designed and constructed a Hencken-type burner that produces a 38-mm-long linear laminar partially premixed co-flow diffusion flame. This burner was designed to produce a linear flame for studies of soot chemistry, combining the benefit of the conventional Hencken burner's laminar flames with the advantage of the slot burner's geometry for optical measurements requiring a long interaction distance. It is suitable for measurements using optical imaging diagnostics, line-of-sight optical techniques, or off-axis optical-scattering methods requiring either a long or short path length through the flame. This paper presents details of the design and operation of this new burner. We also provide characterization information for flames produced by this burner, including relative flow-field velocities obtained using hot-wire anemometry, temperatures along the centerline extracted using direct one-dimensional coherent Raman imaging, soot volume fractions along the centerline obtained using laser-induced incandescence and laser extinction, and transmission electron microscopy images of soot thermophoretically sampled from the flame.
Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures
NASA Astrophysics Data System (ADS)
Li, Quanbao; Wei, Fajie; Zhou, Shenghan
2017-05-01
The linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used approaches to feature extraction usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied and cannot always be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA has the advantages of both parametric and nonparametric algorithms and higher classification accuracy. The quartic unilateral kernel function may provide more robust predictions than other kernel functions. LKNDA offers an alternative solution for discriminant problems involving complex nonlinear feature extraction or unknown feature structure. Finally, an application of LKNDA to the complex feature extraction of financial market activity is presented.
Holakooie, Mohammad Hosein; Ojaghi, Mansour; Taheri, Asghar
2016-01-01
This paper investigates sensorless indirect field-oriented control (IFOC) of a single-sided linear induction motor (SLIM) with a full-order Luenberger observer. The dynamic equations of the SLIM are first elaborated to derive the full-order Luenberger observer under some simplifying assumptions. The observer gain matrix is derived by a conventional procedure so that the observer poles are proportional to the SLIM poles, ensuring the stability of the system over a wide range of linear speeds. The performance of the observer depends significantly on the adaptive scheme. A fuzzy logic control (FLC) is proposed as the adaptive scheme to estimate linear speed from a speed tuning signal. The parameters of the FLC are tuned off-line using a chaotic optimization algorithm (COA). The performance of the proposed observer is verified by both numerical simulation and real-time hardware-in-the-loop (HIL) implementation. Moreover, a detailed comparative study of the proposed and other speed observers is presented under different operating conditions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Reconstructing Information in Large-Scale Structure via Logarithmic Mapping
NASA Astrophysics Data System (ADS)
Szapudi, Istvan
We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse errorbar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including the DETF Figure of Merit when applicable) the efficiency of our estimators, comparing with the conventional method that uses the untransformed field. Preliminary results indicate that the increase for NASA's WFIRST in the DETF Figure of Merit would be 1.5-4.2 using a range of pessimistic to optimistic assumptions, respectively.
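The mapping itself is easy to demonstrate: apply A = log(1 + delta) to a lognormal overdensity field and measure two-point statistics on A rather than on delta. The toy below does this for a 2D field; the field construction and crude radial binning are assumptions for illustration and make no claim about the Fisher-information results cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)
amp = np.zeros_like(k)
amp[k > 0] = k[k > 0] ** -1.25                 # sqrt of an assumed power law
g = np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal((n, n)))).real
g *= 0.8 / g.std()                             # moderately non-linear amplitude
delta = np.exp(g - g.var() / 2) - 1.0          # lognormal overdensity, mean ~ 0

def radial_pk(field):
    p = np.abs(np.fft.fft2(field)) ** 2 / field.size
    kbin = np.rint(k * n).astype(int)
    return np.bincount(kbin.ravel(), p.ravel()) / np.maximum(np.bincount(kbin.ravel()), 1)

p_raw = radial_pk(delta)
p_log = radial_pk(np.log1p(delta))             # the Gaussianizing transform
for i in (5, 20, 50):
    print(f"k-bin {i}: P_log/P_raw = {p_log[i] / p_raw[i]:.3f}")
```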
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, helping to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows source and path effects to be separated. Generalized inversion (Field and Jacob, 1995) is one of the alternative methods for estimating local seismic response, and involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (genetic algorithms and simulated annealing), which allowed us to increase the range of explored solutions in a nonlinear search compared to conventional linear methods. Using VEOX Network velocity records collected from August 2007 to March 2009, source, path, and site parameters corresponding to the S-wave amplitude spectra of the velocity records are estimated. The parameters resulting from this simultaneous inversion show excellent agreement, not only in terms of the fit between observed and calculated spectra, but also when compared to previous work by several authors.
Characterizing iron deposition in multiple sclerosis lesions using susceptibility weighted imaging
Haacke, E. Mark; Makki, Malek; Ge, Yulin; Maheshwari, Megha; Sehgal, Vivek; Hu, Jiani; Selvan, Madeswaran; Wu, Zhen; Latif, Zahid; Xuan, Yang; Khan, Omar; Garbern, James; Grossman, Robert I.
2009-01-01
Purpose To investigate whether the variable forms of putative iron deposition seen with susceptibility weighted imaging (SWI) lead to a set of multiple sclerosis (MS) lesion characteristics different from that seen in conventional MR imaging. Materials and Methods Twenty-seven clinically definite MS patients underwent brain scans using magnetic resonance imaging, including pre- and post-contrast T1-weighted, T2-weighted, FLAIR, and SWI at 1.5T, 3T and 4T. MS lesions were identified separately in each imaging sequence. Lesions identified in SWI were re-evaluated for their iron content using the SWI filtered phase images. Results A variety of new lesion characteristics were identified by SWI, and these were classified into six types. A total of 75 lesions were seen only with conventional imaging, 143 only with SWI and 204 by both. From the iron quantification measurements, a moderate linear correlation between signal intensity and iron content (phase) was established. Conclusion The amount of iron deposition in the brain may serve as a surrogate biomarker for different MS lesion characteristics. SWI showed many lesions missed by conventional methods and six different lesion characteristics. SWI was particularly effective at recognizing the presence of iron in MS lesions and in the basal ganglia and pulvinar thalamus. PMID:19243035
The iso-response method: measuring neuronal stimulus integration with closed-loop experiments
Gollisch, Tim; Herz, Andreas V. M.
2012-01-01
Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments. PMID:23267315
Design of sewage treatment system by applying fuzzy adaptive PID controller
NASA Astrophysics Data System (ADS)
Jin, Liang-Ping; Li, Hong-Chan
2013-03-01
In a sewage treatment system, dissolved oxygen concentration control is nonlinear, time-varying, subject to large time delays and uncertainty, and therefore difficult to describe with an exact mathematical model. A conventional PID controller works well only in a nearly linear regime close to its operating point, and control becomes difficult when the system moves far from that point. In order to solve these problems, this paper proposes a method that combines fuzzy control with PID control and designs a fuzzy adaptive PID controller based on an S7-300 PLC. It employs fuzzy inference to achieve online tuning of the PID parameters. Simulation and practical application of the control algorithm show that the system has stronger robustness and better adaptability.
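A toy simulation of the scheme's core loop, assuming a first-order-plus-delay stand-in for the dissolved-oxygen dynamics and an invented two-by-two fuzzy rule table; the real controller runs on an S7-300 PLC with a tuned rule base, so everything below is illustrative.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_gain_tuning(e, de):
    """Very small Mamdani-style tuner: memberships of |e| and |de| pick
    increments for Kp and Kd (rule table invented for illustration)."""
    small, large = tri(abs(e), -0.5, 0.0, 0.5), tri(abs(e), 0.3, 1.0, 1.7)
    calm, fast = tri(abs(de), -0.5, 0.0, 0.5), tri(abs(de), 0.3, 1.0, 1.7)
    # Rules: large error -> raise Kp; fast change -> raise Kd. Defuzzify by a
    # weighted average of the rule consequents.
    w = np.array([small * calm, small * fast, large * calm, large * fast])
    dkp = np.dot(w, [0.0, -0.1, 0.4, 0.2]) / (w.sum() + 1e-12)
    dkd = np.dot(w, [0.0, 0.3, 0.0, 0.3]) / (w.sum() + 1e-12)
    return dkp, dkd

# First-order-plus-delay stand-in for dissolved-oxygen dynamics (an assumption).
dt, tau, delay_steps = 0.1, 5.0, 10
setpoint, y, integ, e_prev = 1.0, 0.0, 0.0, 0.0
kp, ki, kd = 2.0, 0.1, 0.5
u_hist = [0.0] * delay_steps

for _ in range(600):
    e = setpoint - y
    de = (e - e_prev) / dt
    dkp, dkd = fuzzy_gain_tuning(e, de)
    integ += e * dt
    u = (kp + dkp) * e + ki * integ + (kd + dkd) * de
    u_hist.append(u)
    y += dt / tau * (-y + u_hist.pop(0))   # delayed first-order plant response
    e_prev = e
```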
QUEST+: A general multidimensional Bayesian adaptive psychometric method.
Watson, Andrew B
2017-03-01
QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
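A stripped-down sketch of the Bayesian updating at the heart of such procedures, reduced to a single threshold parameter and a logistic psychometric function. Full QUEST+ keeps a joint posterior over many parameters and places each trial by minimizing expected posterior entropy; this toy instead places trials at the posterior mean, QUEST-style. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Grid over the single parameter of interest here: the threshold (dB).
thresholds = np.linspace(-10, 10, 201)
posterior = np.ones_like(thresholds) / thresholds.size   # flat prior

def p_correct(stim, thresh, slope=1.0, guess=0.5, lapse=0.02):
    """Logistic psychometric function (QUEST+ supports arbitrary forms)."""
    p = 1.0 / (1.0 + np.exp(-slope * (stim - thresh)))
    return guess + (1 - guess - lapse) * p

true_threshold = 2.5
for trial in range(64):
    stim = np.dot(thresholds, posterior)          # test at the posterior mean
    outcome = rng.random() < p_correct(stim, true_threshold)
    likelihood = p_correct(stim, thresholds)      # vectorized over the grid
    posterior *= likelihood if outcome else (1.0 - likelihood)
    posterior /= posterior.sum()                  # Bayes update, renormalized

estimate = np.dot(thresholds, posterior)
```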
A method for reducing the order of nonlinear dynamic systems
NASA Astrophysics Data System (ADS)
Masri, S. F.; Miller, R. K.; Sassi, H.; Caughey, T. K.
1984-06-01
An approximate method is presented for reducing the order of discrete multidegree-of-freedom dynamic systems that possess arbitrary nonlinear characteristics. The method combines conventional condensation techniques for linear systems with nonparametric identification of the generalized nonlinear restoring forces of the reduced-order model. The utility of the proposed method is demonstrated by considering a redundant three-dimensional finite-element model, half of whose elements incorporate hysteretic properties. A nonlinear reduced-order model, of one-third the order of the original model, is developed on the basis of wideband stationary random excitation, and the validity of the reduced-order model is subsequently demonstrated by its ability to predict with adequate accuracy the transient response of the original nonlinear model under a different nonstationary random excitation.
Gradient stationary phase optimized selectivity liquid chromatography with conventional columns.
Chen, Kai; Lynen, Frédéric; Szucs, Roman; Hanna-Brown, Melissa; Sandra, Pat
2013-05-21
Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique for optimizing the selectivity of a given separation. By combining different stationary phases, SOSLC offers excellent possibilities for method development under both isocratic and gradient conditions. The commercial SOSLC protocol available so far utilizes dedicated column cartridges and corresponding cartridge holders to build up the combined column of different stationary phases. The present work aims at developing and extending the gradient SOSLC approach towards coupling conventional columns. Generic tubing was used to connect short commercially available LC columns. Fast baseline separation of a mixture of 12 compounds comprising phenones, benzoic acids and hydroxybenzoates, under both isocratic and linear gradient conditions, was selected to demonstrate the potential of SOSLC. The influence of the connecting tubing on the deviation of predictions is also discussed.
Comparison of SeaWinds Backscatter Imaging Algorithms
Long, David G.
2017-01-01
This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs of conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB (fDIB) and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including computation in both linear and dB space. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
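A minimal sketch of the conventional ‘drop in the bucket’ baseline on synthetic measurements, with averaging done in linear power space before conversion back to dB (one of the options the paper compares). Grid resolution and the synthetic pass are illustrative assumptions.

```python
import numpy as np

def drop_in_the_bucket(lats, lons, sigma0_db, grid_res=0.25):
    """Average all sigma-0 samples whose centers fall in each lat/lon cell.
    Averaging is done in linear power space, then converted back to dB."""
    lat_idx = np.floor((lats + 90.0) / grid_res).astype(int)
    lon_idx = np.floor((lons + 180.0) / grid_res).astype(int)
    n_lat, n_lon = int(180 / grid_res), int(360 / grid_res)
    accum = np.zeros((n_lat, n_lon))
    count = np.zeros((n_lat, n_lon))
    power = 10.0 ** (sigma0_db / 10.0)              # dB -> linear power
    np.add.at(accum, (lat_idx, lon_idx), power)
    np.add.at(count, (lat_idx, lon_idx), 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        image_db = 10.0 * np.log10(accum / count)   # empty cells become NaN
    return image_db

# Synthetic stand-in for several passes of scatterometer samples.
rng = np.random.default_rng(4)
lats = rng.uniform(10.0, 12.0, 5000)
lons = rng.uniform(40.0, 42.0, 5000)
sigma0 = -12.0 + rng.normal(scale=0.5, size=5000)
image = drop_in_the_bucket(lats, lons, sigma0)
```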
SAFT-assisted sound beam focusing using phased arrays (PA-SAFT) for non-destructive evaluation
NASA Astrophysics Data System (ADS)
Nanekar, Paritosh; Kumar, Anish; Jayakumar, T.
2015-04-01
Focusing of sound has always been a subject of interest in ultrasonic non-destructive evaluation. An integrated approach to sound beam focusing using phased array and synthetic aperture focusing technique (PA-SAFT) has been developed in the authors' laboratory. The approach involves SAFT processing on ultrasonic B-scan image collected by a linear array transducer using a divergent sound beam. The objective is to achieve sound beam focusing using fewer elements than the ones required using conventional phased array. The effectiveness of the approach is demonstrated on aluminium blocks with artificial flaws and steel plate samples with embedded volumetric weld flaws, such as slag and clustered porosities. The results obtained by the PA-SAFT approach are found to be comparable to those obtained by conventional phased array and full matrix capture - total focusing method approaches.
Yoon, Ki Young; Park, Chul Woo; Byeon, Jeong Hoon; Hwang, Jungho
2010-03-01
We proposed a rapid method to estimate the efficacies of air controlling devices in situ using ATP bioluminescence in combination with an inertial impactor. The inertial impactor was designed to have a cutoff diameter of 1 μm, and its performance was estimated analytically, numerically, and experimentally. The proposed method was characterized using Staphylococcus epidermidis, which was aerosolized with a nebulizer. The bioaerosol concentrations were estimated within 25 min using the proposed method, without the culturing process that requires several days for colony formation. A linear relationship was obtained between the results of the proposed ATP method (RLU/m³) and the conventional culture-based method (CFU/m³), with R² = 0.9283. The proposed method was applied to estimate the concentration of indoor bioaerosols, identified as a mixture of various microbial species including bacteria, fungi, and actinomycetes, in an occupational indoor environment controlled by mechanical ventilation and an air cleaner. Consequently, the proposed method showed linearity with the culture-based method for indoor bioaerosols with R² = 0.8189, even though various kinds of microorganisms existed in the indoor air. The proposed method may be effective in monitoring changes in the relative concentration of indoor bioaerosols and estimating the effectiveness of air control devices in indoor environments.
Gajjar, Ketan; Ahmadzai, Abdullah A.; Valasoulis, George; Trevisan, Júlio; Founta, Christina; Nasioutziki, Maria; Loufopoulos, Aristotelis; Kyrgiou, Maria; Stasinou, Sofia Melina; Karakitsos, Petros; Paraskevaidis, Evangelos; Da Gama-Rose, Bianca; Martin-Hirsch, Pierre L.; Martin, Francis L.
2014-01-01
Background Subjective visual assessment of cervical cytology is flawed, and this can manifest as inter- and intra-observer variability, ultimately resulting in discordance between the grading categorisation of samples in screening and that of representative histology. Biospectroscopy methods have been suggested as sensor-based tools that can deliver objective assessments of cytology. However, studies to date have apparently been flawed by a corresponding lack of diagnostic efficiency when samples have previously been classed using cytology screening. This raises the question as to whether categorisation of cervical cytology based on imperfect conventional screening reduces the diagnostic accuracy of biospectroscopy approaches; are these latter methods more accurate at diagnosing underlying disease? The purpose of this study was to compare the objective accuracy of infrared (IR) spectroscopy of cervical cytology samples using conventional cytology vs. histology-based categorisation. Methods Within a typical clinical setting, a total of n = 322 liquid-based cytology samples were collected immediately before biopsy. Of these, it was possible to acquire subsequent histology for n = 154. Cytology samples were categorised according to conventional screening methods and subsequently interrogated employing attenuated total reflection Fourier-transform IR (ATR-FTIR) spectroscopy. IR spectra were pre-processed and analysed using linear discriminant analysis. Dunn’s test was applied to identify the differences in spectra. Within the diagnostic categories, histology allowed us to determine the comparative efficiency of conventional screening vs. biospectroscopy to correctly identify either true atypia or underlying disease. Results Conventional cytology-based screening results in poor sensitivity and specificity. IR spectra derived from cervical cytology do not appear to discriminate in a diagnostic fashion when categories are based on conventional screening. Scores plots of IR spectra exhibit marked crossover of spectral points between different cytological categories. Although significant differences between spectral bands in different categories are noted, crossover samples point to the potential for poor specificity and hamper the development of biospectroscopy as a diagnostic tool. However, when histology-based categories are used to conduct analyses, the scores plots of IR spectra exhibit markedly better segregation. Conclusions Histology demonstrates that ATR-FTIR spectroscopy of liquid-based cytology identifies the presence of underlying atypia or disease missed in conventional cytology screening. This study points to an urgent need for a future biospectroscopy study in which categories are based on such histology, allowing validation of this approach as a screening tool. PMID:24404130
Multiview Locally Linear Embedding for Effective Medical Image Retrieval
Shen, Hualei; Tao, Dacheng; Ma, Dianfu
2013-01-01
Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to alleviate the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from the different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
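For contrast with the multiview approach, the sketch below shows the feature-concatenation baseline the paper criticizes, plus a naive fixed-weight combination of per-view LLE embeddings. MLLE itself learns the weights and aligns local patches globally; the fixed weights and the random placeholder feature blocks (standing in for per-image SIFT/LBP/histogram descriptors) are purely illustrative.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Placeholder features for a set of images (random stand-ins for SIFT,
# LBP, and intensity-histogram descriptors).
rng = np.random.default_rng(5)
n_images = 300
feat_sift = rng.normal(size=(n_images, 128))
feat_lbp = rng.normal(size=(n_images, 59))
feat_hist = rng.normal(size=(n_images, 32))

# Conventional baseline: concatenate all features into one long vector,
# then reduce with single-view LLE.
concatenated = np.hstack([feat_sift, feat_lbp, feat_hist])
lle = LocallyLinearEmbedding(n_neighbors=20, n_components=8)
embedding = lle.fit_transform(concatenated)

# A crude nod to the multiview idea: embed each view separately and combine
# the per-view embeddings with fixed weights (MLLE learns these instead).
weights = [0.5, 0.3, 0.2]
views = [feat_sift, feat_lbp, feat_hist]
per_view = [LocallyLinearEmbedding(n_neighbors=20, n_components=8).fit_transform(v)
            for v in views]
multiview_embedding = sum(w * e for w, e in zip(weights, per_view))
```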
N-Way FRET Microscopy of Multiple Protein-Protein Interactions in Live Cells
Hoppe, Adam D.; Scott, Brandon L.; Welliver, Timothy P.; Straight, Samuel W.; Swanson, Joel A.
2013-01-01
Fluorescence Resonance Energy Transfer (FRET) microscopy has emerged as a powerful tool to visualize nanoscale protein-protein interactions while capturing their microscale organization and millisecond dynamics. Recently, FRET microscopy was extended to imaging of multiple donor-acceptor pairs, thereby enabling visualization of multiple biochemical events within a single living cell. These methods require numerous equations that must be defined on a case-by-case basis. Here, we present a universal multispectral microscopy method (N-Way FRET) to enable quantitative imaging for any number of interacting and non-interacting FRET pairs. This approach redefines linear unmixing to incorporate the excitation and emission couplings created by FRET, which cannot be accounted for in conventional linear unmixing. Experiments on a three-fluorophore system using blue, yellow and red fluorescent proteins validate the method in living cells. In addition, we propose a simple linear algebra scheme for error propagation from input data to estimate the uncertainty in the computed FRET images. We demonstrate the strength of this approach by monitoring the oligomerization of three FP-tagged HIV Gag proteins whose tight association in the viral capsid is readily observed. Replacement of one FP-Gag molecule with a lipid raft-targeted FP allowed direct observation of Gag oligomerization with no association between FP-Gag and raft-targeted FP. The N-Way FRET method provides a new toolbox for capturing multiple molecular processes with high spatial and temporal resolution in living cells. PMID:23762252
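The sketch below shows conventional per-pixel linear unmixing on a synthetic three-fluorophore spectrum; the reference spectra are made-up Gaussians standing in for calibration data. The point is the baseline that N-Way FRET generalizes: plain unmixing, as here, cannot represent the excitation/emission couplings that energy transfer adds to the mixing matrix.

```python
import numpy as np

# Columns of A are reference emission spectra of the three fluorophores
# measured alone (blue, yellow, red FPs); synthetic Gaussians used here.
wavelengths = np.linspace(450, 650, 64)

def band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

A = np.stack([band(480, 20), band(530, 25), band(610, 30)], axis=1)

# Conventional linear unmixing: solve A x = y for per-pixel contributions.
true_x = np.array([0.2, 0.5, 0.3])
y = A @ true_x + np.random.default_rng(6).normal(scale=0.01, size=wavelengths.size)
x_hat, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
```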
NASA Technical Reports Server (NTRS)
Miller, James G.
1995-01-01
In this Progress Report, the author describes continuing research to explore the feasibility of implementing medical linear array imaging technology as a viable ultrasonic-based nondestructive evaluation method to inspect and characterize complex materials. Images obtained using an unmodified medical ultrasonic imaging system of a bonded aluminum plate sample with a simulated disbond region are presented. The disbond region was produced by adhering a piece of plain white paper to a piece of cellophane tape and applying the paper-tape combination to one of the aluminum plates. Because the area under the paper was not adhesively bonded to the aluminum plate, this arrangement more closely simulates a disbond. Images are also presented for an aluminum plate sample with an epoxy strip adhered to one side, to help provide information for interpreting the images of the bonded aluminum plate sample containing the disbond region. These images are compared with corresponding conventional ultrasonic contact transducer measurements in order to provide information regarding the nature of the disbonded region. The results of this ongoing investigation may provide a step toward the development of a rapid, real-time, and portable method of ultrasonic inspection and characterization based on linear array technology. In Section 2 of this Progress Report, the preparation of the aluminum plate specimens is described. Section 3 describes the method of linear array imaging. Sections 4 and 5 present the linear array images and results from contact transducer measurements, respectively. A discussion of the results is presented in Section 6.
de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba
2011-01-01
The development of a simple, rapid and low cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (Red, Green and Blue) color system was developed. The proposed method showed very good repeatability and linearity, and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of precipitation of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
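A minimal sketch of the image-analysis step under stated assumptions: mean RGB brightness over a region of interest is calibrated against standards of known concentration. The readings below are invented for illustration, and the RGB processing in the published software is more elaborate.

```python
import numpy as np

def mean_rgb(image):
    """Average R, G, B channels over a region of interest of a webcam frame
    (image: H x W x 3 uint8 array, e.g. a frame grabbed with OpenCV)."""
    roi = image.astype(float)
    return roi[..., 0].mean(), roi[..., 1].mean(), roi[..., 2].mean()

# Calibration sketch: as precipitated BaSO4 scatters more light toward the
# camera, the frame brightens; fit a line against standards of known
# concentration. All numbers are made up for illustration.
concentrations = np.array([0.0, 2.0, 4.0, 8.0, 16.0])      # mg/L standards
brightness = np.array([21.0, 34.5, 47.9, 75.2, 128.8])     # mean gray level
slope, intercept = np.polyfit(concentrations, brightness, 1)

def estimate_concentration(image):
    r, g, b = mean_rgb(image)
    gray = (r + g + b) / 3.0
    return (gray - intercept) / slope
```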
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real-time plasma control. In the first method, unlike the conventional structure in which a single network with the optimum number of processing elements calculates the outputs, a multinetwork system connected in parallel performs the calculations; this is called the double neural network. The accuracy of the recovered parameters is clearly higher than with the conventional network. The other type of neural network used here combines statistical function parametrization with a neural network. The principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the parameters recovered by this latter type of modified network is found to be a further improvement over that of the double neural network. This result differs from that obtained in an earlier work, where the double neural network showed better performance. The conventional network and function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with that assessed by the principal component based network. Fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
A probabilistic Hu-Washizu variational principle
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Besterfield, G. H.
1987-01-01
A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.
Band structure and unconventional electronic topology of CoSi
NASA Astrophysics Data System (ADS)
Pshenay-Severin, D. A.; Ivanov, Y. V.; Burkov, A. A.; Burkov, A. T.
2018-04-01
Semimetals with certain crystal symmetries may possess unusual electronic structure topology, distinct from that of the conventional Weyl and Dirac semimetals. A characteristic property of these materials is the existence of band-touching points with multiple (higher than two-fold) degeneracy and nonzero Chern number. CoSi is a representative of this group of materials exhibiting the so-called ‘new fermions’. We report on an ab initio calculation of the electronic structure of CoSi using density functional methods, taking into account the spin-orbit interactions. The linearized …
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase-linear filter for soliton suppression takes the form of a laddered series of stages of non-commensurate low-pass filters, each low-pass filter having a series-coupled inductance (L) and, to ground, a reverse-biased voltage-dependent varactor diode acting as a variable capacitance (C). The L and C values are set to levels that correspond to a conventional phase-linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line.
Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction
NASA Astrophysics Data System (ADS)
Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo
2014-12-01
To achieve a higher level of seismic random noise suppression, the Radon transform has been adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies involved performing TFPF in full-aperture Radon domain, including linear Radon and parabolic Radon. Although the superiority of this method to the conventional TFPF has been tested through processing on synthetic seismic models and field seismic data, there are still some limitations in the method. Both full-aperture linear Radon and parabolic Radon are applicable and effective for some relatively simple situations (e.g., curve reflection events with regular geometry) but inapplicable for complicated situations such as reflection events with irregular shapes, or interlaced events with quite different slope or curvature parameters. Therefore, a localized approach to the application of the Radon transform must be applied. It would serve the filter method better by adapting the transform to the local character of the data variations. In this article, we propose an idea that adopts the local Radon transform referred to as piecewise full-aperture Radon to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method in seismic random noise reduction and reflection event recovery for relatively complicated situations of seismic data.
Kiriyama, Yoshimori; Matsumoto, Hideo; Toyama, Yoshiaki; Nagura, Takeo
2014-02-01
The aim of this study was to develop a new suture tension sensor for musculoskeletal soft tissue that shows deformation or movements. The suture tension sensor was 10 mm in size, which was small enough to avoid conflicting with the adjacent sensor. Furthermore, the sensor had good linearity up to a tension of 50 N, which is equivalent to the breaking strength of a size 1 absorbable suture defined by the United States Pharmacopeia. The design and mechanism were analyzed using a finite element model prior to developing the actual sensor. Based on the analysis, adequate material was selected, and the output linearity was confirmed and compared with the simulated result. To evaluate practical application, the incision of the skin and capsule were sutured during simulated total knee arthroplasty. When conventional surgery and minimally invasive surgery were performed, suture tensions were compared. In minimally invasive surgery, the distal portion of the knee was dissected, and the proximal portion of the knee was dissected additionally in conventional surgery. In the skin suturing, the maximum tension was 4.4 N, and this tension was independent of the sensor location. In contrast, the sensor suturing the capsule in the distal portion had a tension of 4.4 N in minimally invasive surgery, while the proximal sensor had a tension of 44 N in conventional surgery. The suture tensions increased nonlinearly and were dependent on the knee flexion angle. Furthermore, the tension changes showed hysteresis. This miniature tension sensor may help establish the optimal suturing method with adequate tension to ensure wound healing and early recovery.
Confinement with Perturbation Theory, After All?
NASA Astrophysics Data System (ADS)
Hoyer, Paul
2015-09-01
I call attention to the possibility that QCD bound states (hadrons) could be derived using rigorous Hamiltonian, perturbative methods. Solving Gauss' law for A^0 with a non-vanishing boundary condition at spatial infinity gives a linear potential for color singlet qq̄ and qqq states. These states are Poincaré and gauge covariant and thus can serve as initial states of a perturbative expansion, replacing the conventional free in and out states. The coupling freezes at a fixed value, allowing reasonable convergence. The bound states have a sea of qq̄ pairs, while transverse gluons contribute only at higher orders. Pair creation in the linear A^0 potential leads to string breaking and hadron loop corrections. These corrections give finite widths to excited states, as required by unitarity. Several of these features have been verified analytically in D = 1 + 1 dimensions, and some in D = 3 + 1.
Dickinson, R J
1985-04-01
In a recent paper, Vaknine and Lorenz discuss the merits of lateral deconvolution of demodulated B-scans. While this technique will decrease the lateral blurring of single discrete targets, such as the diaphragm in their figure 3, it is inappropriate to apply the method to the echoes arising from inhomogeneous structures such as soft tissue. In this latter case, the echoes from individual scatterers within the resolution cell of the transducer interfere to give random fluctuations in received echo amplitude, termed speckle. Although this process can be modeled as a linear convolution similar to that of conventional image formation theory, demodulation is a nonlinear process which loses the all-important phase information and prevents the subsequent restoration of the image by Wiener filtering, itself a linear process.
Clipping the cosmos: the bias and bispectrum of large scale structure.
Simpson, Fergus; James, J Berian; Heavens, Alan F; Heymans, Catherine
2011-12-30
A large fraction of the information collected by cosmological surveys is simply discarded to avoid length scales which are difficult to model theoretically. We introduce a new technique which enables the extraction of useful information from the bispectrum of galaxies well beyond the conventional limits of perturbation theory. Our results strongly suggest that this method increases the range of scales where the relation between the bispectrum and power spectrum in tree-level perturbation theory may be applied, from k_max ∼ 0.1 to ∼0.7 h Mpc⁻¹. This leads to correspondingly large improvements in the determination of galaxy bias. Since the clipped matter power spectrum closely follows the linear power spectrum, there is the potential to use this technique to probe the growth rate of linear perturbations and confront theories of modified gravity with observation.
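The clipping transform itself is a one-line saturation of the density contrast; the sketch below applies it to a toy skewed field. The threshold is an illustrative choice, and the paper's actual analysis goes on to measure the power spectrum and bispectrum of the clipped field.

```python
import numpy as np

def clip_field(delta, threshold):
    """Clipping transform: saturate the density contrast above a threshold so
    that rare high-density peaks no longer dominate the statistics."""
    return np.minimum(delta, threshold)

# Toy skewed field standing in for a galaxy density map.
rng = np.random.default_rng(7)
delta = np.exp(rng.normal(size=(128, 128))) - 1.0

clipped = clip_field(delta, threshold=2.0)
# Only the removal of the extreme tail is verified here; the paper's claim is
# that the clipped field's two- and three-point statistics then follow
# tree-level perturbation theory over a wider k-range.
assert clipped.max() <= 2.0
```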
NASA Astrophysics Data System (ADS)
Parracino, Stefano; Richetta, Maria; Gelfusa, Michela; Malizia, Andrea; Bellecci, Carlo; De Leo, Leonardo; Perrimezzi, Carlo; Fin, Alessandro; Forin, Marco; Giappicucci, Francesca; Grion, Massimo; Marchese, Giuseppe; Gaudio, Pasquale
2016-10-01
Urban air pollution causes deleterious effects on human health and the environment. To meet stringent standards imposed by the European Commission, advanced measurement methods are required. Remote sensing techniques, such as light detection and ranging (LiDAR), can be a valuable option for evaluating particulate matter (PM), emitted by vehicles in urban traffic, with high sensitivity and in shorter time intervals. Since air quality problems persist not only in large urban areas, a measuring campaign was specifically performed in a suburban area of Crotone, Italy, using both a compact LiDAR system and conventional instruments for real-time vehicle emissions monitoring along a congested road. First results reported in this paper show a strong dependence between variations of LiDAR backscattering signals and traffic-related air pollution levels. Moreover, time-resolved LiDAR data averaged in limited regions, directly above conventional monitoring stations at the border of an intersection, were found to be linearly correlated to the PM concentration levels with a correlation coefficient between 0.75 and 0.84.
One device, one equation: the simplest way to objectively evaluate psoriasis severity.
Choi, Jae Woo; Kim, Bo Ri; Choi, Chong Won; Youn, Sang Woong
2015-02-01
The erythema, scale and thickness of psoriasis lesions can be converted to bioengineering parameters. An objective psoriasis severity assessment is advantageous in terms of accuracy and reproducibility over conventional severity assessment. We aimed to formulate an objective psoriasis severity index with a single bioengineering device that could substitute for the conventional subjective Psoriasis Severity Index. A linear regression analysis was performed to derive the formula, with the subjective Psoriasis Severity Index as the dependent variable and various bioengineering parameters determined from 157 psoriasis lesions as independent variables. The construct validity of the objective Psoriasis Severity Index was evaluated with an additional 30 psoriasis lesions through a Pearson correlation analysis. The formula is composed of hue and brightness, both of which are obtainable with a Colorimeter alone. A very strong positive correlation was found between the objective and subjective psoriasis severity indexes. The objective Psoriasis Severity Index is a novel, practical and valid assessment method that can substitute for the conventional one. Combined with subjective area assessment, it could further replace the Psoriasis Area and Severity Index, which is currently the most popular. © 2014 Japanese Dermatological Association.
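A sketch of the regression step, assuming hue and brightness from a Colorimeter as the two predictors; the readings and clinician scores below are invented, so the fitted coefficients are not those of the published formula.

```python
import numpy as np

# Hypothetical colorimeter readings and clinician severity scores per lesion.
hue = np.array([30.2, 25.1, 40.3, 22.8, 35.6])
brightness = np.array([55.0, 48.2, 62.1, 45.9, 58.3])
subjective_index = np.array([6.0, 8.5, 3.0, 9.2, 4.8])

# Ordinary least squares with an intercept column.
X = np.column_stack([hue, brightness, np.ones_like(hue)])
coef, *_ = np.linalg.lstsq(X, subjective_index, rcond=None)

def objective_severity(hue_value, brightness_value):
    """Objective index from a single Colorimeter measurement."""
    return coef[0] * hue_value + coef[1] * brightness_value + coef[2]
```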
NASA Astrophysics Data System (ADS)
Ochsenfeld, Christian; Head-Gordon, Martin
1997-05-01
To exploit the exponential decay found in numerical studies for the density matrix and its derivative with respect to nuclear displacements, we reformulate the coupled perturbed self-consistent field (CPSCF) equations and a quadratically convergent SCF (QCSCF) method for Hartree-Fock and density functional theory within a local density matrix-based scheme. Our D-CPSCF (density matrix-based CPSCF) and D-QCSCF schemes open the way for exploiting sparsity and achieving asymptotically linear scaling of computational complexity with molecular size (M); in the case of D-CPSCF, this holds for all O(M) derivative densities. Furthermore, even for small molecules these methods are strongly competitive with conventional algorithms.
Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.
1981-01-01
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
Patil, Nagaraj; Soni, Jalpa; Ghosh, Nirmalya; De, Priyadarsi
2012-11-29
Thermodynamically favored polymer-water interactions below the lower critical solution temperature (LCST) caused swelling-induced optical anisotropy (linear retardance) of thermoresponsive hydrogels based on poly(2-(2-methoxyethoxy)ethyl methacrylate). This was exploited to study the macroscopic deswelling kinetics quantitatively by a generalized polarimetry analysis method, based on measurement of the Mueller matrix and its subsequent inverse analysis via the polar decomposition approach. The derived medium polarization parameters, namely, linear retardance (δ), diattenuation (d), and depolarization coefficient (Δ), of the hydrogels showed interesting differences between the gels prepared by conventional free radical polymerization (FRP) and reversible addition-fragmentation chain transfer polymerization (RAFT) and also between dry and swollen state. The effect of temperature, cross-linking density, and polymerization technique employed to synthesize hydrogel on deswelling kinetics was systematically studied via conventional gravimetry and corroborated further with the corresponding Mueller matrix derived quantitative polarimetry characteristics (δ, d, and Δ). The RAFT gels exhibited higher swelling ratio and swelling-induced optical anisotropy compared to FRP gels and also deswelled faster at 30 °C. On the contrary, at 45 °C, deswelling was significantly retarded for the RAFT gels due to formation of a skin layer, which was confirmed and quantified via the enhanced diattenuation and depolarization parameters.
Clarke, G. M.; Murray, M.; Holloway, C. M. B.; Liu, K.; Zubovits, J. T.; Yaffe, M. J.
2012-01-01
Tumour size, most commonly measured by maximum linear extent, remains a strong predictor of survival in breast cancer. Tumour volume, proportional to the number of tumour cells, may be a more accurate surrogate for size. We describe a novel “3D pathology volumetric technique” for lumpectomies and compare it with 2D measurements. Volume renderings and total tumour volume are computed from digitized whole-mount serial sections using custom software tools. Results are presented for two lumpectomy specimens selected for tumour features which may challenge accurate measurement of tumour burden with conventional, sampling-based pathology: (1) an infiltrative pattern admixed with normal breast elements; (2) a localized invasive mass separated from the in situ component by benign tissue. Spatial relationships between key features (tumour foci, close or involved margins) are clearly visualized in volume renderings. Invasive tumour burden can be underestimated using conventional pathology, compared to the volumetric technique (infiltrative pattern: 30% underestimation; localized mass: 3% underestimation for invasive tumour, 44% for in situ component). Tumour volume approximated from 2D measurements (i.e., maximum linear extent), assuming elliptical geometry, was seen to overestimate volume compared to the 3D volumetric calculation (by a factor of 7x for the infiltrative pattern; 1.5x for the localized invasive mass). PMID:23320179
Enhancing the stabilization of aircraft pitch motion control via intelligent and classical method
NASA Astrophysics Data System (ADS)
Lukman, H.; Munawwarah, S.; Azizan, A.; Yakub, F.; Zaki, S. A.; Rasid, Z. A.
2017-12-01
The pitching movement of an aircraft is very important to ensure passengers are intrinsically safe and the aircraft achieves its maximum stability. The equations governing the motion of an aircraft are a complex set of six nonlinear coupled differential equations. Under certain assumptions, they can be decoupled and linearized into longitudinal and lateral equations. Pitch control is a longitudinal problem, and thus only the longitudinal dynamics equations are involved in this system. It is a third-order nonlinear system, which is linearized about the operating point. The system is also inherently unstable due to the presence of a free integrator. Because of this, a feedback controller is added in order to solve this problem and enhance the system performance. This study uses two approaches to controller design: a conventional controller and an intelligent controller. The pitch control scheme consists of proportional-integral-derivative (PID) control for the conventional controller and fuzzy logic control (FLC) for the intelligent controller. Throughout the paper, the performance of the presented controllers is investigated and compared based on the common criteria of the step response. Simulation results have been obtained and analysed using Matlab and Simulink software. The study shows that the FLC controller has a higher ability to control and stabilize the aircraft's pitch angle compared to the PID controller.
NASA Astrophysics Data System (ADS)
Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.
2018-05-01
As one of the non-conventional mathematics concepts, Parallel Coordinates has the potential to be learned by pre-service mathematics teachers in order to give them experience in constructing richer schemes and carrying out the abstraction process. Unfortunately, research related to this issue is still limited. This study addresses the research question “to what extent can the abstraction process of pre-service mathematics teachers in learning the concept of Parallel Coordinates indicate their performance in learning Analytic Geometry”. This is a case study that is part of a larger study examining the mathematical abstraction of pre-service mathematics teachers in learning a non-conventional mathematics concept. Descriptive statistics are used to analyze the scores from three different tests: Cartesian Coordinates, Parallel Coordinates, and Analytic Geometry. The participants in this study consist of 45 pre-service mathematics teachers. The results show that there is a linear association between the scores on Cartesian Coordinates and Parallel Coordinates. It was also found that higher levels of the abstraction process in learning Parallel Coordinates are linearly associated with higher achievement in Analytic Geometry. These results show that the concept of Parallel Coordinates plays a significant role for pre-service mathematics teachers in learning Analytic Geometry.
A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media
NASA Astrophysics Data System (ADS)
Zhou, T.; Hu, W.; Ning, J.
2017-12-01
Attenuating anisotropic geological bodies are difficult to image with conventional migration methods. In such scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination and incorrect migration depth in imaging results. To efficiently obtain high-quality images, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model, combined with a unique multi-stage optimization technique, to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below low-Q regions. The result of our new method is very close to the reference RTM image, while QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the frequency range 10-70 Hz, and could be used in a wider frequency range. Furthermore, as this method can also handle frequency-dependent Q, it has potential for imaging deep structures where low Q exists, such as subduction zones, volcanic zones or fault zones with passive source observations.
NASA Astrophysics Data System (ADS)
Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has been used primarily to image non-conductive media only, since if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conduction path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples with conventional ECT systems by investigating the two main aspects of image reconstruction algorithms, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, namely the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using the level set algorithm to find the boundary of the metal.
Lin, Chun-I; Lee, Yung-Chun
2014-08-01
Line-focused PVDF transducers and the defocusing measurement method are applied in this work to determine the dispersion curve of the Rayleigh-like surface waves propagating along the circumferential direction of a solid cylinder. The conventional waveform processing method has been modified to cope with the non-linear relationship between the phase angle of wave interference and the defocusing distance induced by a cylindrically curved surface. A cross-correlation method is proposed to accurately extract the cylindrical Rayleigh wave velocity from measured data. Experiments have been carried out on one stainless steel cylinder and one glass cylinder. The experimentally obtained dispersion curves are in very good agreement with their theoretical counterparts. The variation of cylindrical Rayleigh wave velocity due to the cylindrical curvature is quantitatively verified using this new method. Other potential applications of this measurement method for cylindrical samples are also addressed. Copyright © 2014 Elsevier B.V. All rights reserved.
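A sketch of the cross-correlation step on synthetic tone bursts: the delay between two waveforms is taken from the peak of their cross-correlation, refined with three-point parabolic interpolation. Sampling rate, burst shape, and noise level are assumptions; converting the recovered delay to a cylindrical Rayleigh wave velocity additionally requires the defocus geometry.

```python
import numpy as np

def delay_by_crosscorrelation(sig_a, sig_b, fs):
    """Time delay of sig_b relative to sig_a from the cross-correlation peak,
    with parabolic interpolation for sub-sample precision."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    k = float(np.argmax(corr))
    i = int(k)
    if 0 < i < corr.size - 1:                     # three-point parabolic refine
        denom = corr[i - 1] - 2 * corr[i] + corr[i + 1]
        if denom != 0:
            k = i + 0.5 * (corr[i - 1] - corr[i + 1]) / denom
    lag = k - (sig_a.size - 1)
    return lag / fs

# Two noisy copies of a tone burst, the second delayed, as stand-ins for
# waveforms recorded at two defocusing distances.
fs = 100e6
t = np.arange(0, 5e-6, 1 / fs)
burst = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 2.5e-6) / 0.7e-6) ** 2)
delay_true = 0.20e-6
shifted = np.interp(t - delay_true, t, burst, left=0.0, right=0.0)
rng = np.random.default_rng(8)
dt_est = delay_by_crosscorrelation(burst + rng.normal(scale=0.01, size=t.size),
                                   shifted + rng.normal(scale=0.01, size=t.size),
                                   fs)
```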
Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting
NASA Astrophysics Data System (ADS)
Nanzad, Bolorchimeg
This master's thesis project consists of two parts. Part I compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas the logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality of fit to historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping", wherein the overlap of multiple cycles is limited. The model is designed so that each cycle is described by the same three parameters as in the conventional multicyclic Hubbert model, and every two consecutive cycles are connected with a transition. The transition indicates the shift from one cycle to the next and is described as a weighted coaddition of the two neighboring cycles, determined by three parameters: transition year, transition width, and a gamma parameter for the weighting. The cycle-jumping method provides a superior model compared to the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically, by better modeling the effects of technological transitions and socioeconomic factors that affect historical resource production behavior and by explicitly considering the form of the transitions between production cycles.
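A minimal sketch of the conventional single-cycle Hubbert fit used as the baseline in Part I, on an invented production series; the logit/probit alternative instead fits a line to transformed cumulative production, and the cycle-jumping model of Part II chains several such cycles with weighted transitions.

```python
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, urr, k, t_peak):
    """Hubbert (logistic-derivative) production curve: urr is the ultimately
    recoverable resource, k the steepness, t_peak the peak year."""
    e = np.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

# Invented 'historical' production series for illustration only.
years = np.arange(1900, 2011)
rng = np.random.default_rng(9)
true = hubbert(years, urr=300.0, k=0.08, t_peak=1990.0)
production = true * (1 + rng.normal(scale=0.05, size=years.size))

popt, pcov = curve_fit(hubbert, years, production,
                       p0=[production.sum(), 0.05,
                           years[np.argmax(production)]])
# popt[0] is the fitted URR estimate; its stability is exactly what the
# thesis finds questionable for immature production regions.
```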
Zhao, Gang; Tan, Wei; Hou, Jiajia; Qiu, Xiaodong; Ma, Weiguang; Li, Zhixin; Dong, Lei; Zhang, Lei; Yin, Wangbao; Xiao, Liantuan; Axner, Ove; Jia, Suotang
2016-01-25
A methodology for calibration-free wavelength modulation spectroscopy (CF-WMS) that is based upon an extensive empirical description of the wavelength-modulation frequency response (WMFR) of a DFB laser is presented. An assessment of the WMFR of a DFB laser by use of an etalon confirms that it consists of two parts: a 1st harmonic component whose amplitude is linear in the sweep and a nonlinear 2nd harmonic component with constant amplitude. Simulations show that, among the various factors that affect the line shape of a background-subtracted peak-normalized 2f signal, such as concentration, phase shifts between intensity modulation and frequency modulation, and the WMFR, only the last factor has a decisive impact. Based on this, and to avoid the impractical use of an etalon, a novel method to pre-determine the parameters of the WMFR by fitting to a background-subtracted peak-normalized 2f signal has been developed. The accuracy of the new scheme for determining the WMFR is demonstrated and compared with that of conventional methods in CF-WMS by detection of trace acetylene. The results show that the new method provides a four times smaller fitting error than the conventional methods and retrieves the concentration more accurately.
NASA Astrophysics Data System (ADS)
Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem
2010-09-01
In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. A mixture of penicillin G salts is a complex system due to the similar analytical characteristics of the components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to correct unexpected information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, in the case of some nonlinear relation between spectra and components, OSC-RBF-PLS gave more satisfactory results than the OSC-PLS model, which indicated that OSC was helpful for removing extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.
Hohmann, Monika; Monakhova, Yulia; Erich, Sarah; Christoph, Norbert; Wachter, Helmut; Holzgrabe, Ulrike
2015-11-04
Because the basic suitability of proton nuclear magnetic resonance spectroscopy (¹H NMR) to differentiate organic versus conventional tomatoes was recently proven, the approach of optimizing ¹H NMR classification models (comprising overall 205 authentic tomato samples) by including additional data from isotope ratio mass spectrometry (IRMS; δ¹³C, δ¹⁵N, and δ¹⁸O) and mid-infrared (MIR) spectroscopy was assessed. Both individual and combined analytical methods (¹H NMR + MIR, ¹H NMR + IRMS, MIR + IRMS, and ¹H NMR + MIR + IRMS) were examined using principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), linear discriminant analysis (LDA), and common components and specific weight analysis (ComDim). With regard to classification abilities, fused data of ¹H NMR + MIR + IRMS yielded better validation results (ranging between 95.0 and 100.0%) than the individual methods (¹H NMR, 91.3-100%; MIR, 75.6-91.7%), suggesting that the combined examination of analytical profiles enhances authentication of organically produced tomatoes.
NASA Astrophysics Data System (ADS)
Matsumoto, Nobuhiro; Watanabe, Takuro; Maruyama, Masaaki; Horimoto, Yoshiyuki; Maeda, Tsuneaki; Kato, Kenji
2004-06-01
The gravimetric method is the most popular method for preparing reference gas mixtures with high accuracy. We have designed and manufactured novel mass measurement equipment for the gravimetric preparation of reference gas mixtures. This equipment consists of an electronic mass comparator with a maximum capacity of 15 kg and a readability of 1 mg, and an automatic cylinder exchanger. The structure of this equipment is simpler, and its cost much lower, than that of the conventional mechanical knife-edge type large balance used for gravimetric preparation of primary gas mixtures in Japan. The cylinder exchanger can mount two cylinders alternately on the weighing pan of the comparator. In this study, the performance of the equipment has been evaluated. First, the linearity and repeatability of the mass measurement were evaluated using standard mass pieces. Then, binary gas mixtures of propane and nitrogen were prepared and compared with those prepared with the conventional knife-edge type balance. The comparison showed good consistency within the compatibility criterion described in ISO 6143:2001.
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, L.; Gu, H.
2017-12-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they are valid only when the contrast in elastic parameters between the upper and lower media is small and the incident angle is small, and the inversion for density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion, we use a Taylor expansion to linearize the inversion problem. Through joint AVO inversion of the seismic data from the baseline and monitor surveys, we can obtain the P-wave velocity, S-wave velocity and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion has better applicability: it does not require the same assumptions and can estimate more parameters simultaneously. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use the Marmousi model to generate synthetic seismic records to test the method and analyze the influence of random noise. Without noise, all estimated results are relatively accurate. As the noise increases, the P-wave velocity change and oil saturation change remain stable and are less affected by noise, whereas the S-wave velocity change is the most affected. Finally, we apply the method to actual field data from time-lapse seismic prospecting; the results demonstrate the availability and feasibility of our method in practice.
Zhang, Xiaoyong; Qiu, Bensheng; Wei, Zijun; Yan, Fei; Shi, Caiyun; Su, Shi; Liu, Xin; Ji, Jim X; Xie, Guoxi
2017-01-01
To develop and assess a three-dimensional (3D) self-gated technique for the evaluation of myocardial infarction (MI) in a mouse model, without the use of an external electrocardiogram (ECG) trigger or respiratory motion sensor, on a 3T clinical MR system. A 3D T1-weighted GRE sequence with stack-of-stars sampling trajectories was developed and performed on six mice with MIs that were injected with a gadolinium-based contrast agent, on a 3T clinical MR system. Respiratory and cardiac self-gating signals were derived from the Cartesian mapping of the k-space center along the partition-encoding direction by bandpass filtering in the image domain. The data were then realigned according to the predetermined self-gating signals for subsequent image reconstruction. To accelerate the data acquisition, image reconstruction was based on compressed sensing (CS) theory, exploiting the temporal sparsity of the reconstructed images. In addition, images were also reconstructed from the same realigned data by the conventional regridding method to demonstrate the advantages of the proposed reconstruction method. Furthermore, the accuracy of detecting MI by the proposed method was assessed using histological analysis as the standard reference. Linear regression and Bland-Altman analysis were used to assess the agreement between the proposed method and the histological analysis. Compared with the conventional regridding method, the proposed CS method reconstructed images with much less streaking artifact, as well as a better contrast-to-noise ratio (CNR) between the blood and myocardium (4.1 ± 2.1 vs. 2.9 ± 1.1, p = 0.031). Linear regression and Bland-Altman analysis demonstrated excellent correlation between infarct sizes derived from the proposed method and histological analysis. A 3D T1-weighted self-gating technique for mouse cardiac imaging was developed, which has potential for accurately evaluating MIs in mice on a 3T clinical MR system without the use of an external ECG trigger or respiratory motion sensor.
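The self-gating step described above (bandpass filtering the repeatedly sampled k-space center to separate respiratory and cardiac motion signals) can be sketched as follows. The sampling rate, band edges, and simulated frequencies are illustrative assumptions, not values from the paper:

```python
# Sketch of self-gating signal extraction: the repeatedly sampled k-space
# center gives a 1-D time series whose low-frequency band tracks respiration
# and whose higher band tracks the cardiac cycle.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                       # effective sampling rate of the k-space center (Hz), assumed
t = np.arange(0, 30, 1 / fs)
center = (np.sin(2 * np.pi * 0.8 * t)          # simulated respiratory component
          + 0.3 * np.sin(2 * np.pi * 7.0 * t)  # simulated cardiac component (mouse heart rate)
          + 0.05 * np.random.randn(t.size))

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

resp_gate = bandpass(center, 0.2, 2.0, fs)     # respiratory self-gating signal
card_gate = bandpass(center, 5.0, 10.0, fs)    # cardiac self-gating signal
```

The recovered gating signals would then drive the realignment of the acquired k-space data into respiratory and cardiac bins before reconstruction.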
NASA Astrophysics Data System (ADS)
Pei, Zongrui; Eisenbach, Markus
2017-06-01
Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation lies in effectively avoiding the local minima in the energy landscape of a dislocation core. Among the available methods for optimizing dislocation core structures, we choose Particle Swarm Optimization, an algorithm that simulates the social behavior of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), local minima can be effectively avoided, though at greater computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.
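A minimal particle swarm optimization sketch, in the spirit of the abstract, is given below: a larger swarm and more iterations reduce the chance of trapping in local minima at higher computational cost, and the per-particle updates are trivially parallelizable. The Rastrigin function stands in for the dislocation-core energy landscape; all constants are assumptions:

```python
# Minimal particle swarm optimization over a landscape with many local minima.
import numpy as np

def energy(x):                                   # stand-in multi-minima landscape
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1) + 10 * x.shape[-1]

rng = np.random.default_rng(0)
n_particles, dim, n_iter = 200, 10, 500
w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and acceleration coefficients

x = rng.uniform(-5, 5, (n_particles, dim))       # positions
v = np.zeros_like(x)                             # velocities
pbest, pbest_val = x.copy(), energy(x)           # personal bests
gbest = pbest[np.argmin(pbest_val)]              # global best

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = energy(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print(np.min(pbest_val))                         # approaches 0, the global minimum
```

The update of each particle is independent of the others within an iteration, which is why the algorithm parallelizes so naturally across cores.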
Security analysis of quadratic phase based cryptography
NASA Astrophysics Data System (ADS)
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Healy, John J.; Sheridan, John T.
2016-09-01
The linear canonical transform (LCT) is essential in modeling coherent light field propagation through first-order optical systems. Recently, a generic optical system, known as a Quadratic Phase Encoding System (QPES), for encrypting a two-dimensional (2D) image has been reported, in which the two random phase keys together with the individual LCT parameters serve as the keys of the cryptosystem. However, it is important that such encryption systems also satisfy certain dynamic security properties. Therefore, in this work, we examine cryptographic evaluation methods, such as the avalanche criterion and bit independence, which indicate the degree of security of cryptographic algorithms, on QPES. We compare our simulation results with the conventional Fourier and Fresnel transform based DRPE systems. The results show that the LCT-based DRPE has better avalanche and bit independence characteristics than the conventional Fourier and Fresnel based encryption systems.
Choice of optical system is critical for the security of double random phase encryption systems
NASA Astrophysics Data System (ADS)
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Cassidy, Derek; Zhao, Liang; Ryle, James P.; Healy, John J.; Sheridan, John T.
2017-06-01
The linear canonical transform (LCT) is used in modeling coherent light-field propagation through first-order optical systems. Recently, a generic optical system, known as the quadratic phase encoding system (QPES), for encrypting a two-dimensional image has been reported. In such systems, two random phase keys and the individual LCT parameters (α,β,γ) serve as secret keys of the cryptosystem. It is important that such encryption systems also satisfy some dynamic security properties. We, therefore, examine such systems using two cryptographic evaluation methods, the avalanche effect and the bit independence criterion, which indicate the degree of security of cryptographic algorithms using QPES. We compared our simulation results with the conventional Fourier and Fresnel transform-based double random phase encryption (DRPE) systems. The results show that the LCT-based DRPE has excellent avalanche and bit independence characteristics compared with the conventional Fourier and Fresnel-based encryption systems.
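The avalanche measurement used in both studies above can be sketched in a few lines: flip a single input bit, encrypt both versions, and record the fraction of output bits that change, which should approach 0.5 for a strong cipher. The XOR/rotation toy cipher below is purely illustrative (being linear, it scores far below 0.5, which is exactly the weakness the criterion is designed to expose); it is not the QPES/DRPE model:

```python
# Avalanche criterion sketch: flip one plaintext bit and count changed
# ciphertext bits. "encrypt" is a deliberately weak linear toy stand-in.
import numpy as np

rng = np.random.default_rng(1)
key = rng.integers(0, 2, 4096, dtype=np.uint8)

def encrypt(bits):
    # Toy stand-in cipher; a real test would run the optical encryption model.
    mixed = np.roll(bits ^ key, 7)
    return mixed ^ np.roll(mixed, 13)

plain = rng.integers(0, 2, 4096, dtype=np.uint8)
base = encrypt(plain)
ratios = []
for i in range(plain.size):
    flipped = plain.copy()
    flipped[i] ^= 1                              # flip exactly one input bit
    ratios.append(np.mean(encrypt(flipped) != base))

# A strong cipher approaches 0.5; this linear toy stays near 2/4096,
# illustrating what a failed avalanche test looks like.
print("mean avalanche ratio:", np.mean(ratios))
```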
Lee, Hyun-Soo; Choi, Seung Hong; Park, Sung-Hong
2017-07-01
To develop single and double acquisition methods to compensate for artifacts from eddy currents and transient oscillations in balanced steady-state free precession (bSSFP) with centric phase-encoding (PE) order for magnetization-prepared bSSFP imaging. A single and four different double acquisition methods were developed and evaluated with Bloch equation simulations, phantom/in vivo experiments, and quantitative analyses. For the single acquisition method, multiple PE groups, each of which was composed of N linearly changing PE lines, were ordered in a pseudocentric manner for optimal contrast and minimal signal fluctuations. Double acquisition methods used complex averaging of two images that had opposite artifact patterns from different acquisition orders or from different numbers of dummy scans. Simulation results showed high sensitivity of eddy-current and transient-oscillation artifacts to off-resonance frequency and PE schemes. The artifacts were reduced with the PE-grouping with N values from 3 to 8, similar to or better than the conventional pairing scheme of N = 2. The proposed double acquisition methods removed the remaining artifacts significantly. The proposed methods conserved detailed structures in magnetization transfer imaging well, compared with the conventional methods. The proposed single and double acquisition methods can be useful for artifact-free magnetization-prepared bSSFP imaging with desired contrast and minimized dummy scans. Magn Reson Med 78:254-263, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Fatiha, M.; Rahmat, A.; Solihat, R.
2017-09-01
In Biology teaching, concepts are often presented through diagrams to help students understand the material. One way of probing students' understanding of a diagram is through the causal relationships they construct, expressed in the form of a propositional network representation. This research reveals the patterns of students' propositional network representations when they are confronted with a conventional diagram. This descriptive study involved 32 students at a senior high school in Bandung. The data were acquired with a diagram-based worksheet developed according to information-processing standards. The results revealed three propositional network representation patterns: linear relationships, simple reciprocal relationships, and complex reciprocal relationships. The dominant pattern was the linear form, in which some information components in the diagram are simply connected, produced by 59.4% of students; 28.1% of students produced reciprocal relationships of medium level, only 3.1% produced complex reciprocal relationships, and the remaining 9.4% failed to connect the information components. Based on these results, most students were only able to connect the information components of the picture in a linear form, and few students constructed reciprocal relationships between information components in a conventional diagram.
Superconducting linear actuator
NASA Technical Reports Server (NTRS)
Johnson, Bruce; Hockney, Richard
1993-01-01
Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Existing electromagnetic actuators are too large and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications, since hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically powered linear actuator suitable for controlling the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined the theoretical capabilities of electrical actuators and identified their problems with respect to the application, and then determined whether any presently available actuators, or modifications to available actuator designs, would meet the required performance. The best actuator was then selected from among the available, modified, and new designs for this application. The last task was to proceed with a conceptual design. No commercially available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.
Rasmussen Hellberg, Rosalee S; Morrissey, Michael T; Hanner, Robert H
2010-09-01
The purpose of this study was to develop a species-specific multiplex polymerase chain reaction (PCR) method that allows for the detection of salmon species substitution on the commercial market. Species-specific primers and TaqMan® probes were developed based on a comprehensive collection of mitochondrial 5' cytochrome c oxidase subunit I (COI) deoxyribonucleic acid (DNA) "barcode" sequences. Primers and probes were combined into multiplex assays and tested for specificity against 112 reference samples representing 25 species. Sensitivity and linearity tests were conducted using 10-fold serial dilutions of target DNA (single-species samples) and DNA admixtures containing the target species at levels of 10%, 1.0%, and 0.1% mixed with a secondary species. The specificity tests showed positive signals for the target DNA in both real-time and conventional PCR systems. Nonspecific amplification in both systems was minimal; however, false positives were detected at low levels (1.2% to 8.3%) in conventional PCR. Detection levels were similar for admixtures and single-species samples based on a 30 PCR cycle cut-off, with limits of 0.25 to 2.5 ng (1% to 10%) in conventional PCR and 0.05 to 5.0 ng (0.1% to 10%) in real-time PCR. A small-scale test with food samples showed promising results, with species identification possible even in heavily processed food items. Overall, this study presents a rapid, specific, and sensitive method for salmon species identification that can be applied to mixed-species and heavily processed samples in either conventional or real-time PCR formats. This study provides a newly developed method for salmon and trout species identification that will assist both industry and regulatory agencies in the detection and prevention of species substitution. This multiplex PCR method allows for rapid, high-throughput species identification even in heavily processed and mixed-species samples. An inter-laboratory study is currently being carried out to assess the ability of this method to identify species in a variety of commercial salmon and trout products.
Li, Peng; Ji, Haoran; Wang, Chengshan; ...
2017-03-22
The increasing penetration of distributed generators (DGs) exacerbates the risk of voltage violations in active distribution networks (ADNs). Conventional voltage regulation devices, limited by their physical constraints, can hardly meet the requirement of real-time, high-precision voltage and VAR control (VVC) when DGs fluctuate frequently. The soft open point (SOP), a flexible power electronic device, can instead be used as a continuous reactive power source to realize fast voltage regulation. Considering the cooperation of the SOP with multiple regulation devices, this paper proposes a coordinated VVC method based on SOP for ADNs. First, a time-series model of coordinated VVC is developed to minimize operation costs and eliminate voltage violations in ADNs. Then, by applying linearization and conic relaxation, the original nonconvex mixed-integer nonlinear optimization model is converted into a mixed-integer second-order cone programming (MISOCP) model, which can be solved efficiently enough to meet the rapidity requirement of voltage regulation. Case studies on the IEEE 33-node and IEEE 123-node systems illustrate the effectiveness of the proposed method.
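The conic relaxation step named in the abstract can be illustrated on a single branch of the branch-flow model: the nonconvex equality l = (P^2 + Q^2)/v is relaxed to the convex cone constraint P^2 + Q^2 <= l*v. Below is a minimal sketch with cvxpy, with all impedances and limits as assumed placeholder values; the paper's full MISOCP model is not reproduced here:

```python
# Single-branch sketch of the second-order cone relaxation used in
# branch-flow (DistFlow) optimization models.
import cvxpy as cp

P, Q = cp.Variable(), cp.Variable()       # branch active/reactive power flow
l = cp.Variable(nonneg=True)              # squared branch current
v = cp.Variable(nonneg=True)              # squared sending-end voltage

r = 0.05                                  # branch resistance (p.u.), assumed
constraints = [
    cp.quad_over_lin(cp.hstack([P, Q]), v) <= l,   # conic relaxation of l = (P^2+Q^2)/v
    v >= 0.9 ** 2, v <= 1.1 ** 2,                  # voltage magnitude limits
    P == 1.0, Q == 0.2,                            # load served by the branch, assumed
]
prob = cp.Problem(cp.Minimize(r * l), constraints)  # minimize branch loss
prob.solve()
print(l.value, v.value)
```

Because loss minimization pushes the current variable down onto the cone boundary, the relaxation is typically tight for such radial network problems, which is what makes the MISOCP reformulation both exact enough and fast enough for real-time VVC.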
2010-01-01
Background The objectives of this study were to investigate whether there were differences between Norwegian Red cows in conventional and organic farming with respect to reproductive performance, udder health, and antibiotic resistance in udder pathogens. Methods Twenty-five conventional and 24 organic herds from south-east and middle Norway participated in the study. Herds were matched such that geographical location, herd size, and barn types were similar across the cohorts. All organic herds were certified as organic between 1997 and 2003. All herds were members of the Norwegian Dairy Herd Recording System. The herds were visited once during the study. The relationships between the outcomes and explanatory variables were assessed using mixed linear models. Results There were fewer cows beyond second parity in conventional farming. The conventional cows had higher milk yields and received more concentrates than organic cows. After adjustment for milk yield and parity, however, somatic cell count was lower in organic cows than in conventional cows. There was a higher proportion of quarters that were dried off at the herd visit in organic herds. No differences in the interval to first AI, interval to last AI, or calving interval were revealed between organic and conventional cows. There was no difference between conventional and organic cows in quarter samples positive for mastitis bacteria from the herd visit. Milk yield and parity were associated with the likelihood of at least one quarter positive for mastitis bacteria. Few S. aureus isolates were resistant to penicillin in either management system. Penicillin resistance among coagulase-negative staphylococci isolated from subclinically infected quarters was 48.5% in conventional herds and 46.5% in organic herds. Conclusion There were no large differences in reproductive performance and udder health between conventional and organic farming for Norwegian Red cows. PMID:20141638
A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.
Noh, Dong K; Lee, Nam G; You, Joshua H
2014-01-01
This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R2 = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
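The iteration described above can be miniaturized as follows: a matrix of partial derivatives of potential with respect to the geometry parameters drives a least-squares geometry update toward the prescribed distribution, and the perturbed solution is obtained by linear extrapolation instead of a second analysis. The quadratic "analysis" function below is only a stand-in for a panel code; everything here is an illustrative assumption:

```python
# Sketch of the design iteration: sensitivity matrix + least-squares update
# + linear extrapolation of the perturbed solution.
import numpy as np

rng = np.random.default_rng(0)
n_ctrl, n_geo = 40, 8
A = rng.normal(size=(n_ctrl, n_geo))              # stand-in influence matrix

def analyze(g):
    # Stand-in for one full panel-method analysis (mildly nonlinear).
    return A @ g + 0.05 * (A @ g) ** 2

g = np.zeros(n_geo)                               # baseline geometry parameters
phi_target = analyze(rng.normal(size=n_geo))      # prescribed distribution

for _ in range(6):
    phi = analyze(g)                              # analysis of current geometry
    # Partial derivatives of potential w.r.t. each geometry parameter
    # (finite differences here; the paper forms them from a first-order
    # expansion of the baseline panel equations).
    J = np.column_stack([(analyze(g + 1e-6 * e) - phi) / 1e-6
                         for e in np.eye(n_geo)])
    dg, *_ = np.linalg.lstsq(J, phi_target - phi, rcond=None)
    phi_extrap = phi + J @ dg                     # linear extrapolation, used in
    g = g + dg                                    # place of a second analysis

print(np.linalg.norm(analyze(g) - phi_target))    # residual after the iterations
```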
Bai, Shirong; Skodje, Rex T
2017-08-17
A new approach is presented for simulating the time evolution of chemically reactive systems. This method provides an alternative to conventional modeling of mass-action kinetics that involves solving differential equations for the species concentrations. The method presented here avoids the need to solve the rate equations by switching to a representation based on chemical pathways. In the Sum Over Histories Representation (or SOHR) method, any time-dependent kinetic observable, such as concentration, is written as a linear combination of probabilities for chemical pathways leading to a desired outcome. In this work, an iterative method is introduced that allows the time-dependent pathway probabilities to be generated from a knowledge of the elementary rate coefficients, thus avoiding the pitfalls involved in solving the differential equations of kinetics. The method is successfully applied to the model Lotka-Volterra system and to a realistic H2 combustion model.
Hagelstein, P.L.
1984-06-25
A short wavelength laser is provided that is driven by conventional-laser pulses. A multiplicity of panels, mounted on substrates, are supported in two separated and alternately staggered facing and parallel arrays disposed along an approximately linear path. When the panels are illuminated by the conventional-laser pulses, single pass EUV or soft x-ray laser pulses are produced.
Churpek, Matthew M; Yuen, Trevor C; Winslow, Christopher; Meltzer, David O; Kattan, Michael W; Edelson, Dana P
2016-02-01
Machine learning methods are flexible prediction algorithms that may be more accurate than conventional regression. We compared the accuracy of different techniques for detecting clinical deterioration on the wards in a large, multicenter database. Observational cohort study of five hospitals, from November 2008 until January 2013, including hospitalized ward patients, with no interventions. Demographic variables, laboratory values, and vital signs were utilized in a discrete-time survival analysis framework to predict the combined outcome of cardiac arrest, intensive care unit transfer, or death. Two logistic regression models (one using linear predictor terms and a second utilizing restricted cubic splines) were compared to several different machine learning methods. The models were derived in the first 60% of the data by date and then validated in the next 40%. For model derivation, each event time window was matched to a non-event window. All models were compared to each other and to the Modified Early Warning Score (MEWS), a commonly cited early warning score, using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patients were admitted, and 424 cardiac arrests, 13,188 intensive care unit transfers, and 2,840 deaths occurred in the study. In the validation dataset, the random forest model was the most accurate model (AUC, 0.80 [95% CI, 0.80-0.80]). The logistic regression model with spline predictors was more accurate than the model utilizing linear predictors (AUC, 0.77 vs 0.74; p < 0.01), and all models were more accurate than the MEWS (AUC, 0.70 [95% CI, 0.70-0.70]). In this multicenter study, we found that several machine learning methods more accurately predicted clinical deterioration than logistic regression. Use of detection algorithms derived from these techniques may result in improved identification of critically ill patients on the wards.
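A sketch of the comparison performed in this study (logistic regression with linear and spline-expanded predictors versus a random forest, scored by AUC on a chronologically later validation split) is given below, with synthetic data standing in for the ward vital-sign dataset; the sizes and settings are assumptions:

```python
# Compare linear logistic regression, spline logistic regression, and a
# random forest by validation AUC, using a time-ordered 60/40 split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import SplineTransformer
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 10))                  # stand-in vitals/labs
# Outcome depends nonlinearly on the first feature, so flexible models win.
y = (rng.random(20000) < 1 / (1 + np.exp(-(X[:, 0] ** 2 - 1)))).astype(int)

split = int(0.6 * len(X))                         # derivation vs validation "by date"
X_tr, X_va, y_tr, y_va = X[:split], X[split:], y[:split], y[split:]

models = {
    "logistic (linear)": LogisticRegression(max_iter=1000),
    "logistic (splines)": make_pipeline(SplineTransformer(degree=3, n_knots=5),
                                        LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```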
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida
2012-01-01
Malaria is one of the most serious global health problems, causing widespread suffering and death in various parts of the world. With the large number of cases diagnosed each year, early detection and accurate diagnosis, which facilitates prompt treatment, is an essential requirement for controlling malaria. For centuries now, manual microscopic examination of blood slides has remained the gold standard for malaria diagnosis. However, the low contrast of the malaria parasites and variable smear quality are some factors that may influence the accuracy of interpretation by microbiologists. To reduce this problem, this paper investigates the performance of the proposed contrast enhancement techniques, namely modified global and modified linear contrast stretching, as well as the conventional global and linear contrast stretching, applied to malaria images of the P. vivax species. The results show that the proposed modified global and modified linear contrast stretching techniques successfully increase the contrast of the parasites and the infected red blood cells compared with conventional global and linear contrast stretching. Hence, the resulting images would be useful to microbiologists for identification of the various stages and species of malaria.
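The distinction the study draws can be illustrated with a minimal sketch: conventional stretching over the full min-max range versus a percentile-bounded variant that ignores a few extreme pixels. The paper's actual modified techniques are not reproduced here; the percentile bounds below are an illustrative assumption:

```python
# Conventional global contrast stretching vs a percentile-bounded variant.
import numpy as np

def global_stretch(img):
    # Conventional stretching over the full intensity range.
    lo, hi = img.min(), img.max()
    return (img - lo) / max(hi - lo, 1) * 255

def partial_stretch(img, p_lo=5, p_hi=95):
    # Stretch between chosen percentiles, clipping the tails; a few extreme
    # pixels then no longer compress the range of interest.
    lo, hi = np.percentile(img, [p_lo, p_hi])
    return np.clip((img - lo) / max(hi - lo, 1), 0, 1) * 255

img = np.random.default_rng(0).integers(60, 160, (256, 256)).astype(float)
img[0, 0], img[0, 1] = 0, 255          # two outlier pixels dominate min/max
print(global_stretch(img).std(), partial_stretch(img).std())  # second is larger
```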
A new method for calculation of the chlorine demand of natural and treated waters.
Warton, Ben; Heitz, Anna; Joll, Cynthia; Kagi, Robert
2006-08-01
Conventional methods of calculating chlorine demand are dose dependent, making intercomparison of samples difficult, especially in cases where the samples contain substantially different concentrations of dissolved organic carbon (DOC), or other chlorine-consuming species. Using the method presented here, the values obtained for chlorine demand are normalised, allowing valid comparison of chlorine demand between samples, independent of the chlorine dose. Since the method is not dose dependent, samples with substantially differing water quality characteristics can be reliably compared. In our method, we dosed separate aliquots of a water sample with different chlorine concentrations, and periodically measured the residual chlorine concentrations in these subsamples. The chlorine decay data obtained in this way were then fitted to first-order exponential decay functions, corresponding to short-term demand (0-4 h) and long-term demand (4-168 h). From the derived decay functions, the residual concentrations at a given time within the experimental time window were calculated and plotted against the corresponding initial chlorine concentrations, giving a linear relationship. From this linear function, it was then possible to determine the residual chlorine concentration for any initial concentration (i.e. dose). Thus, using this method, the initial chlorine dose required to give any residual chlorine concentration can be calculated for any time within the experimental time window, from a single set of experimental data.
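A minimal sketch of this calculation, assuming a single first-order decay per dose for brevity (the paper fits separate short-term and long-term functions), with made-up decay constants and doses:

```python
# Fit first-order decay per dose, evaluate each fit at a chosen time, then
# regress residual against initial dose to obtain the linear dose-residual
# relation described in the abstract.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def decay(t, c0, k):
    return c0 * np.exp(-k * t)                   # first-order chlorine decay

t = np.array([0, 1, 2, 4, 24, 48, 96, 168], dtype=float)   # hours
doses = np.array([1.0, 2.0, 4.0, 6.0])                      # initial Cl2, mg/L (assumed)

rng = np.random.default_rng(0)
residual_at = []
for dose in doses:
    obs = decay(t, dose * 0.9, 0.01) + rng.normal(0, 0.01, t.size)  # synthetic data
    (c0, k), _ = curve_fit(decay, t, obs, p0=(dose, 0.05))
    residual_at.append(decay(72.0, c0, k))       # residual at 72 h from the fit

fit = linregress(doses, residual_at)             # linear dose-residual relation
# Dose required for a 0.5 mg/L residual at 72 h, from the linear function:
print((0.5 - fit.intercept) / fit.slope)
```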
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters, each low pass filter having a series-coupled inductance (L) and a reverse-biased, voltage-dependent varactor diode to ground, which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line.
NASA Astrophysics Data System (ADS)
Sakaguchi, Toshimasa; Fujigaki, Motoharu; Murata, Yorinobu
2015-03-01
An accurate and wide-range shape measurement method is required in industrial fields. The same technique could also be used for shape measurement of the human body in the garment industry. Compact 3D shape measurement equipment is also required for embedding in inspection systems. Shape measurement by a phase-shifting method can capture shape with high spatial resolution because the coordinates can be obtained pixel by pixel. A key device for developing compact equipment is the grating projector. The authors developed a linear LED projector and proposed a light source stepping method (LSSM) using it. With this method, shape measurement equipment can be produced at low cost and in compact form, without any mechanical phase-shifting systems; it also enables 3D shape measurement in a very short time by switching the light sources quickly. A phase unwrapping method is necessary to widen the measurement range at constant accuracy in phase-shifting methods. A general phase unwrapping method using two different grating pitches is often applied and is one of the simplest approaches. It is, however, difficult to apply this conventional phase unwrapping algorithm to the LSSM. The authors therefore developed an expansion unwrapping algorithm for the LSSM. In this paper, an algorithm for expanding the measurement range, suited to 3D shape measurement using two pitches of projected grating with the LSSM, is evaluated.
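For orientation, the conventional two-pitch approach that the LSSM-specific algorithm replaces can be sketched as follows: wrapped phases measured at two pitches are combined into a beat phase whose synthetic pitch extends the unambiguous range, and the resulting fringe order restores the fine phase. The pitches and ranges are illustrative assumptions:

```python
# Four-step phase-shifting plus a conventional two-pitch unwrapping step.
import numpy as np

def phase_from_steps(I0, I1, I2, I3):
    # Wrapped phase from four intensity images phase-shifted by pi/2 each.
    return np.arctan2(I3 - I1, I0 - I2)

x = np.linspace(0.0, 50.0, 2000)               # true position (mm)
p1, p2 = 8.0, 9.0                              # two grating pitches (mm), assumed
phi1 = np.angle(np.exp(2j * np.pi * x / p1))   # wrapped phases (stand-ins for
phi2 = np.angle(np.exp(2j * np.pi * x / p2))   # phase_from_steps output)

# The beat phase varies with the synthetic pitch p1*p2/(p2 - p1) = 72 mm,
# extending the unambiguous range beyond either single pitch.
p_synth = p1 * p2 / (p2 - p1)
beat = np.mod(phi1 - phi2, 2 * np.pi)
x_coarse = beat / (2 * np.pi) * p_synth        # coarse absolute position
order = np.round(x_coarse / p1 - phi1 / (2 * np.pi))   # fringe order at pitch p1
x_fine = (phi1 + 2 * np.pi * order) * p1 / (2 * np.pi)
print(np.max(np.abs(x_fine - x)))              # ~0: range extended, accuracy kept
```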
Pérez-Olmos, R; Rios, A; Fernández, J R; Lapa, R A; Lima, J L
2001-01-05
In this paper, the construction and evaluation of a nitrate-selective electrode with improved sensitivity is described; it is built like a conventional ion-selective electrode (ISE) but uses an operational amplifier to sum the potentials supplied by four membranes (ESOA). The two types of electrodes, without an inner reference solution, were constructed using tetraoctylammonium bromide as the sensor, dibutylphthalate as the solvent mediator, and PVC as the plastic matrix, with the membranes applied directly onto a conductive epoxy resin support. After a comparative evaluation of their working characteristics, they were used to determine nitrate in different types of tobacco. The limit of detection of the direct potentiometric method developed was found to be 0.18 g kg(-1), and the precision and accuracy of the method, when applied to eight different samples of tobacco, expressed in terms of mean R.S.D. and average percentage of spike recovery, were 0.6 and 100.3%, respectively. The comparison of variances showed, on all occasions, that the results obtained with the ESOA were similar to those obtained with the conventional ISE, but with higher precision. Linear regression analysis showed good agreement (r=0.9994) between the results obtained by the developed potentiometric method and those of a spectrophotometric method based on brucine, adopted as the reference method, when applied simultaneously to 32 samples of different types of tobacco.
Adventitious sounds identification and extraction using temporal-spectral dominance-based features.
Jin, Feng; Krishnan, Sridhar Sri; Sattar, Farook
2011-11-01
Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system through the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused on the analysis of the evolution of symptom-related signal components in the joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. The presented TF decomposition method produces a noise-resistant, high-definition TF representation of RS signals compared with conventional linear TF analysis methods, yet preserves the low computational complexity relative to quadratic TF analysis methods. The phase information discarded in the conventional spectrogram is adopted for the estimation of IF and group delay, and a temporal-spectral dominance spectrogram is subsequently constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components corresponding to ASs from noisy RS signals at high noise levels. A new set of TF features is also proposed to quantify the shapes of the obtained TF contours, which strongly enhances the identification of multicomponent signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method.
A feasibility study using TomoDirect for craniospinal irradiation
Molloy, Janelle A.; Gleason, John F.; Feddock, Jonathan M.
2013-01-01
The feasibility of delivering craniospinal irradiation (CSI) with TomoDirect is investigated. A method is proposed to generate TomoDirect plans using standard three‐dimensional (3D) beam arrangements on Tomotherapy with junctioning of these fields to minimize hot or cold spots at the cranial/spinal junction. These plans are evaluated and compared to a helical Tomotherapy and a three‐dimensional conformal therapy (3D CRT) plan delivered on a conventional linear accelerator (linac) for CSI. The comparison shows that a TomoDirect plan with an overlap between the cranial and spinal fields might be preferable over Tomotherapy plans because of decreased low dose to large volumes of normal tissues outside of the planning target volume (PTV). Although the TomoDirect plans were not dosimetrically superior to a 3D CRT linac plan, the patient can be easily treated in the supine position, which is often more comfortable and efficient from an anesthesia standpoint. TomoDirect plans also have only one setup position which obviates the need for matching of fields and feathering of junctions, two issues encountered with conventional 3D CRT plans. TomoDirect plans can be delivered with comparable treatment times to conventional 3D plans and in shorter times than a Tomotherapy plan. In this paper, a method is proposed for creating TomoDirect craniospinal plans, and the dosimetric consequences for choosing different planning parameters are discussed. PACS number: 87.55.D‐ PMID:24036863
Prananingrum, Widyasri; Tomotake, Yoritoki; Naito, Yoshihito; Bae, Jiyoung; Sekine, Kazumitsu; Hamada, Kenichi; Ichikawa, Tetsuo
2016-08-01
The prosthetic application of titanium has been challenging because titanium does not possess suitable properties for the conventional casting method using the lost-wax technique. We have developed a production method for biomedical porous titanium using a moldless process. This study aimed to evaluate the physical and mechanical properties of porous titanium for use in prosthesis production, using various particle sizes, shapes, and mixing ratios of titanium powder to wax binder. CP Ti powders with different particle sizes, shapes, and mixing ratios were divided into five groups. A 90:10 wt% mixture of titanium powder and wax binder was prepared manually at 70°C. After debinding at 380°C, the specimen was sintered in Ar at 1100°C without a mold for 1 h. The linear shrinkage ratio of the sintered specimens ranged from 2.5% to 14.2% and increased with decreasing particle size. While the linear shrinkage ratios of Groups 3, 4, and 5 were approximately 2%, Group 1 showed the highest shrinkage of all. The bending strength ranged from 106 to 428 MPa under the influence of porosity: Groups 1 and 2 presented low porosity and accordingly higher strength. The shear bond strength ranged from 32 to 100 MPa and was also particle-size dependent. A decrease in porosity increased the linear shrinkage ratio and bending strength. The shrinkage and mechanical strength required for prostheses were dependent on the particle size and shape of the titanium powders. These findings suggest that this production method can be applied to prosthetic frameworks by selecting the material design. Copyright © 2016 Elsevier Ltd. All rights reserved.
Phan, Quoc-Hung; Lo, Yu-Lung
2017-04-01
A surface plasmon resonance (SPR)-enhanced method is proposed for measuring the circular dichroism (CD), circular birefringence (CB), and degree of polarization (DOP) of turbid media using a Stokes-Mueller matrix polarimetry technique. The validity of the analytical model is confirmed by means of numerical simulations. The simulation results show that the proposed detection method enables the CD and CB properties to be measured with resolutions of 10(-4) refractive index units (RIU) and 10(-5) RIU, respectively, for refractive indices in the range of 1.3 to 1.4. The practical feasibility of the proposed method is demonstrated by detecting the CB/CD/DOP properties of glucose-chlorophyllin compound samples containing polystyrene microspheres. It is shown that the extracted CB value decreases linearly with the glucose concentration, while the extracted CD value increases linearly with the chlorophyllin concentration. However, the DOP is insensitive to both the glucose concentration and the chlorophyllin concentration. Consequently, the potential of the proposed SPR-enhanced Stokes-Mueller matrix polarimetry method for high-resolution CB/CD/DOP detection is confirmed. Notably, in contrast to conventional SPR techniques designed to detect relative refractive index changes, the SPR technique proposed in the present study allows absolute measurements of the optical properties (CB/CD/DOP) to be obtained.
ONERA-NASA Cooperative Effort on Liner Impedance Eduction
NASA Technical Reports Server (NTRS)
Primus, Julien; Piot, Estelle; Simon, Frank; Jones, Michael G.; Watson, Willie R
2013-01-01
As part of a cooperation between ONERA and NASA, the liner impedance eduction methods developed by the two research centers are compared. The NASA technique relies on an objective function built on acoustic pressure measurements located on the wall opposite the test liner, and the propagation code solves the convected Helmholtz equation in uniform flow using a finite element method that implements a continuous Galerkin discretization. The ONERA method uses an objective function based either on wall acoustic pressure or on acoustic velocity acquired above the liner by Laser Doppler Anemometry, and the propagation code solves the linearized Euler equations by a discontinuous Galerkin discretization. Two acoustic liners are tested in both the ONERA and NASA flow ducts and the measured data are treated with the corresponding impedance eduction method. The first liner is a wire mesh facesheet mounted onto a honeycomb core, designed to be linear with respect to incident sound pressure level and to grazing flow velocity. The second one is a conventional, nonlinear, perforate-over-honeycomb single layer liner. Configurations without and with flow are considered. For the nonlinear liner, the comparison of liner impedance educed by NASA and ONERA shows a sensitivity to the experimental conditions, namely to the nature of the source and to the sample width.
Measuring nanoscale viscoelastic parameters of cells directly from AFM force-displacement curves.
Efremov, Yuri M; Wang, Wen-Horng; Hardy, Shana D; Geahlen, Robert L; Raman, Arvind
2017-05-08
Force-displacement (F-Z) curves are the most commonly used Atomic Force Microscopy (AFM) mode to measure the local, nanoscale elastic properties of soft materials like living cells. Yet a theoretical framework has been lacking that allows the post-processing of F-Z data to extract their viscoelastic constitutive parameters. Here, we propose a new method to extract nanoscale viscoelastic properties of soft samples like living cells and hydrogels directly from conventional AFM F-Z experiments, thereby creating a common platform for the analysis of cell elastic and viscoelastic properties with arbitrary linear constitutive relations. The method, based on the elastic-viscoelastic correspondence principle, was validated using finite element (FE) simulations and by comparison with existing AFM techniques on living cells and hydrogels. The method also allows discrimination of which viscoelastic relaxation model, for example, standard linear solid (SLS) or power-law rheology (PLR), best suits the experimental data. The method was used to extract the viscoelastic properties of benign and cancerous cell lines (NIH 3T3 fibroblasts, NMuMG epithelial cells, MDA-MB-231 and MCF-7 breast cancer cells). Finally, we studied the changes in viscoelastic properties related to tumorigenesis, including TGF-β induced epithelial-to-mesenchymal transition in NMuMG cells and Syk expression induced phenotype changes in MDA-MB-231 cells.
Mathematical Simulation for Integrated Linear Fresnel Spectrometer Chip
NASA Technical Reports Server (NTRS)
Park, Yeonjoon; Yoon, Hargoon; Lee, Uhn; King, Glen C.; Choi, Sang H.
2012-01-01
A miniaturized solid-state optical spectrometer chip was designed with a linear gradient-gap Fresnel grating, mounted perpendicular to a sensor array surface, and simulated for its performance and functionality. Unlike common spectrometers, which are based on Fraunhofer diffraction with a regular periodic line grating, the new linear gradient grating Fresnel spectrometer chip can be miniaturized to a much smaller form factor, into the Fresnel regime, exceeding the limits of conventional spectrometers. The mathematical calculation shows that it is possible to build a tiny, motionless, multi-pixel microspectrometer chip with an optical path volume smaller than 1 cubic millimeter. The new Fresnel spectrometer chip scales with the energy (hc/lambda), while conventional spectrometers scale with the wavelength (lambda). We report the theoretical optical working principle and the new data collection algorithm of the Fresnel spectrometer for building a compact integrated optical chip.
Theory, simulation and experiments for precise deflection control of radiotherapy electron beams.
Figueroa, R; Leiva, J; Moncada, R; Rojas, L; Santibáñez, M; Valente, M; Velásquez, J; Young, H; Zelada, G; Yáñez, R; Guillen, Y
2018-03-08
Conventional radiotherapy is mainly delivered by linear accelerators. Although linear accelerators provide dual (electron/photon) radiation beam modalities, both are intrinsically produced by a megavoltage electron current. Modern radiotherapy treatment techniques are based on suitable devices inserted in or attached to conventional linear accelerators; thus, precise control of the delivered beam becomes a key issue. This work presents an integral description of electron beam deflection control as required for a novel radiotherapy technique based on convergent photon beam production. Theoretical and Monte Carlo approaches were initially used to design and optimize the device's components. Then, dedicated instrumentation was developed for experimental verification of the electron beam deflection due to the designed magnets. Both Monte Carlo simulations and experimental results support the reliability of the electrodynamics models used to predict megavoltage electron beam control. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori
2017-12-01
It is very important to design electrical machinery with high efficiency from the point of view of saving energy. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO can achieve designs with a much higher degree of structural freedom, there is a possibility of deriving novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding problem, in which there are many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
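A hedged sketch of sequential linear programming with an adaptive, success-based move limit, the scheme named above, is given below; the finite element magnetics analysis is replaced by an analytic test objective, and the relaxation constants are assumed:

```python
# Sequential linear programming: at each step, minimize the linearized
# objective inside a box (move limit), then relax or tighten the box
# depending on whether the step actually improved the true objective.
import numpy as np
from scipy.optimize import linprog

def f(x):          # stand-in objective (to minimize)
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def grad(x):
    return np.array([2 * (x[0] - 1.0), 2 * (x[1] + 0.5)])

x = np.array([3.0, 3.0])
move = 1.0                                     # current move limit
for _ in range(30):
    g = grad(x)
    # Linearized subproblem: minimize g.dx subject to |dx_i| <= move.
    res = linprog(c=g, bounds=[(-move, move)] * x.size, method="highs")
    x_new = x + res.x
    if f(x_new) < f(x):
        x = x_new
        move *= 1.1                            # relax the move limit on success
    else:
        move *= 0.5                            # tighten it when the step fails
print(x, f(x))                                 # approaches the optimum (1, -0.5)
```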
A High-Order, Time Invariant, Linearized Model for Application to HHC/AFCS Interaction Studies
NASA Technical Reports Server (NTRS)
Cheng, Rendy P.; Tischler, Mark B.; Celi, Roberto
2003-01-01
This paper describes a methodology for the extraction of a linear time invariant model from a nonlinear helicopter model, followed by an examination of the interactions of the Higher Harmonic Control (HHC) and the Automatic Flight Control System (AFCS). This new method includes an embedded harmonic analyzer inside a linear time invariant model, which allows the periodicity of the helicopter response to be captured. The coupled high-order model provides the needed level of dynamic fidelity to permit an analysis and optimization of the AFCS and HHC loops. Results of this study indicate that the closed-loop HHC system has little influence on the AFCS or on the vehicle handling qualities, which indicates that the AFCS does not need modification to work with the HHC system. The results also show that the vibration response to maneuvers must be considered during the HHC design process, which leads to much higher required HHC loop crossover frequencies. This research also demonstrates that the transient vibration response during maneuvers can be reduced by optimizing the closed-loop higher harmonic control laws using conventional control system analyses.
Bayesian linearized amplitude-versus-frequency inversion for quality factor and its application
NASA Astrophysics Data System (ADS)
Yang, Xinchao; Teng, Long; Li, Jingnan; Cheng, Jiubing
2018-06-01
We propose a straightforward attenuation inversion method that utilizes the amplitude-versus-frequency (AVF) characteristics of seismic data. A new linearized approximation of the angle- and frequency-dependent reflectivity in viscoelastic media is derived. We then use the presented equation to implement a Bayesian linear AVF inversion. The inversion result includes not only the P-wave and S-wave velocities and density, but also the P-wave and S-wave quality factors. Synthetic tests show that the AVF inversion surpasses AVA inversion for quality factor estimation; however, a higher signal-to-noise ratio (SNR) of the data is necessary for the AVF inversion. To show its feasibility, we apply both the new Bayesian AVF inversion and conventional AVA inversion to tight gas reservoir data from the Sichuan Basin in China. Considering the SNR of the field data, a combination of AVF inversion for attenuation parameters and AVA inversion for elastic parameters is recommended. The result reveals that attenuation estimates could serve as a useful complement to the AVA inversion results for the detection of tight gas reservoirs.
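The Bayesian linearized inversion step has the standard Gaussian closed form, sketched below for a generic linearized AVF operator G; the sizes, covariances, and operator are illustrative assumptions, not the paper's parameterization:

```python
# Bayesian linear inversion: with a linear(ized) forward operator G, Gaussian
# noise, and a Gaussian prior, the posterior mean and covariance are closed
# form, as commonly used in linearized AVA/AVF inversion.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 60, 8
G = rng.normal(size=(n_data, n_model))       # stand-in linearized AVF operator
m_true = rng.normal(size=n_model)
sigma_d = 0.05                               # noise std (sets the usable SNR)
d = G @ m_true + sigma_d * rng.normal(size=n_data)

m_prior = np.zeros(n_model)
C_m = np.eye(n_model)                        # prior model covariance
C_d = sigma_d ** 2 * np.eye(n_data)          # data covariance

# C_post = (G^T C_d^-1 G + C_m^-1)^-1
# m_post = C_post (G^T C_d^-1 d + C_m^-1 m_prior)
Cd_inv, Cm_inv = np.linalg.inv(C_d), np.linalg.inv(C_m)
C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
m_post = C_post @ (G.T @ Cd_inv @ d + Cm_inv @ m_prior)
print(np.round(m_post - m_true, 3))          # small errors at this noise level
```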
New Ultra-High Sensitivity, Absolute, Linear, and Rotary Encoders
NASA Technical Reports Server (NTRS)
Leviton, Douglas B.
1998-01-01
Several new types of absolute optical encoders of both rotary and linear function are discussed. The means for encoding are complete departures from conventional optical encoders and offer advantages of compact form, immunity to damage-induced dropouts of position information, and about an order of magnitude higher sensitivity over what is commercially available. Rotary versions have sensitivity from 0.02 arcseconds down to 0.003 arcsecond while linear models have sensitivity of 10 nm.
NASA Astrophysics Data System (ADS)
Khan, Junaid Ahmad; Mustafa, M.
2018-03-01
Boundary layer flow around a stretchable rough cylinder is modeled by taking into account boundary slip and transverse magnetic field effects. The main concern is to resolve the heat/mass transfer problem considering non-linear radiative heat transfer and temperature/concentration jump aspects. Using a conventional similarity approach, the equations of motion and heat transfer are converted into a boundary value problem whose solution is computed by a shooting method for a broad range of slip coefficients. The proposed numerical scheme appears to improve as the strengths of the magnetic field and the slip coefficients are enhanced. Axial velocity and temperature are considerably influenced by a parameter M, which is inversely proportional to the radius of the cylinder. A significant change in the temperature profile is observed as the wall-to-ambient temperature ratio grows. Relevant physical quantities such as wall shear stress, local Nusselt number, and local Sherwood number are elucidated in detail.
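The shooting method mentioned above can be illustrated on a stand-in two-point boundary value problem: guess the unknown initial slope, integrate the ODE to a finite stand-in for infinity, and correct the guess by a secant iteration on the far-field condition. The actual similarity equations of the paper are not reproduced here:

```python
# Shooting method for a stand-in BVP: f'' = f with f(0) = 1, f(inf) = 0,
# whose exact solution is f = exp(-eta) with unknown slope f'(0) = -1.
import numpy as np
from scipy.integrate import solve_ivp

eta_max = 10.0                       # numerical stand-in for "infinity"

def shoot(s):
    # Integrate with trial slope s = f'(0) and return the far-field value,
    # which the boundary condition requires to vanish.
    sol = solve_ivp(lambda t, y: [y[1], y[0]], (0, eta_max), [1.0, s],
                    rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# Secant iteration on the unknown initial slope.
s0, s1 = -2.0, -0.5
f0, f1 = shoot(s0), shoot(s1)
for _ in range(20):
    s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
    f0, f1 = f1, shoot(s1)
    if abs(f1) < 1e-10:
        break
print(s1)    # converges to -1, matching the exact solution f = exp(-eta)
```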
Nonlinear versus Ordinary Adaptive Control of Continuous Stirred-Tank Reactor
Dostal, Petr
2015-01-01
Most systems in industry exhibit nonlinear behavior, and controlling such processes with conventional fixed-parameter approaches leads to problems and to suboptimal or unstable control results. Adaptive control is one way to cope with the nonlinearity of a system. This contribution compares classic adaptive control with a modification based on a Wiener system. This configuration divides the nonlinear controller into a dynamic linear part and a static nonlinear part. The dynamic linear part is constructed using polynomial synthesis together with the pole-placement method and spectral factorization. The static nonlinear part uses a static analysis of the controlled plant to introduce a mathematical nonlinear description of the relation between the controlled output and the change of the control input. The proposed controller is tested by simulations on a mathematical model of a continuous stirred-tank reactor with cooling in the jacket, a typical nonlinear system. PMID:26346878
Fluorescence-based monitoring of tracer and substrate distribution in an UASB reactor.
Lou, S J; Tartakovsky, B; Zeng, Y; Wu, P; Guiot, S R
2006-11-01
In this work, rhodamine-related fluorescence was measured on-line at four reactor heights in order to study hydrodynamics within an upflow anaerobic sludge bed reactor. A linear dependence of the dispersion coefficient (D) on the upflow velocity was observed, while the influence of the organic loading rate (OLR) was insignificant. Furthermore, the Bodenstein number of the reactor loaded with granulated sludge was found to be position-dependent, with the largest values measured at the bottom of the sludge bed. This trend was not observed in the reactor without sludge. Chemical oxygen demand (COD) and volatile fatty acid (VFA) concentrations were measured at the same reactor heights as in the rhodamine tests, using conventional off-line analytical methods and on-line multiwavelength fluorometry. Significant spatial COD and VFA gradients were observed at organic loading rates above 6 g COD l(R)(-1) d(-1) and linear upflow velocities below 0.8 m h(-1).
On-line estimation of nonlinear physical systems
Christakos, G.
1988-01-01
Recursive algorithms for estimating the states of nonlinear physical systems are presented. Orthogonality properties are rediscovered and the associated polynomials are used to linearize the state and observation models of the underlying random processes. This requires some key hypotheses regarding the structure of these processes, which may then accommodate a wide range of applications, including streamflow forecasting, flood estimation, environmental protection, earthquake engineering, and mine planning. The proposed estimation algorithm may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. Moreover, the method has several advantages over nonrecursive estimators like disjunctive kriging. To link theory with practice, some numerical results for a simulated system are presented, in which responses from the proposed and extended Kalman algorithms are compared. © 1988 International Association for Mathematical Geology.
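For context, the conventional Taylor-series-type filter against which such algorithms are compared is the extended Kalman filter, sketched below for a scalar nonlinear model; the model and noise levels are illustrative assumptions:

```python
# Extended Kalman filter for a scalar nonlinear system: the state and
# observation models are linearized about the current estimate at each step.
import numpy as np

rng = np.random.default_rng(0)
q, r = 0.01, 0.25                        # process and observation noise variances

def f(x):   return x + 0.5 * np.sin(x)  # nonlinear state transition
def fp(x):  return 1 + 0.5 * np.cos(x)  # its derivative (Taylor linearization)
def h(x):   return x ** 2 / 2           # nonlinear observation
def hp(x):  return x                    # its derivative

# Simulate a trajectory and noisy observations.
x_true, xs, zs = 1.0, [], []
for _ in range(100):
    x_true = f(x_true) + np.sqrt(q) * rng.normal()
    xs.append(x_true)
    zs.append(h(x_true) + np.sqrt(r) * rng.normal())

x_est, P = 0.5, 1.0
for z in zs:
    # Predict
    x_pred = f(x_est)
    P_pred = fp(x_est) ** 2 * P + q
    # Update
    K = P_pred * hp(x_pred) / (hp(x_pred) ** 2 * P_pred + r)
    x_est = x_pred + K * (z - h(x_pred))
    P = (1 - K * hp(x_pred)) * P_pred
print(x_est, xs[-1])                     # estimate tracks the true state
```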
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
Bahreyni Toossi, Mohammad Taghi; Momennezhad, Mehdi; Hashemi, Seyed Mohammad
2012-01-01
Aim Exact knowledge of dosimetric parameters is an essential prerequisite of effective treatment in radiotherapy. To fulfill this requirement, different techniques have been used, one of which is Monte Carlo simulation. Materials and methods This study used the MCNP-4C code to simulate electron beams from the Neptun 10 PC medical linear accelerator. Output factors for 6, 8 and 10 MeV electrons applied to eleven different conventional fields were both measured and calculated. Results The measurements were carried out with a Wellhofer-Scanditronix dose scanning system. Our findings revealed that the output factors acquired by MCNP-4C simulation and the corresponding values obtained by direct measurement are in very good agreement. Conclusion In general, the very good consistency of the simulated and measured results is a good proof that the goal of this work has been accomplished. PMID:24377010
Supra-Nanoparticle Functional Assemblies through Programmable Stacking.
Tian, Cheng; Cordeiro, Marco Aurelio L; Lhermitte, Julien; Xin, Huolin L; Shani, Lior; Liu, Mingzhao; Ma, Chunli; Yeshurun, Yosef; DiMarzio, Donald; Gang, Oleg
2017-07-25
The quest for the by-design assembly of material and devices from nanoscale inorganic components is well recognized. Conventional self-assembly is often limited in its ability to control material morphology and structure simultaneously. Here, we report a general method of assembling nanoparticles in a linear "pillar" morphology with regulated internal configurations. Our approach is inspired by supramolecular systems, where intermolecular stacking guides the assembly process to form diverse linear morphologies. Programmable stacking interactions were realized through incorporation of DNA coded recognition between the designed planar nanoparticle clusters. This resulted in the formation of multilayered pillar architectures with a well-defined internal nanoparticle organization. By controlling the number, position, size, and composition of the nanoparticles in each layer, a broad range of nanoparticle pillars were assembled and characterized in detail. In addition, we demonstrated the utility of this stacking assembly strategy for investigating plasmonic and electrical transport properties.
Kim, Young Baek; Choi, Bum Ho; Lim, Yong Hwan; Yoo, Ha Na; Lee, Jong Ho; Kim, Jin Hyeok
2011-02-01
In this study, a pentacene organic thin film was prepared and characterized using a newly developed organic-material auto-feeding system integrated with a linear cell. The system consists of four major parts: a reservoir, a micro auto-feeder, a vaporizer, and the linear cell. The deposition of the organic thin film could be precisely controlled by adjusting the feeding rate, the main tube size, and the position and size of the nozzle. A 10 nm thick pentacene thin film prepared on a glass substrate exhibited a thickness uniformity of 3.46%, better than that achieved by the conventional evaporation method using a point cell. Continuous deposition without replenishment of organic material could be performed for over 144 hours with regulated deposition control. The grain size of the pentacene film, which affects the mobility of the OTFT, was controlled as a function of temperature.
Influence of salinity and temperature on acute toxicity of cadmium to Mysidopsis bahia molenock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voyer, R.A.; Modica, G.
1990-01-01
Acute toxicity tests were conducted to compare estimates of toxicity, as modified by salinity and temperature, based on response-surface techniques with those derived using conventional test methods, and to compare the effect of a single episodic exposure to cadmium as a function of salinity with that of continuous exposure. Regression analysis indicated that mortality following continuous 96-hr exposure is related to linear and quadratic effects of salinity and cadmium at 20°C, and to the linear and quadratic effects of cadmium only at 25°C. LC50s decreased with increases in temperature and decreases in salinity. Based on the regression model developed, 96-hr LC50s ranged from 15.5 to 28.0 μg Cd/L at 10 and 30% salinities, respectively, at 25°C; and from 47 to 85 μg Cd/L at these salinities at 20°C.
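The response-surface regression just described lends itself to a short sketch. Below is a minimal Python illustration, with every data value invented as a placeholder (the study's measurements are not reproduced), of fitting linear and quadratic salinity and cadmium effects and reading an LC50 off the fitted surface:

```python
# Minimal sketch (placeholder data, not the study's measurements) of a
# response-surface fit of mortality to linear and quadratic effects of
# salinity and cadmium, followed by a crude LC50 read-off.
import numpy as np

salinity = np.repeat([10.0, 20.0, 30.0], 3)          # percent (placeholder)
cadmium = np.tile([20.0, 50.0, 80.0], 3)             # ug Cd/L (placeholder)
mortality = np.array([0.30, 0.60, 0.90,
                      0.20, 0.50, 0.78,
                      0.10, 0.35, 0.60])             # fraction dead (placeholder)

# Design matrix: intercept + linear and quadratic terms of both factors
X = np.column_stack([np.ones_like(salinity), salinity, salinity**2,
                     cadmium, cadmium**2])
beta, *_ = np.linalg.lstsq(X, mortality, rcond=None)

def predicted_mortality(s: float, c: float) -> float:
    return float(beta @ np.array([1.0, s, s**2, c, c**2]))

# LC50 at a given salinity: the concentration whose predicted mortality is 50%
grid = np.linspace(1.0, 200.0, 2000)
for s in (10.0, 30.0):
    lc50 = grid[np.argmin([abs(predicted_mortality(s, c) - 0.5) for c in grid])]
    print(f"LC50 at salinity {s:.0f}: {lc50:.1f} ug Cd/L")
```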
Identifying fMRI Model Violations with Lagrange Multiplier Tests
Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor
2013-01-01
The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
Bayen, Stéphane; Yi, Xinzhu; Segovia, Elvagris; Zhou, Zhi; Kelly, Barry C
2014-04-18
Emerging contaminants such as antibiotics have received recent attention because they have been detected in natural waters and raise health concerns over potential antibiotic resistance. With the aim of investigating fast, high-throughput analysis, and eventually the continuous on-line analysis, of emerging contaminants, this study presents results on the analysis of seven selected antibiotics (sulfadiazine, sulfamethazine, sulfamerazine, sulfamethoxazole, chloramphenicol, lincomycin, tylosin) in surface freshwater and seawater using direct injection of a small sample volume (20 μL) in liquid chromatography electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS). Notably, direct injection of seawater into the LC-ESI-MS/MS was made possible by the post-column switch on the system, which allows salt-containing solutions flushed out of the column to be diverted to waste. Mean recoveries based on the isotope dilution method averaged 95±14% and 96±28% across the compounds for spiked freshwater and seawater, respectively. Linearity across six spiking levels was assessed and the response was linear (r² > 0.99) for all compounds. For real samples, direct injection concentrations were compared with those obtained by conventional SPE-based analysis, and both techniques concurred on the presence/absence and levels of the compounds. These results suggest direct injection is a reliable method to detect antibiotics in both freshwater and seawater. Method detection limits for the direct injection technique (37 pg/L to 226 ng/L in freshwater, and 16 pg/L to 26 ng/L in seawater) are sufficient for a number of environmental applications, for example the fast screening of water samples for ecological risk assessments. In the present study of real samples, this new method allowed, for example, the positive detection of some compounds (e.g. lincomycin) down to the sub-ng/L range. The direct injection method is relatively cheaper and faster, requires a smaller sample size, and is more robust to equipment cross-contamination than the conventional SPE-based method. Copyright © 2014 Elsevier B.V. All rights reserved.
Added, Marco Aurélio Nemitalla; Costa, Leonardo Oliveira Pena; Fukuda, Thiago Yukio; de Freitas, Diego Galace; Salomão, Evelyn Cassia; Monteiro, Renan Lima; Costa, Lucíola da Cunha Menezes
2013-10-24
Chronic nonspecific low back pain is a significant health condition with high prevalence worldwide, and it is associated with enormous costs to society. Clinical practice guidelines show that many interventions are available to treat patients with chronic low back pain, but the vast majority of these interventions have a modest effect in reducing pain and disability. An intervention that has become widespread in recent years is the use of elastic bandages called Kinesio Taping. Although Kinesio Taping has been used extensively in clinical practice, current evidence does not support the use of this intervention; however, these conclusions are based on a small number of underpowered studies. Therefore, questions remain about the effectiveness of the Kinesio Taping method as an addition to interventions, such as conventional physiotherapy, that are already recommended by current clinical practice guidelines, questions best answered by robust and high-quality randomised controlled trials. We aim to determine the effectiveness of adding Kinesio Taping for patients with chronic nonspecific low back pain who receive guideline-endorsed conventional physiotherapy. One hundred and forty-eight patients will be randomly allocated to receive either conventional physiotherapy, which consists of a combination of manual therapy techniques, general exercises, and specific stabilisation exercises (Guideline-Endorsed Conventional Physiotherapy Group), or conventional physiotherapy with the addition of Kinesio Taping applied to the lumbar spine (Conventional Physiotherapy plus Kinesio Taping Group) over a period of 5 weeks (10 sessions of treatment). Clinical outcomes (pain intensity, disability, and global perceived effect) will be collected at baseline and at 5 weeks, 3 months, and 6 months after randomisation. We will also collect satisfaction with care and adverse effects after treatment. Data will be collected by a blinded assessor. All statistical analyses will be conducted following the principles of intention to treat, and the effects of treatment will be calculated using linear mixed models. The results of this study will provide new information about the usefulness of Kinesio Taping as an additional component of a guideline-endorsed physiotherapy program in patients with chronic nonspecific low back pain.
Re-Mediating Classroom Activity with a Non-Linear, Multi-Display Presentation Tool
ERIC Educational Resources Information Center
Bligh, Brett; Coyle, Do
2013-01-01
This paper uses an Activity Theory framework to evaluate the use of a novel, multi-screen, non-linear presentation tool. The Thunder tool allows presenters to manipulate and annotate multiple digital slides and to concurrently display a selection of juxtaposed resources across a wall-sized projection area. Conventional, single screen presentation…
Caraviello, D Z; Weigel, K A; Gianola, D
2004-05-01
Predicted transmitting abilities (PTA) of US Jersey sires for daughter longevity were calculated using a Weibull proportional hazards sire model and compared with predictions from a conventional linear animal model. Culling data from 268,008 Jersey cows with first calving from 1981 to 2000 were used. The proportional hazards model included time-dependent effects of herd-year-season contemporary group and parity by stage of lactation interaction, as well as time-independent effects of sire and age at first calving. Sire variances and parameters of the Weibull distribution were estimated, providing heritability estimates of 4.7% on the log scale and 18.0% on the original scale. The PTA of each sire was expressed as the expected risk of culling relative to daughters of an average sire. Risk ratios (RR) ranged from 0.7 to 1.3, indicating that the risk of culling for daughters of the best sires was 30% lower than for daughters of average sires and nearly 50% lower than for daughters of the poorest sires. Sire PTA from the proportional hazards model were compared with PTA from a linear model similar to that used for routine national genetic evaluation of length of productive life (PL) using cross-validation in independent samples of herds. Models were compared using logistic regression of daughters' stayability to second, third, fourth, or fifth lactation on their sires' PTA values, with alternative approaches for weighting the contribution of each sire. Models were also compared using logistic regression of daughters' stayability to 36, 48, 60, 72, and 84 mo of life. The proportional hazards model generally yielded more accurate predictions according to these criteria, but differences in predictive ability between methods were smaller when using a Kullback-Leibler distance than with other approaches. Results of this study suggest that survival analysis methodology may provide more accurate predictions of genetic merit for longevity than conventional linear models.
SU-E-T-197: Helical Cranial-Spinal Treatments with a Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J; Bernard, D; Liao, Y
2014-06-01
Purpose: Craniospinal irradiation (CSI) of systemic disease requires a high level of beam intensity modulation to reduce dose to bone marrow and other critical structures. Current helical delivery machines can take 30 minutes or more of beam-on time to complete these treatments. This pilot study aims to test the feasibility of performing helical treatments with a conventional linear accelerator using longitudinal couch travel during multiple gantry revolutions. Methods: The VMAT optimization package of the Eclipse 10.0 treatment planning system was used to optimize pseudo-helical CSI plans of 5 clinical patient scans. Each gantry revolution was divided into three 120° arcs with each isocenter shifted longitudinally. Treatments requiring more than the maximum 10 arcs used multiple plans, with each plan after the first being optimized including the dose of the others (Figure 1). The beam pitch was varied between 0.2 and 0.9 (couch speed 5-20 cm/revolution and field width of 22 cm) and dose-volume histograms of critical organs were compared to tomotherapy plans. Results: Viable pseudo-helical plans were achieved using Eclipse. Decreasing the pitch from 0.9 to 0.2 lowered the maximum lens dose by 40%, the mean bone marrow dose by 2.1%, and the maximum esophagus dose by 17.5% (Figure 2). Linac-based helical plans showed dose results comparable to tomotherapy delivery for both target coverage and critical organ sparing, with the D50 of bone marrow and esophagus respectively 12% and 31% lower in the helical linear accelerator plan (Figure 3). Total mean beam-on time for the linear accelerator plan was 8.3 minutes, 54% faster than the tomotherapy average for the same plans. Conclusions: This pilot study has demonstrated the feasibility of planning pseudo-helical treatments for CSI targets using a conventional linac and dynamic couch movement, and supports the ongoing development of true helical optimization and delivery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rafat, M; Bazalova, M; Palma, B
Purpose: To characterize the effect of very rapid dose delivery as compared to conventional therapeutic irradiation times on clonogenic cell survival. Methods: We used a Varian Trilogy linear accelerator to deliver doses up to 10 Gy using a 6 MV SRS photon beam. We irradiated four cancer cell lines in times ranging from 30 sec to 30 min. We also used a Varian TrueBeam linear accelerator to deliver 9 MeV electrons at 10 Gy in 10 s to 30 min to determine the effect of irradiation time on cell survival. We then evaluated the effect of using 60 and 120 MeV electrons on cell survival using the Next Linear Collider Test Accelerator (NLCTA) beam line at the SLAC National Accelerator Laboratory. During irradiation, adherent cells were maintained at 37°C with 20% O2/5% CO2. Clonogenic assays were completed following irradiation to determine changes in cell survival due to dose delivery time and beam quality, and the survival data were fitted with the linear-quadratic model. Results: Cell lines varied in radiosensitivity, ranging from two to four logs of cell kill at 10 Gy for both conventional and very rapid irradiation. Delivering radiation in shorter times decreased survival in all cell lines. Log differences in cell kill ranged from 0.2 to 0.7 at 10 Gy for the short compared to the long irradiation time. Cell kill differences between short and long irradiations were more pronounced as doses increased for all cell lines. Conclusion: Our findings suggest that shortening delivery of therapeutic radiation doses to less than 1 minute may improve tumor cell kill. This study demonstrates the potential advantage of technologies under development to deliver stereotactic ablative radiation doses very rapidly. Bill Loo and Peter Maxim have received Honoraria from Varian and Research Support from Varian and RaySearch.
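The linear-quadratic fit mentioned above is a standard one-liner in practice. A minimal Python sketch follows; the surviving-fraction values are illustrative placeholders, not the study's data:

```python
# Hedged sketch: fitting the linear-quadratic (LQ) cell-survival model
# SF(D) = exp(-(alpha*D + beta*D^2)) to clonogenic-assay data.
import numpy as np

dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # Gy
surv = np.array([1.0, 0.60, 0.25, 0.08, 0.02, 4e-3])    # surviving fraction (placeholder)

# ln SF = -(alpha*D + beta*D^2): linear in (D, D^2) with zero intercept
A = np.column_stack([dose, dose**2])
coef, *_ = np.linalg.lstsq(A, -np.log(surv), rcond=None)
alpha, beta = coef
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, "
      f"alpha/beta = {alpha/beta:.1f} Gy")

# Predicted log10 cell kill at 10 Gy, comparable to the 'logs of kill' quoted
logs_kill = (alpha * 10 + beta * 100) / np.log(10)
print(f"predicted cell kill at 10 Gy: {logs_kill:.1f} logs")
```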
Zhang, Guoying; Gao, Bao; Huang, Hanmin
2015-06-22
A novel and efficient palladium-catalyzed hydroaminocarbonylation of alkenes with aminals has been developed under mild reaction conditions, and allows the synthesis of a wide range of N-alkyl linear amides in good yields with high regioselectivity. On the basis of this method, a cooperative catalytic system operating by the synergistic combination of palladium, paraformaldehyde, and acid was established for promoting the hydroaminocarbonylation of alkenes with both aromatic and aliphatic amines, which do not react well under conventional palladium-catalyzed hydroaminocarbonylation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Protection from Space Radiation
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Wilson, J. W.; Shinn, J. L.; Singleterry, R. C.; Clowdsley, M. S.; Cucinotta, F. A.; Badhwar, G. D.; Kim, M. Y.; Badavi, F. F.; Heinbockel, J. H.
2000-01-01
The exposures anticipated for our astronauts in the Human Exploration and Development of Space (HEDS) enterprise will be significantly higher (both annual and career) than those of any other occupational group. In addition, the exposures in deep space result largely from the Galactic Cosmic Rays (GCR), for which there is as yet little experience. Some evidence exists indicating that conventional linear energy transfer (LET) defined protection quantities (quality factors) may not be appropriate [1,2]. The purpose of this presentation is to evaluate our current understanding of radiation protection with laboratory and flight experimental data and to discuss recent improvements in interaction models and transport methods.
Issues in deep space radiation protection
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.; Singleterry, R. C.; Clowdsley, M. S.; Thibeault, S. A.; Cheatwood, F. M.; Schimmerling, W.; Cucinotta, F. A.; Badhwar, G. D.;
2001-01-01
The exposures in deep space are largely from the Galactic Cosmic Rays (GCR), for which there is as yet little biological experience. Mounting evidence indicates that conventional linear energy transfer (LET) defined protection quantities (quality factors) may not be appropriate for GCR ions. The available biological data indicate that aluminum alloy structures may generate inherently unhealthy internal spacecraft environments in the thickness range for space applications. Methods for optimization of spacecraft shielding and the associated role of materials selection are discussed. One material which may prove to be an important radiation protection material is hydrogenated carbon nanofibers. © 2001 Elsevier Science Ltd. All rights reserved.
Radiographic cup anteversion measurement corrected from pelvic tilt.
Wang, Liao; Thoreson, Andrew R; Trousdale, Robert T; Morrey, Bernard F; Dai, Kerong; An, Kai-Nan
2017-11-01
The purpose of this study was to develop a novel technique to improve the accuracy of radiographic cup anteversion measurement by correcting for the influence of pelvic tilt. Ninety virtual total hip arthroplasties were simulated from computed tomography data of 6 patients with 15 predetermined cup orientations. For each simulated implantation, anteroposterior (AP) virtual pelvic radiographs were generated for 11 predetermined pelvic tilts. A linear regression model was created to capture the relationship between the radiographic cup anteversion angle error measured on AP pelvic radiographs and pelvic tilt. In total, 990 virtual AP pelvic radiographs were measured, and 90 linear regression models were created. Pearson's correlation analyses confirmed a strong correlation between the errors of conventional radiographic cup anteversion angles measured on AP pelvic radiographs and the magnitude of pelvic tilt (P < 0.001). The means of the 90 slopes and y-intercepts of the regression lines were -0.8 and -2.5°, respectively, and these were applied as the general correction parameters for the proposed tool to correct conventional cup anteversion angles for the influence of pelvic tilt. The current method proposes to measure the pelvic tilt on a lateral radiograph and to use it as a correction for the radiographic cup anteversion measurement on an AP pelvic radiograph. Thus, both AP and lateral pelvic radiographs are required for the measurement of pelvic posture-integrated cup anteversion. Compared with conventional radiographic cup anteversion, the errors of pelvic posture-integrated radiographic cup anteversion were reduced from 10.03 (SD = 5.13) degrees to 2.53 (SD = 1.33) degrees. Pelvic posture-integrated cup anteversion measurement improves the accuracy of radiographic cup anteversion measurement, which shows the potential of further clarifying the etiology of postoperative instability based on planar radiographs. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
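Using the study's reported mean regression parameters (slope -0.8, intercept -2.5°), the correction can be sketched in a few lines of Python. The sign convention for pelvic tilt and the example numbers are assumptions for illustration only:

```python
# Hedged sketch of the reported correction: the study's mean regression
# parameters relate the anteversion error on an AP radiograph to pelvic
# tilt, so the error predicted from a measured tilt can be subtracted from
# the conventional measurement. Sign conventions here are assumed.
SLOPE = -0.8        # deg of error per deg of pelvic tilt (study mean)
INTERCEPT = -2.5    # deg of error at zero pelvic tilt (study mean)

def corrected_anteversion(measured_anteversion_deg: float,
                          pelvic_tilt_deg: float) -> float:
    """Remove the pelvic-tilt-induced error from a radiographic cup
    anteversion angle measured on an AP pelvic radiograph."""
    predicted_error = SLOPE * pelvic_tilt_deg + INTERCEPT
    return measured_anteversion_deg - predicted_error

# Example: 15 deg measured anteversion with 10 deg of (hypothetical) tilt
print(corrected_anteversion(15.0, 10.0))  # -> 25.5 deg under these assumptions
```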
Source localization in an ocean waveguide using supervised machine learning.
Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter
2017-09-01
Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
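The preprocessing-plus-classifier pipeline described above can be sketched compactly. The snippet below uses synthetic snapshots and one plausible normalization of the sample covariance matrix; it is an illustration of the idea, not the paper's exact implementation:

```python
# Hedged sketch: build a normalized sample covariance matrix (SCM) from
# vertical-array snapshots and classify discretized range bins with a
# random forest. All data are simulated stand-ins for real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_sensors, n_snapshots = 16, 32

def normalized_scm_features(snapshots: np.ndarray) -> np.ndarray:
    """snapshots: complex (n_sensors, n_snapshots) -> real feature vector."""
    snap = snapshots / np.linalg.norm(snapshots, axis=0, keepdims=True)
    scm = snap @ snap.conj().T / snap.shape[1]
    iu = np.triu_indices(n_sensors)          # SCM is Hermitian: upper triangle
    return np.concatenate([scm[iu].real, scm[iu].imag])

def simulate(range_class: int) -> np.ndarray:
    """Toy signal: range class crudely encoded as a phase gradient."""
    phase = np.exp(1j * 0.2 * range_class * np.arange(n_sensors))[:, None]
    noise = (rng.standard_normal((n_sensors, n_snapshots))
             + 1j * rng.standard_normal((n_sensors, n_snapshots)))
    return phase + 0.3 * noise

X = np.array([normalized_scm_features(simulate(c))
              for c in range(5) for _ in range(40)])
y = np.repeat(np.arange(5), 40)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```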
Thermal Signature Identification System (TheSIS)
NASA Technical Reports Server (NTRS)
Merritt, Scott; Bean, Brian
2015-01-01
We characterize both nonlinear and high order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
A simple filter circuit for denoising biomechanical impact signals.
Subramaniam, Suba R; Georgakis, Apostolos
2009-01-01
We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.
SU-F-E-06: Dosimetric Characterization of Small Photons Beams of a Novel Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almonte, A; Polanco, G; Sanchez, E
2016-06-15
Purpose: The aim of the present contribution was to measure the main dosimetric quantities of small fields produced by UNIQUE and evaluate its matching with the corresponding dosimetric data of a conventional 21EX linear accelerator (Varian) in operation at the same center. The second step was to evaluate the comparative performance of the EDGE diode detector and the PinPoint micro-ionization chamber for dosimetry of small fields. Methods: UNIQUE is configured with an MLC (120 leaves with 0.5 cm leaf width) and a single low photon energy of 6 MV. Beam data were measured with a scanning EDGE diode detector (volume of 0.019 mm³), a PinPoint micro-ionization chamber (PTW), and, for larger fields (≥ 4×4 cm²), a PTW Semiflex chamber (0.125 cm³). The scanning system used was the 3D cylindrical tank manufactured by Sun Nuclear, Inc. PDD and profile measurements were done at 100 cm SSD and 1.5 cm depth; the relative output factors were measured at 10 cm depth. Results: PDD and profile data showed less than 1% variation between the two linear accelerators for field sizes between 2×2 cm² and 5×5 cm². Output factor differences were less than 1% for field sizes between 3×3 cm² and 10×10 cm², and less than 1.5% for fields of 1.5×1.5 cm² and 2×2 cm², respectively. The dmax value of the EDGE diode detector, measured from the PDD, was 8.347 mm for the 0.5×0.5 cm² field of UNIQUE. The performance of the EDGE diode detector was comparable for all measurements in small fields. Conclusion: The UNIQUE linear accelerator shows dosimetric characteristics similar to those of the conventional 21EX Varian linear accelerator for small, medium, and large field sizes. The EDGE detector showed good performance in measuring dosimetric quantities in the small fields typically used in IMRT and radiosurgery treatments.
Noh, Min-Ki; Lee, Baek-Soo; Kim, Shin-Yeop; Jeon, Hyeran Helen; Kim, Seong-Hun; Nelson, Gerald
2017-11-01
This article presents an alternative surgical treatment method to correct severe anterior protrusion in an adult patient with an extremely thin alveolus. To accomplish an effective and efficient anterior segmental retraction without periodontal complications, the authors performed, under local anesthesia, a wide linear corticotomy and corticision in the maxilla and an anterior segmental osteotomy in the mandible. In the maxillary first premolar area, a wide section of cortical bone was removed, and retraction forces were applied buccolingually with the aid of temporary skeletal anchorage devices; corticision was later performed to close the residual extraction space. In the mandible, an anterior segmental osteotomy was performed and the first premolars were extracted under local anesthesia. In the maxilla, the wide linear corticotomy facilitated a bony block movement with temporary skeletal anchorage devices, without complications, and the remaining extraction space after the bony block movement was closed effectively, accelerated by corticision. In the mandible, anterior segmental retraction was facilitated by the anterior segmental osteotomy, and corticision was later employed to accelerate individual tooth movements. A wide linear corticotomy and an anterior segmental osteotomy combined with corticision can be an effective and efficient alternative to conventional orthodontic treatment in the bialveolar protrusion patient with an extremely thin alveolar housing.
Noise removal in extended depth of field microscope images through nonlinear signal processing.
Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J
2013-04-01
Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
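The conventional linear step the abstract contrasts with, Wiener deconvolution, is easily sketched. The snippet below uses a toy Gaussian PSF and noise level as assumptions; the paper's trained nonlinear filter is not reproduced:

```python
# Hedged sketch of Wiener deconvolution, the conventional linear baseline
# named above: restore an image given the (engineered) PSF. Frequency-domain
# filter G = H* / (|H|^2 + NSR). PSF and noise level are illustrative.
import numpy as np

def wiener_deconvolve(image: np.ndarray, psf: np.ndarray, nsr: float) -> np.ndarray:
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

# Toy demo: blur a point grid with a Gaussian PSF, add noise, then restore
n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()
truth = np.zeros((n, n)); truth[::16, ::16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
noisy = blurred + 0.01 * np.random.default_rng(1).standard_normal((n, n))
restored = wiener_deconvolve(noisy, psf, nsr=1e-3)
print(restored.shape, restored.max())
```

As the abstract notes, the small NSR constant is exactly where this linear filter struggles: too small and high-frequency noise is amplified, too large and resolution is lost, which motivates the trained nonlinear alternative.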
NASA Astrophysics Data System (ADS)
Lei, Meizhen; Wang, Liqiang
2018-01-01
The Halbach-type linear oscillatory motor (HT-LOM) is multi-variable, highly coupled, nonlinear, and uncertain, making it difficult to obtain satisfactory results with conventional PID control. An incremental adaptive fuzzy controller (IAFC) for stroke tracking is presented, which combines the merits of PID control, the fuzzy inference mechanism, and an adaptive algorithm. An integral operation is added to the conventional fuzzy control algorithm. The fuzzy scale factor can be tuned online according to the load force and the stroke command. The simulation results indicate that the proposed control scheme achieves satisfactory stroke-tracking performance and is robust with respect to parameter variations and external disturbance.
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for a distribution grid with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on probability discretization in power system production cost simulation and a linearized power flow, an optimal power flow problem with the objective of minimizing the cost of conventional power generation is solved. A reliability assessment for the distribution grid is thus implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the fast method calculates these indices much faster than the Monte Carlo method while maintaining accuracy.
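For orientation, the two indices and the distribution choices named above can be illustrated with the baseline Monte Carlo approach the paper compares against. Every numeric parameter below (capacities, Weibull/Beta parameters, load statistics, power curve) is an assumption for illustration:

```python
# Hedged sketch: LOLP and EENS by plain Monte Carlo, with Weibull wind
# speeds and Beta solar irradiance as in the abstract. All parameters are
# illustrative assumptions, not the paper's system data.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                        # Monte Carlo samples (hours)

# Wind: Weibull(k, c) speed -> simple cubic power curve between cut-in/rated
k, c = 2.0, 8.0                    # shape, scale (m/s), assumed
v = c * rng.weibull(k, N)
wind_mw = np.clip((v - 3.0) / (12.0 - 3.0), 0, 1) ** 3 * 20.0   # 20 MW farm
wind_mw[v > 25.0] = 0.0            # cut-out

# Solar: Beta-distributed irradiance fraction -> PV output
s = rng.beta(2.0, 2.0, N)
pv_mw = 10.0 * s                   # 10 MW park, assumed linear conversion

load_mw = rng.normal(25.0, 3.0, N) # local load, assumed
conventional_mw = 12.0             # firm conventional capacity, assumed

shortfall = load_mw - (wind_mw + pv_mw + conventional_mw)
lolp = np.mean(shortfall > 0)                          # Loss Of Load Probability
eens = np.sum(shortfall[shortfall > 0]) / N * 8760     # expected MWh/year
print(f"LOLP = {lolp:.4f}, EENS = {eens:.0f} MWh/yr")
```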
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakalli, I., E-mail: izzet.sakalli@emu.edu.tr; Mirekhtiary, S. F., E-mail: fatemeh.mirekhtiary@emu.edu.tr
2013-10-15
Hawking radiation of a non-asymptotically flat 4-dimensional spherically symmetric and static dilatonic black hole (BH) via the Hamilton-Jacobi (HJ) method is studied. In addition to the naive coordinates, we use four more different coordinate systems that are well-behaved at the horizon. Except for the isotropic coordinates, direct computation by the HJ method leads to the standard Hawking temperature for all coordinate systems. The isotropic coordinates allow extracting the index of refraction from the Fermat metric. It is explicitly shown that the index of refraction determines the value of the tunneling rate and its natural consequence, the Hawking temperature. The isotropic coordinates in the conventional HJ method produce a wrong result for the temperature of the linear dilaton. Here, we explain how this discrepancy can be resolved by regularizing the integral possessing a pole at the horizon.
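For reference, a minimal sketch of the Hamilton-Jacobi tunneling relations the abstract relies on, in their generic textbook form (not the paper's specific dilatonic metric):

```latex
\Gamma \sim \exp\left(-2\,\mathrm{Im}\,S\right), \qquad
\Gamma = \exp\left(-\frac{E}{T_{H}}\right)
\;\Longrightarrow\;
T_{H} = \frac{E}{2\,\mathrm{Im}\,S}
```

Since Im S is obtained from an integral with a pole at the horizon, a coordinate choice that alters the computed Im S (as the isotropic coordinates do, via the index of refraction) shifts the inferred temperature unless the pole is regularized consistently, which is the resolution the abstract describes.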
High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets
NASA Astrophysics Data System (ADS)
Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong
2008-02-01
Stock investors usually make their short-term investment decisions according to recent stock information such as late market news, technical analysis reports, and price fluctuations. To reflect these short-term factors which impact stock price, this paper proposes a comprehensive fuzzy time-series model, which factors linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time-series into the forecasting process. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen's (1996), Yu's (2005), Cheng's (2006), and Chen's (2007), are used as comparison models. In addition, for comparison with a conventional statistical method, the method of least squares is used to estimate auto-regressive models over the testing periods within the datasets. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models, which only factor fuzzy logical relationships into the forecasting process. From the empirical study, both the traditional statistical method and the proposed model reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.
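The least-squares auto-regressive baseline mentioned above is straightforward to reproduce in outline. The sketch below uses a synthetic index series as a placeholder (TAIEX/HSI data are not reproduced):

```python
# Hedged sketch of the conventional baseline: fit an AR(p) model to an
# index series by ordinary least squares, then forecast one step ahead.
import numpy as np

rng = np.random.default_rng(7)
prices = 5000 + np.cumsum(rng.normal(0, 30, 300))   # placeholder index series

def fit_ar_least_squares(x: np.ndarray, order: int) -> np.ndarray:
    """Return AR coefficients (intercept first) minimizing squared error."""
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coef

coef = fit_ar_least_squares(prices, order=2)
# One-step-ahead forecast from the two most recent observations
forecast = coef[0] + coef[1] * prices[-1] + coef[2] * prices[-2]
print(f"next-period forecast: {forecast:.1f}")
```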
Real-time quantitative PCR of Staphylococcus aureus and application in restaurant meals.
Berrada, H; Soriano, J M; Mañes, J; Picó, Y
2006-01-01
Staphylococcus aureus is considered the second most common pathogen to cause outbreaks of food poisoning, exceeded only by Campylobacter. Consumption of foods containing this microorganism is often identified as the cause of illness. In this study, a rapid, reliable, and sensitive real-time quantitative PCR was developed and compared with conventional culture methods. Real-time quantitative PCR was carried out by purifying DNA extracts of S. aureus with a Staphylococcus sample preparation kit and quantifying them in the LightCycler system with hybridization probes. The assay was linear over a range of 10 to 10⁶ S. aureus cells (r² > 0.997). The PCR reaction presented an efficiency of >85%. Accuracy of the PCR-based assay, expressed as percent bias, was around 13%, and the precision, expressed as a percentage coefficient of variation, was 7 to 10%. Intraday and interday variabilities were studied at 10² CFU/g and were 12 and 14%, respectively. The proposed method was applied to the analysis of 77 samples of restaurant meals in Valencia (Spain). In 11.6% of samples, S. aureus was detected by real-time quantitative PCR as well as by the conventional microbiological method. An excellent correspondence between real-time quantitative PCR and microbiological counts (CFU/g) was observed, with deviations of <28%.
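The linearity and efficiency figures quoted above come from a standard curve, which is easy to sketch. The Ct values below are placeholders, not the study's data:

```python
# Hedged sketch of a real-time qPCR standard curve: regress quantification
# cycle (Ct) on log10 cell number, then read off linearity (r^2) and
# amplification efficiency E = 10^(-1/slope) - 1.
import numpy as np

cells = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])
ct = np.array([33.1, 29.6, 26.2, 22.8, 19.3, 15.9])   # placeholder Ct values

log_n = np.log10(cells)
slope, intercept = np.polyfit(log_n, ct, 1)
r2 = np.corrcoef(log_n, ct)[0, 1] ** 2
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope = {slope:.2f}, r^2 = {r2:.4f}, efficiency = {efficiency:.0%}")

def quantify(ct_sample: float) -> float:
    """Convert a sample Ct back to an estimated cell count."""
    return 10 ** ((ct_sample - intercept) / slope)

print(f"Ct 24.0 -> {quantify(24.0):.0f} cells")
```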
NASA Astrophysics Data System (ADS)
Myserlis, I.; Angelakis, E.; Kraus, A.; Liontas, C. A.; Marchili, N.; Aller, M. F.; Aller, H. D.; Karamanavis, V.; Fuhrmann, L.; Krichbaum, T. P.; Zensus, J. A.
2018-01-01
We present an analysis pipeline that enables the recovery of reliable information for all four Stokes parameters with high accuracy. Its novelty relies on the effective treatment of the instrumental effects even before the computation of the Stokes parameters, contrary to conventionally used methods such as that based on the Müller matrix. For instance, instrumental linear polarization is corrected across the whole telescope beam and significant Stokes Q and U can be recovered even when the recorded signals are severely corrupted by instrumental effects. The accuracy we reach in terms of polarization degree is of the order of 0.1-0.2%. The polarization angles are determined with an accuracy of almost 1°. The presented methodology was applied to recover the linear and circular polarization of around 150 active galactic nuclei, which were monitored between July 2010 and April 2016 with the Effelsberg 100-m telescope at 4.85 GHz and 8.35 GHz with a median cadence of 1.2 months. The polarized emission of the Moon was used to calibrate the polarization angle measurements. Our analysis showed a small system-induced rotation of about 1° at both observing frequencies. Over the examined period, five sources have significant and stable linear polarization; three sources remain constantly linearly unpolarized; and a total of 11 sources have stable circular polarization degree mc, four of them with non-zero mc. We also identify eight sources that maintain a stable polarization angle. All this is provided to the community for future polarization observations reference. We finally show that our analysis method is conceptually different from those traditionally used and performs better than the Müller matrix method. Although it has been developed for a system equipped with circularly polarized feeds, it can easily be generalized to systems with linearly polarized feeds as well. The data used to create Fig. C.1 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/609/A68
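The final quantities such a pipeline reports are simple functions of the calibrated Stokes parameters. A minimal sketch, with illustrative example numbers:

```python
# Hedged sketch: linear and circular polarization degree and the
# polarization (EVPA) angle from calibrated Stokes parameters I, Q, U, V.
import numpy as np

def polarization_summary(I: float, Q: float, U: float, V: float):
    m_l = np.hypot(Q, U) / I                    # linear polarization degree
    m_c = V / I                                 # circular polarization degree
    evpa = 0.5 * np.degrees(np.arctan2(U, Q))   # polarization angle (deg)
    return m_l, m_c, evpa

m_l, m_c, evpa = polarization_summary(I=1.00, Q=0.021, U=-0.012, V=0.004)
print(f"m_l = {m_l:.3%}, m_c = {m_c:.3%}, EVPA = {evpa:.1f} deg")
```

The pipeline's point is that the 0.1-0.2% accuracy quoted above is only meaningful if instrumental effects are removed before these quantities are formed, since m_l is a positively biased function of noisy Q and U.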
Disposable cartridge extraction of retinol and alpha-tocopherol from fatty samples.
Bourgeois, C F; Ciba, N
1988-01-01
A new approach is proposed for liquid/solid extraction of retinol and alpha-tocopherol from samples, using a disposable kieselguhr cartridge. The substitution of the mixture methanol-ethanol-n-butanol (4 + 3 + 1) for methanol in the alkaline hydrolysis solution makes it now possible to process fatty samples. Methanol is necessary to solubilize the antioxidant ascorbic acid, and a linear chain alcohol such as n-butanol is necessary to reduce the size of soap micelles so that they can penetrate into the kieselguhr pores. In comparisons of the proposed method with conventional methods on mineral premixes and fatty feedstuffs, recovery and accuracy are at least as good by the proposed method. Advantages are increased rate of determinations and the ability to hydrolyze and extract retinol and alpha-tocopherol together from the same sample.
Hsieh, Chung-Bao; Chen, Chung-Jueng; Chen, Teng-Wei; Yu, Jyh-Cherng; Shen, Kuo-Liang; Chang, Tzu-Ming; Liu, Yao-Chi
2004-01-01
AIM: To investigate whether non-invasive real-time Indocyanine green (ICG) clearance is a sensitive index of liver viability in patients before, during, and after liver transplantation. METHODS: Thirteen patients were studied, two before, three during, and eight following liver transplantation, with two patients suffering acute rejection. The conventional invasive ICG clearance test and the ICG pulse spectrophotometry non-invasive real-time ICG clearance test were performed simultaneously. Using linear regression analysis, we tested the correlation between these two methods. The transplantation condition of these patients and serum total bilirubin (T. Bil), alanine aminotransferase (ALT), and platelet count were also evaluated. RESULTS: The correlation between these two methods was excellent (r² = 0.977). CONCLUSION: ICG pulse spectrophotometry clearance is a quick, non-invasive, and reliable liver function test in transplantation patients. PMID:15285026
A linear stepping endovascular intervention robot with variable stiffness and force sensing.
He, Chengbin; Wang, Shuxin; Zuo, Siyang
2018-05-01
Robotic-assisted endovascular intervention surgery has attracted significant attention and interest in recent years. However, few designs have focused on a variable stiffness mechanism for the catheter shaft. A flexible catheter needs to be partially switched to a rigid state that can hold its shape against external force to achieve a stable and effective insertion procedure. Furthermore, driving the catheter in a way similar to manual procedures has the potential to make full use of the extensive experience from conventional catheter navigation. Besides the driving method, force sensing is another significant factor for endovascular intervention. This paper presents a variable stiffness catheterization system that can provide a stable and accurate endovascular intervention procedure with a linear stepping mechanism whose operation mode is similar to conventional catheter navigation. A specially designed shape-memory polymer tube with a water cooling structure is used to achieve variable stiffness of the catheter. In addition, four FBG sensors are attached to the catheter tip to monitor the tip contact force with temperature compensation. Experimental results show that the actuation unit is able to deliver linear and rotational motions. We have shown the feasibility of FBG force sensing to reduce the effect of temperature and detect the tip contact force. The designed catheter can change its stiffness partially, and the stiffness of the catheter can be remarkably increased in the rigid state, in which the catheter can hold its shape against a [Formula: see text] load. The prototype has also been validated with a vascular phantom, demonstrating the potential clinical value of the system. The proposed system provides important insights into the design of a compact robotic-assisted catheter incorporating an effective variable stiffness mechanism and real-time force sensing for intraoperative endovascular intervention.
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization approach in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
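The PWLS objective with a spatially varying quadratic penalty can be illustrated in one dimension, where the minimizer has a closed form. In the sketch below the penalty strengths are simply hand-set to differ between two regions (an assumption standing in for the task-driven optimization, which chooses them to maximize local detectability):

```python
# Hedged 1D illustration of PWLS with a spatially varying quadratic
# roughness penalty: minimize (x-y)^T W (x-y) + x^T D^T B D x, where
# B = diag(beta_j) lets the penalty strength vary by location.
import numpy as np

n = 200
rng = np.random.default_rng(3)
truth = np.zeros(n); truth[80:120] = 1.0
weights = np.full(n, 25.0)                  # W: inverse noise variance
y = truth + rng.normal(0, 0.2, n)

# First-difference operator D and spatially varying penalty strengths beta
D = np.diff(np.eye(n), axis=0)              # (n-1, n): (Dx)_i = x_{i+1} - x_i
beta = np.where(np.arange(n - 1) < n // 2, 0.5, 5.0)   # smooth right half harder

# Closed-form minimizer: x = (W + D^T B D)^{-1} W y
A = np.diag(weights) + D.T @ (beta[:, None] * D)
x = np.linalg.solve(A, weights * y)
print(f"rmse raw = {np.sqrt(np.mean((y - truth)**2)):.3f}, "
      f"rmse PWLS = {np.sqrt(np.mean((x - truth)**2)):.3f}")
```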
The first ANDES elements: 9-DOF plate bending triangles
NASA Technical Reports Server (NTRS)
Militello, Carmelo; Felippa, Carlos A.
1991-01-01
New elements are derived to validate and assess the assumed natural deviatoric strain (ANDES) formulation. This is a new variant of the assumed natural strain (ANS) formulation of finite elements, which has recently attracted attention as an effective method for constructing high-performance elements for linear and nonlinear analysis. The ANDES formulation is based on an extended parametrized variational principle developed in recent publications. The key concept is that only the deviatoric part of the strains is assumed over the element, whereas the mean strain part is discarded in favor of a constant stress assumption. Unlike conventional ANS elements, ANDES elements satisfy the individual element test (a stringent form of the patch test) a priori while retaining the favorable distortion-insensitivity properties of ANS elements. The first application of this formulation is the development of several Kirchhoff plate bending triangular elements with the standard nine degrees of freedom. Linear curvature variations are sampled along the three sides with the corners as gage reading points. These sample values are interpolated over the triangle using three schemes. Two schemes merge back to conventional ANS elements, one being identical to the Discrete Kirchhoff Triangle (DKT), whereas the third one produces two new ANDES elements. Numerical experiments indicate that one of the ANDES elements is relatively insensitive to distortion compared to previously derived high-performance plate-bending elements, while retaining accuracy for nondistorted elements.
Propagating synchrony in feed-forward networks
Jahnke, Sven; Memmesheimer, Raoul-Martin; Timme, Marc
2013-01-01
Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility to enable reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been theoretically and computationally studied. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of non-linear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities promote synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin Huxley type neurons. PMID:24298251
Jeong, Bongwon; Cho, Hanna; Keum, Hohyun; Kim, Seok; Michael McFarland, D; Bergman, Lawrence A; King, William P; Vakakis, Alexander F
2014-11-21
Intentional utilization of geometric nonlinearity in micro/nanomechanical resonators provides a breakthrough to overcome the narrow bandwidth limitation of linear dynamic systems. In past works, implementation of intentional geometric nonlinearity to an otherwise linear nano/micromechanical resonator has been successfully achieved by local modification of the system through nonlinear attachments of nanoscale size, such as nanotubes and nanowires. However, the conventional fabrication method involving manual integration of nanoscale components produced a low yield rate in these systems. In the present work, we employed a transfer-printing assembly technique to reliably integrate a silicon nanomembrane as a nonlinear coupling component onto a linear dynamic system with two discrete microcantilevers. The dynamics of the developed system was modeled analytically and investigated experimentally as the coupling strength was finely tuned via FIB post-processing. The transition from the linear to the nonlinear dynamic regime with gradual change in the coupling strength was experimentally studied. In addition, we observed for the weakly coupled system that oscillation was asynchronous in the vicinity of the resonance, thus exhibiting a nonlinear complex mode. We conjectured that the emergence of this nonlinear complex mode could be attributed to the nonlinear damping arising from the attached nanomembrane.
Relationship between Testicular Volume and Conventional or Nonconventional Sperm Parameters
Condorelli, Rosita; Calogero, Aldo E.; La Vignera, Sandro
2013-01-01
Background. Reduced testicular volume (TV) (<12 cm³) is associated with lower testicular function. Several studies have explored the conventional sperm parameters (concentration, motility, and morphology) and the endocrine function (gonadotropin and testosterone serum concentrations) in patients with reduced TV. No other parameters have been examined. Aim. This study aims at evaluating some biofunctional sperm parameters by flow cytometry in the semen of men with reduced TV compared with that of subjects with normal TV. Methods. 78 patients without primary scrotal disease underwent ultrasound evaluation of the testis. They were divided into two groups according to testicular volume: Group A, including 40 patients with normal testicular volume (TV > 15 cm³), and Group B, including 38 patients with reduced testicular volume (TV ≤ 12 cm³). All patients underwent serum hormone concentration measurement and conventional and biofunctional (flow cytometry) sperm parameter evaluation. Results. With regard to the biofunctional sperm parameters, all values (mitochondrial membrane potential, phosphatidylserine externalization, chromatin compactness, and DNA fragmentation) were strongly negatively correlated with testicular volume (P < 0.0001). Conclusions. This study shows, for the first time in the literature, that the biofunctional sperm parameters worsen, with a nearly linear correlation, as testicular volume decreases. PMID:24089610
Knüppel, Sven; Meidtner, Karina; Arregui, Maria; Holzhütter, Hermann-Georg; Boeing, Heiner
2015-07-01
Analyzing multiple single nucleotide polymorphisms (SNPs) is a promising approach to finding genetic effects beyond single-locus associations. We proposed the use of multilocus stepwise regression (MSR) to screen for allele combinations as a method to model joint effects, and compared the results with the often-used genetic risk score (GRS), conventional stepwise selection, and the shrinkage method LASSO. In contrast to MSR, the GRS, conventional stepwise selection, and LASSO model each genotype by its risk allele dose. We reanalyzed 20 unlinked SNPs related to type 2 diabetes (T2D) in the EPIC-Potsdam case-cohort study (760 cases, 2193 noncases). No SNP-SNP interactions and no nonlinear effects were found. Two SNP combinations selected by MSR (Nagelkerke's R² = 0.050 and 0.048) included eight SNPs with a mean allele combination frequency of 2%. GRS and stepwise selection selected nearly the same SNP combinations, consisting of 12 and 13 SNPs (Nagelkerke's R² ranged from 0.020 to 0.029). LASSO showed similar results. The MSR method showed the best model fit as measured by Nagelkerke's R², suggesting that further improvement may render this method a useful tool in genetic research. However, our comparison suggests that the GRS is a simple way to model genetic effects, since it does not consider linkage, SNP-SNP interactions, or non-linear effects. © 2015 John Wiley & Sons Ltd/University College London.
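The GRS baseline described above reduces to summing risk-allele doses and regressing the outcome on the score. A minimal sketch on simulated genotypes (all effect sizes and prevalences are invented placeholders):

```python
# Hedged sketch of an unweighted genetic risk score (GRS): sum risk-allele
# doses (0/1/2) across SNPs, then relate the score to case status with
# logistic regression. Genotypes and outcomes are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n, n_snps = 2000, 20
doses = rng.binomial(2, 0.3, size=(n, n_snps))    # risk-allele doses
true_effects = rng.normal(0.08, 0.03, n_snps)     # small per-SNP effects (assumed)
logit = -1.5 + doses @ true_effects
y = rng.random(n) < 1 / (1 + np.exp(-logit))      # simulated T2D status

grs = doses.sum(axis=1, keepdims=True)            # unweighted GRS
model = LogisticRegression().fit(grs, y)
print(f"log-odds per risk allele: {model.coef_[0, 0]:.3f}")
```

Note how this encoding bakes in the limitation the abstract names: a single additive score cannot represent linkage, SNP-SNP interactions, or non-linear dose effects, which is what MSR's allele-combination screening targets.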
Zhang, Wei-Dong; Wang, Ying; Wang, Qing; Yang, Wan-Jun; Gu, Yi; Wang, Rong; Song, Xiao-Mei; Wang, Xiao-Juan
2012-08-01
A sensitive and reliable ultra-high performance liquid chromatography-electrospray ionization-tandem mass spectrometry method has been developed and partially validated to evaluate the quality of Semen Cassiae (Cassia obtusifolia L.) through simultaneous determination of 11 anthraquinones and two naphtho-γ-pyrone compounds. The analysis was achieved on a Poroshell 120 EC-C18 column (100 mm × 2.1 mm, 2.7 μm; Agilent, Palo Alto, CA, USA) with gradient elution using a mobile phase that consisted of acetonitrile-water (30 mM ammonium acetate) at a flow rate of 0.4 mL/min. For quantitative analysis, all calibration curves showed excellent linear regression (r² > 0.99) within the tested range. The method was also validated with respect to precision and accuracy, and was successfully applied to quantify the 13 components in nine batches of Semen Cassiae samples from different areas. The performance of the developed method was compared with that of a conventional high-performance liquid chromatography method; its significant advantages include high-speed chromatographic separation, four times faster than high-performance liquid chromatography with conventional columns, and a great enhancement in sensitivity. The developed method provides a new basis for overall assessment of the quality of Semen Cassiae. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yin, Xinyou; Belay, Daniel W; van der Putten, Peter E L; Struik, Paul C
2014-12-01
Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (Φ_CO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation have often been attributed either to light absorptance by non-photosynthetic pigments or to some data points being beyond the linear range of the irradiance response, both causing an underestimation of Φ_CO2LL. We demonstrate here that a decrease in photosystem (PS) photochemical efficiency with increasing irradiance, even at very low levels, is another source of error that causes a systematic underestimation of Φ_CO2LL. A model method accounting for this error was developed, and was used to estimate Φ_CO2LL from simultaneous measurements of gas exchange and chlorophyll fluorescence on leaves using various combinations of species, CO2, O2, or leaf temperature levels. The conventional linear regression method underestimated Φ_CO2LL by ca. 10-15%. Differences in the estimated Φ_CO2LL among measurement conditions were generally accounted for by different levels of photorespiration as described by the Farquhar-von Caemmerer-Berry model. However, our data revealed that the temperature dependence of PSII photochemical efficiency under low light was an additional factor that should be accounted for in the model.
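The conventional estimate the abstract critiques is a one-line regression. A minimal sketch with placeholder values in typical units:

```python
# Hedged sketch of the conventional estimate: the slope of net CO2
# assimilation (A) against absorbed irradiance over the low-light range.
import numpy as np

absorbed_irradiance = np.array([20, 40, 60, 80, 100, 120.0])  # umol photons m-2 s-1
A_net = np.array([0.1, 1.2, 2.2, 3.1, 4.0, 4.8])              # umol CO2 m-2 s-1

slope, intercept = np.polyfit(absorbed_irradiance, A_net, 1)
print(f"Phi_CO2LL (linear-regression estimate) = {slope:.4f} mol CO2 / mol photons")
# The paper's point: because PSII photochemical efficiency already declines
# over this range, this slope is biased low by roughly 10-15%.
```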
A light sheet confocal microscope for image cytometry with a variable linear slit detector
NASA Astrophysics Data System (ADS)
Hutcheson, Joshua A.; Khan, Foysal Z.; Powless, Amy J.; Benson, Devin; Hunter, Courtney; Fritsch, Ingrid; Muldoon, Timothy J.
2016-03-01
We present a light sheet confocal microscope (LSCM) capable of high-resolution imaging of cell suspensions in a microfluidic environment. In lieu of conventional pressure-driven flow or mechanical translation of the samples, we have employed a novel method of fluid transport, redox-magnetohydrodynamics (redox-MHD). This method achieves fluid motion by inducing a small current into the suspension in the presence of a magnetic field via electrodes patterned onto a silicon chip. This on-chip transportation requires no moving parts and is coupled to the remainder of the imaging system. The microscopy system comprises a 20 mW, 450 nm diode laser coupled to a single-mode fiber and a cylindrical lens that converges the light sheet into the back aperture of a 10x, 0.3 NA objective lens in an epi-illumination configuration. The emission pathway contains a 150 mm tube lens that focuses the light onto the linear sensor at the conjugate image plane. The linear sensor (ELiiXA+ 8k/4k) has three lateral binning modes which enable variable detection aperture widths of 5, 10, or 20 μm, which can be used to vary axial resolution. We have demonstrated redox-MHD-enabled light sheet microscopy in suspensions of fluorescent polystyrene beads. This approach has potential as a high-throughput image cytometer with myriad cellular diagnostic applications.
ERIC Educational Resources Information Center
Duke, Naomi; Macmillan, Ross
2016-01-01
Education is a key sociological variable in the explanation of health and health disparities. Conventional wisdom emphasizes a life course--human capital perspective with expectations of causal effects that are quasi-linear, large in magnitude for high levels of educational attainment, and reasonably robust in the face of measured and unmeasured…
A new method for assessing the accuracy of full arch impressions in patients.
Kuhr, F; Schmidt, A; Rehmann, P; Wöstmann, B
2016-12-01
To evaluate a new method of measuring the real deviation (trueness) of full arch impressions intraorally and to investigate the trueness of digital full arch impressions in comparison to a conventional impression procedure in clinical use. Four metal spheres were fixed with composite using a metal application aid to the lower teeth of 50 test subjects as reference structures. One conventional impression (Impregum Penta Soft) with subsequent type-IV gypsum model casting (CI) and three different digital impressions were performed in the lower jaw of each test person with the following intraoral scanners: Sirona CEREC Omnicam (OC), 3M True Definition (TD), Heraeus Cara TRIOS (cT). The digital and conventional (gypsum) models were analyzed relative to the spheres. Linear distance and angle measurements between the spheres, as well as digital superimpositions of the spheres with the reference data set were executed. With regard to the distance measurements, CI showed the smallest deviations followed by intraoral scanners TD, cT and OC. A digital superimposition procedure yielded the same order for the outcomes: CI (15±4μm), TD (23±9μm), cT (37±14μm), OC (214±38μm). Angle measurements revealed the smallest deviation for TD (0.06°±0,07°) followed by CI (0.07°±0.07°), cT (0.13°±0.15°) and OC (0.28°±0.21°). The new measuring method is suitable for measuring the dimensional accuracy of full arch impressions intraorally. CI is still significantly more accurate than full arch scans with intraoral scanners in clinical use. Conventional full arch impressions with polyether impression materials are still more accurate than full arch digital impressions. Digital impression systems using powder application and active wavefront sampling technology achieve the most accurate results in comparison to other intraoral scanning systems (DRKS-ID: DRKS00009360, German Clinical Trials Register). Copyright © 2016 Elsevier Ltd. All rights reserved.
Ohuchi, Hiroko
2007-11-01
A novel method has been developed that can greatly improve the dosimetric sensitivity limit of a radiochromic film (RCF) by using a set of color-component outputs, e.g., red and green, from an RGB color scanner. RCFs are known to have microscopic and macroscopic nonuniformities, which come from thickness variations in the film's active radiochromic layer and coating. These variations in response lower the optical signal-to-noise ratio, resulting in lower film sensitivity. To mitigate the effects of the RCF's nonuniform response, an optical common-mode rejection (CMR) scheme was developed. The CMR compensates for nonuniform response by creating a ratio of two signals in which the factors common to both numerator and denominator cancel out. The CMR scheme was applied to the mathematical operation of creating a ratio of two components, the red and green outputs from the scanner. The two component lights occupy neighboring wavebands about 100 nm apart and suffer a common fate, with the exception of wavelength-dependent events, having passed together along common attenuation paths. Two types of dose-response curves as a function of delivered dose ranging from 3.7 mGy to 8.1 Gy for 100 kV x-ray beams were obtained with the optical CMR scheme and with the conventional analysis method using the red component, respectively. In the range of 3.7 mGy to 81 mGy, the optical densities obtained with the optical CMR showed good consistency among eight measured samples and improved consistency with a linear fit, within 1 standard deviation of each measured optical density, while those from the conventional analysis exhibited a large discrepancy among the eight samples and were not consistent with a linear fit.
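The CMR operation itself reduces to a per-pixel ratio of two scanner channels followed by the usual net-optical-density step. A minimal numpy sketch, assuming red/green channel arrays from exposed and unexposed flatbed scans; the function names and the netOD-of-ratio formulation are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def net_od(signal, signal_unexposed):
    """Net optical density relative to an unexposed reference scan."""
    return np.log10(signal_unexposed / signal)

def cmr_response(red, green, red_ref, green_ref):
    """Common-mode rejection: form the red/green ratio so that thickness
    and coating nonuniformities that scale both channels cancel out,
    then take the netOD of that ratio."""
    ratio = red.astype(float) / green.astype(float)
    ratio_ref = red_ref.astype(float) / green_ref.astype(float)
    return np.log10(ratio_ref / ratio)
```

Any multiplicative disturbance that affects both channels equally cancels exactly in the ratio, which is the common-mode rejection at work.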
Nascimbene, Juri; Marini, Lorenzo; Paoletti, Maurizio G
2012-05-01
The majority of research on organic farming has considered arable and grassland farming systems in Central and Northern Europe, whilst only a few studies have been carried out in Mediterranean agro-systems, such as vineyards, despite their economic importance. The main aim of the study was to test whether organic farming enhances local plant species richness in both crop and non-crop areas of vineyard farms located in intensive conventional landscapes. Nine conventional and nine organic farms were selected in an intensively cultivated region (i.e. no gradient in landscape composition) in northern Italy. In each farm, vascular plants were sampled in one vineyard and in two non-crop linear habitats, grass strips and hedgerows, adjacent to vineyards and therefore potentially influenced by farming. We used linear mixed models to test the effects of farming and species longevity (annual vs. perennial) separately for the three habitat types. In our intensive agricultural landscapes organic farming promoted local plant species richness in vineyard fields and grass strips, while we found no effect for hedgerows. Differences in species richness were not associated with differences in species composition, indicating that similar plant communities were hosted in vineyard farms independently of the management type. This negative effect of conventional farming was probably due to the use of herbicides, since mechanical operations and mowing regime did not differ between organic and conventional farms. In grass strips, and only marginally in vineyards, we found that the positive effect of organic farming was more pronounced for perennial than annual species.
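For readers who want to reproduce this type of analysis, a random-intercept linear mixed model of the design described above can be fit with statsmodels; the data file and the column names (richness, management, longevity, farm) are hypothetical placeholders, and the model would be fit separately per habitat type as in the study:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per sampling plot; file and column names are hypothetical.
df = pd.read_csv("plants.csv")

# Fixed effects: farming system, species longevity and their interaction;
# random intercept for farm, since plots are nested within farms.
model = smf.mixedlm("richness ~ C(management) * C(longevity)",
                    data=df, groups=df["farm"])
print(model.fit().summary())
```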
Sharma, Anuj; Verma, Subash Chandra; Saxena, Nisha; Chadda, Neetu; Singh, Narendra Pratap; Sinha, Arun Kumar
2006-03-01
Microwave-assisted extraction (MAE), ultrasound-assisted extraction (UAE) and conventional extraction of vanillin, and its quantification by HPLC, in pods of Vanilla planifolia are described. A range of nonpolar to polar solvents were used for the extraction of vanillin employing MAE, UAE and conventional methods. Various extraction parameters such as nature of the solvent, solvent volume, time of irradiation, and microwave and ultrasound energy inputs were optimized. HPLC was performed on an RP ODS column (4.6 mm ID x 250 mm, 5 microm, Waters) with a photodiode array detector (Waters 2996) using a gradient solvent system of ACN and ortho-phosphoric acid in water (0.001:99.999 v/v) at 25 degrees C. The regression equation revealed a linear relationship (r2 > 0.9998) between the mass of vanillin injected and the peak areas. The detection limit (S/N = 3) and limit of quantification (S/N = 10) were 0.65 and 1.2 microg/g, respectively. Recovery was achieved in the range 98.5-99.6% for vanillin. The maximum yield of vanilla extract (29.81, 29.068 and 14.31% by conventional extraction, MAE and UAE, respectively) was found in a mixture of ethanol/water (40:60 v/v). The dehydrated ethanolic extract showed the highest amount of vanillin (1.8, 1.25 and 0.99% by MAE, conventional extraction and UAE, respectively).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayah, N; Weiss, E; Watkins, W
Purpose: To evaluate the dose-mapping error (DME) inherent to conventional dose-mapping algorithms as a function of dose-matrix resolution. Methods: As DME has been reported to be greatest where dose gradients overlap tissue-density gradients, non-clinical 66 Gy IMRT plans were generated for 11 lung patients with the target edge defined as the maximum 3D density gradient on the 0% (end of inhale) breathing phase. Post-optimization, beams were copied to 9 breathing phases. Monte Carlo dose computed (at 2×2×2 mm³ resolution) on all 10 breathing phases was deformably mapped to phase 0% using the Monte Carlo energy-transfer method with congruent mass-mapping (EMCM); an externally implemented tri-linear interpolation method with voxel sub-division; Pinnacle's internal (tri-linear) method; and a post-processing energy-mass voxel-warping method (dTransform). All methods used the same base displacement-vector-field (or its pseudo-inverse as appropriate) for the dose mapping. Mapping was also performed at 4×4×4 mm³ by merging adjacent dose voxels. Results: Using EMCM as the reference standard, no clinically significant (>1 Gy) DMEs were found for the mean lung dose (MLD), lung V20Gy, or esophagus dose-volume indices, although MLD and V20Gy were statistically different (2×2×2 mm³). Pinnacle-to-EMCM target D98% DMEs of 4.4 and 1.2 Gy were observed (2×2×2 mm³). However dTransform, which like EMCM conserves integral dose, had DME >1 Gy for one case. The root mean square (RMS) of the DME for the tri-linear-to-EMCM comparison was lower at the smaller voxel volume for the tumor 4D-D98%, lung V20Gy, and cord D1%. Conclusion: When tissue gradients overlap with dose gradients, organs-at-risk DME was statistically significant but not clinically significant. Target-D98% DME was deemed clinically significant for 2/11 patients (2×2×2 mm³). Since the RMS-DME between EMCM and tri-linear mapping was reduced at 2×2×2 mm³, use of this resolution is recommended for dose mapping. Interpolative dose methods are sufficiently accurate for the majority of cases. J.V. Siebers receives funding support from Varian Medical Systems.
On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs
NASA Technical Reports Server (NTRS)
Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.
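The conventional Markov jump-linear test mentioned first reduces to a spectral-radius condition on a block matrix of Kronecker products. A numpy sketch follows, assuming a row-stochastic transition matrix with P[i, j] = Prob(mode i -> mode j); block-ordering conventions differ between references, so treat this as illustrative rather than the paper's exact formulation:

```python
import numpy as np

def ms_stable(A_modes, P):
    """Mean-square stability test for x_{k+1} = A_{theta_k} x_k with a
    Markov mode theta_k: stable iff the spectral radius of the block
    matrix with block (j, i) = P[i, j] * kron(A_i, A_i) is < 1."""
    N = len(A_modes)
    n2 = A_modes[0].size            # n^2 for n x n mode matrices
    Lam = np.zeros((N * n2, N * n2))
    for j in range(N):
        for i in range(N):
            Lam[j*n2:(j+1)*n2, i*n2:(i+1)*n2] = P[i, j] * np.kron(A_modes[i], A_modes[i])
    return np.max(np.abs(np.linalg.eigvals(Lam))) < 1.0

# Example: two scalar modes, one of them unstable on its own
A = [np.array([[0.5]]), np.array([[1.2]])]
P = np.array([[0.9, 0.1], [0.3, 0.7]])
print(ms_stable(A, P))
```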
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography such as electrical capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivities of the two phases on any pixels that exceed the reasonable range of permittivity. This strategy not only stabilizes the convergence process but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
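The loop described above pairs a Tikhonov-regularized Gauss-Newton update with a hard projection onto the known two-phase permittivity range. A schematic sketch, in which `forward` and `jacobian` stand in for the FEM model and all parameter values are assumptions:

```python
import numpy as np

def intac(c_meas, forward, jacobian, x0, eps_lo, eps_hi, lam=1e-2, n_iter=20):
    """Iterative Tikhonov-regularized reconstruction with constraints.
    forward(x): simulated capacitances; jacobian(x): sensitivity matrix."""
    x = x0.copy()
    for _ in range(n_iter):
        J = jacobian(x)
        r = c_meas - forward(x)                 # measurement discrepancy
        # Tikhonov-regularized Gauss-Newton step
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        x += dx
        # enforce the known two-phase permittivities on out-of-range pixels
        x = np.clip(x, eps_lo, eps_hi)
    return x
```

The clipping step is what both stabilizes the iteration and sharpens edges: pixels that drift outside the physically admissible range are pulled back to the known phase permittivities.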
The Capacity Gain of Orbital Angular Momentum Based Multiple-Input-Multiple-Output System
Zhang, Zhuofan; Zheng, Shilie; Chen, Yiling; Jin, Xiaofeng; Chi, Hao; Zhang, Xianmin
2016-01-01
Wireless communication using electromagnetic waves carrying orbital angular momentum (OAM) has attracted increasing interest in recent years, and its potential to increase channel capacity has been explored widely. In this paper, we compare the technique of using a uniform linear array consisting of circular traveling-wave OAM antennas for multiplexing with the conventional multiple-input-multiple-output (MIMO) communication method, and numerical results show that the OAM based MIMO system can increase channel capacity when the communication distance is long enough. An equivalent model is proposed to illustrate that the OAM multiplexing system is equivalent to a conventional MIMO system with a larger element spacing, which means OAM waves can decrease the spatial correlation of the MIMO channel. In addition, the effects of some system parameters, such as OAM state interval and element spacing, on the capacity advantage of OAM based MIMO are also investigated. Our results reveal that OAM waves are complementary with the MIMO method. OAM wave multiplexing is suitable for long-distance line-of-sight (LoS) communications or communications in open areas where the multi-path effect is weak, and can be used in massive MIMO systems as well. PMID:27146453
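The capacity comparison underlying such studies is the standard log-det formula with equal power allocation; a small numpy sketch, where the random channel is a placeholder for the actual LoS array geometry:

```python
import numpy as np

def mimo_capacity_bits(H, snr):
    """Shannon capacity (bits/s/Hz) of a MIMO channel with equal power
    allocation over Nt transmit elements: log2 det(I + snr/Nt * H H^H)."""
    nr, nt = H.shape
    M = np.eye(nr) + (snr / nt) * H @ H.conj().T
    _, logdet = np.linalg.slogdet(M)
    return logdet / np.log(2)

# Toy 4x4 channel (placeholder for the OAM/ULA geometry under study)
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(mimo_capacity_bits(H, snr=100.0))
```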
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: The Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy), however none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
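The commonly-used technique that serves as the benchmark above (red-channel netOD with a non-linear calibration curve) can be sketched as follows; the power-law calibration form is one widely used choice, and the data points are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_exposed, pv_unexposed):
    """Net optical density from red-channel transmission pixel values."""
    return np.log10(pv_unexposed / pv_exposed)

def dose_model(netod, a, b, n):
    """A widely used non-linear film calibration: D = a*netOD + b*netOD^n."""
    return a * netod + b * netod ** n

# Hypothetical calibration data (dose in cGy, mean red pixel values)
dose = np.array([0, 50, 100, 200, 400, 800, 1600, 3200, 4000.0])
pv = np.array([41000, 38500, 36500, 33500, 29500, 25000, 20500, 16500, 15500.0])
netod = net_od(pv, pv[0])
popt, _ = curve_fit(dose_model, netod[1:], dose[1:], p0=[4000.0, 30000.0, 2.5])
print("fitted a, b, n:", popt)
```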
NASA Astrophysics Data System (ADS)
Huang, Xingguo; Sun, Jianguo; Greenhalgh, Stewart
2018-04-01
We present methods for obtaining numerical and analytic solutions of the complex eikonal equation in inhomogeneous acoustic VTI media (transversely isotropic media with a vertical symmetry axis). The key and novel point of the method for obtaining numerical solutions is to transform the problem of solving the highly nonlinear acoustic VTI eikonal equation into one of solving the relatively simple eikonal equation for the background (isotropic) medium and a system of linear partial differential equations. Specifically, to obtain the real and imaginary parts of the complex traveltime in inhomogeneous acoustic VTI media, we generalize a perturbation theory, which was developed earlier for solving the conventional real eikonal equation in inhomogeneous anisotropic media, to the complex eikonal equation in such media. After the perturbation analysis, we obtain two types of equations. One is the complex eikonal equation for the background medium and the other is a system of linearized partial differential equations for the coefficients of the corresponding complex traveltime formulas. To solve the complex eikonal equation for the background medium, we employ an optimization scheme that we developed for solving the complex eikonal equation in isotropic media. Then, to solve the system of linearized partial differential equations for the coefficients of the complex traveltime formulas, we use the finite difference method based on the fast marching strategy. Furthermore, by applying the complex source point method and the paraxial approximation, we develop the analytic solutions of the complex eikonal equation in acoustic VTI media, both for the isotropic and elliptical anisotropic background medium. Our numerical results demonstrate the effectiveness of our derivations and illustrate the influence of the beam widths and the anisotropic parameters on the complex traveltimes.
Hulet, R. Michael; Zhang, Guangyu; McDermott, Patrick; Kinney, Erinna L.; Schwab, Kellogg J.; Joseph, Sam W.
2011-01-01
Background: In U.S. conventional poultry production, antimicrobials are used for therapeutic, prophylactic, and nontherapeutic purposes. Researchers have shown that this can select for antibiotic-resistant commensal and pathogenic bacteria on poultry farms and in poultry-derived products. However, no U.S. studies have investigated on-farm changes in resistance as conventional poultry farms transition to organic practices and cease using antibiotics. Objective: We investigated the prevalence of antibiotic-resistant Enterococcus on U.S. conventional poultry farms that transitioned to organic practices. Methods: Poultry litter, feed, and water samples were collected from 10 conventional and 10 newly organic poultry houses in 2008 and tested for Enterococcus. Enterococcus (n = 259) was identified using the Vitek® 2 Compact System and tested for susceptibility to 17 antimicrobials using the Sensititre™ microbroth dilution system. Data were analyzed using SAS software (version 9.2), and statistical associations were derived based on generalized linear mixed models. Results: Litter, feed, and water samples were Enterococcus positive. The percentages of resistant Enterococcus faecalis and resistant Enterococcus faecium were significantly lower (p < 0.05) among isolates from newly organic versus conventional poultry houses for two (erythromycin and tylosin) and five (ciprofloxacin, gentamicin, nitrofurantoin, penicillin, and tetracycline) antimicrobials, respectively. Forty-two percent of E. faecalis isolates from conventional poultry houses were multidrug resistant (MDR; resistant to three or more antimicrobial classes), compared with 10% of isolates from newly organic poultry houses (p = 0.02); 84% of E. faecium isolates from conventional poultry houses were MDR, compared with 17% of isolates from newly organic poultry houses (p < 0.001). Conclusions: Our findings suggest that the voluntary removal of antibiotics from large-scale U.S. poultry farms that transition to organic practices is associated with a lower prevalence of antibiotic-resistant and MDR Enterococcus. PMID:21827979
Lotfy, Hayam M; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2015-09-05
Smart spectrophotometric methods have been applied and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and prednisolone acetate (PA) without preliminary separation. Two novel methods have been developed; the first method depends upon advanced absorbance subtraction (AAS), while the other method relies on advanced amplitude modulation (AAM); in addition to the well established dual wavelength (DW), ratio difference (RD) and constant center coupled with spectrum subtraction (CC-SS) methods. Accuracy, precision and linearity ranges of these methods were determined. Moreover, selectivity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied to the assay of drugs in their pharmaceutical formulations. No interference was observed from common additives and the validity of the methods was tested. The obtained results have been statistically compared to that of official spectrophotometric methods to give a conclusion that there is no significant difference between the proposed methods and the official ones with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
Comparison of conventional and novel quadrupole drift tube magnets inspired by Klaus Halbach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feinberg, B.
1995-02-01
Quadrupole drift tube magnets for a heavy-ion linac provide a demanding application of magnet technology. A comparison is made of three different solutions to the problem of providing an adjustable high-field-strength quadrupole magnet in a small volume. A conventional tape-wound electromagnet quadrupole magnet (conventional) is compared with an adjustable permanent-magnet/iron quadrupole magnet (hybrid) and a laced permanent-magnet/iron/electromagnet (laced). Data is presented from magnets constructed for the SuperHILAC heavy-ion linear accelerator, and conclusions are drawn for various applications.
Fu, Xin; Huang, Kelong; Liu, Suqin
2010-02-01
In this paper, a rapid, simple, and sensitive method is described for detecting the total bacterial count using SiO₂-coated CdSe/ZnS quantum dots (QDs) as a fluorescence marker covalently coupled to bacteria using glutaraldehyde as the crosslinker. Highly luminescent CdSe/ZnS QDs were prepared by applying cadmium oxide and zinc stearate as precursors instead of pyrophoric organometallic precursors. A reverse-microemulsion technique was used to synthesize CdSe/ZnS/SiO₂ composite nanoparticles with a SiO₂ surface coating. Our results showed that CdSe/ZnS/SiO₂ composite nanoparticles prepared with this method were highly luminescent, biologically functional, and monodisperse, and could successfully be covalently conjugated with the bacteria. As a demonstration, it was found that the method had higher sensitivity and could count bacteria at 3 × 10² CFU/mL, lower than the conventional plate counting and organic dye-based methods. A linear relationship between the fluorescence peak intensity (Y) and the total bacterial count (X) was established in the range of 3 × 10²-10⁷ CFU/mL by the equation Y = 374.82X - 938.27 (R = 0.99574). The results of the determination of the total bacterial count in seven real samples were identical to those of the conventional plate count method, and the standard deviation was satisfactory.
Poojary, Mahesha M; Passamonti, Paolo
2016-12-09
This paper reports on improved conventional thermal silylation (CTS) and microwave-assisted silylation (MAS) methods for the simultaneous determination of tocopherols and sterols by gas chromatography. Reaction parameters in each of the methods developed were systematically optimized using a full factorial design followed by a central composite design. Initially, experimental conditions for CTS were optimized using a block heater. Then a rapid MAS method was developed and optimized. To understand microwave heating mechanisms, MAS was optimized in two distinct modes of microwave heating: temperature-controlled MAS and power-controlled MAS, using dedicated instruments where reaction temperature and microwave power level were controlled and monitored online. The developed methods were compared with routine overnight derivatization. On a comprehensive level, while both CTS and MAS were found to be efficient derivatization techniques, MAS significantly reduced the reaction time. The optimal derivatization temperature and time for CTS were found to be 55°C and 54 min, while they were 87°C and 1.2 min for temperature-controlled MAS. Further, a microwave power of 300 W and a derivatization time of 0.5 min were found to be optimal for power-controlled MAS. The use of an appropriate derivatization solvent, such as pyridine, was found to be critical for successful determination. Catalysts, like potassium acetate and 4-dimethylaminopyridine, enhanced the efficiency slightly. The developed methods showed excellent analytical performance in terms of linearity, accuracy and precision. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Di; He, Yong
2007-11-01
The aim of this study is to investigate the potential of the visible and near infrared spectroscopy (Vis/NIRS) technique for non-destructive measurement of soluble solids content (SSC) in grape juice beverage. A total of 380 samples were studied. Savitzky-Golay smoothing and the standard normal variate transformation were applied to pre-process the spectral data. Least-squares support vector machines (LS-SVM) with an RBF kernel function were applied to develop the SSC prediction model based on the Vis/NIRS absorbance data. The determination coefficient for prediction (Rp2) of the results predicted by the LS-SVM model was 0.962 and the root mean square error of prediction (RMSEP) was 0.434137. It is concluded that the Vis/NIRS technique can quantify the SSC of grape juice beverage quickly and non-destructively. At the same time, the LS-SVM model was compared with PLS and back propagation neural network (BP-NN) methods. The results showed that LS-SVM was superior to the conventional linear and non-linear methods in predicting the SSC of grape juice beverage. In this study, the generalization ability of the LS-SVM, PLS and BP-NN models was also investigated. It is concluded that the LS-SVM regression method is a promising technique for chemometrics in quantitative prediction.
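scikit-learn has no LS-SVM implementation, but an RBF-kernel support vector regressor gives the flavour of this non-linear calibration pipeline; the spectra and SSC values below are synthetic stand-ins, and SVR is a related model, not the authors' exact LS-SVM:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X = rng.standard_normal((380, 256))      # stand-in for Vis/NIR absorbance spectra
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(380)  # stand-in for SSC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("Rp2:", r2_score(y_te, pred), "RMSEP:", mean_squared_error(y_te, pred) ** 0.5)
```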
Neural Networks and other Techniques for Fault Identification and Isolation of Aircraft Systems
NASA Technical Reports Server (NTRS)
Innocenti, M.; Napolitano, M.
2003-01-01
Fault identification, isolation, and accommodation have become critical issues in the overall performance of advanced aircraft systems. Neural networks have been shown to be a very attractive alternative to classic adaptation methods for identification and control of non-linear dynamic systems. The purpose of this paper is to show the improvements in neural network applications achievable through the use of learning algorithms more efficient than classic back-propagation, and through the implementation of the neural schemes in parallel hardware. The results of the analysis of a scheme for Sensor Failure Detection, Identification and Accommodation (SFDIA) using experimental flight data of a research aircraft model are presented. Conventional approaches to the problem are based on observers and Kalman filters, while more recent methods are based on neural approximators. The work described in this paper is based on the use of neural networks (NNs) as on-line learning non-linear approximators. The performances of two different neural architectures were compared. The first architecture is based on a Multi Layer Perceptron (MLP) NN trained with the Extended Back Propagation algorithm (EBPA). The second architecture is based on a Radial Basis Function (RBF) NN trained with the Extended-MRAN (EMRAN) algorithms. In addition, alternative methods for communications link fault detection and accommodation are presented, relative to multiple unmanned aircraft applications.
Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N
2016-07-12
We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
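One standard scheme for this kind of iterative refinement of Z ≈ S^{-1/2} is the Newton-Schulz iteration, which needs only matrix products and therefore maps well onto thresholded sparse linear algebra. A dense numpy sketch with a conservative scaled-identity start; the paper's own initial guess comes from divide-and-conquer or extended-Lagrangian propagation, so the initialization here is an assumption:

```python
import numpy as np

def inv_sqrt_newton_schulz(S, n_iter=30):
    """Iteratively refine Z -> S^{-1/2} via Z <- 0.5 * Z (3I - S Z^2).
    Converges for SPD S when the spectrum of S Z0^2 lies in (0, 3)."""
    n = S.shape[0]
    Z = np.eye(n) / np.sqrt(np.linalg.norm(S, 2))   # safe initial guess
    I = np.eye(n)
    for _ in range(n_iter):
        Z = 0.5 * Z @ (3 * I - S @ Z @ Z)
    return Z

# Check on a random SPD "overlap" matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
S = A @ A.T + 50 * np.eye(50)          # well-conditioned SPD test matrix
Z = inv_sqrt_newton_schulz(S)
print(np.linalg.norm(Z @ S @ Z - np.eye(50)))  # ~0 if converged
```

Because every step is a matrix product, sparsity thresholds and shared-memory parallelism apply directly, which is the point the abstract makes about the ELLPACK-R format.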
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-01-01
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted for dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant functions (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. PMID:26921716
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearlberg, J.L.; Sandler, M.A.; Kvale, P.
1985-03-01
Laser therapy is a new modality for treatment of airway lesions. The authors examined 18 patients prior to laser photoresection of tracheobronchial lesions. Thirteen had cancers involving the distal trachea, carina, and/or proximal bronchi; five had benign lesions of the middle or proximal trachea. Each patient was examined by conventional linear tomography (CLT) and computed tomography (CT). CT was valuable in patients who had lesions of the distal trachea, carina, and/or proximal bronchi. Its particular usefulness, and its advantage relative to CLT, consisted in its ability to delineate vascular structures adjacent to the planned area of photoresection. Neither CLT nor CT was helpful in the evaluation of benign lesions of the proximal trachea.
Comparison of morphological and conventional edge detectors in medical imaging applications
NASA Astrophysics Data System (ADS)
Kaabi, Lotfi; Loloyan, Mansur; Huang, H. K.
1991-06-01
Recently, mathematical morphology has been used to develop efficient image analysis tools. This paper compares the performance of morphological and conventional edge detectors applied to radiological images. Two morphological edge detectors, the dilation residue, found by subtracting the original signal from its dilation by a small structuring element, and the blur-minimization edge detector, defined as the minimum of the erosion and dilation residues of a blurred version of the image, are compared with the linear Laplacian and Sobel and the non-linear Roberts edge detectors. Various structuring elements were used in this study, both 2-dimensional and 3-dimensional. We used two criteria to classify edge detector performance: edge point connectivity and sensitivity to noise. CT/MR and chest radiograph images were used as test data. Comparison results show that the blur-minimization edge detector with a rolling-ball-like structuring element outperforms the other standard linear and nonlinear edge detectors. It is less noise sensitive and produces the most closed contours.
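Both morphological detectors are a few lines with scipy.ndimage; in this sketch a flat square structuring element stands in for the rolling-ball-like element used in the paper:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, gaussian_filter

def dilation_residue(img, size=3):
    """Dilation residue: dilation by a small structuring element minus
    the original image."""
    return grey_dilation(img, size=(size, size)) - img

def blur_min_edge(img, size=3, sigma=1.0):
    """Blur-minimization edge detector: the pixelwise minimum of the
    dilation and erosion residues of a blurred version of the image."""
    b = gaussian_filter(img.astype(float), sigma)
    return np.minimum(grey_dilation(b, size=(size, size)) - b,
                      b - grey_erosion(b, size=(size, size)))
```

Taking the minimum of the two residues suppresses isolated noise spikes, which respond strongly to only one of the two operations; this is why the blur-minimization detector is the less noise-sensitive of the pair.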
Terahertz-driven linear electron acceleration
Nanni, Emilio A.; Huang, Wenqian R.; Hong, Kyung-Han; Ravi, Koustuban; Fallahi, Arya; Moriena, Gustavo; Dwayne Miller, R. J.; Kärtner, Franz X.
2015-01-01
The cost, size and availability of electron accelerators are dominated by the achievable accelerating gradient. Conventional high-brightness radio-frequency accelerating structures operate with 30–50 MeV m−1 gradients. Electron accelerators driven with optical or infrared sources have demonstrated accelerating gradients orders of magnitude above that achievable with conventional radio-frequency structures. However, laser-driven wakefield accelerators require intense femtosecond sources and direct laser-driven accelerators suffer from low bunch charge, sub-micron tolerances and sub-femtosecond timing requirements due to the short wavelength of operation. Here we demonstrate linear acceleration of electrons with keV energy gain using optically generated terahertz pulses. Terahertz-driven accelerating structures enable high-gradient electron/proton accelerators with simple accelerating structures, high repetition rates and significant charge per bunch. These ultra-compact terahertz accelerators with extremely short electron bunches hold great potential to have a transformative impact for free electron lasers, linear colliders, ultrafast electron diffraction, X-ray science and medical therapy with X-rays and electron beams. PMID:26439410
Size and shape measurement in contemporary cephalometrics.
McIntyre, Grant T; Mossey, Peter A
2003-06-01
The traditional method of analysing cephalograms--conventional cephalometric analysis (CCA)--involves the calculation of linear distance measurements, angular measurements, area measurements, and ratios. Because shape information cannot be determined from these 'size-based' measurements, an increasing number of studies employ geometric morphometric tools in the cephalometric analysis of craniofacial morphology. Most of the discussions surrounding the appropriateness of CCA, Procrustes superimposition, Euclidean distance matrix analysis (EDMA), thin-plate spline analysis (TPS), finite element morphometry (FEM), elliptical Fourier functions (EFF), and medial axis analysis (MAA) have centred upon mathematical and statistical arguments. Surprisingly, little information is available to assist the orthodontist in the clinical relevance of each technique. This article evaluates the advantages and limitations of the above methods currently used to analyse the craniofacial morphology on cephalograms and investigates their clinical relevance and possible applications.
A neighboring structure reconstructed matching algorithm based on LARK features
NASA Astrophysics Data System (ADS)
Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa
2015-11-01
To address the low contrast and high noise of infrared images, and the randomness and ambient occlusion of their objects, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of the local window are modelled with a non-negative linear reconstruction method to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked by non-maximum suppression. The NSRM approach is extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body dataset, with a lower false detection rate than conventional methods in complex natural scenes.
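The non-negative linear reconstruction step can be carried out with scipy's NNLS solver; the patch bookkeeping below is a simplified assumption of how a local window is expressed in terms of its neighbours:

```python
import numpy as np
from scipy.optimize import nnls

def neighbor_reconstruction(center, neighbors):
    """Reconstruct a vectorized center patch as a non-negative linear
    combination of vectorized neighboring patches; the weights encode
    the neighboring structure relationship and the residual norm
    measures how well the neighbourhood explains the patch."""
    A = np.stack([n.ravel() for n in neighbors], axis=1)  # columns = neighbors
    weights, residual = nnls(A, center.ravel())
    return weights, residual
```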
A hybrid group method of data handling with discrete wavelet transform for GDP forecasting
NASA Astrophysics Data System (ADS)
Isa, Nadira Mohamed; Shabri, Ani
2013-09-01
This study proposes a hybrid model combining the Group Method of Data Handling (GMDH) and the Discrete Wavelet Transform (DWT) for time series forecasting. The objective of this paper is to examine the flexibility of the hybrid GMDH in time series forecasting using Gross Domestic Product (GDP) data. A time series data set is used in this study to demonstrate the effectiveness of the forecasting model. These data are utilized to forecast through an application aimed at handling real-life time series. The experiment compares the performance of the hybrid model with single models: Wavelet-Linear Regression (WR), Artificial Neural Network (ANN), and conventional GMDH. It is shown that the proposed model can provide a promising alternative technique for GDP forecasting.
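One way to build such a wavelet hybrid is to split the series into additive subband components with PyWavelets, forecast each component, and sum the forecasts; here a plain least-squares autoregression stands in for the GMDH stage, which is an assumption made purely for illustration:

```python
import numpy as np
import pywt

def additive_subbands(series, wavelet="db4", level=2):
    """Split a series into DWT subband components that sum back to it."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    bands = []
    for k in range(len(coeffs)):
        keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, wavelet)[:len(series)])
    return bands

def ar_forecast(x, order=4):
    """One-step-ahead forecast from a least-squares linear autoregression."""
    X = np.stack([x[i:len(x) - order + i] for i in range(order)], axis=1)
    coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return float(x[-order:] @ coef)

def hybrid_forecast(series, wavelet="db4", level=2):
    """Forecast each subband separately and sum the subband forecasts."""
    return sum(ar_forecast(b) for b in additive_subbands(series, wavelet, level))
```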
Scherer, Gerhard; Urban, Michael; Hagedorn, Heinz-Werner; Serafin, Richard; Feng, Shixia; Kapur, Sunil; Muhammad, Raheema; Jin, Yan; Sarkar, Mohamadi; Roethig, Hans-Juergen
2010-10-01
Alkylating agents occur in the environment and are formed endogenously. Tobacco smoke contains a variety of alkylating agents or precursors including, among others, N-nitrosodimethylamine (NDMA), 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), acrylonitrile and ethylene oxide. We developed and validated a method for the simultaneous determination of methylmercapturic acid (MMA, biomarker for methylating agents such as NDMA and NNK), 2-hydroxyethylmercapturic acid (HEMA, biomarker for ethylene oxide) and 2-cyanoethylmercapturic acid (CEMA, biomarker for acrylonitrile) in human urine using deuterated internal standards of each compound. The method involves liquid/liquid extraction of the urine sample, solid phase extraction on anion exchange cartridges, derivatization with pentafluorobenzyl bromide (PFBBr), liquid/liquid extraction of the reaction mixture and LC-MS/MS analysis with positive electrospray ionization. The method was linear in the ranges of 5.00-600, 1.00-50.0 and 1.50-900 ng/ml for MMA, HEMA and CEMA, respectively. The method was applied to two clinical studies in adult smokers of conventional cigarettes who either continued smoking conventional cigarettes, switched to test cigarettes shown to deliver reduced exposure to specific smoke constituents (either an electrically heated cigarette smoking system, EHCSS, or cigarettes with a highly activated carbon granule filter), or stopped smoking. Urinary excretion of MMA was found to be unaffected by switching to the test cigarettes or stopping smoking. Urinary HEMA excretion decreased by 46 to 54% after switching to test cigarettes and by approximately 74% when stopping smoking. Urinary CEMA excretion decreased by 74-77% when switching to test cigarettes and by approximately 90% when stopping smoking. This validated method for urinary alkylmercapturic acids is suitable to distinguish differences in exposure not only between smokers and nonsmokers but also between smoking of conventional cigarettes and the two test cigarettes investigated in this study. Copyright © 2010 Elsevier B.V. All rights reserved.
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
Concept and design of super junction devices
NASA Astrophysics Data System (ADS)
Zhang, Bo; Zhang, Wentong; Qiao, Ming; Zhan, Zhenya; Li, Zhaoji
2018-02-01
The super junction (SJ) has been recognized as the "milestone" of the power MOSFET, and is the most important innovation in the concept of the voltage-sustaining layer (VSL). The basic structure of the SJ is a typical junction-type VSL (J-VSL) with periodic N and P regions, whereas the conventional VSL is a typical resistance-type VSL (R-VSL) with only an N or P region. The change from the R-VSL to the J-VSL is a qualitative change of the VSL, introducing bulk depletion to increase the doping concentration and optimize the bulk electric field of the SJ. This paper first summarizes the development of the SJ, and then the optimization theory of the SJ is discussed for both vertical and lateral devices, including the non-full depletion mode, the minimum specific on-resistance optimization method and the equivalent substrate model. The SJ concept breaks the conventional "silicon limit" relationship R_on ∝ V_B^2.5, showing a quasi-linear relationship R_on ∝ V_B^1.03.
A novel way to go whole-cell in patch-clamp experiments.
Inayat, Samsoon; Zhao, Yan; Cantrell, Donal R; Dikin, Dmitryi; Pinto, Lawrence H; Troy, John B
2010-11-01
With a conventional patch-clamp electrode, an Ag/AgCl wire sits stationary inside the pipette. To move from the gigaseal cell-attached configuration to whole-cell recording, suction is applied inside the pipette. We have designed and developed a novel Pushpen patch-clamp electrode, in which a W wire insulated and wound with Ag/AgCl wire can move linearly inside the pipette. The W wire has a conical tip, which can protrude from the pipette tip like a push pen, a procedure we call the Pushpen operation. We use the Pushpen operation to impale the cell membrane in the cell-attached configuration to go whole-cell without disruption of the gigaseal. We successfully recorded whole-cell currents from Chinese hamster ovary cells expressing the influenza A virus protein A/M2, after obtaining the whole-cell configuration with the Pushpen operation. This novel method of achieving the whole-cell configuration may have a higher success rate than the conventional patch clamp technique.
Investigation on wear characteristic of biopolymer gear
NASA Astrophysics Data System (ADS)
Ghazali, Wafiuddin Bin Md; Daing Idris, Daing Mohamad Nafiz Bin; Sofian, Azizul Helmi Bin; Basrawi, Mohamad Firdaus bin; Khalil Ibrahim, Thamir
2017-10-01
Polymers are widely used in many mechanical components such as gears. As the world moves toward a greener and more sustainable environment, bio-based polymers are being recognized as a replacement for conventional fossil-fuel-based polymers. The use of biopolymers in mechanical components, especially gears, has not been fully explored yet. This research focuses on a biopolymer for spur gears and on whether the conventional method of investigating wear characteristics is applicable. The spur gears were produced by injection moulding and tested at several speeds using custom test equipment. Wear features such as tooth fracture, tooth deformation, debris and weight loss were observed on the biopolymer spur gear. It was noted that the biopolymer gear wear mechanism was similar to that of other types of polymer spur gears. It also undergoes the usual stages of wear: running-in, linear and rapid. It can be said that the wear mechanism of the biopolymer spur gear is comparable to that of fossil-fuel-based polymer spur gears, so biopolymers can be considered as replacements for polymer gears in suitable applications.
Pei, Zongrui; Max-Planck-Inst. fur Eisenforschung, Duseldorf; Eisenbach, Markus
2017-02-06
Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the methods available to optimize dislocation core structures, we choose Particle Swarm Optimization, an algorithm that simulates the social behavior of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), the local minima can be effectively avoided, at the cost of more computation. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.
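A minimal particle swarm optimizer shows the mechanics the abstract refers to, with swarm size and iteration count trading exploration against cost; the objective below is a generic multi-minima landscape, not a dislocation-core energy functional:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=40, n_steps=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a vector objective f(x)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)]
    for _ in range(n_steps):
        r1, r2 = rng.random((2,) + x.shape)
        # velocity: inertia + pull toward personal and global bests
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Example: a rough landscape with many local minima
f = lambda z: np.sum(z**2) + 3 * np.sum(1 - np.cos(2 * np.pi * z))
lo, hi = -5 * np.ones(6), 5 * np.ones(6)
print(pso_minimize(f, (lo, hi)))
```

Since each particle's objective evaluation is independent, the per-step work parallelizes trivially, which is the linear scaling the authors report.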
Method and system for data clustering for very large databases
NASA Technical Reports Server (NTRS)
Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)
1998-01-01
Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
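The clustering feature triple described above is additive, which is what makes incremental insertion and tree restructuring cheap: merging two clusters is component-wise addition, and the centroid and radius fall out of (N, LS, SS) without revisiting the data. A sketch of that bookkeeping (the tree itself is omitted):

```python
import numpy as np
from dataclasses import dataclass
from functools import reduce

@dataclass
class CF:
    """Clustering feature: point count, linear sum and square sum."""
    n: int
    ls: np.ndarray   # sum of the data points
    ss: float        # sum of squared norms of the data points

    @classmethod
    def from_point(cls, x):
        x = np.asarray(x, float)
        return cls(1, x.copy(), float(x @ x))

    def merge(self, other):
        return CF(self.n + other.n, self.ls + other.ls, self.ss + other.ss)

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        """RMS distance of the cluster's points from its centroid."""
        c = self.centroid()
        return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))

# Stream 100 points into a single clustering feature
pts = np.random.default_rng(0).normal(size=(100, 3))
cf = reduce(CF.merge, (CF.from_point(p) for p in pts))
print(cf.centroid(), cf.radius())
```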
Rapid Prototyping Technique for the Fabrication of Millifluidic Devices for Polymer Formulations
NASA Astrophysics Data System (ADS)
Cabral, Joao; Harrison, Christopher; Amis, Eric; Karim, Alamgir
2003-03-01
We describe a rapid prototyping technique for the fabrication of 600 micron deep fluidic channels in a solvent-resistant polymeric matrix. Using a conventional illumination source, a laser-jet printed mask, and a commercially available thiolene-based adhesive, we demonstrate the fabrication of fluidic channels which are impervious to a wide range of solvents. The fabrication of channels of this depth by conventional lithography would be both challenging and time-consuming. We demonstrate two lithography methods: one which fabricates channels sealed between glass plates (closed-face) and one which fabricates structures on a single plate (open-faced). Furthermore, we demonstrate that this technology can be used to fabricate channels with a depth that varies linearly with distance. The latter is completely compatible with silicone replication techniques. Additionally, we demonstrate that siloxane-based elastomer molds of these channels can readily be made for aqueous applications. Applications to on-line phase mapping of polymer solutions (PEO-water-salt) and off-line phase separation studies will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohamed, M. Shadi, E-mail: m.s.mohamed@durham.ac.uk; Seaid, Mohammed; Trevelyan, Jon
2013-10-15
We investigate the effectiveness of the partition-of-unity finite element method for transient conduction–radiation problems in diffusive grey media. The governing equations consist of a semi-linear transient heat equation for the temperature field and a stationary diffusion approximation to the radiation in grey media. The coupled equations are integrated in time using a semi-implicit method in the finite element framework. We show that for the considered problems, a combination of hyperbolic and exponential enrichment functions based on an approximation of the boundary layer leads to improved accuracy compared to the conventional finite element method. It is illustrated that this approach can be more efficient than using h adaptivity to increase the accuracy of the finite element method near the boundary walls. The performance of the proposed partition-of-unity method is analyzed on several test examples for transient conduction–radiation problems in two space dimensions.
Augmented Lagrange Hopfield network for solving economic dispatch problem in competitive environment
NASA Astrophysics Data System (ADS)
Vo, Dieu Ngoc; Ongsakul, Weerakorn; Nguyen, Khai Phuc
2012-11-01
This paper proposes an augmented Lagrange Hopfield network (ALHN) for solving the economic dispatch (ED) problem in a competitive environment. The proposed ALHN is a continuous Hopfield network whose energy function is based on an augmented Lagrange function, allowing it to deal efficiently with constrained optimization problems. The ALHN method can overcome drawbacks of the conventional Hopfield network such as convergence to local optima, long computational times, and the restriction to linear constraints. The proposed method is used to solve the ED problem with two revenue models: payment for power delivered and payment for reserve allocated. The proposed ALHN has been tested on two systems of 3 units and 10 units for the two considered revenue models. The results obtained from the proposed method are compared to those from the differential evolution (DE) and particle swarm optimization (PSO) methods. The comparison indicates that the proposed method is very efficient for solving the problem. Therefore, the proposed ALHN could be a favorable tool for the ED problem in a competitive environment.
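The augmented-Lagrange idea can be illustrated on a toy ED instance with quadratic fuel costs, a power-balance equality and generator limits; the plain projected-gradient dynamics below stand in for the Hopfield network, and all numbers and step sizes are invented for illustration:

```python
import numpy as np

# Toy 3-unit economic dispatch: cost_i(P) = a_i P^2 + b_i P, demand D
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
Pmin = np.array([100.0, 100.0, 100.0])
Pmax = np.array([500.0, 400.0, 600.0])
D = 850.0

# Augmented Lagrangian: sum(cost) + lam*(D - sum(P)) + (beta/2)*(D - sum(P))^2
P = (Pmin + Pmax) / 2
lam, beta, eta = 0.0, 1e-3, 2.0
for _ in range(5000):
    g = D - P.sum()                         # power-balance violation
    grad = 2 * a * P + b - lam - beta * g   # d/dP of augmented Lagrangian
    P = np.clip(P - eta * grad, Pmin, Pmax) # projected primal descent
    lam += beta * g                         # multiplier (dual) ascent
print(P, P.sum())
```

At convergence every interior unit runs at equal marginal cost (the multiplier), which is the classical ED optimality condition.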
Thermal conductivity measurement of fluids using the 3ω method
NASA Astrophysics Data System (ADS)
Lee, Seung-Min
2009-02-01
We have developed a procedure to measure the thermal conductivity of dielectric liquids and gases using a steady state ac hot wire method in which a thin metal wire is used as both heater and thermometer. The temperature response of the heater wire was measured in a four-probe geometry using an electronic circuit developed for the conventional 3ω method. The measurements were performed in the frequency range from 1 mHz to 1 kHz. We devised a method to transform the raw data into the well-known linear dependence on the logarithm of frequency. After the transformation, an optimal frequency region of the thermal conductivity data was clearly determined, as has been done with data from thin metal film heaters. The method was tested with air, water, ethanol, mono- and tetraethylene glycol. The volumetric heat capacity of the fluids was also calculated, together with its uncertainty, and the capability of the method as a probe for metal-liquid thermal boundary conductance was discussed.
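In the usual slope analysis of such data, the in-phase temperature oscillation falls linearly with the logarithm of angular frequency and the thermal conductivity follows from the slope. A sketch with synthetic numbers; the slope formula is the standard hot-wire/3ω result and its applicability to the paper's exact setup is an assumption:

```python
import numpy as np

def kappa_from_3omega(freq_hz, dT_inphase, power_per_length):
    """Thermal conductivity from the slope of the in-phase temperature
    oscillation vs ln(angular frequency): kappa = -(P/l) / (2*pi*slope)."""
    slope = np.polyfit(np.log(2 * np.pi * np.asarray(freq_hz)), dT_inphase, 1)[0]
    return -power_per_length / (2 * np.pi * slope)

# Synthetic water-like data: P/l = 10 W/m, kappa = 0.6 W/(m K)
f = np.logspace(0, 3, 20)
dT = 20.0 - 10.0 / (2 * np.pi * 0.6) * np.log(2 * np.pi * f)
print(kappa_from_3omega(f, dT, power_per_length=10.0))  # recovers ~0.6
```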
Directional filtering for block recovery using wavelet features
NASA Astrophysics Data System (ADS)
Hyun, Seung H.; Eom, Il K.; Kim, Yoo S.
2005-07-01
When images compressed with block-based compression techniques are transmitted over a noisy channel, unexpected block losses occur. Conventional recovery methods that do not consider edge directions can cause blocky, blurred artifacts. In this paper, we present a post-processing block recovery scheme using Haar wavelet features. The adaptive selection of neighboring blocks is performed based on the energy of wavelet subbands (EWS) and the difference between DC values (DDC). The lost blocks are recovered by linear interpolation in the spatial domain using the selected blocks. The method using only EWS performs well for horizontal and vertical edges, but not as well for diagonal edges. Conversely, using only DDC performs well for diagonal edges, with the exception of line- or roof-type edge profiles. Therefore, we combine EWS and DDC for better results. The proposed directional recovery method is effective for strong edges because it exploits the varying neighboring blocks adaptively according to the edges and the directional information in the image. The proposed method outperforms previous methods that used only fixed blocks.
Benini, L; Caliari, S; Guidi, G C; Vaona, B; Talamini, G; Vantini, I; Scuro, L A
1989-01-01
This investigation was aimed at comparing a new method for measuring faecal fat excretion, carried out with a semi-automated instrument using near infrared analysis (NIRA), with the traditional titrimetric (Van de Kamer) and gravimetric (Sobel) methods. NIRA faecal fat was assayed on three day stool collections from 118 patients (68 chronic pancreatitis, 19 organic diseases of the gastrointestinal tract, 19 alcoholic liver disease, 12 functional gastrointestinal disorders). A close linear correlation was found between NIRA and both the titrimetric (r = 0.928, p less than 0.0001) and the gravimetric (r = 0.971, p less than 0.0001) methods. On homogenised faeces, a mean coefficient of variation of 2.1 (SD 1.71)% was found. Before homogenisation (where a mean coefficient of variation of 7% was found), accurate results were obtained when the mean of five measurements was considered. In conclusion, the assay of faecal fat excretion by near infrared reflectometry appears to be a simple, rapid and reliable method for measuring steatorrhoea. PMID:2583563
Using compressive measurement to obtain images at ultra low-light-level
NASA Astrophysics Data System (ADS)
Ke, Jun; Wei, Ping
2013-08-01
In this paper, a compressive imaging architecture is used for ultra low-light-level imaging. In such a system, features, instead of object pixels, are imaged onto a photocathode and then magnified by an image intensifier. By doing so, the system measurement SNR is increased significantly. Therefore, the new system can image objects at ultra low light levels where a conventional system has difficulty. PCA projection is used to collect feature measurements in this work. A linear Wiener operator and a nonlinear method based on the Fields of Experts (FoE) model are used to reconstruct the objects. Root mean square error (RMSE) is used to quantify the reconstruction quality.
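The linear Wiener reconstruction from PCA feature measurements has a one-line closed form once the signal covariance is estimated; a numpy sketch under a zero-mean simplification, with the measurement-noise variance treated as known:

```python
import numpy as np

def pca_projection(train, m):
    """Top-m principal directions of training images (rows = vectorized
    images), used as the feature measurement matrix F."""
    X = train - train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:m]

def wiener_reconstruct(y, F, Rx, sigma2):
    """Linear MMSE (Wiener) estimate of x from y = F x + noise,
    given the signal covariance Rx and noise variance sigma2."""
    G = Rx @ F.T @ np.linalg.inv(F @ Rx @ F.T + sigma2 * np.eye(F.shape[0]))
    return G @ y
```

The point of measuring features rather than pixels is that each detector element integrates light from many pixels, which is what raises the per-measurement SNR at very low light levels.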
Continuous recording of pulmonary artery pressure in unrestricted subjects.
Ikram, H; Richards, A M; Hamilton, E J; Nicholls, M G
1984-01-01
Continuous ambulatory pulmonary artery pressures were recorded using a conventional No 5 French Goodale-Lubin fluid-filled catheter linked to the Oxford Medilog system of a portable transducer-perfusion unit and miniaturised recorder. Data retrieval and analysis were performed using a PB2 Medilog playback unit linked to a PDP 11 computer system. The total system has a frequency response linear to 8 Hz, allowing accurate pressure recording over the full range of heart rates. Ten recordings in 10 patients yielded artefact-free data for 80% or more of the recorded period. This inexpensive, reliable method allows pulmonary artery pressures to be recorded in unrestricted subjects. PMID:6704262
A method of setting limits for the purpose of quality assurance
NASA Astrophysics Data System (ADS)
Sanghangthum, Taweap; Suriyapee, Sivalee; Kim, Gwe-Ya; Pawlicki, Todd
2013-10-01
The result from any assurance measurement needs to be checked against some limits for acceptability. There are two types of limits; those that define clinical acceptability (action limits) and those that are meant to serve as a warning that the measurement is close to the action limits (tolerance limits). Currently, there is no standard procedure to set these limits. In this work, we propose an operational procedure to set tolerance limits and action limits. The approach to establish the limits is based on techniques of quality engineering using control charts and a process capability index. The method is different for tolerance limits and action limits with action limits being categorized into those that are specified and unspecified. The procedure is to first ensure process control using the I-MR control charts. Then, the tolerance limits are set equal to the control chart limits on the I chart. Action limits are determined using the Cpm process capability index with the requirements that the process must be in-control. The limits from the proposed procedure are compared to an existing or conventional method. Four examples are investigated: two of volumetric modulated arc therapy (VMAT) point dose quality assurance (QA) and two of routine linear accelerator output QA. The tolerance limits range from about 6% larger to 9% smaller than conventional action limits for VMAT QA cases. For the linac output QA, tolerance limits are about 60% smaller than conventional action limits. The operational procedure describe in this work is based on established quality management tools and will provide a systematic guide to set up tolerance and action limits for different equipment and processes.
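A sketch of the two calculations involved, using the standard individuals-chart constant (d2 = 1.128 for moving ranges of size two, so 3/d2 ≈ 2.66) and the usual Cpm definition; the acceptance threshold and the example data are placeholders:

```python
import numpy as np

def tolerance_and_action_check(x, usl, lsl, target, cpm_req=1.0):
    """Tolerance limits from an individuals (I) chart, plus a Cpm check
    of whether specified action limits are achievable when in control."""
    x = np.asarray(x, float)
    sigma_hat = np.abs(np.diff(x)).mean() / 1.128     # moving-range estimate
    tolerance = (x.mean() - 3 * sigma_hat, x.mean() + 3 * sigma_hat)
    cpm = (usl - lsl) / (6 * np.sqrt(x.var() + (x.mean() - target) ** 2))
    return tolerance, cpm, cpm >= cpm_req

# Example: 30 daily linac output measurements (% deviation), specs +/-3%
rng = np.random.default_rng(2)
data = rng.normal(0.2, 0.8, 30)
print(tolerance_and_action_check(data, usl=3.0, lsl=-3.0, target=0.0))
```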
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, and Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants with the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed, large-scale, and complex energy market. This research compares the performance and searching paths of different artificial life techniques such as Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a proper method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
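To make the piecewise staircase offer curves concrete, here is a toy merit-order clearing of a single demand against staircase offers, ignoring network, ramping, and unit-commitment constraints. This is not the Parametric Linear Programming algorithm of the dissertation, and all offer data are invented.

```python
# Each offer block: (genco, block MW, price $/MWh); a staircase curve is a
# sequence of such constant-price blocks.
offers = [
    ("G1", 50, 18.0), ("G1", 30, 24.0),
    ("G2", 40, 20.0), ("G2", 40, 27.0),
    ("G3", 60, 22.0),
]

def merit_order_dispatch(offers, demand_mw):
    """Fill demand from cheapest staircase blocks upward; return schedule and marginal price."""
    schedule, remaining, price = {}, demand_mw, None
    for genco, mw, p in sorted(offers, key=lambda b: b[2]):
        if remaining <= 0:
            break
        take = min(mw, remaining)
        schedule[genco] = schedule.get(genco, 0) + take
        remaining -= take
        price = p                      # last accepted block sets the marginal price
    return schedule, price

schedule, marginal_price = merit_order_dispatch(offers, demand_mw=150)
print(schedule)                        # {'G1': 50, 'G2': 40, 'G3': 60}
print(f"marginal price: {marginal_price} $/MWh")
```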
Computing Finite-Time Lyapunov Exponents with Optimally Time Dependent Reduction
NASA Astrophysics Data System (ADS)
Babaee, Hessam; Farazmand, Mohammad; Sapsis, Themis; Haller, George
2016-11-01
We present a method to compute Finite-Time Lyapunov Exponents (FTLE) of a dynamical system using the Optimally Time-Dependent (OTD) reduction recently introduced by H. Babaee and T. P. Sapsis. The OTD modes are a set of finite-dimensional, time-dependent, orthonormal basis functions {u_i(x, t)}_{i=1}^{N} that capture the directions associated with transient instabilities. The evolution equation of the OTD modes is derived from a minimization principle that optimally approximates the most unstable directions over finite times. To compute the FTLE, we evolve a single OTD mode along with the nonlinear dynamics. We approximate the FTLE from the reduced system obtained by projecting the instantaneous linearized dynamics onto the OTD mode. This results in a significant reduction in computational cost compared to conventional methods for computing FTLE. We demonstrate the efficiency of our method for the double gyre and ABC flows. ARO project 66710-EG-YIP.
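As a hedged sketch of the single-mode idea: for one OTD mode the evolution equation reduces to du/dt = A(t)u - (u^T A u)u, which preserves unit norm, and the projected growth rate u^T A u, time-averaged over [0, T], approximates the largest FTLE. The toy constant-coefficient system and the forward-Euler integrator below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy linearized dynamics dx/dt = A x with known eigenvalues 0.5 and -0.3.
A = np.array([[0.5, 1.0],
              [0.0, -0.3]])

def otd_rhs(u):
    """Single OTD mode equation: du/dt = A u - (u^T A u) u (keeps ||u|| = 1)."""
    Au = A @ u
    return Au - (u @ Au) * u

# Integrate with forward Euler plus renormalization; accumulate the
# growth rate of the linearized dynamics projected onto the mode.
u = np.array([0.0, 1.0])
dt, T = 1e-3, 20.0
growth = 0.0
for _ in range(int(T / dt)):
    growth += (u @ (A @ u)) * dt
    u = u + dt * otd_rhs(u)
    u /= np.linalg.norm(u)

ftle = growth / T
print(f"FTLE estimate: {ftle:.3f}")   # approaches the largest eigenvalue 0.5 as T grows
print(f"largest eigenvalue: {np.linalg.eigvals(A).real.max():.3f}")
```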
Final report for “Extreme-scale Algorithms and Solver Resilience”
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, William Douglas
2017-06-30
This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.
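A minimal sketch of the reproducibility idea mentioned above, separating the summation order from the data decomposition: partial sums are formed per fixed global block and always combined in ascending block order, so any decomposition that aligns with block boundaries yields bit-identical results. The block size, data, and serial simulation of "processes" are assumptions; a real implementation would exchange the per-block partials with MPI.

```python
import numpy as np

BLOCK = 64  # fixed global block size; the reduction order is defined by block index

def block_partials(x, y, start):
    """Per-block partial sums for a contiguous chunk starting at global index `start`.
    Chunks are assumed to align with block boundaries."""
    return [(start // BLOCK + j, np.dot(x[j * BLOCK:(j + 1) * BLOCK],
                                        y[j * BLOCK:(j + 1) * BLOCK]))
            for j in range(len(x) // BLOCK)]

def reproducible_dot(chunks):
    """Combine per-block partials in ascending global block order,
    independent of how the vector was decomposed across processes."""
    partials = sorted(p for c in chunks for p in block_partials(*c))
    total = 0.0
    for _, s in partials:
        total += s
    return total

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
y = rng.standard_normal(1024)

# Two different "process" decompositions, both aligned to BLOCK boundaries.
split_a = [(x[:256], y[:256], 0), (x[256:], y[256:], 256)]
split_b = [(x[:640], y[:640], 0), (x[640:], y[640:], 640)]

print(reproducible_dot(split_a) == reproducible_dot(split_b))  # True, bit-for-bit
```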