The investigation of social networks based on multi-component random graphs
NASA Astrophysics Data System (ADS)
Zadorozhnyi, V. N.; Yudin, E. B.
2018-01-01
Methods for calibrating non-homogeneous random graphs are developed for social network simulation. The graphs are calibrated against the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with a nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, and computer experiments with these models, would help developers (owners) of the networks to predict their development correctly and to choose effective strategies for controlling network projects.
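The nonlinear preferential attachment rule mentioned in the abstract can be sketched generically as follows. This is a minimal illustration, not the authors' calibration procedure; the attachment kernel deg(u)**alpha and the single-edge seed graph are assumptions.

```python
import random

def nonlinear_pa_graph(n, alpha=1.0, seed=0):
    """Grow a random graph in which each new vertex attaches to an existing
    vertex u with probability proportional to deg(u)**alpha; alpha = 1
    recovers the classical linear (Barabasi-Albert) rule."""
    rng = random.Random(seed)
    edges = [(0, 1)]          # seed graph: a single edge
    deg = {0: 1, 1: 1}
    for v in range(2, n):
        nodes = list(deg)
        weights = [deg[u] ** alpha for u in nodes]
        target = rng.choices(nodes, weights=weights, k=1)[0]
        edges.append((v, target))
        deg[v] = 1
        deg[target] += 1
    return edges, deg

edges, deg = nonlinear_pa_graph(200, alpha=1.5)
```

Calibrating such a model to an observed network would then amount to tuning alpha (and any additional parameters) until the simulated degree distributions match the empirical ones.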
Ivanciuc, O; Ivanciuc, T; Klein, D J; Seitz, W A; Balaban, A T
2001-02-01
Quantitative structure-retention relationships (QSRR) represent statistical models that quantify the connection between the molecular structure and the chromatographic retention indices of organic compounds, allowing the prediction of retention indices of novel, not yet synthesized compounds, solely from their structural descriptors. Using multiple linear regression, QSRR models for the gas chromatographic Kováts retention indices of 129 alkylbenzenes are generated using molecular graph descriptors. The correlational ability of structural descriptors computed from 10 molecular matrices is investigated, showing that the novel reciprocal matrices give numerical indices with improved correlational ability. A QSRR equation with 5 graph descriptors gives the best calibration and prediction results, demonstrating the usefulness of the molecular graph descriptors in modeling chromatographic retention parameters. The sequential orthogonalization of descriptors suggests simpler QSRR models by eliminating redundant structural information.
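The multiple-linear-regression step behind such a QSRR model can be sketched with ordinary least squares. The descriptor matrix and retention indices below are synthetic stand-ins, not the paper's 129-alkylbenzene data.

```python
import numpy as np

# Hypothetical descriptor matrix X (rows: compounds, cols: graph descriptors)
# and retention indices y; values are simulated for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
true_coef = np.array([10.0, -3.0, 5.0, 0.5, 2.0])
y = 800 + X @ true_coef + rng.normal(scale=0.1, size=30)

# Ordinary least squares fit: y ~ b0 + X b
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

In a real QSRR workflow the calibration/prediction split and descriptor orthogonalization described in the abstract would be layered on top of this basic fit.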
Zakrzewski, Robert; Ciesielski, Witold
2005-09-25
The reaction between iodine and azide ions induced by thiopental was utilized as a postcolumn reaction for the chromatographic determination of thiopental. The method is based on the separation of thiopental on a Nova-Pak CN HP column with an acetonitrile-aqueous sodium azide solution as the mobile phase, followed by spectrophotometric measurement of the residual iodine (lambda=350 nm) from the postcolumn iodine-azide reaction induced by thiopental, after mixing an iodine solution containing iodide ions with the column effluent containing azide ions and thiopental. Chromatograms obtained for thiopental showed negative peaks as a result of the decrease in background absorbance. The detection limit (defined as S/N=3) was 20 nM (0.4 pmol injected), and calibration graphs, plotted as peak area versus concentration, were linear from 40 nM. The elaborated method was applied to the determination of thiopental in urine samples, where the detection limit (defined as S/N=3) was 0.025 nmol/ml urine and calibration graphs were linear from 0.05 nmol/ml urine. Authentic urine samples were analyzed; thiopental was determined at the nmol/ml level.
Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie
2007-11-01
Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by spectrophotometry combined with least-squares support vector machines (LS-SVM). The calibration graphs were linear in the ranges of 1.0-20.0, 1.0-10.0 and 1.0-20.0 microg ml(-1), with detection limits of 0.6, 0.5 and 0.7 microg ml(-1) for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals, with concentrations varied within the calibration ranges of the vitamins. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and LS-SVM were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and superior performance relative to PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal were 0.6926, 0.3755 and 0.4322 with PLS, and 0.0421, 0.0318 and 0.0457 with LS-SVM, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.
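The RMSEP figures quoted above follow the usual definition; a minimal helper, checked with made-up numbers rather than the paper's data:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction, the metric used above to
    compare the PLS and LS-SVM calibrations."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

err = rmsep([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])   # sqrt(4/3) ≈ 1.155
```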
Encapsulation of Volatile Citronella Essential Oil by Coacervation: Efficiency and Release Study
NASA Astrophysics Data System (ADS)
Manaf, M. A.; Subuki, I.; Jai, J.; Raslan, R.; Mustapa, A. N.
2018-05-01
Volatile citronella essential oil was encapsulated by simple coacervation and complex coacervation using Arabic gum and gelatin as wall materials, with glutaraldehyde as the crosslinking agent. A citronella standard calibration graph (R2 = 0.9523) was used for the determination of encapsulation efficiency and for the release study. The release kinetics were analysed based on Fick's law of diffusion for polymeric systems, and a linear graph of log fraction released versus log time was constructed to determine the release rate constant k and the diffusion exponent n. Both coacervation methods produced encapsulation efficiencies of around 94%. The capsules from both coacervation processes are discussed in terms of their morphology and release kinetic mechanisms.
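The log-log release analysis described above can be sketched as a linear fit: reading the plot as the power-law form F = k·t**n (the standard interpretation of such graphs), the slope gives n and the intercept gives log k. The data below are invented for illustration.

```python
import numpy as np

# Synthetic release data following F = k * t**n (made-up k and n,
# not the paper's measurements).
k_true, n_true = 0.12, 0.45
t = np.array([0.5, 1, 2, 4, 8, 16], dtype=float)
F = k_true * t ** n_true

# Linear fit of log F versus log t: slope = n, intercept = log k
n_fit, logk_fit = np.polyfit(np.log(t), np.log(F), 1)
k_fit = np.exp(logk_fit)
```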
Ozcan, Hakki Mevlut; Sagiroglu, Ayten
2010-08-01
In this study, a biosensor was constructed by immobilizing banana peel tissue homogenate onto a glassy carbon electrode surface. The effects of the amounts of immobilization materials, pH, buffer concentration and temperature on the biosensor response were studied. In addition, the detection ranges of 13 phenolic compounds were obtained from calibration graphs. Storage stability, repeatability of the biosensor, inhibitory effects and sample applications were also investigated. A typical calibration curve for the sensor revealed a linear range of 10-80 microM catechol. In reproducibility studies, the coefficient of variation and standard deviation were calculated as 2.69% and 1.44 x 10(-3) microM, respectively.
NASA Astrophysics Data System (ADS)
Salem, A. A.; Barsoum, B. N.; Izake, E. L.
2004-03-01
New spectrophotometric and fluorimetric methods have been developed to determine diazepam, bromazepam and clonazepam (1,4-benzodiazepines) in pure form, pharmaceutical preparations and biological fluids. The methods are based on measuring absorption or emission spectra in methanolic potassium hydroxide solution. The fluorimetric methods proved selective with low detection limits, whereas the photometric methods showed relatively high detection limits. The developed methods were applied successfully to drug determination in pharmaceutical preparations and urine samples. Photometric methods gave linear calibration graphs in the ranges of 2.85-28.5, 0.316-3.16 and 0.316-3.16 μg ml(-1), with detection limits of 1.27, 0.08 and 0.13 μg ml(-1) for diazepam, bromazepam and clonazepam, respectively. Corresponding average errors of 2.60, 5.26 and 3.93 and relative standard deviations (R.S.D.s) of 2.79, 2.12 and 2.83, respectively, were obtained. Fluorimetric methods gave linear calibration graphs in the ranges of 0.03-0.34, 0.03-0.32 and 0.03-0.38 μg ml(-1), with detection limits of 7.13, 5.67 and 16.47 ng ml(-1) for diazepam, bromazepam and clonazepam, respectively. Corresponding average errors of 0.29, 4.33 and 5.42 and R.S.D.s of 1.27, 1.96 and 1.14 were obtained. Student's t-test and the F-test were used, and satisfactory results were obtained.
Fresh broad (Vicia faba) tissue homogenate-based biosensor for determination of phenolic compounds.
Ozcan, Hakki Mevlut; Sagiroglu, Ayten
2014-08-01
In this study, a novel fresh broad (Vicia faba) tissue homogenate-based biosensor for the determination of phenolic compounds was developed. The biosensor was constructed by immobilizing fresh broad (Vicia faba) tissue homogenate onto a glassy carbon electrode. For stability, the homogenate was secured in a gelatin-glutaraldehyde cross-linking matrix using general immobilization techniques. In the optimization and characterization studies, the amounts of tissue homogenate and gelatin, the glutaraldehyde percentage, optimum pH, optimum temperature, optimum buffer concentration, thermal stability, interference effects, linear range, storage stability, repeatability and sample applications (wine, beer, fruit juices) were investigated. In addition, the detection ranges of thirteen phenolic compounds were obtained from calibration graphs. A typical calibration curve for the sensor revealed a linear range of 5-60 μM catechol. In reproducibility studies, the coefficient of variation (CV) and standard deviation (SD) were calculated as 1.59% and 0.64×10(-3) μM, respectively.
ERIC Educational Resources Information Center
Kar, Tugrul
2016-01-01
This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
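The assignment-problem bounds mentioned in the abstract can be illustrated with a hedged sketch: vertices are summarized here only by degree, and substitution/indel costs are uniform, which is a simplification rather than the paper's full binary linear program.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ged_upper_bound(deg1, deg2, c_sub=1.0, c_indel=1.0):
    """Assignment-based upper bound on an edit distance between two graphs
    summarized by their degree sequences: each vertex of graph 1 is either
    substituted for a vertex of graph 2 or deleted, and unmatched vertices
    of graph 2 are inserted."""
    n, m = len(deg1), len(deg2)
    size = n + m
    C = np.zeros((size, size))
    C[:n, :m] = np.abs(np.subtract.outer(deg1, deg2)) * c_sub
    C[:n, m:] = np.inf
    np.fill_diagonal(C[:n, m:], c_indel)      # delete vertex i of graph 1
    C[n:, :m] = np.inf
    np.fill_diagonal(C[n:, :m], c_indel)      # insert vertex j of graph 2
    r, c = linear_sum_assignment(C)
    return float(C[r, c].sum())
```

The true graph edit distance additionally charges for edge edits induced by the vertex mapping, which is what the paper's binary linear program captures exactly.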
Many-core graph analytics using accelerated sparse linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric
2016-05-01
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and TinkerPop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run time, for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without requiring customers to change their analytics code, thanks to compatibility with existing graph APIs.
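The linear-algebra view of graph computation can be made concrete with a breadth-first search written as repeated sparse matrix-vector products. Plain SciPy is used here as a stand-in for a GraphBLAS backend; the graph is a small invented example.

```python
import numpy as np
import scipy.sparse as sp

def bfs_levels(adj, source):
    """BFS expressed as sparse matrix-vector products: each product
    advances the frontier one level, in the GraphBLAS style."""
    n = adj.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    level = 0
    while frontier.any():
        levels[frontier] = level
        reached = adj.T @ frontier.astype(float)   # in-neighbor counts
        frontier = (reached > 0) & (levels == -1)  # unvisited vertices only
        level += 1
    return levels

# Undirected path graph 0-1-2-3 plus an isolated vertex 4
rows = [0, 1, 1, 2, 2, 3]
cols = [1, 0, 2, 1, 3, 2]
adj = sp.csr_matrix((np.ones(6), (rows, cols)), shape=(5, 5))
levels = bfs_levels(adj, 0)   # [0, 1, 2, 3, -1]
```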
Teixeira, Juliana Araujo; Baggio, Maria Luiza; Fisberg, Regina Mara; Marchioni, Dirce Maria Lobo
2010-12-01
The objective of this study was to estimate calibration regressions for the dietary data measured using a quantitative food frequency questionnaire (QFFQ) in the Natural History of HPV Infection in Men (HIM) Study in Brazil. A sample of 98 individuals from the HIM study answered one QFFQ and three 24-hour recalls (24HR) at interviews. The calibration was performed using linear regression analysis in which the 24HR was the dependent variable and the QFFQ the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means of the 24HR and the calibration-corrected QFFQ were statistically equal. Scatter plots between the instruments demonstrate increased correlation after the correction, although the points are more dispersed for models with worse explanatory power. Identification of the calibration regressions for the dietary data of the HIM study will make it possible to estimate the effect of diet on HPV infection, corrected for the measurement error of the QFFQ.
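The regression-calibration scheme described above can be sketched generically: regress the reference instrument on the questionnaire and apply the fitted line to correct the questionnaire values. The numbers below are simulated, not the HIM study data, and the adjustment covariates are omitted.

```python
import numpy as np

# Simulated example: an FFQ-style instrument with bias and noise,
# and a 24HR-style reference measurement of the same "true" intake.
rng = np.random.default_rng(1)
truth = rng.normal(2000, 300, size=98)             # "true" intake
ffq = 1.3 * truth + 200 + rng.normal(0, 150, 98)   # biased, noisy QFFQ
recall = truth + rng.normal(0, 100, 98)            # 24HR reference

# Calibration: regress the reference on the QFFQ, then apply the fit
slope, intercept = np.polyfit(ffq, recall, 1)
ffq_corrected = intercept + slope * ffq
```

By construction of ordinary least squares, the corrected QFFQ has the same mean as the reference, which is the sense in which the calibration removes the instrument's systematic bias.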
Determination of triclosan in antiperspirant gels by first-order derivative spectrophotometry.
Du, Lina; Li, Miao; Jin, Yiguang
2011-10-01
A first-order derivative UV spectrophotometric method was developed to determine triclosan, a broad-spectrum antimicrobial agent, in health care products containing fragrances, impurities which could interfere with the determination. Different extraction methods were compared. Triclosan was extracted with chloroform and diluted with ethanol, followed by the derivative spectrophotometric measurement. The interference of fragrances was completely eliminated. The calibration graph was linear in the range of 7.5-45 microg x mL(-1). The method is simple, rapid, sensitive and suitable for determining triclosan in fragrance-containing health care products.
Graph-based normalization and whitening for non-linear data analysis.
Aaron, Catherine
2006-01-01
In this paper we construct a graph-based normalization algorithm for non-linear data analysis. The principle of the algorithm is to obtain a spherical average neighborhood with unit radius. First we present a class of global dispersion measures used for "global normalization"; we then adapt these measures using a weighted graph to build a local normalization called "graph-based" normalization. We give details of the graph-based normalization algorithm and illustrate some results. In the second part we present a graph-based whitening algorithm built by analogy between the "global" and "local" problems.
Li, Dongdong; Wang, Lili
2010-05-01
A highly sensitive microstructured polymer optical fiber (MPOF) probe for hydrogen peroxide was made by forming a rhodamine 6G-doped titanium dioxide film on the side walls of the array holes in an MPOF. The probe was found to respond to hydrogen peroxide only at a certain concentration of potassium iodide in sulfuric acid solution. The calibration graph of fluorescence intensity versus hydrogen peroxide concentration is linear in the range of 1.6 x 10(-7) mol/L to 9.6 x 10(-5) mol/L. The method, with high sensitivity and a wide linear range, has been applied to the determination of trace amounts of hydrogen peroxide in real samples, such as rain water and contact lens disinfectant, with satisfactory results.
Tantishaiyakul, V; Poeaknapo, C; Sribun, P; Sirisuppanon, K
1998-06-01
A rapid, simple and direct assay procedure based on first-derivative spectrophotometry, using zero-crossing and peak-to-base measurements at 234 and 324 nm, respectively, has been developed for the specific determination of dextromethorphan HBr and bromhexine HCl in tablets. Calibration graphs were linear, with correlation coefficients of 0.9999 for both analytes. The limits of detection were 0.033 and 0.103 microgram ml-1 for dextromethorphan HBr and bromhexine HCl, respectively. An HPLC method was developed as the reference method. The results obtained by first-derivative spectrophotometry were in good agreement with those found by the HPLC method.
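The zero-crossing idea can be sketched numerically: the first derivative of a Gaussian absorbance band vanishes at the band maximum, which is where the co-formulated analyte can be read free of that band's contribution. The band position and width below are invented for illustration, not taken from the paper.

```python
import numpy as np

# A synthetic Gaussian absorbance band centered at 234 nm.
wl = np.linspace(200, 400, 2001)                  # wavelength grid, nm
band = np.exp(-((wl - 234) / 15.0) ** 2)          # absorbance of analyte A
deriv = np.gradient(band, wl)                     # first-derivative spectrum

# The derivative crosses zero at the absorption maximum of analyte A,
# so a second analyte's derivative signal measured there is free of A.
i = np.argmin(np.abs(wl - 234))                   # grid index of 234 nm
```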
3D Surface Reconstruction and Automatic Camera Calibration
NASA Technical Reports Server (NTRS)
Jalobeanu, Andre
2004-01-01
This view-graph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.
ERIC Educational Resources Information Center
Zhu, Zheng; Chen, Peijie; Zhuang, Jie
2013-01-01
Purpose: The purpose of this study was to develop and cross-validate an equation based on ActiGraph accelerometer GT3X output to predict children and youth's energy expenditure (EE) of physical activity (PA). Method: Participants were 367 Chinese children and youth (179 boys and 188 girls, aged 9 to 17 years old) who wore 1 ActiGraph GT3X…
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
Gnutzmann, Sven; Waltner, Daniel
2016-12-01
We consider exact and asymptotic solutions of the stationary cubic nonlinear Schrödinger equation on metric graphs. We focus on some basic example graphs. The asymptotic solutions are obtained using the canonical perturbation formalism developed in our earlier paper [S. Gnutzmann and D. Waltner, Phys. Rev. E 93, 032204 (2016)]. For closed example graphs (interval, ring, star graph, tadpole graph), we calculate spectral curves and show how the description of spectra reduces to known characteristic functions of linear quantum graphs in the low-intensity limit. Analogously for open examples, we show how nonlinear scattering of stationary waves arises and how it reduces to known linear scattering amplitudes at low intensities. In the short-wavelength asymptotics we discuss how genuine nonlinear effects may be described using the leading order of canonical perturbation theory: bifurcation of spectral curves (and the corresponding solutions) in closed graphs and multistability in open graphs.
Analysis of graphic representation ability in oscillation phenomena
NASA Astrophysics Data System (ADS)
Dewi, A. R. C.; Putra, N. M. D.; Susilo
2018-03-01
This study investigates students' ability to represent graphs of linear and harmonic functions in understanding oscillation phenomena. The research used mixed methods with a concurrent embedded design. The subjects were 35 students of class X MIA 3 at SMA 1 Bae Kudus. Data were collected through essays and interviews addressing the ability to read and draw graphs for Hooke's law and oscillation characteristics. The results showed that most students had difficulty drawing graphs of linear functions and of harmonic deviation versus time. Difficulties with the linear-function graphs included analyzing the variable data needed to construct the graph, confusing the placement of variables on the coordinate axes, determining the scale interval on each axis, and connecting the plotted points to form the graph. Difficulties in representing harmonic-function graphs included determining the time interval of the sine function, determining the initial deviation point of the drawing, finding the deviation equation for a given oscillation case, and confusing the maximum deviation (amplitude) with the extension of the spring caused by the load. Owing to the complexity of the characteristic attributes of oscillation graphs, students tended to represent harmonic functions less well than linear functions.
Characteristics of mobile MOSFET dosimetry system for megavoltage photon beams
Kumar, A. Sathish; Sharma, S. D.; Ravindran, B. Paul
2014-01-01
The characteristics of a mobile metal oxide semiconductor field effect transistor (mobile MOSFET) detector at standard bias were investigated for megavoltage photon beams. The study was performed with a brass alloy build-up cap for three beams: Co-60, 6 MV and 15 MV photons. The MOSFETs were calibrated, and their performance was analyzed with respect to dose rate dependence, energy dependence, field size dependence, linearity, build-up factor, and angular dependence for all three energies. A linear dose-response curve was noted for Co-60, 6 MV, and 15 MV photons, with calibration factors of 1.03, 1, and 0.79 cGy/mV, respectively. The calibration graph was obtained up to a dose of 600 cGy, and the dose-response curve was found to be linear. The MOSFETs were found to be energy independent both for measurements performed at depth and on the surface with build-up, and their response was independent of field size. Angular dependence was analyzed by keeping the MOSFET dosimeter in parallel and perpendicular orientations to the incident radiation, with and without build-up, on the surface of the phantom. The maximum variation for the three energies was within ± 2% for gantry angles of 90° and 270°; without build-up, the deviations at the same gantry angles were 6%, 25%, and 60%, respectively. The MOSFET response was found to be independent of dose rate for all three energies. These dosimetric characteristics make the MOSFET detector a suitable in vivo dosimeter for megavoltage photon beams. PMID:25190992
Global dynamics for switching systems and their extensions by linear differential equations
NASA Astrophysics Data System (ADS)
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-01
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
The Use of Graphs in Specific Situations of the Initial Conditions of Linear Differential Equations
ERIC Educational Resources Information Center
Buendía, Gabriela; Cordero, Francisco
2013-01-01
In this article, we present a discussion on the role of graphs and its significance in the relation between the number of initial conditions and the order of a linear differential equation, which is known as the initial value problem. We propose to make a functional framework for the use of graphs that intends to broaden the explanations of the…
ERIC Educational Resources Information Center
Earnest, Darrell Steven
2012-01-01
This dissertation explores fifth and eighth grade students' interpretations of three kinds of mathematical representations: number lines, the Cartesian plane, and graphs of linear functions. Two studies were conducted. In Study 1, I administered the paper-and-pencil Linear Representations Assessment (LRA) to examine students'…
Analysis of bakery products by laser-induced breakdown spectroscopy.
Bilge, Gonca; Boyacı, İsmail Hakkı; Eseller, Kemal Efe; Tamer, Uğur; Çakır, Serhat
2015-08-15
In this study, we focused on the detection of Na in bakery products using laser-induced breakdown spectroscopy (LIBS) as a quick and simple method. LIBS experiments examined the Na emission at 589 nm to quantify NaCl. A series of standard bread sample pellets containing various concentrations of NaCl (0.025-3.5%) were used to construct calibration curves and to determine the detection limits of the measurements. Calibration graphs were drawn as functions of NaCl and Na concentration, showing good linearity in the ranges of 0.025-3.5% NaCl and 0.01-1.4% Na, with correlation coefficient (R(2)) values greater than 0.98 and 0.96, respectively. The detection limits for NaCl and Na were 175 and 69 ppm, respectively. The experiments showed that LIBS is a convenient, rapid and in situ technique for quantifying NaCl concentrations in commercial bakery products.
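The generic calibration arithmetic behind such a LIBS analysis can be sketched as a linear fit plus a 3-sigma detection limit. The signal values and the blank standard deviation below are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

# Illustrative calibration data: % NaCl in the pellet vs. emission signal.
conc = np.array([0.025, 0.1, 0.5, 1.0, 2.0, 3.5])   # % NaCl
signal = 40.0 * conc + 2.0                           # synthetic intensity

# Linear calibration fit and coefficient of determination R^2
slope, intercept = np.polyfit(conc, signal, 1)
pred = intercept + slope * conc
ss_res = np.sum((signal - pred) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# 3-sigma detection limit from an assumed blank standard deviation
sigma_blank = 0.5
lod = 3 * sigma_blank / slope        # detection limit, % NaCl
```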
Ascorbic Acid Determination in Commercial Fruit Juice Samples by Cyclic Voltammetry
Pisoschi, Aurelia Magdalena; Danet, Andrei Florin; Kalinowski, Slawomir
2008-01-01
A method was developed for assessing ascorbic acid concentration in commercial fruit juice by cyclic voltammetry. The anodic oxidation peak for ascorbic acid occurs at about 490 mV on a Pt disc working electrode (versus SCE). The influence of the potential sweep rate on the peak height was studied. The calibration graph shows a linear dependence between peak height and ascorbic acid concentration in the range 0.1-10 mmol·L−1. The equation of the calibration graph was y = 6.391x + 0.1903, where y is the anodic peak height in μA and x the analyte concentration in mmol·L−1 (r2 = 0.9995, r.s.d. = 1.14%, n = 10, at 2 mmol·L−1 ascorbic acid). The developed method was applied to ascorbic acid assessment in fruit juice. The ascorbic acid content ranged from 0.83 to 1.67 mmol·L−1 for orange juice, from 0.58 to 1.93 mmol·L−1 for lemon juice, and from 0.46 to 1.84 mmol·L−1 for grapefruit juice. Different ascorbic acid concentrations (from standard solutions) were added to the analysed samples, with recoveries between 94.35% and 104%. The cyclic voltammetry results were compared with those obtained by the volumetric method with dichlorophenol indophenol, and the two methods were in good agreement. PMID:19343183
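The reported calibration line y = 6.391x + 0.1903 can be inverted directly to read a concentration from a measured anodic peak height; the helper function name below is ours, not the paper's.

```python
# Calibration line from the abstract: peak height y (uA) vs. ascorbic
# acid concentration x (mmol/L).
SLOPE, INTERCEPT = 6.391, 0.1903

def concentration_from_peak(peak_uA):
    """Invert the calibration line to get concentration in mmol/L."""
    return (peak_uA - INTERCEPT) / SLOPE

# e.g. a peak of 12.9723 uA corresponds to 2.0 mmol/L
```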
Solving Graph Laplacian Systems Through Recursive Bisections and Two-Grid Preconditioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponce, Colin; Vassilevski, Panayot S.
2016-02-18
We present a parallelizable direct method for computing the solution to graph Laplacian-based linear systems derived from graphs that can be hierarchically bipartitioned with small edge cuts. For a graph of size n with constant-size edge cuts, our method decomposes a graph Laplacian in time O(n log n), and then uses that decomposition to perform a linear solve in time O(n log n). We then use the developed technique to design a preconditioner for graph Laplacians that do not have this property. Finally, we augment this preconditioner with a two-grid method that accounts for much of the preconditioner's weaknesses. We present an analysis of this method, as well as a general theorem for the condition number of a general class of two-grid support graph-based preconditioners. Numerical experiments illustrate the performance of the studied methods.
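A minimal example of the kind of system being solved: a graph Laplacian L = D − A applied to a small path graph, solved here with unpreconditioned conjugate gradients (the recursive-bisection direct method and two-grid preconditioner of the paper are not reproduced).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Path graph on 5 vertices; build the Laplacian L = D - A.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
n = 5
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
L = sp.diags(np.ravel(A.sum(axis=1))) - A

# L is singular with nullspace span{1}; the system is consistent
# because b sums to zero.
b = np.array([1.0, 0.0, 0.0, 0.0, -1.0])
x, info = cg(L, b, atol=1e-10)
x -= x.mean()        # remove the arbitrary constant component
```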
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fazio, A.; Henry, B.; Hood, D.
1966-01-01
Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.
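The underlying task, recovering data coordinates from positions along linear or logarithmic axes, is a one-line interpolation; a small sketch of both cases:

```python
import math

def linear_coord(pos, pos0, pos1, val0, val1):
    """Map a position between two axis ticks to a value on a linear scale."""
    t = (pos - pos0) / (pos1 - pos0)
    return val0 + t * (val1 - val0)

def log_coord(pos, pos0, pos1, val0, val1):
    """Same mapping on a single-cycle logarithmic scale."""
    t = (pos - pos0) / (pos1 - pos0)
    return val0 * (val1 / val0) ** t

# Halfway between ticks labeled 10 and 100:
lin = linear_coord(0.5, 0.0, 1.0, 10, 100)   # 55.0
log = log_coord(0.5, 0.0, 1.0, 10, 100)      # ~31.62, i.e. 10*sqrt(10)
```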
2013-08-14
Keywords: Connectivity Graph; Graph Search; Bounded Disturbances; Linear Time-Varying (LTV); Clohessy-Wiltshire-Hill (CWH). …the linearization of the relative motion model given by the Hill-Clohessy-Wiltshire (CWH) equations is used [14]. A. Nonlinear equations of motion…equations can be used to describe the motion of the debris. B. Linearized HCW equations in discrete-time. For δr << R, the linearized Hill-Clohessy-Wiltshire
Graph-based linear scaling electronic structure theory.
Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo
2016-06-21
We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
Pebdani, Arezou Amiri; Shabani, Ali Mohammad Haji; Dadfarnia, Shayessteh; Khodadoust, Saeid
2015-08-05
A simple solid-phase microextraction method based on a molecularly imprinted polymer sorbent in a hollow fiber (MIP-HF-SPME), combined with a fiber optic-linear array spectrophotometer, has been applied for the extraction and determination of diclofenac in environmental and biological samples. The effects of different parameters such as pH, extraction time, type and volume of the organic solvent, stirring rate, and donor phase volume on the extraction efficiency for diclofenac were investigated and optimized. Under the optimal conditions, the calibration graph was linear (r² = 0.998) in the range of 3.0-85.0 μg L−1, with a detection limit of 0.7 μg L−1 for preconcentration of 25.0 mL of sample and a relative standard deviation (n = 6) of less than 5%. The method was applied successfully to the extraction and determination of diclofenac in different matrices (water, urine, and plasma), and its accuracy was examined through recovery experiments.
Hoang, Vu Dang; Ly, Dong Thi Ha; Tho, Nguyen Huu; Minh Thi Nguyen, Hue
2014-01-01
The application of first-order derivative and wavelet transforms to UV spectra and ratio spectra was proposed for the simultaneous determination of ibuprofen and paracetamol in their combined tablets. A new hybrid approach on the combined use of first-order derivative and wavelet transforms to spectra was also discussed. In this application, DWT (sym6 and haar), CWT (mexh), and FWT were optimized to give the highest spectral recoveries. Calibration graphs in the linear concentration ranges of ibuprofen (12–32 mg/L) and paracetamol (20–40 mg/L) were obtained by measuring the amplitudes of the transformed signals. Our proposed spectrophotometric methods were statistically compared to HPLC in terms of precision and accuracy. PMID:24949492
Shimada, K; Mino, T; Nakajima, M; Wakabayashi, H; Yamato, S
1994-11-04
A simple and sensitive high-performance liquid chromatographic (HPLC) method for the determination of phenothiazine (PHE) is described. PHE is converted to diphenylamine (DIP) by desulfurization with a Raney nickel catalyst; DIP is highly sensitive to electrochemical detection. The calibration graph for PHE quantification after desulfurization was linear between 0.1 and 2.0 ng per injection. The detection limit (signal-to-noise ratio = 3) of PHE after desulfurization was 10 pg, a sensitivity twenty times higher than that obtained for the parent compound PHE directly. The proposed desulfurization technique was applied to other PHE-related compounds. The structural confirmation of the desulfurized product of PHE was carried out by LC-MS using atmospheric pressure chemical ionization.
Structure and strategy in encoding simplified graphs
NASA Technical Reports Server (NTRS)
Schiano, Diane J.; Tversky, Barbara
1992-01-01
Tversky and Schiano (1989) found a systematic bias toward the 45-deg line in memory for the slopes of identical lines when embedded in graphs, but not in maps, suggesting the use of a cognitive reference frame specifically for encoding meaningful graphs. The present experiments explore this issue further using the linear configurations alone as stimuli. Experiments 1 and 2 demonstrate that perception and immediate memory for the slope of a test line within orthogonal 'axes' are predictable from purely structural considerations. In Experiments 3 and 4, subjects were instructed to use a diagonal-reference strategy in viewing the stimuli, which were described as 'graphs' only in Experiment 3. Results for both studies showed the diagonal bias previously found only for graphs. This pattern provides converging evidence for the diagonal as a cognitive reference frame in encoding linear graphs, and demonstrates that even in highly simplified displays, strategic factors can produce encoding biases not predictable from stimulus structure alone.
Label Information Guided Graph Construction for Semi-Supervised Learning.
Zhuang, Liansheng; Zhou, Zihan; Gao, Shenghua; Yin, Jingwen; Lin, Zhouchen; Ma, Yi
2017-09-01
In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into the state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning methods. Experiment results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and therefore is more effective for semi-supervised learning tasks.
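The core constraint, zero affinity between labeled samples of different classes, can be grafted onto any precomputed similarity graph; a minimal sketch of that step alone (illustrative, not the paper's semi-supervised low-rank representation program):

```python
import numpy as np

def apply_label_constraint(W, labels):
    """Zero out edges between labeled samples of different classes.

    W      : (n, n) symmetric affinity matrix
    labels : length-n integer array, -1 marking unlabeled samples
    """
    W = W.copy()
    labeled = np.where(labels >= 0)[0]
    for i in labeled:
        for j in labeled:
            if labels[i] != labels[j]:
                W[i, j] = 0.0          # forbid cross-class edges
    return W

W = np.ones((4, 4)) - np.eye(4)        # fully connected toy graph
labels = np.array([0, 1, -1, -1])      # two labeled, two unlabeled samples
Wc = apply_label_constraint(W, labels) # Wc[0, 1] == Wc[1, 0] == 0
```

Edges touching unlabeled samples are left untouched, so the constraint only injects information that is actually known.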
Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J
2009-06-01
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
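The "simple linear assignment" baseline mentioned above matches nodes using only unary compatibilities; SciPy's Hungarian-algorithm routine serves as a stand-in (toy scores, not learned ones):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Node compatibility scores between two 3-node graphs (higher = better).
compat = np.array([[0.9, 0.1, 0.0],
                   [0.2, 0.8, 0.1],
                   [0.0, 0.3, 0.7]])

# linear_sum_assignment minimizes total cost, so negate to maximize.
rows, cols = linear_sum_assignment(-compat)
# cols[i] is the node in graph 2 matched to node i in graph 1.
```

In the learning setting described above, the entries of `compat` would be parameterized functions fit so that the resulting matches agree with human-provided ones.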
Using MathCAD to Teach One-Dimensional Graphs
ERIC Educational Resources Information Center
Yushau, B.
2004-01-01
Topics such as linear and nonlinear equations and inequalities, compound inequalities, linear and nonlinear absolute value equations and inequalities, and rational equations and inequalities are commonly found in college algebra and precalculus textbooks. What is common about these topics is the fact that their solutions and graphs lie in the real line…
ERIC Educational Resources Information Center
Hattikudur, Shanta; Prather, Richard W.; Asquith, Pamela; Alibali, Martha W.; Knuth, Eric J.; Nathan, Mitchell
2012-01-01
Middle-school students are expected to understand key components of graphs, such as slope and y-intercept. However, constructing graphs is a skill that has received relatively little research attention. This study examined students' construction of graphs of linear functions, focusing specifically on the relative difficulties of graphing slope and…
NASA Astrophysics Data System (ADS)
Salem, A. A.
2006-09-01
New sensitive, reliable and reproducible fluorimetric methods for determining microgram amounts of nucleic acids, based on their reactions with Fe(II), Os(III) or Sm(III) complexes of 4,7-diphenyl-1,10-phenanthroline, are proposed. Two complementary single-stranded synthetic DNA sequences based on calf thymus, as well as their hybridized double-stranded form, were used. Nucleic acids were found to react instantaneously at room temperature in Tris-Cl buffer (pH 7) with the investigated complexes, resulting in a decrease of their fluorescence emission. Two fluorescence peaks around 388 and 567 nm were obtained for the three complexes using an excitation λmax of 280 nm and were used for this investigation. Linear calibration graphs in the range 1-6 μg/ml were obtained, with detection limits of 0.35-0.98 μg/ml. Using the calibration graphs for the synthetic dsDNA, relative standard deviations of 2.0-5.0% were obtained for analyzing DNA in the extraction products from calf thymus and human blood, with corresponding recoveries of 80-114%. Student's t-values at the 95% confidence level showed no significant difference between the real and measured values. Results obtained by these methods were compared with the ethidium bromide method using the F-test, and satisfactory results were obtained. The association constants and numbers of binding sites of synthetic ssDNA and dsDNA with the three complexes were estimated using the Rosenthal graphical method. The interaction mechanism was discussed, and an intercalation mechanism was suggested for the binding reaction between the nucleic acids and the three complexes.
Generalizing a categorization of students' interpretations of linear kinematics graphs
NASA Astrophysics Data System (ADS)
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-06-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.
NASA Astrophysics Data System (ADS)
Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji
Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve performance superior to Espresso and previous graph-based WSD methods, even though they have fewer parameters and are easy to calibrate.
Mohammadnezhad, Nasim; Matin, Amir Abbas; Samadi, Naser; Shomali, Ashkan; Valizadeh, Hassan
2017-01-01
Linear ionic liquid bonded to fused silica and its application as a solid-phase microextraction fiber for the extraction of bisphenol A (BPA) from water samples were studied. After optimization of the microextraction conditions (15 mL sample volume, extraction time of 40 min, extraction temperature of 30 ± 1°C, 300 μL acetonitrile as the desorption solvent, and desorption time of 7 min), the fiber was used to extract BPA from packed mineral water, followed by HPLC-UV on an XDB-C18 column (150 × 4.6 mm id, 3.5 μm particle) with a mobile phase of acetonitrile-water (45 + 55%, v/v) at a flow rate of 1 mL·min−1. A low LOD (0.20 μg·L−1) and good linearity (0.9977) in the calibration graph indicated that the proposed method was suitable for the determination of BPA.
Listing triangles in expected linear time on a class of power law graphs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann
Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis for the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
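Cohen's bucketing scheme described above is short to implement; a minimal version, assuming the graph is given as a dict of adjacency sets:

```python
def list_triangles(adj):
    """Cohen's algorithm: place each edge in the bucket of its
    lower-degree endpoint (ties broken by node id), then test each
    pair of edges per bucket for the closing third edge."""
    def rank(v):
        return (len(adj[v]), v)        # degree, then id, for consistent ties

    bucket = {v: [] for v in adj}
    for u in adj:
        for v in adj[u]:
            if rank(u) < rank(v):      # store each edge once, at lower end
                bucket[u].append(v)

    triangles = set()
    for u, nbrs in bucket.items():
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                v, w = nbrs[i], nbrs[j]
                if w in adj[v]:        # adjacency test completes a triangle
                    triangles.add(tuple(sorted((u, v, w))))
    return triangles

# The complete graph K4 contains exactly 4 triangles.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
tris = list_triangles(adj)
```

The analysis in the abstract bounds the number of edge pairs per bucket, which is exactly the work done by the inner double loop.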
Labeling RDF Graphs for Linear Time and Space Querying
NASA Astrophysics Data System (ADS)
Furche, Tim; Weinzierl, Antonius; Bry, François
Indices and data structures for web querying have mostly considered tree shaped data, reflecting the view of XML documents as tree-shaped. However, for RDF (and when querying ID/IDREF constraints in XML) data is indisputably graph-shaped. In this chapter, we first study existing indexing and labeling schemes for RDF and other graph data with focus on support for efficient adjacency and reachability queries. For XML, labeling schemes are an important part of the widespread adoption of XML, in particular for mapping XML to existing (relational) database technology. However, the existing indexing and labeling schemes for RDF (and graph data in general) sacrifice one of the most attractive properties of XML labeling schemes, the constant time (and per-node space) test for adjacency (child) and reachability (descendant). In the second part, we introduce the first labeling scheme for RDF data that retains this property and thus achieves linear time and space processing of acyclic RDF queries on a significantly larger class of graphs than previous approaches (which are mostly limited to tree-shaped data). Finally, we show how this labeling scheme can be applied to (acyclic) SPARQL queries to obtain an evaluation algorithm with time and space complexity linear in the number of resources in the queried RDF graph.
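For tree-shaped data, the constant-time descendant test referred to above is classically realized with pre/post-order interval labels; a small sketch of that scheme (the chapter's contribution is extending this property to graph-shaped RDF data):

```python
def interval_labels(tree, root):
    """Assign each node a (pre, post) interval via DFS; node u is an
    ancestor of node v iff u's interval strictly contains v's."""
    labels, counter = {}, [0]

    def dfs(u):
        pre = counter[0]
        counter[0] += 1
        for child in tree.get(u, []):
            dfs(child)
        labels[u] = (pre, counter[0])
        counter[0] += 1

    dfs(root)
    return labels

def is_descendant(labels, u, v):
    """Constant-time reachability test: is v a proper descendant of u?"""
    return labels[u][0] < labels[v][0] and labels[v][1] < labels[u][1]

tree = {"a": ["b", "c"], "b": ["d"]}
lab = interval_labels(tree, "a")
# is_descendant(lab, "a", "d") -> True; is_descendant(lab, "c", "d") -> False
```

Each node carries constant space and the test is two comparisons, which is precisely the property the RDF labeling scheme preserves on a larger class of graphs.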
Discrete Methods and their Applications
1993-02-03
problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman [1952] about…relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in…same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear time algorithm to compute
Preserving Differential Privacy in Degree-Correlation based Graph Generation
Wang, Yue; Wu, Xintao
2014-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
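The noise-calibration step can be illustrated with the basic Laplace mechanism: each learned model parameter is released with additive noise scaled to sensitivity/ε. This sketch uses global sensitivity for simplicity; the paper's contribution is calibrating to the smaller smooth sensitivity, and the function shown is our own illustration, not the paper's code:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release value + Lap(sensitivity / epsilon) noise for epsilon-DP."""
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

rng = np.random.default_rng(0)
degree_corr = 12.0                     # a learned dK-model parameter (toy value)
private = laplace_mechanism(degree_corr, sensitivity=4.0, epsilon=1.0, rng=rng)
```

Smooth sensitivity typically yields a much smaller `scale` than the worst-case global bound, which is why it gives lower-magnitude noise at the same privacy level.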
Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs
ERIC Educational Resources Information Center
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-01-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…
Graphing the Model or Modeling the Graph? Not-so-Subtle Problems in Linear IS-LM Analysis.
ERIC Educational Resources Information Center
Alston, Richard M.; Chi, Wan Fu
1989-01-01
Outlines the differences between the traditional and modern theoretical models of demand for money. States that the two models are often used interchangeably in textbooks, causing ambiguity. Argues against the use of linear specifications that imply that income velocity can increase without limit and that autonomous components of aggregate demand…
Headridge, J B; Smith, D R
1972-07-01
An induction-heated graphite furnace, coupled to a Unicam SP 90 atomic-absorption spectrometer, is described for the direct determination of trace elements in metals and alloys. The furnace is capable of operation at temperatures up to 2400 degrees, and has been used to obtain calibration graphs for the determination of ppm quantities of bismuth in lead-base alloys, cast irons and stainless steels, and for the determination of cadmium at the ppm level in zinc-base alloys. Milligram samples of the alloys were atomized directly. Calibration graphs for the determination of the elements in solutions were obtained for comparison. The accuracy and precision of the determination are presented and discussed.
Ono, I; Matsuda, K; Kanno, S
1996-04-12
A column-switching high-performance liquid chromatography method with ultraviolet detection at 210 nm has been developed for the determination of N-(trans-4-isopropylcyclohexylcarbonyl)-D-phenylalanine (AY4166, I) in human plasma. Plasma samples were prepared by solid-phase extraction with Sep-Pak Light tC18, followed by HPLC. The calibration graph for I was linear in the range 0.1-20 micrograms/ml. The limit of quantitation of I, in plasma, was 0.05 microgram/ml. The recovery of spiked I (0.5 microgram/ml) to drug-free plasma was over 92% and the relative standard deviation of spiked I (0.5 microgram/ml) compared to drug-free plasma was 4.3% (n = 8).
NASA Astrophysics Data System (ADS)
Lazic, V.; De Ninno, A.
2017-11-01
Laser-induced plasma spectroscopy was applied to particles attached to a substrate, a silica wafer covered with a thin oil film. The substrate itself interacts weakly with a ns Nd:YAG laser (1064 nm), while the presence of particles strongly enhances the plasma emission, detected here by a compact spectrometer array. Variations of the sample mass from one laser spot to another exceed one order of magnitude, as estimated by on-line photography and initial image calibration for different sample loadings. Consequently, the spectral lines from particles show extreme intensity fluctuations from one sampling point to another, in some cases ranging from the detection threshold to detector saturation. Under such conditions the common calibration approach based on averaged spectra, even when considering ratios of element lines (i.e., concentrations), produces errors too large for measuring sample compositions. On the other hand, the intensities of an analytical line and a reference line in single-shot spectra are linearly correlated. The corresponding slope depends on the concentration ratio and is weakly sensitive to fluctuations of the plasma temperature within a data set. Using these slopes to construct the calibration graphs significantly reduces the error bars, but does not eliminate the point scattering caused by the matrix effect, which is also responsible for large differences in the average plasma temperatures among the samples. Well-aligned calibration points were obtained after identifying pairs of transitions less sensitive to variations of the plasma temperature; this was achieved by simple theoretical simulations. Such a selection of analytical lines minimizes the matrix effect and, together with the chosen calibration approach, allows the relative element concentrations to be measured even in highly unstable laser-induced plasmas.
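The slope-based calibration amounts to a least-squares fit of analyte-line versus reference-line intensities across single-shot spectra; a sketch on synthetic data (the intensities and noise level below are assumptions for illustration, not the paper's measurements):

```python
import numpy as np

def calibration_slope(analyte_I, reference_I):
    """Least-squares slope of analyte vs. reference line intensities
    over a set of single-shot spectra; the slope tracks the
    concentration ratio while shot-to-shot mass fluctuations cancel."""
    slope, _intercept = np.polyfit(reference_I, analyte_I, 1)
    return slope

rng = np.random.default_rng(1)
ref = rng.uniform(1.0, 10.0, 50)            # shot-to-shot sample-mass spread
ana = 0.4 * ref + rng.normal(0, 0.05, 50)   # analyte line, true ratio 0.4
s = calibration_slope(ana, ref)             # recovers ~0.4 despite spread
```

One such slope per sample, plotted against the known concentration ratio, yields the calibration graph described above.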
Graph-cut based discrete-valued image reconstruction.
Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim
2015-05-01
Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
The growth rate of vertex-transitive planar graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babai, L.
1997-06-01
A graph is vertex-transitive if all of its vertices are equivalent under automorphisms. Confirming a conjecture of Jon Kleinberg and Eva Tardos, we prove the following trichotomy theorem concerning locally finite vertex-transitive planar graphs: the rate of growth of a graph with these properties is either linear or quadratic or exponential. The same result holds more generally for locally finite, almost vertex-transitive planar graphs (the automorphism group has a finite number of orbits). The proof uses the elements of hyperbolic plane geometry.
ERIC Educational Resources Information Center
Wemyss, Thomas; van Kampen, Paul
2013-01-01
We have investigated the various approaches taken by first-year university students (n[image omitted]550) when asked to determine the direction of motion, the constancy of speed, and a numerical value of the speed of an object at a point on a numerical linear distance-time graph. We investigated the prevalence of various well-known general…
Graph embedding and extensions: a general framework for dimensionality reduction.
Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen
2007-01-01
Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions.
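The simplest instance of this unified formulation is direct graph embedding via the graph Laplacian, a generalized eigenproblem; a minimal Laplacian-eigenmaps-style sketch (one member of the framework, not MFA itself):

```python
import numpy as np

def graph_embedding(W, dim):
    """Embed the nodes of a similarity graph W using the eigenvectors
    of the Laplacian L = D - W with the smallest nonzero eigenvalues."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    _vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]          # skip the trivial constant eigenvector

# Two tightly coupled pairs, weakly linked to each other:
W = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
Y = graph_embedding(W, 1)
# The 1-D embedding places nodes {0,1} and {2,3} on opposite sides of zero.
```

MFA follows the same recipe but uses an intrinsic graph encoding intraclass compactness and a penalty graph encoding interclass separability in place of the single similarity graph above.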
Inexpensive portable drug detector
NASA Technical Reports Server (NTRS)
Dimeff, J.; Heimbuch, A. H.; Parker, J. A.
1977-01-01
Inexpensive, easy-to-use, self-scanning, self-calibrating, portable unit automatically graphs fluorescence spectrum of drug sample. Device also measures rate of movement through chromatographic column for forensic and medical testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.
2017-12-12
smoothG is a collection of parallel C++ classes/functions that algebraically constructs reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software finds application in subsurface flow and power grid simulations, in which graph Laplacians arise.
Graph partitions and cluster synchronization in networks of oscillators
Schaub, Michael T.; O’Clery, Neave; Billeh, Yazan N.; Delvenne, Jean-Charles; Lambiotte, Renaud; Barahona, Mauricio
2017-01-01
Synchronization over networks depends strongly on the structure of the coupling between the oscillators. When the coupling presents certain regularities, the dynamics can be coarse-grained into clusters by means of External Equitable Partitions of the network graph and their associated quotient graphs. We exploit this graph-theoretical concept to study the phenomenon of cluster synchronization, in which different groups of nodes converge to distinct behaviors. We derive conditions and properties of networks in which such clustered behavior emerges, and show that the ensuing dynamics is the result of the localization of the eigenvectors of the associated graph Laplacians linked to the existence of invariant subspaces. The framework is applied to both linear and non-linear models, first for the standard case of networks with positive edges, before being generalized to the case of signed networks with both positive and negative interactions. We illustrate our results with examples of both signed and unsigned graphs for consensus dynamics and for partial synchronization of oscillator networks under the master stability function as well as Kuramoto oscillators. PMID:27781454
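The coarse-graining step via external equitable partitions can be illustrated directly: the quotient Laplacian is obtained by projecting the full Laplacian through the cell-indicator matrix, and for an equitable partition the invariance L P = P L_pi holds exactly. A minimal numpy sketch (the star-graph example in any test is illustrative, not from the paper):

```python
import numpy as np

def quotient_laplacian(L, cells):
    """Quotient Laplacian of a node partition (list of node-index lists).
    For an external equitable partition the invariance L @ P == P @ L_pi
    holds exactly, so the dynamics can be coarse-grained onto the cells."""
    P = np.zeros((L.shape[0], len(cells)))
    for j, cell in enumerate(cells):
        P[cell, j] = 1.0                       # indicator of cell j
    L_pi = np.linalg.inv(P.T @ P) @ (P.T @ L @ P)
    return P, L_pi
```

Checking L @ P against P @ L_pi is a practical test of whether a candidate partition is externally equitable, and hence whether the corresponding cluster-synchronized subspace is invariant.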
Ni, Hui; He, Guo-qing; Ruan, Hui; Chen, Qi-he; Chen, Feng
2005-01-01
A derivative ratio spectrophotometric method was used for the simultaneous determination of β-carotene and astaxanthin produced from Phaffia rhodozyma. Absorbances of a series of carotenoid standards in the range 441 nm to 490 nm showed that their absorption spectra obeyed Beer's law, and that additivity held when the concentrations of β-carotene, astaxanthin and their mixture were within the ranges 0 to 5 µg/ml, 0 to 6 µg/ml, and 0 to 6 µg/ml, respectively. When a wavelength interval (Δλ) of 2 nm was selected to calculate the first-derivative ratio spectra, the first-derivative amplitudes at 461 nm and 466 nm were suitable for the quantitative determination of β-carotene and astaxanthin, respectively. The effect of the divisor on the derivative ratio spectra was negligible; any concentration in the range 1.0 to 4.0 µg/ml used as the divisor is suitable for calculating the derivative ratio spectra of the two carotenoids. Calibration graphs were established for β-carotene within 0-6.0 µg/ml and for astaxanthin within 0-5.0 µg/ml, with corresponding regression equations y = -0.0082x - 0.0002 and y = 0.0146x - 0.0006, respectively. R-squared values in excess of 0.999 indicated good linearity of the calibration graphs. Sample recoveries were satisfactory (>99%), with relative standard deviations (RSD) below 5%. The method was successfully applied to the simultaneous determination of β-carotene and astaxanthin in laboratory-prepared mixtures and in the extract from the Phaffia rhodozyma culture. PMID:15909336
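The first-derivative ratio step can be sketched numerically: divide the mixture spectrum by a standard spectrum of one component, then differentiate over wavelength; the divisor component contributes only a constant, which the derivative removes. The Gaussian band shapes and concentrations below are illustrative assumptions, not the paper's measured spectra.

```python
import numpy as np

def first_derivative_ratio(mix_spectrum, divisor_spectrum, wavelengths):
    """Ratio spectrum (mixture / one component's standard spectrum)
    followed by a first derivative over the wavelength grid; the
    resulting amplitude tracks the other analyte alone."""
    return np.gradient(mix_spectrum / divisor_spectrum, wavelengths)
```

A quick simulation confirms the two key properties: the derivative amplitude is independent of the divisor component's concentration and linear in the other analyte's concentration.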
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
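The first of the two partitioning methods, thresholding a maximal weighted spanning forest, can be sketched with Kruskal's algorithm on descending weights followed by cutting forest edges below the threshold. A minimal stdlib-only sketch (edge tuples and weights below are illustrative):

```python
def threshold_partition(n, weighted_edges, threshold):
    """Build a maximal weighted spanning forest (Kruskal on descending
    weights), drop forest edges below the threshold, and return the
    resulting clusters as connected components.
    weighted_edges: list of (weight, u, v) tuples on nodes 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    forest = []
    for w, u, v in sorted(weighted_edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((w, u, v))
    parent = list(range(n))                 # re-cluster with kept edges
    for w, u, v in forest:
        if w >= threshold:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    clusters = {}
    for x in range(n):
        clusters.setdefault(find(x), set()).add(x)
    return sorted(clusters.values(), key=min)
```

Raising the threshold refines the partition, which is the hierarchy of clusterings the threshold method exposes.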
NASA Astrophysics Data System (ADS)
Ma, Yinbiao; Wei, Xiaojuan
2017-04-01
A novel method for the determination of platinum in waste platinum-loaded carbon catalyst samples was established by inductively coupled plasma optical emission spectrometry after microwave digestion of the samples with aqua regia. The influence of the sample digestion method, digestion time, digestion temperature and interfering ions on the determination was investigated. Under the optimized conditions, the calibration graph for Pt was linear over the range 0-200.00 mg/L, and the recovery was 95.67-104.29%. The relative standard deviation (RSD) for Pt was 1.78%. The proposed method and atomic absorption spectrometry gave consistent results on the same samples, confirming its suitability for the determination of platinum in waste platinum-loaded carbon catalyst samples.
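The calibration-and-recovery arithmetic used in method validations like this one is straightforward to sketch: fit the calibration line by least squares, back-calculate found concentrations for replicate sample measurements, then report percent recovery and RSD. The concentrations and signals in any test are illustrative, not the paper's data.

```python
import numpy as np

def calibration_stats(conc, signal, sample_signals, true_conc):
    """Least-squares calibration line plus percent recovery and relative
    standard deviation (RSD) for replicate measurements of a sample of
    known concentration. All inputs are plain sequences."""
    slope, intercept = np.polyfit(conc, signal, 1)
    found = (np.asarray(sample_signals) - intercept) / slope
    recovery = 100.0 * found.mean() / true_conc
    rsd = 100.0 * found.std(ddof=1) / found.mean()
    return slope, intercept, recovery, rsd
```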
1976-03-01
… pressure transducers with range maxima between 350 Pa and 35 MPa (0.05 lb/in² and 5000 lb/in²), and accelerometers with range maxima between 1.0 gₙ and 100 gₙ. Both types of transducer are calibrated by subjecting them, together with an accurate reference transducer, to a continuous sweep of the input parameter. Graphs are drawn by an X-Y recorder of …
El-Yazbi, Fawzi A; Amin, Omayma A; El-Kimary, Eman I; Khamis, Essam F; Younis, Sameh E
2016-08-01
An accurate, precise, rapid, specific and economic high-performance thin-layer chromatographic (HPTLC) method has been developed for the simultaneous quantitative determination of febuxostat (FEB) and diclofenac potassium (DIC). The chromatographic separation was performed on precoated silica gel 60 GF254 plates with chloroform-methanol 7:3 (v/v) as the mobile phase. The developed plates were scanned and quantified at 289 nm. Experimental conditions including band size, mobile phase composition and chamber-saturation time were critically studied, and the optimum conditions were selected. A satisfactory resolution (Rs = 2.67) with RF values of 0.48 and 0.69, and high sensitivity with limits of detection of 4 and 7 ng/band for FEB and DIC, respectively, were obtained. In addition, derivative ratio and ratio difference spectrophotometric methods were established for the analysis of the same mixture. All methods were validated as per the ICH guidelines. In the HPTLC method, the calibration plots were linear between 0.01-0.55 and 0.02-0.60 µg/band for FEB and DIC, respectively. For the spectrophotometric methods, the calibration graphs were linear between 2-14 and 4-18 µg/mL for FEB and DIC, respectively. The simplicity and specificity of the proposed methods suggest their application in quality control analysis of FEB and DIC in their raw materials and tablets. A comparison of the proposed methods with the existing methods is presented.
Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.
Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru
2015-01-01
Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend.
NASA Astrophysics Data System (ADS)
Arulraj, Abraham Daniel; Vijayan, Muthunanthevar; Vasantha, Vairathevar Sivasamy
2015-09-01
In this paper, a very simple and rapid sensor has been developed for the spectrophotometric determination of picomolar levels of hydrazine using Alizarin red. The optical intensity of the probe decreased in the presence of hydrazine. The LOD, calculated from the linear graph between 5 and 100 pM, is 0.66 pM of hydrazine, which is well below the risk level proposed by the Agency for Toxic Substances and Disease Registry. The selectivity of the probe for hydrazine was tested in the presence of commonly encountered metal ions and anions. The calibration curves showed good linearity over working ranges of 5-100 pM and 0.5-40 mM, respectively, with R² = 0.9911 and 0.9744, indicating the validity of the Beer-Lambert law. The binding constant and the free-energy change were determined by the Benesi-Hildebrand method. Determination of hydrazine in environmental water and human urine samples was successfully performed by the proposed method, with recoveries of 100%.
Granada, Andréa; Murakami, Fabio S; Sartori, Tatiane; Lemos-Senna, Elenara; Silva, Marcos A S
2008-01-01
A simple, rapid, and sensitive reversed-phase high-performance liquid chromatographic method was developed and validated to quantify camptothecin (CPT) in polymeric nanocapsule suspensions. The chromatographic separation was performed on a Supelcosil LC-18 column (15 cm × 4.6 mm i.d., 5 µm) using a mobile phase consisting of methanol-10 mM KH2PO4 (60 + 40, v/v; pH 2.8) at a flow rate of 1.0 mL/min and ultraviolet detection at 254 nm. The calibration graph was linear from 0.5 to 3.0 µg/mL with a correlation coefficient of 0.9979, and the limit of quantitation was 0.35 µg/mL. The assay recovery ranged from 97.3 to 105.0%. The intraday and interday relative standard deviation values were < 5.0%. The validation results confirmed that the developed method is specific, linear, accurate, and precise for its intended use. The current method was successfully applied to the evaluation of CPT entrapment efficiency and drug content in polymeric nanocapsule suspensions during the early stage of formulation development.
An Algebraic Approach to Inference in Complex Networked Structures
2015-07-09
… [44], [45], [46], where the shift is the elementary non-trivial filter that generates, under an appropriate notion of shift invariance, all linear shift-invariant filters. The shift is the elementary filter, and its output is a graph signal with the value at vertex n of the graph given approximately by a weighted linear combination of … (AFRL-AFOSR-VA-TR-2015-0265; final report, Jose Moura, Carnegie Mellon University.)
Text categorization of biomedical data sets using graph kernels and a controlled vocabulary.
Bleik, Said; Mishra, Meenakshi; Huan, Jun; Song, Min
2013-01-01
Recently, graph representations of text have been showing improved performance over conventional bag-of-words representations in text categorization applications. In this paper, we present a graph-based representation for biomedical articles and use graph kernels to classify those articles into high-level categories. In our representation, common biomedical concepts and semantic relationships are identified with the help of an existing ontology and are used to build a rich graph structure that provides a consistent feature set and preserves additional semantic information that could improve a classifier's performance. We attempt to classify the graphs using both a set-based graph kernel that is capable of dealing with the disconnected nature of the graphs and a simple linear kernel. Finally, we report the results comparing the classification performance of the kernel classifiers to common text-based classifiers.
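A drastically simplified version of the set-based idea can be sketched by comparing two concept graphs through their node-label sets alone, with a normalized intersection as the kernel value. This sidesteps the disconnected structure of the graphs; the paper's actual kernel is richer, so treat this as a toy sketch with invented label sets.

```python
def set_graph_kernel(labels_a, labels_b):
    """Toy set-based kernel between two concept graphs represented by
    their node-label sets: normalized intersection size, in [0, 1]."""
    a, b = set(labels_a), set(labels_b)
    if not a or not b:
        return 0.0
    return len(a & b) / (len(a) ** 0.5 * len(b) ** 0.5)
```

The resulting Gram matrix is positive semidefinite (it is a normalized intersection kernel), so it can be fed to any kernel classifier alongside a plain linear kernel for comparison.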
SpectralNET – an application for spectral graph analysis and visualization
Forman, Joshua J; Clemons, Paul A; Schreiber, Stuart L; Haggarty, Stephen J
2005-01-01
Background Graph theory provides a computational framework for modeling a variety of datasets including those emerging from genomics, proteomics, and chemical genetics. Networks of genes, proteins, small molecules, or other objects of study can be represented as graphs of nodes (vertices) and interactions (edges) that can carry different weights. SpectralNET is a flexible application for analyzing and visualizing these biological and chemical networks. Results Available both as a standalone .NET executable and as an ASP.NET web application, SpectralNET was designed specifically with the analysis of graph-theoretic metrics in mind, a computational task not easily accessible using currently available applications. Users can choose either to upload a network for analysis using a variety of input formats, or to have SpectralNET generate an idealized random network for comparison to a real-world dataset. Whichever graph-generation method is used, SpectralNET displays detailed information about each connected component of the graph, including graphs of degree distribution, clustering coefficient by degree, and average distance by degree. In addition, extensive information about the selected vertex is shown, including degree, clustering coefficient, various distance metrics, and the corresponding components of the adjacency, Laplacian, and normalized Laplacian eigenvectors. SpectralNET also displays several graph visualizations, including a linear dimensionality reduction for uploaded datasets (Principal Components Analysis) and a non-linear dimensionality reduction that provides an elegant view of global graph structure (Laplacian eigenvectors). Conclusion SpectralNET provides an easily accessible means of analyzing graph-theoretic metrics for data modeling and dimensionality reduction. SpectralNET is publicly available as both a .NET application and an ASP.NET web application from http://chembank.broad.harvard.edu/resources/. Source code is available upon request. PMID:16236170
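The per-vertex spectral quantities described above (Laplacian and normalized Laplacian eigenvector components) reduce to a few lines of linear algebra. A minimal numpy sketch, not SpectralNET's implementation:

```python
import numpy as np

def normalized_laplacian_eigs(A):
    """Combinatorial Laplacian and symmetric normalized Laplacian of an
    undirected graph given by its adjacency matrix A, plus the
    eigendecomposition of the normalized Laplacian (eigenvalues in
    ascending order); isolated vertices get a zero scaling factor."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    with np.errstate(divide='ignore'):
        dinv = np.where(d > 0, d.astype(float) ** -0.5, 0.0)
    L_sym = np.diag(dinv) @ L @ np.diag(dinv)
    vals, vecs = np.linalg.eigh(L_sym)
    return L, L_sym, vals, vecs
```

The low-eigenvalue eigenvectors are the coordinates behind the "Laplacian eigenvectors" visualization: plotting vertices by their components in the second and third eigenvectors gives the non-linear layout of global graph structure.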
NASA Astrophysics Data System (ADS)
Doherty, W.; Lightfoot, P. C.; Ames, D. E.
2014-08-01
The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement dispersion (statistical) of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range-based merit function ωm which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb, and dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ± 3 in the fifth significant figure could be routinely and reliably detected for Cu65/63 and Ni61/62. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves, and not by two linearly correlated quantities, which is the usual interpretation of such graphs.
The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale dependent, parametric curve effect.
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors in their objective functions so as to remove them from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and that a complex convex problem must be solved. In this paper, we present a novel method that eliminates the effects of the errors from the projection space (representation) rather than from the input space. We first prove that ℓ1-, ℓ2-, ℓ∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are then developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation, and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR) and latent LRR, least squares regression, sparse subspace clustering, and locally linear representation.
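The construction admits a compact sketch: code each sample over the remaining samples by ridge-regularized least squares, then keep only the k largest-magnitude coefficients, which intrasubspace projection dominance suggests point to same-subspace neighbors. This is a minimal numpy sketch with assumed parameters (k, the ridge weight), not the authors' exact algorithm.

```python
import numpy as np

def l2_graph(X, k=3, lam=1e-3):
    """Sketch of an L2-graph on the columns of X (features x samples):
    ridge least-squares coding of each sample over the others, keeping
    the k largest-magnitude coefficients, then symmetrizing."""
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        D = X[:, idx]
        c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
        for t in np.argsort(-np.abs(c))[:k]:   # keep dominant coefficients
            W[i, idx[t]] = abs(c[t])
    return np.maximum(W, W.T)   # symmetric similarity graph
```

On data drawn from orthogonal subspaces the coefficients over the other subspace vanish, so the kept edges connect only intrasubspace points, which is the property the clustering step relies on.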
Hamiltonian Cycle Enumeration via Fermion-Zeon Convolution
NASA Astrophysics Data System (ADS)
Staples, G. Stacey
2017-12-01
Beginning with a simple graph having finite vertex set V, operators are induced on fermion and zeon algebras by the action of the graph's adjacency matrix and combinatorial Laplacian on the vector space spanned by the graph's vertices. When the graph is simple (undirected with no loops or multiple edges), the matrices are symmetric and the induced operators are self-adjoint. The goal of the current paper is to recover a number of known graph-theoretic results from quantum observables constructed as linear operators on fermion and zeon Fock spaces. By considering an "indeterminate" fermion/zeon Fock space, a fermion-zeon convolution operator is defined whose trace recovers the number of Hamiltonian cycles in the graph. This convolution operator is a quantum observable whose expectation reveals the number of Hamiltonian cycles in the graph.
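As a ground truth for the quantity the convolution trace recovers, Hamiltonian cycles in a small simple graph can be counted by brute force: fix a start vertex to quotient out rotations and halve to quotient out direction. This factorial-time check is only for small graphs, unlike the algebraic operator construction in the paper.

```python
from itertools import permutations

def hamiltonian_cycle_count(adj):
    """Brute-force count of Hamiltonian cycles in a simple undirected
    graph given by an adjacency (0/1) matrix as nested lists. Each
    undirected cycle is counted once, not per start vertex/orientation."""
    n = len(adj)
    count = 0
    for perm in permutations(range(1, n)):   # vertex 0 fixed: no rotations
        cycle = (0,) + perm
        if all(adj[cycle[i]][cycle[(i + 1) % n]] for i in range(n)):
            count += 1
    return count // 2    # each cycle is traversed in both directions
```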
Science 101: When Drawing Graphs from Collected Data, Why Don't You Just "Connect the Dots?"
ERIC Educational Resources Information Center
Robertson, William C.
2007-01-01
Using "error bars" on graphs is a good way to help students see that, within the inherent uncertainty of the measurements due to the instruments used for measurement, the data points do, in fact, lie along the line that represents the linear relationship. In this article, the author explains why connecting the dots on graphs of collected data is…
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function by selecting a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee a solution within a (user-defined) bound of the optimum is used to recover the optimal registration parameters. The method is therefore gradient-free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.
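The "minimal path extraction in a weighted graph" primitive underlying the formulation is just shortest-path search. A stdlib Dijkstra sketch: in the registration framing the nodes would be (control point, label) states and edge weights the matching-plus-smoothness costs, but here the graph is an assumed toy adjacency dict, not the paper's construction.

```python
import heapq

def minimal_path(graph, source, target):
    """Minimal-cost path in a weighted digraph (Dijkstra, nonnegative
    weights). graph: dict mapping node -> list of (neighbor, weight).
    Returns (path as node list, total cost)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[target]
```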
Finding Strong Bridges and Strong Articulation Points in Linear Time
NASA Astrophysics Data System (ADS)
Italiano, Giuseppe F.; Laura, Luigi; Santaroni, Federico
Given a directed graph G, an edge is a strong bridge if its removal increases the number of strongly connected components of G. Similarly, we say that a vertex is a strong articulation point if its removal increases the number of strongly connected components of G. In this paper, we present linear-time algorithms for computing all the strong bridges and all the strong articulation points of directed graphs, solving an open problem posed in [2].
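The definition itself yields an easy (if slow) reference algorithm: delete each edge in turn and check whether the number of strongly connected components grows. This quadratic-time check, sketched below with an iterative Kosaraju SCC count, is only a baseline to trust on small graphs; the paper's contribution is doing the same in linear time.

```python
def scc_count(n, edges):
    """Number of strongly connected components (iterative Kosaraju)."""
    g = [[] for _ in range(n)]
    gr = [[] for _ in range(n)]
    for u, v in edges:
        g[u].append(v)
        gr[v].append(u)
    seen = [False] * n
    order = []
    def dfs(adj, s, record):
        stack = [(s, iter(adj[s]))]
        seen[s] = True
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                stack.pop()
                if record:
                    order.append(u)          # finish order, first pass
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(adj[nxt])))
    for s in range(n):
        if not seen[s]:
            dfs(g, s, True)
    seen = [False] * n
    components = 0
    for s in reversed(order):                # reverse graph, second pass
        if not seen[s]:
            dfs(gr, s, False)
            components += 1
    return components

def strong_bridges(n, edges):
    """Naive strong-bridge finder: an edge is a strong bridge iff its
    removal increases the SCC count."""
    base = scc_count(n, edges)
    return [e for i, e in enumerate(edges)
            if scc_count(n, edges[:i] + edges[i + 1:]) > base]
```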
1991-11-08
… not only simple bounds on delays but also relate the delays in linear inequalities so that tradeoffs are apparent. We model circuits as communicating … a set of linear inequalities constraining the variables. These relations provide synthesis tools with information about tradeoffs between circuit delays … available to express the original circuit as a graph of elementary gates and then cover the graph's fanout-free trees with collections of three-input …
40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures
Code of Federal Regulations, 2012 CFR
2012-07-01
… Systems 1.2.1 Calibration Error Test and Linearity Check Procedures. Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected …, and when calibration adjustments should be made). Identify any calibration error test and linearity …
Sarafidis, Pantelis A; Georgianos, Panagiotis I; Karpetas, Antonios; Bikos, Athanasios; Korelidou, Linda; Tersi, Maria; Divanis, Dimitrios; Tzanis, Georgios; Mavromatidis, Konstantinos; Liakopoulos, Vassilios; Zebekakis, Pantelis E; Lasaridis, Anastasios; Protogerou, Athanase D
2014-01-01
Elevated wave reflections and arterial stiffness, as well as ambulatory blood pressure (BP), are independent predictors of cardiovascular risk in end-stage renal disease. This study is the first to evaluate in hemodialysis patients the validity of a new ambulatory oscillometric device (Mobil-O-Graph, IEM, Germany), which estimates aortic BP, augmentation index (AIx) and pulse wave velocity (PWV). Aortic SBP (aSBP), heart rate-adjusted AIx (AIx(75)) and PWV measured with the Mobil-O-Graph were compared with the values from the most widely used tonometric device (Sphygmocor, ArtCor, Australia) in 73 hemodialysis patients. Measurements were made in randomized order after 10 min of rest in the supine position, at least 30 min before a dialysis session. Brachial BP (mercury sphygmomanometer) was used for the calibration of the Sphygmocor waveform. Sphygmocor-derived aSBP and AIx(75) did not differ from the corresponding Mobil-O-Graph measurements (aSBP: 136.3 ± 19.6 vs. 133.5 ± 19.3 mm Hg, p = 0.068; AIx(75): 28.4 ± 9.3 vs. 30.0 ± 11.8%, p = 0.229). The small difference in aSBP is perhaps explained by a difference in the brachial SBP used for calibration (146.9 ± 20.4 vs. 145.2 ± 19.9 mm Hg, p = 0.341). Sphygmocor PWV was higher than Mobil-O-Graph PWV (10.3 ± 3.4 vs. 9.5 ± 2.1 m/s, p < 0.01). All three parameters estimated by the Mobil-O-Graph showed highly significant (p < 0.001) correlations with the corresponding Sphygmocor measurements (aSBP, r = 0.770; AIx(75), r = 0.400; PWV, r = 0.739). The Bland-Altman plots for aSBP and AIx(75) showed acceptable agreement between the two devices and no evidence of systematic bias for PWV. As in other populations, acceptable agreement between the Mobil-O-Graph and Sphygmocor was evident for aSBP and AIx(75) in hemodialysis patients; PWV was slightly underestimated by the Mobil-O-Graph.
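The Bland-Altman agreement statistics used in device-validation studies like this reduce to the mean of the paired differences (bias) and bias ± 1.96 SD (95% limits of agreement). A minimal numpy sketch with illustrative numbers, not the study's measurements:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two paired measurement
    methods: mean bias of (a - b) and 95% limits of agreement
    (bias +/- 1.96 * sample SD of the differences)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Plotting the differences against the pairwise means, with horizontal lines at the bias and the two limits, gives the standard Bland-Altman plot referenced in the abstract.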
NASA Astrophysics Data System (ADS)
Hermawan, D.; Suwandri; Sulaeman, U.; Istiqomah, A.; Aboul-Enein, H. Y.
2017-02-01
A simple high-performance liquid chromatography (HPLC) method has been developed in this study for the analysis of miconazole, an antifungal drug, in a powder sample. The optimized HPLC system using a C8 column was achieved with a mobile phase of methanol:water (85:15, v/v), a flow rate of 0.8 mL/min, and UV detection at 220 nm. The calibration graph was linear in the range from 10 to 50 mg/L with r² of 0.9983. The limit of detection (LOD) and limit of quantitation (LOQ) were 2.24 mg/L and 7.47 mg/L, respectively. The present HPLC method is applicable to the determination of miconazole in the powder sample, with a recovery of 101.28% (RSD = 0.96%, n = 3). The developed method provides short analysis time, high reproducibility and high sensitivity.
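LOD/LOQ figures like those above are commonly derived from the calibration regression via the ICH formulas LOD = 3.3 s/S and LOQ = 10 s/S, with S the slope and s the residual standard deviation. A numpy sketch, assuming this regression-based route (the paper does not state which ICH variant it used); the data in any test are illustrative.

```python
import numpy as np

def lod_loq(conc, signal):
    """ICH-style limits from a calibration line: LOD = 3.3*s/S and
    LOQ = 10*s/S, where S is the slope and s the residual standard
    deviation of the regression (n - 2 degrees of freedom)."""
    conc, signal = np.asarray(conc, float), np.asarray(signal, float)
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    s = np.sqrt((resid ** 2).sum() / (len(conc) - 2))
    return 3.3 * s / slope, 10.0 * s / slope
```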
J Greenhow, E; Viñas, P
1984-08-01
A systematic comparison has been made of two indicator systems for the non-aqueous catalytic thermometric titration of strong and weak organic bases. The indicator reagents, alpha-methylstyrene and mixtures of acetic anhydride and hydroxy compounds, are shown to give results (for 14 representative bases) which do not differ significantly in coefficient of variation or titration error. Calibration graphs for all the samples, in the range 0.01-0.1 meq, are linear, with correlation coefficients of 0.995 or better. Aniline, benzylamine, n-butylamine, morpholine, pyrrole, l-dopa, alpha-methyl-l-dopa, dl-alpha-alanine, dl-leucine and l-cysteine cannot be determined when acetic anhydride is present in the sample solution, but some primary and secondary amines can. This is explained in terms of the rates of acetylation of the amino groups.
Ma, Yan; Cao, Wei; Qiao, Shuang; Liu, Wenwen; Yang, Jinghe
2011-01-01
Chemiluminescence (CL) detection for the determination of estrogen benzoate, using the reaction of tris(1,10-phenanthroline)ruthenium(II)-Na2SO3-permanganate, is described. The method is based on the CL reaction of estrogen benzoate (EB) with acidic potassium permanganate and tris(1,10-phenanthroline)ruthenium(II). The CL intensity is greatly enhanced when Na2SO3 is added. After optimization of the different experimental parameters, the calibration graph for estrogen benzoate is linear in the range 0.05-10 µg/mL. The limit of detection (3s criterion) is 0.024 µg/mL, and the relative standard deviation was 1.3% for 1.0 µg/mL estrogen benzoate (n = 11). The proposed method was successfully applied to commercial injection samples and emulsion cosmetics. The mechanism of the CL reaction was also studied.
A kinetic method for the determination of thiourea by its catalytic effect in micellar media
NASA Astrophysics Data System (ADS)
Abbasi, Shahryar; Khani, Hossein; Gholivand, Mohammad Bagher; Naghipour, Ali; Farmany, Abbas; Abbasi, Freshteh
2009-03-01
A highly sensitive, selective and simple kinetic method was developed for the determination of trace levels of thiourea, based on its catalytic effect on the oxidation of Janus Green in phosphoric acid medium in the presence of Triton X-100 surfactant, without any separation or pre-concentration steps. The reaction was monitored spectrophotometrically by tracing the formation of the green-colored oxidized product of Janus Green at 617 nm within 15 min of mixing the reagents. The effect of several factors on the reaction rate was investigated. Following the recommended procedure, thiourea could be determined with a linear calibration graph over the 0.03-10.00 μg/ml range. The detection limit of the proposed method is 0.02 μg/ml. Most foreign species do not interfere with the determination. The high sensitivity and selectivity of the proposed method allowed its successful application to fruit juice and industrial waste water.
Seno, Kunihiko; Matumura, Kazuki; Oshima, Mitsuko; Motomizu, Shoji
2008-04-01
1-(3-Dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC·HCl) is a very useful agent for forming amide bonds (peptide bonds) in an aqueous medium. A simple and fast detection system was developed using the reaction with pyridine and ethylenediamine in acidic aqueous solution and spectrophotometric flow injection analysis. The absorbance was measured at 400 nm and the reaction was accelerated at 40 °C. The calibration graph showed good linearity from 0 to 10% EDC·HCl solutions: the regression equation was y = 3.15 × 10^4 x (y, peak area; x, % concentration of EDC·HCl). The RSD was under 1.0%. Sample throughput was 15 h^-1. This method was applied to monitoring the EDC·HCl concentration remaining after the anhydration of phthalic acid in water, the esterification of acetic acid in methanol, and the dehydration condensation of malonic acid and ethylenediamine in water.
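The reported regression equation y = 3.15 × 10^4 x can be inverted to recover the EDC·HCl concentration from a measured peak area. A minimal sketch (the peak-area value fed in is hypothetical):

```python
# Back-calculating % EDC.HCl from a peak area using the reported
# regression equation y = 3.15e4 * x (x: % concentration, y: peak area).
SLOPE = 3.15e4

def concentration_from_peak_area(peak_area):
    return peak_area / SLOPE

x = concentration_from_peak_area(1.575e5)  # hypothetical peak area -> 5.0 %
```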
NASA Astrophysics Data System (ADS)
Lotfy, Hayam M.; Hegazy, Maha A.; Mowaka, Shereen; Mohamed, Ekram Hany
2016-01-01
A comparative study of smart spectrophotometric techniques for the simultaneous determination of Omeprazole (OMP), Tinidazole (TIN) and Doxycycline (DOX) without prior separation steps is presented. These techniques consist of several consecutive steps utilizing zero-order, ratio and/or derivative spectra. The proposed techniques comprise nine simple methods, namely direct spectrophotometry, dual wavelength, first derivative-zero crossing, amplitude factor, spectrum subtraction, ratio subtraction, derivative ratio-zero crossing, constant center, and successive derivative ratio. The calibration graphs are linear over the concentration ranges of 1-20 μg/mL, 5-40 μg/mL and 2-30 μg/mL for OMP, TIN and DOX, respectively. These methods were tested by analyzing synthetic mixtures of the above drugs and successfully applied to a commercial pharmaceutical preparation. The methods were validated according to the ICH guidelines; accuracy, precision, and repeatability were found to be within the acceptable limits.
Manassra, Adnan; Khamis, Mustafa; El-Dakiky, Magdy; Abdel-Qader, Zuhair; Al-Rimawi, Fuad
2010-03-11
An HPLC method using UV detection is proposed for the simultaneous determination of pseudoephedrine hydrochloride, codeine phosphate, and triprolidine hydrochloride in a liquid formulation. A C18 column (250 mm × 4.0 mm) is used as the stationary phase with a mixture of methanol:acetate buffer:acetonitrile (85:5:10, v/v) as the mobile phase. The factors affecting column separation of the analytes were studied. The calibration graphs exhibited linear concentration ranges of 0.06-1.0 mg/ml for pseudoephedrine hydrochloride, 0.02-1.0 mg/ml for codeine phosphate, and 0.0025-1.0 mg/ml for triprolidine hydrochloride for a sample size of 5 μl, with correlation coefficients better than 0.999 for all active ingredients studied. The results demonstrate that this method is reliable, reproducible and suitable for routine use, with an analysis time of less than 4 min. Copyright 2009 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Samadi, A.; Amjadi, M.
2016-07-01
Halloysite nanotubes (HNTs) have been introduced as a new solid phase extraction adsorbent for preconcentration of iron(II) as a complex with 2,2′-bipyridine. The cationic complex is effectively adsorbed on the sorbent in the pH range of 3.5-6.0 and efficiently desorbed by trichloroacetic acid. The eluted complex has a strong absorption around 520 nm, which was used for the determination of Fe(II). After optimizing the extraction conditions, the linear range of the calibration graph was 5.0-500 μg/L with a detection limit of 1.3 μg/L. The proposed method was successfully applied to the determination of trace iron in various water and food samples, and the accuracy was assessed through recovery experiments and analysis of a certified reference material (NIST 1643e).
ERIC Educational Resources Information Center
Caddle, Mary C.; Brizuela, Barbara M.
2011-01-01
This paper looks at 21 fifth grade students as they discuss a linear graph in the Cartesian plane. The problem presented to students depicted a graph showing distance as a function of elapsed time for a person walking at a constant rate of 5 miles/h. The question asked students to consider how many more hours, after having already walked 4 h,…
Linear game non-contextuality and Bell inequalities—a graph-theoretic approach
NASA Astrophysics Data System (ADS)
Rosicka, M.; Ramanathan, R.; Gnaciński, P.; Horodecki, K.; Horodecki, M.; Horodecki, P.; Severini, S.
2016-04-01
We study the classical and quantum values of a class of one- and two-party unique games that generalizes the well-known XOR games to the case of non-binary outcomes. In the bipartite case, the generalized XOR (XOR-d) games we study are a subclass of the well-known linear games. We introduce a ‘constraint graph’ associated to such a game, with the constraints defining the game represented by an edge-coloring of the graph. We use the graph-theoretic characterization to relate the task of finding equivalent games to the notion of signed graphs and switching equivalence from graph theory. We relate the problem of computing the classical value of single-party anti-correlation XOR games to finding the edge bipartization number of a graph, which is known to be MaxSNP hard, and connect the computation of the classical value of XOR-d games to the identification of specific cycles in the graph. We construct an orthogonality graph of the game from the constraint graph and study its Lovász theta number as a general upper bound on the quantum value, even in the case of single-party contextual XOR-d games. XOR-d games possess appealing properties for use in device-independent applications, such as randomness of the local correlated outcomes in the optimal quantum strategy. We study the possibility of obtaining quantum algebraic violation of these games, and show that no finite XOR-d game possesses the property of pseudo-telepathy, leaving the frequently used chained Bell inequalities as the natural candidates for such applications. We also show this lack of pseudo-telepathy for multi-party XOR-type inequalities involving two-body correlation functions.
Adaptive tracking control of leader-following linear multi-agent systems with external disturbances
NASA Astrophysics Data System (ADS)
Lin, Hanquan; Wei, Qinglai; Liu, Derong; Ma, Hongwen
2016-10-01
In this paper, the consensus problem for leader-following linear multi-agent systems with external disturbances is investigated. Brownian motions are used to describe exogenous disturbances. A distributed tracking controller based on Riccati inequalities with an adaptive law for adjusting coupling weights between neighbouring agents is designed for leader-following multi-agent systems under fixed and switching topologies. In traditional distributed static controllers, the coupling weights depend on the communication graph. However, coupling weights associated with the feedback gain matrix in our method are updated by state errors between neighbouring agents. We further present the stability analysis of leader-following multi-agent systems with stochastic disturbances under switching topology. Most traditional literature requires the graph to be connected all the time, while the communication graph is only assumed to be jointly connected in this paper. The design technique is based on Riccati inequalities and algebraic graph theory. Finally, simulations are given to show the validity of our method.
Thread Graphs, Linear Rank-Width and Their Algorithmic Applications
NASA Astrophysics Data System (ADS)
Ganian, Robert
The introduction of tree-width by Robertson and Seymour [7] was a breakthrough in the design of graph algorithms. A lot of research since then has focused on obtaining a width measure which would be more general and still allow efficient algorithms for a wide range of NP-hard problems on graphs of bounded width. To this end, Oum and Seymour have proposed rank-width, which allows the solution of many such hard problems on less restricted graph classes (see e.g. [3,4]). But what about problems which are NP-hard even on graphs of bounded tree-width, or even on trees? The parameter used most often for these exceptionally hard problems is path-width; however, it is extremely restrictive - for example, the graphs of path-width 1 are exactly the paths.
Biogeographic Dating of Speciation Times Using Paleogeographically Informed Processes
Landis, Michael J.
2017-01-01
Standard models of molecular evolution cannot estimate absolute speciation times alone, and require external calibrations to do so, such as fossils. Because fossil calibration methods rely on the incomplete fossil record, a great number of nodes in the tree of life cannot be dated precisely. However, many major paleogeographical events are dated, and since biogeographic processes depend on paleogeographical conditions, biogeographic dating may be used as an alternative or complementary method to fossil dating. I demonstrate how a time-stratified biogeographic stochastic process may be used to estimate absolute divergence times by conditioning on dated paleogeographical events. Informed by the current paleogeographical literature, I construct an empirical dispersal graph using 25 areas and 26 epochs for the past 540 Ma of Earth’s history. Simulations indicate biogeographic dating performs well so long as paleogeography imposes constraint on biogeographic character evolution. To gauge whether biogeographic dating may be of practical use, I analyzed the well-studied turtle clade (Testudines) to assess how well biogeographic dating fares when compared to fossil-calibrated dating estimates reported in the literature. Fossil-free biogeographic dating estimated the age of the most recent common ancestor of extant turtles to be from the Late Triassic, which is consistent with fossil-based estimates. Dating precision improves further when including a root node fossil calibration. The described model, paleogeographical dispersal graph, and analysis scripts are available for use with RevBayes. PMID:27155009
A componential model of human interaction with graphs: 1. Linear regression modeling
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert
1994-01-01
Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications to the MA-P model, alternative models, and design implications of the MA-P model.
Laplacian Estrada and normalized Laplacian Estrada indices of evolving graphs.
Shang, Yilun
2015-01-01
Large-scale time-evolving networks have been generated by many natural and technological applications, posing challenges for computation and modeling. Thus, it is of theoretical and practical significance to develop mathematical tools tailored for evolving networks. In this paper, building on the dynamic Estrada index, we study the dynamic Laplacian Estrada index and the dynamic normalized Laplacian Estrada index of evolving graphs. Using linear algebra techniques, we establish general upper and lower bounds for these graph-spectrum-based invariants in terms of a couple of intuitive graph-theoretic measures, including the number of vertices or edges. Synthetic random evolving small-world networks are employed to show the relevance of the proposed dynamic Estrada indices. It is found that neither the static snapshot graphs nor the aggregated graph can approximate the evolving graph itself, indicating the fundamental difference between the static and dynamic Estrada indices.
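As a concrete illustration of the graph-spectrum-based invariants discussed above, the Laplacian Estrada index of a single snapshot graph is LEE(G) = Σᵢ exp(μᵢ), where the μᵢ are the Laplacian eigenvalues. A minimal numpy sketch (the dynamic indices in the paper aggregate such quantities across an evolving sequence of snapshots):

```python
import numpy as np

# Laplacian Estrada index of one snapshot graph: sum of exp(mu_i) over
# the eigenvalues mu_i of the graph Laplacian L = D - A.
def laplacian_estrada_index(adj):
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj   # graph Laplacian
    mu = np.linalg.eigvalsh(lap)           # real spectrum (L is symmetric)
    return float(np.exp(mu).sum())

# Path graph on 3 vertices: Laplacian eigenvalues are 0, 1, 3.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
lee = laplacian_estrada_index(A)           # exp(0) + exp(1) + exp(3)
```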
Axial calibration methods of piezoelectric load sharing dynamometer
NASA Astrophysics Data System (ADS)
Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu
2018-06-01
The relationship between the input and output of a load sharing dynamometer is strongly non-linear across different loading points in a plane, so precisely calibrating this non-linear relationship is essential for accurate force measurement. In this paper, calibration experiments at different loading points in a plane are first performed on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated with the BP (back-propagation) and ELM (Extreme Learning Machine) algorithms, respectively. Finally, the results show that ELM outperforms BP in calibrating the non-linear relationship between the input and output of the load sharing dynamometer at different loading points in a plane, which verifies that the ELM algorithm is feasible for solving this non-linear force measurement problem.
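A minimal illustration of the ELM idea used above: hidden-layer weights are drawn at random and frozen, so calibration reduces to a single linear least-squares solve for the output weights. The data, layer size, and activation below are illustrative assumptions, not the experimental setup of the paper.

```python
import numpy as np

# Minimal Extreme Learning Machine (ELM) sketch for calibrating a
# non-linear input-output map. Synthetic data stand in for the dynamometer.
rng = np.random.default_rng(0)

X = rng.uniform(-1, 1, size=(200, 2))        # inputs, e.g. loading position
y = np.sin(X[:, 0]) + X[:, 1] ** 2           # synthetic non-linear response

n_hidden = 50
W = rng.normal(size=(2, n_hidden))           # fixed random input weights
b = rng.normal(size=n_hidden)                # fixed random biases
H = np.tanh(X @ W + b)                       # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None) # output weights: one linear solve

pred = np.tanh(X @ W + b) @ beta
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Because only `beta` is trained, ELM calibration avoids the iterative gradient descent that BP requires, which is the design trade-off the comparison above probes.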
Fibonacci Identities, Matrices, and Graphs
ERIC Educational Resources Information Center
Huang, Danrun
2005-01-01
General strategies used to help discover, prove, and generalize identities for Fibonacci numbers are described along with some properties about the determinants of square matrices. A matrix proof for identity (2) that has received immense attention from many branches of mathematics, like linear algebra, dynamical systems, graph theory and others…
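One classic instance of the matrix strategies described: powers of Q = [[1, 1], [1, 0]] encode Fibonacci numbers, and taking the determinant of Qⁿ yields Cassini's identity F(n+1)F(n−1) − F(n)² = (−1)ⁿ. A small self-contained sketch:

```python
# Powers of Q = [[1,1],[1,0]] satisfy Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]];
# det(Q^n) = det(Q)^n gives Cassini's identity.
def mat_mult(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    result = [[1, 0], [0, 1]]       # identity
    while n:
        if n & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        n >>= 1
    return result

Q = [[1, 1], [1, 0]]
Qn = mat_pow(Q, 10)                                # [[F(11), F(10)], [F(10), F(9)]]
fib10 = Qn[0][1]                                   # F(10) = 55
cassini = Qn[0][0]*Qn[1][1] - Qn[0][1]*Qn[1][0]    # det = (-1)^10 = 1
```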
Sequential injection redox or acid-base titration for determination of ascorbic acid or acetic acid.
Lenghor, Narong; Jakmunee, Jaroon; Vilen, Michael; Sara, Rolf; Christian, Gary D; Grudpan, Kate
2002-12-06
Two sequential injection titration systems with spectrophotometric detection have been developed. The first system, for the determination of ascorbic acid, was based on the redox reaction between ascorbic acid and permanganate in an acidic medium, leading to a decrease in the color intensity of permanganate, monitored at 525 nm. A linear dependence of peak area on ascorbic acid concentration up to 1200 mg l(-1) was achieved. The relative standard deviation for 11 replicate determinations of 400 mg l(-1) ascorbic acid was 2.9%. The second system, for acetic acid determination, was based on acid-base titration of acetic acid with sodium hydroxide using phenolphthalein as an indicator. The decrease in color intensity of the indicator was proportional to the acid content. A linear calibration graph in the range of 2-8% w v(-1) of acetic acid with a relative standard deviation of 4.8% (5.0% w v(-1) acetic acid, n=11) was obtained. Sample throughputs of 60 h(-1) were achieved for both systems. The systems were successfully applied to the assays of ascorbic acid in vitamin C tablets and acetic acid in vinegars, respectively.
Huang, Yuan; Zheng, Zhiqun; Huang, Liying; Yao, Hong; Wu, Xiao Shan; Li, Shaoguang; Lin, Dandan
2017-05-10
A rapid, simple, cost-effective dispersive liquid-phase microextraction based on a solidified floating organic drop (SFOD-LPME) was developed in this study. Along with high-performance liquid chromatography, we used the developed approach to determine and enrich trace amounts of four glucocorticoids, namely, prednisone, betamethasone, dexamethasone, and cortisone acetate, in animal-derived food. We also investigated and optimized several important parameters that influence the extraction efficiency of SFOD-LPME, including the extractant species, the volumes of extraction and dispersant solvents, sodium chloride addition, sample pH, extraction time and temperature, and stirring rate. Under optimum experimental conditions, the calibration graph was linear over the range of 1.2-200.0 ng/ml for the four analytes (r²: 0.9990-0.9999). The enrichment factor was 142-276, and the detection limits were 0.39-0.46 ng/ml (0.078-0.23 μg/kg). This method was successfully applied to analyze real food samples, and good spiked recoveries of 81.5%-114.3% were obtained. Copyright © 2017. Published by Elsevier B.V.
Communication: Analysing kinetic transition networks for rare events.
Stevenson, Jacob D; Wales, David J
2014-07-28
The graph transformation approach is a recently proposed method for computing mean first passage times, rates, and committor probabilities for kinetic transition networks. Here we compare the performance to existing linear algebra methods, focusing on large, sparse networks. We show that graph transformation provides a much more robust framework, succeeding when numerical precision issues cause the other methods to fail completely. These are precisely the situations that correspond to rare event dynamics for which the graph transformation was introduced.
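For contrast with the graph transformation approach, the conventional linear-algebra route computes mean first passage times by solving (I − Q)τ = 1, where Q is the transition matrix restricted to the transient states. A toy sketch on an illustrative three-state chain (it is this kind of direct solve that can lose numerical precision on the large, ill-conditioned networks discussed above):

```python
import numpy as np

# Mean first passage times (MFPT) to an absorbing state via the standard
# linear solve (I - Q) tau = 1 over the transient states.
# Toy chain: 0 <-> 1 -> 2 (absorbing). From 0 go to 1 with prob. 1;
# from 1 go to 0 or 2 with prob. 1/2 each.
Q = np.array([[0.0, 1.0],
              [0.5, 0.0]])          # transitions among transient states {0, 1}
tau = np.linalg.solve(np.eye(2) - Q, np.ones(2))
# tau[0] = 4 expected steps from state 0, tau[1] = 3 from state 1
```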
Investigating Integer Restrictions in Linear Programming
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
Naseri, Mohammad Taghi; Hemmatkhah, Payam; Hosseini, Mohammad Reza Milani; Assadi, Yaghoub
2008-03-03
The dispersive liquid-liquid microextraction (DLLME) technique was combined with flame atomic absorption spectrometry (FAAS) for the determination of lead in water samples. Diethyldithiophosphoric acid (DDTP), carbon tetrachloride and methanol were used as the chelating agent, extraction solvent and disperser solvent, respectively. A new FAAS sample introduction system was employed for the microvolume nebulization of the non-flammable chlorinated organic extracts. Injection of 20 μL volumes of the organic extract into an air-acetylene flame provided very sensitive, spike-like and reproducible signals. Several parameters affecting the microextraction and the complex formation were selected and optimized, including the extraction and disperser solvent type and volume, extraction time, salt effect, pH and amount of the chelating agent. Under the optimized conditions, an enrichment factor of 450 was obtained from a sample volume of 25.0 mL. The enhancement factor, calculated as the ratio of the slopes of the calibration graphs with and without preconcentration, was about 1000. The calibration graph was linear in the range of 1-70 μg L(-1) with a detection limit of 0.5 μg L(-1). The relative standard deviations (R.S.D.) for seven replicate measurements of 5.0 and 50 μg L(-1) of lead were 3.8 and 2.0%, respectively. The relative recoveries of lead in tap, well, river and seawater samples at a spiking level of 20 μg L(-1) ranged from 93.8 to 106.2%. The characteristics of the proposed method were compared with those of liquid-liquid extraction (LLE), cloud point extraction (CPE), on-line and off-line solid-phase extraction (SPE) and co-precipitation, based on bibliographic data. Operational simplicity, rapidity, low cost, high enrichment factor, good repeatability, and low consumption of the extraction solvent at the microliter level are the main advantages of the proposed method.
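The enhancement factor quoted above is defined as a ratio of calibration-graph slopes; the slope values below are hypothetical, chosen only to show the arithmetic.

```python
# Enhancement factor = slope of the calibration graph with preconcentration
# divided by the slope without it. Both slopes are illustrative numbers.
slope_with_preconc = 0.0210   # signal per ug/L, after DLLME (hypothetical)
slope_direct = 2.1e-5         # signal per ug/L, direct FAAS (hypothetical)
enhancement_factor = slope_with_preconc / slope_direct   # ratio of slopes
```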
NASA Astrophysics Data System (ADS)
İlktaç, Raif; Aksuner, Nur; Henden, Emur
2017-03-01
In this study, a magnetite-molecularly imprinted polymer has been used for the first time as a selective adsorbent prior to the fluorimetric determination of carbendazim. The adsorption capacity of the magnetite-molecularly imprinted polymer was found to be 2.31 ± 0.63 mg g(-1) (n = 3). The limit of detection (LOD) and limit of quantification (LOQ) of the method were found to be 2.3 and 7.8 μg L(-1), respectively. The calibration graph was linear in the range of 10-1000 μg L(-1). Rapidity is an important advantage of the method, as the re-binding and recovery of carbendazim can be completed within an hour. The same imprinted polymer can be reused for the determination of carbendazim without any capacity loss at least ten times. The proposed method has been successfully applied to determine carbendazim residues in apple and orange, with recoveries of the spiked samples in the range of 95.7-103%. Characterization of the adsorbent and the effects of some potential interferences were also evaluated. With the reasonably high capacity and reusability of the adsorbent, its dynamic calibration range, rapidity, simplicity, cost-effectiveness, and suitable LOD and LOQ, the proposed method is well suited for the determination of carbendazim.
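Figures of merit such as the LOD and LOQ above are commonly computed from the blank noise and the calibration slope via the 3σ and 10σ conventions (an assumption here; the abstract does not state its exact definitions). A sketch with illustrative numbers:

```python
# Common 3-sigma / 10-sigma conventions for LOD and LOQ.
# Both input values are illustrative, not from the study.
sigma_blank = 0.003   # std. dev. of the blank signal (hypothetical)
slope = 0.010         # calibration slope, signal per ug/L (hypothetical)

lod = 3 * sigma_blank / slope    # limit of detection, ug/L
loq = 10 * sigma_blank / slope   # limit of quantification, ug/L
```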
Akgul Kalkan, Esin; Sahiner, Mehtap; Ulker Cakir, Dilek; Alpaslan, Duygu; Yilmaz, Selehattin
2016-01-01
Using high-performance liquid chromatography (HPLC) and 2,4-dinitrophenylhydrazine (2,4-DNPH) as a derivatizing reagent, an analytical method was developed for the quantitative determination of acetone in human blood. The determination was carried out at 365 nm using an ultraviolet-visible (UV-Vis) diode array detector (DAD). For acetone as its 2,4-dinitrophenylhydrazone derivative, a good separation was achieved with a ThermoAcclaim C18 column (15 cm × 4.6 mm × 3 μm) at a retention time (tR) of 12.10 min and a flow rate of 1 mL min−1 using a (methanol/acetonitrile) water elution gradient. The methodology is simple, rapid, sensitive, and of low cost, exhibits good reproducibility, and allows the analysis of acetone in biological fluids. A calibration curve was obtained for acetone using its standard solutions in acetonitrile, and quantitative analysis of acetone in human blood was successfully carried out using this calibration graph. The applied method was validated with respect to linearity, limits of detection and quantification, accuracy, and precision. We also present acetone as a useful tool for the HPLC-based metabolomic investigation of endogenous metabolism and quantitative clinical diagnostic analysis. PMID:27298750
Discovering Authorities and Hubs in Different Topological Web Graph Structures.
ERIC Educational Resources Information Center
Meghabghab, George
2002-01-01
Discussion of citation analysis on the Web considers Web hyperlinks as a source to analyze citations. Topics include basic graph theory applied to Web pages, including matrices, linear algebra, and Web topology; and hubs and authorities, including a search technique called HITS (Hyperlink Induced Topic Search). (Author/LRW)
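The HITS technique mentioned above iterates mutually reinforcing hub and authority scores over the link graph. A minimal sketch on a hypothetical three-page graph:

```python
# Minimal HITS iteration: a page's authority score sums the hub scores of
# pages linking to it; its hub score sums the authority scores of pages it
# links to. The link graph below is illustrative.
links = {"a": ["b", "c"], "b": ["c"], "c": []}   # page -> pages it links to
pages = sorted(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(50):
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
    auth = {p: v / norm for p, v in auth.items()}
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
    hub = {p: v / norm for p, v in hub.items()}

best_authority = max(auth, key=auth.get)   # "c", linked from both "a" and "b"
```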
The bilinear-biquadratic model on the complete graph
NASA Astrophysics Data System (ADS)
Jakab, Dávid; Szirmai, Gergely; Zimborás, Zoltán
2018-03-01
We study the spin-1 bilinear-biquadratic model on the complete graph of N sites, i.e. when each spin is interacting with every other spin with the same strength. Because of its complete permutation invariance, this Hamiltonian can be rewritten as a linear combination of the quadratic Casimir operators of…
A manifold learning approach to target detection in high-resolution hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.
Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well-documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led towards a focus on target detection, and in the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE).
We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
Graph Mining Meets the Semantic Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkeun; Sukumar, Sreenivas R; Lim, Seung-Hwan
The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through implementation of three popular iterative graph mining algorithms (triangle count, connected component analysis, and PageRank). We implement these algorithms as SPARQL queries, wrapped within Python scripts. We evaluate the performance of our implementation on 6 real world data sets and show that graph mining algorithms (which have a linear-algebra formulation) can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.
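Triangle counting, one of the three algorithms implemented, has the linear-algebra formulation alluded to above: for a simple undirected graph with adjacency matrix A, the triangle count equals trace(A³)/6. A sketch of that algebra on a toy graph (the paper itself expresses the computation as SPARQL over RDF, not as a matrix product):

```python
import numpy as np

# trace(A^3) counts closed walks of length 3; each triangle contributes 6
# of them (3 starting vertices x 2 directions), hence trace(A^3) / 6.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # toy graph with one triangle 0-1-2
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

triangles = int(round(np.trace(np.linalg.matrix_power(A, 3)) / 6))
```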
Plane representations of graphs and visibility between parallel segments
NASA Astrophysics Data System (ADS)
Tamassia, R.; Tollis, I. G.
1985-04-01
Several layout compaction strategies for VLSI are based on the concept of visibility between parallel segments, where two parallel segments of a given set are said to be visible if they can be joined by a segment orthogonal to them which does not intersect any other segment. This paper studies visibility representations of graphs, which are constructed by mapping vertices to horizontal segments and edges to vertical segments drawn between visible vertex-segments. Clearly, every graph that admits such a representation must be planar. The authors consider three types of visibility representations and give complete characterizations of the classes of graphs that admit them. Furthermore, they present linear time algorithms for testing the existence of and constructing visibility representations of planar graphs.
Robust Algorithms for on Minor-Free Graphs Based on the Sherali-Adams Hierarchy
NASA Astrophysics Data System (ADS)
Magen, Avner; Moharrami, Mohammad
This work provides a Linear Programming-based Polynomial Time Approximation Scheme (PTAS) for two classical NP-hard problems on graphs when the input graph is guaranteed to be planar, or more generally minor-free. The algorithm applies a sufficiently large number of rounds (a function of the desired approximation quality) of the so-called Sherali-Adams Lift-and-Project system; the number of rounds needed to obtain the approximation depends only on the approximation quality and on the graph that must be avoided as a minor. The problems we discuss are well studied, and a curious fact we expose is that, in the world of minor-free graphs, one of them is in some sense harder than the other.
NASA Astrophysics Data System (ADS)
Tahani, Masoud; Askari, Amir R.
2014-09-01
In spite of the fact that pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has yet been presented which can predict pull-in voltage based on a geometrically non-linear and distributed parameter model. The objective of the present paper is to introduce a simple and accurate formula to predict this value for a fully clamped electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, geometric non-linearity of mid-plane stretching, distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through a single-term Galerkin-based reduced order model (ROM). The solutions are validated through direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads is also investigated using universal graphs. Based on the results of these graphs, the non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict the pull-in voltage, the maximum allowable length (the so-called detachment length), and the minimum allowable gap for a nano/micro-system. These linear equations are further reduced to a couple of universal pull-in formulas for systems with a small initial gap. The accuracy of the universal pull-in formulas is validated by comparing their results with available experimental results and with some previous geometrically linear and closed-form findings published in the literature.
Siyah Mansoory, Meysam; Oghabian, Mohammad Ali; Jafari, Amir Homayoun; Shahbabaie, Alireza
2017-01-01
Graph theoretical analysis of functional Magnetic Resonance Imaging (fMRI) data has provided new measures for mapping the human brain in vivo. Of all methods for measuring the functional connectivity between regions, Linear Correlation (LC) between the activity time series of brain regions, a purely linear measure, is the most ubiquitous. However, LC consistently underestimates the strength of the dependence required for graph construction and analysis whenever only the marginals, and not the full bivariate distributions, are Gaussian. In a number of studies, Mutual Information (MI), a purely nonlinear measure, has been employed as the similarity measure between each pair of regional time series. Owing to the complex fractal organization of the brain, which indicates self-similarity, more information on the brain can be revealed by fractal dimension (FD) analysis of fMRI data. In the present paper, the Box-Counting Fractal Dimension (BCFD) is introduced for graph theoretical analysis of fMRI data in 17 methamphetamine drug users and 18 normal controls, and its performance is evaluated against the LC and MI methods. Moreover, global topological graph properties of the brain networks, including global efficiency, clustering coefficient and characteristic path length, were investigated in the addicted subjects. Statistical tests (P<0.05) showed that, compared to normal subjects, these topological graph properties were significantly disrupted during resting-state fMRI. Based on the results, analyzing the graph topological properties (representing the brain networks) based on BCFD is a more reliable method than LC and MI.
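A minimal sketch of box-counting, assuming one common recipe (cover the normalized graph of a time series with square boxes of side ε and fit log N(ε) against log(1/ε)); the study's exact preprocessing of the fMRI time series is not specified in the abstract.

```python
import numpy as np

# Box-counting dimension of the graph of a time series: count occupied
# eps-boxes at several scales and take the slope of log N vs. log(1/eps).
def box_counting_dimension(x, scales=(4, 8, 16, 32, 64)):
    t = np.linspace(0, 1, len(x))
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # normalize to [0, 1]
    counts = []
    for k in scales:
        eps = 1.0 / k
        boxes = {(int(ti / eps), int(xi / eps)) for ti, xi in zip(t, x)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return float(slope)

line = np.linspace(0, 1, 5000)   # a straight line has dimension ~1
d = box_counting_dimension(line)
```

A fractal signal (e.g. fractional Brownian motion) would give a slope between 1 and 2, which is what distinguishes regions in the BCFD-based analysis.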
Eigenvalue asymptotics for the damped wave equation on metric graphs
NASA Astrophysics Data System (ADS)
Freitas, Pedro; Lipovský, Jiří
2017-09-01
We consider the linear damped wave equation on finite metric graphs and analyse its spectral properties with an emphasis on the asymptotic behaviour of eigenvalues. In the case of equilateral graphs and standard coupling conditions we show that there is only a finite number of high-frequency abscissas, whose location is solely determined by the averages of the damping terms on each edge. We further describe some of the possible behaviour when the edge lengths are no longer necessarily equal but remain commensurate.
Ma, Li Ying; Wang, Huai You; Xie, Hui; Xu, Li Xiao
2004-07-01
The fluorescence property of fluorescein isothiocyanate (FITC) in acid-alkaline medium was studied by spectrofluorimetry, and the response of FITC to hydrogen ion in acid-alkaline solution was characterized. A novel pH chemical sensor was prepared based on the relationship between the relative fluorescence intensity of FITC and pH. The relative fluorescence intensity was measured at 362 nm with excitation at 250 nm. An excellent linear relationship was obtained between relative fluorescence intensity and pH in the range of pH 1-5. The linear regression equation of the calibration graph is F = 66.871 + 6.605 pH (F is relative fluorescence intensity), with a correlation coefficient of linear regression of 0.9995. The effects of temperature and of FITC concentration on the response to hydrogen ion were also examined. Importantly, the sensor had a long lifetime: its response to hydrogen ion remained stable for at least 70 days. This pH sensor can be used for measuring pH in aqueous solution with an accuracy of 0.01 pH unit. The results obtained with the pH sensor agreed with those from a pH meter. This pH sensor therefore has potential for real-time determination of pH changes in biological systems.
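For reference, the reported calibration can be inverted to read pH directly off a fluorescence measurement; a minimal sketch using the coefficients quoted in the abstract:

```python
def ph_from_fluorescence(F, intercept=66.871, slope=6.605):
    """Invert the linear calibration F = intercept + slope * pH
    (coefficients taken from the abstract's regression equation)."""
    return (F - intercept) / slope

# Under this calibration, a relative intensity of 86.686 corresponds to pH 3.
print(round(ph_from_fluorescence(86.686), 2))  # 3.0
```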
Hologram production and representation for corrected image
NASA Astrophysics Data System (ADS)
Jiao, Gui Chao; Zhang, Rui; Su, Xue Mei
2015-12-01
In this paper, a CCD sensor is used to record distorted homemade grid images taken by a wide-angle camera. The distorted images are corrected by position calibration and gray-level correction using VC++ 6.0 and OpenCV. Holograms of the corrected pictures are then produced. Clearly reproduced images are obtained by applying the Fresnel algorithm in the processing, subtracting the object and reference light from the Fresnel diffraction to delete the zero-order part of the reproduced images. The investigation is useful in optical information processing and image encryption transmission.
Graphing in Groups: Learning about Lines in a Collaborative Classroom Network Environment
ERIC Educational Resources Information Center
White, Tobin; Wallace, Matthew; Lai, Kevin
2012-01-01
This article presents a design experiment in which we explore new structures for classroom collaboration supported by a classroom network of handheld graphing calculators. We describe a design for small group investigations of linear functions and present findings from its implementation in three high school algebra classrooms. Our coding of the…
Distributed Computation of the knn Graph for Large High-Dimensional Point Sets
Plaku, Erion; Kavraki, Lydia E.
2009-01-01
High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
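The knn graph definition above can be sketched with a brute-force single-machine baseline (the paper's contribution is distributing this O(n²) work across processors with message passing; this sketch is only the underlying definition):

```python
import numpy as np

def knn_graph(points, k):
    """Build the k-nearest-neighbor graph of a point set.

    Returns an adjacency list: index i maps to the indices of the k
    points closest to point i (excluding i itself), under the
    Euclidean metric. Brute force, O(n^2 d) time and O(n^2) memory.
    """
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)  # a point is not its own neighbor
    return {i: [int(j) for j in np.argsort(dists[i])[:k]]
            for i in range(len(points))}

pts = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0], [5.0, 0.0]])
g = knn_graph(pts, k=2)
print(g[0])  # [1, 2] -- the two closest points to (0, 0)
```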
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains with the goal to eventually make it usable in a clinical setting. PMID:27081299
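Each MLEM iteration reduces to a pair of (sparse) matrix-vector products, which is what makes it a natural fit for a graph/sparse-linear-algebra engine like GraphX. A minimal single-machine sketch of the standard multiplicative update on a toy system (not the paper's SPECT system model):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood expectation-maximization reconstruction.

    A: system matrix (detector bins x image voxels); y: measured counts.
    The multiplicative update preserves non-negativity, and each
    iteration consists only of a forward projection (A @ x) and a
    backprojection (A.T @ ratio).
    """
    x = np.ones(A.shape[1])   # flat non-negative starting image
    sens = A.sum(axis=0)      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        x = x / sens * (A.T @ (y / proj))   # backproject the data ratio
    return x

# Toy check: with consistent noiseless data, MLEM approaches the true image.
A = np.array([[1.0, 0.2], [0.3, 1.0]])
x_true = np.array([2.0, 4.0])
x_hat = mlem(A, A @ x_true)
print(np.round(x_hat, 2))  # approaches [2., 4.]
```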
DOE Office of Scientific and Technical Information (OSTI.GOV)
Visweswara Sathanur, Arun; Choudhury, Sutanay; Joslyn, Cliff A.
Property graphs can be used to represent heterogeneous networks with attributed vertices and edges. Given one property graph, simulating another graph of the same or greater size with identical statistical properties with respect to the attributes and connectivity is critical for privacy preservation and benchmarking purposes. In this work we tackle the problem of capturing the statistical dependence of the edge connectivity on the vertex labels and using the same distribution to regenerate property graphs of the same or expanded size in a scalable manner. However, accurate simulation becomes a challenge when the attributes do not completely explain the network structure. We propose the Property Graph Model (PGM) approach that uses an attribute (or label) augmentation strategy to mitigate the problem and preserve the graph connectivity as measured via the degree distribution, vertex label distributions and edge connectivity. Our proposed algorithm is scalable, with linear complexity in the number of edges in the target graph. We illustrate the efficacy of the PGM approach in regenerating and expanding the datasets by leveraging two distinct illustrations.
A path following algorithm for the graph matching problem.
Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe
2009-12-01
We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-square problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We, therefore, construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method makes it easy to integrate information on graph label similarities into the optimization problem, and therefore to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
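The path-following idea can be caricatured in one dimension. This is a hedged illustration only: the paper's domain is the set of doubly stochastic matrices, with permutation matrices as its extreme points; here the domain is just [0, 1], and its endpoints play the role of permutations. The minimizer of the interpolated objective is tracked (warm-started) as the interpolation weight moves from the convex to the concave relaxation:

```python
import numpy as np

def f_cvx(x):   # stand-in convex relaxation
    return (x - 0.5) ** 2

def f_ccv(x):   # stand-in concave relaxation
    return -(x - 0.3) ** 2

xs = np.linspace(0.0, 1.0, 1001)

def nearest_local_min(F, i_prev):
    """Index of the discrete local minimum of F closest to index i_prev."""
    n = len(F)
    mins = [i for i in range(n)
            if (i == 0 or F[i] <= F[i - 1]) and (i == n - 1 or F[i] <= F[i + 1])]
    return min(mins, key=lambda i: abs(i - i_prev))

i = int(np.argmin(f_cvx(xs)))            # start at the convex minimum
for lam in np.linspace(0.0, 1.0, 101):   # follow the interpolation path
    F = (1 - lam) * f_cvx(xs) + lam * f_ccv(xs)
    i = nearest_local_min(F, i)
print(xs[i])  # 1.0 -- the path ends at an extreme point of the domain
```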
Simultaneous determination of rutin and ascorbic acid in a sequential injection lab-at-valve system.
Al-Shwaiyat, Mohammed Khair E A; Miekh, Yuliia V; Denisenko, Tatyana A; Vishnikin, Andriy B; Andruch, Vasil; Bazel, Yaroslav R
2018-02-05
A green, simple, accurate and highly sensitive sequential injection lab-at-valve procedure has been developed for the simultaneous determination of ascorbic acid (Asc) and rutin using 18-molybdo-2-phosphate Wells-Dawson heteropoly anion (18-MPA). The method is based on the dependence of the reaction rate between 18-MPA and reducing agents on the solution pH. Only Asc is capable of interacting with 18-MPA at pH 4.7, while at pH 7.4 the reaction with both Asc and rutin proceeds simultaneously. In order to improve the precision and sensitivity of the analysis, to minimize reagent consumption and to remove the Schlieren effect, the manifold for the sequential injection analysis was supplemented with an external reaction chamber, and the reaction mixture was segmented. Reduction of 18-MPA by reducing agents forms one- and two-electron heteropoly blues, and the fraction of one-electron heteropoly blue increases at low concentrations of the reducer. Measuring the absorbance at a wavelength corresponding to the isosbestic point allows strictly linear calibration graphs to be obtained. The calibration curves were linear in the concentration ranges of 0.3-24 mg L(-1) and 0.2-14 mg L(-1) with detection limits of 0.13 mg L(-1) and 0.09 mg L(-1) for rutin and Asc, respectively. The determination of rutin was possible in the presence of up to a 20-fold molar excess of Asc. The method was applied to the determination of Asc and rutin in ascorutin tablets with acceptable accuracy and precision (1-2%). Copyright © 2017 Elsevier B.V. All rights reserved.
Multi-Agent Graph Patrolling and Partitioning
NASA Astrophysics Data System (ADS)
Elor, Y.; Bruckstein, A. M.
2012-12-01
We introduce a novel multi-agent patrolling algorithm inspired by the behavior of gas-filled balloons. Very low capability ant-like agents are considered with the task of patrolling an unknown area modeled as a graph. While executing the proposed algorithm, the agents dynamically partition the graph between them using simple local interactions, every agent assuming responsibility for patrolling its subgraph. Balanced graph partition is an emergent behavior of the local interactions between the agents in the swarm. Extensive simulations on various graphs (environments) showed that the average time to reach a balanced partition is linear in the graph size. The simulations yielded a convincing argument for conjecturing that if the graph being patrolled contains a balanced partition, the agents will find it; however, we could not prove this. Nevertheless, we have proved that if a balanced partition is reached, the maximum time lag between two successive visits to any vertex using the proposed strategy is at most twice the optimal, so the patrol quality is at least half the optimal. In the case of weighted graphs the patrol quality is at least (1/2)(l_min/l_max) of the optimal, where l_max (l_min) is the length of the longest (shortest) edge in the graph.
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the frameworks established in existing model-based DAL methods for function learning, incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.
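The graph Laplacian regularization underlying both variants penalizes the smoothness functional f'Lf over the data graph. A minimal sketch on a toy affinity matrix (the paper builds the graph via L1-LLR instead; this only illustrates the shared regularizer):

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W of an affinity matrix W."""
    return np.diag(W.sum(axis=1)) - W

# The smoothness functional satisfies
#   f' L f = (1/2) * sum_ij W_ij * (f_i - f_j)^2,
# i.e. it penalizes label differences across strongly connected points.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])   # path graph 0 -- 1 -- 2
f = np.array([1.0, 2.0, 4.0])
L = graph_laplacian(W)
lhs = f @ L @ f
rhs = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                for i in range(3) for j in range(3))
print(lhs == rhs, lhs)  # True 5.0
```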
Decentralized Observer with a Consensus Filter for Distributed Discrete-Time Linear Systems
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Mandic, Milan
2011-01-01
This paper presents a decentralized observer with a consensus filter for state observation of discrete-time linear distributed systems. In this setup, each agent in the distributed system has an observer with a model of the plant that utilizes the set of locally available measurements, which may not make the full plant state detectable. This lack of detectability is overcome by utilizing a consensus filter that blends the state estimate of each agent with its neighbors' estimates. We assume that both the communication graph and the sensing graph are connected for all times. It is proven that the state estimates of the proposed observer asymptotically converge to the actual plant states under arbitrarily changing, but connected, communication and sensing topologies. As a byproduct of this research, we also obtained a result on the location of the eigenvalues (the spectrum) of the Laplacian for a family of graphs with self-loops.
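The blending step can be sketched as a uniform-weight consensus update in which each agent averages its own estimate with the mean of its neighbors' estimates. The weights here are illustrative only; the paper derives specific observer and filter gains:

```python
import numpy as np

def consensus_step(estimates, neighbors, w=0.5):
    """One uniform-weight consensus update: each agent blends its own
    state estimate with the mean of its neighbors' estimates.
    (Illustrative weights; not the paper's derived filter gains.)
    """
    new = {}
    for i, x in estimates.items():
        nbr_mean = np.mean([estimates[j] for j in neighbors[i]], axis=0)
        new[i] = (1 - w) * x + w * nbr_mean
    return new

# Three agents on a connected line graph 0 -- 1 -- 2, scalar estimates.
est = {0: np.array([0.0]), 1: np.array([3.0]), 2: np.array([6.0])}
nbrs = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(50):
    est = consensus_step(est, nbrs)
print(round(float(est[0][0]), 3))  # 3.0 -- the agents reach a common value
```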
Descriptions of Free and Freeware Software in the Mathematics Teaching
NASA Astrophysics Data System (ADS)
Antunes de Macedo, Josue; Neves de Almeida, Samara; Voelzke, Marcos Rincon
2016-05-01
This paper presents the analysis and cataloging of free and freeware mathematical software available on the internet, a brief explanation of each, and the types of licenses for use in teaching and learning. The methodology is based on qualitative research. Among the different types of software found, Winmat stands out in algebra, handling linear algebra, matrices and linear systems. In geometry, GeoGebra can be used in the study of functions, plane and spatial geometry, algebra and calculus. For graphing, one can cite Graph and Graphequation. With the Graphmatica software, it is possible to build various graphs of mathematical equations on the same screen, representing Cartesian equations, inequalities and parametric functions, among others. Winplot allows the user to build graphs of functions and mathematical equations in two and three dimensions. Thus, this work aims to present teachers with some free mathematics software suitable for use in the classroom.
Linear finite-difference bond graph model of an ionic polymer actuator
NASA Astrophysics Data System (ADS)
Bentefrit, M.; Grondel, S.; Soyer, C.; Fannir, A.; Cattan, E.; Madden, J. D.; Nguyen, T. M. G.; Plesse, C.; Vidal, F.
2017-09-01
With the recent growing interest in soft actuation, many new types of ionic polymers working in air have been developed. Due to the interrelated mechanical, electrical, and chemical properties which greatly influence the characteristics of such actuators, their behavior is complex and difficult to understand, predict and optimize. In light of this challenge, an original linear multiphysics finite-difference bond graph model was derived to characterize this ionic actuation. The finite-difference scheme was divided into two coupled subparts, each related to a specific physical, electrochemical or mechanical domain, and then converted into a bond graph model, as this language is particularly suited to systems spanning multiple energy domains. Simulations were then conducted and good agreement with the experimental results was obtained. Furthermore, an analysis of the power efficiency of such actuators as a function of space and time was proposed and made it possible to evaluate their performance.
NASA Technical Reports Server (NTRS)
Dejong, J.; Spencer, E. A.
1983-01-01
A 205 mm transfer standard orifice plate meter assembly, consisting of two orifice plates in series separated by a length of pipe containing a flow straightener, was calibrated in two water flow facilities. Results show that the agreement in the characteristics of such a differential pressure transfer standard package is within 0.17% over a 10:1 range from flow rates of approximately 8 to 80 l/sec. When the range over which the comparison was made was limited to that for which the calibration graphs gave straight lines, the agreement is 0.1% in 3 of the 4 calibrations (0.17% in the fourth).
Who Will Win?: Predicting the Presidential Election Using Linear Regression
ERIC Educational Resources Information Center
Lamb, John H.
2007-01-01
This article outlines a linear regression activity that engages learners, uses technology, and fosters cooperation. Students generated least-squares linear regression equations using TI-83 Plus[TM] graphing calculators, Microsoft[C] Excel, and paper-and-pencil calculations using derived normal equations to predict the 2004 presidential election.…
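The least-squares fit from the derived normal equations can be sketched as follows (generic closed-form formulas for simple linear regression, the same quantities a TI-83 Plus or Excel would return; the election data itself is not reproduced here):

```python
import numpy as np

def least_squares_fit(x, y):
    """Slope and intercept of y = m*x + b from the normal equations:
    m = (n*sum(xy) - sum(x)*sum(y)) / (n*sum(x^2) - sum(x)^2),
    b = (sum(y) - m*sum(x)) / n.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    m = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x * x).sum() - x.sum() ** 2)
    b = (y.sum() - m * x.sum()) / n
    return m, b

# Toy data lying exactly on y = 2x + 1 recovers slope 2 and intercept 1.
m, b = least_squares_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(m, b)  # 2.0 1.0
```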
2010-11-30
Erdős-Rényi-Gilbert random graph [Erdős and Rényi, 1959; Gilbert, 1959], the Watts-Strogatz "small world" framework [Watts and Strogatz, 1998], and the... (2003). Evolution of Networks. Oxford University Press, USA. Erdős, P. and Rényi, A. (1959). On Random Graphs. Publicationes Mathematicae, 6, 290-297
Linear Algebra and Sequential Importance Sampling for Network Reliability
2011-12-01
first test case is an Erdős-Rényi graph with 100 vertices and 150 edges. Figure 1 depicts the relative variance of the three Algorithms: Algorithm TOP... Figure 1: Relative variance of various algorithms on an Erdős-Rényi graph, 100 vertices, 250 edges. Key: Solid = TOP-DOWN algorithm
Graphing techniques for materials laboratory using Excel
NASA Technical Reports Server (NTRS)
Kundu, Nikhil K.
1994-01-01
Engineering technology curricula stress hands-on training and laboratory practice in most technical courses. Laboratory reports should include analytical as well as graphical evaluation of experimental data. Experience shows that many students have neither the mathematical background nor the expertise for graphing. This paper briefly describes the procedure and data obtained from a number of experiments, such as spring rate, stress concentration, endurance limit, and column buckling, for a variety of materials. Then, with a brief introduction to Microsoft Excel, the author explains the techniques used for linear regression and logarithmic graphing.
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
LQR-Based Optimal Distributed Cooperative Design for Linear Discrete-Time Multiagent Systems.
Zhang, Huaguang; Feng, Tao; Liang, Hongjing; Luo, Yanhong
2017-03-01
In this paper, a novel linear quadratic regulator (LQR)-based optimal distributed cooperative design method is developed for synchronization control of general linear discrete-time multiagent systems on a fixed, directed graph. Sufficient conditions are derived for synchronization, which restrict the graph eigenvalues into a bounded circular region in the complex plane. The synchronizing speed issue is also considered, and it turns out that the synchronizing region reduces as the synchronizing speed becomes faster. To obtain more desirable synchronizing capacity, the weighting matrices are selected by sufficiently utilizing the guaranteed gain margin of the optimal regulators. Based on the developed LQR-based cooperative design framework, an approximate dynamic programming technique is successfully introduced to overcome the (partially or completely) model-free cooperative design for linear multiagent systems. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methods.
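The single-agent discrete-time LQR gain that serves as the local building block can be sketched via backward Riccati iteration. This is a generic sketch; the paper's cooperative design additionally couples the agents' regulators through the graph eigenvalues:

```python
import numpy as np

def dlqr(A, B, Q, R, n_iter=500):
    """Discrete-time LQR gain via backward Riccati iteration.
    Returns K such that u = -K x minimizes sum(x'Qx + u'Ru) for
    x[k+1] = A x[k] + B u[k].
    """
    P = Q.copy()
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)   # Riccati update
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Double integrator: the closed-loop matrix A - B K must be Schur stable.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = dlqr(A, B, Q=np.eye(2), R=np.eye(1))
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(np.abs(eigs) < 1.0))  # True
```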
Mohammadiazar, Sirwan; Hasanli, Fateme; Maham, Mehdi; Payami Samarin, Somayeh
2017-08-01
Electrochemically co-deposited sol-gel/Cu nanocomposites have been introduced as a novel, simple and single-step technique for preparation of a solid-phase microextraction (SPME) coating to extract methadone (MDN), a synthetic opioid, in urine samples. The porous surface structure of the sol-gel/Cu nanocomposite coating was revealed by scanning electron microscopy. Direct-immersion SPME followed by HPLC-UV determination was employed. The factors influencing the SPME procedure, such as the salt content, desorption solvent type, pH and equilibration time, were optimized. The best conditions were obtained with no salt content, acetonitrile as the desorption solvent, pH 9 and 10 min equilibration time. The calibration graphs for urine samples showed good linearity. The detection limit was about 0.2 ng mL(-1). The novel method for preparation of the nanocomposite fiber was also compared with previously reported techniques for MDN determination. The results show that the novel nanocomposite fiber has relatively high extraction efficiency. Copyright © 2016 John Wiley & Sons, Ltd.
Somnam, Sarawut; Jakmunee, Jaroon; Grudpan, Kate; Lenghor, Narong; Motomizu, Shoji
2008-12-01
An automated hydrodynamic sequential injection (HSI) system with spectrophotometric detection was developed. Thanks to the hydrodynamic injection principle, simple devices can be used for introducing reproducible microliter volumes of both sample and reagent into the flow channel to form stacked zones in a similar fashion to those in a sequential injection system. The zones were then pushed to the detector and a peak profile was recorded. The determination of nitrite and nitrate in water samples by employing the Griess reaction was chosen as a model. Calibration graphs with linearity in the range of 0.7-40 μM were obtained for both nitrite and nitrate. Detection limits were found to be 0.3 μM NO2(-) and 0.4 μM NO3(-), respectively, with a sample throughput of 20 h(-1) for consecutive determination of both species. The developed system was successfully applied to the analysis of water samples, employing simple and cost-effective instrumentation and offering a high degree of automation and low chemical consumption.
Yao, Hanchun; Zhang, Min; Zeng, Wenyuan; Zeng, Xiaoying; Zhang, Zhenzhong
2014-05-01
A rapid and sensitive flow injection chemiluminescence (FI-CL) method is described for the determination of 2-methoxyestradiol (2ME) based on enhancement of the CL intensity from a potassium ferricyanide-calcein system in sodium hydroxide medium. The optimum conditions for the CL emission were investigated. Under optimized conditions, a linear calibration graph was obtained over the range 1.0 × 10(-8) to 1.0 × 10(-6) mol/L (r = 0.998) 2ME with a detection limit (3σ) of 5.4 × 10(-9) mol/L. The relative standard deviation (RSD) for 5.0 × 10(-7) mol/L 2ME was 1.7%. As a preliminary application, the proposed method was successfully applied to the determination of 2ME in injection solutions and serum samples. The possible CL mechanism was also proposed. Copyright © 2013 John Wiley & Sons, Ltd.
Carlucci, Giuseppe; Pasquale, Dorina Di; Ruggieri, Fabrizio; Mazzeo, Pietro
2005-12-15
A method based on solid-phase extraction (SPE) and high-performance liquid chromatography (HPLC) was developed for the simultaneous determination of 3-(3,5-dichlorophenyl)-5-ethenyl-5-methyl-2,4-oxazolidinedione (vinclozolin) and 3-(3,5-dichlorophenyl)-N-(1-methylethyl)-2,4-dioxo-1-imidazolidinecarboxamide (iprodione) in human urine. Urine samples containing vinclozolin and iprodione were extracted using C(18) solid-phase extraction cartridges. The chromatographic separation was achieved on a Spherisorb ODS2 (250 mm x 4.6 mm, 5 microm) column with an isocratic mobile phase of acetonitrile-water (60:40, v/v). Detection was by UV absorbance at 220 nm. The calibration graphs were linear from 30 to 1000 ng/mL for the two fungicides. Intra- and inter-day R.S.D. did not exceed 2.9%. The quantitation limit was 50 ng/mL for vinclozolin and 30 ng/mL for iprodione, respectively.
Ezzati Nazhad Dolatabadi, Jafar; Hamishehkar, Hamed; de la Guardia, Miguel; Valizadeh, Hadi
2014-01-01
Introduction: Alendronate sodium enhances bone formation, increases osteoblast proliferation and maturation, and inhibits osteoblast apoptosis. Therefore, a rapid and simple spectrofluorometric method has been developed and validated for its quantitative determination. Methods: The procedure is based on the reaction of the primary amino group of alendronate with o-phthalaldehyde (OPA) in sodium hydroxide solution. Results: The calibration graph was linear over the concentration range of 0.0-2.4 μM, and the limit of detection and limit of quantification of the method were 8.89 and 29 nM, respectively. The enthalpy and entropy of the reaction between alendronate sodium and OPA showed that the reaction is endothermic and entropy favored (ΔH = 154.08 kJ/mol; ΔS = 567.36 J/mol K), which indicates that OPA interaction with alendronate is increased at elevated temperature. Conclusion: This simple method can be used as a practical technique for the analysis of alendronate in various samples. PMID:24790897
NASA Astrophysics Data System (ADS)
Amjadi, M.; Sodouri, T.
2014-05-01
In this work, a simple colorimetric method based on the formation of silver nanoparticles (Ag NPs) was developed for the determination of cannabinoids, including Δ9-tetrahydrocannabinol (Δ9-THC), cannabidiol (CBD) and cannabinol (CBN). These compounds in a basic solution at 80°C reduce [Ag(NH3)2]+ to form Ag NPs. The produced NPs were characterized by transmission electron microscopy and UV-Vis absorption spectroscopy. The brown-yellow color of the solution, which results from the localized surface plasmon resonance of Ag NPs, can be observed by the naked eye. The calibration graph obtained by plotting the absorbance at 410 nm versus the concentration of each analyte was linear in the range of 0.1-5.0 μg/ml for all tested cannabinoids. The limits of detection were 0.065, 0.077, and 0.052 μg/ml for Δ9-THC, CBN and CBD, respectively. The developed method was applied to the determination of total cannabinoids in hashish.
Ono, I; Matsuda, K; Kanno, S
1997-05-09
A simple, rapid and sensitive two-column-switching high-performance liquid chromatographic (HPLC) method with ultraviolet detection at 210 nm has been developed for the determination of N-(trans-4-isopropylcyclohexanecarbonyl)-D-phenylalanine (AY4166, I) and its seven metabolites in human plasma and urine. Measurements of I and its metabolites were carried out by two-column-switching HPLC, because the metabolites fall into two groups according to their retention times. After purification of plasma samples using solid-phase extraction and direct dilution of urine samples, I and each metabolite were injected into the HPLC system. The calibration graphs for plasma and urine samples were linear in the ranges 0.1 to 10 microg ml(-1) and 0.5 to 50 microg ml(-1), respectively. Recoveries of I and its seven metabolites were over 88% by the standard addition method, and the relative standard deviations of I and its metabolites were 1-6%.
NASA Astrophysics Data System (ADS)
Maddah, B.; Hosseini, F.; Ahmadi, M.; Rajabi, A. Asghar; Beik-Mohammadlood, Z.
2016-05-01
A novel and sensitive extraction procedure using sodium dodecyl sulfate (SDS) modified maghemite nanoparticles (MNPs) as an efficient solid phase has been developed for removal, preconcentration, and spectrophotometric determination of trace amounts of a naphthalene analog of dexmedetomidine (4-(1-(naphthalen-1-yl)ethyl)-1H-imidazole, NMED). The MNPs were obtained by a coprecipitation method, and their surfaces were furthermore modified by SDS. The size and morphological properties of the synthesized MNPs were determined by X-ray diffraction analysis, FT-IR, vibrating sample magnetometry, and scanning electron microscopy. NMED was adsorbed at pH 3.0. The adsorbed drug was then desorbed and determined by spectrophotometry at 280 nm. The calibration graph was linear in the range 1 × 10(-6) to 1 × 10(-4) mol/L of NMED with a correlation coefficient of 0.989. The detection limit of the method for NMED determination was 3.7 × 10(-7) mol/L. The method was successfully applied to the determination of NMED in human urine samples.
Hasanpour, Foroozan; Hadadzadeh, Hassan; Taei, Masoumeh; Nekouei, Mohsen; Mozafari, Elmira
2016-05-01
The analytical performance of a conventional spectrophotometer was improved by coupling an effective dispersive liquid-liquid micro-extraction method with spectrophotometric detection for the ultra-trace determination of cobalt. The method was based on the formation of the Co(II)-alpha-benzoin oxime complex and its extraction using a dispersive liquid-liquid micro-extraction technique. In the present work, several important variables, such as pH, ligand concentration, and the amount and type of dispersive and extracting solvents, were optimized. It was found that the crucial factor for Co(II)-alpha-benzoin oxime complex formation is the pH of the alkaline alcoholic medium. Under the optimized conditions, the calibration graph was linear in the range 1.0-110 μg L(-1) with a detection limit (S/N = 3) of 0.5 μg L(-1). Preconcentration of 25 mL of sample gave an enhancement factor of 75. The proposed method was applied to the determination of Co(II) in soil samples.
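Several abstracts in this set report a linear calibration graph together with a detection limit defined by the S/N = 3 criterion. As a hedged sketch of that common calculation (the concentrations, response, and blank standard deviation below are invented for illustration, not taken from the paper), the calibration slope and the 3σ detection limit can be computed as:

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = slope*x + intercept for a calibration graph."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def detection_limit(blank_sd, slope):
    """Detection limit via the common 3-sigma (S/N = 3) criterion."""
    return 3.0 * blank_sd / slope

# Hypothetical calibration points: concentration (ug/L) vs. absorbance
conc = [1.0, 5.0, 10.0, 25.0, 50.0, 110.0]
absorbance = [0.004 * c + 0.002 for c in conc]  # idealized linear response
slope, intercept = fit_line(conc, absorbance)
lod = detection_limit(blank_sd=0.00067, slope=slope)
```

The same slope also yields the enhancement factor when compared with the slope obtained without preconcentration.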
Generalized graphs and unitary irrational central charge in the superconformal master equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halpern, M.B.; Obers, N.A.
1991-12-01
For each magic basis of Lie g, it is known that the Virasoro master equation on affine g contains a generalized graph theory of conformal level-families. In this paper, it is found that the superconformal master equation on affine g×SO(dim g) similarly contains a generalized graph theory of superconformal level-families for each magic basis of g. The superconformal level-families satisfy linear equations on the generalized graphs, and the first exact unitary irrational solutions of the superconformal master equation are obtained on the sine-area graphs of g=SU(n), including the simplest unitary irrational central charges c=6nx/(nx+8 sin^2(rsπ/n)) yet observed in the program.
Relevance of graph literacy in the development of patient-centered communication tools.
Nayak, Jasmir G; Hartzler, Andrea L; Macleod, Liam C; Izard, Jason P; Dalkin, Bruce M; Gore, John L
2016-03-01
To determine the literacy skill sets of patients in the context of graphical interpretation of interactive dashboards. We assessed literacy characteristics of prostate cancer patients and their comprehension of quality-of-life dashboards. Health literacy, numeracy, and graph literacy were assessed with validated tools. We divided patients into low vs. high numeracy and graph literacy groups. We report descriptive statistics on literacy, dashboard comprehension, and relationships between groups. We used correlation and multiple linear regression to examine factors associated with dashboard comprehension. Despite high health literacy in educated patients (78% college educated), there was variation in numeracy and graph literacy. Numeracy and graph literacy scores were correlated (r=0.37). In those with low literacy, graph literacy scores correlated most strongly with dashboard comprehension (r=0.59-0.90). On multivariate analysis, graph literacy was independently associated with dashboard comprehension, adjusting for age, education, and numeracy level. Even among highly educated patients, variation in the ability to comprehend graphs exists. Clinicians must be aware of these differential proficiencies when counseling patients. Tools for patient-centered communication that employ visual displays need to account for literacy capabilities to ensure that patients can effectively engage with these resources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Collaborative mining and transfer learning for relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Eslami, Mohammed
2015-06-01
Many real-world problems, including human knowledge, communication, biological, and cyber network analysis, deal with data entities for which the essential information is contained in the relations among those entities. Such data must be modeled and analyzed as graphs, in which attributes on both objects and relations encode and differentiate their semantics. Traditional data mining algorithms were originally designed for analyzing discrete objects for which a set of features can be defined, and thus cannot be easily adapted to deal with graph data. This gave rise to the relational data mining field of research, of which graph pattern learning is a key sub-domain [11]. In this paper, we describe a model for learning graph patterns in a collaborative, distributed manner. Distributed pattern learning is challenging due to dependencies between the nodes and relations in the graph, and to variability across graph instances. We present three algorithms that trade off the benefits of parallelization and data aggregation, compare their performance to centralized graph learning, and discuss the individual benefits and weaknesses of each model. The presented algorithms are designed for linear speedup in distributed computing environments, and learn graph patterns that are both closer to ground truth and provide higher detection rates than a centralized mining algorithm.
Dowding, Dawn; Merrill, Jacqueline A; Onorato, Nicole; Barrón, Yolanda; Rosati, Robert J; Russell, David
2018-02-01
To explore home care nurses' numeracy and graph literacy and their relationship to comprehension of visualized data. A multifactorial experimental design using online survey software. Nurses were recruited from 2 Medicare-certified home health agencies. Numeracy and graph literacy were measured using validated scales. Nurses were randomized to 1 of 4 experimental conditions. Each condition displayed data for 1 of 4 quality indicators, in 1 of 4 different visualized formats (bar graph, line graph, spider graph, table). A mixed linear model measured the impact of numeracy, graph literacy, and display format on data understanding. In all, 195 nurses took part in the study. They were slightly more numerate and graph literate than the general population. Overall, nurses understood information presented in bar graphs most easily (88% correct), followed by tables (81% correct), line graphs (77% correct), and spider graphs (41% correct). Individuals with low numeracy and low graph literacy had poorer comprehension of information displayed across all formats. High graph literacy appeared to enhance comprehension of data regardless of numeracy capabilities. Clinical dashboards are increasingly used to provide information to clinicians in visualized format, under the assumption that visual display reduces cognitive workload. Results of this study suggest that nurses' comprehension of visualized information is influenced by their numeracy, graph literacy, and the display format of the data. Individual differences in numeracy and graph literacy skills need to be taken into account when designing dashboard technology. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Biogeographic Dating of Speciation Times Using Paleogeographically Informed Processes.
Landis, Michael J
2017-03-01
Standard models of molecular evolution cannot estimate absolute speciation times alone, and require external calibrations to do so, such as fossils. Because fossil calibration methods rely on the incomplete fossil record, a great number of nodes in the tree of life cannot be dated precisely. However, many major paleogeographical events are dated, and since biogeographic processes depend on paleogeographical conditions, biogeographic dating may be used as an alternative or complementary method to fossil dating. I demonstrate how a time-stratified biogeographic stochastic process may be used to estimate absolute divergence times by conditioning on dated paleogeographical events. Informed by the current paleogeographical literature, I construct an empirical dispersal graph using 25 areas and 26 epochs for the past 540 Ma of Earth's history. Simulations indicate biogeographic dating performs well so long as paleogeography imposes constraint on biogeographic character evolution. To gauge whether biogeographic dating may be of practical use, I analyzed the well-studied turtle clade (Testudines) to assess how well biogeographic dating fares when compared to fossil-calibrated dating estimates reported in the literature. Fossil-free biogeographic dating estimated the age of the most recent common ancestor of extant turtles to be from the Late Triassic, which is consistent with fossil-based estimates. Dating precision improves further when including a root node fossil calibration. The described model, paleogeographical dispersal graph, and analysis scripts are available for use with RevBayes. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Evolutionary Games of Multiplayer Cooperation on Graphs
Arranz, Jordi; Traulsen, Arne
2016-01-01
There has been much interest in studying evolutionary games in structured populations, often modeled as graphs. However, most analytical results so far have only been obtained for two-player or linear games, while the study of more complex multiplayer games has usually been tackled by computer simulations. Here we investigate evolutionary multiplayer games on graphs updated with a Moran death-Birth process. For cycles, we obtain an exact analytical condition for cooperation to be favored by natural selection, given in terms of the payoffs of the game and a set of structure coefficients. For regular graphs of degree three and larger, we estimate this condition using a combination of pair approximation and diffusion approximation. For a large class of cooperation games, our approximations suggest that graph-structured populations are stronger promoters of cooperation than populations lacking spatial structure. Computer simulations validate our analytical approximations for random regular graphs and cycles, but show systematic differences for graphs with many loops such as lattices. In particular, our simulation results show that these kinds of graphs can even lead to more stringent conditions for the evolution of cooperation than well-mixed populations. Overall, we provide evidence suggesting that the complexity arising from many-player interactions and spatial structure can be captured by pair approximation in the case of random graphs, but that it needs to be handled with care for graphs with high clustering. PMID:27513946
Graph C*-algebras and Z2-quotients of quantum spheres
NASA Astrophysics Data System (ADS)
Hajac, Piotr M.; Matthes, Rainer; Szymański, Wojciech
2003-06-01
We consider two Z2-actions on the Podleś generic quantum spheres. They yield, as noncommutative quotient spaces, the Klimek-Lesniewski q-disc and the quantum real projective space, respectively. The C*-algebras of all these quantum spaces are described as graph C*-algebras. The K-groups of the C*-algebras thus presented are then easily determined from the general theory of graph C*-algebras. For the quantum real projective space, we also recall the classification of the classes of irreducible *-representations of its algebra and give a linear basis for this algebra.
Novel approaches to analysis by flow injection gradient titration.
Wójtowicz, Marzena; Kozak, Joanna; Kościelniak, Paweł
2007-09-26
Two novel procedures for flow injection gradient titration with the use of a single stock standard solution are proposed. In the multi-point single-line (MP-SL) method the calibration graph is constructed on the basis of a set of standard solutions, which are generated in a standard reservoir and subsequently injected into the titrant. In the single-point multi-line (SP-ML) procedure the standard solution and a sample are injected into the titrant stream from four loops of different capacities; hence four calibration graphs can be constructed, and the analytical result is calculated on the basis of a generalized slope of these graphs. Both approaches have been tested on the example of spectrophotometric acid-base titration of hydrochloric and acetic acids, using bromothymol blue and phenolphthalein as indicators, respectively, and sodium hydroxide as a titrant. Under optimized experimental conditions, analytical results with precision better than 1.8 and 2.5% (RSD) and accuracy better than 3.0 and 5.4% (relative error, RE) were obtained for the MP-SL and SP-ML procedures, respectively, in the ranges 0.0031-0.0631 mol L(-1) for samples of hydrochloric acid and 0.1680-1.7600 mol L(-1) for samples of acetic acid. The feasibility of both methods was illustrated by applying them to the total acidity determination in vinegar samples, with precision better than 0.5 and 2.9% (RSD) for the MP-SL and SP-ML procedures, respectively.
1987-03-31
processors. The symmetry-breaking algorithms give efficient ways to convert probabilistic algorithms to deterministic algorithms. Some of the... techniques have been applied to construct several efficient linear-processor algorithms for graph problems, including an O(lg* n)-time algorithm for (Δ + 1)... On n-node graphs, the algorithm works in O(log² n) time using only n processors, in contrast to the previous best algorithm, which used about n³
Kirchhoff index of linear hexagonal chains
NASA Astrophysics Data System (ADS)
Yang, Yujun; Zhang, Heping
The resistance distance rij between vertices i and j of a connected (molecular) graph G is computed as the effective resistance between nodes i and j in the corresponding network constructed from G by replacing each edge of G with a unit resistor. The Kirchhoff index Kf(G) is the sum of resistance distances over all pairs of vertices. In this work, according to the decomposition theorem of the Laplacian polynomial, we obtain that the Laplacian spectrum of the linear hexagonal chain Ln consists of the Laplacian spectrum of the path P2n+1 together with the eigenvalues of a symmetric tridiagonal matrix of order 2n + 1. By applying the relationship between the roots and coefficients of the characteristic polynomial of this matrix, an explicit closed-form formula for the Kirchhoff index of Ln is derived in terms of the Laplacian spectrum. To our surprise, the Kirchhoff index of Ln is approximately one half of its Wiener index. Finally, we show that this relation holds for all graphs G in a class of graphs including Ln.
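The spectral formula behind this abstract is compact: for a connected graph on n vertices, the Kirchhoff index equals n times the sum of reciprocals of the nonzero Laplacian eigenvalues. A minimal numerical sketch (numpy; tiny toy graphs, not the paper's closed-form result for Ln):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues
    (valid for a connected graph on n vertices)."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian: degree matrix minus adjacency
    eig = np.linalg.eigvalsh(L)             # eigenvalues in ascending order
    nonzero = eig[eig > 1e-9]               # drop the single zero eigenvalue
    return len(A) * float(np.sum(1.0 / nonzero))

# Path P3 (a tree, so resistance = graph distance): distances 1, 1, 2 give Kf = 4.
P3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
```

For the triangle C3, each pairwise effective resistance is 2/3, so the same function returns 2, matching the eigenvalue formula 3·(1/3 + 1/3).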
NASA Technical Reports Server (NTRS)
Kyle, H. L.; House, F. B.; Ardanuy, P. E.; Jacobowitz, H.; Maschhoff, R. H.; Hickey, J. R.
1984-01-01
In-flight calibration adjustments are developed to process data obtained from the wide-field-of-view channels of Nimbus-6 and Nimbus-7 after the failure of the Nimbus-7 longwave scanner on June 22, 1980. The sensor characteristics are investigated; the satellite environment is examined in detail; and algorithms are constructed to correct for long-term sensor-response changes, on/off-cycle thermal transients, and filter-dome absorption of longwave radiation. Data and results are presented in graphs and tables, including comparisons of the old and new algorithms.
40 CFR 89.323 - NDIR analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... coefficients. If any range is within 2 percent of being linear, a linear calibration may be used. Include zero...
Graph theory applied to noise and vibration control in statistical energy analysis models.
Guasch, Oriol; Cortés, Lluís
2009-06-01
A fundamental aspect of noise and vibration control in statistical energy analysis (SEA) models consists in first identifying and then reducing the energy flow paths between subsystems. In this work, it is proposed to make use of some results from graph theory to address both issues. On the one hand, linear and path algebras applied to adjacency matrices of SEA graphs are used to determine the existence of paths of any order between subsystems, to count and label them, to find extremal paths, or to determine the power flow contributions from groups of paths. On the other hand, a strategy is presented that makes use of graph cut algorithms to reduce the energy flow from a source subsystem to a receiver one, modifying as few internal and coupling loss factors as possible.
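The adjacency-matrix path algebra mentioned here has a simple concrete form: entry (i, j) of the k-th power of the adjacency matrix counts the walks of length k from subsystem i to subsystem j. A minimal sketch (plain Python; the 3-subsystem chain below is a hypothetical SEA graph, not one from the paper):

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def walk_counts(adj, k):
    """k-th power of the adjacency matrix: entry (i, j) counts
    the walks of length k from subsystem i to subsystem j."""
    M = adj
    for _ in range(k - 1):
        M = matmul(M, adj)
    return M

# Hypothetical 3-subsystem SEA graph: a chain 0 - 1 - 2
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
```

With weighted adjacency (coupling loss factors instead of 0/1 entries), the same powers accumulate the products of weights along paths, which is how group path contributions can be tallied.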
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad
2016-02-01
Estimating the confidence for a link is a critical task in Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on its prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
Weighted graph cuts without eigenvectors a multilevel approach.
Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian
2007-11-01
A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
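The weighted graph objectives named in this abstract (ratio cut, normalized cut, ratio association) are easy to state directly. As a sketch, for a weighted adjacency matrix W and a cluster S, the two-way normalized cut is cut(S, S̄)·(1/vol(S) + 1/vol(S̄)); the example graph below is invented for illustration:

```python
def cut_value(W, S):
    """Total edge weight crossing from cluster S to its complement."""
    n = len(W)
    S = set(S)
    Sbar = [i for i in range(n) if i not in S]
    return sum(W[i][j] for i in S for j in Sbar)

def normalized_cut(W, S):
    """Two-way normalized cut: cut(S, Sbar) * (1/vol(S) + 1/vol(Sbar)),
    where vol(T) is the total degree (edge weight) of the nodes in T."""
    n = len(W)
    S = set(S)
    Sbar = [i for i in range(n) if i not in S]
    c = cut_value(W, S)
    vol = lambda T: sum(W[i][j] for i in T for j in range(n))
    return c / vol(S) + c / vol(Sbar)

# Two triangles joined by a single edge: the natural two-way partition cuts one edge.
W = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
```

The paper's contribution is that minimizing such objectives is equivalent to a weighted kernel k-means objective, so they can be optimized without computing eigenvectors; the snippet above only evaluates the objective for a given partition.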
Model predictive control of P-time event graphs
NASA Astrophysics Data System (ADS)
Hamri, H.; Kara, R.; Amari, S.
2016-12-01
This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by using the dater evolution model written in the standard algebra. For the control law, we use finite-horizon model predictive control. For the closed-loop control, we use infinite-horizon model predictive control (IH-MPC). The latter is an approach that calculates static feedback gains, which ensures the stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a linear convex program subject to a linear matrix inequality. Finally, the proposed methodology is applied to a transportation system.
Observer-based distributed adaptive iterative learning control for linear multi-agent systems
NASA Astrophysics Data System (ADS)
Li, Jinsha; Liu, Sanyang; Li, Junmin
2017-10-01
This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve consensus perfectly for every undirected connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.
Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
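One linear-algebra metric of the kind this abstract recommends for flagging near-linear dependencies between regression model terms is the condition number of the column-scaled model matrix: values far above 1 indicate that some terms are nearly dependent. This is a sketch of that general idea with invented data, not the paper's specific metric set:

```python
import numpy as np

def scaled_condition_number(X):
    """Condition number of the model matrix after scaling each column to unit length.
    Large values signal near-linear dependencies between regression terms."""
    Xs = X / np.linalg.norm(X, axis=0)          # remove scale differences between terms
    s = np.linalg.svd(Xs, compute_uv=False)     # singular values, descending
    return s[0] / s[-1]

# Orthogonal columns: no dependency, condition number 1.
X_ok = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

# Two nearly identical regressors: an ill-conditioned model matrix.
X_bad = np.array([[1.0, 1.0], [1.0, 1.000001], [1.0, 0.999999]])
```

A term whose inclusion drives the condition number up sharply is a candidate for removal before testing the remaining terms for statistical significance.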
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the technique is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. The approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for the elimination of fluctuations arising from instrumental and experimental conditions; it reduces the multivariate linear regression functions to a univariate data set. The model was validated by analyzing various synthetic binary mixtures and by the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results were compared with those obtained by a classical HPLC method, and the proposed multivariate chromatographic calibration was observed to give better results than classical HPLC.
Protein domain organisation: adding order.
Kummerfeld, Sarah K; Teichmann, Sarah A
2009-01-29
Domains are the building blocks of proteins. During evolution, they have been duplicated, fused and recombined, to produce proteins with novel structures and functions. Structural and genome-scale studies have shown that pairs or groups of domains observed together in a protein are almost always found in only one N to C terminal order and are the result of a single recombination event that has been propagated by duplication of the multi-domain unit. Previous studies of domain organisation have used graph theory to represent the co-occurrence of domains within proteins. We build on this approach by adding directionality to the graphs and connecting nodes based on their relative order in the protein. Most of the time, the linear order of domains is conserved. However, using the directed graph representation we have identified non-linear features of domain organization that are over-represented in genomes. Recognising these patterns and unravelling how they have arisen may allow us to understand the functional relationships between domains and understand how the protein repertoire has evolved. We identify groups of domains that are not linearly conserved, but instead have been shuffled during evolution so that they occur in multiple different orders. We consider 192 genomes across all three kingdoms of life and use domain and protein annotation to understand their functional significance. To identify these features and assess their statistical significance, we represent the linear order of domains in proteins as a directed graph and apply graph theoretical methods. We describe two higher-order patterns of domain organisation: clusters and bi-directionally associated domain pairs and explore their functional importance and phylogenetic conservation. Taking into account the order of domains, we have derived a novel picture of global protein organization. 
We found that all genomes have a higher than expected degree of clustering and more domain pairs in forward and reverse orientation in different proteins relative to random graphs with identical degree distributions. While these features were statistically over-represented, they are still fairly rare. Looking in detail at the proteins involved, we found strong functional relationships within each cluster. In addition, the domains tended to be involved in protein-protein interaction and are able to function as independent structural units. A particularly striking example was the human Jak-STAT signalling pathway which makes use of a set of domains in a range of orders and orientations to provide nuanced signaling functionality. This illustrated the importance of functional and structural constraints (or lack thereof) on domain organisation.
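The directed-graph representation described above can be sketched compactly: record consecutive domain pairs of each protein in N-to-C order as directed edges, then look for pairs observed in both orientations. The domain names below are illustrative only, not data from the study:

```python
from collections import defaultdict

def directed_edges(proteins):
    """Count each consecutive (N-to-C) domain pair across a set of architectures."""
    edges = defaultdict(int)
    for domains in proteins:
        for a, b in zip(domains, domains[1:]):
            edges[(a, b)] += 1
    return edges

def bidirectional_pairs(edges):
    """Domain pairs that occur in both N-to-C orders somewhere in the set."""
    return {frozenset(p) for p in edges if p[0] != p[1] and (p[1], p[0]) in edges}

# Hypothetical architectures: SH2-SH3 appears in both orientations.
proteins = [["SH2", "SH3", "Kinase"], ["SH3", "SH2"], ["SH2", "Kinase"]]
edges = directed_edges(proteins)
```

Comparing the count of such bidirectional pairs against random graphs with the same degree distribution is the kind of significance test the study describes.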
Brain Graph Topology Changes Associated with Anti-Epileptic Drug Use
Levin, Harvey S.; Chiang, Sharon
2015-01-01
Neuroimaging studies of functional connectivity using graph theory have furthered our understanding of the network structure in temporal lobe epilepsy (TLE). Brain network effects of anti-epileptic drugs could influence such studies, but have not been systematically studied. Resting-state functional MRI was analyzed in 25 patients with TLE using graph theory analysis. Patients were divided into two groups based on anti-epileptic medication use: those taking carbamazepine/oxcarbazepine (CBZ/OXC) (n=9) and those not taking CBZ/OXC (n=16) as part of their medication regimen. The following graph topology metrics were analyzed: global efficiency, betweenness centrality (BC), clustering coefficient, and small-world index. Multiple linear regression was used to examine the association of CBZ/OXC with graph topology. The two groups did not differ from each other based on epilepsy characteristics. Use of CBZ/OXC was associated with a lower BC. Longer epilepsy duration was also associated with a lower BC. These findings can inform graph theory-based studies in patients with TLE. The changes observed are discussed in relation to the anti-epileptic mechanism of action and adverse effects of CBZ/OXC. PMID:25492633
NASA Astrophysics Data System (ADS)
Bektasli, Behzat
Graphs have a broad use in science classrooms, especially in physics. In physics, kinematics is probably the topic for which graphs are most widely used. The participants in this study were from two different grade-12 physics classrooms, advanced placement and calculus-based physics. The main purpose of this study was to search for the relationships between student spatial ability, logical thinking, mathematical achievement, and kinematics graphs interpretation skills. The Purdue Spatial Visualization Test, the Middle Grades Integrated Process Skills Test (MIPT), and the Test of Understanding Graphs in Kinematics (TUG-K) were used for quantitative data collection. Classroom observations were made to acquire ideas about classroom environment and instructional techniques. Factor analysis, simple linear correlation, multiple linear regression, and descriptive statistics were used to analyze the quantitative data. Each instrument has two principal components. The selection and calculation of the slope and of the area were the two principal components of TUG-K. MIPT was composed of a component based upon processing text and a second component based upon processing symbolic information. The Purdue Spatial Visualization Test was composed of a component based upon one-step processing and a second component based upon two-step processing of information. Student ability to determine the slope in a kinematics graph was significantly correlated with spatial ability, logical thinking, and mathematics aptitude and achievement. However, student ability to determine the area in a kinematics graph was only significantly correlated with student pre-calculus semester 2 grades. Male students performed significantly better than female students on the slope items of TUG-K. Also, male students performed significantly better than female students on the PSAT mathematics assessment and spatial ability. 
This study found that students have different levels of spatial ability, logical thinking, and mathematics aptitude and achievement. These different levels were related to student learning of kinematics and need to be considered when kinematics is taught. It might be easier for students to understand kinematics graphs if curriculum developers included more activities related to spatial ability and logical thinking.
Robust consensus control with guaranteed rate of convergence using second-order Hurwitz polynomials
NASA Astrophysics Data System (ADS)
Fruhnert, Michael; Corless, Martin
2017-10-01
This paper considers homogeneous networks of general, linear time-invariant, second-order systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and systems are stabilisable. We show that consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. To achieve this, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback.
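The paper derives algebraic Hurwitz conditions for second-order polynomials with complex coefficients; the property itself can also be checked numerically by inspecting the roots directly. A minimal sketch (numpy; the example polynomials are invented, not the paper's conditions):

```python
import numpy as np

def is_hurwitz(coeffs):
    """True if every root of the polynomial (coefficients listed highest-degree
    first, possibly complex) lies strictly in the open left half-plane."""
    return bool(np.all(np.roots(coeffs).real < 0))

# s^2 + 2s + 2 has roots -1 +/- j: Hurwitz.
# s^2 + 1 has roots +/- j on the imaginary axis: not Hurwitz.
# (s + 1 - j)(s + 2) = s^2 + (3 - j)s + (2 - 2j): complex coefficients, still Hurwitz.
```

Such a numerical check is useful for validating candidate feedback gains against the Laplacian eigenvalues of a given communication graph, which may be complex for directed graphs.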
Development of a prognostic model for predicting spontaneous singleton preterm birth.
Schaaf, Jelle M; Ravelli, Anita C J; Mol, Ben Willem J; Abu-Hanna, Ameen
2012-10-01
To develop and validate a prognostic model for prediction of spontaneous preterm birth. Prospective cohort study using data from the nationwide perinatal registry in The Netherlands. We studied 1,524,058 singleton pregnancies between 1999 and 2007. We developed a multiple logistic regression model to estimate the risk of spontaneous preterm birth based on maternal and pregnancy characteristics. We used bootstrapping techniques to internally validate our model. Discrimination (AUC), accuracy (Brier score) and calibration (calibration graphs and Hosmer-Lemeshow C-statistic) were used to assess the model's predictive performance. Our primary outcome measure was spontaneous preterm birth at <37 completed weeks. Spontaneous preterm birth occurred in 57,796 (3.8%) pregnancies. The final model included 13 variables for predicting preterm birth. The predicted probabilities ranged from 0.01 to 0.71 (IQR 0.02-0.04). The model had an area under the receiver operating characteristic curve (AUC) of 0.63 (95% CI 0.63-0.63), the Brier score was 0.04 (95% CI 0.04-0.04) and the Hosmer-Lemeshow C-statistic was significant (p<0.0001). The calibration graph showed overprediction at higher values of predicted probability. The positive predictive value was 26% (95% CI 20-33%) for the 0.4 probability cut-off point. The model's discrimination was fair and its calibration modest. Previous preterm birth, drug abuse and vaginal bleeding in the first half of pregnancy were the most important predictors of spontaneous preterm birth. Although not yet applicable in clinical practice, this model is a next step towards early prediction of spontaneous preterm birth that enables caregivers to start preventive therapy in women at higher risk. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
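The performance measures reported here can be computed directly: the AUC via the rank-sum (Mann-Whitney) statistic and the Brier score as the mean squared error of predicted probabilities. The sketch below uses synthetic data only, since the registry data are not available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (not the registry data): predicted risks and outcomes
p = rng.uniform(0.01, 0.71, size=1000)        # predicted probability of the event
y = (rng.uniform(size=1000) < p).astype(int)  # simulated binary outcomes

def auc(y_true, y_score):
    """AUC via the rank-sum statistic (assumes no ties in y_score)."""
    order = np.argsort(y_score)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

brier = np.mean((p - y) ** 2)  # accuracy: mean squared error of probabilities
print(auc(y, p), brier)
```

Calibration would additionally be assessed by binning predictions and plotting observed event rates against mean predicted risk, as in the paper's calibration graphs.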
Linear dynamic range enhancement in a CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor)
2008-01-01
A CMOS imager with increased linear dynamic range but without degradation in noise, responsivity, linearity, fixed-pattern noise, or photometric calibration comprises a linear calibrated dual gain pixel in which the gain is reduced after a pre-defined threshold level by switching in an additional capacitance. The pixel may include a novel on-pixel latch circuit that is used to switch in the additional capacitance.
Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.
2012-01-01
Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
Linear Time Algorithms to Restrict Insider Access using Multi-Policy Access Control Systems
Mell, Peter; Shook, James; Harang, Richard; Gavrila, Serban
2017-01-01
An important way to limit malicious insiders from distributing sensitive information is to limit their access to information as tightly as possible. This has always been the goal of access control mechanisms, but individual approaches have been shown to be inadequate. Ensemble approaches of multiple methods instantiated simultaneously have been shown to restrict access more tightly, but approaches to do so have had limited scalability (resulting in exponential calculations in some cases). In this work, we take the Next Generation Access Control (NGAC) approach standardized by the American National Standards Institute (ANSI) and demonstrate its scalability. The existing publicly available reference implementations all use cubic algorithms, and thus NGAC was widely viewed as not scalable. The primary NGAC reference implementation took, for example, several minutes to simply display the set of files accessible to a user on a moderately sized system. In our approach, we take these cubic algorithms and make them linear. We do this by reformulating the set theoretic approach of the NGAC standard into a graph theoretic approach and then applying standard graph algorithms. We thus can answer important access control decision questions (e.g., which files are available to a user and which users can access a file) using linear time graph algorithms. We also provide a default linear time mechanism to visualize and review user access rights for an ensemble of access control mechanisms. Our visualization appears to be a simple file directory hierarchy but in reality is an automatically generated structure abstracted from the underlying access control graph that works with any set of simultaneously instantiated access control policies. It also provides an implicit mechanism for symbolic linking that provides a powerful access capability. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach.
This may help transition from concept to reality the idea of using ensembles of simultaneously instantiated access control methodologies, thereby limiting insider threat. PMID:28758045
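The graph-theoretic reformulation described above reduces privilege review to graph reachability. As an illustration only (NGAC's actual model involves policy classes, attributes, and associations, none of which are shown, and the graph below is hypothetical), a directed access graph can be searched in linear time with BFS:

```python
from collections import deque

# Hypothetical access graph: edges point from users/groups toward the
# containers and files they can reach (a simplification of NGAC's
# assignment structure, for illustration only).
edges = {
    "alice": ["staff"],
    "staff": ["project_dir"],
    "project_dir": ["report.txt", "data.csv"],
    "bob": ["report.txt"],
}

def reachable_files(user, edges, is_file=lambda n: "." in n):
    """Linear-time BFS: each node and edge is visited at most once."""
    seen, queue, files = {user}, deque([user]), []
    while queue:
        node = queue.popleft()
        if is_file(node):
            files.append(node)
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(files)

print(reachable_files("alice", edges))  # → ['data.csv', 'report.txt']
```

Running the same traversal over reversed edges from a file node answers the dual question of which users can access that file.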
Ring Laser Gyro G-Sensitive Misalignment Calibration in Linear Vibration Environments.
Wang, Lin; Wu, Wenqi; Li, Geng; Pan, Xianfei; Yu, Ruihang
2018-02-16
The ring laser gyro (RLG) dither axis will bend and exhibit errors due to the specific forces acting on the instrument, which are known as g-sensitive misalignments of the gyros. The g-sensitive misalignments of the RLG triad will cause severe attitude errors in vibration or maneuver environments where large-amplitude specific forces and angular rates coexist. However, g-sensitive misalignments are usually ignored when calibrating the strapdown inertial navigation system (SINS). This paper proposes a novel method to calibrate the g-sensitive misalignments of an RLG triad in linear vibration environments. With the SINS attached to a linear vibration bench through outer rubber dampers, rocking of the SINS occurs when linear vibration is applied, so linear vibration environments can be created that simulate the harsh environment of aircraft flight. By analyzing the mathematical model of g-sensitive misalignments, the relationship between attitude errors and specific forces as well as angular rates is established, whereby a calibration scheme with approximately optimal observations is designed. Vibration experiments are conducted to calibrate the g-sensitive misalignments of the RLG triad. Vibration tests also show that the SINS velocity error decreases significantly after g-sensitive misalignment compensation.
A characterization of horizontal visibility graphs and combinatorics on words
NASA Astrophysics Data System (ADS)
Gutin, Gregory; Mansour, Toufik; Severini, Simone
2011-06-01
A Horizontal Visibility Graph (HVG) is defined in association with an ordered set of non-negative reals. HVGs realize a methodology in the analysis of time series, their degree distribution being a good discriminator between randomness and chaos (Luque et al. [B. Luque, L. Lacasa, F. Ballesteros, J. Luque, Horizontal visibility graphs: exact results for random time series, Phys. Rev. E 80 (2009) 046103]). We prove that a graph is an HVG if and only if it is outerplanar and has a Hamilton path. Therefore, an HVG is a noncrossing graph, as defined in algebraic combinatorics (Flajolet and Noy [P. Flajolet, M. Noy, Analytic combinatorics of noncrossing configurations, Discrete Math. 204 (1999) 203-229]). Our characterization of HVGs implies a linear time recognition algorithm. Treating ordered sets as words, we characterize subfamilies of HVGs, highlighting various connections with combinatorial statistics and introducing the notion of a visible pair. With this technique, we determine asymptotically the average number of edges of HVGs.
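The HVG itself is easy to construct directly from the definition. The following naive O(n²) sketch (the paper's contribution, linear-time recognition of HVGs, is a different problem) connects two time points iff every value strictly between them is smaller than both:

```python
def horizontal_visibility_graph(series):
    """Naive O(n^2) HVG of a sequence of non-negative reals: indices i < j
    are adjacent iff series[k] < min(series[i], series[j]) for all i < k < j."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

print(horizontal_visibility_graph([3, 1, 2, 4]))
```

Note that consecutive indices are always adjacent (the inner condition is vacuously true for j = i + 1), which is why every HVG contains a Hamilton path, consistent with the characterization above.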
From the Laboratory to the Classroom: A Technology-Intensive Curriculum for Functions and Graphs.
ERIC Educational Resources Information Center
Magidson, Susan
1992-01-01
Addresses the challenges, risks, and rewards of teaching about linear functions in a technology-rich environment from a constructivist perspective. Describes an algebra class designed for junior high school students that focuses on the representations and real-world applications of linear functions. (MDH)
Technology, Linear Equations, and Buying a Car.
ERIC Educational Resources Information Center
Sandefur, James T.
1992-01-01
Discusses the use of technology in solving compound interest-rate problems that can be modeled by linear relationships. Uses a graphing calculator to solve the specific problem of determining the amount of money that can be borrowed to buy a car for a given monthly payment and interest rate. (MDH)
Graphs and matroids weighted in a bounded incline algebra.
Lu, Ling-Xia; Zhang, Bei
2014-01-01
Firstly, for a graph weighted in a bounded incline algebra (also called a dioid), a longest path problem (LPP, for short) is presented, which can be considered a uniform approach to the famous shortest path problem, the widest path problem, and the most reliable path problem. The solutions for LPP and related algorithms are given. Secondly, for a matroid weighted in a linear matroid, the maximum independent set problem is studied.
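The "uniform approach" idea, one relaxation algorithm whose meaning changes with the underlying algebra, can be sketched with a Bellman-Ford-style iteration parameterized by the two dioid operations. The function below is illustrative, not the paper's algorithm; with plus = max and times = min it computes widest (bottleneck) path values, and with plus = min and times = addition it computes shortest-path values:

```python
def best_path_values(n, edges, source, plus=max, times=min,
                     zero=float("-inf"), one=float("inf")):
    """Bellman-Ford-style relaxation over a (plus, times) algebra.
    zero is the neutral element of plus; one is the neutral element of times.
    n - 1 relaxation rounds suffice when cycles cannot improve a path."""
    d = [zero] * n
    d[source] = one
    for _ in range(n - 1):
        for u, v, w in edges:
            d[v] = plus(d[v], times(d[u], w))
    return d

edges = [(0, 1, 5), (1, 2, 3), (0, 2, 2)]
# Widest path 0 -> 2 goes via node 1: bottleneck min(5, 3) = 3 beats the direct width 2.
print(best_path_values(3, edges, 0))  # → [inf, 5, 3]
```

Swapping in `plus=min, times=lambda a, b: a + b, zero=float("inf"), one=0.0` turns the same loop into ordinary shortest-path relaxation, which is the point of the dioid formulation.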
Hydrogen recombiner catalyst test supporting data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Britton, M.D.
1995-01-19
This is a data package supporting the Hydrogen Recombiner Catalyst Performance and Carbon Monoxide Sorption Capacity Test Report, WHC-SD-WM-TRP-211, Rev 0. This report contains 10 appendices which consist of the following: Mass spectrometer analysis reports: HRC samples 93-001 through 93-157; Gas spectrometry analysis reports: HRC samples 93-141 through 93-658; Mass spectrometer procedure PNL-MA-299 ALO-284; Alternate analytical method for ammonia and water vapor; Sample log sheets; Job Safety analysis; Certificate of mixture analysis for feed gases; Flow controller calibration check; Westinghouse Standards Laboratory report on Bois flow calibrator; and Sorption capacity test data, tables, and graphs.
Consensus seeking in a network of discrete-time linear agents with communication noises
NASA Astrophysics Data System (ADS)
Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhou, Chao; Wang, Ming
2015-07-01
This paper studies the mean square consensus of discrete-time linear time-invariant multi-agent systems with communication noises. A distributed consensus protocol, which is composed of the agent's own state feedback and the relative states between the agent and its neighbours, is proposed. A time-varying consensus gain a[k] is applied to attenuate the effect of noise that is inherent in the inaccurate measurement of relative states with neighbours. A polynomial, namely the 'parameter polynomial', is constructed, and its coefficients are the parameters in the feedback gain vector of the proposed protocol. It turns out that the parameter polynomial plays an important role in guaranteeing the consensus of linear multi-agent systems. By the proposed protocol, necessary and sufficient conditions for mean square consensus are presented under different topology conditions: (1) if the communication topology graph has a spanning tree and every node in the graph has at least one parent node, then mean square consensus can be achieved if and only if ∑_{k=0}^{∞} a[k] = ∞, ∑_{k=0}^{∞} a²[k] < ∞ and all roots of the parameter polynomial are in the unit circle; (2) if the communication topology graph has a spanning tree and there exists one node without any parent node (the leader-follower case), then mean square consensus can be achieved if and only if ∑_{k=0}^{∞} a[k] = ∞, lim_{k→∞} a[k] = 0 and all roots of the parameter polynomial are in the unit circle; (3) if the communication topology graph does not have a spanning tree, then mean square consensus can never be achieved. Finally, one simulation example on a multiple aircraft system is provided to validate the theoretical analysis.
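The gain conditions above are the classic stochastic-approximation requirements; for example, a[k] = 1/(k+1) satisfies both ∑ a[k] = ∞ and ∑ a²[k] < ∞. A toy two-agent scalar simulation (illustrative only, not the paper's multi-aircraft example) shows how such a decreasing gain averages out the measurement noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two scalar agents exchanging noisy relative-state measurements.
# Gain a[k] = 1/(k+1): sum a[k] diverges, sum a[k]^2 converges.
x = np.array([0.0, 10.0])
for k in range(20000):
    a = 1.0 / (k + 1)
    noise = rng.normal(0, 0.1, size=2)
    # each agent moves toward its (noisy) measurement of the other agent
    rel = np.array([x[1] - x[0], x[0] - x[1]]) + noise
    x = x + a * rel
print(x)  # the two states end up close to a common value
```

A constant gain would instead leave a persistent noise-driven variance, which is why the time-varying gain is needed for mean square consensus.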
ERIC Educational Resources Information Center
Hosker, Bill S.
2018-01-01
A highly simplified variation on the do-it-yourself spectrophotometer using a smartphone's light sensor as a detector and an app to calculate and display absorbance values was constructed and tested. This simple version requires no need for electronic components or postmeasurement spectral analysis. Calibration graphs constructed from two…
ERIC Educational Resources Information Center
Labuhn, Andju Sara; Zimmerman, Barry J.; Hasselhorn, Marcus
2010-01-01
The purpose of this study was to examine the effects of self-evaluative standards and graphed feedback on calibration accuracy and performance in mathematics. Specifically, we explored the influence of mastery learning standards as opposed to social comparison standards as well as of individual feedback as opposed to social comparison feedback. 90…
Chen, Hongqi; Ling, Bo; Yuan, Fei; Zhou, Cailing; Chen, Jingguo; Wang, Lun
2012-01-01
A highly sensitive flow-injection chemiluminescence (FIA-CL) method based on the CdTe nanocrystals-potassium permanganate chemiluminescence system was developed for the determination of L-ascorbic acid. It was found that sodium hexametaphosphate (SP), as an enhancer, could increase the chemiluminescence (CL) emission from the redox reaction of CdTe quantum dots with potassium permanganate under near-neutral pH conditions. L-ascorbic acid is suggested as a sensitive enhancer for use in the above energy-transfer excitation process. Under optimal conditions, the calibration graph of emission intensity against the logarithm of L-ascorbic acid concentration was linear in the range 1.0 × 10(-9) to 5.0 × 10(-6) mol/L, with a correlation coefficient of 0.9969 and relative standard deviation (RSD) of 2.3% (n = 7) at 5.0 × 10(-7) mol/L. The method was successfully used to determine L-ascorbic acid in vitamin C tablets. The possible mechanism of the chemiluminescence in the system is also discussed. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Mirabi, Ali; Shokuhi Rad, Ali; Khodadad, Hadiseh
2015-09-01
A magnetic nanocomposite surface (MNCS) coated with the anionic surfactant sodium dodecyl sulfate (SDS) and treated with dithiooxamide was used as a sorbent, providing a simple and useful method for the extraction and preconcentration of copper ions prior to their determination by flame atomic absorption spectrometry (FAAS). The influence of experimental parameters such as sample pH, type and concentration of the eluent, dithiooxamide concentration and volume, amount of sorbent, and the effect of interfering ions on copper ion detection was studied. The calibration graph was linear in the range of 2-600 ng ml-1 with a detection limit of 0.2 ng ml-1. The relative standard deviation (RSD) for 6 replicate measurements was 1.8%. The method has been applied to the determination of Cu ions in real samples such as wheat flour, tomatoes, potatoes, red beans, oat, tap water, river water and sea water with satisfactory results.
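Calibration statistics like those reported in this and the surrounding abstracts follow from an ordinary least-squares fit of signal against concentration. The sketch below uses made-up data and an assumed blank standard deviation (the paper's raw measurements are not given), together with the common 3σ definition of the detection limit:

```python
import numpy as np

# Hypothetical calibration data (concentration in ng mL^-1 vs. signal);
# these are NOT the paper's measurements.
conc = np.array([2, 50, 150, 300, 450, 600], dtype=float)
signal = np.array([0.004, 0.100, 0.301, 0.598, 0.903, 1.199])

slope, intercept = np.polyfit(conc, signal, 1)   # least-squares calibration line
residuals = signal - (slope * conc + intercept)
r2 = 1 - np.sum(residuals**2) / np.sum((signal - signal.mean())**2)

s_blank = 0.0002            # assumed standard deviation of blank measurements
lod = 3 * s_blank / slope   # 3-sigma detection limit in concentration units
print(slope, r2, lod)
```

An unknown sample's concentration is then read off as `(measured_signal - intercept) / slope`, which is valid only inside the linear range of the graph.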
Grabarczyk, Malgorzata; Wardak, Cecylia
2014-01-01
This article describes a differential pulse adsorptive stripping voltammetric method for the trace determination of gallium in environmental water samples. It is based on the adsorptive deposition of the Ga(III)-cupferron complex at the hanging mercury drop electrode (HMDE) at -0.4 V (versus Ag/AgCl) and its cathodic stripping during the potential scan. The method was optimized with respect to the main electrochemical parameters that affect the voltammetric determination (supporting electrolyte, pH, cupferron concentration, deposition potential and time). The calibration graph is linear from 5 × 10(-10) to 5 × 10(-7) mol L(-1) with a detection limit calculated as 1.3 × 10(-10) mol L(-1) for a deposition time of 30 s. The influence of interfering substances such as surfactants and humic substances present in the matrices of natural water samples on the Ga(III) signal was examined, and a satisfactory minimization of these interferences was proposed. The procedure was applied to the direct determination of gallium in environmental water samples.
Eskandari, Habibollah; Shariati, Mohammad Reza
2011-10-17
A new method was proposed for the determination of ammonium based on preconcentration with dodecylbenzene sulfonate-modified magnetite nanoparticles. Ammonium was oxidized to nitrite by hypobromite, and the nitrite produced was then determined spectrophotometrically, using sulfabenzamide and N-(1-naphthyl)ethylenediamine after solid phase extraction. The azo dye produced was desorbed by an appropriate small volume of sodium hydroxide prior to the absorbance measurement. Linear calibration graphs were obtained in the concentration range of 0.03-6.00 ng mL(-1) ammonium. The relative standard deviation and recovery were 1.0% and 99.0%, respectively, for 1.0 ng mL(-1) ammonium, and the limit of detection was 3.2 ng L(-1) ammonium. The interfering effects of a large number of diverse ions on the determination of ammonium were studied. The method was applied to the determination of ammonium in various types of water resources. The results revealed a high efficiency for the recommended ammonium determination method. Copyright © 2011 Elsevier B.V. All rights reserved.
Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz
2015-01-01
In this paper, a simple and cost-effective method was developed for extraction and pre-concentration of carmine in food samples by using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as extracting solvent. The effects of the main parameters, such as solution pH, surfactant and salt concentrations, incubation time and temperature, were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to determine carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Wring, S A; Hart, J P; Birch, B J
1989-12-01
High-performance liquid chromatography with electrochemical detection (LCEC), incorporating a novel carbon-epoxy resin working electrode modified with cobalt phthalocyanine, has been employed for preliminary studies directed towards the determination of normal circulating levels of reduced glutathione (GSH) in human plasma. The mobile phase consisted of 0.05 M phosphate buffer (pH 3) containing 0.1% m/m ethylenediaminetetraacetic acid (EDTA); the calibration graph was linear in the range 0.24-30.7 ng of GSH injected. The mean recovery of GSH added to a control serum over the physiological concentration range (0.38-3.07 ng ml-1) was 99%; this was achieved following a simple sample pre-treatment method, prior to LCEC, involving chelation of divalent cations with EDTA and subsequent acidification with orthophosphoric acid. Using the LCEC method, the mean circulating level of GSH in plasma, found in three normal subjects, was 2.69 microM GSH; this indicates that the method might be applicable to the determination of depressed circulating levels of GSH.
Seno, Kunihiko; Matumura, Kazuki; Oshita, Koji; Oshima, Mitsuko; Motomizu, Shoji
2009-03-01
A sensitive and rapid flow-injection analysis was developed for the determination of 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC.HCl), which is used as a dehydration or condensation reagent for the formation of amides (peptides) and esters. The EDC.HCl could be determined by flow-injection analysis based on a specific condensation reaction between malonic acid and ethylenediamine in aqueous media. The reaction was accelerated at 60 degrees C, and the absorbance of the product was detected at 262 nm. The calibration graph of EDC.HCl showed good linearity in the range from 0 to 0.1% (0 to 0.0005 M), whose regression equation was y = 1.52 × 10(9) x (y, peak area; x, % concentration of EDC.HCl). The proposed method allowed high-throughput analysis; the sample throughput was 12 samples per hour. The limit of detection (LOD) and the relative standard deviation (RSD) were 2 × 10(-6) M and 1.0%, respectively. The reaction proceeds in aqueous solution and is specific for EDC.HCl.
Yanu, Pattama; Jakmunee, Jaroon
2017-09-01
A flow injection conductometric (FIC) system for determination of total Kjeldahl nitrogen (TKN) was developed for estimating total protein content in food. A small scale Kjeldahl digestion was performed with a short digestion time of only 20 min. The digested solution was injected into the FIC system, and TKN was converted to ammonia gas in an alkaline donor stream of the system. The gas diffused through a membrane and dissolved into an acceptor stream, causing an increase in conductivity that was detected and recorded as a peak. Under the optimum conditions, a linear calibration graph in the range of 4.00-100.00 mg L(-1) was obtained with an LOD of 0.05 mg L(-1). Good precision (0.04% RSD, n=11, 30.00 mg N L(-1)) and a high sample throughput of 72 h(-1) were achieved. The method was applied for determination of protein in some traditional northern Thai foods, revealing that they are good sources of protein. Copyright © 2017 Elsevier Ltd. All rights reserved.
Anderson, M A; Wachs, T; Henion, J D
1997-02-01
A method based on ionspray liquid chromatography/tandem mass spectrometry (LC/MS/MS) was developed for the determination of reserpine in equine plasma. A comparison was made of the isolation of reserpine from plasma by liquid-liquid extraction and by solid-phase extraction. A structural analog, rescinnamine, was used as the internal standard. The reconstituted extracts were analyzed by ionspray LC/MS/MS in the selected reaction monitoring (SRM) mode. The calibration graph for reserpine extracted from equine plasma obtained using liquid-liquid extraction was linear from 10 to 5000 pg ml-1 and that using solid-phase extraction from 100 to 5000 pg ml-1. The lower level of quantitation (LLQ) using liquid-liquid and solid-phase extraction was 50 and 200 pg ml-1, respectively. The lower level of detection for reserpine by LC/MS/MS was 10 pg ml-1. The intra-assay accuracy did not exceed 13% for liquid-liquid and 12% for solid-phase extraction. The recoveries for the LLQ were 68% for liquid-liquid and 58% for solid-phase extraction.
Khodadoust, Saeid; Ghaedi, Mehrorang
2014-12-10
In this study a rapid and effective method, dispersive liquid-liquid microextraction (DLLME), was developed for extraction of methyl red (MR) prior to its determination by UV-Vis spectrophotometry. Variables influencing DLLME, such as the volumes of chloroform (as extraction solvent) and methanol (as dispersive solvent), pH, ionic strength and extraction time, were investigated. The significant variables were then optimized by using a Box-Behnken design (BBD) and desirability function (DF). The optimized conditions (100 μL of chloroform, 1.3 mL of ethanol, pH 4 and 4% (w/v) NaCl) resulted in a linear calibration graph in the range of 0.015-10.0 mg mL(-1) of MR in the initial solution with R(2)=0.995 (n=5). The limit of detection (LOD) and limit of quantification (LOQ) were 0.005 and 0.015 mg mL(-1), respectively. Finally, the DLLME method was applied for determination of MR in different water samples with a relative standard deviation (RSD) of less than 5% (n=5). Copyright © 2014 Elsevier B.V. All rights reserved.
Electro-focusing liquid extractive surface analysis (EF-LESA) coupled to mass spectrometry.
Brenton, A Gareth; Godfrey, A Ruth
2014-04-01
Analysis of the chemical composition of surfaces by liquid sampling devices interfaced to mass spectrometry is attractive as the sample stream can be continuously monitored at good sensitivity and selectivity. A sampling probe has been constructed that takes discrete liquid samples (typically <100 nL) of a surface. It incorporates an electrostatic lens system, comprising three electrodes, to which static and pulsed voltages are applied to form a conical "liquid tip", employed to dissolve analytes at a surface. A prototype system demonstrates spatial resolution of 0.093 mm(2). Time of contact between the liquid tip and the surface is controlled to standardize extraction. Calibration graphs of different analyte concentrations on a stainless surface have been measured, together with the probe's reproducibility, carryover, and recovery. A leucine enkephalin-coated surface demonstrated good linearity (R(2) = 0.9936), with a recovery of 90% and a limit of detection of 38 fmol per single spot sampled. The probe is compact and can be fitted into automated sample analysis equipment having potential for rapid analysis of surfaces at a good spatial resolution.
Roosta, Mostafa; Ghaedi, Mehrorang; Daneshfar, Ali
2014-10-15
A novel approach, ultrasound-assisted reverse micelles dispersive liquid-liquid microextraction (USA-RM-DLLME) followed by high performance liquid chromatography (HPLC), was developed for selective determination of acetoin in butter. The melted butter sample was diluted and homogenised by n-hexane and Triton X-100, respectively. Subsequently, 400 μL of distilled water was added and the microextraction was accelerated by 4 min of sonication. After 8.5 min of centrifugation, the sedimented phase (surfactant-rich phase) was withdrawn by microsyringe and injected into the HPLC system for analysis. The influence of the effective variables was optimised using a Box-Behnken design (BBD) combined with a desirability function (DF). Under optimised experimental conditions, the calibration graph was linear over the range of 0.6-200 mg L(-1). The detection limit of the method was 0.2 mg L(-1) and the coefficient of determination was 0.9992. The relative standard deviations (RSDs) were less than 5% (n=5) while the recoveries were in the range of 93.9-107.8%. Copyright © 2014. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
López-García, I.; Viñas, P.; Romero-Romero, R.; Hernández-Córdoba, M.
2007-01-01
A procedure for the electrothermal atomic absorption spectrometric determination of phosphorus in honey, milk and infant formulas using slurried samples is described. Suspensions prepared in a medium containing 50% v/v concentrated hydrogen peroxide, 1% v/v concentrated nitric acid, 10% m/v glucose, 5% m/v sucrose and 100 mg l(-1) of potassium were introduced directly into the furnace. For the honey samples, multiple injection of the sample was necessary. The modifier selected was a mixture of 20 μg palladium and 5 μg magnesium nitrate, which was injected after the sample and before proceeding with the drying and calcination steps. Calibration was performed using aqueous standards prepared in the same suspension medium, and the graph was linear between 5 and 80 mg l(-1) of phosphorus. The reliability of the procedure was checked by comparing the results obtained by the newly developed method with those found using a reference spectrophotometric method after a mineralization step, and by analyzing several certified reference materials.
Yanamandra, Ramesh; Vadla, Chandra Sekhar; Puppala, Umamaheshwar; Patro, Balaram; Murthy, Yellajyosula L N; Ramaiah, Parimi Atchuta
2012-01-01
A new rapid, simple, sensitive, selective and accurate reversed-phase stability-indicating Ultra Performance Liquid Chromatography (RP-UPLC) technique was developed for the assay of Tolterodine Tartrate in pharmaceutical dosage form, human plasma and urine samples. The developed UPLC method is superior in technology to conventional HPLC with respect to speed, solvent consumption, resolution and cost of analysis. Chromatographic run time was 6 min in reversed-phase mode and ultraviolet detection was carried out at 220 nm for quantification. Efficient separation was achieved for all the degradants of Tolterodine Tartrate on a BEH C18 sub-2-μm Acquity UPLC column using trifluoroacetic acid and acetonitrile as organic solvent in a linear gradient program. The active pharmaceutical ingredient was extracted from the tablet dosage form using a mixture of acetonitrile and water as diluent. The calibration graphs were linear and the method showed excellent recoveries for bulk and tablet dosage form. The test solution was found to be stable for 40 days when stored in the refrigerator between 2 and 8 °C. The developed UPLC method was validated and meets the requirements delineated by the International Conference on Harmonization (ICH) guidelines with respect to linearity, accuracy, precision, specificity and robustness. The intra-day and inter-day variation was found to be less than 1%. The method was reproducible and selective for the estimation of Tolterodine Tartrate. Because the method could effectively separate the drug from its degradation products, it can be employed as a stability-indicating one.
PMID:22396907
A Brief Historical Introduction to Matrices and Their Applications
ERIC Educational Resources Information Center
Debnath, L.
2014-01-01
This paper deals with the ancient origin of matrices, and the system of linear equations. Included are algebraic properties of matrices, determinants, linear transformations, and Cramer's Rule for solving the system of algebraic equations. Special attention is given to some special matrices, including matrices in graph theory and electrical…
Derive Workshop Matrix Algebra and Linear Algebra.
ERIC Educational Resources Information Center
Townsley Kulich, Lisa; Victor, Barbara
This document presents the course content for a workshop that integrates the use of the computer algebra system Derive with topics in matrix and linear algebra. The first section is a guide to using Derive that provides information on how to write algebraic expressions, make graphs, save files, edit, define functions, differentiate expressions,…
Graphical Description of Johnson-Neyman Outcomes for Linear and Quadratic Regression Surfaces.
ERIC Educational Resources Information Center
Schafer, William D.; Wang, Yuh-Yin
A modification of the usual graphical representation of heterogeneous regressions is described that can aid in interpreting significant regions for linear or quadratic surfaces. The standard Johnson-Neyman graph is a bivariate plot with the criterion variable on the ordinate and the predictor variable on the abscissa. Regression surfaces are drawn…
ERIC Educational Resources Information Center
Yildiz Ulus, Aysegul
2013-01-01
This paper examines experimental and algorithmic contributions of advanced calculators (graphing and computer algebra system, CAS) in teaching the concept of "diagonalization," one of the key topics in Linear Algebra courses taught at the undergraduate level. Specifically, the proposed hypothesis of this study is to assess the effective…
Comparison of kinetic model for biogas production from corn cob
NASA Astrophysics Data System (ADS)
Shitophyta, L. M.; Maryudi
2018-04-01
Energy demand increases every day, while energy sources, especially fossil fuels, are increasingly depleted. One solution to this depletion is to provide renewable energies such as biogas. Biogas can be generated from corn cob and food waste. In this study, biogas production was carried out by solid-state anaerobic digestion. The steps of biogas production were the preparation of feedstock, the solid-state anaerobic digestion, and the measurement of biogas volume. This study was conducted at TS contents of 20%, 22%, and 24%. The aim of this research was to compare kinetic models of biogas production from corn cob with food waste as a co-digestate, using linear, exponential, and first-order kinetic models. The results showed that the exponential equation correlated better than the linear equation on the ascending part of the biogas production graph. On the contrary, the linear equation correlated better than the exponential equation on the descending part. The correlation values for the first-order kinetic model were the smallest compared to the linear and exponential models.
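The linear-versus-exponential comparison described above can be sketched with ordinary least squares; the time points and cumulative volume readings below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical cumulative biogas readings (mL) at sampling times (days).
t = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
v = np.array([30.0, 55.0, 105.0, 190.0, 340.0, 610.0])

def r2(y, yhat):
    """Coefficient of determination on the original scale."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Linear model: V = a*t + b
a, b = np.polyfit(t, v, 1)
r2_lin = r2(v, a * t + b)

# Exponential model: V = c*exp(k*t), fitted as ln V = k*t + ln c
k, lnc = np.polyfit(t, np.log(v), 1)
r2_exp = r2(v, np.exp(lnc + k * t))

print(r2_lin, r2_exp)
```

On growth-phase data like this, the exponential fit yields the higher correlation, matching the abstract's observation for the ascending part of the curve.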
Salem, Saeed; Ozcaglar, Cagri
2014-01-01
Advances in genomic technologies have enabled the accumulation of vast amount of genomic data, including gene expression data for multiple species under various biological and environmental conditions. Integration of these gene expression datasets is a promising strategy to alleviate the challenges of protein functional annotation and biological module discovery based on a single gene expression data, which suffers from spurious coexpression. We propose a joint mining algorithm that constructs a weighted hybrid similarity graph whose nodes are the coexpression links. The weight of an edge between two coexpression links in this hybrid graph is a linear combination of the topological similarities and co-appearance similarities of the corresponding two coexpression links. Clustering the weighted hybrid similarity graph yields recurrent coexpression link clusters (modules). Experimental results on Human gene expression datasets show that the reported modules are functionally homogeneous as evident by their enrichment with biological process GO terms and KEGG pathways.
Graph Structured Program Evolution: Evolution of Loop Structures
NASA Astrophysics Data System (ADS)
Shirakawa, Shinichi; Nagao, Tomoharu
Recently, numerous automatic programming techniques have been developed and applied in various fields. A typical example is genetic programming (GP), and various extensions and representations of GP have been proposed thus far. Complex programs and hand-written programs, however, may contain several loops and handle multiple data types. In this chapter, we propose a new method called Graph Structured Program Evolution (GRAPE). The representation of GRAPE is a graph structure; therefore, it can represent branches and loops using this structure. Each program is constructed as an arbitrary directed graph of nodes and a data set. The GRAPE program handles multiple data types using the data set for each type, and the genotype of GRAPE takes the form of a linear string of integers. We apply GRAPE to three test problems, factorial, exponentiation, and list sorting, and demonstrate that the optimum solution in each problem is obtained by the GRAPE system.
NASA Astrophysics Data System (ADS)
Shakeri, Nadim; Jalili, Saeed; Ahmadi, Vahid; Rasoulzadeh Zali, Aref; Goliaei, Sama
2015-01-01
The problem of finding a Hamiltonian path in a graph, or deciding whether a graph has a Hamiltonian path, is NP-complete; no algorithm is known that solves it in polynomial time and space. In this paper, we propose a two-dimensional (2-D) optical architecture based on opto-electronic devices such as micro-ring resonators, optical circulators and MEMS-based mirrors (MEMS-M) to solve the Hamiltonian path problem for undirected graphs in linear time. It uses a heuristic algorithm and employs n+1 different wavelengths of a light ray to check whether a Hamiltonian path exists on a graph with n vertices. If a Hamiltonian path exists, it reports the path. The device complexity of the proposed architecture is O(n²).
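For contrast with the linear-time optical architecture above, a classical brute-force check of the Hamiltonian-path property takes exponential time; a minimal sketch on a hypothetical 4-vertex undirected graph:

```python
from itertools import permutations

def has_hamiltonian_path(adj):
    """Brute-force search (exponential time): try every vertex
    ordering and return one that walks along edges only, or None."""
    n = len(adj)
    for perm in permutations(range(n)):
        if all(perm[i + 1] in adj[perm[i]] for i in range(n - 1)):
            return list(perm)
    return None

# Path 0-1-2-3 represented as an adjacency-set dictionary.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(has_hamiltonian_path(adj))  # -> [0, 1, 2, 3]
```

Each of the n! orderings is tested in O(n) time, which illustrates why polynomial-time alternatives (optical or otherwise) are of interest.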
NASA Astrophysics Data System (ADS)
Chen, Yuan-Ho
2017-05-01
In this work, we propose a counting-weighted calibration method for a field-programmable gate array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in the FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC, in order to reduce the differential non-linearity (DNL) values based on code density counts. The linearity of the proposed CWD-TDC far exceeds that of a TDC with a traditional tapped delay line (TDL) architecture, without the need for non-linearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved a time resolution of 60 ps with integral non-linearity (INL) and DNL of [-0.54, 0.24] and [-0.66, 0.65] least-significant bits (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
ERIC Educational Resources Information Center
Schultz, James E.; Waters, Michael S.
2000-01-01
Discusses representations in the context of solving a system of linear equations. Views representations (concrete, tables, graphs, algebraic, matrices) from perspectives of understanding, technology, generalization, exact versus approximate solution, and learning style. (KHR)
Dinh, Hieu; Rajasekaran, Sanguthevar
2011-07-15
Exact-match overlap graphs have been broadly used in the context of DNA assembly and the shortest superstring problem, where the number of strings n ranges from thousands to billions. The length ℓ of the strings is from 25 to 1000, depending on the DNA sequencing technology. However, many DNA assemblers using overlap graphs suffer from the need for too much time and space in constructing the graphs. It is nearly impossible for these DNA assemblers to handle the huge amount of data produced by next-generation sequencing technologies, where the number n of strings could be several billions. If the overlap graph is explicitly stored, it would require Ω(n²) memory, which could be prohibitive in practice when n is greater than a hundred million. In this article, we propose a novel data structure using which the overlap graph can be compactly stored. This data structure requires only linear time to construct and linear memory to store. For a given set of input strings (also called reads), we can informally define an exact-match overlap graph as follows. Each read is represented as a node in the graph and there is an edge between two nodes if the corresponding reads overlap sufficiently. A formal description follows. The maximal exact-match overlap of two strings x and y, denoted by ov_max(x, y), is the longest string which is a suffix of x and a prefix of y. The exact-match overlap graph of n given strings of length ℓ is an edge-weighted graph in which each vertex is associated with a string and there is an edge (x, y) of weight ω = ℓ − |ov_max(x, y)| if and only if ω ≤ λ, where |ov_max(x, y)| is the length of ov_max(x, y) and λ is a given threshold. In this article, we show that the exact-match overlap graphs can be represented by a compact data structure that can be stored using at most (2λ−1)(2⌈log n⌉ + ⌈log λ⌉)n bits, with a guarantee that the basic operation of accessing an edge takes O(log λ) time.
We also propose two algorithms for constructing the data structure for the exact-match overlap graph. The first algorithm runs in O(λℓn log n) worst-case time and requires O(λ) extra memory. The second one runs in O(λℓn) time and requires O(n) extra memory. Our experimental results on a huge amount of simulated data from sequence assembly show that the data structure can be constructed efficiently in time and memory. Our DNA sequence assembler that incorporates the data structure is freely available on the web at http://www.engr.uconn.edu/~htd06001/assembler/leap.zip
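The overlap graph defined above can be built naively, without the paper's compact data structure, as a quadratic-time illustration; the reads and threshold λ below are toy values:

```python
def ov_max(x, y):
    """Longest string that is both a suffix of x and a prefix of y."""
    for k in range(min(len(x), len(y)), 0, -1):
        if x[-k:] == y[:k]:
            return x[-k:]
    return ""

def overlap_graph(reads, lam):
    """Naive construction of the edge-weighted exact-match overlap
    graph: directed edge (i, j) with weight w = l - |ov_max| iff w <= lam."""
    l = len(reads[0])
    edges = {}
    for i, x in enumerate(reads):
        for j, y in enumerate(reads):
            if i != j:
                w = l - len(ov_max(x, y))
                if w <= lam:
                    edges[(i, j)] = w
    return edges

reads = ["ACGT", "CGTA", "GTAC"]
print(overlap_graph(reads, lam=2))
# -> {(0, 1): 1, (0, 2): 2, (1, 2): 1, (2, 0): 2}
```

This explicit dictionary is exactly the Ω(n²)-memory representation the paper's compact structure is designed to avoid.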
Absolute charge calibration of scintillating screens for relativistic electron detection
NASA Astrophysics Data System (ADS)
Buck, A.; Zeil, K.; Popp, A.; Schmid, K.; Jochmann, A.; Kraft, S. D.; Hidding, B.; Kudyakov, T.; Sears, C. M. S.; Veisz, L.; Karsch, S.; Pawelke, J.; Sauerbrey, R.; Cowan, T.; Krausz, F.; Schramm, U.
2010-03-01
We report on new charge calibrations and linearity tests with high dynamic range for eight different scintillating screens typically used for the detection of relativistic electrons from laser-plasma based acceleration schemes. The absolute charge calibration was done with picosecond electron bunches at the ELBE linear accelerator in Dresden. The lower detection limit in our setup for the most sensitive scintillating screen (KODAK Biomax MS) was 10 fC/mm². The screens showed a linear photon-to-charge dependency over several orders of magnitude. An onset of saturation effects starting around 10-100 pC/mm² was found for some of the screens. Additionally, a constant light source was employed as a luminosity reference to simplify the transfer of a one-time absolute calibration to different experimental setups.
Calibration of z-axis linearity for arbitrary optical topography measuring instruments
NASA Astrophysics Data System (ADS)
Eifler, Matthias; Seewig, Jörg; Hering, Julian; von Freymann, Georg
2015-05-01
The calibration of the height axis of optical topography measurement instruments is essential for reliable topography measurements. A state-of-the-art technique for calibrating the linearity and amplification of the z-axis is the use of step-height artefacts. However, a proper calibration requires numerous step heights at different positions within the measurement range; the procedure is laborious and uses artificial surface structures that are not related to real measurement tasks. Given these limitations, approaches should be developed that work for arbitrary topography measurement devices and require little effort. Hence, we propose calibration artefacts that are based on the 3D Abbott curve and reproduce desired surface characteristics. Further, real geometric structures are used as the starting point of the calibration artefact. Based on these considerations, an algorithm is introduced which transforms an arbitrary measured surface into a measurement artefact for z-axis linearity. The method works both for profiles and topographies. To account for the effects of manufacturing, measurement, and evaluation, an iterative approach is chosen; the mathematical impact of these processes can be calculated with morphological signal processing. The artefact is manufactured with 3D laser lithography and characterized with different optical measurement devices. The introduced calibration routine can calibrate the entire z-axis range within one measurement and minimizes the required effort. With the results it is possible to locate potential linearity deviations and to adjust the z-axis. Results from different optical measurement principles are compared in order to evaluate the capabilities of the new artefact.
On the theory of thermometric titration.
Piloyan, G O; Dolinina, Y V
1974-09-01
The general equation defining the change in solution temperature ΔT during a thermometric titration is ΔT = T − T₀ = −AV/(1 + BV), where A and B are constants, V is the volume of titrant used to produce temperature T, and T₀ is the initial temperature. There is a linear relation between the inverse values of ΔT and V: 1/ΔT = −a/V − b, where a = 1/A and b = B/A, both a and b being constants. A linear relation between ΔT and V is usually a special case of this general relation, and is valid only over a narrow range of V. Graphs of 1/ΔT vs. 1/V are more suitable for practical calculations than the usual graphs of ΔT vs. V.
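The linearization can be checked numerically: generating ΔT from the general equation with hypothetical constants A and B (chosen only for illustration) and fitting 1/ΔT against 1/V recovers slope −a = −1/A and intercept −b = −B/A:

```python
import numpy as np

A, B = 0.8, 0.05                       # hypothetical titration constants
V = np.linspace(0.5, 5.0, 10)          # titrant volumes
dT = -A * V / (1.0 + B * V)            # Delta T from the general equation

# 1/dT should be linear in 1/V with slope -1/A and intercept -B/A.
slope, intercept = np.polyfit(1.0 / V, 1.0 / dT, 1)
print(slope, intercept)                # -> -1.25  -0.0625
```

Because the model data are exact, the fit reproduces the constants to machine precision, confirming that the 1/ΔT vs. 1/V plot is the natural straight-line form of the equation.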
Linear and nonlinear trending and prediction for AVHRR time series data
NASA Technical Reports Server (NTRS)
Smid, J.; Volf, P.; Slama, M.; Palus, M.
1995-01-01
The variability of AVHRR calibration coefficient in time was analyzed using algorithms of linear and non-linear time series analysis. Specifically we have used the spline trend modeling, autoregressive process analysis, incremental neural network learning algorithm and redundancy functional testing. The analysis performed on available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both calibration coefficients and the temperature time series can be modeled, in the first approximation, as autonomous dynamical systems, (4) the high frequency residuals of the analyzed data sets can be best modeled as an autoregressive process of the 10th degree. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). The system identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for the future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. Those algorithms can be particularly useful when calibration data are incomplete or sparse.
Balss, K M; Llanos, G; Papandreou, G; Maryanoff, C A
2008-04-01
Raman spectroscopy was used to differentiate each component found in the CYPHER Sirolimus-eluting Coronary Stent. The unique spectral features identified for each component were then used to develop three separate calibration curves to describe the solid phase distribution found on drug-polymer coated stents. The calibration curves were obtained by analyzing confocal Raman spectral depth profiles from a set of 16 unique formulations of drug-polymer coatings sprayed onto stents and planar substrates. The sirolimus model was linear from 0 to 100 wt % of drug. The individual polymer calibration curves for poly(ethylene-co-vinyl acetate) [PEVA] and poly(n-butyl methacrylate) [PBMA] were also linear from 0 to 100 wt %. The calibration curves were tested on three independent drug-polymer coated stents. The sirolimus calibration predicted the drug content within 1 wt % of the laboratory assay value. The polymer calibrations predicted the content within 7 wt % of the formulation solution content. Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra from five formulations confirmed a linear response to changes in sirolimus and polymer content. Copyright 2007 Wiley Periodicals, Inc.
Vavilin, Vasily A; Rytov, Sergey V; Shim, Natalia; Vogt, Carsten
2016-06-01
The non-linear dynamics of stable carbon and hydrogen isotope signatures during methane oxidation by the methanotrophic bacteria Methylosinus sporium strain 5 (NCIMB 11126) and Methylocaldum gracile strain 14 L (NCIMB 11912) under copper-rich (8.9 µM Cu²⁺), copper-limited (0.3 µM Cu²⁺) or copper-regular (1.1 µM Cu²⁺) conditions has been described mathematically. The model was calibrated with experimental data on methane quantities and carbon and hydrogen isotope signatures of methane measured previously in laboratory microcosms reported by Feisthauer et al. [1]. M. gracile initially oxidizes methane by a particulate methane monooxygenase and assimilates formaldehyde via the ribulose monophosphate pathway, whereas M. sporium expresses a soluble methane monooxygenase under copper-limited conditions and uses the serine pathway for carbon assimilation. The model shows that dominant carbon and hydrogen isotope fractionation occurs during methane solubilization. An increase of biomass due to growth of methanotrophs causes an increase of particulate or soluble monooxygenase which, in turn, decreases the soluble methane concentration, intensifying methane solubilization. The specific maximum rate of methane oxidation υm was found to be 4.0 and 1.3 mM mM⁻¹ h⁻¹ for M. sporium under copper-rich and copper-limited conditions, respectively, and 0.5 mM mM⁻¹ h⁻¹ for M. gracile. The model shows that methane oxidation cannot be described by traditional first-order kinetics. The kinetic isotope fractionation ceases when methane concentrations decrease close to the threshold value. The applicability of the non-linear model was confirmed by the dynamics of the carbon isotope signature of carbon dioxide, which was first depleted and later enriched in ¹³C. In contrast to the common Rayleigh linear graph, the dynamic curves allow the identification of inappropriate isotope data due to inaccurate substrate concentration analyses.
The non-linear model adequately described the experimental data presented in the two-dimensional plot of hydrogen versus carbon stable isotope signatures.
Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor
NASA Astrophysics Data System (ADS)
Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.
2018-01-01
Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in the area of instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest. They are essentially contactless, non-electrical and have a closed optical channel not subject to contamination. The main problem of this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic nature of the magnetic field intensity variation induced by moving the magnetic source mounted on the controlled object relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: (1) definition of the calibration function; (2) measurement and linearization of the positional response curve, including its temperature stabilization. The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for calibrating the temperature dependence, by using points that deviate randomly from a uniformly spaced grid. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller storage capacity required for the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
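The calibration-function lookup described above can be sketched as piecewise-linear interpolation over a monotone table of reference points; all values below are hypothetical:

```python
import numpy as np

# Hypothetical calibration table: raw sensor readings (a.u.) recorded
# at known displacements (mm); the true response is hyperbola-like.
disp_ref = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
raw_ref  = np.array([1000.0, 520.0, 350.0, 210.0, 115.0])

def linearize(raw):
    """Map a raw reading back to displacement by piecewise-linear
    interpolation; np.interp needs increasing x, so the (monotone
    decreasing) table is reversed."""
    return float(np.interp(raw, raw_ref[::-1], disp_ref[::-1]))

print(linearize(350.0))  # -> 2.0 (a calibration point maps back exactly)
```

Readings between table entries are interpolated, so a modest number of calibration points covers the whole measurement range, which is the storage saving the abstract refers to.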
Inference of Spatio-Temporal Functions Over Graphs via Multikernel Kriged Kalman Filtering
NASA Astrophysics Data System (ADS)
Ioannidis, Vassilis N.; Romero, Daniel; Giannakis, Georgios B.
2018-06-01
Inference of space-time varying signals on graphs emerges naturally in a plethora of network science related applications. A frequently encountered challenge pertains to reconstructing such dynamic processes, given their values over a subset of vertices and time instants. The present paper develops a graph-aware kernel-based kriged Kalman filter that accounts for the spatio-temporal variations, and offers efficient online reconstruction, even for dynamically evolving network topologies. The kernel-based learning framework bypasses the need for statistical information by capitalizing on the smoothness that graph signals exhibit with respect to the underlying graph. To address the challenge of selecting the appropriate kernel, the proposed filter is combined with a multi-kernel selection module. Such a data-driven method selects a kernel attuned to the signal dynamics on-the-fly within the linear span of a pre-selected dictionary. The novel multi-kernel learning algorithm exploits the eigenstructure of Laplacian kernel matrices to reduce computational complexity. Numerical tests with synthetic and real data demonstrate the superior reconstruction performance of the novel approach relative to state-of-the-art alternatives.
A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.
Hu, Shoubo; Chen, Zhitang; Chan, Laiwan
2018-05-01
Although nonstationary data are more common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model of variables of which observations correspond to the kernel embeddings of the cause-and-effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear nongaussian acyclic model. We show that by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to justify the efficacy of ENCI over major existing methods.
Identifiability Results for Several Classes of Linear Compartment Models.
Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa
2015-08-01
Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
Weighted graph based ordering techniques for preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Clift, Simon S.; Tang, Wei-Pai
1994-01-01
We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of RCM is shown to generally improve the quality of incomplete factorization preconditioners.
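For orientation, a minimal pure-Python sketch of the reverse Cuthill-McKee (RCM) idea the abstract varies: breadth-first search from a minimum-degree vertex, visiting neighbors in order of increasing degree, then reversing the order. This is an illustration on a small "arrow" sparsity pattern, not the authors' heuristic:

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph given as
    an adjacency-set dictionary {vertex: set_of_neighbors}."""
    order, seen = [], set()
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in seen:
            continue
        seen.add(start)
        q = deque([start])
        while q:
            v = q.popleft()
            order.append(v)
            for w in sorted(adj[v], key=lambda u: len(adj[u])):
                if w not in seen:
                    seen.add(w)
                    q.append(w)
    return order[::-1]

def bandwidth(adj, order):
    """Matrix bandwidth induced by a vertex ordering."""
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[v] - pos[w]) for v in adj for w in adj[v])

# Arrow-shaped pattern: vertex 0 coupled to every other vertex.
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {0, 3}}
print(bandwidth(adj, list(range(5))), bandwidth(adj, rcm_order(adj)))  # -> 4 2
```

Reducing bandwidth clusters nonzeros near the diagonal, which is why such orderings tend to improve the quality of incomplete factorizations.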
Linear positioning laser calibration setup of CNC machine tools
NASA Astrophysics Data System (ADS)
Sui, Xiulin; Yang, Congjing
2002-10-01
The linear positioning laser calibration setup for CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and machine tool geometry can be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. The first step is to find the stroke limits of the axis; the laser head is then brought into correct alignment. The second is to move the machine axis to the other extreme, where the laser head is aligned using rotation and elevation adjustments. Finally, the machine is moved to the start position and the final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis; these factors determine the time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density, taking into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal centers and vertical machining centers.
NASA Astrophysics Data System (ADS)
Kang, Pilsang; Koo, Changhoi; Roh, Hokyu
2017-11-01
Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
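The distinction between the classical and inverse regression approaches mentioned above can be illustrated on synthetic calibration standards; the true line, noise level, and observed response below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standards: known concentrations x, noisy instrument responses y
# generated from a hypothetical true line y = 2 + 3x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.05, x.size)

# Classical calibration: fit y = a + b*x on the standards, then invert.
b, a = np.polyfit(x, y, 1)
def classical(y0):
    return (y0 - a) / b

# Inverse regression: regress x directly on y.
d, c = np.polyfit(y, x, 1)
def inverse(y0):
    return c + d * y0

y0 = 11.0   # new observed response; the true x here is (11 - 2)/3 = 3.0
print(classical(y0), inverse(y0))
```

With low noise the two estimators nearly coincide; their statistical properties diverge as noise grows, which is the gap the proposed "reversed inverse regression" aims to close.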
ERIC Educational Resources Information Center
Novak, Melissa A.
2017-01-01
The purpose of this qualitative practitioner research study was to describe middle school algebra students' experiences of learning linear functions through kinesthetic movement. Participants were comprised of 8th grade algebra students. Practitioner research was used because I wanted to improve my teaching so students will have more success in…
Complete Tri-Axis Magnetometer Calibration with a Gyro Auxiliary
Yang, Deng; You, Zheng; Li, Bin; Duan, Wenrui; Yuan, Binwen
2017-01-01
Magnetometers combined with inertial sensors are widely used for orientation estimation, and calibrations are necessary to achieve high accuracy. This paper presents a complete tri-axis magnetometer calibration algorithm with a gyro auxiliary. The magnetic distortions and sensor errors, including the misalignment error between the magnetometer and assembled platform, are compensated after calibration. With the gyro auxiliary, the magnetometer linear interpolation outputs are calculated, and the error parameters are evaluated under linear operations of magnetometer interpolation outputs. The simulation and experiment are performed to illustrate the efficiency of the algorithm. After calibration, the heading errors calculated by magnetometers are reduced to 0.5° (1σ). This calibration algorithm can also be applied to tri-axis accelerometers whose error model is similar to tri-axis magnetometers. PMID:28587115
Course transformation: Content, structure and effectiveness analysis
NASA Astrophysics Data System (ADS)
DuHadway, Linda P.
The organization of learning materials is often limited by the systems available for delivery of such material. Currently, the learning management system (LMS) is widely used to distribute course materials. These systems deliver the material in a text-based, linear way. As online education continues to expand and educators seek to increase their effectiveness by adding more effective active learning strategies, these delivery methods become a limitation. This work demonstrates the possibility of presenting course materials in a graphical way that expresses important relations and provides support for manipulating the order of those materials. The ENABLE system gathers data from an existing course, uses text analysis techniques, graph theory, graph transformation, and a user interface to create and present graphical course maps. These course maps are able to express information not currently available in the LMS. Student agents have been developed to traverse these course maps to identify the variety of possible paths through the material. The temporal relations imposed by the current course delivery methods have been replaced by prerequisite relations that express ordering that provides educational value. Reducing the connections to these more meaningful relations allows more possibilities for change. Technical methods are used to explore and calibrate linear and nonlinear models of learning. These methods are used to track mastery of learning material and identify relative difficulty values. Several probability models are developed and used to demonstrate that data from existing, temporally based courses can be used to make predictions about student success in courses using the same material but organized without the temporal limitations. 
Combined, these demonstrate the possibility of tools and techniques that can support the implementation of a graphical course map that allows varied paths and provides an enriched, more informative interface between the educator, the student, and the learning material. This fundamental change in how course materials are presented and interfaced with has the potential to make educational opportunities available to a broader spectrum of people with diverse abilities and circumstances. The graphical course map can be pivotal in attaining this transition.
Graph-theoretic approach to quantum correlations.
Cabello, Adán; Severini, Simone; Winter, Andreas
2014-01-31
Correlations in Bell and noncontextuality inequalities can be expressed as a positive linear combination of probabilities of events. Exclusive events can be represented as adjacent vertices of a graph, so correlations can be associated with a subgraph. We show that the maximum value of the correlations for classical, quantum, and more general theories is the independence number, the Lovász number, and the fractional packing number of this subgraph, respectively. We also show that, for any graph, there is always a correlation experiment such that the set of quantum probabilities is exactly the Grötschel-Lovász-Schrijver theta body. This identifies these combinatorial notions as fundamental physical objects and provides a method for singling out experiments with quantum correlations on demand.
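As an illustration of the classical bound named above, a brute-force computation of the independence number for a small graph is straightforward. The sketch below (not code from the paper) uses the 5-cycle, the standard example where the three bounds differ: independence number 2, Lovász number √5 ≈ 2.24, fractional packing number 5/2.

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force the independence number of a small graph on
    vertices 0..n-1: the largest vertex set with no edge inside it."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if not any(frozenset(p) in edge_set
                       for p in combinations(subset, 2)):
                return size
    return 0

# 5-cycle (pentagon): classical bound 2, Lovász number sqrt(5),
# fractional packing number 5/2 -- all three notions separate here.
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(independence_number(5, c5_edges))  # 2
```

The exhaustive search is exponential in the number of vertices, so this is only practical for the small exclusivity graphs that arise in concrete Bell or noncontextuality scenarios.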
Figure-ground segmentation based on class-independent shape priors
NASA Astrophysics Data System (ADS)
Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu
2018-01-01
We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.
Synthesis Polarimetry Calibration
NASA Astrophysics Data System (ADS)
Moellenbrock, George
2017-10-01
Synthesis instrumental polarization calibration fundamentals for both linear (ALMA) and circular (EVLA) feed bases are reviewed, with special attention to the calibration heuristics supported in CASA. Practical problems affecting modern instruments are also discussed.
System and method for calibrating a rotary absolute position sensor
NASA Technical Reports Server (NTRS)
Davis, Donald R. (Inventor); Permenter, Frank Noble (Inventor); Radford, Nicolaus A (Inventor)
2012-01-01
A system includes a rotary device, a rotary absolute position (RAP) sensor generating encoded pairs of voltage signals describing positional data of the rotary device, a host machine, and an algorithm. The algorithm calculates calibration parameters usable to determine an absolute position of the rotary device using the encoded pairs, and is adapted for linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters. A method of calibrating the RAP sensor includes measuring the rotary position as encoded pairs of voltage signals, linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters, and calculating an absolute position of the rotary device using the calibration parameters. The calibration parameters include a positive definite matrix (A) and a center point (q) of the ellipse. The voltage signals may include an encoded sine and cosine of a rotary angle of the rotary device.
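The core idea of the patent, fitting the ellipse traced by the encoded voltage pairs and linearly mapping it to a circle before taking the angle, can be sketched numerically. The distortion matrix, offset, and sample count below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Synthetic RAP sensor: encoded sine/cosine distorted by gain,
# cross-coupling, and offset (illustrative values only).
theta = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
M = np.array([[1.2, 0.1], [0.0, 0.8]])   # gain / cross-coupling
q0 = np.array([0.5, -0.3])               # offset = ellipse center q
v = (M @ np.vstack([np.cos(theta), np.sin(theta)])).T + q0

# Fit the conic x^T A x + b^T x = 1 by linear least squares.
x, y = v[:, 0], v[:, 1]
D = np.column_stack([x * x, x * y, y * y, x, y])
p, *_ = np.linalg.lstsq(D, np.ones(len(v)), rcond=None)
A = np.array([[p[0], p[1] / 2], [p[1] / 2, p[2]]])
b = np.array([p[3], p[4]])

# Center q from the conic gradient, then rescale A so that
# (v - q)^T An (v - q) = 1 holds on the ellipse.
q = -0.5 * np.linalg.solve(A, b)
An = A / (1.0 + q @ A @ q)

# Linear map (Cholesky whitening) sends the ellipse to the unit
# circle; the rotary angle is then just atan2 of the mapped pair.
L = np.linalg.cholesky(An)
w = (v - q) @ L                  # rows now lie on the unit circle
angles = np.arctan2(w[:, 1], w[:, 0])
print(np.max(np.abs(np.linalg.norm(w, axis=1) - 1.0)))  # ~0
```

Note that whitening recovers the angle only up to a fixed rotation or reflection; in practice that constant is absorbed into a one-time zero-position calibration.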
Saterbak, Ann; Moturu, Anoosha; Volz, Tracy
2018-03-01
Rice University's bioengineering department incorporates written, oral, and visual communication instruction into its undergraduate curriculum to aid student learning and to prepare students to communicate their knowledge and discoveries precisely and persuasively. In a tissue culture lab course, we used a self- and peer-review tool called Calibrated Peer Review™ (CPR) to diagnose student learning gaps in visual communication skills on a poster assignment. We then designed an active learning intervention that required students to practice the visual communication skills that needed improvement and used CPR to measure the changes. After the intervention, we observed that students performed significantly better in their ability to develop high quality graphs and tables that represent experimental data. Based on these outcomes, we conclude that guided task practice, collaborative learning, and calibrated peer review can be used to improve engineering students' visual communication skills.
Li, Zhengqiang; Li, Kaitao; Li, Li; Xu, Hua; Xie, Yisong; Ma, Yan; Li, Donghui; Goloub, Philippe; Yuan, Yinlin; Zheng, Xiaobing
2018-02-10
Polarization observation of sky radiation is a frontier approach to improving the remote sensing of atmospheric components, e.g., aerosol and clouds. The polarization calibration of the ground-based Sun-sky radiometer is the basis for obtaining accurate degree of linear polarization (DOLP) measurements. In this paper, a DOLP calibration method based on a laboratory polarized light source (POLBOX) is introduced in detail. Combined with the CE318-DP Sun-sky polarized radiometer, a calibration scheme for DOLP measurement is established for the spectral range of 440-1640 nm. Based on the calibration results of the Sun-sky radiometer observation network, the polarization calibration coefficient and the DOLP calibration residual are analyzed statistically. The results show that the DOLP residual of the calibration scheme is about 0.0012, from which it can be estimated that the final DOLP calibration accuracy of this method is about 0.005. Finally, the accuracy of the calibration results is verified against expectations by comparing the simulated DOLP with vector radiative transfer calculations.
All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi
2016-01-30
This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs using 118 slices per sensor in each FPGA to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of -20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation, while being fully synthesizable for future Very Large Scale Integration (VLSI) systems.
All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi
2016-01-01
This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs using 118 slices per sensor in each FPGA to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of −20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation, while being fully synthesizable for future Very Large Scale Integration (VLSI) systems. PMID:26840316
Increasing the sensitivity of the Jaffe reaction for creatinine
NASA Technical Reports Server (NTRS)
Tom, H. Y.
1973-01-01
Study of analytical procedure has revealed that linearity of creatinine calibration curve can be extended by using 0.03 molar picric acid solution made up in 70 percent ethanol instead of water. Three to five times more creatinine concentration can be encompassed within linear portion of calibration curve.
Measuring the hierarchy of feedforward networks
NASA Astrophysics Data System (ADS)
Corominas-Murtra, Bernat; Rodríguez-Caso, Carlos; Goñi, Joaquín; Solé, Ricard
2011-03-01
In this paper we explore the concept of hierarchy as a quantifiable descriptor of ordered structures, departing from the definition of three conditions to be satisfied by a hierarchical structure: order, predictability, and pyramidal structure. According to these principles, we define a hierarchical index using concepts from graph and information theory. This estimator makes it possible to quantify the hierarchical character of any system that can be abstracted as a feedforward causal graph, i.e., a directed acyclic graph defined on a single connected structure. Our hierarchical index balances the predictability and pyramidal conditions through the definition of two entropies: one for the onward flow and the other for the backward reversion. We show how this index identifies hierarchical, antihierarchical, and nonhierarchical structures. Our formalism reveals that, starting from the defined conditions for a hierarchical structure, feedforward trees and inverted tree graphs emerge as the only causal structures of maximally hierarchical and antihierarchical systems, respectively. Conversely, null values of the hierarchical index are attributed to a number of different network configurations: from linear chains, due to their lack of pyramidal structure, to fully connected feedforward graphs, where the diversity of onward pathways is canceled by the uncertainty (lack of predictability) when going backward. Some illustrative examples are provided to distinguish among these three types of hierarchical causal graphs.
Comparison of university students' understanding of graphs in different contexts
NASA Astrophysics Data System (ADS)
Planinic, Maja; Ivanjek, Lana; Susac, Ana; Milin-Sipus, Zeljka
2013-12-01
This study investigates university students’ understanding of graphs in three different domains: mathematics, physics (kinematics), and contexts other than physics. Eight sets of parallel mathematics, physics, and other context questions about graphs were developed. A test consisting of these eight sets of questions (24 questions in all) was administered to 385 first year students at University of Zagreb who were either prospective physics or mathematics teachers or prospective physicists or mathematicians. Rasch analysis of data was conducted and linear measures for item difficulties were obtained. Average difficulties of items in three domains (mathematics, physics, and other contexts) and over two concepts (graph slope, area under the graph) were computed and compared. Analysis suggests that the variation of average difficulty among the three domains is much smaller for the concept of graph slope than for the concept of area under the graph. Most of the slope items are very close in difficulty, suggesting that students who have developed sufficient understanding of graph slope in mathematics are generally able to transfer it almost equally successfully to other contexts. A large difference was found between the difficulty of the concept of area under the graph in physics and other contexts on one side and mathematics on the other side. Comparison of average difficulty of the three domains suggests that mathematics without context is the easiest domain for students. Adding either physics or other context to mathematical items generally seems to increase item difficulty. No significant difference was found between the average item difficulty in physics and contexts other than physics, suggesting that physics (kinematics) remains a difficult context for most students despite the received instruction on kinematics in high school.
Retina verification system based on biometric graph matching.
Lajevardi, Seyed Mehdi; Arakala, Arathi; Davis, Stephen A; Horadam, Kathy J
2013-09-01
This paper presents an automatic retina verification framework based on the biometric graph matching (BGM) algorithm. The retinal vasculature is extracted using a family of matched filters in the frequency domain and morphological operators. Then, retinal templates are defined as formal spatial graphs derived from the retinal vasculature. The BGM algorithm, a noisy graph matching algorithm robust to translation, non-linear distortion, and small rotations, is used to compare retinal templates. The BGM algorithm uses graph topology to define three distance measures between a pair of graphs, two of which are new. A support vector machine (SVM) classifier is used to distinguish between genuine and imposter comparisons. Using single as well as multiple graph measures, the classifier achieves complete separation on a training set of images from the VARIA database (60% of the data), equaling the state-of-the-art for retina verification. Because the available data set is small, kernel density estimation (KDE) of the genuine and imposter score distributions of the training set is used to measure performance of the BGM algorithm. In the one-dimensional case, the KDE model is validated with the testing set. An EER of 0 on testing shows that the KDE model is a good fit for the empirical distribution. For the multiple graph measures, a novel combination of the SVM boundary and the KDE model is used to obtain a fair comparison with the KDE model for the single measure. A clear benefit in using multiple graph measures over a single measure to distinguish genuine and imposter comparisons is demonstrated by a drop in theoretical error of between 60% and more than two orders of magnitude.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Feipeng; Shi Hongjian; Bai Pengxiang
In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of an object surface is described in detail. By formula derivation and experiment, a linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method with linear least-squares fitting, which is very simple in principle and calibration. Experiments are implemented to validate the availability and reliability of the calibration method.
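The reported linear relationship means the calibration reduces to fitting a straight line, coefficient versus y coordinate, by least squares. A minimal sketch follows; the coefficient values and pixel rows are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical calibration data: out-of-plane coefficient k measured
# at several y coordinates (values are illustrative only).
y = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # pixel rows
k = 0.052 * y + 3.10                           # noise-free line

# Least-squares fit k ≈ a*y + b.
Ahat = np.column_stack([y, np.ones_like(y)])
(a, b), *_ = np.linalg.lstsq(Ahat, k, rcond=None)
print(a, b)  # ≈0.052, ≈3.10

# The height map would then use the row-dependent coefficient a*y + b
# instead of a single constant, correcting the unequal-height setup.
```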
Enhancing SAMOS Data Access in DOMS via a Neo4j Property Graph Database.
NASA Astrophysics Data System (ADS)
Stallard, A. P.; Smith, S. R.; Elya, J. L.
2016-12-01
The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. The Distributed Oceanographic Match-Up Service (DOMS) under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The service currently uses Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates creation and maintenance of indexed data, it is limited in the sense that its schema is fixed. The property graph model escapes this limitation by creating relationships between data objects. The authors will present the development of the SAMOS Neo4j property graph database including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS will also be described. Currently, Neo4j contains spatial and temporal records from SAMOS which are modeled into a time tree and r-tree using Graph Aware and Spatial plugin tools for Neo4j. These extensions provide callable Java procedures within CYPHER (Neo4j's query language) that generate in-graph structures. Once generated, these structures can be queried using procedures from these libraries, or directly via CYPHER statements. Neo4j excels at performing relationship and path-based queries, which challenge relational-SQL databases because they require memory intensive joins due to the limitation of their design. Consider a user who wants to find records over several years, but only for specific months. 
If a traditional database only stores timestamps, this type of query would be complex and likely prohibitively slow. Using the time tree model, one can specify a path from the root to the data which restricts resolutions to certain timeframes (e.g., months). This query can be executed without joins, unions, or other compute-intensive operations, putting Neo4j at a computational advantage over the SQL database alternative.
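The advantage of the time-tree layout can be illustrated without Neo4j: index records along year → month branches and walk only the branches of interest, rather than scanning every raw timestamp. The sketch below is a pure-Python analogy of that structure (the record ids and dates are made up), not CYPHER or actual DOMS code.

```python
from collections import defaultdict
from datetime import datetime

# Toy "time tree": records indexed year -> month -> list of record ids.
tree = defaultdict(lambda: defaultdict(list))
records = [
    ("r1", datetime(2014, 7, 3)), ("r2", datetime(2014, 12, 9)),
    ("r3", datetime(2015, 7, 21)), ("r4", datetime(2015, 8, 2)),
    ("r5", datetime(2016, 7, 11)),
]
for rid, ts in records:
    tree[ts.year][ts.month].append(rid)

def find_by_month(tree, months):
    """Walk year -> month branches directly; no scan of raw timestamps."""
    return [rid for year in sorted(tree)
            for m in months if m in tree[year]
            for rid in tree[year][m]]

# All July records across every year, touching only the July branches.
print(find_by_month(tree, {7}))  # ['r1', 'r3', 'r5']
```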
Small diameter symmetric networks from linear groups
NASA Technical Reports Server (NTRS)
Campbell, Lowell; Carlsson, Gunnar E.; Dinneen, Michael J.; Faber, Vance; Fellows, Michael R.; Langston, Michael A.; Moore, James W.; Multihaupt, Andrew P.; Sexton, Harlan B.
1992-01-01
In this note is reported a collection of constructions of symmetric networks that provide the largest known values for the number of nodes that can be placed in a network of a given degree and diameter. Some of the constructions are in the range of current potential engineering significance. The constructions are Cayley graphs of linear groups obtained by experimental computation.
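The construction can be demonstrated end-to-end for a tiny linear group: take generator matrices over a finite field, generate the group by multiplication, and measure the diameter by breadth-first search from the identity. The sketch below uses GL(2, F2) (order 6) with two involution generators of my own choosing; it is illustrative of the degree/diameter computation, not one of the record-holding constructions from the note.

```python
from collections import deque

def matmul2(a, b, p=2):
    """Multiply 2x2 matrices (stored as flat 4-tuples) over F_p."""
    return (
        (a[0]*b[0] + a[1]*b[2]) % p, (a[0]*b[1] + a[1]*b[3]) % p,
        (a[2]*b[0] + a[3]*b[2]) % p, (a[2]*b[1] + a[3]*b[3]) % p,
    )

# Two involutions generating GL(2, F2); their product has order 3,
# so they generate the whole group (isomorphic to S3).
gens = [(0, 1, 1, 0), (1, 1, 0, 1)]
identity = (1, 0, 0, 1)

# BFS from the identity: since the Cayley graph is vertex-transitive,
# the eccentricity of the identity equals the graph diameter.
dist = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for s in gens:
        h = matmul2(g, s)
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

print(len(dist), max(dist.values()))  # 6 3  (degree-2 graph, a hexagon)
```

The same loop scales to the larger linear groups of the note by swapping in bigger matrices, fields, and generator sets; the search for good degree/diameter values is then an experiment over generator choices.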
Spline smoothing of histograms by linear programming
NASA Technical Reports Server (NTRS)
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean space approximations to the graph of the histogram using central B-splines as basis elements are obtained by linear programming. The approximating function has area one and is nonnegative.
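The scheme can be sketched as a small linear program: spline coefficients are the variables, nonnegativity and unit area are constraints, and the deviation from the histogram is minimized. The sketch below uses degree-1 (hat) B-splines and an L∞ objective with `scipy.optimize.linprog`; the report's exact basis and objective may differ, so treat this as an assumption-laden analogy.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
sample = rng.normal(0.5, 0.15, 400).clip(0, 1)   # sample of size n
hist, edges = np.histogram(sample, bins=16, range=(0, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Degree-1 B-spline (hat) basis on m uniform nodes over [0, 1].
m = 9
nodes = np.linspace(0, 1, m)
d = nodes[1] - nodes[0]
B = np.clip(1 - np.abs(centers[:, None] - nodes[None, :]) / d, 0, 1)
w = np.full(m, d); w[[0, -1]] = d / 2        # integral of each hat

# LP: minimize max deviation t s.t. |B c - hist| <= t, c >= 0, area = 1.
nb = len(hist)
c_obj = np.r_[np.zeros(m), 1.0]              # variables: c_0..c_{m-1}, t
A_ub = np.block([[B, -np.ones((nb, 1))], [-B, -np.ones((nb, 1))]])
b_ub = np.r_[hist, -hist]
A_eq = np.r_[w, 0.0][None, :]                # unit-area constraint
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(0, None)])
coef = res.x[:m]
print(res.success, w @ coef)   # True, area ≈ 1; coefficients nonnegative
```

Because the basis functions are nonnegative and the coefficients are constrained nonnegative, the fitted density is automatically nonnegative, exactly the two properties the abstract requires of the approximating function.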
Structured sparse linear graph embedding.
Wang, Haixian
2012-03-01
Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the projection bases learning into a regression-type optimization problem, and then the structured sparsity regularization is applied to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structure information of the data. The SSLGE technique provides a unified framework for discovering structured sparse subspace. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.
Multiple degree of freedom object recognition using optical relational graph decision nets
NASA Technical Reports Server (NTRS)
Casasent, David P.; Lee, Andrew J.
1988-01-01
Multiple-degree-of-freedom object recognition concerns objects with no stable rest position with all scale, rotation, and aspect distortions possible. It is assumed that the objects are in a fairly benign background, so that feature extractors are usable. In-plane distortion invariance is provided by use of a polar-log coordinate transform feature space, and out-of-plane distortion invariance is provided by linear discriminant function design. Relational graph decision nets are considered for multiple-degree-of-freedom pattern recognition. The design of Fisher (1936) linear discriminant functions and synthetic discriminant functions for use at the nodes of binary and multidecision nets is discussed. Case studies are detailed for two-class and multiclass problems. Simulation results demonstrate the robustness of the processors to quantization of the filter coefficients and to noise.
On extreme points of the diffusion polytope
Hay, M. J.; Schiff, J.; Fisch, N. J.
2017-01-04
Here, we consider a class of diffusion problems defined on simple graphs in which the populations at any two vertices may be averaged if they are connected by an edge. The diffusion polytope is the convex hull of the set of population vectors attainable using finite sequences of these operations. A number of physical problems have linear programming solutions taking the diffusion polytope as the feasible region, e.g. the free energy that can be removed from plasma using waves, so there is a need to describe and enumerate its extreme points. We also review known results for the case of the complete graph Kn, and study a variety of problems for the path graph Pn and the cyclic graph Cn. Finally, we describe the different kinds of extreme points that arise, and identify the diffusion polytope in a number of simple cases. In the case of increasing initial populations on Pn the diffusion polytope is topologically an n-dimensional hypercube.
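The elementary averaging operation is simple enough to state in a few lines. The sketch below (my illustration, with made-up populations) applies a short sequence of edge averagings on the path graph P3 and checks the invariant that makes the polytope bounded: the total population is conserved.

```python
def average_edge(pop, i, j):
    """Average the populations at two vertices joined by an edge."""
    m = (pop[i] + pop[j]) / 2
    out = list(pop)
    out[i] = out[j] = m
    return out

# Path graph P3 with increasing initial populations (0)-(1)-(2).
pop = [1.0, 2.0, 4.0]
step1 = average_edge(pop, 1, 2)    # [1.0, 3.0, 3.0]
step2 = average_edge(step1, 0, 1)  # [2.0, 2.0, 3.0]
print(step2, sum(step2) == sum(pop))  # total population is conserved
```

Each attainable vector such as `step2` is a point of the diffusion polytope; enumerating the extreme points amounts to characterizing which orders of edge averagings produce vertices of that convex hull.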
Quantitative Literacy: Working with Log Graphs
NASA Astrophysics Data System (ADS)
Shawl, S.
2013-04-01
The need for working with and understanding different types of graphs is a common occurrence in everyday life. Examples include anything having to do with investments, being an educated juror in a case that involves evidence presented graphically, and understanding many aspects of our current political discourse. Within a science class, graphs play a crucial role in presenting and interpreting data. In astronomy, where the range of graphed values spans many orders of magnitude, log axes must be used and understood. Experience shows that students do not understand how to read and interpret log axes or how they differ from linear axes. Alters (1996), in a study of college students in an algebra-based physics class, found little understanding of log plotting. The purpose of this poster is to show the method and progression I have developed for use in my “ASTRO 101” class, with the goal being to help students better understand the H-R diagram, the mass-luminosity relationship, and digital spectra.
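The core misconception is numerical, so it can be demonstrated numerically: on a log axis, equal distances correspond to equal ratios, not equal differences. A small base-10 illustration (my example, not from the poster):

```python
import math

# On a log10 axis the plotted position is log10(value): equal steps
# in position mean equal *ratios* in value, not equal differences.
values = [1, 10, 100, 1000]
positions = [math.log10(v) for v in values]
print(positions)  # evenly spaced: 0, 1, 2, 3

# Halfway between the "10" and "100" tick marks is not 55 but the
# geometric mean, sqrt(10 * 100):
midpoint_value = 10 ** ((math.log10(10) + math.log10(100)) / 2)
print(round(midpoint_value, 2))  # 31.62
```

This is exactly the reading skill the H-R diagram demands, where luminosities span roughly ten orders of magnitude on one axis.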
2014-01-01
Background Advances in genomic technologies have enabled the accumulation of vast amounts of genomic data, including gene expression data for multiple species under various biological and environmental conditions. Integration of these gene expression datasets is a promising strategy to alleviate the challenges of protein functional annotation and biological module discovery based on a single gene expression dataset, which suffers from spurious coexpression. Results We propose a joint mining algorithm that constructs a weighted hybrid similarity graph whose nodes are the coexpression links. The weight of an edge between two coexpression links in this hybrid graph is a linear combination of the topological similarities and co-appearance similarities of the corresponding two coexpression links. Clustering the weighted hybrid similarity graph yields recurrent coexpression link clusters (modules). Experimental results on human gene expression datasets show that the reported modules are functionally homogeneous as evident by their enrichment with biological process GO terms and KEGG pathways. PMID:25221624
Linguraru, Marius George; Pura, John A; Chowdhury, Ananda S; Summers, Ronald M
2010-01-01
The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis (CAD) applications. Diagnosis also relies on the comprehensive analysis of multiple organs and quantitative measures of soft tissue. An automated method optimized for medical image data is presented for the simultaneous segmentation of four abdominal organs from 4D CT data using graph cuts. Contrast-enhanced CT scans were obtained at two phases: non-contrast and portal venous. Intra-patient data were spatially normalized by non-linear registration. Then 4D erosion using population historic information of contrast-enhanced liver, spleen, and kidneys was applied to multi-phase data to initialize the 4D graph and adapt to patient specific data. CT enhancement information and constraints on shape, from Parzen windows, and location, from a probabilistic atlas, were input into a new formulation of a 4D graph. Comparative results demonstrate the effects of appearance and enhancement, and shape and location on organ segmentation.
NASA Astrophysics Data System (ADS)
Kurien, Binoy G.; Ashcom, Jonathan B.; Shah, Vinay N.; Rachlin, Yaron; Tarokh, Vahid
2017-01-01
Atmospheric turbulence presents a fundamental challenge to Fourier phase recovery in optical interferometry. Typical reconstruction algorithms employ Bayesian inference techniques which rely on prior knowledge of the scene under observation. In contrast, redundant spacing calibration (RSC) algorithms employ redundancy in the baselines of the interferometric array to directly expose the contribution of turbulence, thereby enabling phase recovery for targets of arbitrary and unknown complexity. Traditionally RSC algorithms have been applied directly to single-exposure measurements, which are reliable only at high photon flux in general. In scenarios of low photon flux, such as those arising in the observation of dim objects in space, one must instead rely on time-averaged, atmosphere-invariant quantities such as the bispectrum. In this paper, we develop a novel RSC-based algorithm for prior-less phase recovery in which we generalize the bispectrum to higher order atmosphere-invariants (n-spectra) for improved sensitivity. We provide a strategy for selection of a high-signal-to-noise ratio set of n-spectra using the graph-theoretic notion of the minimum cycle basis. We also discuss a key property of this set (wrap-invariance), which then enables reliable application of standard linear estimation techniques to recover the Fourier phases from the 2π-wrapped n-spectra phases. For validation, we analyse the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures, and corroborate this analysis with simulation results showing performance near an atmosphere-oracle Cramer-Rao bound. Lastly, we apply techniques from the field of compressed sensing to perform image reconstruction from the estimated complex visibilities.
Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures
NASA Astrophysics Data System (ADS)
Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino
2010-05-01
3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence point process is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D position of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye, the so-called camera-eye system, is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that the contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
Flexible arms provide constant force for pressure switch calibration
NASA Technical Reports Server (NTRS)
Cain, D. E.; Kunz, R. W.
1966-01-01
In-place calibration of a pressure switch is provided by a system of radially oriented flexing arms which, when rotated at a known velocity, convert the centrifugal force of the arms to a linear force along the shaft. The linear force, when applied to a pressure switch diaphragm, can then be calculated.
40 CFR 91.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...
NASA Astrophysics Data System (ADS)
Primo, Amedeo; Tancredi, Lorenzo
2017-08-01
We consider the calculation of the master integrals of the three-loop massive banana graph. In the case of equal internal masses, the graph is reduced to three master integrals which satisfy an irreducible system of three coupled linear differential equations. The solution of the system requires finding a 3 × 3 matrix of homogeneous solutions. We show how the maximal cut can be used to determine all entries of this matrix in terms of products of elliptic integrals of the first and second kind of suitable arguments. All independent solutions are found by performing the integration which defines the maximal cut on different contours. Once the homogeneous solution is known, the inhomogeneous solution can be obtained by use of Euler's variation of constants.
Kinetics of the Shanghai Maglev: Kinematical Analysis of a Real "Textbook" Case of Linear Motion
NASA Astrophysics Data System (ADS)
Hsu, Tung
2014-10-01
A vehicle starts from rest at constant acceleration, then cruises at constant speed for a time. Next, it decelerates at a constant rate.… This and similar statements are common in elementary physics courses. Students are asked to graph the motion of the vehicle or find the velocity, acceleration, and distance traveled by the vehicle from a given graph.1 However, a "constant acceleration-constant velocity-constant deceleration" motion, which gives us an ideal trapezoidal shape in the velocity-time graph, is not common in everyday life. Driving a car or riding a bicycle for a short distance can be much more complicated. Therefore, it is interesting to take a look at a real case of "constant acceleration-constant velocity-constant deceleration" motion.
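For the idealized trapezoidal velocity-time graph described above, the distance is a closed-form sum of two ramp segments and a cruise segment. A small sketch, with acceleration and cruise figures invented for illustration (not measured Shanghai-maglev data):

```python
def trapezoid_distance(a, v_max, t_cruise):
    """Distance for constant-acceleration / constant-velocity /
    constant-deceleration motion with equal accel and decel magnitudes.

    a        : acceleration magnitude (m/s^2)
    v_max    : cruise speed (m/s)
    t_cruise : time spent at cruise speed (s)
    """
    t_ramp = v_max / a              # time to reach (and to shed) cruise speed
    d_ramp = 0.5 * a * t_ramp ** 2  # distance covered during each ramp
    return 2 * d_ramp + v_max * t_cruise

# Hypothetical numbers: accelerate at 1 m/s^2 to 120 m/s, cruise for 50 s,
# then brake at 1 m/s^2.
print(trapezoid_distance(1.0, 120.0, 50.0))  # 20400.0 (metres)
```

Geometrically this is just the area under the trapezoid in the velocity-time graph, which is the point of the classroom exercise.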
On Learning Cluster Coefficient of Private Networks
Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang
2013-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging, since graph features such as the clustering coefficient or modularity often have high sensitivity, unlike traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller-magnitude noise. We illustrate our approach using the clustering coefficient, which is a popular statistic in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide and conquer approach outperforms the direct approach. PMID:24429843
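The perturbation step described above (Laplace noise with scale sensitivity/εi per unit computation, then recombination) can be sketched as follows. The unit computations, sensitivities and budget split are toy values, not the paper's clustering-coefficient decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, epsilon):
    """Perturb one unit computation with Laplace noise of scale
    sensitivity/epsilon (the standard Laplace mechanism)."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Toy unit computations f1, f2 (e.g. two counts with global sensitivity 1),
# each given half of the total privacy budget (sequential composition).
eps_total = 1.0
f1, f2 = 5230.0, 812.0               # hypothetical exact values
noisy_f1 = laplace_mechanism(f1, 1.0, eps_total / 2)
noisy_f2 = laplace_mechanism(f2, 1.0, eps_total / 2)
# The perturbed unit outputs are then recombined (here by division).
print(noisy_f1 / noisy_f2)
```

The design point the abstract makes is that noise is calibrated per unit computation, so the error of the combined result depends on how the connecting operations (here a division) propagate the unit-level noise.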
Dose calibrator linearity test: 99mTc versus 18F radioisotopes*
Willegaignon, José; Sapienza, Marcelo Tatit; Coura-Filho, George Barberio; Garcez, Alexandre Teles; Alves, Carlos Eduardo Gonzalez Ribeiro; Cardona, Marissa Anabel Rivera; Gutterres, Ricardo Fraga; Buchpiguel, Carlos Alberto
2015-01-01
Objective The present study was aimed at evaluating the viability of replacing 18F with 99mTc in dose calibrator linearity testing. Materials and Methods The test was performed with sources of 99mTc (62 GBq) and 18F (12 GBq) whose activities were measured down to values lower than 1 MBq. Ratios and deviations between experimental and theoretical 99mTc and 18F source activities were calculated and subsequently compared. Results Mean deviations between experimental and theoretical 99mTc and 18F source activities were 0.56 (± 1.79)% and 0.92 (± 1.19)%, respectively. The mean ratio between activities indicated by the device for the 99mTc source, as measured with the equipment pre-calibrated to measure 99mTc and 18F, was 3.42 (± 0.06), and for the 18F source this ratio was 3.39 (± 0.05); these values were constant over the measurement time. Conclusion The results of the linearity test using 99mTc were compatible with those obtained with the 18F source, indicating the viability of utilizing either radioisotope in dose calibrator linearity testing. This finding, together with the high radiation-exposure potential and costs involved in 18F acquisition, suggests 99mTc as the radioisotope of choice for dose calibrator linearity tests in centers that use 18F, without any detriment to the procedure or to the quality of the nuclear medicine service. PMID:25798005
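Each point of such a linearity test compares a dose-calibrator reading against the radioactive-decay prediction. A minimal sketch using the standard half-life formula, with the measurement time and reading invented for illustration (the study's actual data are not reproduced):

```python
def theoretical_activity(a0, t_elapsed_h, half_life_h):
    """Activity expected from pure radioactive decay of initial activity a0."""
    return a0 * 2.0 ** (-t_elapsed_h / half_life_h)

def deviation_pct(measured, theoretical):
    """Percent deviation of a measured activity from the decay prediction."""
    return (measured - theoretical) / theoretical * 100.0

# Hypothetical linearity-test point: a 62 GBq 99mTc source (half-life about
# 6.01 h) re-measured 24 h later; 3.95 GBq is an invented reading.
pred = theoretical_activity(62.0, 24.0, 6.01)
print(round(pred, 2))                  # ~ 3.89 GBq
print(round(deviation_pct(3.95, pred), 2))
```

The reported mean deviations (0.56% and 0.92%) are averages of exactly this kind of point-by-point comparison over the full decay series.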
Ferreyra, Carola F; Ortiz, Cristina S
2005-01-01
The aim of this research was to develop and validate a sensitive, rapid, easy, and precise reversed-phase liquid chromatography (LC) method for stability studies of bifonazole (I) formulated with tinctures of calendula flower (II). The method was especially developed for the analysis and quantitative determination of I and II in pure and combined forms in cream pharmaceutical formulations without using gradient elution and at room temperature. The influence on the stability of compound I of temperature, artificial radiation, and drug II used for the new pharmaceutical design was evaluated. The LC separation was carried out using a Supelcosil LC-18 column (25 cm x 4.6 mm id, 5 microm particle size); the mobile phase was composed of methanol-0.1 M ammonium acetate buffer (85 + 15, v/v) pumped isocratically at a flow rate of 1 mL/min; and ultraviolet detection was at 254 nm. The analysis time was less than 10 min. Calibration graphs were found to be linear in the 0.125-0.375 mg/mL (rI = 0.9991) and 0.639-1.916 mg/mL (rII = 0.9995) ranges for I and II, respectively. The linearity, precision, recovery, and limits of detection and quantification were satisfactory for I and II. The results obtained suggested that the developed LC method is selective and specific for the analysis of I and II in pharmaceutical products, and that it can be applied to stability studies.
NASA Astrophysics Data System (ADS)
Arayne, M. Saeed; Sultana, Najma; Siddiqui, Farhan Ahmed; Mirza, Agha Zeeshan; Zuberi, M. Hashim
2008-11-01
Two simple and sensitive spectrophotometric methods in the ultraviolet and visible regions are described for the determination of tranexamic acid in pure form and pharmaceutical preparations. The first method is based on the reaction of the drug with ninhydrin at boiling temperature and on measuring the increase in absorbance at 575 nm as a function of time. The initial rate, rate constant and fixed time (120 min) procedures were used for constructing the calibration graphs to determine the concentration of the drug, which showed a linear response over the concentration range 16-37 μg mL(-1) with correlation coefficients "r" of 0.9997, 0.996 and 0.9999, LOQ of 6.968, 7.138 and 2.462 μg mL(-1) and LOD of 2.090, 2.141 and 0.739 μg mL(-1), respectively. In the second method, tranexamic acid was reacted with ferric chloride solution; the yellowish-orange colored chromogen showed λmax at 375 nm, with linearity in the concentration range 50-800 μg mL(-1), correlation coefficient "r" of 0.9997, LOQ of 6.227 μg mL(-1) and LOD of 1.868 μg mL(-1). The variables affecting the development of the color were optimized and the developed methods were validated statistically and through recovery studies. These results were also verified by IR and NMR spectroscopy. The proposed methods have been successfully applied to the determination of tranexamic acid in a commercial pharmaceutical formulation.
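The initial-rate procedure mentioned above uses the early slope of the absorbance-time curve as the analytical signal and then calibrates that rate against concentration. A sketch with invented kinetic runs (the absorbance values are not the paper's data):

```python
import numpy as np

def initial_rate(times_min, absorbances):
    """Slope of the early, linear part of an absorbance-time curve."""
    return float(np.polyfit(times_min, absorbances, 1)[0])

# Hypothetical kinetic runs at three drug concentrations (ug/mL): each run
# lists absorbance at 575 nm over the first few minutes.
runs = {
    16: [0.010, 0.022, 0.034, 0.046],
    25: [0.015, 0.034, 0.053, 0.072],
    37: [0.022, 0.050, 0.078, 0.106],
}
t = [0, 2, 4, 6]
conc = np.array(sorted(runs))
rates = np.array([initial_rate(t, runs[c]) for c in sorted(runs)])
# Calibration graph for the initial-rate procedure: rate vs concentration.
slope, intercept = np.polyfit(conc, rates, 1)
print(round(slope, 5), round(intercept, 5))
```

The fixed-time variant reported alongside it would instead calibrate the absorbance read at a single time point (120 min) against concentration.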
Calibration of a Six-Degree-of-Freedom Acceleration Measurement Device
DOT National Transportation Integrated Search
1994-12-01
This report describes the calibration of a six-degree-of-freedom acceleration measurement system designed for use in the measurement of linear and angular head accelerations of anthropomorphic dummies during crash tests. The calibration methodology, ...
NASA Technical Reports Server (NTRS)
Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff
2011-01-01
Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI
Accelerometer Method and Apparatus for Integral Display and Control Functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1996-01-01
Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. Art accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
Accelerometer Method and Apparatus for Integral Display and Control Functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1998-01-01
Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
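The integration step in the method above (a broadband acceleration signal integrated to produce a velocity signal) is, in a sampled-data setting, a cumulative numerical integral. A minimal digital sketch assuming a uniformly sampled signal; note the patent describes an analog implementation, so this is only an illustration of the operation:

```python
import numpy as np

def integrate_trapezoid(signal, dt):
    """Cumulative trapezoidal integration of a uniformly sampled signal."""
    out = np.zeros_like(signal, dtype=float)
    out[1:] = np.cumsum((signal[1:] + signal[:-1]) * 0.5 * dt)
    return out

# Hypothetical 100 Hz acceleration record: a 5 Hz sinusoid, amplitude 1 m/s^2.
dt = 0.01
t = np.arange(0, 1, dt)
accel = np.sin(2 * np.pi * 5 * t)
vel = integrate_trapezoid(accel, dt)

# Analytic check: the integral of sin(w*t) from 0 is (1 - cos(w*t)) / w.
w = 2 * np.pi * 5
print(np.max(np.abs(vel - (1 - np.cos(w * t)) / w)) < 2e-3)  # True
```

In practice a vibration monitor would also high-pass filter before integrating, since any DC offset in the accelerometer signal integrates into an unbounded drift.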
ERIC Educational Resources Information Center
Metz, James
2001-01-01
Describes an activity designed to help students connect the ideas of linear growth and exponential growth through graphs of the future value of accounts that earn simple interest and accounts that earn compound interest. Includes worksheets and solutions. (KHR)
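The contrast the activity builds on is that simple interest grows the balance linearly in time while annual compounding grows it exponentially. A minimal sketch with an invented $1000 account at 5%:

```python
def simple_interest(principal, rate, years):
    """Future value with simple interest: linear growth in time."""
    return principal * (1 + rate * years)

def compound_interest(principal, rate, years):
    """Future value with annual compounding: exponential growth in time."""
    return principal * (1 + rate) ** years

# $1000 at 5%: the compound account pulls ahead after the first year.
for y in (0, 1, 10, 30):
    print(y, simple_interest(1000, 0.05, y),
          round(compound_interest(1000, 0.05, y), 2))
```

Graphing the two outputs against years reproduces the linear-versus-exponential comparison the worksheets ask for.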
Indoor calibration of Sky Quality Meters: Linearity, spectral responsivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Pravettoni, M.; Strepparava, D.; Cereghetti, N.; Klett, S.; Andretta, M.; Steiger, M.
2016-09-01
The indoor calibration of brightness sensors requires extremely low values of irradiance measured in the most accurate and reproducible way. In this work the testing equipment of an ISO 17025-accredited laboratory for electrical testing, qualification and type approval of solar photovoltaic modules was modified in order to test the linearity of the instruments from a few mW/cm2 down to fractions of nW/cm2, corresponding to levels of simulated brightness from 6 to 19 mag/arcsec2. Sixteen Sky Quality Meters (SQM) produced by Unihedron, a Canadian manufacturer, were tested, also assessing the impact of the ageing of their protective glasses on the calibration coefficients and the drift of the instruments. The instruments are in operation at measurement points and observatories at different sites and altitudes in Southern Switzerland, within the framework of OASI, the Environmental Observatory of Southern Switzerland. The authors present the results of the calibration campaign: linearity; brightness calibration, with and without protective glasses; transmittance measurement of the glasses; and spectral responsivity of the devices. A detailed uncertainty analysis is also provided, according to the ISO 17025 standard.
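The mag/arcsec2 scale quoted above is logarithmic: a difference of 5 magnitudes corresponds to a factor of 100 in brightness, so the 6 to 19 mag/arcsec2 span covers roughly five orders of magnitude. A sketch of just that conversion (the SQM's absolute zero point is not reproduced here):

```python
def brightness_ratio(m_bright, m_faint):
    """Flux ratio implied by a magnitude difference:
    5 mag = factor 100, i.e. ratio = 10**(0.4 * (m_faint - m_bright))."""
    return 10.0 ** (0.4 * (m_faint - m_bright))

# 6 vs 19 mag/arcsec^2: a 13-magnitude span, about 10**5.2 in brightness.
print(f"{brightness_ratio(6, 19):.3g}")
```

That span is why the linearity test needs a source attenuated over so many decades of irradiance.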
Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago
2018-07-12
The limit of detection (LOD) is a key figure of merit in chemical sensing. However, the estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX), gasFET or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity. Copyright © 2018 Elsevier B.V. All rights reserved.
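One common route to an LOD from a linearized calibration is k·s/slope, with the residual standard deviation of the fit standing in for the blank standard deviation. This is a generic sketch of that textbook estimator with invented data, not the authors' methodology:

```python
import numpy as np

def lod_from_calibration(conc, signal, k=3.3):
    """LOD ~ k * s / slope from a linear calibration, taking the residual
    standard deviation of the fit as a surrogate for the blank SD."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = np.asarray(signal) - (slope * np.asarray(conc) + intercept)
    s = np.std(resid, ddof=2)   # two fitted parameters consume 2 dof
    return k * s / slope

# Invented linearized calibration data (e.g. CO concentration in ppm vs a
# linearized MOX-sensor response).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
resp = np.array([0.52, 1.03, 1.96, 4.05, 7.98])
lod = lod_from_calibration(conc, resp)
print(round(float(lod), 3))
```

The statistical caveats the abstract raises apply directly here: the estimator assumes the linearized model has normal, homoscedastic residuals.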
Intelligent Distributed Systems
2015-10-23
periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence...the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem...consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations, which are also known as
Unsteady transonic flows - Introduction, current trends, applications
NASA Technical Reports Server (NTRS)
Yates, E. C., Jr.
1985-01-01
The computational treatment of unsteady transonic flows is discussed, reviewing the historical development and current techniques. The fundamental physical principles are outlined; the governing equations are introduced; three-dimensional linearized and two-dimensional linear-perturbation theories in frequency domain are described in detail; and consideration is given to frequency-domain FEMs and time-domain finite-difference and integral-equation methods. Extensive graphs and diagrams are included.
Integrated Sensing Processor, Phase 2
2005-12-01
performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step...below the level of the benchmark non-linear classifier for this problem (kNN). Furthermore, the CCDR-preconditioned kNN achieved a 10% improvement over...the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal
Lahuerta-Zamora, Luis; Mellado-Romero, Ana M
2017-06-01
A new system for continuous flow chemiluminescence detection, based on the use of a simple and low-priced lens-free digital camera (with complementary metal oxide semiconductor technology) as a detector, is proposed for the quantitative determination of paracetamol in commercial pharmaceutical formulations. Through the camera software, AVI video files of the chemiluminescence emission are captured and then, using the user-friendly public-domain ImageJ software (from the National Institutes of Health), properly processed in order to extract the analytical information. The calibration graph was found to be linear over the range 0.01-0.10 mg L(-1) and over the range 1.0-100.0 mg L(-1) of paracetamol, the limit of detection being 10 μg L(-1). No significant interferences were found. Paracetamol was determined in three different pharmaceutical formulations: Termalgin®, Efferalgan® and Gelocatil®. The obtained results compared well with those declared on the formulation label and with those obtained through the official analytical method of the British Pharmacopoeia. Graphical abstract Abbreviated scheme of the new chemiluminescence detection system proposed in this paper.
Determination of etoxazole residues in fruits and vegetables by SPE clean-up and HPLC-DAD.
Malhat, Farag; Badawy, Hany; Barakat, Dalia; Saber, Ayman
2013-01-01
A method for the determination of etoxazole residues in apples, strawberries and green beans was developed and validated. The analyte was extracted with acetonitrile from the foodstuff and a charcoal-celite cartridge was used for clean-up of raw extracts. Reversed-phase high performance liquid chromatography with photodiode array detection (HPLC-DAD) was used for the determination and quantification of etoxazole residues in the studied samples. The calibration graphs of etoxazole in solvent or in three blank matrices were linear within the tested interval 0.01-2 mg L(-1), with coefficients of determination >0.999. The combined solid phase extraction (SPE) clean-up and chromatographic method steps were sensitive and reliable for the simultaneous determination of etoxazole residues in the studied samples. The average recoveries of etoxazole in the tested foodstuffs were between 93.4 and 102% at spiking levels of 0.01, 0.10, and 0.50 mg kg(-1), with relative standard deviations ranging from 2.8 to 4.7%, in agreement with directives for method validation in residue analyses. The limit of detection (LOD) of the HPLC-DAD system was 100 pg. The limit of quantification of the entire method was 0.01 mg kg(-1).
Aeenehvand, Saeed; Toudehrousta, Zahra; Kamankesh, Marzieh; Mashayekh, Morteza; Tavakoli, Hamid Reza; Mohammadi, Abdorreza
2016-01-01
This study developed an analytical method based on microwave-assisted extraction and dispersive liquid-liquid microextraction followed by high-performance liquid chromatography for the determination of three polar heterocyclic aromatic amines from hamburger patties. Effective parameters controlling the performance of the microextraction process, such as the type and volume of extraction and disperser solvents, microwave time, nature of alkaline aqueous solution, pH and salt amount, were optimized. The calibration graphs were linear in the range of 1-200 ng g(-1), with a coefficient of determination (R(2)) better than 0.9993. The relative standard deviations (RSD) for seven analyses were between 3.2% and 6.5%. The recoveries of those compounds in hamburger patties were from 90% to 105%. Detection limits were between 0.06 and 0.21 ng g(-1). A comparison of the proposed method with the existing literature demonstrates that it is a simple, rapid, highly selective and sensitive, and it gives good enrichment factors and detection limits for determining HAAs in real hamburger patties samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
Han, Quan; Huo, Yanyan; Wu, Jiangyan; He, Yaping; Yang, Xiaohui; Yang, Longhu
2017-03-24
A highly sensitive method based on cloud point extraction (CPE) separation/preconcentration and graphite furnace atomic absorption spectrometry (GFAAS) detection has been developed for the determination of ultra-trace amounts of rhodium in water samples. A new reagent, 2-(5-iodo-2-pyridylazo)-5-dimethylaminoaniline (5-I-PADMA), was used as the chelating agent and the nonionic surfactant Triton X-114 was chosen as extractant. In a HAc-NaAc buffer solution at pH 5.5, Rh(III) reacts with 5-I-PADMA to form a stable chelate on heating in a boiling water bath for 10 min. Subsequently, the chelate is extracted into the surfactant phase and separated from the bulk water. The factors affecting CPE were investigated. Under the optimized conditions, the calibration graph was linear in the range of 0.1-6.0 ng/mL, the detection limit was 0.023 ng/mL for rhodium and the relative standard deviation was 3.67% (c = 1.0 ng/mL, n = 11). The method has been applied to the determination of trace rhodium in water samples with satisfactory results.
Alshana, Usama; Ertaş, Nusret; Göğer, Nilgün G
2015-08-15
Dispersive liquid-liquid microextraction (DLLME) with back-extraction was used prior to capillary electrophoresis (CE) for the extraction of four parabens. Optimum extraction conditions were: 200 μL chloroform (extraction solvent), 1.0 mL acetonitrile (disperser solvent) and 1 min extraction time. Back-extraction of parabens from chloroform into a 50 mM sodium hydroxide solution within 10 s facilitated their direct injection into CE. The analytes were separated at 12°C and 25 kV with a background electrolyte of 25 mM borate buffer containing 5.0% (v/v) acetonitrile. Enrichment factors were in the range of 4.3-10.7 and limits of detection ranged from 0.1 to 0.2 μg mL(-1). Calibration graphs showed good linearity with coefficients of determination (R(2)) higher than 0.9957 and relative standard deviations (%RSDs) lower than 3.5%. DLLME-CE was demonstrated to be a simple and rapid method for the determination of parabens in human milk and food with relative recoveries in the range of 86.7-103.3%. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
da Silva, Aline Santana; Fernandes, Flávio Cesar Bedatty; Tognolli, João Olímpio; Pezza, Leonardo; Pezza, Helena Redigolo
2011-09-01
This article describes a simple, inexpensive, and environmentally friendly method for the monitoring of glyphosate using diffuse reflectance spectroscopy. The proposed method is based on reflectance measurements of the colored compound produced from the spot test reaction between glyphosate and p-dimethylaminocinnamaldehyde (p-DAC) in acid medium, using a filter paper as solid support. Experimental designs were used to optimize the analytical conditions. All reflectance measurements were carried out at 495 nm. Under optimal conditions, the glyphosate calibration graphs obtained by plotting the optical density of the reflectance signal (AR) against the concentration were linear in the range 50-500 μg mL(-1), with a correlation coefficient of 0.9987. The limit of detection (LOD) for glyphosate was 7.28 μg mL(-1). The technique was successfully applied to the direct determination of glyphosate in commercial formulations, as well as in water samples (river water, pure water and mineral drinking water) after a previous clean-up or pre-concentration step. Recoveries were in the ranges 93.2-102.6% and 91.3-102.9% for the commercial formulations and water samples, respectively.
Bashiry, Moein; Mohammadi, Abdorreza; Hosseini, Hedayat; Kamankesh, Marzieh; Aeenehvand, Saeed; Mohammadi, Zaniar
2016-01-01
A novel method based on microwave-assisted extraction and dispersive liquid-liquid microextraction (MAE-DLLME) followed by high-performance liquid chromatography (HPLC) was developed for the determination of three polyamines from turkey breast meat samples. Response surface methodology (RSM) based on central composite design (CCD) was used to optimize the effective factors in DLLME process. The optimum microextraction efficiency was obtained under optimized conditions. The calibration graphs of the proposed method were linear in the range of 20-200 ng g(-1), with the coefficient determination (R(2)) higher than 0.9914. The relative standard deviations were 6.72-7.30% (n = 7). The limits of detection were in the range of 0.8-1.4 ng g(-1). The recoveries of these compounds in spiked turkey breast meat samples were from 95% to 105%. The increased sensitivity in using the MAE-DLLME-HPLC-UV has been demonstrated. Compared with previous methods, the proposed method is an accurate, rapid and reliable sample-pretreatment method. Copyright © 2015 Elsevier Ltd. All rights reserved.
Alizadeh, Taher; Ganjali, Mohammad Reza; Rafiei, Faride
2017-06-29
In this study an innovative method was introduced for the selective and precise determination of urea in various real samples including urine, blood serum, soil and water. The method was based on square wave voltammetric determination of an electroactive product generated during the diacetylmonoxime reaction with urea. A carbon paste electrode modified with multi-walled carbon nanotubes (MWCNTs) was found to be an appropriate electrochemical transducer for recording the electrochemical signal. It was found that the chemical reaction conditions influenced the analytical signal directly. The calibration graph of the method was linear in the range of 1 × 10(-7)-1 × 10(-2) mol L(-1). The detection limit was calculated to be 52 nmol L(-1). The relative standard error of the method was calculated to be 3.9% (n = 3). The developed procedure was applied to urea determination in various real samples including soil, urine, plasma and water. Copyright © 2017 Elsevier B.V. All rights reserved.
Bavili Tabrizi, Ahad; Abdollahi, Ali
2015-10-01
A simple, rapid and sensitive spectrofluorimetric method was developed for the determination of di-syston, ethion and phorate in environmental water samples. The procedure is based on the oxidation of these pesticides with cerium(IV) to produce cerium(III), whose fluorescence was monitored at 368 ± 3 nm after excitation at 257 ± 3 nm. The variables affecting the oxidation of each pesticide were studied and optimized. Under the experimental conditions used, the calibration graphs were linear over the ranges 0.2-15, 0.1-13 and 0.1-13 ng mL(-1) for di-syston, ethion and phorate, respectively. The limits of detection and quantification were in the ranges 0.034-0.096 and 0.112-0.316 ng mL(-1), respectively. Intra- and inter-day assay precisions, expressed as the relative standard deviation (RSD), were lower than 5.2% and 6.7%, respectively. Good recoveries in the range 86%-108% were obtained for spiked water samples. The proposed method was applied to the determination of the studied pesticides in environmental water samples.
Afzali, Darush; Mostafavi, Ali; Taher, Mohammad Ali; Rezaeipour, Ebrahim; Khayatzadeh Mahani, Mohammad
2005-04-01
A procedure for the separation and preconcentration of trace amounts of cadmium has been proposed. A column of analcime zeolite modified with benzyldimethyltetradecylammonium chloride and loaded with 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol (5-Br-PADAP) was used for retention of cadmium. The cadmium was quantitatively retained on the column at pH approximately 9 and was recovered from the column with 5 ml of 2 M nitric acid, giving a preconcentration factor of 140. Anodic stripping differential pulse voltammetry was used for the determination of cadmium. A detection limit of 0.05 ng/ml was obtained for the preconcentration of aqueous cadmium solutions. The relative standard deviation (RSD) for eight replicate determinations at the 1 microg/ml cadmium level was 0.31% (calculated from the peak heights obtained). The calibration graph using the preconcentration system was linear from 0.01 to 150 microg/ml in the final solution, with a correlation coefficient of 0.9997. For optimization of conditions, various parameters such as the effect of pH, flow rate, instrumental conditions and the interference of a number of ions were studied in detail. This method was successfully applied to the determination of cadmium in various complex samples.
Fontana, Ariel R; Patil, Sangram H; Banerjee, Kaushik; Altamirano, Jorgelina C
2010-04-28
A fast and effective microextraction technique is proposed for the preconcentration of 2,4,6-trichloroanisole (2,4,6-TCA) from wine samples prior to gas chromatography-tandem mass spectrometry (GC-MS/MS) analysis. The proposed technique is based on ultrasonication (US) to favor the emulsification phenomenon during the extraction stage. Several variables influencing the relative response of the target analyte were studied and optimized. Under optimal experimental conditions, 2,4,6-TCA was quantitatively extracted, achieving enhancement factors (EF) ≥ 400 and limits of detection (LODs) of 0.6-0.7 ng L(-1) with relative standard deviations (RSDs) ≤ 11.3% when a 10 ng L(-1) 2,4,6-TCA standard-wine sample blend was analyzed. The calibration graphs for white and red wine were linear within the range of 5-1000 ng L(-1), and estimation coefficients (r(2)) were ≥ 0.9995. Validation of the methodology was carried out by the standard addition method at two concentrations (10 and 50 ng L(-1)), achieving recoveries >80% and indicating satisfactory robustness of the method. The methodology was successfully applied to the determination of 2,4,6-TCA in different wine samples.
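The recovery figures quoted above come from spike-recovery arithmetic: the difference between spiked and unspiked determinations divided by the amount added. A one-line sketch with invented numbers:

```python
def recovery_pct(found_spiked, found_unspiked, added):
    """Spike recovery: percent of the added analyte actually recovered."""
    return (found_spiked - found_unspiked) / added * 100.0

# Invented example: 4.1 ng/L 2,4,6-TCA found in a wine before spiking and
# 12.6 ng/L found after a 10 ng/L standard addition.
print(round(recovery_pct(12.6, 4.1, 10.0), 1))  # 85.0
```

Recoveries near 100% across spike levels are what the validation guidelines cited in these abstracts use as evidence that the matrix does not bias the method.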
NASA Astrophysics Data System (ADS)
Sorokin, N. I.; Krivandina, E. A.; Zhmurova, Z. I.
2013-11-01
The density of single crystals of the nonstoichiometric phases Ba1-xLaxF2+x (0 ≤ x ≤ 0.5) and Sr0.8La0.2-xLuxF2.2 (0 ≤ x ≤ 0.2) with the fluorite (CaF2) structure type and R1-ySryF3-y (R = Pr, Nd; 0 ≤ y ≤ 0.15) with the tysonite (LaF3) structure type has been measured. Single crystals were grown from a melt by the Bridgman method. The measured concentration dependences of single-crystal density are linear. The interstitial and vacancy models of defect formation in the fluorite and tysonite phases, respectively, are confirmed. To implement composition control of single crystals of the superionic conductors M1-xRxF2+x and R1-yMyF3-y in practice, calibration graphs of X-ray density in the MF2-RF3 systems (M = Ca, Sr, Ba, Cd, Pb; R = La-Lu, Y) are plotted.
Abdolmohammad-Zadeh, Hossein; Tavarid, Keyvan; Talleb, Zeynab
2012-01-01
Nanostructured nickel-aluminum-zirconium ternary layered double hydroxide was successfully applied as a solid-phase extraction sorbent for the separation and pre-concentration of trace levels of iodate in food, environmental and biological samples. An indirect method was used for monitoring the extracted iodate ions. The method is based on the reaction of the iodate with iodide in acidic solution to produce iodine, which can be spectrophotometrically monitored at 352 nm. The absorbance is directly proportional to the concentration of iodate in the sample. The effect of several parameters such as pH, sample flow rate, amount of nanosorbent, elution conditions, sample volume, and coexisting ions on the recovery was investigated. Under optimum experimental conditions, the limit of detection (3s) and enrichment factor were 0.12 μg mL−1 and 20, respectively. The calibration graph using the preconcentration system was linear in the range of 0.2–2.8 μg mL−1 with a correlation coefficient of 0.998. In order to validate the presented method, a certified reference material, NIST SRM 1549, was also analyzed. PMID:22619590
Marchisio, P F; Sales, A; Cerutti, S; Marchevski, E; Martinez, L D
2005-09-30
The present paper proposes an on-line preconcentration procedure for lead determination in Ilex paraguariensis (St. Hilaire) samples by ultrasonic nebulization associated with inductively coupled plasma optical emission spectrometry (USN-ICP-OES). It is based on the precipitation of lead(II) ion on a minicolumn packed with polyurethane foam using 2-(5-bromo-2-pyridilazo)-5-diethylaminophenol (5-Br-PADAP) as the precipitating reagent. The collected analyte precipitate was quantitatively eluted from the minicolumn with 20% (v/v) nitric acid. An enhancement factor of 225-fold was obtained (15 for USN and 15 for preconcentration). The detection limit (DL) value for the preconcentration of 10.0 ml of sample was 40.0 ng/l. The relative standard deviation (R.S.D.) was 3.0% for a Pb concentration of 1 microg/l, calculated from the peak heights obtained. The calibration graph using the preconcentration system for lead was linear with a correlation coefficient of 0.9997, at levels near the detection limits up to at least 100 microg/l. The preconcentration procedure was successfully applied to the determination of lead in mate tea samples.
Asadollahi, Tahereh; Dadfarnia, Shayessteh; Shabani, Ali Mohammad Haji
2010-06-30
A novel dispersive liquid-liquid microextraction based on solidification of a floating organic drop (DLLME-SFO) was developed for the separation/preconcentration of ultra-trace amounts of vanadium and its determination by electrothermal atomic absorption spectrometry (ETAAS). The DLLME-SFO behavior of vanadium(V) using N-benzoyl-N-phenylhydroxylamine (BPHA) as the complexing agent was systematically investigated. The factors influencing complex formation and extraction by the DLLME-SFO method were optimized. Under the optimized conditions (100 microL of extraction solvent (1-undecanol), 200 microL of disperser solvent (acetone), and a 25 mL sample volume), an enrichment factor of 184, a detection limit (based on 3S(b)/m) of 7 ng L(-1) and a relative standard deviation of 4.6% (at 500 ng L(-1)) were obtained. The calibration graph using the preconcentration system for vanadium was linear from 20 to 1000 ng L(-1) with a correlation coefficient of 0.9996. The method was successfully applied to the determination of vanadium in water and parsley.
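As a concrete illustration of how a linear calibration graph and a 3S(b)/m detection limit of the kind quoted in these abstracts are computed, here is a minimal sketch; the standard concentrations, instrument responses, and blank readings are invented for illustration and are not data from any of the studies above:

```python
import numpy as np

# Hypothetical calibration standards (ng/L) and instrument responses.
conc = np.array([20, 100, 250, 500, 750, 1000], dtype=float)
resp = np.array([0.41, 2.05, 5.08, 10.1, 15.2, 20.3])

# Least-squares calibration line: response = m*conc + b
m, b = np.polyfit(conc, resp, 1)

# Detection limit from 3*S_b/m, with S_b the standard deviation
# of replicate blank measurements (invented values).
blanks = np.array([0.010, 0.012, 0.009, 0.011, 0.013])
lod = 3 * blanks.std(ddof=1) / m

# Correlation coefficient of the calibration graph
r = np.corrcoef(conc, resp)[0, 1]
```

The slope m converts blank noise into a concentration-scale detection limit, and r close to 1 indicates the linearity these abstracts report.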
Influence of speed and step frequency during walking and running on motion sensor output.
Rowlands, Ann V; Stone, Michelle R; Eston, Roger G
2007-04-01
Studies have reported strong linear relationships between accelerometer output and walking/running speeds up to 10 km x h(-1). However, ActiGraph uniaxial accelerometer counts plateau at higher speeds. The aim of this study was to determine the relationships of triaxial accelerometry, uniaxial accelerometry, and pedometry with speed and step frequency (SF) across a range of walking and running speeds. Nine male runners wore two ActiGraph uniaxial accelerometers, two RT3 triaxial accelerometers (all set at a 1-s epoch), and two Yamax pedometers. Each participant walked for 60 s at 4 and 6 km x h(-1), ran for 60 s at 10, 12, 14, 16, and 18 km x h(-1), and ran for 30 s at 20, 22, 24, and 26 km x h(-1). Step frequency was recorded by a visual count. ActiGraph counts peaked at 10 km x h(-1) (2.5-3.0 Hz SF) and declined thereafter (r=0.02, P>0.05). After correction for frequency-dependent filtering, output plateaued at 10 km x h(-1) but did not decline (r=0.77, P<0.05). Similarly, RT3 vertical counts plateaued at speeds > 10 km x h(-1) (r=0.86, P<0.01). RT3 vector magnitude and anteroposterior and mediolateral counts maintained a linear relationship with speed (r>0.96, P<0.001). Step frequency assessed by pedometry compared well with actual step frequency up to 20 km x h(-1) (approximately 3.5 Hz) but then underestimated actual steps (Yamax r=0.97; ActiGraph pedometer r=0.88, both P<0.001). Increasing underestimation of activity by the ActiGraph as speed increases is related to frequency-dependent filtering and assessment of acceleration in the vertical plane only. RT3 vector magnitude was strongly related to speed, reflecting the predominance of horizontal acceleration at higher speeds. These results indicate that high-intensity activity is underestimated by the ActiGraph, even after correction for frequency-dependent filtering, but not by the RT3. Pedometer output is highly correlated with step frequency.
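The drop in correlation reported for the uniaxial device can be illustrated with synthetic data (not the study's measurements): one simulated output rises linearly with speed, the other peaks at 10 km/h and then declines, and the Pearson r collapses for the latter:

```python
import numpy as np

# Invented counts for illustration: a linear responder vs. a device
# whose output peaks at 10 km/h and declines thereafter.
speed = np.array([4, 6, 10, 12, 14, 16, 18, 20, 22, 24, 26], dtype=float)
linear_counts = 100.0 * speed                         # stays linear with speed
peaked_counts = np.where(speed <= 10,
                         100.0 * speed,               # linear up to 10 km/h
                         1000.0 - 50.0 * (speed - 10))  # then declines

r_linear = np.corrcoef(speed, linear_counts)[0, 1]
r_peaked = np.corrcoef(speed, peaked_counts)[0, 1]
```

The peaked device's r over the full speed range is small or negative even though its low-speed response is perfectly linear, mirroring the ActiGraph result above.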
Calibration Experiments for a Computer Vision Oyster Volume Estimation System
ERIC Educational Resources Information Center
Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.
2009-01-01
Calibration is a technique that is commonly used in science and engineering research that requires calibrating measurement tools for obtaining more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration
USDA-ARS?s Scientific Manuscript database
Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem which has been modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
A completely optical calibration process has been developed at Glenn for calibrating a neural-network-based nondestructive evaluation (NDE) method. The NDE method itself detects very small changes in the characteristic patterns or vibration mode shapes of vibrating structures as discussed in many references. The mode shapes or characteristic patterns are recorded using television or electronic holography and change when a structure experiences, for example, cracking, debonds, or variations in fastener properties. An artificial neural network can be trained to be very sensitive to changes in the mode shapes, but quantifying or calibrating that sensitivity in a consistent, meaningful, and deliverable manner has been challenging. The standard calibration approach has been difficult to implement, where the response to damage of the trained neural network is compared with the responses of vibration-measurement sensors. In particular, the vibration-measurement sensors are intrusive, insufficiently sensitive, and not numerous enough. In response to these difficulties, a completely optical alternative to the standard calibration approach was proposed and tested successfully. Specifically, the vibration mode to be monitored for structural damage was intentionally contaminated with known amounts of another mode, and the response of the trained neural network was measured as a function of the peak-to-peak amplitude of the contaminating mode. The neural network calibration technique essentially uses the vibration mode shapes of the undamaged structure as standards against which the changed mode shapes are compared. The published response of the network can be made nearly independent of the contaminating mode, if enough vibration modes are used to train the net. The sensitivity of the neural network can be adjusted for the environment in which the test is to be conducted. 
The response of a neural network trained with measured vibration patterns for use on a vibration isolation table in the presence of various sources of laboratory noise is shown. The output of the neural network is called the degradable classification index. The curve was generated by a simultaneous comparison of means, and it shows a peak-to-peak sensitivity of about 100 nm. The following graph uses model-generated data from a compressor blade to show that much higher sensitivities are possible when the environment can be controlled better. The peak-to-peak sensitivity here is about 20 nm. The training procedure was modified for the second graph, and the data were subjected to an intensity-dependent transformation called folding. All the measurements for this approach to calibration were optical. The peak-to-peak amplitudes of the vibration modes were measured using heterodyne interferometry, and the modes themselves were recorded using television (electronic) holography.
Computer Algebra Systems in Undergraduate Instruction.
ERIC Educational Resources Information Center
Small, Don; And Others
1986-01-01
Computer algebra systems (such as MACSYMA and muMath) can carry out many of the operations of calculus, linear algebra, and differential equations. Their use in sketching graphs of rational functions and in other topics is discussed. (MNS)
A Whirlwind Tour of Computational Geometry.
ERIC Educational Resources Information Center
Graham, Ron; Yao, Frances
1990-01-01
Described is computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulation, and linear programming. Also included are special techniques and paradigms. (KR)
There's a Green Glob in Your Classroom.
ERIC Educational Resources Information Center
Dugdale, Sharon
1983-01-01
Discusses computer games (called intrinsic models) focusing on mathematics rather than on unrelated motivations (flashing lights or sounds). Games include "Green Globs," (equations/linear functions), "Darts"/"Torpedo" (fractions), "Escape" (graphing), and "Make-a-Monster" (equivalent fractions and…
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
Dynamic graphs, community detection, and Riemannian geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun
A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g., the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.
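The baseline the abstract compares against, entry-wise linear interpolation between graph snapshots, is simple enough to sketch directly; the two adjacency matrices below are invented 3-node examples, not data from the paper:

```python
import numpy as np

# Two adjacency-matrix snapshots of a dynamic graph at t=0 and t=1
# (invented for illustration).
A0 = np.array([[0., 1., 0.],
               [1., 0., 0.],
               [0., 0., 0.]])
A1 = np.array([[0., 0., 1.],
               [0., 0., 1.],
               [1., 1., 0.]])

def interpolate(A0, A1, t):
    """Entry-wise linear interpolation at time t in [0, 1]."""
    return (1 - t) * A0 + t * A1

A_half = interpolate(A0, A1, 0.5)  # graph "halfway" between the snapshots
```

Each edge weight simply fades linearly between its endpoint values, which is why such interpolants can leave the manifold of structures one cares about and why the Riemannian alternative is proposed.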
Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing
2015-07-01
In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting
NASA Astrophysics Data System (ADS)
Palenichka, Roman M.; Zaremba, Marek B.
2003-03-01
Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e., segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a quite rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. It consists of changing the positions of existing vertices according to the minimum of the mean orthogonal distances and, eventually, adding new vertices in between if a given accuracy is not yet satisfied. Vertices of initial piecewise-linear skeletons are extracted by using a multi-scale image relevance function. The relevance function is an image local operator that has local maximums at the centers of the objects of interest.
Abdelnour, Farras; Voss, Henning U.; Raj, Ashish
2014-01-01
The relationship between anatomic connectivity of large-scale brain networks and their functional connectivity is of immense importance and an area of active research. Previous attempts have required complex simulations which model the dynamics of each cortical region, and explore the coupling between regions as derived by anatomic connections. While much insight is gained from these non-linear simulations, they can be computationally taxing tools for predicting functional from anatomic connectivities. Little attention has been paid to linear models. Here we show that a properly designed linear model appears to be superior to previous non-linear approaches in capturing the brain’s long-range second order correlation structure that governs the relationship between anatomic and functional connectivities. We derive a linear network of brain dynamics based on graph diffusion, whereby the diffusing quantity undergoes a random walk on a graph. We test our model using subjects who underwent diffusion MRI and resting state fMRI. The network diffusion model applied to the structural networks largely predicts the correlation structures derived from their fMRI data, to a greater extent than other approaches. The utility of the proposed approach is that it can routinely be used to infer functional correlation from anatomic connectivity. And since it is linear, anatomic connectivity can also be inferred from functional data. The success of our model confirms the linearity of ensemble average signals in the brain, and implies that their long-range correlation structure may percolate within the brain via purely mechanistic processes enacted on its structural connectivity pathways. PMID:24384152
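A minimal sketch of the graph-diffusion idea described above, assuming the functional-connectivity prediction takes the matrix-exponential form exp(-beta*t*L) of the structural graph Laplacian L (the diffusing signal obeys dx/dt = -beta*L*x); the 4-node structural matrix and the diffusion depth are invented for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Invented symmetric structural (anatomic) connectivity for 4 regions.
A = np.array([[0., 2., 1., 0.],
              [2., 0., 0., 1.],
              [1., 0., 0., 2.],
              [0., 1., 2., 0.]])

L = np.diag(A.sum(axis=1)) - A   # graph Laplacian of the structural network
beta_t = 0.5                     # diffusion depth (beta * t), illustrative

# Predicted functional connectivity: the kernel of the linear
# diffusion dx/dt = -beta * L x evaluated at depth beta_t.
F_pred = expm(-beta_t * L)
```

Because L has zero row sums, each row of the predicted matrix sums to one: diffusion redistributes signal among regions without creating or destroying it, which is the mechanistic picture the abstract describes.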
ASD FieldSpec Calibration Setup and Techniques
NASA Technical Reports Server (NTRS)
Olive, Dan
2001-01-01
This paper describes the Analytical Spectral Devices (ASD) Fieldspec Calibration Setup and Techniques. The topics include: 1) ASD Fieldspec FR Spectroradiometer; 2) Components of Calibration; 3) Equipment List; 4) Spectral Setup; 5) Spectral Calibration; 6) Radiometric and Linearity Setup; 7) Radiometric Setup; 8) Datasets Required; 9) Data Files; and 10) Field of View Measurement. This paper is in viewgraph form.
Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera
Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing
2018-01-01
The geometric calibration of a spaceborne thermal-infrared camera with a high spatial resolution and wide coverage can set benchmarks for providing an accurate geographical coordinate for the retrieval of land surface temperature. The practice of using linear array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth makes it possible to acquire wide-swath thermal-infrared images with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model, whilst calibrating temporal system parameters and whiskbroom angle parameters. With the help of the YG-14—China's first satellite equipped with thermal-infrared cameras of high spatial resolution—China's Anyang Imaging and Taiyuan Imaging are used to conduct an experiment of geometric calibration and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885
Calibration Methods for a 3D Triangulation Based Camera
NASA Astrophysics Data System (ADS)
Schulz, Ulrike; Böhnke, Kay
A sensor in a camera captures a gray-level image (1536 x 512 pixels) of light reflected by a reference body, which is illuminated by a linear laser line. This gray-level image can be used for a 3D calibration. The following paper describes how a calibration program calculates the calibration factors. The calibration factors serve to determine the size of an unknown body.
Five-Hole Flow Angle Probe Calibration for the NASA Glenn Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Gonsalez, Jose C.; Arrington, E. Allen
1999-01-01
A spring 1997 test section calibration program is scheduled for the NASA Glenn Research Center Icing Research Tunnel following the installation of new water injecting spray bars. A set of new five-hole flow angle pressure probes was fabricated to properly calibrate the test section for total pressure, static pressure, and flow angle. The probes have nine pressure ports: five total pressure ports on a hemispherical head and four static pressure ports located 14.7 diameters downstream of the head. The probes were calibrated in the NASA Glenn 3.5-in.-diameter free-jet calibration facility. After completing calibration data acquisition for two probes, two data prediction models were evaluated. Prediction errors from a linear discrete model proved to be no worse than those from a full third-order multiple regression model. The linear discrete model only required calibration data acquisition according to an abridged test matrix, thus saving considerable time and financial resources over the multiple regression model that required calibration data acquisition according to a more extensive test matrix. Uncertainties in calibration coefficients and predicted values of flow angle, total pressure, static pressure, Mach number, and velocity were examined. These uncertainties consider the instrumentation that will be available in the Icing Research Tunnel for future test section calibration testing.
Device for determining frost depth and density
NASA Technical Reports Server (NTRS)
Huneidi, F.
1983-01-01
A hand-held device is described, having a forward open window portion adapted to be pushed downwardly into the frost on a surface and a rear container portion adapted to receive the frost removed from the window area. A graph on a side of the container enables an observer to determine the density of the frost from certain measurements noted. The depth of the frost is read from calibrated lines on the sides of the open window portion.
Understanding Solubility through Excel Spreadsheets
NASA Astrophysics Data System (ADS)
Brown, Pamela
2001-02-01
This article describes assignments related to the solubility of inorganic salts that can be given in an introductory general chemistry course. Le Châtelier's principle, solubility, unit conversion, and thermodynamics are tied together to calculate heats of solution by two methods: heats of formation and an application of the van't Hoff equation. These assignments address the need for math, graphing, and computer skills in the chemical technology program by developing skill in the use of Microsoft Excel to prepare spreadsheets and graphs and to perform linear and nonlinear curve-fitting. Background information on the value of understanding and predicting solubility is provided.
Lessons learned from the AIRS pre-flight radiometric calibration
NASA Astrophysics Data System (ADS)
Pagano, Thomas S.; Aumann, Hartmut H.; Weiler, Margie
2013-09-01
The Atmospheric Infrared Sounder (AIRS) instrument flies on the NASA Aqua satellite and measures the upwelling hyperspectral earth radiance in the spectral range of 3.7-15.4 μm with a nominal ground resolution at nadir of 13.5 km. The AIRS spectra are achieved using a temperature-controlled grating spectrometer and HgCdTe infrared linear arrays providing 2378 channels with a nominal spectral resolving power of approximately 1200. The AIRS pre-flight tests that impact the radiometric calibration include a full system radiometric response (linearity), polarization response, and response vs scan angle (RVS). We re-derive the AIRS instrument radiometric calibration coefficients from the pre-flight polarization measurements, the response vs scan (RVS) angle tests as well as the linearity tests, and a recent lunar roll test that allowed the AIRS to view the moon. The data and method for deriving the coefficients are discussed in detail and the resulting values compared amongst the different tests. Finally, we examine the residual errors in the reconstruction of the external calibrator blackbody radiances and the efficacy of a new radiometric uncertainty model. Results show the radiometric calibration of AIRS to be excellent and the radiometric uncertainty model does a reasonable job of characterizing the errors.
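For orientation, the core of any such radiometric calibration is a two-point linear response fit between a cold reference (space view) and an onboard blackbody of known radiance; this is a generic sketch with invented numbers, not the AIRS algorithm itself, which additionally corrects for polarization, nonlinearity, and scan-angle effects:

```python
# Generic two-point radiometric calibration sketch (invented numbers).
c_space, c_bb = 120.0, 3120.0   # detector counts: space view, blackbody view
L_bb = 85.0                     # known blackbody radiance (illustrative units)

gain = L_bb / (c_bb - c_space)  # radiance per count, assuming linear response

def counts_to_radiance(c):
    """Map raw counts to scene radiance via the two-point linear fit."""
    return gain * (c - c_space)

L_scene = counts_to_radiance(1620.0)  # radiance of an example scene view
```

The pre-flight tests described above characterize exactly where this linear assumption breaks down and what correction terms must be added.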
Calibration with confidence: a principled method for panel assessment.
MacKay, R S; Kenna, R; Low, R J; Parker, S
2017-02-01
Frequently, a set of objects has to be evaluated by a panel of assessors, but not every object is assessed by every assessor. A problem facing such panels is how to take into account different standards among panel members and varying levels of confidence in their scores. Here, a mathematically based algorithm is developed to calibrate the scores of such assessors, addressing both of these issues. The algorithm is based on the connectivity of the graph of assessors and objects evaluated, incorporating declared confidences as weights on its edges. If the graph is sufficiently well connected, relative standards can be inferred by comparing how assessors rate objects they assess in common, weighted by the levels of confidence of each assessment. By removing these biases, 'true' values are inferred for all the objects. Reliability estimates for the resulting values are obtained. The algorithm is tested in two case studies: one by computer simulation and another based on realistic evaluation data. The process is compared to the simple averaging procedure in widespread use, and to Fisher's additive incomplete block analysis. It is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises; appraisal of grant proposals submitted to funding panels; ranking of job applicants; and judgement of performances on degree courses wherein candidates can choose from lists of options.
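One way to sketch the core of such a calibration is a confidence-weighted least-squares fit of the model score = true object value + assessor bias over the assessor-object graph; the scores, confidence weights, and zero-mean-bias gauge constraint below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

# (assessor, object, score, confidence weight) -- invented panel data.
scores = [
    (0, 0, 7.0, 1.0), (0, 1, 5.0, 1.0),
    (1, 1, 6.0, 1.0), (1, 2, 8.0, 1.0),
    (2, 0, 6.0, 1.0), (2, 2, 7.0, 1.0),
]
n_a, n_o = 3, 3

rows, rhs, w = [], [], []
for a, o, s, c in scores:
    row = np.zeros(n_a + n_o)
    row[a] = 1.0        # assessor bias b_a
    row[n_a + o] = 1.0  # object value v_o
    rows.append(row); rhs.append(s); w.append(c)

# Gauge constraint: biases sum to zero (heavily weighted), since only
# differences of standards are identifiable from the graph.
g = np.zeros(n_a + n_o); g[:n_a] = 1.0
rows.append(g); rhs.append(0.0); w.append(100.0)

W = np.sqrt(np.array(w))[:, None]
x, *_ = np.linalg.lstsq(W * np.array(rows),
                        W.flatten() * np.array(rhs), rcond=None)
biases, values = x[:n_a], x[n_a:]   # inferred standards and 'true' values
```

This only works because the assessor-object graph is connected, mirroring the connectivity requirement the abstract states; with a disconnected graph the system would have extra null directions and relative standards could not be inferred.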
NASA Astrophysics Data System (ADS)
Kurien, Binoy G.; Tarokh, Vahid; Rachlin, Yaron; Shah, Vinay N.; Ashcom, Jonathan B.
2016-10-01
We provide new results enabling robust interferometric image reconstruction in the presence of unknown aperture piston variation via the technique of redundant spacing calibration (RSC). The RSC technique uses redundant measurements of the same interferometric baseline with different pairs of apertures to reveal the piston variation among these pairs. In both optical and radio interferometry, the presence of phase-wrapping ambiguities in the measurements is a fundamental issue that needs to be addressed for reliable image reconstruction. In this paper, we show that these ambiguities affect recently developed RSC phasor-based reconstruction approaches operating on the complex visibilities, as well as traditional phase-based approaches operating on their logarithm. We also derive new sufficient conditions for an interferometric array to be immune to these ambiguities in the sense that their effect can be rendered benign in image reconstruction. This property, which we call wrap-invariance, has implications for the reliability of imaging via classical three-baseline phase closures as well as generalized closures. We show that wrap-invariance is conferred upon arrays whose interferometric graph satisfies a certain cycle-free condition. For cases in which this condition is not satisfied, a simple algorithm is provided for identifying those graph cycles which prevent its satisfaction. We apply this algorithm to diagnose and correct a member of a pattern family popular in the literature.
Refinement of moisture calibration curves for nuclear gage : interim report no. 1.
DOT National Transportation Integrated Search
1972-01-01
This study was initiated to determine the correct moisture calibration curves for different nuclear gages. It was found that the Troxler Model 227 had a linear response between count ratio and moisture content. Also, the two calibration curves for th...
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We present a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Compared with the conventional binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.
An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph
Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe
2017-01-01
An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate their processing. In addition, the output signals of some sensors have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimal solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. This converts the fusion problem into one of connecting factors defined by the measurements to the graph, without regard to the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and experiments have been performed to prove the effectiveness of the proposed method. PMID:28335570
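The chain-structured factor-graph fusion described above reduces, for a single scalar state, to precision-weighted averaging of the Gaussian factors attached to that state. A minimal illustration (this is not the paper's algorithm; the function and its signature are hypothetical):

```python
def fuse(measurements):
    """Fuse scalar measurements given as (value, variance) pairs.

    Each measurement contributes one Gaussian factor attached to a single
    state node; the MAP estimate is the precision-weighted mean, and the
    posterior variance is the inverse of the summed precisions.
    """
    precisions = [1.0 / var for _, var in measurements]
    mean = sum(v / var for v, var in measurements) / sum(precisions)
    return mean, 1.0 / sum(precisions)
```

Asynchronous sensors simply contribute factors whenever their measurements arrive, which is the property the abstract highlights.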
Figure-Ground Segmentation Using Factor Graphs
Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr
2009-01-01
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994
Radiation calibration for LWIR Hyperspectral Imager Spectrometer
NASA Astrophysics Data System (ADS)
Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong
2014-11-01
The radiometric calibration of an LWIR hyperspectral imaging spectrometer is presented. An LWIR interferometric hyperspectral imaging spectrometer prototype (CHIPED-I) was developed to study laboratory radiometric calibration, and two-point linear calibration of the spectrometer is carried out using blackbodies at two temperatures. First, the measured relative intensity is converted to the absolute radiance of the object. Then, the radiance of the object is converted to a brightness temperature spectrum by the brightness-temperature method. The results indicate that this radiometric calibration method performs well.
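Two-point blackbody calibration, as described above, amounts to solving for a gain and an offset per spectral channel from the two blackbody measurements. A minimal sketch (function names are hypothetical):

```python
def two_point_calibration(s_hot, s_cold, b_hot, b_cold):
    """Solve gain/offset from raw signals (s) at two known radiances (b).

    In practice this is done channel by channel across the spectrum.
    """
    gain = (b_hot - b_cold) / (s_hot - s_cold)
    offset = b_cold - gain * s_cold
    return gain, offset

def calibrate(s, gain, offset):
    """Convert a raw signal to absolute radiance."""
    return gain * s + offset
```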
Method and apparatus for calibrating a linear variable differential transformer
Pokrywka, Robert J [North Huntingdon, PA
2005-01-18
A calibration apparatus for calibrating a linear variable differential transformer (LVDT) having an armature positioned in an LVDT armature orifice, and the armature able to move along an axis of movement. The calibration apparatus includes a heating mechanism with an internal chamber, a temperature measuring mechanism for measuring the temperature of the LVDT, a fixture mechanism with an internal chamber for at least partially accepting the LVDT and for securing the LVDT within the heating mechanism internal chamber, a moving mechanism for moving the armature, a position measurement mechanism for measuring the position of the armature, and an output voltage measurement mechanism. A method for calibrating an LVDT, including the steps of: powering the LVDT; heating the LVDT to a desired temperature; measuring the position of the armature with respect to the armature orifice; and measuring the output voltage of the LVDT.
NASA Astrophysics Data System (ADS)
Wu, Jing; Huang, Junbing; Wu, Hanping; Gu, Hongcan; Tang, Bo
2014-12-01
In order to verify the validity of the regional reference grating method for solving the strain/temperature cross-sensitivity problem in an actual ship structural health monitoring system, and to meet engineering requirements for the sensitivity coefficients of the regional reference grating method, national standard measurement equipment is used to calibrate the temperature sensitivity coefficient of the selected FBG temperature sensor and the strain sensitivity coefficient of the FBG strain sensor. The thermal expansion sensitivity coefficient of ship steel is calibrated with the water bath method. The calibration results show that the temperature sensitivity coefficient of the FBG temperature sensor is 28.16 pm/°C within -10~30°C, with linearity greater than 0.999; the strain sensitivity coefficient of the FBG strain sensor is 1.32 pm/μɛ within -2900~2900 μɛ, with linearity of almost 1; and the thermal expansion sensitivity coefficient of ship steel is 23.438 pm/°C within 30~90°C, with linearity greater than 0.998. Finally, the calibration parameters are used for temperature compensation in the actual ship structural health monitoring system. The results show that the temperature compensation is effective and the calibration parameters meet the engineering requirements, providing an important reference for the wide engineering use of fiber Bragg grating sensors.
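The temperature compensation described above can be sketched with the calibrated coefficients. The paper's exact compensation model may differ; the function below is a plausible reconstruction, not the authors' code:

```python
K_T = 28.16       # pm/°C: temperature-FBG sensitivity (from the abstract)
K_EPS = 1.32      # pm/µε: strain-FBG sensitivity (from the abstract)
K_THERM = 23.438  # pm/°C: thermal response of the grating bonded to ship
                  # steel (from the abstract's thermal-expansion calibration)

def compensated_strain(d_lambda_strain, d_lambda_temp):
    """Strain in µε after removing the temperature-induced part of the
    strain grating's wavelength shift (all shifts in pm).

    The reference (temperature) grating supplies the temperature change.
    """
    delta_t = d_lambda_temp / K_T                  # °C
    return (d_lambda_strain - K_THERM * delta_t) / K_EPS
```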
[Health for All-Italia: an indicator system on health].
Burgio, Alessandra; Crialesi, Roberta; Loghi, Marzia
2003-01-01
The Health for All - Italia information system collects health data from several sources. It is intended to be a cornerstone for achieving an overview of health in Italy. Health is analyzed at different levels, ranging from health services and health needs to lifestyles and demographic, social, economic and environmental contexts. The software associated with the database allows users to present statistical data in graphs and tables and to carry out simple statistical analyses. It is therefore possible to view the indicators' time series, make simple projections and compare the various indicators over the years for each territorial unit. This is done by means of tables, graphs (histograms, line graphs, frequencies, linear regression with calculation of correlation coefficients, etc.) and maps. These charts can be exported to other programs (e.g. Word, Excel, PowerPoint), or they can be printed directly in color or black and white.
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
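Of the three algorithms analyzed, leaf removal is the simplest to sketch: repeatedly take a degree-one vertex, place its neighbor in the cover, and delete both. If a leaf-free core remains, this sketch leaves its edges uncovered, which is the regime where the typical-performance threshold appears. The implementation details below are illustrative:

```python
from collections import defaultdict

def leaf_removal_cover(edges):
    """Leaf-removal heuristic for minimum vertex cover.

    Repeatedly pick a degree-1 vertex (a leaf), add its unique neighbor
    to the cover, and delete both from the graph. Returns the cover found;
    edges of any remaining leaf-free core are not covered by this sketch.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    leaves = [u for u in adj if len(adj[u]) == 1]
    while leaves:
        u = leaves.pop()
        if len(adj[u]) != 1:       # degree changed since u was queued
            continue
        (v,) = adj[u]              # the leaf's unique neighbor
        cover.add(v)
        for w in list(adj[v]):     # delete v; track newly created leaves
            adj[w].discard(v)
            if len(adj[w]) == 1:
                leaves.append(w)
        adj[v].clear()
        adj[u].clear()
    return cover
```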
2016-11-22
structure of the graph, we replace the ℓ1-norm by the nonconvex Capped-ℓ1 norm, and obtain the Generalized Capped-ℓ1 regularized logistic regression... X. M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82(281):301... better approximations of the ℓ0-norm theoretically and computationally beyond the ℓ1-norm, for example, in compressive sensing (Xiao et al., 2011). The
Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and the empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion in specifying the calibration model. PMID:25402487
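The two-part idea, modeling the probability of consumption and the consumed amount separately and then combining them, can be caricatured in a few lines. This toy uses a crude constant consumption probability instead of the paper's fitted calibration models, and all names are hypothetical:

```python
import numpy as np

def two_part_calibrate(ffq, recall):
    """Toy single-replicate two-part calibration.

    Part 1: probability of consumption, here crudely the overall nonzero
    fraction of the reference (24-hour recall) measurements.
    Part 2: linear regression of the positive recall amounts on the
    self-reported (FFQ) value.
    Calibrated intake = Pr(consumption) * E[amount | consumed].
    """
    ffq = np.asarray(ffq, dtype=float)
    recall = np.asarray(recall, dtype=float)
    pos = recall > 0
    p_consume = pos.mean()
    slope, intercept = np.polyfit(ffq[pos], recall[pos], 1)
    return p_consume * (intercept + slope * ffq)
```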
Watson, Christopher G; Stopp, Christian; Newburger, Jane W; Rivkin, Michael J
2018-02-01
Adolescents with d-transposition of the great arteries (d-TGA) who had the arterial switch operation in infancy have been found to have structural brain differences compared to healthy controls. We used cortical thickness measurements obtained from structural brain MRI to determine group differences in global brain organization using a graph theoretical approach. Ninety-two d-TGA subjects and 49 controls were scanned using one of two identical 1.5-Tesla MRI systems. Mean cortical thickness was obtained from 34 regions per hemisphere using Freesurfer. A linear model was used for each brain region to adjust for subject age, sex, and scanning location. Structural connectivity for each group was inferred based on the presence of high inter-regional correlations of the linear model residuals, and binary connectivity matrices were created by thresholding over a range of correlation values for each group. Graph theory analysis was performed using packages in R. Permutation tests were performed to determine significance of between-group differences in global network measures. Within-group connectivity patterns were qualitatively different between groups. At lower network densities, controls had significantly more long-range connections. The location and number of hub regions differed between groups: controls had a greater number of hubs at most network densities. The control network had a significant rightward asymmetry compared to the d-TGA group at all network densities. Using graph theory analysis of cortical thickness correlations, we found differences in brain structural network organization among d-TGA adolescents compared to controls. These may be related to the white matter and gray matter differences previously found in this cohort, and in turn may be related to the cognitive deficits this cohort presents.
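The construction of binary connectivity matrices from inter-regional correlations of linear-model residuals, as described above, can be sketched as follows (the threshold and data shapes are illustrative):

```python
import numpy as np

def binary_connectivity(residuals, threshold):
    """Build a binary structural-connectivity matrix.

    residuals: (subjects x regions) matrix of linear-model residuals of
    mean cortical thickness. Regions are connected when the absolute
    inter-regional Pearson correlation meets the threshold; in practice
    the threshold is swept over a range of network densities.
    """
    r = np.corrcoef(residuals.T)       # regions x regions correlations
    np.fill_diagonal(r, 0.0)           # no self-connections
    return (np.abs(r) >= threshold).astype(int)
```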
Consensus Algorithms for Networks of Systems with Second- and Higher-Order Dynamics
NASA Astrophysics Data System (ADS)
Fruhnert, Michael
This thesis considers homogeneous networks of linear systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and the systems are stabilizable. We show that, in continuous time, consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. For networks of continuous-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback. For networks of discrete-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Schur. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. We show that consensus can always be achieved for marginally stable systems and discretized systems. Simple conditions for consensus-achieving controllers are obtained when the Laplacian eigenvalues are all real. For networks of continuous-time time-variant higher-order systems, we show that uniform consensus can always be achieved if the systems are quadratically stabilizable. In this case, we provide a simple condition to obtain a linear feedback control. For networks of discrete-time higher-order systems, we show that constant gains can be chosen such that consensus is achieved for a variety of network topologies.
First, we develop simple results for networks of time-invariant systems and networks of time-variant systems that are given in controllable canonical form. Second, we formulate the problem in terms of Linear Matrix Inequalities (LMIs). The condition found simplifies the design process and avoids the parallel solution of multiple LMIs. The result yields a modified Algebraic Riccati Equation (ARE) for which we present an equivalent LMI condition.
Information-optimal genome assembly via sparse read-overlap graphs.
Shomorony, Ilan; Kim, Samuel H; Courtade, Thomas A; Tse, David N C
2016-09-01
In the context of third-generation long-read sequencing technologies, read-overlap-based approaches are expected to play a central role in the assembly step. A fundamental challenge in assembling from a read-overlap graph is that the true sequence corresponds to a Hamiltonian path on the graph, and, under most formulations, the assembly problem becomes NP-hard, restricting practical approaches to heuristics. In this work, we avoid this seemingly fundamental barrier by first setting the computational complexity issue aside, and seeking an algorithm that targets information limits. In particular, we consider a basic feasibility question: when does the set of reads contain enough information to allow unambiguous reconstruction of the true sequence? Based on insights from this information feasibility question, we present an algorithm, the Not-So-Greedy algorithm, to construct a sparse read-overlap graph. Unlike most other assembly algorithms, Not-So-Greedy comes with a performance guarantee: whenever the information feasibility conditions are satisfied, the algorithm reduces the assembly problem to an Eulerian path problem on the resulting graph, which can thus be solved in linear time. In practice, this theoretical guarantee translates into assemblies of higher quality. Evaluations on both simulated reads from real genomes and a PacBio Escherichia coli K12 dataset demonstrate that Not-So-Greedy compares favorably with standard string graph approaches in terms of accuracy of the resulting read-overlap graph and contig N50. Available at github.com/samhykim/nsg. Contact: courtade@eecs.berkeley.edu or dntse@stanford.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
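The reduction described above matters because, unlike Hamiltonian path, an Eulerian path can be found in linear time, for example with Hierholzer's algorithm. This generic sketch is not the Not-So-Greedy algorithm itself:

```python
from collections import defaultdict

def eulerian_path(edges):
    """Hierholzer's algorithm: linear-time Eulerian path in a directed
    multigraph given as a list of (u, v) edges. Assumes a path exists."""
    graph = defaultdict(list)
    out_deg = defaultdict(int)
    in_deg = defaultdict(int)
    for u, v in edges:
        graph[u].append(v)
        out_deg[u] += 1
        in_deg[v] += 1
    # Start at the node with out-degree exceeding in-degree, if any;
    # otherwise the graph has an Eulerian circuit and any node works.
    start = edges[0][0]
    for n in graph:
        if out_deg[n] - in_deg[n] == 1:
            start = n
            break
    stack, path = [start], []
    while stack:
        u = stack[-1]
        if graph[u]:                  # follow an unused edge
            stack.append(graph[u].pop())
        else:                         # dead end: emit node
            path.append(stack.pop())
    return path[::-1]
```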
Yanamandra, R.; Vadla, C. S.; Puppala, U. M.; Patro, B.; Murthy, Y. L. N.; Parimi, A. R.
2012-01-01
A rapid, simple, sensitive and selective analytical method was developed by using reverse phase ultra performance liquid chromatographic technique for the simultaneous estimation of bambuterol hydrochloride and montelukast sodium in combined tablet dosage form. The developed method is superior in technology to conventional high performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Elution time for the separation was 6 min and ultra violet detection was carried out at 210 nm. Efficient separation was achieved on BEH C18 sub-2-μm Acquity UPLC column using 0.025% (v/v) trifluoro acetic acid in water and acetonitrile as organic solvent in a linear gradient program. Resolutions between bambuterol hydrochloride and montelukast sodium were found to be more than 31. The active pharmaceutical ingredient was extracted from tablet dosage from using a mixture of methanol, acetonitrile and water as diluent. The calibration graphs were linear for bambuterol hydrochloride and montelukast sodium in the range of 6.25-37.5 μg/ml. The percentage recoveries for bambuterol hydrochloride and montelukast sodium were found to be in the range of 99.1-100.0% and 98.0-101.6%, respectively. The test solution was found to be stable for 7 days when stored in the refrigerator between 2-8°. Developed UPLC method was validated as per International Conference on Harmonization specifications for method validation. This method can be successfully employed for simultaneous estimation of bambuterol hydrochloride and montelukast sodium in bulk drugs and formulations. PMID:23325991
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-01-01
In this paper, in order to describe complex network systems, we first propose a general modeling framework that combines a dynamic graph with hybrid automata, named Dynamic Graph Hybrid Automata (DGHA). We then apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of the dynamic densities in road segments, and transform the nonlinear expressions for the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology and size. Next we analyze the mode types and their number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow on Beijing's third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the third ring road. Practical application to a large-scale road network will be implemented through a decentralized modeling approach and distributed observer design in future research. PMID:28353664
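The Cell Transmission Model embedded in the DGHA above updates cell densities from demand/supply-limited interface flows, which is the source of the piecewise linear (multi-mode) structure. A minimal sketch of one CTM step on a line of cells, with closed boundaries and illustrative parameter values not taken from the paper:

```python
def ctm_step(rho, v=1.0, w=0.5, rho_jam=1.0, q_max=0.25, dt=0.5, dx=1.0):
    """One Cell Transmission Model update on a 1-D line of cells.

    rho: list of cell densities. Flow across each interior interface is
    min(upstream demand, downstream supply); densities are updated by
    conservation. Boundaries are closed (no inflow/outflow) for brevity.
    """
    demand = [min(v * r, q_max) for r in rho]                 # sending
    supply = [min(w * (rho_jam - r), q_max) for r in rho]     # receiving
    flux = [min(demand[i], supply[i + 1]) for i in range(len(rho) - 1)]
    new = list(rho)
    for i, f in enumerate(flux):
        new[i] -= dt / dx * f
        new[i + 1] += dt / dx * f
    return new
```

The min() in the flux expression is exactly the mode switching that the paper encodes as piecewise linear dynamics.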
Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data
NASA Technical Reports Server (NTRS)
Brasunas, J.; Mamoutkine, A.; Gorius, N.
2016-01-01
Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.
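The vector (complex-domain) calibration that the model builds on can be sketched in its baseline form, where a common instrument phase cancels between the three complex raw spectra. The paper's parametric temperature adjustments are not included in this sketch:

```python
import numpy as np

def vector_calibrate(s_scene, s_hot, s_cold, b_hot, b_cold):
    """Baseline complex two-point calibration of FTS spectra.

    s_*: complex raw spectra of scene and of hot/cold blackbodies;
    b_*: known blackbody radiances. The instrument phase, common to all
    three raw spectra, cancels in the complex ratio.
    """
    ratio = (s_scene - s_cold) / (s_hot - s_cold)
    return np.real(ratio) * (b_hot - b_cold) + b_cold
```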
NASA Technical Reports Server (NTRS)
Axholt, Magnus; Skoglund, Martin; Peterson, Stephen D.; Cooper, Matthew D.; Schoen, Thomas B.; Gustafsson, Fredrik; Ynnerman, Anders; Ellis, Stephen R.
2010-01-01
Augmented Reality (AR) is a technique by which computer-generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch and hearing can all be augmented, but most commonly AR refers to human vision being overlaid with information otherwise not readily available to the user. A correct calibration is important on an application level, ensuring that e.g. data labels are presented at correct locations, but also on a system level, to enable display techniques such as stereoscopy to function properly [SOURCE]. Thus, calibration methodology, vital to AR, is an important research area. While great achievements have already been made, some properties of current calibration methods for augmented vision do not translate from their traditional use in automated camera calibration to use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation camera calibration to investigate how user-introduced head orientation noise affects the parameter estimation during a calibration procedure of an optical see-through head-mounted display.
Weak variations of Lipschitz graphs and stability of phase boundaries
NASA Astrophysics Data System (ADS)
Grabovsky, Yury; Kucher, Vladislav A.; Truskinovsky, Lev
2011-03-01
In the case of Lipschitz extremals of vectorial variational problems, an important class of strong variations originates from smooth deformations of the corresponding non-smooth graphs. These seemingly singular variations, which can be viewed as combinations of weak inner and outer variations, produce directions of differentiability of the functional and lead to singularity-centered necessary conditions on strong local minima: an equality, arising from stationarity, and an inequality, implying configurational stability of the singularity set. To illustrate the underlying coupling between inner and outer variations, we study in detail the case of smooth surfaces of gradient discontinuity representing, for instance, martensitic phase boundaries in non-linear elasticity.
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
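The "simple data transformation" mentioned above can be illustrated for a generic Padé form D = a·m/(1 + b·m): dividing m by D gives m/D = 1/a + (b/a)·m, which is linear in m. A sketch (the symbols are generic; the paper's actual Padé parameterization and error treatment may differ):

```python
import numpy as np

def fit_pade_linearized(m, d):
    """Fit D = a*m/(1 + b*m) by linear least squares on m/D vs m,
    then recover (a, b) from the fitted intercept 1/a and slope b/a."""
    m = np.asarray(m, dtype=float)
    d = np.asarray(d, dtype=float)
    slope, intercept = np.polyfit(m, m / d, 1)
    a = 1.0 / intercept
    b = slope * a
    return a, b
```

The paper's point is that this transformation also transforms the measurement errors, which is why the authors ultimately prefer the untransformed nonlinear fit.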
Configurations and calibration methods for passive sampling techniques.
Ouyang, Gangfeng; Pawliszyn, Janusz
2007-10-19
Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
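Two of the limiting calibration regimes mentioned above, linear uptake and equilibrium extraction, each give a closed-form concentration estimate. A sketch with generic symbols (the sampling rate, partition coefficient, and sampler volume names are assumptions of the sketch, not a specific device's parameters):

```python
def linear_uptake_conc(n_sampled, sampling_rate, t):
    """Linear-uptake (kinetic regime) calibration: time-weighted average
    concentration from the mass accumulated over deployment time t,
    C = n / (Rs * t)."""
    return n_sampled / (sampling_rate * t)

def equilibrium_conc(n_sampled, partition_coeff, v_sampler):
    """Equilibrium-extraction calibration: C = n / (K * V), valid once
    the sampler has reached equilibrium with the sampled medium."""
    return n_sampled / (partition_coeff * v_sampler)
```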
Exploring Difference Equations with Spreadsheets.
ERIC Educational Resources Information Center
Walsh, Thomas P.
1996-01-01
When using spreadsheets to explore real-world problems involving periodic change, students can observe what happens at each period, generate a graph, and learn how changing the starting quantity or constants affects results. Spreadsheet lessons for high school students are presented that explore mathematical modeling, linear programming, and…
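A difference-equation lesson of the kind described can be mirrored outside a spreadsheet, with each loop iteration playing the role of one spreadsheet row. The recurrence and values below are illustrative, not taken from the cited lessons:

```python
def iterate(a0, r, d, n):
    """Iterate the difference equation a_{k+1} = r*a_k + d for n periods,
    e.g. a savings balance with growth factor r and periodic deposit d.
    Returns the whole trajectory, like a spreadsheet column."""
    vals = [a0]
    for _ in range(n):
        vals.append(r * vals[-1] + d)
    return vals
```

Changing a0, r, or d and re-running shows the same "what happens at each period" exploration the spreadsheet supports.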
Building an Understanding of Functions: A Series of Activities for Pre-Calculus
ERIC Educational Resources Information Center
Carducci, Olivia M.
2008-01-01
Building block toys can be used to illustrate various concepts connected with functions including graphs and rates of change of linear and exponential functions, piecewise functions, and composition of functions. Five brief activities suitable for a pre-calculus course are described.
Evanoff, M G; Roehrig, H; Giffords, R S; Capp, M P; Rovinelli, R J; Hartmann, W H; Merritt, C
2001-06-01
This report discusses calibration and set-up procedures for medium-resolution monochrome cathode ray tubes (CRTs) undertaken in preparation for the oral portion of the board examination of the American Board of Radiology (ABR). The board examinations took place in more than 100 rooms of a hotel, with one display station (a computer and the associated CRT display) in each of the hotel rooms used for the examinations. The examinations covered the radiologic specialties cardiopulmonary, musculoskeletal, gastrointestinal, vascular, pediatric, and genitourinary. The software used for set-up and calibration was the VeriLUM 4.0 package from Image Smiths in Germantown, MD. The set-up included setting minimum and maximum luminance, as well as positioning the CRT in each examination room with respect to reflections of room lights. The calibration of the grey scale rendition was done to meet the Digital Imaging and Communications in Medicine (DICOM) 14 Standard Display Function. We describe these procedures and present the calibration data in tables and graphs, listing initial values of minimum luminance, maximum luminance, and grey scale rendition (DICOM 14 Standard Display Function). Changes of these parameters over the duration of the examination were observed and recorded on 11 monitors in a particular room. These changes strongly suggest that all calibrated CRTs be monitored over the duration of the examination. In addition, other CRT performance data affecting image quality, such as spatial resolution, should be included in set-up and image quality-control procedures.
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
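The regression step this abstract builds on (the IDA-style adjustment, where covariates are chosen from the causal structure) can be illustrated with a toy linear structural equation model. The variable names and coefficients below are invented for illustration, and confounding is assumed fully measured, the very assumption the paper relaxes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Toy linear SEM: Z confounds X and Y; the true causal effect of X on Y is 2.0.
Z = rng.normal(size=n)
X = 0.8 * Z + rng.normal(size=n)
Y = 2.0 * X + 1.5 * Z + rng.normal(size=n)

# Naive regression of Y on X alone is biased by the confounder Z.
naive = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), Y, rcond=None)[0][0]

# Adjusting for the parents of X (here just Z) recovers the causal effect.
adjusted = np.linalg.lstsq(np.column_stack([X, Z, np.ones(n)]), Y, rcond=None)[0][0]

print(f"naive={naive:.2f}, adjusted={adjusted:.2f}")
```

With latent confounders, as the paper allows, Z would be unobserved and this simple adjustment would no longer identify the effect, which is why the search over ancestral graphs is needed.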
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
40 CFR 89.322 - Carbon dioxide analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data... interference, system check, and calibration test procedures specified in 40 CFR part 1065 may be used in lieu...
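The acceptance test sketched in these regulation excerpts, checking each calibration point's deviation from a least-squares straight line and falling back to a non-linear fit when the deviation is too large, can be illustrated as follows. The 2 % tolerance, the quadratic fallback and the sample data are placeholders for illustration, not values taken from 40 CFR 89.322:

```python
import numpy as np

def check_linearity(conc, response, tol_frac=0.02):
    """Fit a least-squares straight line; if any point deviates from the line
    by more than tol_frac of its value, fall back to a best-fit quadratic
    (a stand-in for the regulation's 'best-fit non-linear equation')."""
    slope, intercept = np.polyfit(conc, response, 1)
    fitted = slope * conc + intercept
    within_limits = np.all(np.abs(fitted - response) <= tol_frac * np.abs(response))
    if within_limits:
        return "linear", (slope, intercept)
    return "nonlinear", tuple(np.polyfit(conc, response, 2))

conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kind, coeffs = check_linearity(conc, 0.5 * conc + 0.1)
print(kind)  # a perfectly linear response passes the straight-line check
```

A curved response (e.g. `conc**2`) fails the straight-line check and returns the quadratic coefficients instead.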
ERIC Educational Resources Information Center
Jurs, Stephen; And Others
The scree test and its linear regression technique are reviewed, and results of its use on factor analysis and Delphi data sets are described. The scree test was originally a visual approach for making judgments about eigenvalues, which considered the relationships of the eigenvalues to one another as well as their actual values. The graph that is…
Novel crystal timing calibration method based on total variation
NASA Astrophysics Data System (ADS)
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer-memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used for obtaining the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compared the timing resolutions of a ²²Na point source, located in the field of view (FOV) of the brain PET system, under various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
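The core idea, a linear timing model regularized by a TV penalty, can be sketched on a one-dimensional toy problem. This is not the authors' implementation: the "crystals", the identity system matrix, the smoothed TV penalty and plain gradient descent are all simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # number of "crystals" in this toy

# True per-crystal timing offsets: piecewise constant, the structure TV favours.
t_true = np.concatenate([np.full(20, 0.5), np.full(20, -0.3)])

# Measurements: noisy offsets relative to a reference, i.e. the linear model
# d = t + noise (an identity stand-in for the real system matrix).
d = t_true + rng.normal(scale=0.2, size=n)

def tv_calibrate(d, lam=0.5, eps=1e-2, lr=0.04, iters=4000):
    """Minimise 0.5*||t - d||^2 + lam * sum_i sqrt((t[i+1]-t[i])^2 + eps)
    by gradient descent (a smoothed total-variation penalty)."""
    t = d.copy()
    for _ in range(iters):
        diff = np.diff(t)
        w = diff / np.sqrt(diff**2 + eps)  # derivative of the smoothed TV term
        grad = t - d
        grad[:-1] -= lam * w
        grad[1:] += lam * w
        t -= lr * grad
    return t

t_hat = tv_calibrate(d)
print(np.abs(t_hat - t_true).mean() < np.abs(d - t_true).mean())  # TV reduces error
```

The TV term pulls neighbouring estimates together while preserving the jump between the two blocks, which is why the regularized estimate beats the raw measurements.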
Model checking for linear temporal logic: An efficient implementation
NASA Technical Reports Server (NTRS)
Sherman, Rivi; Pnueli, Amir
1990-01-01
This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
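The cross-product construction mentioned above can be illustrated in miniature. The sketch below reduces it to the simplest case, a safety property whose "structure" degenerates to a bad-state predicate, checked by reachability over a toy two-process transition graph; the state encoding and the naive lock protocol are invented for illustration:

```python
from collections import deque

# Toy transition graph of a two-process program: a state is (pc0, pc1)
# with each pc in {"idle", "wait", "crit"}.
def successors(state):
    out = []
    for i in (0, 1):
        pcs = list(state)
        if pcs[i] == "idle":
            pcs[i] = "wait"
        elif pcs[i] == "wait":
            if pcs[1 - i] == "crit":   # may enter only if the other is not critical
                continue
            pcs[i] = "crit"
        else:                          # leave the critical section
            pcs[i] = "idle"
        out.append(tuple(pcs))
    return out

# Safety property "never both critical": the product with the monitor
# reduces to asking whether a violating state is reachable.
def violates(state):
    return state == ("crit", "crit")

def check_safety(init):
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if violates(s):
            return False
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True

print(check_safety(("idle", "idle")))  # True: mutual exclusion holds here
```

Full LTL checking builds the product with an automaton for the negated formula and searches for accepting cycles rather than single bad states; this sketch shows only the reachability core.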
Three dimensional radiative flow of magnetite-nanofluid with homogeneous-heterogeneous reactions
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed
2018-03-01
The present communication deals with the effects of homogeneous-heterogeneous reactions in the flow of a nanofluid due to a non-linear stretching sheet. A water-based nanofluid containing magnetite nanoparticles is considered. Non-linear radiation and non-uniform heat sink/source effects are examined. The non-linear differential systems are computed by the optimal homotopy analysis method (OHAM). Convergent solutions of the nonlinear systems are established, and optimal values of the auxiliary variables are obtained. The impact of several non-dimensional parameters on the velocity components, temperature and concentration fields is examined. Graphs are plotted for analysis of the surface drag force and heat transfer rate.
Estimating energy expenditure from heart rate in older adults: a case for calibration.
Schrack, Jennifer A; Zipunnikov, Vadim; Goldsmith, Jeff; Bandeen-Roche, Karen; Crainiceanu, Ciprian M; Ferrucci, Luigi
2014-01-01
Accurate measurement of free-living energy expenditure is vital to understanding changes in energy metabolism with aging. The efficacy of heart rate as a surrogate for energy expenditure is rooted in the assumption of a linear function between heart rate and energy expenditure, but its validity and reliability in older adults remain unclear. The aim was to assess the validity and reliability of the linear function between heart rate and energy expenditure in older adults using different levels of calibration. Heart rate and energy expenditure were assessed across five levels of exertion in 290 adults participating in the Baltimore Longitudinal Study of Aging. Correlation and random-effects regression analyses assessed the linearity of the relationship between heart rate and energy expenditure, and cross-validation models assessed predictive performance. Heart rate and energy expenditure were highly correlated (r=0.98) and linear regardless of age or sex. Intra-person variability was low but inter-person variability was high, with substantial heterogeneity of the random intercept (s.d.=0.372) despite similar slopes. Cross-validation models indicated that individual calibration data substantially improve the accuracy of energy expenditure predicted from heart rate, reducing the potential for considerable measurement bias. Although using five calibration measures provided the greatest reduction in the standard deviation of prediction errors (1.08 kcal/min), substantial improvement was also noted with two (0.75 kcal/min). These findings indicate that standard regression equations may be used to make population-level inferences when estimating energy expenditure from heart rate in older adults, but caution should be exercised when making inferences at the individual level without proper calibration.
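The study's finding, similar slopes but heterogeneous intercepts, explains why person-specific calibration helps. A minimal sketch, with invented numbers (only the intercept s.d. of 0.37 echoes the abstract), simulating subjects who share a slope but differ in intercept, then comparing pooled prediction against per-person intercept calibration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_obs = 100, 5

# Common slope, heterogeneous per-person intercepts (as the study reports:
# similar slopes, random-intercept s.d. around 0.37). Units are notional.
slope = 0.05
intercepts = rng.normal(3.0, 0.37, size=n_subj)
hr = rng.uniform(60, 160, size=(n_subj, n_obs))
ee = intercepts[:, None] + slope * hr + rng.normal(0, 0.1, size=(n_subj, n_obs))

# Pooled (population-level) regression ignores who each measurement came from.
X = np.column_stack([hr.ravel(), np.ones(hr.size)])
b = np.linalg.lstsq(X, ee.ravel(), rcond=None)[0]
pooled_err = np.abs(X @ b - ee.ravel()).mean()

# Individual calibration: keep the pooled slope, refit each person's
# intercept from that person's own calibration points.
cal_intercepts = (ee - b[0] * hr).mean(axis=1)
indiv_err = np.abs(cal_intercepts[:, None] + b[0] * hr - ee).mean()

print(indiv_err < pooled_err)  # person-specific calibration cuts the error
```

With heterogeneous intercepts the pooled line is systematically off for most individuals, so even a couple of per-person calibration points sharply reduce prediction error, mirroring the paper's two-measure result.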
Juarez, Paul D; Hood, Darryl B; Rogers, Gary L; Baktash, Suzanne H; Saxton, Arnold M; Matthews-Juarez, Patricia; Im, Wansoo; Cifuentes, Myriam Patricia; Phillips, Charles A; Lichtveld, Maureen Y; Langston, Michael A
2017-01-01
Objectives: The aim is to identify exposures associated with lung cancer mortality and mortality disparities by race and gender using an exposome database coupled to a graph-theoretical toolchain. Methods: Graph-theoretical algorithms were employed to extract paracliques from correlation graphs using associations between 2162 environmental exposures and lung cancer mortality rates in 2067 counties, with clique doubling applied to compute an absolute threshold of significance. Factor analysis and multiple linear regressions were then used to analyze differences in exposures associated with lung cancer mortality and mortality disparities by race and gender. Results: While cigarette consumption was highly correlated with rates of lung cancer mortality for both white men and women, previously unidentified exposures were more closely associated with lung cancer mortality and mortality disparities for blacks, particularly black women. Conclusions: Exposures beyond smoking moderate lung cancer mortality and mortality disparities by race and gender. Policy Implications: An exposome approach and database coupled with scalable combinatorial analytics provide a powerful new approach for analyzing relationships between multiple environmental exposures, pathways and health outcomes. An assessment of multiple exposures is needed to appropriately translate research findings into environmental public health practice and policy. PMID:29152601
NASA Astrophysics Data System (ADS)
Adami, Riccardo; Cacciapuoti, Claudio; Finco, Domenico; Noja, Diego
We define the Schrödinger equation with focusing, cubic nonlinearity on one-vertex graphs. We prove global well-posedness in the energy domain and conservation laws for some self-adjoint boundary conditions at the vertex, i.e. the Kirchhoff boundary condition and the so-called δ and δ′ boundary conditions. Moreover, in the same setting, we study the collision of a fast solitary wave with the vertex and we show that it splits into reflected and transmitted components. The outgoing waves preserve a soliton character over a time which depends on the logarithm of the velocity of the ingoing solitary wave. Over the same timescale, the reflection and transmission coefficients of the outgoing waves coincide with the corresponding coefficients of the linear problem. In the analysis of the problem, we follow ideas borrowed from the seminal paper [17] on the scattering of fast solitons by a delta interaction on the line, by Holmer, Marzuola and Zworski. The present paper represents an extension of their work to the case of graphs and, as a byproduct, it shows how to extend the analysis to soliton scattering by other point interactions on the line, interpreted as a degenerate graph.
Writing a Scientific Paper II. Communication by Graphics
NASA Astrophysics Data System (ADS)
Sterken, C.
2011-07-01
This paper discusses facets of visual communication by way of images, graphs, diagrams and tabular material. Design types and elements of graphical images are presented, along with advice on how to create graphs, and on how to read graphical illustrations. This is done in astronomical context, using case studies and historical examples of good and bad graphics. Design types of graphs (scatter and vector plots, histograms, pie charts, ternary diagrams and three-dimensional surface graphs) are explicated, as well as the major components of graphical images (axes, legends, textual parts, etc.). The basic features of computer graphics (image resolution, vector images, bitmaps, graphical file formats and file conversions) are explained, as well as concepts of color models and of color spaces (with emphasis on aspects of readability of color graphics by viewers suffering from color-vision deficiencies). Special attention is given to the verity of graphical content, and to misrepresentations and errors in graphics and associated basic statistics. Dangers of dot joining and curve fitting are discussed, with emphasis on the perception of linearity, the issue of nonsense correlations, and the handling of outliers. Finally, the distinction between data, fits and models is illustrated.
NASA Astrophysics Data System (ADS)
Chen, Shujuan; Li, Nan; Zhang, Xinshen; Yang, Dongjing; Jiang, Heimei
2015-03-01
A simple new procedure combining low-pressure ion chromatography with flow-injection spectrophotometry for determining Fe(II) and Fe(III) was established. It is based on the selective adsorption of Fe(II) and Fe(III) on a low-pressure ion chromatography column, the online reduction of Fe(III), and the reaction of Fe(II) in sodium acetate with phenanthroline, producing an intense orange complex with a suitable absorption at 515 nm. Various chemical parameters (such as the concentrations of the colour reagent, eluant and reducing agent) and instrumental parameters (reaction coil length, reduction coil length and wavelength) were studied and optimized. Under the optimum conditions the calibration graphs of Fe(II) and Fe(III) were linear in the range 0.040-1.0 mg/L. The detection limits of Fe(III) and Fe(II) were 3.09 and 1.55 μg/L, respectively, and the relative standard deviations (n = 10) were 1.89% and 1.90% for 0.5 mg/L of Fe(II) and Fe(III), respectively. About 2.5 samples can be analyzed in 1 h. The interfering effects of various chemical species were studied. The method was successfully applied to the analysis of water samples.
Determination of arsenic species in rice samples using CPE and ETAAS.
Costa, Bruno Elias Dos Santos; Coelho, Nívia Maria Melo; Coelho, Luciana Melo
2015-07-01
A highly sensitive and selective procedure for the determination of arsenate and total arsenic in food by electrothermal atomic absorption spectrometry after cloud point extraction (ETAAS/CPE) was developed. The procedure is based on the formation of a complex of As(V) ions with molybdate in the presence of 50.0 mmol L⁻¹ sulfuric acid. The complex was extracted into the surfactant-rich phase of 0.06% (w/v) Triton X-114. The variables affecting the complex formation, extraction and phase separation were optimized using factorial designs. Under the optimal conditions, the calibration graph was linear in the range of 0.05-10.0 μg L⁻¹. The detection and quantification limits were 10 and 33 ng L⁻¹, respectively, and the corresponding value for the relative standard deviation for 10 replicates was below 5%. Recovery values of between 90.8% and 113.1% were obtained for spiked samples. The accuracy of the method was evaluated by comparison with the results obtained for the analysis of a rice flour sample (certified material IRMM-804) and no significant difference at the 95% confidence level was observed. The method was successfully applied to the determination of As(V) and total arsenic in rice samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kataoka, M; Nishimura, K; Kambara, T
1983-12-01
A trace amount of molybdenum(VI) can be determined by using its catalytic effect on the oxidation of iodide to iodine by hydrogen peroxide in acidic medium. Ascorbic acid added to the reaction mixture produces the Landolt effect, i.e., the iodine produced by the indicator reaction is immediately reduced by the ascorbic acid; hence the concentration of iodide begins to decrease only once all the ascorbic acid has been consumed. The induction period is measured by monitoring the concentration of iodide ion with an iodide ion-selective electrode. The reciprocal of the induction period varies linearly with the concentration of molybdenum(VI). The most suitable pH and concentrations of hydrogen peroxide and potassium iodide are found to be 1.5, 5 mM and 10 mM, respectively. An appropriate amount of ascorbic acid is added to the reaction mixture according to the concentration of molybdenum(VI) in the sample solution. A calibration graph with good proportionality is obtained for the molybdenum(VI) concentration range from 0.1 to 160 μM. Iron(III), vanadium(IV), zirconium(IV), tungsten(VI), copper(II) and chromium(VI) interfere, but iron(III) and copper(II) can be masked with EDTA.
Bahrani, Sonia; Ghaedi, Mehrorang; Ostovan, Abbas; Javadian, Hamedreza; Mansoorkhani, Mohammad Javad Khoshnood; Taghipour, Tahere
2018-02-05
In this research, a facile and selective method is described to extract l-cysteine (l-Cys), an α-amino acid important for anti-ageing that plays an important role in human health, from a human blood plasma sample. The importance of this research lies in the mild synthesis of a zinc organic polymer (Zn-MOP) as an adsorbent and the evaluation of its ability for efficient enrichment of l-Cys by an ultrasound-assisted dispersive micro solid-phase extraction (UA-DMSPE) method. The structure of Zn-MOP was investigated by FT-IR, XRD and SEM. Analysis of variance (ANOVA) was applied to the experimental data to reach the optimum conditions. The quantification of l-Cys was carried out by high-performance liquid chromatography with UV detection set at λ = 230 nm. The calibration graph showed reasonable linear responses towards l-Cys concentrations in the range of 4.0-1000 μg/L (r² = 0.999) with a low limit of detection (0.76 μg/L, S/N = 3) and RSD ≤ 2.18% (n = 3). The results revealed the applicability and high performance of this novel strategy in detecting trace l-Cys with Zn-MOP in complicated matrices. Copyright © 2017 Elsevier B.V. All rights reserved.
Hattori, Takanari; Okamura, Hideo; Asaoka, Satoshi; Fukushi, Keiichi
2017-08-18
Transient isotachophoresis (tITP) with a system-induced terminator (SIT) was developed for capillary zone electrophoresis (CZE) determination of aniline (An⁺) and pyridine (Py⁺) in sewage samples. After sample injection, a water vial was set at the sample-inlet side. Then voltage was applied to generate a system-induced terminator (H⁺). Experiments and simulations revealed a concentration effect by tITP with an SIT: background electrolyte (BGE), 100 mM acetic acid (AcOH) and 50 mM NaOH (pH 4.6); detection wavelength, 200 nm for An⁺ and 254 nm for Py⁺; vacuum injection period, 15 s (190 nL); SIT generation, 10 kV applied for 80 s with the anode at the sample-inlet side; separation voltage, 20 kV with the anode at the sample-inlet side. The limits of detection (LODs, S/N = 3) of An⁺ and Py⁺ reached 10 and 42 μg/L, respectively, with good repeatability (peak-area RSDs ≤ 6.9%) and calibration-graph linearity (R² = 0.9997). The proposed method was applied to the determination of An⁺ and Py⁺ in sewage samples. Recoveries of An⁺ (0.50 mg/L) and Py⁺ (2.0 mg/L) in spiked sewage samples were 94-104%. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Septia Rinda, Arfidyaninggar; Uraisin, Kanchana; Sabarudin, Akhmad; Nacapricha, Duangjai; Wilairat, Prapin
2018-01-01
Cobalt has reportedly been abused as an illegal doping agent owing to its action as an erythropoiesis-stimulating agent for enhancing performance in racehorses. Since 2015, cobalt has been listed as a prohibited substance by the International Federation of Horseracing Authorities (IFHA), with a urinary threshold of 0.1 μg cobalt per mL urine. To prevent the misuse of cobalt in racehorses, a simple method for the detection of cobalt is desirable. In this work, detection is based on spectrophotometric measurement of the complex formed between cobalt(II) and 2-(5-bromo-2-pyridylazo)-5-[N-n-propyl-N-(3-sulfopropyl)amino]aniline at pH 4. The absorbance of the complex is monitored at 602 nm. The metal:ligand ratio of the complex is 1:2. The calibration graph was linear in the range 0-2.5 μM {Absorbance = (0.0825 ± 0.0013)[Co²⁺] + (0.0406 ± 0.0003), r² = 0.999} and the detection limit (3 × SD of intercept / slope) was 0.044 μM. The proposed method has been successfully applied to horse urine samples, with recoveries in the range 91-98%.
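The detection-limit definition quoted here (3 × SD of the intercept divided by the slope) is the standard calibration-statistics calculation, and can be reproduced from first principles. The calibration points and noise level below are synthetic, chosen only to mimic the reported line A = 0.0825[Co²⁺] + 0.0406:

```python
import numpy as np

# Synthetic calibration set mimicking the reported line (concentrations in uM);
# the 0.001 absorbance noise is an invented figure for illustration.
rng = np.random.default_rng(3)
conc = np.linspace(0.0, 2.5, 6)
absorbance = 0.0825 * conc + 0.0406 + rng.normal(0, 0.001, conc.size)

n = conc.size
slope, intercept = np.polyfit(conc, absorbance, 1)
resid = absorbance - (slope * conc + intercept)
s_y = np.sqrt((resid**2).sum() / (n - 2))   # residual standard deviation
# Standard error of the intercept: s_y * sqrt(sum(x^2) / (n * Sxx))
s_intercept = s_y * np.sqrt((conc**2).sum() / (n * ((conc - conc.mean())**2).sum()))

lod = 3 * s_intercept / slope               # LOD = 3 * SD(intercept) / slope
print(f"slope={slope:.4f}, intercept={intercept:.4f}, LOD={lod:.3f} uM")
```

The recovered slope and intercept track the reported values; the LOD depends on the assumed noise, so it illustrates the formula rather than reproducing the paper's 0.044 μM figure.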
Jerez, Javier; Isaguirre, Andrea C; Bazán, Cristian; Martinez, Luis D; Cerutti, Soledad
2014-06-01
An on-line scandium preconcentration and determination system implemented with inductively coupled plasma optical emission spectrometry associated with flow injection was studied. Trace amounts of scandium were preconcentrated by sorption on a minicolumn packed with oxidized multiwalled carbon nanotubes, at pH 1.5. The retained analyte was removed from the minicolumn with 30% (v/v) nitric acid. A total enrichment factor of 225-fold was obtained within a preconcentration time of 300 s (for a 25 mL sample volume). The overall time required for preconcentration and elution of 25 mL of sample was about 6 min; the throughput was about 10 samples per hour. The detection limit was 4 ng L⁻¹ and the precision for 10 replicate determinations at the 100 ng L⁻¹ Sc level was 5% relative standard deviation, calculated from the peak heights obtained. The calibration graph using the preconcentration system was linear, with a correlation coefficient of 0.9996, from levels near the detection limit up to at least 10 mg L⁻¹. After optimization, the method was successfully applied to the determination of Sc in acid drainage from an abandoned mine located in the province of San Luis, Argentina. Copyright © 2014 Elsevier B.V. All rights reserved.
Takayanagi, Toshio; Inaba, Yuya; Kanzaki, Hiroyuki; Jyoichi, Yasutaka; Motomizu, Shoji
2009-09-15
The catalytic effect of metal ions on luminol chemiluminescence (CL) was investigated by sequential injection analysis (SIA). The SIA system was set up with two solenoid micropumps, an eight-port selection valve, and a photosensor module with a fountain-type chemiluminescence cell. The SIA system was controlled, and the CL signals collected, by a LabVIEW program. Aqueous solutions of luminol, H₂O₂, and a sample solution containing the metal ion were sequentially aspirated into the holding coil, and the zones were immediately propelled to the detection cell. After optimizing the parameters using a 1×10⁻⁵ M Fe³⁺ solution, the catalytic effects of several metal species were compared. Among the 16 metal species examined, relatively strong CL responses were obtained with Fe³⁺, Fe²⁺, VO²⁺, VO₃⁻, MnO₄⁻, Co²⁺, and Cu²⁺. The limits of detection of the present SIA system were comparable to those of FIA systems. Permanganate ion showed the highest CL sensitivity among the metal species examined; the calibration graph for MnO₄⁻ was linear at the 10⁻⁸ M concentration level and the limit of detection for MnO₄⁻ was 4.0×10⁻¹⁰ M (S/N = 3).
Analysis of hydroquinone and some of its ethers by using capillary electrochromatography.
Desiderio, C; Ossicini, L; Fanali, S
2000-07-28
Capillary electrochromatography (CEC) was used for the analysis of relevant compounds in a cosmetic preparation. Hydroquinone (HQ) and some of its ethers (methyl-, dimethyl-, benzyl-, phenyl- and propyl-HQ derivatives) were analyzed using an octadecylsilica (ODS) stationary phase packed in a fused-silica capillary (100 μm I.D.; 30 cm and 21.5 cm total and effective lengths, respectively). 20 mM ammonium acetate (pH 6)-acetonitrile (50-70%) mixtures were the mobile phases used for the experiments. The acetonitrile (ACN) content strongly influenced the resolution of the studied compounds as well as the efficiency and the retention factor. Baseline resolution of the studied analytes was achieved at both the lowest and the highest percentage of ACN, the latter providing the shortest analysis time. A mobile phase containing 70% ACN was therefore used for the analysis of an extract of a skin-toning cream declared to contain HQ. Good repeatability of retention times, peak areas and peak-area ratios (A_sample/A_internal standard) was found. The calibration graphs were linear in the concentration range studied (5-90 μg/ml) with correlation coefficients between 0.9975 and 0.9991. The analysis of the cosmetic preparation revealed the presence of HQ (1.72%, w/w) and of two additional peaks (not identified).
Rezvani, Seyyed Ahmad; Soleymanpour, Ahmad
2016-03-04
A very convenient, sensitive and precise solid-phase extraction (SPE) system was developed for the enrichment and determination of ultra-trace amounts of cadmium ion in water and plant samples. The method is based on the retention of cadmium(II) ions by l-cystine adsorbed on Y-zeolite, carried out in a packed mini-column. The retained cadmium ions were then eluted and determined by flame atomic absorption spectrometry. Scanning electron microscopy (SEM), powder X-ray diffraction (XRD) and Fourier-transform infrared (FT-IR) spectroscopy were applied for the characterization of the cystine-modified zeolite (CMZ). Experimental conditions affecting the analytical performance, such as pH, eluent type, sample concentration, eluent flow rate and the presence of interfering ions, were investigated. The calibration graph was linear within the range 0.1-7.5 ng mL⁻¹ and the limit of detection was 0.04 ng mL⁻¹ with a preconcentration factor of 400. The relative standard deviation (RSD) was 1.4%, indicating the excellent reproducibility of the method. The proposed method was successfully applied to the extraction and determination of cadmium(II) ion in black tea, cigarette tobacco and various water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Dokpikul, Nattawut; Chaiyasith, Wipharat Chuachuad; Sananmuang, Ratana; Ampiah-Bonney, Richmond J
2018-04-25
A novel method was developed using SAE-DLLME for chromium speciation in water and rice samples, with 2-thenoyltrifluoroacetone (TTA) as a chelating reagent and detection by ETAAS. The speciation of Cr(III) and Cr(VI) was achieved by complexation of Cr(III) with TTA, and total Cr was measured after reduction of Cr(VI) to Cr(III). The calibration graph was linear in the range 0.02-2.50 μg L⁻¹, with a detection limit of 0.0052 μg L⁻¹. The RSD was in the range 2.90-3.30% at 0.5, 1.5 and 2.5 μg L⁻¹ of Cr(III) (n = 5), and the enrichment factor (EF) was 54.47. The method was applied to chromium speciation and total chromium determination in real samples and gave recoveries in the ranges 96.2-103.5% and 97.1-102.7% for Cr(III) and Cr(VI) in water samples and 93.7-103.5% for total Cr in rice samples. The accuracy of the method was evaluated by analysis of SRM 1573a, with good agreement with the certified value. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xia, Qinghai; Yang, Yaling; Liu, Mousheng
2012-10-01
An aluminium-sensitized spectrofluorimetric method coupled with salting-out assisted liquid-liquid ultrasonic extraction for the determination of four widely used fluoroquinolones (FQs), namely norfloxacin (NOR), ofloxacin (OFL), ciprofloxacin (CIP) and gatifloxacin (GAT), in bovine raw milk is described. The analytical procedure involves fluorescence sensitization of aluminium (Al³⁺) by complexation with the FQs and salting-out assisted liquid-liquid ultrasonic extraction (SALLUE), followed by spectrofluorometry. The influence of several parameters on the extraction (the salt species, the amount of salt, pH, temperature and phase-volume ratio) was investigated. Under optimized experimental conditions, the detection limits of the method in milk varied from 0.009 μg/mL for NOR to 0.016 μg/mL for GAT (signal-to-noise ratio (S/N) = 3). The relative standard deviation (RSD) values were relatively low (0.54-2.48% for the four compounds). The calibration graphs were linear from 0.015 to 2.25 μg/mL with coefficients of determination not less than 0.9974. The methodology developed was applied to the determination of FQs in bovine raw milk samples. The main advantages of this method are that it is simple, accurate and green. The method shows promise for analyzing polar analytes, especially polar drugs, in various sample matrices.
An ion source for radiofrequency-pulsed glow discharge time-of-flight mass spectrometry
NASA Astrophysics Data System (ADS)
González Gago, C.; Lobo, L.; Pisonero, J.; Bordel, N.; Pereiro, R.; Sanz-Medel, A.
2012-10-01
A Grimm-type glow discharge (GD) has been designed and constructed as an ion source for pulsed radiofrequency GD spectrometry coupled to an orthogonal time-of-flight mass spectrometer. Pulse shapes of argon species and analytes were studied as a function of the discharge conditions using the new in-house ion source (UNIOVI GD), and the results were compared with a previous design (PROTOTYPE GD). Different behavior and shapes of the pulse profiles were observed for the two sources, particularly for the plasma-gas ionic species detected. In the most analytically relevant region (the afterglow), signals for ⁴⁰Ar⁺ with the new design were negligible, while maximum intensity was reached earlier in time for ⁴¹(ArH)⁺ than with the PROTOTYPE GD. Moreover, while the maximum ⁴⁰Ar⁺ signals measured along the pulse period were similar in both sources, the ⁴¹(ArH)⁺ and ⁸⁰(Ar₂)⁺ signals tend to be noticeably higher using the PROTOTYPE chamber. The UNIOVI GD design was shown to be adequate for sensitive direct analysis of solid samples, offering linear calibration graphs and good crater shapes. Limits of detection (LODs) are of the same order of magnitude for both sources, although the UNIOVI source provides slightly better LODs for analytes with masses slightly higher than that of ⁴¹(ArH)⁺.
Amin, A S; Saleh, H M
2017-08-17
A simple spectrophotometric method has been developed for the determination of nortriptyline hydrochloride, pure and in pharmaceutical formulations, based on the formation of ion-pair complexes with sudan II (S II), sudan IV (S IV) and sudan black B (S BB). The selectivity of the method was improved through extraction with chloroform. The optimum conditions for complete colour development and extraction were assessed. The absorbance measurements were made at 534, 596 and 649 nm for the S II, S IV and S BB complexes, respectively. The calibration graphs were linear in the ranges 0.5-280, 0.5-37.5 and 0.5-31.0 μg ml⁻¹ of the drug using the respective reagents. The precision of the procedure was checked by calculating the relative standard deviation of ten replicate determinations on 15 μg ml⁻¹ of nortriptyline HCl, found to be 1.7, 1.3 and 1.55% using the S II, S IV and S BB complexes, respectively. The molar absorptivity and Sandell sensitivity for each ion pair were calculated. The proposed methods were successfully applied to the determination of nortriptyline HCl, pure and in pharmaceutical formulations, and the results demonstrated that the method is as accurate, precise and reproducible as the official method.
Rahman, Nafisur; Kashif, Mohammad
2010-03-01
Point and interval hypothesis tests performed to validate two simple and economical kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of chelate complexes of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of the coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratios of lansoprazole to Fe(III) and Zn(II) were found to be 1:1 and 2:1, respectively. Initial-rate and fixed-time methods are adopted for the determination of drug concentrations. The calibration graphs are linear in the ranges 50-200 µg ml⁻¹ (initial-rate method) and 20-180 µg ml⁻¹ (fixed-time method) for the lansoprazole-Fe(III) complex, and 120-300 µg ml⁻¹ (initial-rate method) and 90-210 µg ml⁻¹ (fixed-time method) for the lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedure for the analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased. Copyright © 2010 John Wiley & Sons, Ltd.
A novel stopped flow injection-amperometric procedure for the determination of chlorate.
Tue-Ngeun, Orawan; Jakmunee, Jaroon; Grudpan, Kate
2005-12-15
A novel stopped flow injection-amperometric (sFI-Amp) procedure for the determination of chlorate has been developed. It employs the reaction of chlorate with excess potassium iodide and hydrochloric acid to form iodine/triiodide, which is then electrochemically reduced at a glassy carbon electrode at +200 mV versus an Ag/AgCl electrode. To increase sensitivity without resorting to too high an acid concentration, the reaction can be promoted by increasing the reaction time and temperature. This is achieved without increased dispersion of the product zone by stopping the flow while the injected zone is in a mixing coil immersed in a water bath at 55 ± 0.5 °C. In the closed FIA system, the side reaction of oxygen with iodide is also minimized. Under the selected conditions, linear calibration graphs were obtained in the ranges 1.2×10⁻⁶-6.0×10⁻⁵ mol l⁻¹ and 6.0×10⁻⁵-6.0×10⁻⁴ mol l⁻¹. A sample throughput of 25 h⁻¹ was accomplished. The relative standard deviation was 2% (n = 21, 1.2×10⁻⁴ mol l⁻¹ chlorate). The proposed sFI-Amp procedure was successfully applied to the determination of chlorate in soil samples from a longan plantation area.
Asadi, Mohammad; Haji Shabani, Ali Mohammad; Dadfarnia, Shayessteh; Abbasi, Bijan
2015-12-18
A novel, rapid, simple and green vortex-assisted surfactant-enhanced emulsification microextraction method based on solidification of a floating organic drop was developed for simultaneous separation/preconcentration and determination of ultra-trace amounts of naproxen and nabumetone by high-performance liquid chromatography with fluorescence detection. Parameters influencing the extraction efficiency of the analytes, such as type and volume of extractant, type and concentration of surfactant, sample pH, KCl concentration, sample volume, and vortex time, were investigated and optimized. Under optimal conditions, the calibration graph exhibited linearity in the range of 3.0-300.0 ng L⁻¹ for naproxen and 7.0-300.0 ng L⁻¹ for nabumetone with a good coefficient of determination (R² > 0.999). The limits of detection were 0.9 and 2.1 ng L⁻¹, respectively. The relative standard deviations for inter- and intra-day assays were in the range of 5.8-10.1% and 3.8-6.1%, respectively. The method was applied to the determination of naproxen and nabumetone in urine, water, wastewater and milk samples, and the accuracy was evaluated through recovery experiments. Copyright © 2015 Elsevier B.V. All rights reserved.
Spectrophotometric determination of ofloxacin in pharmaceuticals by redox reaction
NASA Astrophysics Data System (ADS)
Ramesh, P. J.; Basavaiah, K.; Rajendraprasad, N.; Devi, O. Zenita; Vinay, K. B.
2011-07-01
Two simple spectrophotometric methods have been developed to analyze ofloxacin (OFX) in pharmaceuticals. The methods are based on the oxidation of OFX by a measured excess of cerium(IV) sulfate in H2SO4 medium. This was followed by the determination of the unreacted oxidant by reacting it with either p-toluidine (p-TD) and measuring the absorbance at 525 nm (method A) or o-dianisidine (o-DA) and measuring the absorbance at 470 nm (method B). In both methods, the amount of cerium(IV) sulfate reacted corresponds to the amount of OFX. Calibration graphs were linear over the ranges of 0-120 and 0-4 µg/ml OFX for methods A and B, respectively. The calculated molar absorptivity (2.34×10³ and 5.99×10⁴), Sandell sensitivity, and limit of quantification for the methods are reported. The intra-day precision (%RSD) and accuracy (%RE) were < 8.0 and ≤ 4.0%, respectively, and the inter-day RSD and RE values were within 5.0 and 4.0%, respectively. The applicability of the methods was demonstrated by determining OFX in tablets with an accuracy (%RE) of < 3% and precision (%RSD) of ≤ 2.65%. The accuracy of the methods was further ascertained by recovery experiments via a standard-addition procedure.
Saraji, Mohammad; Ghambari, Hoda
2015-10-01
Trace analysis of chlorophenols in water was performed by simultaneous silylation and dispersive liquid-liquid microextraction followed by gas chromatography with mass spectrometry. Dispersive liquid-liquid microextraction was carried out using an organic solvent lighter than water (n-hexane). The effect of different silylating reagents on the method efficiency was investigated. The influence of derivatization reagent volume, presence of catalyst and derivatization/extraction time on the yield of the derivatization reaction was studied. Different parameters affecting extraction efficiency such as kind and volume of extraction and disperser solvents, pH of the sample and addition of salt were also investigated and optimized. Under the optimum conditions, the calibration graphs were linear in the range of 0.05-100 ng/mL and the limit of detection was 0.01 ng/mL. The enrichment factors were 242, 351, and 363 for 4-chlorophenol, 2,4-dichlorophenol, and 2,4,6-trichlorophenol, respectively. The values of intra- and inter-day relative standard deviations were in the range of 3.0-6.4 and 6.1-9.9%, respectively. The applicability of the method was investigated by analyzing water and wastewater samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Ulu, Sevgi Tatar
2009-06-01
A highly sensitive spectrofluorimetric method was developed, for the first time, for the analysis of three fluoroquinolone (FQ) antibacterials, namely enrofloxacin (ENR), levofloxacin (LEV) and ofloxacin (OFL), in pharmaceutical preparations through charge-transfer (CT) complex formation with 2,3,5,6-tetrachloro-p-benzoquinone (chloranil, CLA). At the optimum reaction conditions, the FQ-CLA complexes showed excitation maxima ranging from 359 to 363 nm and emission maxima ranging from 442 to 488 nm. Rectilinear calibration graphs were obtained in the concentration ranges of 50-1000, 50-1000 and 25-500 ng mL⁻¹ for ENR, LEV and OFL, respectively. The detection limits were found to be 17, 17 and 8 ng mL⁻¹ for ENR, LEV and OFL, respectively. Excipients used as additives in commercial formulations did not interfere in the analysis. The method was validated according to the ICH guidelines with respect to specificity, linearity, accuracy, precision and robustness. The proposed method was successfully applied to the analysis of pharmaceutical preparations. The results obtained were in good agreement with those obtained using the official method, with no significant difference in accuracy and precision as revealed by the accepted values of the t- and F-tests, respectively.
NASA Astrophysics Data System (ADS)
Chen, Suming; Zhang, Zhujun
2008-06-01
A method for the synthesis and evaluation of molecularly imprinted polymers is reported. As a selective solid-phase extraction sorbent, the polymers were coupled with electrochemical fluorimetry detection for the efficient determination of methotrexate in serum and urine. Methotrexate was preconcentrated in a molecularly imprinted solid-phase extraction microcolumn packed with the polymers, and then eluted. The eluate was detected by a fluorescence spectrophotometer after electrochemical oxidation. The conditions of preconcentration, elution, electrochemical oxidation and determination were carefully studied. Under the selected experimental conditions, the calibration graph of fluorescence intensity versus methotrexate concentration was linear from 4×10⁻⁹ g mL⁻¹ to 5×10⁻⁷ g mL⁻¹, and the detection limit was 8.2×10⁻¹⁰ g mL⁻¹ (3σ). The relative standard deviation was 3.92% (n = 7) for 1×10⁻⁷ g mL⁻¹ methotrexate. The experiments showed that the selectivity and sensitivity of fluorimetry could be greatly improved by the proposed method. This method has been successfully applied to the determination of methotrexate. At the same time, the binding characteristics of the polymers toward methotrexate were evaluated by batch and dynamic methods.
Laser-Induced Breakdown Spectroscopy Based Protein Assay for Cereal Samples.
Sezer, Banu; Bilge, Gonca; Boyaci, Ismail Hakki
2016-12-14
Protein content is an important quality parameter in terms of price, nutritional value, and labeling of various cereal samples. However, the conventional analysis methods, namely Kjeldahl and Dumas, have major drawbacks such as long analysis times, titration errors, and dependence on high-purity carrier gas. For this reason, there is an urgent need for rapid, reliable, and environmentally friendly technologies for protein analysis. The present study aims to develop a new method for protein analysis in wheat flour and whole meal by using laser-induced breakdown spectroscopy (LIBS), which is a multielemental, fast, and simple spectroscopic method. Unlike the Kjeldahl and Dumas methods, it has the potential to analyze a large number of samples in a considerably shorter time. In the study, nitrogen peaks in LIBS spectra of wheat flour and whole meal samples with different protein contents were correlated with results of the standard Dumas method with the aid of chemometric methods. A calibration graph showed good linearity for protein contents between 7.9 and 20.9%, with a coefficient of determination (R²) of 0.992. The limit of detection was calculated as 0.26%. The results indicated that LIBS is a promising and reliable method, with high sensitivity, for routine protein analysis in wheat flour and whole meal samples.
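The calibration-graph arithmetic described in this abstract can be sketched as an ordinary least-squares fit with a detection-limit estimate. The data below are hypothetical illustration values, not the study's LIBS measurements, and the 3.3·σ/slope rule is a common convention rather than necessarily the one the authors used:

```python
import numpy as np

def fit_calibration(x, y):
    """Least-squares line y = a*x + b with R^2, for a calibration graph."""
    a, b = np.polyfit(x, y, 1)
    pred = a * np.asarray(x) + b
    ss_res = np.sum((np.asarray(y) - pred) ** 2)
    ss_tot = np.sum((np.asarray(y) - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

def limit_of_detection(slope, blank_sd):
    """Common 3.3*sigma/slope estimate of the limit of detection."""
    return 3.3 * blank_sd / slope

# Hypothetical data: protein % (reference Dumas method) vs. normalized N-peak signal.
protein = np.array([7.9, 10.0, 12.5, 15.0, 17.5, 20.9])
signal = 0.04 * protein + 0.10 + np.array([0.001, -0.002, 0.002, -0.001, 0.0, 0.001])

slope, intercept, r2 = fit_calibration(protein, signal)
lod = limit_of_detection(slope, blank_sd=0.003)
```

A high R² (here close to 1) corresponds to the "good linearity" claim; the LOD depends directly on the scatter of blank measurements.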
Should Science Be Used to Teach Mathematical Skills?
ERIC Educational Resources Information Center
Kren, Sandra R.; Huntsberger, John P.
1977-01-01
Studies elementary school children's abilities in (1) measuring and constructing angles, and (2) interpreting and constructing linear graphs as a result of instructional formats. Partitioned into instructional treatments of (1) science, (2) science-mathematics, (3) mathematics, and (4) control were 161 fourth- and fifth-grade children. Mathematics…
Investigating Absolute Value: A Real World Application
ERIC Educational Resources Information Center
Kidd, Margaret; Pagni, David
2009-01-01
Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…
A calibration method for an infrared LVF-based spectroradiometer
NASA Astrophysics Data System (ADS)
Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin
2017-10-01
In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering spectral calibration and radiometric calibration. The spectral calibration process is as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used for spectral calibration validation, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region multi-point calibration method is used for the radiometric calibration to improve accuracy; results show the sought accuracy of 1% or better is achieved.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the same samples measured on two instruments, named the master and the slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary, the method may be more useful in practice. Copyright © 2016 Elsevier B.V. All rights reserved.
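The abstract does not spell out the constrained optimization, so the following is an illustrative sketch of the general idea (my formulation, not necessarily the authors'): keep the slave coefficients close to the master coefficient profile while fitting a few slave-measured spectra, via a ridge-style penalty.

```python
import numpy as np

def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
    """
    Adjust master regression coefficients to a slave instrument using only a
    few slave-measured spectra, by penalizing deviation from the master model:
        min_b ||y - X b||^2 + lam * ||b - b_master||^2
    Closed form: b = (X'X + lam*I)^(-1) (X'y + lam*b_master).
    """
    X = np.asarray(X_slave, float)
    n_feat = X.shape[1]
    A = X.T @ X + lam * np.eye(n_feat)
    rhs = X.T @ np.asarray(y_slave, float) + lam * np.asarray(b_master, float)
    return np.linalg.solve(A, rhs)
```

By construction, the transferred coefficients can only reduce (never increase) the residual on the slave spectra relative to using the master coefficients unchanged, since the objective at the minimizer is bounded by its value at b_master.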
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. It typically involves selecting the equation order (linear or quadratic) and the weighting factor that correctly model the data. Mis-selecting the calibration model degrades quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than the traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
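The first two decision steps described above can be sketched as follows. This is a simplified reading of the procedure (equal replicate counts assumed, a two-sided F-test, and a min-max spread criterion), not a reimplementation of the authors' RStudio script:

```python
import numpy as np
from scipy import stats

def needs_weighting(lloq_reps, uloq_reps, alpha=0.01):
    """F-test comparing replicate variances at the LLOQ and ULOQ.
    A significant variance ratio indicates heteroscedasticity, i.e. weighting
    is needed. Assumes equal replicate counts at both levels."""
    v_lo = np.var(lloq_reps, ddof=1)
    v_hi = np.var(uloq_reps, ddof=1)
    f = max(v_hi, v_lo) / min(v_hi, v_lo)
    df = len(lloq_reps) - 1
    p = 2 * stats.f.sf(f, df, df)  # two-sided p-value
    return p < alpha

def choose_weight(x, reps):
    """Pick 1/x or 1/x^2: the option giving the smallest spread of weighted
    normalized variances across the calibration levels wins."""
    x = np.asarray(x, float)
    var = np.array([np.var(r, ddof=1) for r in reps])
    spreads = {}
    for name, w in (("1/x", 1.0 / x), ("1/x^2", 1.0 / x**2)):
        wn = w * var
        wn = wn / wn.mean()  # normalize so spreads are comparable
        spreads[name] = wn.max() - wn.min()
    return min(spreads, key=spreads.get)
```

With replicate standard deviation proportional to concentration (variance ∝ x²), the 1/x² option equalizes the weighted variances and is selected, matching the usual bioanalytical heuristic.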
Xu, Andrew Wei
2010-09-01
In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance [Formula: see text]. This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allow us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case, and this difficulty has been underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it also can provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu .
Program Flow Analyzer. Volume 3
1984-08-01
metrics are defined using these basic terms. Of interest is another measure for the size of the program, called the volume: V = N × log₂ n. The unit of...correlated to actual data and most useful for test. The formula describing difficulty may be expressed as: D = (n1 × N2) / (2 × n2) = 1/L. Difficulty then, is the...linearly independent program paths through any program graph. A maximal set of these linearly independent paths, called a "basis set," can always be found
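The garbled snippet above appears to reference Halstead's software-science metrics. As a sketch (not the report's own code), the volume and difficulty measures can be computed directly from the operator/operand counts:

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead size and difficulty measures:
    n1/n2 = distinct operators/operands, N1/N2 = total occurrences."""
    n = n1 + n2                # vocabulary
    N = N1 + N2                # program length
    V = N * math.log2(n)       # volume
    D = (n1 * N2) / (2 * n2)   # difficulty (= 1/L, inverse of program level)
    return V, D

V, D = halstead(n1=10, n2=20, N1=50, N2=60)
```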
Cross-calibration of A.M. constellation sensors for long term monitoring of land surface processes
Meyer, D.; Chander, G.
2006-01-01
Data from multiple sensors must be used together to gain a more complete understanding of land surface processes at a variety of scales. Although higher-level products derived from different sensors (e.g., vegetation cover, albedo, surface temperature) can be validated independently, the degree to which these sensors and their products can be compared to one another is vastly improved if their relative spectro-radiometric responses are known. Most often, sensors are directly calibrated to diffuse solar irradiation or vicariously to ground targets. However, space-based targets are not traceable to metrological standards, and vicarious calibrations are expensive and provide a poor sampling of a sensor's full dynamic range. Cross-calibration of two sensors can augment these methods if certain conditions can be met: (1) the spectral responses are similar, (2) the observations are reasonably concurrent (similar atmospheric and solar illumination conditions), (3) errors due to misregistration of inhomogeneous surfaces can be minimized (including scale differences), and (4) the viewing geometry is similar (or some reasonable knowledge of surface bi-directional reflectance distribution functions is available). This study extends a previous study of Terra/MODIS and Landsat/ETM+ cross-calibration by including the Terra/ASTER and EO-1/ALI sensors, exploring the impacts of cross-calibrating sensors when the conditions described above are met to some degree but not perfectly. Measures for spectral response differences and methods for cross-calibrating such sensors are provided. The instruments are cross-calibrated using the Railroad Valley playa in Nevada. Best-fit linear coefficients (slope and offset) are provided for ALI-to-MODIS and ETM+-to-MODIS cross-calibrations, and root-mean-squared errors (RMSEs) and correlation coefficients are provided to quantify the uncertainty in these relationships.
Due to problems with direct calibration of ASTER data, linear fits were developed between ASTER and ETM+ to assess the impacts of spectral bandpass differences between the two systems. In theory, the linear fits and uncertainties can be used to compare radiance and reflectance products derived from each instrument.
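The slope/offset fits with RMSE and correlation described above can be sketched with ordinary least squares. The radiance values below are hypothetical placeholders, not the study's Railroad Valley measurements:

```python
import numpy as np

def cross_calibrate(rad_a, rad_b):
    """Best-fit slope/offset mapping sensor A radiances onto sensor B,
    with RMSE and correlation coefficient to quantify the relationship."""
    slope, offset = np.polyfit(rad_a, rad_b, 1)
    pred = slope * np.asarray(rad_a) + offset
    rmse = float(np.sqrt(np.mean((np.asarray(rad_b) - pred) ** 2)))
    r = float(np.corrcoef(rad_a, rad_b)[0, 1])
    return slope, offset, rmse, r

# Hypothetical near-coincident playa radiances from two sensors.
a = np.array([50.0, 80.0, 120.0, 160.0, 200.0])
b = 1.02 * a + 1.5
slope, offset, rmse, r = cross_calibrate(a, b)
```

In practice the scatter (and hence the RMSE) reflects residual differences in spectral response, atmosphere, registration, and viewing geometry, i.e. how far the four conditions above are from being perfectly met.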
Processes and Reasoning in Representations of Linear Functions
ERIC Educational Resources Information Center
Adu-Gyamfi, Kwaku; Bossé, Michael J.
2014-01-01
This study examined student actions, interpretations, and language with respect to questions raised regarding tabular, graphical, and algebraic representations in the context of functions. The purpose was to investigate students' interpretations and specific ways of working within the table, the graph, and the algebraic representation on notions fundamental to a…
Introducing Conservation of Momentum
ERIC Educational Resources Information Center
Brunt, Marjorie; Brunt, Geoff
2013-01-01
The teaching of the principle of conservation of linear momentum is considered (ages 15+). From the principle, the momenta of two masses in an isolated system are considered. Sketch graphs of the momenta make Newton's laws appear obvious. Examples using different collision conditions are considered. Conservation of momentum is considered…
Inferring Action Structure and Causal Relationships in Continuous Sequences of Human Action
2014-01-01
language processing literature (e.g., Brent, 1999; Venkataraman, 2001), and which were also used by Goldwater et al. (2009). Precision (P) is the...trees in oriented linear graphs. Simon Stevin: Wis- en Natuurkundig Tijdschrift, 28, 203. Venkataraman, A. (2001). A statistical model for word discovery
ERIC Educational Resources Information Center
Joram, Elana; Hartman, Christina; Trafton, Paul R.
2004-01-01
This article describes a unit of instruction designed to promote algebraic thinking in second graders. Students examined second- and fourth-grade students' ages and heights on a table and graph and described the patterns that they observed in the data.
Networking in the Presence of Adversaries
2014-09-12
a topological graph with linear algebraic constraints. As a practical example, such a model arises from an electric power system in which the power...flow is governed by Kirchhoff's laws. When an adversary launches an MiM data attack, part of the sensor data are intercepted and substituted with
ERIC Educational Resources Information Center
Sinclair, Nathalie; Armstrong, Alayne
2011-01-01
Piecewise linear functions and story graphs are concepts usually associated with algebra, but in the authors' classroom, they found success teaching this topic in a distinctly geometrical manner. The focus of the approach was less on learning geometric concepts and more on using spatial and kinetic reasoning. It not only supports the learning of…
The Effects of Multiple Linked Representations on Student Learning in Mathematics.
ERIC Educational Resources Information Center
Ozgun-Koca, S. Asli
This study investigated the effects on student understanding of linear relationships using the linked representation software VideoPoint as compared to using semi-linked representation software. It investigated students' attitudes towards and preferences for mathematical representations--equations, tables, or graphs. An Algebra I class was divided…
Design and calibration of a six-axis MEMS sensor array for use in scoliosis correction surgery
NASA Astrophysics Data System (ADS)
Benfield, David; Yue, Shichao; Lou, Edmond; Moussa, Walied A.
2014-08-01
A six-axis sensor array has been developed to quantify the 3D force and moment loads applied in scoliosis correction surgery. Initially this device was developed to be applied during scoliosis correction surgery, augmented onto existing surgical instrumentation; however, use as a general load sensor is also feasible. The development has included the design, microfabrication, deployment and calibration of a sensor array. The sensor array consists of four membrane devices, each containing piezoresistive sensing elements, generating a total of 16 differential voltage outputs. The calibration procedure has made use of a custom-built load application frame, which allows quantified forces and moments to be applied and compared to the outputs from the sensor array. Linear or non-linear calibration equations are generated to convert the voltage outputs from the sensor array back into 3D force and moment information for display or analysis.
High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system
NASA Astrophysics Data System (ADS)
Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng
2017-09-01
Several different integration times are typically set for a wide-dynamic-range, linear, continuously variable integration time infrared radiometry system; traditional calibration-based non-uniformity correction (NUC) must therefore be conducted for each integration time in turn and requires several calibration sources, which makes the calibration and NUC process time-consuming. In this paper, the difference in NUC coefficients between different integration times is discussed, and a novel NUC method called high-efficiency NUC, which builds on traditional calibration-based non-uniformity correction, is proposed. It obtains the correction coefficients for all integration times in the whole linear dynamic range by recording only three images of a standard blackbody. First, the mathematical procedure of the proposed non-uniformity correction method is validated, and then its performance is demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean value of the normalized root mean square (NRMS) error is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C show that this method has a higher accuracy than traditional calibration-based NUC, while a good correction effect remains at other integration times and temperatures. Moreover, it greatly reduces the number of correction and temperature sampling points, offers good real-time performance, and is suitable for field measurement.
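For context, the traditional calibration-based NUC the paper builds on can be sketched as a two-point (gain/offset) correction per pixel, derived from two uniform blackbody frames. This is the standard baseline technique, not the paper's high-efficiency variant:

```python
import numpy as np

def two_point_nuc(img_low, img_high):
    """Per-pixel gain/offset from two uniform blackbody frames so that every
    pixel maps its raw response onto the frame-mean (ideal) response."""
    mean_low, mean_high = img_low.mean(), img_high.mean()
    gain = (mean_high - mean_low) / (img_high - img_low)
    offset = mean_low - gain * img_low
    return gain, offset

def apply_nuc(raw, gain, offset):
    """Apply the per-pixel linear correction to a raw frame."""
    return gain * raw + offset
```

Because the per-pixel response is assumed linear, any uniform scene between the two calibration points also corrects to a flat image; the paper's contribution is avoiding a separate pair of blackbody frames for every integration time.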
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = A x_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
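The core ill-posed inverse problem y_t = A x_t can be illustrated with a plain Tikhonov-regularized solve. This is a generic sketch of the regularization idea, not the paper's multilevel state-space model:

```python
import numpy as np

def ridge_inverse(A, y, lam):
    """Tikhonov-regularized solution of the ill-posed inverse problem y = A x:
        x_hat = argmin_x ||y - A x||^2 + lam * ||x||^2
              = (A'A + lam*I)^(-1) A'y
    where lam plays the role of the regularization parameter to be calibrated."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

In the traffic-matrix application, y would hold aggregate link loads, A the routing (aggregation) matrix, and x the latent point-to-point flows; choosing lam well is exactly the calibration problem the abstract mentions.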
Efficient Wide Baseline Structure from Motion
NASA Astrophysics Data System (ADS)
Michelini, Mario; Mayer, Helmut
2016-06-01
This paper presents a Structure from Motion approach for complex unorganized image sets. To achieve high accuracy and robustness, image triplets are employed and (an approximate) camera calibration is assumed to be known. The focus lies on a complete linking of images even in case of large image distortions, e.g., caused by wide baselines, as well as weak baselines. A method for embedding image descriptors into Hamming space is proposed for fast image similarity ranking. The latter is employed to limit the number of pairs to be matched by a wide baseline method. An iterative graph-based approach is proposed formulating image linking as the search for a terminal Steiner minimum tree in a line graph. Finally, additional links are determined and employed to improve the accuracy of the pose estimation. By this means, loops in long image sequences are implicitly closed. The potential of the proposed approach is demonstrated by results for several complex image sets, also in comparison with VisualSFM.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Johnson, B. Carol; Early, Edward E.; Eplee, Robert E., Jr.; Barnes, Robert A.; Caffrey, Robert T.
1999-01-01
The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) was originally calibrated by the instrument's manufacturer, Santa Barbara Research Center (SBRC), in November 1993. In preparation for an August 1997 launch, the SeaWiFS Project and the National Institute of Standards and Technology (NIST) undertook a second calibration of SeaWiFS in January and April 1997 at the facility of the spacecraft integrator, Orbital Sciences Corporation (OSC). This calibration occurred in two phases, the first after the final thermal vacuum test, and the second after the final vibration test of the spacecraft. For the calibration, SeaWiFS observed an integrating sphere from the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) at four radiance levels. The spectral radiance of the sphere at these radiance levels was also measured by the SeaWiFS Transfer Radiometer (SXR). In addition, during the calibration, SeaWiFS and the SXR observed the sphere at 16 radiance levels to determine the linearity of the SeaWiFS response. As part of the calibration analysis, the GSFC sphere was also characterized using a GSFC spectroradiometer. The 1997 calibration agrees with the initial 1993 calibration to within +/- 4%. The new calibration coefficients, computed before and after the vibration test, agree to within 0.5%. The response of the SeaWiFS channels in each band is linear to better than 1%. In order to compare to previous and current methods, the SeaWiFS radiometric responses are presented in two ways: using the nominal center wavelengths for the eight bands; and using band-averaged spectral radiances. The band-averaged values are used in the flight calibration table. An uncertainty analysis for the calibration coefficients is also presented.
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result from the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms a MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
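The MRF energy minimization idea can be illustrated with the ICM baseline the abstract mentions (the study itself uses graph cuts). The sketch below assumes a simple binary Potts model with a squared-error data term; the energy definition is my illustrative choice, not the paper's log-PDF formulation:

```python
import numpy as np

def energy(labels, obs, beta):
    """Potts-MRF energy: squared-error data term plus beta per disagreeing
    4-neighbor pair."""
    data = np.sum((obs - labels) ** 2)
    smooth = np.sum(labels[:, 1:] != labels[:, :-1]) \
           + np.sum(labels[1:, :] != labels[:-1, :])
    return data + beta * smooth

def icm_binary(obs, beta=1.0, n_iter=10):
    """Iterated conditional modes: greedily set each pixel to the binary label
    (0/1) minimizing its local energy; this never increases the global energy
    and smooths a noisy coarse classification."""
    labels = (obs > 0.5).astype(int)  # coarse initial classification
    rows, cols = obs.shape
    for _ in range(n_iter):
        for i in range(rows):
            for j in range(cols):
                best, best_e = labels[i, j], None
                for lab in (0, 1):
                    e = (obs[i, j] - lab) ** 2
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            e += beta * (lab != labels[ni, nj])
                    if best_e is None or e < best_e:
                        best_e, best = e, lab
                labels[i, j] = best
    return labels
```

Graph-cut methods solve the same binary minimization exactly via max-flow/min-cut, whereas ICM only guarantees a local minimum; that is why the paper's iterative graph-cut step can outperform it.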
NASA Astrophysics Data System (ADS)
Sun, Limin; Chen, Lin
2017-10-01
Residual mode correction is found crucial in calibrating linear resonant absorbers for flexible structures. The classic modal representation augmented with stiffness and inertia correction terms accounting for non-resonant modes improves the calibration accuracy and meanwhile avoids complex modal analysis of the full system. This paper explores the augmented modal representation in calibrating control devices with nonlinearity, by studying a taut cable with an attached general viscous damper and its Equivalent Dynamic Systems (EDSs), i.e. the augmented modal representations connected to the same damper. Where nonlinearity is concerned, Frequency Response Functions (FRFs) of the EDSs are investigated in detail for parameter calibration, using the harmonic balance method in combination with numerical continuation. The FRFs of the EDSs and the corresponding calibration results are then compared with those of the full system documented in the literature for varied structural modes, damper locations and nonlinearity. General agreement is found, and in particular the EDS with both stiffness and inertia corrections (quasi-dynamic correction) performs best among the available approximate methods. This indicates that the augmented modal representation, although derived from linear cases, is applicable to a relatively wide range of damper nonlinearity. Calibration of nonlinear devices by this means still requires numerical analysis, but the efficiency is largely improved owing to the system order reduction.
Martinuzzo, Marta E; Duboscq, Cristina; Lopez, Marina S; Barrera, Luis H; Vinuales, Estela S; Ceresetto, Jose; Forastiero, Ricardo R; Oyhamburu, Jose
2018-06-01
The rivaroxaban oral anticoagulant does not need laboratory monitoring, but in some situations measurement of plasma levels is useful. The objective of this paper was to verify the analytical performance of, and compare, two rivaroxaban-calibrated anti-Xa assay/coagulometer systems with specific or other-brand calibrators. In 59 samples drawn at trough or peak from patients taking rivaroxaban, plasma levels were measured by HemosIL Liquid Anti-Xa on the ACL TOP 300/500 and STA Liquid Anti-Xa on the TCoag Destiny Plus. HemosIL and STA rivaroxaban calibrators and controls were used, following CLSI guideline procedures EP15-A3 for precision and trueness, EP6 for linearity, and EP9 for methods comparison. Within-run and within-laboratory coefficients of variation (CVR and CVWL, respectively) of plasma rivaroxaban were < 4.2% and < 4.85%, with bias < 7.4% and < 6.5%, for the HemosIL-ACL TOP and STA-Destiny systems, respectively. Linearity was verified from 8 to 525 ng/mL. Deming regression for methods comparison presented R = 0.963, 0.968 and 0.982, with a mean CV of 13.3% when using different systems and calibrations. The analytical performance of plasma rivaroxaban measurement was acceptable in both systems, and results from the reagent/coagulometer systems are comparable even when calibrating with material of a different brand.
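Deming regression, the method-comparison technique used above, fits a line while allowing error in both measurement methods. A minimal sketch follows; `delta`, the ratio of the two methods' error variances, defaults to 1 (orthogonal regression), which is an assumption since the abstract does not state the ratio used.

```python
import math

def deming(x, y, delta=1.0):
    """Deming regression for method comparison: returns (slope, intercept).
    delta is the ratio of the y-method to x-method error variances."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx
```

Unlike ordinary least squares, the fit is symmetric in the two methods when delta = 1, which is why it is preferred when neither assay is a gold standard.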
NASA Astrophysics Data System (ADS)
Cui, Bing; Zhao, Chunhui; Ma, Tiedong; Feng, Chi
2017-02-01
In this paper, the cooperative adaptive consensus tracking problem for heterogeneous nonlinear multi-agent systems on directed graphs is addressed. Each follower is modelled as a general nonlinear system with unknown and nonidentical nonlinear dynamics, disturbances and actuator failures. Cooperative fault-tolerant neural-network tracking controllers with online adaptive learning features are proposed to guarantee that all agents synchronise to the trajectory of one leader with bounded adjustable synchronisation errors. With the help of a linear-quadratic-regulator-based optimal design, a graph-dependent Lyapunov proof provides error bounds that depend on the graph topology, one virtual matrix and some design parameters. Of particular interest is that if the control gain is selected appropriately, the proposed control scheme can be implemented in a unified framework whether or not faults are present; furthermore, fault detection and isolation need not be implemented. Finally, a simulation is given to verify the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Bibak, Khodakhast; Kapron, Bruce M.; Srinivasan, Venkatesh
2016-09-01
Graphs embedded into surfaces have many important applications, in particular, in combinatorics, geometry, and physics. For example, ribbon graphs and their counting are of great interest in string theory and quantum field theory (QFT). Recently, Koch et al. (2013) [12] gave a refined formula for counting ribbon graphs and discussed its applications to several physics problems. An important factor in this formula is the number of surface-kernel epimorphisms from a co-compact Fuchsian group to a cyclic group. The aim of this paper is to give an explicit and practical formula for the number of such epimorphisms. As a consequence, we obtain an 'equivalent' form of Harvey's famous theorem on the cyclic groups of automorphisms of compact Riemann surfaces. Our main tool is an explicit formula for the number of solutions of restricted linear congruences recently proved by Bibak et al. using properties of Ramanujan sums and of the finite Fourier transform of arithmetic functions.
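The counting formula rests on Ramanujan sums. A small sketch of the classical von Sterneck/Hölder evaluation c_q(n) = μ(q/d)·φ(q)/φ(q/d) with d = gcd(n, q), which is standard number theory rather than code from the paper:

```python
import math

def _factor(n):
    """Prime factorization as {prime: exponent} by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mobius(n):
    f = _factor(n)
    if any(e > 1 for e in f.values()):
        return 0
    return -1 if len(f) % 2 else 1

def totient(n):
    result = n
    for p in _factor(n):
        result = result // p * (p - 1)
    return result

def ramanujan_sum(q, n):
    """c_q(n) via the von Sterneck/Hölder formula; always an integer."""
    d = math.gcd(n, q)
    return mobius(q // d) * (totient(q) // totient(q // d))
```

Sanity checks: c_q(0) = φ(q), and c_2(1) = −1, matching the exponential-sum definition.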
NASA Astrophysics Data System (ADS)
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero-temperature limit of the cavity equations and as such is formally simple (a fixed-point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristic and the dhea solver, a branch-and-cut integer linear programming approach. The comparison shows that the cavity algorithm outperforms the two algorithms on most large instances, both in running time and in quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
Visibility graph analysis of heart rate time series and bio-marker of congestive heart failure
NASA Astrophysics Data System (ADS)
Bhaduri, Anirban; Bhaduri, Susmita; Ghosh, Dipak
2017-09-01
The study of RR-interval time series in congestive heart failure has long been an active area of research, including with non-linear methods. In this article the cardiac dynamics of the heart beat are explored in the light of complex network analysis, viz. the visibility graph method. Heart beat (RR interval) time series data taken from the Physionet database [46, 47], belonging to two groups of subjects, diseased (congestive heart failure; 29 in number) and normal (54 in number), are analyzed with this technique. The overall results show that a quantitative parameter can significantly differentiate between the diseased and normal subjects, as well as between different stages of the disease. Further, when the data are split into periods of around 1 hour each and analyzed separately, the same consistent differences appear. This quantitative parameter obtained using visibility graph analysis can therefore be used as a potential bio-marker, as well as in a subsequent alarm-generation mechanism for predicting the onset of congestive heart failure.
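The natural visibility graph construction can be sketched directly from its definition: two samples are linked if the straight line of sight between them clears every intermediate sample. This naive O(n³) version is illustrative, not the authors' implementation.

```python
def visibility_edges(series):
    """Natural visibility graph of a time series.
    Nodes are sample indices; (a, b) are connected iff every intermediate
    sample lies strictly below the line joining (a, y_a) and (b, y_b)."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            # visibility criterion checked at each intermediate index c
            if all(series[c] < series[b]
                   + (series[a] - series[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

Consecutive samples are always mutually visible, so the graph is connected; network measures (e.g. degree distribution) computed on it serve as the quantitative parameters the study refers to.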
NASA Astrophysics Data System (ADS)
Ren, Jie
2017-12-01
The process by which a kinesin motor couples its ATPase activity with concerted mechanical hand-over-hand steps is a foremost topic of molecular motor physics. Two major routes toward elucidating kinesin mechanisms are the motility performance characterization of velocity and run length, and single-molecular state detection experiments. However, these two sets of experimental approaches are largely uncoupled to date. Here, we introduce an integrative motility state analysis based on a theorized kinetic graph theory for kinesin, which, on one hand, is validated by a wealth of accumulated motility data, and, on the other hand, allows for rigorous quantification of state occurrences and chemomechanical cycling probabilities. An interesting linear scaling for kinesin motility performance across species is discussed as well. An integrative kinetic graph theory analysis provides a powerful tool to bridge motility and state characterization experiments, so as to forge a unified effort for the elucidation of the working mechanisms of molecular motors.
Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K
2001-01-01
When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and are often hampered by local-minima problems. In this paper a new straightforward and automatic procedure, based on the response surface method (RSM), is proposed for selecting the best identifiable parameters. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. In this paper, however, RSM is used for selecting the dominant parameters by evaluating parameter sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
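The RSM screening idea, relating the output to parameter levels by a regression model and ranking parameters by sensitivity, can be sketched as follows. The first-order model and the ranking-by-coefficient rule are simplifying assumptions; the paper allows second-order models as well.

```python
import numpy as np

def rsm_screen(X, y):
    """Fit a first-order response surface y ~ b0 + sum(bi * xi) by least
    squares and rank parameters by |bi| as a simple sensitivity screen.
    X: list of parameter-level vectors; y: corresponding model outputs."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    ranking = np.argsort(np.abs(coef[1:]))[::-1]  # most sensitive first
    return ranking, coef
```

In practice X would come from a designed experiment (e.g. a factorial design) over the predefined parameter region, and the top-ranked parameters would be retained for calibration.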
NASA Astrophysics Data System (ADS)
Chinowsky, Timothy M.; Yee, Sinclair S.
2002-02-01
Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.
Billard, Hélène; Simon, Laure; Desnots, Emmanuelle; Sochard, Agnès; Boscher, Cécile; Riaublanc, Alain; Alexandre-Gouabau, Marie-Cécile; Boquien, Clair-Yves
2016-08-01
Human milk composition analysis seems essential to adapt human milk fortification for preterm neonates. The Miris human milk analyzer (HMA), based on mid-infrared methodology, is convenient for a unique determination of macronutrients. However, HMA measurements are not totally comparable with reference methods (RMs). The primary aim of this study was to compare HMA results with results from biochemical RMs for a large range of protein, fat, and carbohydrate contents and to establish a calibration adjustment. Human milk was fractionated in protein, fat, and skim milk by covering large ranges of protein (0-3 g/100 mL), fat (0-8 g/100 mL), and carbohydrate (5-8 g/100 mL). For each macronutrient, a calibration curve was plotted by linear regression using measurements obtained using HMA and RMs. For fat, 53 measurements were performed, and the linear regression equation was HMA = 0.79RM + 0.28 (R² = 0.92). For true protein (29 measurements), the linear regression equation was HMA = 0.9RM + 0.23 (R² = 0.98). For carbohydrate (15 measurements), the linear regression equation was HMA = 0.59RM + 1.86 (R² = 0.95). A homogenization step with a disruptor coupled to a sonication step was necessary to obtain better accuracy of the measurements. Good repeatability (coefficient of variation < 7%) and reproducibility (coefficient of variation < 17%) were obtained after calibration adjustment. New calibration curves were developed for the Miris HMA, allowing accurate measurements in large ranges of macronutrient content. This is necessary for reliable use of this device in individualizing nutrition for preterm newborns. © The Author(s) 2015.
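The reported calibration lines (HMA = a·RM + b) can be inverted to correct a raw HMA reading back to the reference-method scale. A sketch using the coefficients quoted above; the device's own calibration-adjustment procedure may differ.

```python
# Calibration lines reported in the abstract: HMA = a * RM + b
CALIBRATION = {
    "fat":          (0.79, 0.28),   # R² = 0.92, n = 53
    "true_protein": (0.90, 0.23),   # R² = 0.98, n = 29
    "carbohydrate": (0.59, 1.86),   # R² = 0.95, n = 15
}

def corrected_value(macronutrient, hma_reading):
    """Estimate the reference-method value (g/100 mL) from a raw HMA reading
    by inverting the linear calibration: RM = (HMA - b) / a."""
    a, b = CALIBRATION[macronutrient]
    return (hma_reading - b) / a
```

For example, a raw fat reading generated from a true value of 5 g/100 mL is mapped back to 5 g/100 mL by the inverse line.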
Calibrating page-sized Gafchromic EBT3 films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crijns, W.; Maes, F.; Heide, U. A. van der
2013-01-15
Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc therapy and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (transmittance, T). Inside the transmittance domain, a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T₀) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied, and page-sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread over 4 calibration films, the second (II) used 16 ROIs spread over 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread over a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively.
Results: A calibration based on two films was the optimal balance between cost effectiveness and dosimetric accuracy. The validation resulted in dose errors of 1%-2% for the two different time points, with a maximal absolute dose error around 0.05 Gy. The lateral correction reduced the RMSE values on the sides of the film to the RMSE values at the center of the film. Conclusions: EBT3 Gafchromic films were calibrated for large-field dosimetry with a limited number of page-sized films and simple static calibration fields. The transmittance was modeled as a linear combination of two transmittance states and associated with dose using a rational calibration function. Additionally, the lateral scan effect was resolved in the calibration function itself. This allows the use of page-sized films. Only two calibration films were required to estimate both the dose and the lateral response. The calibration films were used over the course of a week, with residual dose errors ≤ 2% or ≤ 0.05 Gy.
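The two-state transmittance model and rational dose map described above can be sketched as follows. The specific rational form D = k·x/(1 − x) is an illustrative assumption, not the paper's fitted calibration function, and the lateral scan correction is omitted.

```python
def mixing_fraction(T, T0, Tinf):
    """Polymer fraction x implied by a measured transmittance T, given the
    monomer state T0 (unexposed) and polymer state Tinf (saturated)."""
    return (T0 - T) / (T0 - Tinf)

def dose_from_transmittance(T, T0, Tinf, k):
    """Dose via an assumed rational map D = k * x / (1 - x)."""
    x = mixing_fraction(T, T0, Tinf)
    return k * x / (1.0 - x)

def transmittance_from_dose(D, T0, Tinf, k):
    """Inverse of the rational map: x = D / (D + k), then the linear
    combination of the two transmittance states."""
    x = D / (D + k)
    return T0 + x * (Tinf - T0)
```

The forward and inverse maps round-trip exactly, which is the property a practical calibration function needs for dose readout.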
NASA Technical Reports Server (NTRS)
Faulkner, K. G.; Gluer, C. C.; Grampp, S.; Genant, H. K.
1993-01-01
Quantitative computed tomography (QCT) has been shown to be a precise and sensitive method for evaluating spinal bone mineral density (BMD) and skeletal response to aging and therapy. Precise and accurate determination of BMD using QCT requires a calibration standard to compensate for and reduce the effects of beam-hardening artifacts and scanner drift. The first standards were based on dipotassium hydrogen phosphate (K2HPO4) solutions. Recently, several manufacturers have developed stable solid calibration standards based on calcium hydroxyapatite (CHA) in water-equivalent plastic. Due to differences in attenuating properties of the liquid and solid standards, the calibrated BMD values obtained with each system do not agree. In order to compare and interpret the results obtained on both systems, cross-calibration measurements were performed in phantoms and patients using the University of California San Francisco (UCSF) liquid standard and the Image Analysis (IA) solid standard on the UCSF GE 9800 CT scanner. From the phantom measurements, a highly linear relationship was found between the liquid- and solid-calibrated BMD values. No influence on the cross-calibration due to simulated variations in body size or vertebral fat content was seen, though a significant difference in the cross-calibration was observed between scans acquired at 80 and 140 kVp. From the patient measurements, a linear relationship between the liquid (UCSF) and solid (IA) calibrated values was derived for GE 9800 CT scanners at 80 kVp (IA = 1.15 × UCSF − 7.32). (ABSTRACT TRUNCATED AT 250 WORDS)
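The derived cross-calibration line is simple to apply in code; a sketch valid only under the stated conditions (GE 9800 CT scanner at 80 kVp), using the coefficients reported in the abstract.

```python
def ucsf_to_ia(ucsf_bmd):
    """Convert liquid-calibrated (UCSF) BMD to solid-calibrated (IA) BMD
    using the reported 80 kVp relationship: IA = 1.15 * UCSF - 7.32."""
    return 1.15 * ucsf_bmd - 7.32

def ia_to_ucsf(ia_bmd):
    """Inverse conversion, solid-calibrated back to liquid-calibrated BMD."""
    return (ia_bmd + 7.32) / 1.15
```

Because the relationship was observed to differ at 140 kVp, readings acquired at other tube voltages should not be converted with these coefficients.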
An Expert System toward Building an Earth Science Knowledge Graph
NASA Astrophysics Data System (ADS)
Zhang, J.; Duan, X.; Ramachandran, R.; Lee, T. J.; Bao, Q.; Gatlin, P. N.; Maskey, M.
2017-12-01
In this ongoing work, we aim to build foundations of Cognitive Computing for Earth Science research. The goal of our project is to develop an end-to-end automated methodology for incrementally constructing Knowledge Graphs for Earth Science (KG4ES). These knowledge graphs can then serve as the foundational components for building cognitive systems in Earth science, enabling researchers to uncover new patterns and hypotheses that are virtually impossible to identify today. In addition, this research focuses on developing mining algorithms needed to exploit these constructed knowledge graphs. As such, these graphs will free knowledge from publications that are generated in a very linear, deterministic manner, and structure knowledge in a way that users can both interact and connect with relevant pieces of information. Our major contributions are two-fold. First, we have developed an end-to-end methodology for constructing Knowledge Graphs for Earth Science (KG4ES) using existing corpus of journal papers and reports. One of the key challenges in any machine learning, especially deep learning applications, is the need for robust and large training datasets. We have developed techniques capable of automatically retraining models and incrementally building and updating KG4ES, based on ever evolving training data. We also adopt the evaluation instrument based on common research methodologies used in Earth science research, especially in Atmospheric Science. Second, we have developed an algorithm to infer new knowledge that can exploit the constructed KG4ES. In more detail, we have developed a network prediction algorithm aiming to explore and predict possible new connections in the KG4ES and aid in new knowledge discovery.
Alignment of Tractograms As Graph Matching.
Olivetti, Emanuele; Sharmin, Nusrat; Avesani, Paolo
2016-01-01
The white matter pathways of the brain can be reconstructed as 3D polylines, called streamlines, through the analysis of diffusion magnetic resonance imaging (dMRI) data. The whole set of streamlines is called tractogram and represents the structural connectome of the brain. In multiple applications, like group-analysis, segmentation, or atlasing, tractograms of different subjects need to be aligned. Typically, this is done with registration methods, that transform the tractograms in order to increase their similarity. In contrast with transformation-based registration methods, in this work we propose the concept of tractogram correspondence, whose aim is to find which streamline of one tractogram corresponds to which streamline in another tractogram, i.e., a map from one tractogram to another. As a further contribution, we propose to use the relational information of each streamline, i.e., its distances from the other streamlines in its own tractogram, as the building block to define the optimal correspondence. We provide an operational procedure to find the optimal correspondence through a combinatorial optimization problem and we discuss its similarity to the graph matching problem. In this work, we propose to represent tractograms as graphs and we adopt a recent inexact sub-graph matching algorithm to approximate the solution of the tractogram correspondence problem. On tractograms generated from the Human Connectome Project dataset, we report experimental evidence that tractogram correspondence, implemented as graph matching, provides much better alignment than affine registration and comparable if not better results than non-linear registration of volumes.
Short paths in expander graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleinberg, J.; Rubinfeld, R.
Graph expansion has proved to be a powerful general tool for analyzing the behavior of routing algorithms and the interconnection networks on which they run. We develop new routing algorithms and structural results for bounded-degree expander graphs. Our results are unified by the fact that they are all based upon, and extend, a body of work asserting that expanders are rich in short, disjoint paths. In particular, our work has consequences for the disjoint paths problem, multicommodity flow, and graph minor containment. We show: (i) A greedy algorithm for approximating the maximum disjoint paths problem achieves a polylogarithmic approximation ratio in bounded-degree expanders. Although our algorithm is both deterministic and on-line, its performance guarantee is an improvement over previous bounds in expanders. (ii) For a multicommodity flow problem with arbitrary demands on a bounded-degree expander, there is a (1 + ε)-optimal solution using only flow paths of polylogarithmic length. It follows that the multicommodity flow algorithm of Awerbuch and Leighton runs in nearly linear time per commodity in expanders. Our analysis is based on establishing the following: given edge weights on an expander G, one can increase some of the weights very slightly so that the resulting shortest-path metric is smooth - the min-weight path between any pair of nodes uses a polylogarithmic number of edges. (iii) Every bounded-degree expander on n nodes contains every graph with O(n/log^{O(1)} n) nodes and edges as a minor.
Calibration of the optical torque wrench.
Pedaci, Francesco; Huang, Zhuangxiong; van Oene, Maarten; Dekker, Nynke H
2012-02-13
The optical torque wrench is a laser trapping technique that expands the capability of standard optical tweezers to torque manipulation and measurement, using the laser's linear polarization to orient tailored microscopic birefringent particles. The ability to measure torques of the order of k_BT (∼4 pN nm) is especially important in the study of biophysical systems at the molecular and cellular level. Quantitative torque measurements rely on an accurate calibration of the instrument. Here we describe and implement a set of calibration approaches for the optical torque wrench, including methods that have direct analogs in linear optical tweezers as well as others developed specifically for the angular variables. We compare the different methods, analyze their differences, and make recommendations regarding their implementation.
Application of composite small calibration objects in traffic accident scene photogrammetry.
Chen, Qiang; Xu, Hongguo; Tan, Lidong
2015-01-01
In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
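The two-dimensional direct linear transformation underlying the image rectification can be sketched as a standard planar homography estimation by SVD. This is the textbook algebraic DLT; the paper's improvement, minimizing the reprojection error over all calibration objects jointly with nonlinear optimization, would refine this initial estimate.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 planar homography H mapping src -> dst (2-D DLT).
    Each correspondence contributes two homogeneous linear equations; the
    solution is the right singular vector of the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale

def apply_h(H, pt):
    """Apply homography H to a 2-D point."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With at least four non-degenerate point pairs per calibration object, the recovered H rectifies the accident-scene photograph into the composite calibration object's coordinate frame.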
Ravichandran, Ramamoorthy; Binukumar, Johnson Pichy; Davis, Cheriyathmanjiyil Antony
2013-01-01
The measured dose in water at a reference point in a phantom is a primary parameter for planning treatment monitor units (MU), in both conventional and intensity-modulated/image-guided treatments. Traceability of dose accuracy therefore still depends mainly on the calibration factor of the ion chamber/dosimeter provided by accredited Secondary Standard Dosimetry Laboratories (SSDLs) under the International Atomic Energy Agency (IAEA) network of laboratories. Data related to N_D,w calibrations, thermoluminescent dosimetry (TLD) postal dose validation, inter-comparison of different dosimeters/electrometers, and the validity of N_D,w calibrations obtained from different calibration laboratories were analyzed to find the extent of accuracy achievable. N_D,w factors in gray/coulomb calibrated at IBA GmbH, Germany showed a mean variation of about a 0.2% increase per year in three Farmer chambers over three subsequent calibrations. Another ion chamber calibrated at a different accredited laboratory (PTW, Germany) showed consistent N_D,w over a 9-year period. The strontium-90 beta check-source response indicated long-term stability of the ion chambers within 1% for three chambers. Results of the IAEA postal TL “dose intercomparison” for three photon beams, 6 MV (two) and 15 MV (one), agreed well with our reported doses, with a mean deviation of 0.03% (SD 0.87%) (n = 9). All the chambers/electrometers calibrated by a single SSDL realized absorbed doses in water within a 0.13% standard deviation. However, differences of about 1-2% in absorbed dose estimates were observed when dosimeters calibrated at different calibration laboratories were compared in solid phantoms. Our data therefore imply that the dosimetry level maintained for clinical use of linear accelerator photon beams is within recommended levels of accuracy, and uncertainties are within reported values. PMID:24672156
NASA Technical Reports Server (NTRS)
Fulton, James P. (Inventor); Namkung, Min (Inventor); Simpson, John W. (Inventor); Wincheski, Russell A. (Inventor); Nath, Shridhar C. (Inventor)
1998-01-01
A thickness gauging instrument uses a flux focusing eddy current probe and two-point nonlinear calibration algorithm. The instrument is small and portable due to the simple interpretation and operational characteristics of the probe. A nonlinear interpolation scheme incorporated into the instrument enables a user to make highly accurate thickness measurements over a fairly wide calibration range from a single side of nonferromagnetic conductive metals. The instrument is very easy to use and can be calibrated quickly.
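A two-point nonlinear calibration with interpolation can be sketched under an assumed response model. The exponential decay below is illustrative only, not the instrument's actual probe response; the point is that two calibration standards of known thickness suffice to fix both parameters of a two-parameter nonlinear curve, which is then inverted for measurement.

```python
import math

def fit_two_point(t1, v1, t2, v2):
    """Fit the assumed model V(t) = A * exp(-t / tau) through two calibration
    points (thickness t1 with signal v1, thickness t2 with signal v2)."""
    tau = (t2 - t1) / math.log(v1 / v2)
    A = v1 * math.exp(t1 / tau)
    return A, tau

def thickness(v, A, tau):
    """Invert the calibrated model to read thickness from a probe signal."""
    return tau * math.log(A / v)
```

Readings are only trustworthy for signals between the two calibration points; outside that range the instrument would be extrapolating beyond its calibration, which is why the patent describes a "fairly wide calibration range" rather than an unlimited one.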
Porel, A.; Haty, Sanjukta; Kundu, A.
2011-01-01
The aim of the present study was the development and subsequent validation of a simple, precise and stability-indicating reversed phase HPLC method for the simultaneous determination of guaifenesin, terbutaline sulphate and bromhexine hydrochloride in the presence of their potential impurities in a single run. The photolytic as well as hydrolytic impurities were detected as 3,5-dihydroxybenzoic acid, 3,5-dihydroxybenzaldehyde, 1-(3,5-dihydroxyphenyl)-2-[(1,1-dimethylethyl) amino]-ethanone from terbutaline, 2-methoxyphenol and an unknown impurity identified as (2RS)-3-(2-hydroxyphenoxy)-propane-1,2-diol from guaifenesin. The chromatographic separation of all the three active components and their impurities was achieved on Wakosil II column, using phosphate buffer (pH 3.0) and acetonitrile as mobile phase which was delivered initially in the ratio of 80:20 (v/v) for 18 min, then changed to 60:40 (v/v) for next 12 min, and finally equilibrated back to 80:20 (v/v) for 10 min. Other HPLC parameters were: Flow rate at 1.0 ml/min, detection wavelengths 248 and 280 nm, injection volume 10 μl. The calibration graphs plotted with five concentrations of each component were linear with a regression coefficient R² > 0.9999. The limit of detection and limit of quantitation were estimated for all the five impurities. The established method was then validated for linearity, precision, accuracy, and specificity and demonstrated to be applicable to the determination of the active ingredients in commercial and model cough syrup. No interference from the formulation excipients was observed. These results suggest that this LC method can be used for the determination of multiple active ingredients and their impurities in a cough and cold syrup. PMID:22131621
Youssef, Nadia F
2005-10-04
Stability-indicating high-performance liquid chromatography (HPLC), thin-layer chromatography (TLC) and first-derivative of ratio spectra (1DD) methods are developed for the determination of piretanide in the presence of its alkaline-induced degradates. The HPLC method depends on separation of piretanide from its degradates on a μ-Bondapak C18 column using methanol:water:acetic acid (70:30:1, v/v/v) as the mobile phase at a flow rate of 1.0 ml/min with UV detection at 275 nm. The TLC densitometric method is based on the difference in Rf values between the intact drug and its degradates on thin-layer silica gel. Isopropanol:ammonia 33% (8:2, v/v) was used as the developing mobile phase and the chromatogram was scanned at 275 nm. The derivative of ratio spectra method (1DD) depends on measurement of the absorbance at 288 nm in the first derivative of the ratio spectra for the determination of the cited drug in the presence of its degradates. Calibration graphs of the three suggested methods are linear in the concentration ranges 0.02-0.3 μg/20 μl, 0.5-10 μg/spot and 5-50 μg/ml, with mean percentage recoveries of 99.27±0.52, 99.17±1.01 and 99.65±1.01%, respectively. The three proposed methods were successfully applied to the determination of piretanide in bulk powder, laboratory-prepared mixtures and a pharmaceutical dosage form with good accuracy and precision. The results were statistically analyzed and compared with those obtained by the official method. The methods were validated with favourable specificity, linearity and precision; accuracy was assessed by applying the standard addition technique.
2012-01-01
Background: Cuscuta species, known as dodder, have been used in the traditional medicine of eastern and southern Asian countries as a liver and kidney tonic. Flavonoids are considered the main biologically active constituents in Cuscuta plants, especially in C. chinensis Lam. Objective: In the present study, a fast, simple and reliable method for the simultaneous determination and quantitation of C. chinensis flavonols, including hyperoside, rutin, isorhamnetin and kaempferol, has been developed. Materials and methods: The chromatographic separation was carried out on a reversed-phase ACE 5 C18 column, eluting at a flow rate of 1 ml/min with an o-phosphoric acid 0.25%:acetonitrile gradient for 42 min. UV spectra were collected across the range 200-900 nm, with 360 nm extracted for the chromatograms. The method was validated for linearity, selectivity, precision, recovery, LOD and LOQ. Results: The method was selective for the determination of rutin, hyperoside, isorhamnetin and kaempferol. The calibration graphs of the flavonols were linear with r² > 0.999. RSDs of intra- and inter-day precisions were 1.3 and 3.4% for rutin, 1.5 and 2.8% for hyperoside, 1.3 and 3.3% for isorhamnetin and 1.7 and 2.9% for kaempferol, which were satisfactory. LODs and LOQs were calculated as 1.73 and 8.19 for rutin, 0.09 and 4.19 for hyperoside, 2.09 and 6.3 for isorhamnetin and 0.18 and 0.56 for kaempferol. The recovery averages of the above-mentioned flavonols were 90.3%, 97.4%, 98.7% and 90.0%, respectively. Conclusion: The simplicity of the method makes it highly valuable for quality control of C. chinensis based on quantitation of flavonols. PMID:23352257
Musuku, Adrien; Tan, Aimin; Awaiye, Kayode; Trabelsi, Fethi
2013-09-01
Linear calibration is usually performed using eight to ten calibration concentration levels in regulated LC-MS bioanalysis because a minimum of six are specified in regulatory guidelines. However, we have previously reported that two-concentration linear calibration is as reliable as or even better than using multiple concentrations. The purpose of this research is to compare two-concentration with multiple-concentration linear calibration through retrospective data analysis of multiple bioanalytical projects that were conducted in an independent regulated bioanalytical laboratory. A total of 12 bioanalytical projects were randomly selected: two validations and two studies for each of the three most commonly used types of sample extraction methods (protein precipitation, liquid-liquid extraction, solid-phase extraction). When the existing data were retrospectively linearly regressed using only the lowest and the highest concentration levels, no extra batch failure/QC rejection was observed and the differences in accuracy and precision between the original multi-concentration regression and the new two-concentration linear regression are negligible. Specifically, the differences in overall mean apparent bias (square root of mean individual bias squares) are within the ranges of -0.3% to 0.7% and 0.1-0.7% for the validations and studies, respectively. The differences in mean QC concentrations are within the ranges of -0.6% to 1.8% and -0.8% to 2.5% for the validations and studies, respectively. The differences in %CV are within the ranges of -0.7% to 0.9% and -0.3% to 0.6% for the validations and studies, respectively. The average differences in study sample concentrations are within the range of -0.8% to 2.3%. With two-concentration linear regression, an average of 13% of time and cost could have been saved for each batch together with 53% of saving in the lead-in for each project (the preparation of working standard solutions, spiking, and aliquoting). 
Furthermore, examples are given as how to evaluate the linearity over the entire concentration range when only two concentration levels are used for linear regression. To conclude, two-concentration linear regression is accurate and robust enough for routine use in regulated LC-MS bioanalysis and it significantly saves time and cost as well. Copyright © 2013 Elsevier B.V. All rights reserved.
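The two-concentration regression described above can be sketched as follows; the calibration data, QC response and variable names below are purely illustrative, not the study's:

```python
import numpy as np

# Hypothetical eight-level calibration data (concentration, response).
conc = np.array([1.0, 2.5, 10.0, 50.0, 200.0, 500.0, 800.0, 1000.0])
resp = np.array([0.011, 0.026, 0.101, 0.497, 2.02, 5.01, 7.97, 10.05])

# Multi-concentration regression (all eight calibrators).
slope_all, intercept_all = np.polyfit(conc, resp, 1)

# Two-concentration regression (lowest and highest calibrators only).
slope_two = (resp[-1] - resp[0]) / (conc[-1] - conc[0])
intercept_two = resp[0] - slope_two * conc[0]

def back_calc(response, slope, intercept):
    """Back-calculate concentration from an instrument response."""
    return (response - intercept) / slope

# Compare back-calculated concentrations for a mid-level QC response.
qc_resp = 2.02
print(back_calc(qc_resp, slope_all, intercept_all))
print(back_calc(qc_resp, slope_two, intercept_two))
```

For well-behaved linear data the two fits back-calculate nearly identical concentrations, which is the retrospective observation the abstract reports.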
Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.
Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A
2010-08-10
Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This mitigates the need to measure the system's response or calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).
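For context, a minimal sketch of the standard four-sample AMCW phase estimator, under idealized harmonic-free signals (the paper's phase-encoding scheme and duty-cycle reduction are not reproduced here):

```python
import math

def amcw_range(a0, a1, a2, a3, f_mod):
    """Estimate distance from four equally spaced samples of the
    correlation waveform (standard 4-bucket AMCW estimator).
    f_mod: modulation frequency in Hz."""
    c = 299792458.0
    phase = math.atan2(a1 - a3, a0 - a2)  # wrapped phase
    if phase < 0:
        phase += 2 * math.pi
    return c * phase / (4 * math.pi * f_mod)

# Simulated ideal samples for a target at 1.5 m with 30 MHz modulation
# (within the 5 m ambiguity range at this frequency).
f = 30e6
d_true = 1.5
phi = 4 * math.pi * f * d_true / 299792458.0
samples = [math.cos(phi - k * math.pi / 2) for k in range(4)]
print(amcw_range(*samples, f))
```

With a pure sinusoid the distance is recovered exactly; aliased harmonics in the real correlation waveform perturb the arctangent, which is the nonlinearity the paper's sampling scheme attenuates.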
Stefano Filho, Carlos A; Attux, Romis; Castellano, Gabriela
2017-01-01
Hands motor imagery (MI) has been reported to alter synchronization patterns amongst neurons, yielding variations in the mu and beta bands' power spectral density (PSD) of the electroencephalography (EEG) signal. These alterations have been used in the field of brain-computer interfaces (BCI), in an attempt to assign distinct MI tasks to commands of such a system. Recent studies have highlighted that information may be missing if knowledge about brain functional connectivity is not considered. In this work, we modeled the brain as a graph in which each EEG electrode represents a node. Our goal was to understand if there exists any linear correlation between variations in the synchronization patterns-that is, variations in the PSD of mu and beta bands-induced by MI and alterations in the corresponding functional networks. Moreover, we (1) explored the feasibility of using functional connectivity parameters as features for a classifier in the context of an MI-BCI; (2) investigated three different types of feature selection (FS) techniques; and (3) compared our approach to a more traditional method using the signal PSD as classifier inputs. Ten healthy subjects participated in this study. We observed significant correlations ( p < 0.05) with values ranging from 0.4 to 0.9 between PSD variations and functional network alterations for some electrodes, prominently in the beta band. The PSD method performed better for data classification, with mean accuracies of (90 ± 8)% and (87 ± 7)% for the mu and beta band, respectively, versus (83 ± 8)% and (83 ± 7)% for the same bands for the graph method. Moreover, the number of features for the graph method was considerably larger. However, results for both methods were relatively close, and even overlapped when the uncertainties of the accuracy rates were considered. Further investigation regarding a careful exploration of other graph metrics may provide better alternatives.
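A minimal sketch of the kind of linear-correlation check described above, on synthetic per-trial values (the study's actual EEG features are not reproduced; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial values for one electrode: beta-band PSD
# variation and a functional-connectivity metric (e.g. node degree).
n_trials = 40
psd_delta = rng.normal(size=n_trials)
degree_delta = 0.7 * psd_delta + 0.3 * rng.normal(size=n_trials)

# Pearson correlation between the two quantities.
r = np.corrcoef(psd_delta, degree_delta)[0, 1]
print(f"Pearson r = {r:.2f}")
```

Significance testing (the p < 0.05 threshold used in the study) would additionally require a t-test or permutation test on r.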
A simple proof of orientability in colored group field theory.
Caravelli, Francesco
2012-01-01
Group field theory is an emerging field at the boundary between quantum gravity, statistical mechanics and quantum field theory, and provides a path integral for the gluing of n-simplices. Colored group field theory has been introduced in order to improve the renormalizability of the theory and associates colors with the faces of the simplices. The theory of crystallizations is instead a field at the boundary between graph theory and combinatorial topology and deals with n-simplices as colored graphs. Several techniques have been introduced in order to study the topology of the pseudo-manifold associated with a colored graph. Despite the similarity between colored group field theory and the theory of crystallizations, the connection between the two fields has never been made explicit. In this short note we use results from the theory of crystallizations to prove that color in group field theories guarantees orientability of the piecewise linear pseudo-manifolds associated with each graph generated perturbatively: colored group field theories generate orientable pseudo-manifolds. The origin of orientability is the presence of two interaction vertices in the action of colored group field theories. To obtain the result, we make the connection between the theory of crystallizations and colored group field theory explicit.
Comparison of an Endotracheal Cardiac Output Monitor to a Pulmonary Artery Catheter
2017-12-04
of an FDA-approved device, the CONMED endotracheal cardiac output monitor (ECOM)™ apparatus, by comparing it to the Edwards Vigilance II monitor...and Use Committee (FWH 20140100A). Results Using GraphPad Prism® to conduct non-linear fit analyses comparing the slopes of the curves for ECOM
Not so Complex: Iteration in the Complex Plane
ERIC Educational Resources Information Center
O'Dell, Robin S.
2014-01-01
The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover
ERIC Educational Resources Information Center
Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike
2012-01-01
Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…
Deriving the Regression Line with Algebra
ERIC Educational Resources Information Center
Quintanilla, John A.
2017-01-01
Exploration with spreadsheets and reliance on previous skills can lead students to determine the line of best fit. To perform linear regression on a set of data, students in Algebra 2 (or, in principle, Algebra 1) do not have to settle for using the mysterious "black box" of their graphing calculators (or other classroom technologies).…
Difference-Equation/Flow-Graph Circuit Analysis
NASA Technical Reports Server (NTRS)
Mcvey, I. M.
1988-01-01
Numerical technique enables rapid, approximate analyses of electronic circuits containing linear and nonlinear elements. Practiced in variety of computer languages on large and small computers; for circuits simple enough, programmable hand calculators used. Although some combinations of circuit elements make numerical solutions diverge, enables quick identification of divergence and correction of circuit models to make solutions converge.
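The difference-equation idea can be sketched on a single RC element (values illustrative); as the abstract notes, some element combinations make the iteration diverge, which here happens when the step exceeds twice the time constant:

```python
# Forward-Euler difference equation for an RC low-pass circuit driven
# by a step input: v[n+1] = v[n] + dt/(R*C) * (v_in - v[n]).
R, C = 1e3, 1e-6     # 1 kOhm, 1 uF  ->  time constant tau = 1 ms
dt = 1e-5            # step well below tau, so the iteration converges
v_in = 5.0           # step input, volts
v = 0.0
for n in range(1000):            # simulate 10 ms (10 time constants)
    v += dt / (R * C) * (v_in - v)
print(v)
```

After ten time constants the capacitor voltage has essentially settled at the input value; raising dt above 2·R·C would make the same recurrence oscillate with growing amplitude.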
NASA Astrophysics Data System (ADS)
Mugombozi, Chuma Francis
The generation of electrical energy, as well as its transport and consumption, requires complex control systems for the regulation of power and frequency. These control systems must take into account, among other things, new energy sources such as wind energy and new interconnection technologies such as high-voltage DC links. They must be able to monitor and achieve such regulation in accordance with the dynamics of the energy source, faults and other events that may induce transient phenomena in the power network. Such transient conditions have to be analyzed using the most accurate and detailed, hence complex, models of the control system. In addition, in the feasibility-study phase, in the calibration or setup of equipment, and in the operation of the power network, engineers may require decision-aid tools. These include, for instance, knowledge of the energy dissipated in the arresters in transient analysis. Such tools use simulation program data as inputs and may require that complex functions be solved with numerical methods. These functions are part of the control system in a computer simulator. Moreover, the simulation evolves in the broader context of the development of digital controllers, distributed and parallel high-performance computing, and the rapid evolution of (multiprocessor) computer technology. In such a context, continuing improvement of the control-equation solver is welcome. Control systems are modelled using an Ax = b simultaneous system of equations. These equations are sometimes non-linear, with feedback loops, and thus require iterative Newton methods, including the formation of a Jacobian matrix and ordering, as well as processing by graph-theory tools. The proposed approach is based on the formulation of a reduced-rank Jacobian matrix, whose dimension is reduced to the count of feedback loops.
With this new approach, gains in computation speed are expected without compromising accuracy compared to the classical full-rank Jacobian representation. A directed-graph representation is adopted, and a proper approach for detecting and removing cycles within the graph is introduced, based on the condition that all eigenvalues of the graph's adjacency matrix are zero. Transforming the graph of controls into one with no cycles permits a formulation of the control equations for only the feedback points. This yields a general feedback interconnection (GFBI) representation of control, which is the main contribution of this thesis. Methods for solving the (non-linear) control-system equations were deployed within the new GFBI approach. Five variants of the new approach were illustrated, including a basic Newton method (1), a more robust Dogleg method (2) and a fixed-point iteration method (3). The presented approach is implemented in the Electromagnetic Transients Program EMTP-RV and tested on practical systems of various types and levels of complexity: the PLL, an asynchronous machine with 87 blocks reduced to 23 feedback equations by GFBI, and 12 wind power plants integrated into the IEEE 39-bus system. Further analysis, which opens up avenues for future research, includes comparison of the proposed approach against existing ones. With respect to the representation alone, it is shown that the proposed approach is equivalent to the full classic representation of the system of equations through a proper substitution process that complies with the topological sequence and skips the feedback variables identified by GFBI. Moreover, a second comparison with a state-space-based approach, such as that in MATLAB/Simulink, shows that the output-evaluation step in a state-space approach with algebraic constraints is similar to the GFBI; the algebraic constraints are similar to feedback variables.
A difference may arise, however, when the number of algebraic constraints is not the optimal number of cuts for the GFBI method: for the PLL, for example, MATLAB/Simulink generated three constraints while the GFBI generated only two; the GFBI method may offer some advantages in this case. A last line of analysis prompted further work on initialization. It is shown that the GFBI method may modify the convergence properties of the Newton iterations. The Newton-Kantorovich theorem, using bounds on the norms of the Jacobian, has been applied to the proposed GFBI and to the classic full representation of the control equations. The expressions of the Jacobian norms have been established for generic cases using the Coates graph. It appears from the analysis of a simple case that, for the same initial conditions, the behaviour of the Newton-Kantorovich bound differs in the two cases. These differences may be more pronounced in the non-linear case. Further work would be useful to investigate this aspect and, eventually, pave the way to new initialization approaches. Despite these limitations, and areas for improvement in further work, this thesis contributes a gain in simulation time for the solution of control systems. (Abstract shortened by UMI.)
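The cycle-detection condition used above can be sketched directly: a directed graph is acyclic exactly when its adjacency matrix is nilpotent, i.e. all of its eigenvalues are zero (the block graphs below are illustrative, not from the thesis):

```python
import numpy as np

def is_acyclic(adj):
    """A directed graph is acyclic iff its adjacency matrix is
    nilpotent, i.e. all eigenvalues are zero."""
    eig = np.linalg.eigvals(adj.astype(float))
    return bool(np.allclose(eig, 0.0))

# Small control-block graphs.
chain = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])   # a -> b -> c, no feedback
loop = np.array([[0, 1, 0],
                 [0, 0, 1],
                 [1, 0, 0]])    # a -> b -> c -> a, one feedback cycle
print(is_acyclic(chain), is_acyclic(loop))
```

A cyclic graph such as `loop` has eigenvalues on the unit circle, so the test fails, flagging the feedback loop whose variables the GFBI formulation cuts.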
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α = 2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c = e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c = 1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α ≥ 3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c = e/(α - 1), where the replica symmetry is broken.
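A brute-force sketch of the IP side of min-VC on a toy graph, with the triangle as the classic integrality-gap example (the statistical-mechanics analysis above is not reproduced):

```python
from itertools import combinations

def min_vertex_cover(n, edges):
    """Brute-force IP optimum of min-VC (small graphs only)."""
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return k
    return n

# Triangle: the IP optimum is 2, while the LP relaxation can assign
# x_v = 1/2 to every vertex for an objective value of 3/2.
edges = [(0, 1), (1, 2), (0, 2)]
ip_opt = min_vertex_cover(3, edges)
lp_value = 1.5  # the all-half LP solution on the triangle
print(ip_opt, lp_value)
```

The gap between 2 and 3/2 on this smallest odd cycle illustrates what "the LP relaxation fails to estimate optimal values" means above the critical average degree.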
A linear framework for time-scale separation in nonlinear biochemical systems.
Gunawardena, Jeremy
2012-01-01
Cellular physiology is implemented by formidably complex biochemical systems with highly nonlinear dynamics, presenting a challenge for both experiment and theory. Time-scale separation has been one of the few theoretical methods for distilling general principles from such complexity. It has provided essential insights in areas such as enzyme kinetics, allosteric enzymes, G-protein coupled receptors, ion channels, gene regulation and post-translational modification. In each case, internal molecular complexity has been eliminated, leading to rational algebraic expressions among the remaining components. This has yielded familiar formulas such as those of Michaelis-Menten in enzyme kinetics, Monod-Wyman-Changeux in allostery and Ackers-Johnson-Shea in gene regulation. Here we show that these calculations are all instances of a single graph-theoretic framework. Despite the biochemical nonlinearity to which it is applied, this framework is entirely linear, yet requires no approximation. We show that elimination of internal complexity is feasible when the relevant graph is strongly connected. The framework provides a new methodology with the potential to subdue combinatorial explosion at the molecular level.
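A minimal sketch of the linear framework's core computation, assuming a small strongly connected graph with illustrative rate labels: the dynamics are linear, d x/dt = L x, and the steady state spans the kernel of the graph Laplacian L:

```python
import numpy as np

# Edge labels (rates) for a 3-state strongly connected graph;
# states and rates are illustrative, not from a specific system.
k = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 3.0,
     (1, 0): 0.5, (2, 1): 0.25, (0, 2): 0.1}

n = 3
L = np.zeros((n, n))
for (i, j), rate in k.items():   # edge i -> j with label `rate`
    L[j, i] += rate              # inflow to j
    L[i, i] -= rate              # outflow from i

# Steady state: solve L @ x = 0 with the normalization sum(x) = 1.
A = np.vstack([L, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)
```

Because the graph is strongly connected, the kernel of L is one-dimensional and strictly positive, so the normalized solution is unique; this is the exact, approximation-free linearity the abstract refers to.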
A Linear Kernel for Co-Path/Cycle Packing
NASA Astrophysics Data System (ADS)
Chen, Zhi-Zhong; Fellows, Michael; Fu, Bin; Jiang, Haitao; Liu, Yang; Wang, Lusheng; Zhu, Binhai
Bounded-Degree Vertex Deletion is a fundamental problem in graph theory that has new applications in computational biology. In this paper, we address a special case of Bounded-Degree Vertex Deletion, the Co-Path/Cycle Packing problem, which asks to delete as few vertices as possible such that the graph of the remaining (residual) vertices is composed of disjoint paths and simple cycles. The problem falls into the well-known class of 'node-deletion problems with hereditary properties', is hence NP-complete, and is unlikely to admit a polynomial-time approximation algorithm with approximation factor smaller than 2. In the framework of parameterized complexity, we present a kernelization algorithm that produces a kernel with at most 37k vertices, improving on the super-linear kernel of Fellows et al.'s general theorem for Bounded-Degree Vertex Deletion. Using this kernel, and the method of bounded search trees, we devise an FPT algorithm that runs in time O*(3.24^k). On the negative side, we show that the problem is APX-hard and unlikely to have a kernel smaller than 2k by a reduction from Vertex Cover.
Approximate labeling via graph cuts based on linear programming.
Komodakis, Nikos; Tziritas, Georgios
2007-08-01
A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.
Li, Xin; Varallyay, Csanad G; Gahramanov, Seymur; Fu, Rongwei; Rooney, William D; Neuwelt, Edward A
2017-11-01
Dynamic susceptibility contrast-magnetic resonance imaging (DSC-MRI) is widely used to obtain informative perfusion imaging biomarkers, such as the relative cerebral blood volume (rCBV). The related post-processing software packages for DSC-MRI are available from major MRI instrument manufacturers and third-party vendors. One unique aspect of DSC-MRI with low-molecular-weight gadolinium (Gd)-based contrast reagent (CR) is that CR molecules leak into the interstitial space and therefore confound the detected DSC signal. Several approaches to correct this leakage effect have been proposed throughout the years. Amongst the most popular is the Boxerman-Schmainda-Weisskoff (BSW) K2 leakage correction approach, in which the K2 pseudo-first-order rate constant quantifies the leakage. In this work, we propose a new method for the BSW leakage correction approach. Based on the pharmacokinetic interpretation of the data, the commonly adopted R2* expression accounting for contributions from both intravascular and extravasating CR components is transformed using a method mathematically similar to Gjedde-Patlak linearization. Then, the leakage rate constant (KL) can be determined as the slope of the linear portion of a plot of the transformed data. Using the DSC data of high-molecular-weight (~750 kDa), iron-based, intravascular Ferumoxytol (FeO), the pharmacokinetic interpretation of the new paradigm is empirically validated. The primary objective of this work is to empirically demonstrate that a linear portion often exists in the graph of the transformed data. This linear portion provides a clear definition of the Gd CR pseudo-leakage rate constant, which equals the slope derived from the linear segment. A secondary objective is to demonstrate that transformed points from the initial transient period during the CR wash-in often deviate from the linear trend of the linearized graph.
The inclusion of these points will have a negative impact on the accuracy of the leakage rate constant, and even make it time dependent. Copyright © 2017 John Wiley & Sons, Ltd.
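A sketch of this Patlak-style slope estimation on synthetic transformed data (variable names and values are illustrative, not the paper's): fitting only the late, linear portion recovers the rate constant, while including the wash-in transient biases it.

```python
import numpy as np

# Synthetic transformed data: late points follow a line whose slope is
# the leakage rate constant; early (wash-in) points deviate from it.
t = np.linspace(0, 3, 31)                 # transformed "time" axis
K_true = 0.12
y = K_true * t + 0.05
y[:8] += 0.1 * np.exp(-3 * t[:8])         # wash-in transient

# Fit the slope using only the late, linear portion of the plot.
late = t > 1.0
K_fit = np.polyfit(t[late], y[late], 1)[0]

# Including the transient points biases the estimate.
K_biased = np.polyfit(t, y, 1)[0]
print(K_fit, K_biased)
```

The comparison makes the abstract's point concrete: the slope from the linear segment defines the rate constant, and early-time points should be excluded from the regression.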
Demonstration of a vectorial optical field generator with adaptive closed-loop control.
Chen, Jian; Kong, Lingjiang; Zhan, Qiwen
2017-12-01
We experimentally demonstrate a vectorial optical field generator (VOF-Gen) with adaptive closed-loop control. The closed-loop control capability is illustrated with the calibration of the polarization modulation of the system. To calibrate the polarization ratio modulation, we generate a 45° linearly polarized beam and propagate it through a linear analyzer whose transmission axis is orthogonal to the incident polarization. For the retardation calibration, a circularly polarized beam is employed and a circular polarization analyzer with the opposite chirality is placed in front of the CCD detector. In both cases, the closed-loop control automatically varies the corresponding calibration parameters over pre-set ranges to generate the phase patterns applied to the spatial light modulators, and records the intensity distribution of the output beam with the CCD camera. The optimized calibration parameters are those corresponding to the minimum total intensity in each case. Several typical kinds of vectorial optical beams are created with and without the obtained calibration parameters, and full Stokes parameter measurements are carried out to quantitatively analyze the polarization distribution of the generated beams. The comparisons among these results clearly show that the obtained calibration parameters remarkably improve the accuracy of the polarization modulation of the VOF-Gen, especially for generating elliptically polarized beams with large ellipticity, indicating the significance of the presented closed loop in enhancing the performance of the VOF-Gen.
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and the red-channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated to identify whether these methods produce better results than the commonly used non-linear netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy, and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results at standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for the common film analysis technique that uses transmission scanning, red-channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy with EBT3 film.
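The conventional netOD workflow referred to above can be sketched as follows; the calibration-form coefficients are purely illustrative placeholders, not fitted values from the study:

```python
import numpy as np

def net_od(pv_unexposed, pv_exposed):
    """Net optical density from red-channel pixel values of a
    transmission scan (16-bit TIFF assumed)."""
    return np.log10(pv_unexposed / pv_exposed)

def dose_from_netod(nod, a, b, p):
    """A commonly used non-linear film calibration form:
    D = a*netOD + b*netOD**p. Coefficients come from fitting the
    calibration films; the values below are illustrative only."""
    return a * nod + b * nod ** p

a, b, p = 900.0, 4000.0, 2.5   # hypothetical fit coefficients (cGy)
nod = net_od(45000.0, 30000.0)
print(nod, dose_from_netod(nod, a, b, p))
```

In practice the coefficients are obtained by least-squares fitting the calibration doses against measured netOD, and the same function is then inverted or applied pixel-by-pixel to measurement films.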
Development of a Calibration Strip for Immunochromatographic Assay Detection Systems.
Gao, Yue-Ming; Wei, Jian-Chong; Mak, Peng-Un; Vai, Mang-I; Du, Min; Pun, Sio-Hang
2016-06-29
Immunochromatographic (ICG) assay detection systems, with their many benefits and applications, have been widely reported. However, existing research mainly focuses on increasing the dynamic detection range or the application fields. Calibration of the detection system, which has a great influence on detection accuracy, has not been addressed properly. In this context, this work develops a calibration strip for ICG assay photoelectric detection systems. An image of the test strip is captured by an image acquisition device, followed by a fuzzy c-means (FCM) clustering algorithm and a maximin-distance algorithm for image segmentation. Additionally, experiments are conducted to find the best characteristic quantity. By analyzing the linear coefficient, the average value of hue (H) at 14 min is chosen as the characteristic quantity and an empirical formula between H and the optical density (OD) value is established. H, saturation (S), and value (V) are then calculated for a number of selected OD values, the H, S, and V values are transferred to the RGB color space, and a high-resolution printer is used to print the strip images on cellulose nitrate membranes. Finally, verification of the printed calibration strips is conducted by analyzing the linear correlation between OD and the spectral reflectance, which shows a good linear correlation (R² = 98.78%).
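The hue-based characteristic quantity can be sketched with the standard library; the test-line pixel values below are synthetic, and averaging raw hues only behaves well when, as here, the pixels sit on the same side of the hue wrap-around:

```python
import colorsys

def mean_hue(rgb_pixels):
    """Average hue H (range 0-1) of test-line pixels, a simple stand-in
    for the characteristic quantity related to OD in this work."""
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
            for r, g, b in rgb_pixels]
    return sum(hues) / len(hues)

# A few reddish test-line pixels (synthetic values).
pixels = [(180, 60, 70), (175, 58, 72), (182, 62, 68)]
print(mean_hue(pixels))
```

Going the other way (selected OD values to HSV and back to RGB for printing) uses the inverse `colorsys.hsv_to_rgb` together with the empirical H-OD formula.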
Spectrometer calibration for spectroscopic Fourier domain optical coherence tomography
Szkulmowski, Maciej; Tamborski, Szymon; Wojtkowski, Maciej
2016-01-01
We propose a simple and robust procedure for Fourier domain optical coherence tomography (FdOCT) that allows the detected FdOCT spectra to be linearized in the wavenumber domain and, at the same time, the wavelength of light to be determined for each point of the detected spectrum. We show that in this approach it is possible to use any measurable physical quantity that depends linearly on wavenumber and can be extracted from spectral fringes. The actual values of the measured quantity are of no importance for the algorithm and do not need to be known at any stage of the procedure. As an example, we calibrate a spectral OCT spectrometer using the Doppler frequency. The spectral calibration technique can in principle be adapted to all kinds of Fourier domain OCT devices. PMID:28018723
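Once a pixel-to-wavelength mapping is known, resampling the detected spectrum onto an evenly spaced wavenumber grid is straightforward; a sketch with an assumed, purely illustrative mapping (in the paper, obtaining this mapping is what the calibration itself provides):

```python
import numpy as np

# Assumed linear-in-wavelength spectrometer mapping (illustrative).
pixels = np.arange(2048)
wavelength = 800e-9 + 0.05e-9 * pixels     # metres
k = 2 * np.pi / wavelength                 # wavenumber, decreasing in pixel

# Synthetic spectral fringes for a 150 um path difference.
spectrum = np.cos(2 * 150e-6 * k)

# Resample onto a uniform wavenumber grid before the FFT.
k_uniform = np.linspace(k.min(), k.max(), pixels.size)
# np.interp requires increasing x, so flip the decreasing-k arrays.
spectrum_k = np.interp(k_uniform, k[::-1], spectrum[::-1])
print(spectrum_k.shape)
```

An FFT of `spectrum_k` then yields a sharp peak at the corresponding depth, whereas transforming the raw pixel-domain spectrum would smear it.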
Temporal Gain Correction for X-Ray Calorimeter Spectrometers
NASA Technical Reports Server (NTRS)
Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.
2016-01-01
Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions, including the heat sink temperature, the local radiation temperature, the bias, and the temperature of the readout electronics, can have a profound effect on the gain. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear, and the event analysis (shaping, optimal filters, etc.) often adds additional non-linearity. Thus, for large gain variations or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10^4 over the entire spectral band to above 12 keV, even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.
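The idea of interpolating between energy-scale functions, rather than applying a linear stretch, can be sketched as follows; the two curves and the interpolation weight are synthetic illustrations, not SXS calibration data:

```python
import numpy as np

# Two non-linear energy-scale functions E(ph) measured at two gain states; an
# intermediate gain state is handled by interpolating between the curves rather
# than linearly stretching a single one. Curve shapes here are invented.
ph = np.linspace(0.0, 100.0, 501)          # pulse height, arbitrary units
E_a = 0.12 * ph + 2e-4 * ph**2             # energy scale at gain state A (keV)
E_b = 0.11 * ph + 2e-4 * ph**2             # energy scale at gain state B (keV)

def corrected_energy(ph_meas, alpha):
    """Energy from pulse height, interpolating between the two scales.

    In practice alpha would be chosen so that a known calibration line
    lands at its true energy at the epoch of the measurement.
    """
    scale = (1.0 - alpha) * E_a + alpha * E_b
    return float(np.interp(ph_meas, ph, scale))
```

Because the interpolation acts on the full non-linear curves, the correction remains accurate across the band instead of only near the calibration line.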
Design and calibration of a scanning tunneling microscope for large machined surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigg, D.A.; Russell, P.E.; Dow, T.A.
During the last year the large-sample STM has been designed, built, and used for the observation of several different samples. Calibration of the scanner for proper dimensional interpretation of surface features has been a chief concern, as have corrections for non-linear effects such as hysteresis during scans. Several procedures used in the calibration and correction of the piezoelectric scanners used in the laboratory's STMs are described.
NASA Astrophysics Data System (ADS)
Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.
2004-06-01
The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be accounted for in the model, concluding that the instrument works as a concentration detector when used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and the double-point calibration yielded results of the same quality as the multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations more than 100 times the LOQ.
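The two calibration schemes compared above can be sketched as follows; the peak areas, concentrations, and the 1/u(y)² weighting model are invented for illustration:

```python
import numpy as np

# Multiple-point weighted linear regression vs. double-point calibration
# (baseline plus one standard). Units and values are hypothetical.
conc = np.array([0.0, 10.0, 50.0, 100.0, 500.0])      # standard concentrations
area = np.array([5.0, 105.0, 510.0, 1010.0, 5020.0])  # measured peak areas
w = 1.0 / np.maximum(area, 1.0) ** 2                  # weights ~ 1 / u(y)^2

# Weighted least squares for area = a + b * conc
X = np.column_stack([np.ones_like(conc), conc])
W = np.diag(w)
a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ area)

# Double-point calibration: baseline plus one standard
b_double = (area[3] - area[0]) / (conc[3] - conc[0])

def quantify(peak_area):
    """Concentration from peak area using the weighted regression line."""
    return (peak_area - a) / b
```

The double-point slope needs only two measurements, which is what shortens the calibration time relative to the multiple-point fit.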
NASA Astrophysics Data System (ADS)
Mei, Yaguang; Cheng, Yuxin; Cheng, Shusen; Hao, Zhongqi; Guo, Lianbo; Li, Xiangyou; Zeng, Xiaoyan
2017-10-01
During the iron-making process in a blast furnace, the Si content of the liquid pig iron is usually used to evaluate the quality of the liquid iron and the thermal state of the furnace, yet no effective method has been available for rapidly detecting the Si concentration of liquid iron. Laser-induced breakdown spectroscopy (LIBS) is an atomic emission spectrometry technique based on laser ablation; its key advantage is that it enables rapid, in-situ, online analysis of element concentrations in open air without sample pretreatment. The characteristics of Si in liquid iron were analyzed from the standpoint of thermodynamic theory and metallurgical technology, and the relationships between Si and C, Mn, S, P, and other alloy elements were revealed through thermodynamic calculation. Subsequently, LIBS was applied to the rapid detection of Si in pig iron. During the LIBS detection process, several groups of standard pig iron samples were employed to calibrate the Si content. Calibration methods including linear, quadratic, and cubic internal standard calibration, multivariate linear calibration, and partial least squares (PLS) were compared, revealing that PLS improved by normalization was the best calibration method for Si detection by LIBS.
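The multivariate linear calibration variant compared above can be sketched with a plain least-squares fit (PLS, the best-performing method in the paper, would replace it); the line sensitivities, noise level, and Si contents below are synthetic:

```python
import numpy as np

# Multivariate linear calibration: regress Si content on the intensities of
# several Si emission lines. Sensitivities and contents are invented.
rng = np.random.default_rng(1)
si = np.linspace(0.2, 1.5, 20)                    # wt% Si in 20 standards
sens = rng.uniform(1.0, 3.0, 4)                   # sensitivities of 4 lines
X = np.outer(si, sens) + rng.normal(0.0, 0.01, (20, 4))  # noisy intensities

A = np.column_stack([X, np.ones(len(si))])        # intensities + intercept
coef, *_ = np.linalg.lstsq(A, si, rcond=None)
rmse = float(np.sqrt(np.mean((A @ coef - si) ** 2)))
```

PLS differs from this fit by projecting the intensities onto a few latent components first, which is what makes it more robust when the lines are strongly collinear, as they are for repeated shots on liquid metal.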
Terrain - Umbra Package v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oppel, Fred; Hart, Brian; Rigdon, James Brian
This library contains modules that read terrain files (e.g., OpenFlight, Open Scene Graph IVE, GeoTIFF Image) and that read and manage ESRI terrain datasets. All data is stored and managed in Open Scene Graph (OSG). The terrain system accesses OSG, provides elevation data and access to meta-data such as soil types, and enables linears, areals, and buildings to be placed in a terrain. These geometry objects include boxes, point, path, polygon (region), and sector modules. Utilities are available for clamping objects to the terrain and accessing LOS information. This assertion includes managed C++ wrapper code (TerrainWrapper) to enable C# applications, such as OpShed and UTU, to incorporate this library.
Calibration of the Concorde radiation detection instrument and measurements at SST altitude.
DOT National Transportation Integrated Search
1971-06-01
Performance tests were carried out on a solar cosmic radiation detection instrument developed for the Concorde SST. The instrument calibration curve (log dose-rate vs instrument reading) was reasonably linear from 0.004 to 1 rem/hr for both gamma rad...
Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples
Chen, Andrew; Chen, Chiachung
2013-01-01
Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between temperature and output voltage of thermocouples. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of different-order polynomial equations. Two quantitative criteria, the average of the absolute error values |e|ave and the standard deviation of the calibration equation estd, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of the polynomial equations differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher-degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
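The evaluation procedure described above can be sketched as follows; the voltage-temperature relation is synthetic, not a real thermocouple table, and the two criteria follow the definitions in the abstract:

```python
import numpy as np

# Fit inverse calibration polynomials T(V) of increasing order and score each
# with the average absolute error |e|_ave and the residual standard deviation.
T = np.linspace(0.0, 400.0, 41)                   # temperature, deg C
V = 0.04 * T + 3e-5 * T**2 - 2e-8 * T**3          # synthetic EMF, mV

scores = {}
for order in (1, 2, 3):
    p = np.polyfit(V, T, order)                   # calibration equation T(V)
    e = np.polyval(p, V) - T                      # calibration errors
    scores[order] = (float(np.mean(np.abs(e))), float(np.std(e, ddof=1)))
```

Comparing `scores` across orders reproduces the paper's observation that an adequately higher-degree polynomial improves both accuracy and precision.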
NASA Astrophysics Data System (ADS)
McBride, B.; Martins, J. V.; Fernandez Borda, R. A.; Barbosa, H. M.
2017-12-01
The Laboratory for Aerosols, Clouds, and Optics (LACO) at the University of Maryland, Baltimore County (UMBC) presents a novel, wide-FOV, hyper-angular imaging polarimeter for the microphysical sampling of clouds and aerosols from aircraft and space. The instrument, the Hyper-Angular Rainbow Polarimeter (HARP), is a precursor to the multi-angle imaging polarimeter solicited by the upcoming NASA Aerosols, Clouds, and Ecosystems (ACE) mission. HARP currently operates in two forms: a spaceborne CubeSat slated for a January 2018 launch to the ISS orbit, and an identical aircraft platform that participated in the Lake Michigan Ozone Study (LMOS) and Aerosol Characterization from Polarimeter and Lidar (ACEPOL) NASA campaigns in 2017. To ensure and validate the instrument's ability to produce high quality Level 2 cloud and aerosol microphysical products, a comprehensive calibration scheme that accounts for flatfielding, radiometry, and all optical interference processes that contribute to the retrieval of Stokes parameters I, Q, and U is applied across the entirety of HARP's 114° FOV. We present an innovative calibration algorithm that convolves incident polarization from a linear polarization state generator with intensity information observed at three distinct linear polarizations. The retrieved results are pixel-level, modified Mueller matrices that characterize the entire HARP optical assembly, without the need to characterize every individual element or perform ellipsometric studies. Here we show results from several pre- and post-LMOS campaign radiometric calibrations at NASA GSFC and polarimetric calibration using a "polarization dome" that allows for full-FOV characterization of Stokes parameters I, Q, and U. The polarization calibration is verified by passing unpolarized light through partially-polarized, tilted glass plates with well-characterized degree of linear polarization (DoLP).
We apply this calibration to a stratocumulus cloud deck case observed during the LMOS campaign on June 19, 2017, and assess the polarized cloudbow for cloud droplet effective radius and variance information at 0.67 µm.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
Fault detection and initial state verification by linear programming for a class of Petri nets
NASA Technical Reports Server (NTRS)
Rachell, Traxon; Meyer, David G.
1992-01-01
The authors present an algorithmic approach to determining when the marking of a LSMG (live safe marked graph) or a LSFC (live safe free choice) net is in the set of live safe markings M. Hence, once the marking of a net is determined to be in M, if at some later time the marking is determined not to be in M, this indicates a fault. It is shown how linear programming can be used to determine whether m is an element of M. The worst-case computational complexity of each algorithm is bounded by the number of linear programs necessary to compute membership.
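For live safe marked graphs the membership check can be illustrated directly: the token count on every directed cycle is invariant under firing, liveness requires at least one token per cycle, and safeness requires every place to lie on a cycle carrying exactly one token. The toy net below is hypothetical, and this direct evaluation stands in for the linear programs used in the paper:

```python
# Direct check of the cycle conditions on a toy marked graph; in the paper this
# membership test is posed as a set of linear programs over the same constraints.
def marking_is_live_safe(cycles, m):
    tokens = [sum(m[p] for p in cyc) for cyc in cycles]
    live = all(t >= 1 for t in tokens)                  # every cycle has a token
    safe = all(any(p in cyc and t == 1                  # every place on a
                   for cyc, t in zip(cycles, tokens))   # one-token cycle
               for p in range(len(m)))
    return live and safe

cycles = [[0, 1], [1, 2, 3]]   # directed cycles as lists of place indices
print(marking_is_live_safe(cycles, [1, 0, 1, 0]))  # True
print(marking_is_live_safe(cycles, [0, 0, 1, 0]))  # False: cycle [0, 1] is empty
```

Posing the same conditions as linear programs, as the paper does, scales the test to nets where enumerating all cycles explicitly is impractical.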
NASA Astrophysics Data System (ADS)
Khrustalev, K.
2016-12-01
The current process for calibrating the beta-gamma detectors used for radioxenon isotope measurements for CTBT purposes is laborious and time consuming; it uses a combination of point sources and gaseous sources, resulting in differences between the energy and resolution calibrations. The emergence of high-resolution SiPIN-based electron detectors allows improvements to the calibration and analysis process. Thanks to the high electron resolution of SiPIN detectors (~8-9 keV at 129 keV) compared to plastic scintillators (~35 keV at 129 keV), many more conversion-electron (CE) peaks (from radioxenon and radon progenies) can be resolved and used for energy and resolution calibration in the energy range of the CTBT-relevant radioxenon isotopes. The long-term stability of the SiPIN energy calibration significantly reduces the time needed for the QC measurements that check the stability of the energy and resolution (E/R) calibration. The second-order polynomials currently used for E/R calibration fitting are unphysical and should be replaced by a linear energy calibration for NaI and SiPIN, owing to the high linearity and dynamic range of modern digital DAQ systems, and the resolution calibration functions should be modified to reflect the underlying physical processes. Alternatively, one can abandon fitting functions entirely and use only point values of E/R (similar to the efficiency calibration currently used) at the energies relevant for the isotopes of interest (ROIs, regions of interest). The current analysis treats the detector as a set of single-channel analysers, with an established set of coefficients relating the positions of the ROIs to the positions of the QC peaks. The analysis of the spectra can be made more robust by peak and background fitting in the ROIs, with the peak area as the single free parameter of the potential peaks from the known isotopes and a fixed set of E/R calibration values.
NASA Technical Reports Server (NTRS)
Patt, P. J.
1985-01-01
The design of a coaxial linear magnetic spring which incorporates a linear motor to control axial motion and overcome system damping is presented, and the results of static and dynamic tests are reported. The system has a nominal stiffness of 25,000 N/m and is designed to oscillate a 900-g component over a 4.6-mm stroke in a Stirling-cycle cryogenic refrigerator being developed for long-service (5-10-yr) space applications (Stolfi et al., 1983). Mosaics of 10 radially magnetized high-coercivity SmCo5 segments enclosed in Ti cans are employed, and the device is found to have a quality factor of 70-100, corresponding to an energy-storage efficiency of 91-94 percent. Drawings, diagrams, and graphs are provided.
Online fault diagnostics and testing of area gamma radiation monitor using wireless network
NASA Astrophysics Data System (ADS)
Reddy, Padi Srinivas; Kumar, R. Amudhu Ramesh; Mathews, M. Geo; Amarendra, G.
2017-07-01
Periodical surveillance, checking, testing, and calibration of the installed Area Gamma Radiation Monitors (AGRM) in the nuclear plants are mandatory. The functionality of AGRM counting electronics and Geiger-Muller (GM) tube is to be monitored periodically. The present paper describes the development of online electronic calibration and testing of the GM tube from the control room. Two electronic circuits were developed, one for AGRM electronic test and another for AGRM detector test. A dedicated radiation data acquisition system was developed using an open platform communication server and data acquisition software. The Modbus RTU protocol on ZigBee based wireless communication was used for online monitoring and testing. The AGRM electronic test helps to carry out the three-point electronic calibration and verification of accuracy. The AGRM detector test is used to verify the GM threshold voltage and the plateau slope of the GM tube in-situ. The real-time trend graphs generated during these tests clearly identified the state of health of AGRM electronics and GM tube on go/no-go basis. This method reduces the radiation exposures received by the maintenance crew and facilitates quick testing with minimum downtime of the instrument.
Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry
Chen, Qiang; Xu, Hongguo; Tan, Lidong
2015-01-01
In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
40 CFR 92.122 - Smoke meter calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... collection equipment response of zero; (b) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall.... Filters with exposed filtering media should be checked for opacity every six months; all other filters...
Das, R K; Das, M
2015-09-01
The effects of both acid (acetic acid) and base (ammonia) catalysts, in varying amounts, on the sol-gel synthesis of SiO2 nanoparticles using tetraethyl orthosilicate (TEOS) as a precursor were determined by an ultrasonic method. The ultrasonic signal was recorded with a pulser-receiver. The ultrasonic velocity in the sol and the parameter ΔT (the time difference between the original pulse and the first back-wall echo of the sol) varied with gelation time. The graphs of ln[ln(1/ΔT)] vs ln(t) show two regions: a nonlinear region and a linear region. The time at which the nonlinear region changes into the linear region is taken as the gel time for the respective solution. The gelation time is found to depend on the concentration and type of catalyst and is determined from the graphs based on the Avrami equation. The rate of condensation is found to be faster for the base catalyst. The gelation process was also characterized by viscosity measurement, and a normal sol-gel process was carried out alongside the ultrasonic one to compare the effectiveness of the ultrasonic method. The silica gel was calcined, and the powdered sample was characterized with scanning electron microscopy, energy-dispersive spectroscopy, X-ray diffraction, and FTIR spectroscopy. Copyright © 2014 Elsevier B.V. All rights reserved.
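Reading a crossover time off a curve like the one described above can be sketched as follows: fit a line to the clearly linear tail, then locate where the data first joins that line within a tolerance. The curve below is synthetic and noise-free; real data would need smoothing and a tolerance tied to the noise level.

```python
import numpy as np

# Synthetic stand-in for the ln[ln(1/dT)] vs ln(t) plot: a curved region
# followed by a linear region, with the crossover at x = 3.
x = np.linspace(0.0, 5.0, 200)                            # stands in for ln(t)
y = np.where(x < 3.0, 0.1 * x**2, 0.9 + 0.6 * (x - 3.0))  # curved, then linear

tail = np.polyfit(x[-50:], y[-50:], 1)        # line through the linear tail
resid = np.abs(y - np.polyval(tail, x))       # departure from the tail line
crossover = x[np.argmax(resid < 0.01)]        # first point on the tail line
```

Exponentiating the crossover abscissa recovers the gel time on the original time axis.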
Topology-induced bifurcations for the nonlinear Schrödinger equation on the tadpole graph.
Cacciapuoti, Claudio; Finco, Domenico; Noja, Diego
2015-01-01
In this paper we give the complete classification of solitons for a cubic nonlinear Schrödinger equation on the simplest network with a nontrivial topology: the tadpole graph, i.e., a ring with a half line attached to it and free boundary conditions at the junction. This is a step toward modeling condensate propagation and confinement in quasi-one-dimensional traps. The model, although simple, exhibits a surprisingly rich behavior and in particular we show that it admits: (i) a denumerable family of continuous branches of embedded solitons vanishing on the half line and bifurcating from linear eigenstates and threshold resonances of the system; (ii) a continuous branch of edge solitons bifurcating from the previous families at the threshold of the continuous spectrum with a pitchfork bifurcation; and (iii) a finite family of continuous branches of solitons without linear analog. All the solutions are explicitly constructed in terms of elliptic Jacobian functions. Moreover we show that families of nonlinear bound states of the above kind continue to exist in the presence of a uniform magnetic field orthogonal to the plane of the ring when a well-defined flux quantization condition holds true. In this sense the magnetic field acts as a control parameter. Finally we highlight the role of resonances in the linearization as a signature of the occurrence of bifurcations of solitons from the continuous spectrum.
Improving Machining Accuracy of CNC Machines with Innovative Design Methods
NASA Astrophysics Data System (ADS)
Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.
2018-03-01
The article considers achieving the machining accuracy of CNC machines by applying innovative methods in the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the stages of design and exploitation. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min⁻¹, and improve machining accuracy.
ERIC Educational Resources Information Center
Nagasinghe, Iranga
2010-01-01
This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS are two highly successful applications of modern linear algebra in computer science and engineering. They constitute the essential technologies that account for the immense growth and…
ERIC Educational Resources Information Center
Peterlin, Primoz
2010-01-01
Two methods of data analysis are compared: spreadsheet software and a statistics software suite. Their use is compared analysing data collected in three selected experiments taken from an introductory physics laboratory, which include a linear dependence, a nonlinear dependence and a histogram. The merits of each method are compared. (Contains 7…
Implicit-shifted Symmetric QR Singular Value Decomposition of 3x3 Matrices
2016-04-01
Improving Treatment Plan Implementation in Schools: A Meta-Analysis of Single Subject Design Studies
ERIC Educational Resources Information Center
Noell, George H.; Gansle, Kristin A.; Mevers, Joanna Lomas; Knox, R. Maria; Mintz, Joslyn Cynkus; Dahir, Amanda
2014-01-01
Twenty-nine peer-reviewed journal articles that analyzed intervention implementation in schools using single-case experimental designs were meta-analyzed. These studies reported 171 separate data paths and provided 3,991 data points. The meta-analysis was accomplished by fitting data extracted from graphs in mixed linear growth models. This…
Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K
2011-01-10
We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure using stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracy of at least 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.
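The internal-standard calibration at the heart of the method can be sketched as follows; the concentrations and area ratios below are invented for illustration:

```python
import numpy as np

# Stable-isotope internal-standard (IS) calibration: regress the analyte/IS
# peak-area ratio against standard concentration, then invert for unknowns.
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])         # ng/g, hypothetical
ratio = np.array([0.021, 0.100, 0.198, 1.010, 2.000])  # analyte area / IS area

slope, intercept = np.polyfit(conc, ratio, 1)

def quantify(analyte_area, is_area):
    """Concentration of an unknown from its analyte/IS peak-area ratio."""
    return (analyte_area / is_area - intercept) / slope
```

Because the IS co-extracts with the analyte, ratioing against it cancels much of the matrix effect, which is what allows solvent calibration to perform as well as matrix-matched calibration in the validation above.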
Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration
NASA Technical Reports Server (NTRS)
Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas
1996-01-01
Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainties: one uncertainty associated with the sequence of values of the response variable, and a second uncertainty associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test if the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.
Peculiar spectral statistics of ensembles of trees and star-like graphs
NASA Astrophysics Data System (ADS)
Kovaleva, V.; Maximov, Yu; Nechaev, S.; Valba, O.
2017-07-01
In this paper we investigate the eigenvalue statistics of exponentially weighted ensembles of full binary trees and p-branching star graphs. We show that the spectral densities of the corresponding adjacency matrices demonstrate a peculiar ultrametric structure inherent to sparse systems. In particular, the tails of the distribution for binary trees share the ‘Lifshitz singularity’ emerging in one-dimensional localization, while the spectral statistics of p-branching star-like graphs is less universal, being strongly dependent on p. The hierarchical structure of the spectra of adjacency matrices is interpreted as sets of resonance frequencies that emerge in ensembles of fully branched tree-like systems, known as dendrimers. However, the relaxational spectrum is not determined by the cluster topology, but rather has a number-theoretic origin, reflecting the peculiarities of the rare-event statistics typical for one-dimensional systems with a quenched structural disorder. The similarity between the spectral densities of an individual dendrimer and of an ensemble of linear chains with exponentially distributed lengths demonstrates that dendrimers can serve as simple disorder-free toy models of one-dimensional systems with quenched disorder.
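The kind of spectrum in question can be computed directly for a single full binary tree; this numpy sketch builds the adjacency matrix with heap-style indexing (an implementation choice for illustration, not the paper's construction):

```python
import numpy as np

# Adjacency matrix of a full binary tree with nodes indexed as in a binary heap
# (node i has children 2i+1 and 2i+2), and its eigenvalue spectrum.
def binary_tree_adjacency(depth):
    n = 2 ** (depth + 1) - 1
    A = np.zeros((n, n))
    for child in range(1, n):
        parent = (child - 1) // 2
        A[parent, child] = A[child, parent] = 1.0
    return A

A = binary_tree_adjacency(6)            # 127-node full binary tree
eigs = np.linalg.eigvalsh(A)            # a histogram of these eigenvalues
                                        # approximates the spectral density
```

Averaging such histograms over an exponentially weighted ensemble of tree sizes would reproduce the ultrametric, gapped structure discussed above.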
On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh
2014-07-01
We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing the maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a test set comprised of a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
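For contrast with the push-relabel approach studied above, a minimal augmenting-path bipartite matcher (the serial baseline family the authors compare against) can be sketched as:

```python
# A minimal augmenting-path bipartite matcher (Hungarian-style DFS), shown for
# contrast with push-relabel; the graph below is a toy example.
def max_bipartite_matching(adj, n_right):
    match_r = [-1] * n_right           # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be re-matched elsewhere
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

adj = [[0, 1], [0], [1, 2]]            # left vertex -> right neighbours
print(max_bipartite_matching(adj, 3))  # maximum matching size: 3
```

Push-relabel replaces this global path search with local push and relabel operations on vertex labels, which is what makes it amenable to the multithreaded execution studied in the paper.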
Sparse cliques trump scale-free networks in coordination and competition
Gianetto, David A.; Heydari, Babak
2016-01-01
Cooperative behavior, a natural, pervasive and yet puzzling phenomenon, can be significantly enhanced by networks. Many studies have shown how global network characteristics affect cooperation; however, it is difficult to understand how this occurs from global factors alone, and low-level network building blocks, or motifs, are necessary. In this work, we systematically alter the structure of scale-free and clique networks and show, through a stochastic evolutionary game theory model, that cooperation on cliques increases linearly with community motif count. We further show that, for reactive stochastic strategies, network modularity improves cooperation in the anti-coordination Snowdrift game and the Prisoner’s Dilemma game but not in the Stag Hunt coordination game. We also confirm the negative effect of the scale-free graph on cooperation when effective payoffs are used. On the flip side, clique graphs are highly cooperative across social environments. Adding cycles to the acyclic scale-free graph increases cooperation when multiple games are considered; however, cycles have the opposite effect on how forgiving agents are when playing the Prisoner’s Dilemma game. PMID:26899456
Mild traumatic brain injury: graph-model characterization of brain networks for episodic memory.
Tsirka, Vasso; Simos, Panagiotis G; Vakis, Antonios; Kanatsouli, Kassiani; Vourkas, Michael; Erimaki, Sofia; Pachou, Ellie; Stam, Cornelis Jan; Micheloyannis, Sifis
2011-02-01
Episodic memory is among the cognitive functions that can be affected in the acute phase following mild traumatic brain injury (MTBI). The present study used EEG recordings to evaluate global synchronization and network organization of rhythmic activity during the encoding and recognition phases of an episodic memory task varying in stimulus type (kaleidoscope images, pictures, words, and pseudowords). Synchronization of oscillatory activity was assessed using linear and nonlinear connectivity estimators, and network analyses were performed using algorithms derived from graph theory. Twenty-five MTBI patients (tested within days post-injury) and healthy volunteers were closely matched on demographic variables, verbal ability, psychological status variables, as well as on overall task performance. Patients demonstrated sub-optimal network organization, as reflected by changes in graph parameters in the theta and alpha bands during both encoding and recognition. There were no group differences in spectral energy during task performance or in network parameters during a control condition (rest). Evidence of less optimally organized functional networks during memory tasks was more prominent for pictorial than for verbal stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.
Peculiar spectral statistics of ensembles of trees and star-like graphs
Kovaleva, V.; Maximov, Yu; Nechaev, S.; ...
2017-07-11
In this paper we investigate the eigenvalue statistics of exponentially weighted ensembles of full binary trees and p-branching star graphs. We show that the spectral densities of the corresponding adjacency matrices demonstrate a peculiar ultrametric structure inherent to sparse systems. In particular, the tails of the distribution for binary trees share the "Lifshitz singularity" emerging in one-dimensional localization, while the spectral statistics of p-branching star-like graphs is less universal, being strongly dependent on p. The hierarchical structure of the spectra of adjacency matrices is interpreted as sets of resonance frequencies that emerge in ensembles of fully branched tree-like systems, known as dendrimers. However, the relaxational spectrum is not determined by the cluster topology, but rather has a number-theoretic origin, reflecting the peculiarities of the rare-event statistics typical for one-dimensional systems with a quenched structural disorder. The similarity of the spectral densities of an individual dendrimer and of an ensemble of linear chains with exponentially distributed lengths demonstrates that dendrimers could serve as simple disorder-less toy models of one-dimensional systems with quenched disorder.
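As a minimal illustration of the objects studied, the adjacency spectrum of a single full binary tree can be computed directly (a sketch only; the paper's results concern exponentially weighted ensemble averages, not a single tree):

```python
import numpy as np

def full_binary_tree_adjacency(depth):
    """Adjacency matrix of a full binary tree with 2**(depth+1)-1 nodes."""
    n = 2 ** (depth + 1) - 1
    A = np.zeros((n, n))
    for child in range(1, n):
        parent = (child - 1) // 2      # heap-style parent indexing
        A[parent, child] = A[child, parent] = 1.0
    return A

A = full_binary_tree_adjacency(4)      # 31-node tree
eigs = np.linalg.eigvalsh(A)           # real spectrum of a symmetric matrix
# a tree is bipartite, so its adjacency spectrum is symmetric about zero
print(np.allclose(np.sort(eigs), np.sort(-eigs)))  # → True
```

The ultrametric structure discussed in the abstract emerges when such spectra are accumulated over weighted ensembles of trees.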
Mostafa, G A; Ghazy, S E
2001-10-01
A simple, rapid and selective procedure for the indirect spectrophotometric determination of Se(IV) and As(V) has been developed. It is based on the reduction of Se(IV) to Se(0) and As(V) to As(III) with hydroiodic acid (KI + HCl). The liberated iodine, equivalent to each analyte, is quantitatively extracted with oleic acid (HOL) surfactant. The iodine-HOL system exhibits its maximum absorbance at 435 nm. The different analytical parameters affecting the extraction and determination processes have been examined. The calibration graphs were found to be linear over the ranges 5-120 and 0.25-20 ppm for Se(IV) and As(V), with lower detection limits of 2.5 and 0.15 ppm and molar absorptivities of 1 x 10(4) and 0.5 x 10(4) dm3 mol(-1) cm(-1), respectively. Sandell's sensitivities were calculated to be 0.0078 and 0.0149 microg/cm2, in the same order. The relative standard deviations for five replicate analyses of 40 ppm Se(IV) and 4 ppm As(V) were 1.0 and 0.9%, respectively. The proposed procedure, in the presence of EDTA as a masking agent for foreign ions, has been successfully applied to the determination of Se(IV) in a reference sample and As(V) in copper metal, in addition to their determination in spiked and polluted water samples.
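The reported Sandell's sensitivities follow from the usual convention sensitivity = atomic weight / molar absorptivity (the mass of analyte per cm2 of solution column giving an absorbance of 0.001). A quick arithmetic check, assuming that convention:

```python
# Sandell's sensitivity (ug/cm^2) under the common convention:
# atomic weight (g/mol) divided by molar absorptivity (L mol^-1 cm^-1).
def sandell_sensitivity(atomic_weight, molar_absorptivity):
    """Return Sandell's sensitivity in ug/cm^2."""
    return atomic_weight / molar_absorptivity

print(round(sandell_sensitivity(78.97, 1.0e4), 4))  # Se(IV) → 0.0079
print(round(sandell_sensitivity(74.92, 0.5e4), 4))  # As(V)  → 0.015
```

Both values agree with the reported 0.0078 and 0.0149 microg/cm2 to within rounding.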
Zhao, Jiao; Lu, Yunhui; Fan, Chongyang; Wang, Jun; Yang, Yaling
2015-02-05
A novel and simple method for the sensitive determination of trace amounts of nitrite in human urine and blood has been developed by combining cloud point extraction (CPE) and a microplate assay. The method is based on the Griess reaction, and the reaction product is extracted into the nonionic surfactant Triton X-114 using the CPE technique. In this study, a decolorization treatment of urine and blood was applied to overcome matrix interference and enhance the sensitivity of nitrite detection. Multiple samples can be detected simultaneously thanks to the 96-well microplate technique. The effects of different operating parameters, such as the type of decolorizing agent, concentration of surfactant (Triton X-114), addition of (NH4)2SO4, extraction temperature and time, and interfering elements, were studied and optimum conditions were obtained. Under the optimum conditions, a linear calibration graph was obtained in the range of 10-400 ng mL(-1) of nitrite with a limit of detection (LOD) of 2.5 ng mL(-1). The relative standard deviation (RSD) for determination of 100 ng mL(-1) of nitrite was 2.80%. The proposed method was successfully applied to the determination of nitrite in urine and blood samples with recoveries of 92.6-101.2%. Copyright © 2014 Elsevier B.V. All rights reserved.
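Calibration-graph linearity and a 3σ-type detection limit of the kind reported here are routinely estimated from a least-squares fit. A generic sketch with synthetic numbers (not this study's data; the blank standard deviation is an assumed value):

```python
import numpy as np

# Generic calibration: fit signal = m*C + b, then LOD = 3*s_blank/m.
# Synthetic illustrative numbers, not the data of the cited study.
conc = np.array([10, 50, 100, 200, 400])        # ng/mL standards
signal = np.array([0.021, 0.101, 0.199, 0.402, 0.801])
m, b = np.polyfit(conc, signal, 1)              # least-squares line
s_blank = 0.002                                 # std. dev. of blank (assumed)
lod = 3 * s_blank / m                           # 3-sigma detection limit
print(round(m, 4), round(lod, 1))
```

With these numbers the slope is about 0.002 signal units per ng/mL, giving an LOD of roughly 3 ng/mL.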
Simultaneous injection effective mixing flow analysis of urinary albumin using dye-binding reaction.
Ratanawimarnwong, Nuanlaor; Ponhong, Kraingkrai; Teshima, Norio; Nacapricha, Duangjai; Grudpan, Kate; Sakai, Tadao; Motomizu, Shoji
2012-07-15
A new four-channel simultaneous injection effective mixing flow analysis (SIEMA) system has been assembled for the determination of urinary albumin. The SIEMA system consisted of a syringe pump, two 5-way cross connectors, four holding coils, five 3-way solenoid valves, a 50-cm long mixing coil and a spectrophotometer. Tetrabromophenol blue anion (TBPB) in Triton X-100 micelles reacted with albumin at pH 3.2 to form a blue ion complex with λmax at 625 nm. TBPB, Triton X-100, acetate buffer and albumin standard solutions were aspirated into four individual holding coils by a syringe pump, and the aspirated zones were then simultaneously pushed in the reverse direction to the detector flow cell. Baseline drift, due to adsorption of the TBPB-albumin complex on the wall of the hydrophobic PTFE tubing, was minimized by aspiration of Triton X-100 and acetate buffer solutions between samples. The calibration graph was linear in the range of 10-50 μg/mL and the detection limit for albumin (3σ) was 0.53 μg/mL. The RSD (n=11) at 30 μg/mL was 1.35%. The sample throughput was 37/h. With a 10-fold dilution, interference from the urine matrix was removed. The proposed method has advantages in terms of simple automated operation and short analysis time. Copyright © 2012 Elsevier B.V. All rights reserved.
Zhao, Wenhui; Sheng, Na; Zhu, Rong; Wei, Fangdi; Cai, Zheng; Zhai, Meijuan; Du, Shuhu; Hu, Qin
2010-07-15
Molecularly imprinted polymers for bisphenol A (BPA) were prepared by using a surface molecular imprinting technique. Analogues of BPA, namely 4,4'-dihydroxybiphenyl and 3,3',5,5'-tetrabromobisphenol A, were used as dummy templates instead of BPA, to avoid leakage of trace amounts of the target analyte (BPA). The resulting dummy molecularly imprinted polymers (DMIPs) showed large sorption capacity, high recognition ability and fast binding kinetics for BPA. The maximal sorption capacity was up to 958 micromol g(-1), and the DMIPs took only 40 min to reach sorption equilibrium. The DMIPs were successfully applied to solid-phase extraction coupled with HPLC/UV for the determination of BPA in water samples. The calibration graph of the analytical method was linear, with a correlation coefficient greater than 0.999, in the concentration range of 0.0760-0.912 ng mL(-1) of BPA. The limit of detection was 15.2 pg mL(-1) (S/N=3). Recoveries were in the range of 92.9-102% with relative standard deviation (RSD) less than 11%. Trace amounts of BPA in tap water, drinking water, rain and leachate of one-off tableware were determined by the developed method, and satisfactory results were obtained. Copyright © 2010 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Naixing; Qi Ping
1992-06-01
In this paper the absorption spectra of 4f electron transitions of the systems of neodymium and erbium with 8-hydroxyquinoline-5-sulphonic acid and diethylamine have been studied by normal and third-derivative spectrophotometry. Their molar absorptivities are 80 l mol(-1) cm(-1) for neodymium and 65 l mol(-1) cm(-1) for erbium. Use of the third-derivative spectra eliminates the interference by other rare earths and increases the sensitivity for Nd and Er. The derivative molar absorptivities are 390 l mol(-1) cm(-1) for Nd and 367 l mol(-1) cm(-1) for Er. The calibration graphs were linear up to 11.8 μg/ml of Nd and 12.3 μg/ml of Er, respectively. The relative standard deviations evaluated from eleven independent determinations of 7.2 μg/ml (for Nd) and 8.3 μg/ml (for Er) are 1.3% and 1.4%, respectively. The detection limits are 0.2 μg/ml for Nd and 0.3 μg/ml for Er. The method has been developed for determining these two elements in a mixture of lanthanides by means of the third-derivative spectra, and the analytical results obtained are satisfactory.
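The principle behind derivative spectrophotometry can be illustrated numerically: the third derivative of a symmetric absorption band vanishes at the band center, which is what sharpens overlapping bands and suppresses broad background. A sketch with a synthetic Gaussian band (illustrative only, not the instrument procedure of this work):

```python
import numpy as np

# Third-derivative spectrum of a synthetic absorption band via repeated
# numerical differentiation -- a sketch of the principle, not the
# measurement procedure of the cited study.
wavelength = np.linspace(500, 600, 501)            # nm, 0.2 nm steps
band = np.exp(-((wavelength - 550) / 5.0) ** 2)    # Gaussian absorption band
d3 = band
for _ in range(3):
    d3 = np.gradient(d3, wavelength)               # successive derivatives
# the third derivative of a symmetric band vanishes at the band center
i_center = int(np.argmin(np.abs(wavelength - 550)))
print(abs(d3[i_center]) < 1e-6)   # → True
```

In practice, instrument software uses smoothing differentiators (e.g. Savitzky-Golay filters) rather than raw finite differences, but the zero-crossing behavior is the same.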
NASA Astrophysics Data System (ADS)
El-Didamony, A. M.; Shehata, A. M.
2014-09-01
Two simple, rapid and sensitive spectrophotometric methods have been proposed for the assay of bisoprolol fumarate (BSF), propranolol hydrochloride (PRH), and timolol maleate (TIM), either in bulk or in pharmaceutical formulations. The methods are based on the reaction of the selected drugs with methyl orange (MO) and eriochrome black T (EBT) in acidic buffers, followed by extraction into dichloromethane and quantitative measurement at the absorption maxima of 428 and 518 nm for MO and EBT, respectively. The analytical parameters and their effects on the reported systems are investigated. The extracts are intensely colored and very stable at room temperature. The calibration graphs were linear over the concentration ranges of 0.8-6.4, 0.4-3.6, and 0.8-5.6 μg/mL for BSF, PRH, and TIM, respectively, with MO, and 0.8-6.4, 0.4-3.2, and 0.8-8.0 μg/mL for BSF, PRH, and TIM, respectively, with EBT. The stoichiometry of the complexes was found to be 1 : 1 in all cases. The proposed methods were successfully extended to pharmaceutical preparations. Excipients used as additives in commercial formulations did not interfere in the analysis. The proposed methods can be recommended for quality control and routine analysis where time, cost effectiveness and high specificity of the analytical technique are of great importance.
Detection of yeast Saccharomyces cerevisiae with ionic liquid mediated carbon dots.
Wang, Jia-Li; Teng, Ji-Yuan; Jia, Te; Shu, Yang
2018-02-01
Hydrophobic nitrogen-doped carbon dots are prepared with the energetic ionic liquid 1,3-dibutylimidazolium dicyandiamide (BbimDCN) as the carbon source. A yield as high as 58% is obtained for the carbon dots, termed BbimDCN-OCDs, due to the presence of the thermally unstable N(CN)2- moiety. BbimDCN-OCDs exhibit favorable biocompatibility and excellent imaging capacity for fluorescence labelling of the yeast Saccharomyces cerevisiae. In addition, chitosan-modified Dy3+-doped magnetic nanoparticles (Chitosan@Fe2.75Dy0.25O4) with superparamagnetism are prepared. The electrostatic attraction between the positively charged magnetic nanoparticles and negatively charged yeast cells facilitates exclusive recognition/isolation of S. cerevisiae. In practice, S. cerevisiae is labelled by BbimDCN-OCDs and adhered onto the Chitosan@Fe2.75Dy0.25O4. The yeast/BbimDCN-OCDs/Chitosan@Fe2.75Dy0.25O4 composite is then isolated with an external magnet, and the fluorescence from BbimDCN-OCDs incorporated in S. cerevisiae is monitored. The fluorescence intensity is linearly correlated with the yeast cell content, giving a calibration graph of F = 3.01 log[C] + 11.7 and a detection limit of 5×10(2) CFU/mL. S. cerevisiae content in various real sample matrices is quantified using this protocol. Copyright © 2017 Elsevier B.V. All rights reserved.
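The reported logarithmic calibration can be inverted to recover a cell count from a fluorescence reading; a sketch assuming the stated fit F = 3.01 log[C] + 11.7 (base-10 logarithm) holds over the working range:

```python
import math

# Inverting the reported calibration F = 3.01*log10(C) + 11.7 to recover
# the yeast-cell count C (CFU/mL) from a fluorescence reading F.
def cells_from_fluorescence(F, slope=3.01, intercept=11.7):
    return 10 ** ((F - intercept) / slope)

# round trip: a sample at 1e4 CFU/mL gives F = 3.01*4 + 11.7 = 23.74
F = 3.01 * math.log10(1e4) + 11.7
print(round(cells_from_fluorescence(F)))  # → 10000
```

Readings below the fit's validity range (the stated detection limit of 5×10(2) CFU/mL) should not be inverted this way.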
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorokin, N. I., E-mail: sorokin@ns.crys.ras.ru; Krivandina, E. A.; Zhmurova, Z. I.
2013-11-15
The density of single crystals of nonstoichiometric phases Ba{sub 1-x}La{sub x}F{sub 2+x} (0 {<=} x {<=} 0.5) and Sr{sub 0.8}La{sub 0.2-x}Lu{sub x}F{sub 2.2} (0 {<=} x {<=} 0.2) with the fluorite (CaF{sub 2}) structure type and R{sub 1-y}Sr{sub y}F{sub 3-y} (R = Pr, Nd; 0 {<=} y {<=} 0.15) with the tysonite (LaF{sub 3}) structure type has been measured. Single crystals were grown from a melt by the Bridgman method. The measured concentration dependences of single-crystal density are linear. The interstitial and vacancy models of defect formation in the fluorite and tysonite phases, respectively, are confirmed. To implement the composition control of single crystals of superionic conductors M{sub 1-x}R{sub x}F{sub 2+x} and R{sub 1-y}M{sub y}F{sub 3-y} in practice, calibration graphs of X-ray density in the MF{sub 2}-RF{sub 3} systems (M = Ca, Sr, Ba, Cd, Pb; R = La-Lu, Y) are plotted.
Ensafi, Ali A; Ghaderi, Ali R
2007-09-05
An on-line flow system was used to develop a selective and efficient on-line sorbent extraction preconcentration system for cadmium. The method is based on adsorption of cadmium ions onto activated carbon modified with methyl thymol blue. The adsorbed ions were then eluted with 0.5 M HNO3, and the eluent was used to determine the Cd(II) ions by flame atomic absorption spectrometry. The results show that the modified activated carbon has a maximum adsorption capacity of 80 microg of Cd(II) per 1.0 g of the solid phase. The optimal pH for quantitative preconcentration was 9.0, and full desorption is achieved with 0.5 M HNO3 solution. It is established that the solid phase can be used repeatedly without considerable loss of adsorption capacity. The detection limit was less than 1 ng mL(-1) Cd(II), with an enrichment factor of 1000. The calibration graph was linear in the range of 1-2000 ng mL(-1) Cd(II). The developed method has been applied to the determination of trace cadmium(II) in water samples and in the following reference materials: sewage sludge (CRM144R) and sea water (CASS-4), with satisfactory results. The accuracy was assessed through recovery experiments.
Talio, María C; Zambrano, Karen; Kaplan, Marcos; Acosta, Mariano; Gil, Raúl A; Luconi, Marta O; Fernández, Liliana P
2015-10-01
A new environmentally friendly methodology based on fluorescent signal enhancement of rhodamine B dye is proposed for quantification of Pb(II) traces, using a preconcentration step based on the coacervation phenomenon. A cationic surfactant (cetyltrimethylammonium bromide, CTAB) and potassium iodide were chosen for this aim. The coacervate phase was collected on a filter paper disk and the solid surface fluorescence signal was measured in a spectrofluorometer. Experimental variables that influence the preconcentration step and fluorimetric sensitivity were optimized using univariate assays. The calibration graph, using zeroth-order regression, was linear from 7.4×10(-4) to 3.4 μg L(-1) with a correlation coefficient of 0.999. Under the optimal conditions, a limit of detection of 2.2×10(-4) μg L(-1) and a limit of quantification of 7.4×10(-4) μg L(-1) were obtained. The method showed good sensitivity and adequate selectivity with good tolerance to foreign ions, and was applied to the determination of trace amounts of Pb(II) in refill solutions for e-cigarettes with satisfactory results validated by ICP-MS. The proposed method represents an innovative application of coacervation processes and of paper filters to solid surface fluorescence methodology. Copyright © 2015 Elsevier B.V. All rights reserved.
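The reported limits are mutually consistent with the common convention LOD = 3s/m and LOQ = 10s/m (s the blank standard deviation, m the calibration slope), under which LOQ/LOD ≈ 10/3. A quick check on the stated values:

```python
# Consistency check of the reported limits under the usual convention
# LOD = 3*s/m and LOQ = 10*s/m, so the ratio LOQ/LOD should be ~10/3.
lod, loq = 2.2e-4, 7.4e-4          # ug/L, as reported in the abstract
print(round(loq / lod, 2))          # → 3.36  (close to 10/3 ≈ 3.33)
```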
Akhond, Morteza; Absalan, Ghodratollah; Pourshamsi, Tayebe; Ramezani, Amir M
2016-07-01
Gas-assisted dispersive liquid-phase microextraction (GA-DLPME) has been developed for preconcentration and spectrophotometric determination of copper ion in different water samples. The ionic liquid 1-hexyl-3-methylimidazolium hexafluorophosphate and argon gas, respectively, were used as the extracting solvent and disperser. The procedure was based on direct reduction of Cu(II) to Cu(I) by hydroxylamine hydrochloride, followed by extraction of Cu(I) into the ionic liquid phase using neocuproine as the chelating agent. Several experimental variables that affected the GA-DLPME efficiency were investigated and optimized. Under the optimum experimental conditions (IL volume, 50 µL; pH, 6.0; acetate buffer, 1.5 mol L(-1); reducing agent concentration, 0.2 mol L(-1); NC concentration, 120 µg mL(-1); Ar gas bubbling time, 6 min; argon flow rate, 1 L min(-1); NaCl concentration, 6% w/w; and centrifugation time, 3 min), the calibration graph was linear over the concentration range of 0.30-2.00 µg mL(-1) copper ion with a limit of detection of 0.07 µg mL(-1). The relative standard deviation for five replicate determinations of 1.0 µg mL(-1) copper ion was found to be 3.9%. The developed method was successfully applied to the determination of both Cu(I) and Cu(II) species in water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Eguílaz, Marcos; Villalonga, Reynaldo; Yáñez-Sedeño, Paloma; Pingarrón, José M
2011-10-15
The design of a novel biosensing electrode surface, combining the advantages of magnetic ferrite nanoparticles (MNPs) functionalized with glutaraldehyde (GA) and poly(diallyldimethylammonium chloride) (PDDA)-coated multiwalled carbon nanotubes (MWCNTs) as platforms for the construction of high-performance multienzyme biosensors, is reported in this work. Before the immobilization of enzymes, GA-MNP/PDDA/MWCNT composites were prepared by wrapping carboxylated MWCNTs with positively charged PDDA and interaction with GA-functionalized MNPs. The nanoconjugates were characterized by scanning electron microscopy (SEM) and electrochemistry. The electrode platform was used to construct a bienzyme biosensor for the determination of cholesterol, which involved coimmobilization of cholesterol oxidase (ChOx) and peroxidase (HRP) and the use of hydroquinone as redox mediator. Optimization of all variables involved in the preparation and analytical performance of the bienzyme electrode was accomplished. At an applied potential of -0.05 V, a linear calibration graph for cholesterol was obtained in the 0.01-0.95 mM concentration range. The detection limit (0.85 μM), the apparent Michaelis-Menten constant (1.57 mM), the stability of the biosensor, and the calculated activation energy compare advantageously with the analytical characteristics of other CNT-based cholesterol biosensors reported in the literature. Analysis of human serum spiked with cholesterol at different concentration levels yielded recoveries between 100% and 103%. © 2011 American Chemical Society
Rohani Moghadam, Masoud; Poorakbarian Jahromi, Sayedeh Maria; Darehkordi, Ali
2016-02-01
A newly synthesized bis-thiosemicarbazone ligand, (2Z,2'Z)-2,2'-((4S,5R)-4,5,6-trihydroxyhexane-1,2-diylidene)bis(N-phenylhydrazinecarbothioamide), was used to form complexes with Cu(2+), Ni(2+), Co(2+) and Fe(3+) for their simultaneous spectrophotometric determination using chemometric methods. By Job's method, the metal-to-ligand ratio for Ni(2+) was found to be 1:2, whereas it was 1:4 for the others. The effect of pH on the sensitivity and selectivity of the formed complexes was studied according to the net analyte signal (NAS). Under optimum conditions, the calibration graphs were linear in the ranges of 0.10-3.83, 0.20-3.83, 0.23-5.23 and 0.32-8.12 mg L(-1), with detection limits of 2, 3, 4 and 10 μg L(-1) for Cu(2+), Co(2+), Ni(2+) and Fe(3+), respectively. The OSC-PLS1 for Cu(2+) and Ni(2+), the PLS1 for Co(2+) and the PC-FFANN for Fe(3+) were selected as the best models. The selected models were successfully applied for the simultaneous determination of these elements in some foodstuffs and vegetables. Copyright © 2015 Elsevier Ltd. All rights reserved.
Esteves, Lorena C R; Oliveira, Thaís R O; Souza, Elias C; Bomfeti, Cleide A; Gonçalves, Andrea M; Oliveira, Luiz C A; Barbosa, Fernando; Pereira, Márcio C; Rodrigues, Jairo L
2015-04-01
An easy, fast and environment-friendly method for COD determination in water is proposed. The procedure is based on the oxidation of organic matter by the H2O2/Fe(3-x)Co(x)O4 system. The Fe(3-x)Co(x)O4 nanoparticles activate the H2O2 molecule to produce hydroxyl radicals, which are highly reactive for oxidizing organic matter in an aqueous medium. After the oxidation step, the amount of organic matter can be quantified from the quantity of H2O2 consumed. Moreover, the proposed COD method has several distinct advantages, since it does not use toxic reagents and the oxidation reaction of organic matter is conducted at room temperature and atmospheric pressure. The method detection limit is 2.0 mg L(-1), with intra- and inter-day precision lower than 1% (n=5). The calibration graph is linear in the range of 2.0-50 mg L(-1) with a sample throughput of 25 samples h(-1). Data are validated based on the analysis of six contaminated river water samples by the proposed method and by a comparative method validated and marketed by Merck, with good agreement between the results (t test, 95%). Copyright © 2014 Elsevier B.V. All rights reserved.
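The final validation step, comparing the proposed and commercial methods on the same samples by a t test at 95% confidence, can be sketched as a paired t-test (synthetic illustrative numbers, not the study's six river-water results):

```python
import math

# Paired t-test sketch for comparing two COD methods on the same samples,
# as in the validation against the commercial method. The data below are
# hypothetical, for illustration only.
def paired_t(a, b):
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))
    return mean / (sd / math.sqrt(n))

proposed  = [4.1, 12.3, 25.0, 33.8, 41.2, 48.9]   # mg/L, hypothetical
reference = [4.0, 12.6, 24.6, 34.1, 41.5, 48.5]
t = paired_t(proposed, reference)
# two-tailed critical value for df = 5 at 95% confidence is 2.571
print(abs(t) < 2.571)   # → True (no significant difference; methods agree)
```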
IRiS: construction of ARG networks at genomic scales.
Javed, Asif; Pybus, Marc; Melé, Marta; Utro, Filippo; Bertranpetit, Jaume; Calafell, Francesc; Parida, Laxmi
2011-09-01
Given a set of extant haplotypes, IRiS first detects high-confidence recombination events in their shared genealogy. Next, using the local sequence topology defined by each detected event, it integrates these recombinations into an ancestral recombination graph. While the current system has been calibrated for human population data, it is easily extendable to other species as well. IRiS (Identification of Recombinations in Sequences) binary files are available for non-commercial use in both Linux and Microsoft Windows, 32- and 64-bit environments, from https://researcher.ibm.com/researcher/view_project.php?id=2303. Contact: parida@us.ibm.com.
Accelerated stress testing of amorphous silicon solar cells
NASA Technical Reports Server (NTRS)
Stoddard, W. G.; Davis, C. W.; Lathrop, J. W.
1985-01-01
A technique for performing accelerated stress tests of large-area thin a-Si solar cells is presented. A computer-controlled short-interval test system employing low-cost ac-powered ELH illumination and a simulated a-Si reference cell (seven individually bandpass-filtered zero-biased crystalline PIN photodiodes) calibrated to the response of an a-Si control cell is described and illustrated with flow diagrams, drawings, and graphs. Preliminary results indicate that while most tests of a program developed for c-Si cells are applicable to a-Si cells, spurious degradation may appear in a-Si cells tested at temperatures above 130 C.
Unsteady aerodynamic characterization of a military aircraft in vertical gusts
NASA Technical Reports Server (NTRS)
Lebozec, A.; Cocquerez, J. L.
1985-01-01
The effects of 2.5-m/sec vertical gusts on the flight characteristics of a 1:8.6 scale model of a Mirage 2000 aircraft in free flight at 35 m/sec over a distance of 30 m are investigated. The wind-tunnel setup and instrumentation are described; the impulse-response and local-coefficient-identification analysis methods applied are discussed in detail; and the modification and calibration of the gust-detection probes are reviewed. The results are presented in graphs, and good general agreement is obtained between model calculations using the two analysis methods and the experimental measurements.
A Linear Viscoelastic Model Calibration of Sylgard 184.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Kevin Nicholas; Brown, Judith Alice
2017-04-01
We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer, for use both in Sierra / Solid Mechanics via the Universal Polymer Model and in Sierra / Structural Dynamics (Salinas) as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency-domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of those data differs from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia's constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40% and 20%, respectively, are compared with Sandia's legacy cure-schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.
A non-linear piezoelectric actuator calibration using N-dimensional Lissajous figure
NASA Astrophysics Data System (ADS)
Albertazzi, A.; Viotti, M. R.; Veiga, C. L. N.; Fantin, A. V.
2016-08-01
Piezoelectric translators (PZTs) are very often used as phase shifters in interferometry. However, they typically present non-linear behavior and strong hysteresis. The use of an additional resistive or capacitive sensor makes it possible to linearize the response of the PZT by feedback control. This approach works well, but makes the device more complex and expensive. A less expensive approach uses a non-linear calibration. In this paper, the authors used data from at least five interferograms to form N-dimensional Lissajous figures to establish the actual relationship between the applied voltages and the resulting phase shifts [1]. N-dimensional Lissajous figures are formed when N sinusoidal signals are combined in an N-dimensional space, where one signal is assigned to each axis. It can be verified that the resulting N-dimensional ellipse lies in a 2D plane. By fitting an ellipse equation to the resulting 2D ellipse, it is possible to accurately compute the resulting phase value for each interferogram. In this paper, the relationship between the resulting phase shift and the applied voltage is simultaneously established for a set of 12 increments by a fourth-degree polynomial. The results in speckle interferometry show that, after two or three iterations, the calibration error is usually smaller than 1°.
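The core idea, phase-shifted sinusoidal signals tracing an ellipse whose fit reveals the phase step, can be sketched in a simplified 2D case with centered, unit-amplitude signals (the paper's N-dimensional treatment with a polynomial voltage-phase model is more general):

```python
import numpy as np

# Two phase-shifted sinusoids trace an ellipse; fitting the ellipse
# recovers the unknown phase shift between them. Simplified 2D sketch
# with centered, unit-amplitude signals.
phi = np.linspace(0, 2 * np.pi, 200, endpoint=False)
delta = np.deg2rad(72)               # "unknown" phase shift to recover
x = np.cos(phi)                      # signal 1
y = np.cos(phi + delta)              # signal 2

# Points satisfy x^2 + y^2 - 2*cos(delta)*x*y = sin(delta)^2, so a
# least-squares fit of p*x^2 + q*y^2 + r*x*y = 1 gives cos(delta) = -r/(p+q).
M = np.column_stack([x**2, y**2, x * y])
p, q, r = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
delta_rec = np.arccos(-r / (p + q))
print(round(np.rad2deg(delta_rec), 1))   # → 72.0
```

Real interferogram signals also need their offsets and amplitudes estimated, which is why the general conic (five-parameter ellipse) fit is used in practice.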
40 CFR 92.122 - Smoke meter calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... equipment response of zero; (b) Calibrated neutral density filters having approximately 10, 20, and 40 percent opacity shall be employed to check the linearity of the instrument. The filter(s) shall be... beam of light from the light source emanates, and the recorder response shall be noted. Filters with...
40 CFR 89.319 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.319 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.319 - Hydrocarbon analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... each range calibrated, if the deviation from a least-squares best-fit straight line is 2 percent or... ±0.3 percent of full scale on the zero, the best-fit non-linear equation which represents the data to within these limits shall be used to determine concentration. (d) Oxygen interference optimization...
40 CFR 89.320 - Carbon monoxide analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...
40 CFR 89.320 - Carbon monoxide analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... monoxide as described in this section. (b) Initial and periodic interference check. Prior to its... engineering practice. For each range calibrated, if the deviation from a least-squares best-fit straight line... range. If the deviation exceeds these limits, the best-fit non-linear equation which represents the data...
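The linearity criterion these analyzer-calibration rules share, that each calibration point lie within a small percentage of a least-squares best-fit straight line, can be sketched as follows (an illustrative check, not the regulatory procedure itself):

```python
# Linearity check in the spirit of the CFR analyzer-calibration rules:
# each calibration point must lie within 2 percent of the least-squares
# best-fit straight line. Illustrative sketch with synthetic readings.
def linearity_ok(conc, response, tol=0.02):
    n = len(conc)
    mx, my = sum(conc) / n, sum(response) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, response)) \
            / sum((x - mx) ** 2 for x in conc)
    intercept = my - slope * mx
    for x, y in zip(conc, response):
        fit = slope * x + intercept
        if abs(y - fit) > tol * fit:   # deviation relative to the fit value
            return False
    return True

points = [10, 20, 40, 60, 80, 100]                 # span-gas concentrations
readings = [10.1, 19.9, 40.3, 59.8, 80.2, 99.9]    # analyzer responses
print(linearity_ok(points, readings))  # → True
```

When the check fails, the regulations direct the use of a best-fit non-linear equation that represents the data within the stated limits.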